Videos uploaded by user “Association for Computing Machinery (ACM)”
ACM Turing Award 2012
Shafi Goldwasser, Silvio Micali Receive 2012 ACM Turing Award For Advances In Cryptography Shafi Goldwasser and Silvio Micali laid the foundations of modern theoretical cryptography, taking it from a field of heuristics and hopes to a mathematical science with careful definitions and security models, precise specifications of adversarial capabilities, and rigorous reductions from formally defined computational problems. Their results, jointly and with others, established the now-standard definitions of security for the fundamental primitives of encryption and digital signatures, and provided exemplary implementations meeting the stated security objectives. Even more importantly, their work helped to establish the tone and character of modern cryptographic research. Jointly and in collaboration with others, they provided stunning innovations in the form of random functions, interactive proofs, and zero-knowledge protocols, with implications beyond cryptography to theoretical computer science in general. http://amturing.acm.org
CACM Mar. 2018 - A Programmable Programming Language
In the ideal world, software developers would analyze each problem in the language of its domain and then articulate solutions in matching terms. In the real world, however, programmers use a mainstream programming language someone else picked for them. The Racket project seeks to address this problem by utilizing the emerging idea of language-oriented programming. In this video, Matthias Felleisen discusses "A Programmable Programming Language" (cacm.acm.org/magazines/2018/3/225475), a Contributed Article in the March 2018 issue of Communications of the ACM. (Racket is available at http://racket-lang.org/).
CACM July 2016 - The Rise of Social Bots
A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior. These bots have become more prevalent on social networking sites in the past few years. In this video, Emilio Ferrara discusses "The Rise of Social Bots" (cacm.acm.org/magazines/2016/7/204021), a Review Article in the July 2016 Communications of the ACM.
ACM A.M. Turing Award - Whitfield Diffie and Martin E. Hellman
Whitfield Diffie and Martin Hellman received the 2015 ACM A.M. Turing Award for critical contributions to modern cryptography. The ability for two parties to use encryption to communicate privately over an otherwise insecure channel is fundamental for billions of people around the world. On a daily basis, individuals establish secure online connections with banks, e-commerce sites, email servers and the cloud. Diffie and Hellman's groundbreaking 1976 paper, "New Directions in Cryptography," introduced the ideas of public-key cryptography and digital signatures, which are the foundation for most regularly used security protocols on the Internet today. The Diffie-Hellman Protocol protects daily Internet communications and trillions of dollars in financial transactions.
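The key-exchange idea introduced in that paper can be sketched in a few lines of Python (a toy illustration with deliberately tiny, insecure parameters; real deployments use 2048+ bit primes or elliptic curves):

```python
# Toy Diffie-Hellman key exchange (deliberately tiny, insecure parameters).
import random

p = 23  # public prime modulus (real deployments: 2048+ bit primes)
g = 5   # public generator

a = random.randrange(2, p - 1)  # Alice's private exponent, never transmitted
b = random.randrange(2, p - 1)  # Bob's private exponent, never transmitted

A = pow(g, a, p)  # Alice sends A = g^a mod p over the insecure channel
B = pow(g, b, p)  # Bob sends B = g^b mod p

alice_secret = pow(B, a, p)  # Alice computes (g^b)^a mod p
bob_secret = pow(A, b, p)    # Bob computes (g^a)^b mod p

assert alice_secret == bob_secret  # both sides derive the same shared secret
```

Security rests on the difficulty of recovering a or b from the transmitted values g^a mod p and g^b mod p, i.e., the discrete logarithm problem.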
Celebrating 50 Years of the ACM A.M. Turing Award and Computing's Greatest Achievements
Since its inauguration in 1966, the ACM A. M. Turing Award has recognized major contributions of lasting importance in computing. Through the years, it has become the most prestigious technical award in the field, often referred to as the “Nobel Prize of computing.” During the next several months, ACM will celebrate 50 years of the Turing Award and the visionaries who have received it. Our aim is to highlight the significant impact of the contributions of the Turing Laureates on computing and society, to look ahead to the future of technology and innovation, and to help inspire the next generation of computer scientists to invent and dream. Our celebration will culminate with a conference on June 23 - 24, 2017 at the Westin St. Francis in San Francisco with lively moderated discussions exploring how computing has evolved and where the field is headed. We hope you can join us there, or via the web—we will be streaming the sessions in real time. For more information please visit, http://www.acm.org/turing-award-50
CACM June 2014 - Leslie Lamport, recipient of the 2013 ACM A.M. Turing Award
ACM's 2013 A.M. Turing Award recipient Leslie Lamport was cited for his pioneering work on distributed computing systems that work as intended, making it possible for computers to cooperate, avoid error, and reach consensus. The June 2014 issue of Communications of the ACM details Lamport's innovative advances in an article (cacm.acm.org/news/175166), a Q&A, and an original video highlighting some of his renowned colleagues. In his own voice, he asserts that the best logic for stating things clearly is mathematics, a concept, he notes, that some find controversial. Assessing his body of work, he concludes that he created a path that others have followed to places well beyond his imagination. cacm.acm.org
Computer Science, Zuckerberg and Video Games
From his February 14, 2013 Google+ Hangout, President Obama discusses the importance of computer science in preparing the nation's future workforce.
UIST 2015 Technical Program Preview
A glimpse at the exciting technical program coming up at UIST 2015 in Charlotte, 8-11 November 2015. www.uist.org ----------------------------- Music is Big Car Theft by Jason Shaw http://freemusicarchive.org/music/Jason_Shaw/Audionautix_Tech_Urban_Dance/TU-BigCarTheft
Why I Belong to ACM
Bryan Cantrill, Vice President of Engineering at Joyent, Ben Fried, Chief Information Officer at Google, and Theo Schlossnagle, Chief Executive Officer at OmniTI, discuss motivations and benefits of joining the Association for Computing Machinery (ACM). To join ACM: http://www.acm.org/join/professional/PWEBVID More information about ACM: http://www.acm.org
Program your next server in Go
Author: Sameer Ajmani Abstract: Go is a new general-purpose programming language for professionals who build and maintain production systems. Hundreds of companies and thousands of open-source projects are using Go, including Google, Dropbox, Docker, Apcera, and SoundCloud. This talk will present Go to the experienced service developer and show how its radically simple approach to software construction can make teams more productive. ACM DL: http://dl.acm.org/citation.cfm?id=2960078 DOI: http://dx.doi.org/10.1145/2959689.2960078
Bryan Cantrill on why he belongs to ACM
Bryan Cantrill, Vice President of Engineering at Joyent, on ability of the Association for Computing Machinery (ACM) to inspire professional excellence, broaden personal horizons, and bridge the academic/practitioner divide. To join ACM: http://www.acm.org/join/professional/PWEBVID More information about ACM: http://www.acm.org
CACM Mar. 2016 - Lessons Learned from 30 Years of MINIX
Andrew S. Tanenbaum, the author of the MINIX operating system, discusses "Lessons Learned from 30 Years of MINIX" (cacm.acm.org/magazines/2016/3/198874), his Contributed Article in the March 2016 CACM.
CACM Mar. 2015 - Local Laplacian Filters: Edge-aware Image Processing with a Laplacian Pyramid
Co-author Sylvain Paris discusses "Local Laplacian Filters: Edge-aware Image Processing with a Laplacian Pyramid," the Research Highlights article published in the March 2015 Communications of the ACM (cacm.acm.org/magazines/2015/3/183587).
Andrew Ng on Building a Career in Machine Learning
Title: Break Into AI: A Q&A with Andrew Ng on Building a Career in Machine Learning Speaker: Andrew Ng Date: 12/4/2018 Abstract: Andrew Ng will share tips and tricks on how to break into AI. He will discuss some of the most valuable skills for today's machine learning engineers, how to gain the experience to successfully switch careers, and how to build a habit of lifelong learning. He will also take questions from aspiring engineers and business professionals who want to work on AI-powered products. SPEAKER Andrew Ng, General Partner, AI Fund; CEO, Landing AI; Adjunct Professor, Stanford University Dr. Andrew Ng, a globally recognized leader in AI, is a General Partner at AI Fund and CEO of Landing AI. As the former Chief Scientist at Baidu and the founding lead of Google Brain, he led the AI transformation of two of the world’s leading technology companies. A longtime advocate of accessible education, Dr. Ng is the Co-founder of Coursera, an online learning platform, and founder of deeplearning.ai, an AI education platform. Dr. Ng is also an Adjunct Professor at Stanford University’s Computer Science Department. MODERATOR Juan Miguel de Joya, UN ITU; ACM Practitioners Board Juan Miguel de Joya is the in-house consultant for Artificial Intelligence and Emerging Technologies at the United Nations International Telecommunications Union. Prior to this role, he served as a contractor at Facebook/Oculus and Google, worked at Pixar Animation Studios and Walt Disney Animation Studios, and was an undergraduate researcher in graphics at the Visual Computing Lab at the University of California, Berkeley. In his spare time, he is part of the ACM Practitioners Board, the ACM Professional Development Committee, and the ACM SIGGRAPH Strategy Group. His current interests include artificial intelligence, computer vision, mixed reality, computational physics, the web, and the human impact of computing in society at large.
There Is More Consensus in Egalitarian Parliaments
This paper describes the design and implementation of Egalitarian Paxos (EPaxos), a new distributed consensus algorithm based on Paxos. EPaxos achieves three goals: (1) optimal commit latency in the wide-area when tolerating one and two failures, under realistic conditions; (2) uniform load balancing across all replicas (thus achieving high throughput); and (3) graceful performance degradation when replicas are slow or crash. Egalitarian Paxos is to our knowledge the first protocol to achieve the previously stated goals efficiently---that is, requiring only a simple majority of replicas to be non-faulty, using a number of messages linear in the number of replicas to choose a command, and committing commands after just one communication round (one round trip) in the common case or after at most two rounds in any case. We prove Egalitarian Paxos's properties theoretically and demonstrate its advantages empirically through an implementation running on Amazon EC2. In the ACM Digital Library: http://dl.acm.org/citation.cfm?id=2517350
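The quorum sizes behind the abstract's "simple majority" claim can be computed directly (a small illustrative helper, not part of EPaxos itself; the fast-path size F + floor((F + 1) / 2) is the optimized quorum described in the paper):

```python
# Quorum sizes for a Paxos-family system with N = 2F + 1 replicas.
# Illustrative helper only, not part of EPaxos.
def quorums(f):
    n = 2 * f + 1            # replicas needed to tolerate f failures
    classic = f + 1          # simple majority (classic Paxos / slow path)
    fast = f + (f + 1) // 2  # EPaxos optimized fast-path quorum
    return n, classic, fast

for f in (1, 2):
    print(quorums(f))  # (3, 2, 2) then (5, 3, 3)
```

For three replicas the fast path needs the same two replicas as a majority, which is how EPaxos commits in one round trip in the common case.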
ACM A M Turing Award 2013 - Leslie Lamport
ACM named Leslie Lamport, a Principal Researcher at Microsoft Research, as the recipient of the 2013 ACM A.M. Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages. He devised important algorithms and developed formal modeling and verification protocols that improve the quality of real distributed systems. These contributions have resulted in improved correctness, performance, and reliability of computer systems. http://amturing.acm.org/
CACM August 2016 - Computational Biology in the 21st Century
In the past two decades, biological data sets have become so massive that it has become difficult to analyze them to discover patterns that illuminate underlying biological processes. In this video, Bonnie Berger discusses "Computational Biology in the 21st Century," a Review Article in the August 2016 Communications of the ACM (http://cacm.acm.org/magazines/2016/8/205052-computational-biology-in-the-21st-century/fulltext).
Pivot tracing: dynamic causal monitoring for distributed systems
Authors: Jonathan Mace, Ryan Roelke, Rodrigo Fonseca Abstract: Monitoring and troubleshooting distributed systems is notoriously difficult; potential problems are complex, varied, and unpredictable. The monitoring and diagnosis tools commonly used today -- logs, counters, and metrics -- have two important limitations: what gets recorded is defined a priori, and the information is recorded in a component- or machine-centric way, making it extremely hard to correlate events that cross these boundaries. This paper presents Pivot Tracing, a monitoring framework for distributed systems that addresses both limitations by combining dynamic instrumentation with a novel relational operator: the happened-before join. Pivot Tracing gives users, at runtime, the ability to define arbitrary metrics at one point of the system, while being able to select, filter, and group by events meaningful at other parts of the system, even when crossing component or machine boundaries. We have implemented a prototype of Pivot Tracing for Java-based systems and evaluate it on a heterogeneous Hadoop cluster comprising HDFS, HBase, MapReduce, and YARN. We show that Pivot Tracing can effectively identify a diverse range of root causes such as software bugs, misconfiguration, and limping hardware. We show that Pivot Tracing is dynamic, extensible, and enables cross-tier analysis between inter-operating applications, with low execution overhead. ACM DL: http://dl.acm.org/citation.cfm?id=2815400.2815415 DOI: http://dx.doi.org/10.1145/2815400.2815415
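The happened-before join can be illustrated with a toy in-memory version (the event data is hypothetical, and the real system propagates generic metadata, "baggage", along the execution path rather than joining on an explicit request id):

```python
# Toy happened-before join: correlate events from two components along the
# same request, approximated here by a shared request id.
client_events = [  # e.g., emitted by an RPC layer
    {"req": 1, "client": "hostA"},
    {"req": 2, "client": "hostB"},
]
disk_events = [    # e.g., emitted by a storage layer on other machines
    {"req": 1, "bytes": 4096},
    {"req": 1, "bytes": 8192},
    {"req": 2, "bytes": 512},
]

# The cross-component query "bytes written, grouped by requesting client":
bytes_by_client = {}
for c in client_events:
    for d in disk_events:
        if d["req"] == c["req"]:  # d happened after c within the same request
            bytes_by_client[c["client"]] = (
                bytes_by_client.get(c["client"], 0) + d["bytes"])

print(bytes_by_client)  # {'hostA': 12288, 'hostB': 512}
```

The point of the operator is that neither component alone records this grouping; it only exists once causally related events are joined across the boundary.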
Inside Websockets
Author: Leah Hanson Abstract: This talk will focus on how WebSockets work -- the details of the protocol and why they are the way they are. Protocol design is about tradeoffs, and if you pick the wrong tradeoff, you may regret it for a very long time. We're going to take a look at the tradeoffs that the WebSockets protocol made and talk about how you can apply the same principles to your own protocols. ACM DL: http://dl.acm.org/citation.cfm?id=2960084 DOI: http://dx.doi.org/10.1145/2959689.2960084
From L3 to seL4: What Have We Learnt in 20 Years of L4 Microkernels?
The L4 microkernel has undergone 20 years of use and evolution. It has an active user and developer community, and there are commercial versions which are deployed on a large scale and in safety-critical systems. In this paper we examine the lessons learnt in those 20 years about microkernel design and implementation. We revisit the L4 design papers, and examine the evolution of design and implementation from the original L4 to the latest generation of L4 kernels, especially seL4, which has pushed the L4 model furthest and was the first OS kernel to undergo a complete formal verification of its implementation as well as a sound analysis of worst-case execution times. We demonstrate that while much has changed, the fundamental principles of minimality and high IPC performance remain the main drivers of design and implementation decisions. In the ACM Digital Library: http://dl.acm.org/citation.cfm?id=2522720
Large-scale cluster management at Google with Borg
Authors: Abhishek Verma, Luis Pedrosa, Madhukar Korupolu, David Oppenheimer, Eric Tune, John Wilkes Abstract: Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines. It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior. We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it. ACM DL: http://dl.acm.org/citation.cfm?id=2741964 DOI: http://dx.doi.org/10.1145/2741948.2741964
CACM July 2015 - Unifying Logic and Probability
Stuart Russell discusses the BLOG (Bayesian logic) language and open-universe probability models, the subject of "Unifying Logic and Probability," his Review Article in the July 2015 Communications of the ACM. Perhaps the most enduring idea from the early days of AI is that of a declarative system reasoning over explicitly represented knowledge with a general inference engine. Such systems require a formal language to describe the real world; and the real world has things in it. For this reason, classical AI adopted first-order logic—the mathematics of objects and relations—as its foundation. http://cacm.acm.org/magazines/2015/7/188745
Naiad: a timely dataflow system
Naiad is a distributed system for executing data parallel, cyclic dataflow programs. It offers the high throughput of batch processors, the low latency of stream processors, and the ability to perform iterative and incremental computations. Although existing systems offer some of these features, applications that require all three have relied on multiple platforms, at the expense of efficiency, maintainability, and simplicity. Naiad resolves the complexities of combining these features in one framework. A new computational model, timely dataflow, underlies Naiad and captures opportunities for parallelism across a wide class of algorithms. This model enriches dataflow computation with timestamps that represent logical points in the computation and provide the basis for an efficient, lightweight coordination mechanism. We show that many powerful high-level programming models can be built on Naiad's low-level primitives, enabling such diverse tasks as streaming data analysis, iterative machine learning, and interactive graph mining. Naiad outperforms specialized systems in their target application domains, and its unique features enable the development of new high-performance applications. In the ACM Digital Library: http://dl.acm.org/citation.cfm?id=2522738
"Advances in Deep Neural Networks," at ACM Turing 50 Celebration
Deep neural networks can be trained with relatively modest amounts of information and then successfully be applied to large quantities of unstructured data. Deep learning techniques have been applied with great success to areas such as speech recognition, image recognition, natural language processing, drug discovery and toxicology, customer relationship management, recommendation systems, and biomedical informatics. The capabilities of deep neural networks, in some domains, have proven to rival those of human beings. Panelists will explore how deep neural networks are changing our world and our jobs. They will also discuss how things may further change going forward. Moderator: Judea Pearl (2011 Turing Laureate), University of California, Los Angeles Panelists: Michael I. Jordan, University of California, Berkeley Fei-Fei Li, Stanford University Stuart Russell, University of California, Berkeley Ilya Sutskever, OpenAI Raquel Urtasun, University of Toronto
CACM Oct. 2018 - Human-Level Intelligence or Animal-Like Abilities?
The recent successes of neural networks in applications like speech recognition, vision, and autonomous navigation have led to great excitement among members of the artificial intelligence (AI) community, as well as the general public. Over a relatively short time, by the science clock, we managed to automate some tasks that have defied us for decades, using one of the more classical techniques from AI research. The triumph of these achievements has led some to describe the automation of these tasks as having reached human-level intelligence. This perception, originally hinted at in academic circles, has gained momentum more broadly and is beginning to have real implications. In this video, Adnan Darwiche discusses "Human-Level Intelligence or Animal-Like Abilities?" (https://cacm.acm.org/magazines/2018/10/231373), a Contributed Article in the October 2018 Communications of the ACM.
CACM July 2016 - Inverse Privacy
Institutions are now much better than you at recording data. As a result, shared data decays into inversely private data. More inversely private information is produced when institutions analyze your private data. In this video, Jeannette Wing discusses "Inverse Privacy" (cacm.acm.org/magazines/2016/7/204020), a Viewpoint column in the July 2016 Communications of the ACM.
CACM May 2018 - Speech Emotion Recognition
In the 22 years since what is arguably the first research paper on the topic of speech emotion recognition was published, the field has come a long way -- speech technologies by the names of Alexa, Cortana, Siri, and many others are now on the consumer market on a broader basis than ever. But do any of them truly notice our emotions and react to them like a human conversational partner would? In this video, Björn Schuller discusses "Speech Emotion Recognition: Two Decades in a Nutshell, Benchmarks, and Ongoing Trends" (cacm.acm.org/magazines/2018/5/227191), a Review Article in the May 2018 Communications of the ACM.
John Hennessy and David Patterson 2017 ACM A.M. Turing Award Lecture
2017 ACM A.M. Turing Award recipients John Hennessy and David Patterson delivered their Turing Lecture on June 4 at ISCA 2018 in Los Angeles. The lecture took place from 5 to 6 p.m. PDT and was open to the public. Titled “A New Golden Age for Computer Architecture: Domain-Specific Hardware/Software Co-Design, Enhanced Security, Open Instruction Sets, and Agile Chip Development,” the talk covered recent developments and future directions in computer architecture. Hennessy and Patterson were recognized with the Turing Award for “pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry.”
CACM Feb. 2018 - The Next Phase in the Digital Revolution
Digital Platforms in the computing "cloud" are fundamental features of the digital revolution, entangled with what we term "intelligent tools." An abundance of computing power enabling generation and analysis of data on a scale never before imagined permits the reorganization/transformation of services and manufacturing. How will the increased movement of work to digital platforms provide real and rising incomes with reasonable levels of equality? In this video, John Zysman and Martin Kenney discuss "The Next Phase in the Digital Revolution: Intelligent Tools, Platforms, Growth, Employment," a Contributed Article in the February 2018 issue of Communications of the ACM. Read the full article here: https://cacm.acm.org/magazines/2018/2/224635-the-next-phase-in-the-digital-revolution
Tick Tock, malloc Needs a Clock
The jemalloc memory allocator is well known for low fragmentation and high concurrency, yet those two strengths sabotage each other. Eager free memory coalescence facilitates fragmentation avoidance, but concurrency scalability benefits from large and loosely coupled caches. Historically jemalloc has cached conservatively, with especially deleterious effects during rapid, large memory usage fluctuations. This is because jemalloc's only sense of time has been in terms of allocation events, thus precluding work postponement. This talk will provide an overview of jemalloc internals, critically analyze several past approaches to cache management, and describe multiple opportunities enabled by wall clock awareness. Jason Evans http://dx.doi.org/10.1145/2742580.2742807
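The wall-clock idea the talk describes can be sketched as a toy cache that purges freed memory only once it has sat idle past a decay window (the class, names, and policy here are illustrative, not jemalloc's actual internals):

```python
import time

DECAY_SECONDS = 1.0  # illustrative decay window

class DecayCache:
    """Toy free-memory cache: entries idle past the decay window are purged."""
    def __init__(self):
        self.cached = []  # (freed_at_timestamp, size) pairs

    def free(self, size, now=None):
        # Keep freed memory cached for fast reuse instead of returning it.
        self.cached.append((now if now is not None else time.time(), size))

    def purge_expired(self, now=None):
        # Wall-clock awareness: postpone the work of returning memory,
        # rather than coalescing eagerly on every allocation event.
        now = now if now is not None else time.time()
        purged = sum(s for (t, s) in self.cached if now - t >= DECAY_SECONDS)
        self.cached = [(t, s) for (t, s) in self.cached
                       if now - t < DECAY_SECONDS]
        return purged  # bytes that would be returned to the OS

cache = DecayCache()
cache.free(4096, now=0.0)
cache.free(8192, now=0.9)
print(cache.purge_expired(now=1.5))  # 4096: only the older entry expired
```

The contrast with an event-count heuristic is that a burst of allocations no longer forces premature purging, while a quiet period still releases memory promptly.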
CACM June 2018 David Patterson and John Hennessy, 2017 ACM A.M.  Turing Award
At a time when "making an impact" can feel like a vague or even overwhelming prospect, it's worth reviewing the accomplishments of two scientists who have done just that: ACM A.M. Turing Award recipients John Hennessy and David Patterson. What began as a simple-sounding insight—that you could improve microprocessor performance by including only instructions that are actually used—blossomed into a paradigm shift as the two honed their ideas in the MIPS (Microprocessor without Interlocked Pipeline Stages) and RISC (Reduced Instruction Set Computer) processors, respectively. A subsequent textbook, Computer Architecture: A Quantitative Approach, introduced generations of students not just to that particular architecture, but to critical principles that continue to guide designers as they balance constraints and strive for maximum efficiency. In this video, Hennessy and Patterson discuss their pioneering work, their partnership, and the future of computer architecture.
The Challenges of Writing a Massive and Complex Go Application
Author: Ben Darnell Abstract: We opted for Go when building CockroachDB, a scale-out, relational database, because of its support for libraries, interfaces, and tooling. However, it has come with its own frustrations, often related to performance and synchronization. And as for Cgo, RocksDB, and other critical external libraries, we've had to hunt down or develop creative workarounds to ensure they work well with the rest of the toolchain. In this talk, we'll share how we've optimized our memory usage to mitigate issues related to garbage collection and improved our use of channels to avoid deadlocks. We will also share creative techniques to integrate non-Go dependencies into the Go build process. ACM DL: http://dl.acm.org/citation.cfm?id=2960085 DOI: http://dx.doi.org/10.1145/2959689.2960085
ACM Queue Inspirations with Tom Limoncelli
Tom Limoncelli, an author and site reliability engineer at Stack Overflow, will be discussing development operations in a new column in acmqueue called "Everything Sysadmin." Download the full interactive issue of acmqueue here: https://queue.acm.org/app/landing.cfm
Extracting Energy from the Turing Tarpit
Talk by ACM A.M. Turing Laureate Alan C. Kay during the ACM A.M. Turing Centenary Celebration, June, 2012. Abstract: Part of Turing's fame and inspiration came from showing how a simple computer can simulate every other computer, and so "anything is possible". The "Turing Tarpit" is getting caught by "anything is possible but nothing is easy". One way to get caught is to stay close to the underlying machine with our languages so that things seem comprehensible in the small but the code blows up into intractable millions of lines. What if we used "anything is possible" to make very different kinds of computers which require new learning but the code compactly fits the problem and stays small?
Past and future of hardware and architecture
Author: David Patterson Abstract: We start by looking back at 50 years of computer architecture, where philosophical debates on instruction sets (RISC vs. CISC, VLIW vs. RISC) and parallel architectures (NUMA vs. clusters) were settled with billion dollar investments on both sides. In the second half, we look forward. First, Moore's Law is ending, so the free ride of software-oblivious performance increases is over. Since we've already played the multicore card, the most-likely/only path left is domain-specific processors. The memory system is radically changing too. First, Jim Gray's decade-old prediction is finally true: "Tape is dead; flash is disk; disk is tape." New ways to connect to DRAM and new non-volatile memory technologies promise to make the memory hierarchy even deeper. Finally, and surprisingly, there is now widespread agreement on instruction set architecture, namely Reduced Instruction Set Computers. However, unlike most other fields, despite this harmony there has been no open alternative to proprietary offerings from ARM and Intel. RISC-V ("RISC Five") is the proposed free and open champion. It has a small base of classic RISC instructions that run a full open-source software stack; opcodes reserved for tailoring a System-on-a-Chip (SoC) to applications; standard instruction extensions optionally included in an SoC; and it is unrestricted: there is no cost, no paperwork, and anyone can use it. The ability to prototype using ever-more-powerful FPGAs and astonishingly inexpensive custom chips combined with collaboration on open-source software and hardware offers hope of a new golden era for hardware/software systems. ACM DL: http://dl.acm.org/citation.cfm?id=2830903.2830910 DOI: http://dx.doi.org/10.1145/2830903.2830910
Lambda Calculus Then and Now
Talk by ACM A.M. Turing Laureate Dana S. Scott during the ACM A.M. Turing Centenary Celebration, June, 2012. Abstract: A very fast development in the early 1930s, following Hilbert's codification of Mathematical Logic, led to the Incompleteness Theorems, Computable Functions, Undecidability Theorems, and the general formulation of recursive Function Theory. The so-called Lambda Calculus played a key role. The history of these developments will be traced, and the much later place of Lambda Calculus in Mathematics and Programming-Language Theory will be outlined.
The History of Rust
Author: Steve Klabnik Abstract: The Rust programming language recently celebrated its one year anniversary since 1.0. While that's not a long time, there were eight years of development before that, which saw radical changes in the language. In this talk, Steve will show off some of Rust's history, with all of the decisions and changes that were made along the way. ACM DL: http://dl.acm.org/citation.cfm?id=2960081 DOI: http://dx.doi.org/10.1145/2959689.2960081
On Methodology: Turing Laureates Discuss their Approach to Work
In this video from ACM's Celebration of 50 Years of the A.M. Turing Award, Turing Laureates Andrew Yao, Marvin Minsky, Herbert Simon, Shafi Goldwasser, James Gray, Edmund Clarke and Richard Karp discuss their approach to work and share advice for those who aspire to follow in their footsteps.
CACM May 2016 - Parallel Graph Analytics
Co-authors Andrew Lenharth and Keshav Pingali discuss "Parallel Graph Analytics" (cacm.acm.org/magazines/2016/5/201591), a Contributed Article in the May 2016 CACM.
Keynote - JSON Graph: Reactive REST at Netflix
Every user of a web application wants to believe that all of the data in the cloud is sitting right on their device. Netflix's data platform "JSON Graph" creates this illusion for the web developer. One Model, Available Everywhere. Using an innovative combination of reactive programming techniques and RESTful principles, JSON Graph allows web developers to create a virtual server JSON model for their web application and transparently access it from any cloud-connected device. The Data is the API. Using JSON Graph, Netflix developers retrieve data from the virtual server model the same way they would from an in-memory JSON object. Efficient client/server interactions are ensured by batching concurrent idempotent requests, transparently optimizing requests into point queries, and caching recently-used data. Come learn about the innovative data platform that powers the Netflix UIs, and the new design patterns it enables. Jafar Husain http://dx.doi.org/10.1145/2742580.2742640
CACM Sept. 2015 - Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence
Ernest Davis and Gary Marcus discuss the shortcomings of AI systems and "Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence," their Review Article in the September 2015 Communications of the ACM. http://cacm.acm.org/magazines/2015/9/191169
Utilizing the other 80% of your system's performance: Starting with Vectorization
Vectorization, as opposed to parallelization, is less utilized as a means of exploiting the full capabilities of a processor. This is a problem since even today this means only ¼ to ½ of the performance of the CPU is used. This is only getting worse in the future, especially as accelerators become more prevalent. After an introduction to the basics and history of vectorization, the talk will introduce various techniques available for vectorization of compiled code. The talk will focus on gcc and, for some details, on Linux, but the knowledge should be transferable if the features are fully implemented elsewhere. Ulrich Drepper http://dx.doi.org/10.1145/2742580.2742805
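The payoff can be felt even outside compiled code: here NumPy's whole-array operations stand in for what an auto-vectorizing compiler such as gcc does to a scalar C loop (a rough analogy for illustration, not the gcc material the talk covers):

```python
import numpy as np

def saxpy_scalar(a, b, alpha=2.0):
    # One element per iteration, the way unvectorized scalar code executes.
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = alpha * a[i] + b[i]
    return out

def saxpy_vectorized(a, b, alpha=2.0):
    # One whole-array expression: NumPy runs a compiled inner loop in which
    # the CPU's SIMD units can process several elements per instruction.
    return alpha * a + b

a = np.arange(10_000, dtype=np.float64)
b = np.ones(10_000, dtype=np.float64)
assert np.allclose(saxpy_scalar(a, b), saxpy_vectorized(a, b))
```

Timing the two forms (e.g., with timeit) shows the vectorized version winning by a wide margin, which is the same gap the talk targets for compiled code via gcc's auto-vectorizer and intrinsics.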
CACM September 2016 - Imaging the Propagation of Light through Scenes at Picosecond Resolution
Andreas Velten discusses "Imaging the Propagation of Light through Scenes at Picosecond Resolution" (cacm.acm.org/magazines/2016/9/206260), a Research Highlights article in the September 2016 Communications of the ACM.
CACM Feb. 2018 - Elements of the Theory of Dynamic Networks
We are rapidly approaching an era of dynamicity and high unpredictability. A great variety of modern networked systems are highly dynamic in both space and time. Many traditional approaches and measures for static networks are not adequate for dynamic networks. There is already strong evidence that there is room for the development of a rich theory in this space. In this video, Othon Michail and Paul G. Spirakis discuss "Elements of the Theory of Dynamic Networks," a Review Article in the February 2018 issue of Communications of the ACM. Read the full article here: https://cacm.acm.org/magazines/2018/2/224637-elements-of-the-theory-of-dynamic
CACM July 2018 - Digital Nudging: Guiding Online User Choices through Interface Design
Life is full of choices, often in digital environments. People interact with e-government applications; trade financial products online; buy products in Web shops; book hotel rooms on mobile booking apps; and make decisions based on content presented in organizational information systems. All such choices are influenced by the environments in which they take place, and designers of these environments can subtly guide users' behavior by gently "nudging" them toward certain choices. "Digital Nudging: Guiding Online User Choices through Interface Design," a Contributed Article in the July 2018 issue of Communications of the ACM, shows how designers can consider the effects of nudges when designing digital choice environments.
CACM Dec. 2017 - Cybersecurity, Nuclear Security, Alan Turing, and Illogical Logic
Martin E. Hellman received the 2015 ACM A.M. Turing Award with Whitfield Diffie for groundbreaking work in the field of public key cryptography. This video, along with its accompanying article "Cybersecurity, Nuclear Security, Alan Turing, and Illogical Logic" in the December 2017 CACM, outlines Hellman’s Turing Lecture, which chronicles a personal story weaving past and present, logic and illogic, and even love and marriage. Read the full article here: https://cacm.acm.org/magazines/2017/12/223042-cybersecurity-nuclear-security-alan-turing-and-illogical-logic/fulltext
Bruce Sterling - The dark side impacts of IT on society
ACM97 Speaker: Bruce Sterling Position: Author, journalist, editor, and critic of science fiction and non-fiction Talk: The dark side impacts of IT on society Running time: 33 minutes
Mary Livecodes a JavaScript Game from Scratch
When I made my first game, I was scared of writing graphics code and dealing with browser quirks and player input events. So, I used a game framework to handle that stuff for me. Later, I discovered that stuff is not so scary. I will live-code an action game from scratch without using any libraries. We will cover keyboard input, graphics, collision detection and sound. Mary Rose Cook http://dx.doi.org/10.1145/2742580.2742816
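One of the "not so scary" pieces the talk covers, collision detection, often comes down to a few lines in a from-scratch 2D game. This is an illustrative axis-aligned bounding-box (AABB) overlap test; the body shape (`x`/`y` centers with `w`/`h` sizes) and the sample values are assumptions for the example, not code from the talk.

```javascript
// Illustrative AABB collision check for a 2D game: two boxes overlap
// when they overlap on both the x and y axes. Bodies are { x, y }
// centers with { w, h } sizes.
function colliding(a, b) {
  return Math.abs(a.x - b.x) * 2 < a.w + b.w &&
         Math.abs(a.y - b.y) * 2 < a.h + b.h;
}

const player = { x: 10, y: 10, w: 8, h: 8 };
const bullet = { x: 13, y: 12, w: 2, h: 2 };
console.log(colliding(player, bullet)); // overlaps on both axes
```

In a game loop, every moving body would be checked against the others each frame, with hits removing bullets or ending the game.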
CACM August 2015 - Programming the Quantum Future
Co-author Benoît Valiron discusses quantum computing and "Programming the Quantum Future." The earliest computers, like the ENIAC, were rare and heroically difficult to program. That difficulty stemmed from the requirement that algorithms be expressed in a "vocabulary" suited to the particular hardware available, ranging from function tables for the ENIAC to more conventional arithmetic and movement operations on later machines. Introduction of symbolic programming languages, exemplified by FORTRAN, solved a major difficulty for the next generation of computing devices by enabling specification of an algorithm in a form more suitable for human understanding, then translating this specification to a form executable by the machine. The "programming language" used for such specification bridged a semantic gap between the human and the computing device. It provided two important features: high-level abstractions, taking care of automated bookkeeping, and modularity, making it easier to reason about sub-parts of programs. For more, go to http://cacm.acm.org/magazines/2015/8/189851-programming-the-quantum-future/fulltext, a Contributed Article in the August 2015 Communications of the ACM.
Sparrow: distributed, low latency scheduling
Large-scale data analytics frameworks are shifting towards shorter task durations and larger degrees of parallelism to provide low latency. Scheduling highly parallel jobs that complete in hundreds of milliseconds poses a major challenge for task schedulers, which will need to schedule millions of tasks per second on appropriate machines while offering millisecond-level latency and high availability. We demonstrate that a decentralized, randomized sampling approach provides near-optimal performance while avoiding the throughput and availability limitations of a centralized design. We implement and deploy our scheduler, Sparrow, on a 110-machine cluster and demonstrate that Sparrow performs within 12% of an ideal scheduler. In the ACM Digital Library: http://dl.acm.org/citation.cfm?id=2522716
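The decentralized, randomized sampling idea behind Sparrow can be sketched as a "least loaded of d sampled machines" placement rule: probe a few machines chosen at random and enqueue the task on the shortest queue found, instead of maintaining global state at a central scheduler. The queue lengths and the choice of d = 2 below are illustrative, and this sketch omits Sparrow's refinements such as batch sampling and late binding.

```javascript
// Sketch of randomized sampling for task placement: probe d random
// machines and pick the one with the shortest queue.
function pickLeastLoaded(queueLengths, candidates) {
  // Among the sampled machine indices, return the least loaded one.
  return candidates.reduce((best, i) =>
    queueLengths[i] < queueLengths[best] ? i : best);
}

function sampleCandidates(numMachines, d) {
  // Draw d distinct machine indices uniformly at random.
  const picks = new Set();
  while (picks.size < d) picks.add(Math.floor(Math.random() * numMachines));
  return [...picks];
}

const queueLengths = [3, 0, 5, 2, 4];        // pending tasks per machine
const chosen = pickLeastLoaded(queueLengths, sampleCandidates(5, 2));
queueLengths[chosen] += 1;                   // enqueue the task there
```

Because each scheduler only probes its own small sample, many schedulers can place millions of tasks per second independently, which is what gives the decentralized design its throughput and availability advantages over a central scheduler.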