UI vs. UX

The user interface (UI) versus the user’s experience (UX) is a very modern “debate” in computer science. It can also be summarized as the tension between usability and composability, between software that is user-friendly and software that is programmer-friendly (see this talk given by Conal Elliott at Google). Consumers like software that’s easy to use. But programmers like software that’s easy to compose, i.e. to combine in unanticipated ways. Users want applications; programmers want libraries. Users like GUIs; programmers like APIs.

It’s not immediately obvious that usability and composability are in tension. Why can’t you make both users and programmers happy? You may be able to make some initial improvements that please both communities, but at some point their interests diverge.

Another way to frame the same idea is “operation versus expression” (see this article by Vivek Haldar). Combining these ideas, we have these contrasts.

 Usability               Composability
 ---------               -------------
 Operation               Expression
 Visual / GUI            Syntactic / CLI
 Bounded                 Unbounded
 Externalize knowledge   Internalize knowledge

Neither column is necessarily better. Sometimes you want to be in the left column, sometimes in the right. Sometimes you want a stereo and sometimes you want a guitar.
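As a toy illustration of the composability column (the function names below are invented for this example, not taken from the article): small, single-purpose pieces can be combined in ways none of their authors anticipated, which is exactly what a bounded GUI cannot offer.

```python
from functools import reduce

def compose(*fns):
    """Chain functions left to right: compose(f, g)(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), fns, x)

# Three tiny, single-purpose pieces...
strip = str.strip
lower = str.lower
words = str.split

# ...combined into a new tool none of them anticipated.
normalize = compose(strip, lower, words)
print(normalize("  Hello World  "))  # ['hello', 'world']
```

Each piece stays simple and testable on its own; the power comes from the unanticipated combination, which is the programmer-friendly, API-shaped side of the table.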


When I file my taxes, I want the software to be as easy to use as possible right now. There’s no long-term use to consider since I’m not going to use it again for a year, so I’ll have forgotten anything peculiar about the software by the time I open it again. But when I’m writing software, I have a different set of values. I don’t mind internalizing some knowledge of how my tools work in exchange for long-term ease of use.

Read the original article here


States are making computer science a curriculum staple in 2018

New York, Indiana and North Dakota have made announcements around computer science education in just the last few days.

States’ commitment to computer science education expanded nationwide in 2017, and the trend seems to be continuing across the country in 2018. The Governors’ Partnership for K-12 Computer Science announced this week that eight more governors have joined the coalition, bringing the total number up to 16. Beyond that group’s efforts, New York, Indiana and North Dakota have reinforced their commitment to computer science education in just the last few days.

On Monday, Gov. Andrew Cuomo made New York the latest state to invest significant resources in K-12 computer science education. As part of the 2018 Women’s Agenda for New York: Equal Rights, Equal Opportunity, Cuomo promised to tackle the gender disparity in New York’s computer science programs. Only 25 percent of the 3,761 New York students who took the AP Computer Science exam in 2016 were female, he said.

To begin closing that gender gap, he announced a $6 million annual grant in support of the state’s Smart Start program. The program will provide need-based grants to schools for teacher development in computer science, with the award-winning schools also receiving the opportunity to work with Regional Economic Development Councils to tailor the program to the needs of regional businesses and future employers. Cuomo, a Democrat, also plans to “convene a working group of educators and industry partners” to facilitate model computer science standards that any school could use, he said in a statement following the announcement.

Indiana, meanwhile, is seeing action on a legislative mandate. In his 2018 NextLevel agenda, Indiana Gov. Eric Holcomb, a Republican, offered his support of SB 172, a bill moving through the Indiana General Assembly that would require the state’s public schools to include computer science in their K-12 curriculum and require high schools to offer it as an elective course by 2021. The bill would also establish a grant program similar to Cuomo’s plan for New York, with the Indiana Department of Education administering a fund tasked with awarding grants in support of teacher professional development programs for training in teaching computer science.

Other states, such as North Dakota, are facilitating their computer science education expansion with private-sector partnerships. State Superintendent Kirsten Baesler announced Monday that North Dakota will be the next expansion site for Microsoft’s Technology Education and Literacy in Schools program, or TEALS. Baesler — who in 2017 successfully urged the North Dakota legislature to approve a new law that allows high school students in the state to substitute a computer science course for one of the three math classes required to graduate — said she is hopeful that the partnership will go even further than providing critical job skills for students.

“This program is about problem solving and being creative,” Baesler said in Monday’s announcement. “It teaches our students to think rigorously and systematically. It helps to teach the North Dakota values of persistence, tenacity and self-reliance.”

The program operates in the classroom through a team-teaching model that pairs the regular classroom teacher with a volunteer computer science researcher or expert from Microsoft or another industry partner. As the classroom teacher becomes more familiar with computer science by working with the industry professional, he or she gradually takes over the lesson plan.

See the original article here

Building A.I. That Can Build A.I.

SAN FRANCISCO — They are a dream of researchers but perhaps a nightmare for highly skilled computer programmers: artificially intelligent machines that can build other artificially intelligent machines.

With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine-learning algorithm that learns to build other machine-learning algorithms.

With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry. The project is part of a much larger effort to bring the latest and greatest A.I. techniques to a wider collection of companies and software developers.
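To make the definition above concrete, "learning a task by analyzing data" means adjusting parameters to fit examples rather than writing the rule by hand. The toy below uses plain gradient descent to recover the hidden rule y = 2x from four data points; it is purely illustrative and is not taken from the article or from any Google system.

```python
# Data generated by the hidden rule y = 2x. The algorithm never sees
# the rule, only the examples.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0    # the parameter the algorithm must learn
lr = 0.01  # learning rate (step size)

for _ in range(500):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill

print(round(w, 3))  # converges to 2.0, recovering the hidden rule
```

The same adjust-to-fit loop, scaled up to millions of parameters, is what trains the neural networks the article goes on to describe.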
The tech industry is promising everything from smartphone apps that can recognize faces to cars that can drive on their own. But by some estimates, only 10,000 people worldwide have the education, experience and talent needed to build the complex and sometimes mysterious mathematical algorithms that will drive this new breed of artificial intelligence.

The world’s largest tech businesses, including Google, Facebook and Microsoft, sometimes pay millions of dollars a year to A.I. experts, effectively cornering the market for this hard-to-find talent. The shortage isn’t going away anytime soon, if only because mastering these skills takes years of work.

The industry is not willing to wait. Companies are developing all sorts of tools that will make it easier for any operation to build its own A.I. software, including things like image and speech recognition services and online chatbots.

“We are following the same path that computer science has followed with every new type of technology,” said Joseph Sirosh, a vice president at Microsoft, which recently unveiled a tool to help coders build deep neural networks, a type of computer algorithm that is driving much of the recent progress in the A.I. field. “We are eliminating a lot of the heavy lifting.”

This is not altruism. Researchers like Mr. Dean believe that if more people and companies are working on artificial intelligence, it will propel their own research. At the same time, companies like Google, Amazon and Microsoft see serious money in the trend that Mr. Sirosh described. All of them are selling cloud-computing services that can help other businesses and developers build A.I.

“There is real demand for this,” said Matt Scott, a co-founder and the chief technical officer of Malong, a start-up in China that offers similar services. “And the tools are not yet satisfying all the demand.”

This is most likely what Google has in mind for AutoML, as the company continues to hail the project’s progress. Google’s chief executive, Sundar Pichai, boasted about AutoML last month while unveiling a new Android smartphone. Eventually, the Google project will help companies build systems with artificial intelligence even if they don’t have extensive expertise, Mr. Dean said.
Today, he estimated, no more than a few thousand companies have the right talent for building A.I., but many more have the necessary data. “We want to go from thousands of organizations solving machine learning problems to millions,” he said.

Google is investing heavily in cloud-computing services — services that help other businesses build and run software — which it expects to be one of its primary economic engines in the years to come. And after snapping up such a large portion of the world’s top A.I. researchers, it has a means of jump-starting this engine.

Neural networks are rapidly accelerating the development of A.I. Rather than building an image-recognition service or a language translation app by hand, one line of code at a time, engineers can much more quickly build an algorithm that learns tasks on its own. By analyzing the sounds in a vast collection of old technical support calls, for instance, a machine-learning algorithm can learn to recognize spoken words.

But building a neural network is not like building a website or some run-of-the-mill smartphone app. It requires significant math skills, extreme trial and error, and a fair amount of intuition. Jean-François Gagné, the chief executive of an independent machine-learning lab called Element AI, refers to the process as “a new kind of computer programming.”

In building a neural network, researchers run dozens or even hundreds of experiments across a vast network of machines, testing how well an algorithm can learn a task like recognizing an image or translating from one language to another. Then they adjust particular parts of the algorithm over and over again, until they settle on something that works. Some call it a “dark art,” in part because researchers find it difficult to explain why they make particular adjustments.

But with AutoML, Google is trying to automate this process.
It is building algorithms that analyze the development of other algorithms, learning which methods are successful and which are not. Eventually, they learn to build more effective machine learning. Google said AutoML could now build algorithms that, in some cases, identified objects in photos more accurately than services built solely by human experts. Barret Zoph, one of the Google researchers behind the project, believes that the same method will eventually work well for other tasks, like speech recognition or machine translation.

This is not always an easy thing to wrap your head around. But it is part of a significant trend in A.I. research. Experts call it “learning to learn” or “meta-learning.” Many believe such methods will significantly accelerate the progress of A.I. in both the online and physical worlds. At the University of California, Berkeley, researchers are building techniques that could allow robots to learn new tasks based on what they have learned in the past.

“Computers are going to invent the algorithms for us, essentially,” said a Berkeley professor, Pieter Abbeel. “Algorithms invented by computers can solve many, many problems very quickly — at least that is the hope.”

This is also a way of expanding the number of people and businesses that can build artificial intelligence. These methods will not replace A.I. researchers entirely. Experts, like those at Google, must still do much of the important design work. But the belief is that the work of a few experts can help many others build their own software.

Renato Negrinho, a researcher at Carnegie Mellon University who is exploring technology similar to AutoML, said this was not a reality today but should be in the years to come. “It is just a matter of when,” he said.
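The trial-and-error loop the article describes (run many experiments, score each candidate, keep what works) can be sketched as a toy random search over model settings. Everything here, including the scoring function and its sweet spot, is invented for illustration; real AutoML systems use far more sophisticated, learned search strategies, not plain random sampling.

```python
import random

random.seed(0)

def validation_score(width, lr):
    # Stand-in for training and evaluating a real network. The searcher
    # does not know that the best settings are width=32, lr=0.1; it can
    # only try candidates and observe their scores (higher is better).
    return -((width - 32) ** 2) - 100 * (lr - 0.1) ** 2

best = None
for _ in range(200):  # "dozens or even hundreds of experiments"
    cand = {"width": random.randint(1, 64),
            "lr": random.uniform(0.001, 1.0)}
    score = validation_score(**cand)
    if best is None or score > best[0]:
        best = (score, cand)

print(best[1])  # the best configuration found by the search
```

Automating this outer loop is the point: the human "dark art" of adjusting knobs between experiments becomes just another algorithm, and a learned search policy can replace the blind sampling shown here.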

See the whole article here