Automating code, Twitter's hack(s), a robot named Stretch
Language models write code, Twitter gets hacked (again), new robots, and top-tier conferences.
Check out this video for the first compelling demonstration of automated software generation I have seen.
What is it
OpenAI’s newest language model, GPT-3 (Generative Pre-trained Transformer 3), creating a variety of front-end code snippets from just two example snippets. (Front-end code is code that renders on a website, and it is often repeated in chunks to get variations of the same designs, which is why it is an easy initial target for automation.)
You can engage with the author of the tool here (Twitter). You can find a collection of more creative examples here, or another code generation example here. One I particularly liked was auto-generated creative fiction, along with an auto-generated written game built on the last generation of the model.
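Demos like this work by "few-shot prompting": the model is shown a couple of description-to-code pairs and asked to complete the pattern for a new description. Here is a minimal sketch of how such a prompt might be assembled; the format and example pairs are my own illustration, not the tool's actual prompt, and the API call itself is omitted.

```python
# Hypothetical sketch of a few-shot prompt for code generation.
# The description/code pairs and the layout are illustrative only.
def build_prompt(examples, new_description):
    """Concatenate example pairs, then the new description,
    leaving the final code slot for the model to complete."""
    parts = []
    for description, code in examples:
        parts.append(f"description: {description}\ncode: {code}")
    parts.append(f"description: {new_description}\ncode:")
    return "\n\n".join(parts)

examples = [
    ("a button that says subscribe", "<button>Subscribe</button>"),
    ("a large red heading that says Welcome",
     '<h1 style="color: red">Welcome</h1>'),
]
prompt = build_prompt(examples, "an input for an email address")
# `prompt` would then be sent to the model, which continues the pattern.
```

The notable part is that nothing here is code-specific: the model picks up the pattern purely from the text of the examples.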
How this works
The “Generative Pre-trained Transformer” language model is accessed through a new pay-to-play API from OpenAI. Here is an excerpt on NLP and Transformers from my post (AI & Arbitration of Truth, which seems to need revisiting every week).
The Tech — Transformers & NLP
Natural Language Processing (NLP) is the subfield of machine learning concerned with manipulating and extracting information from text. It’s used in smart assistants, translators, search engines, online stores, and more. NLP (along with computer vision) is one of the few monetized state-of-the-art machine learning developments. It’s also the leading candidate for being used to interpret truth.
The best NLP tool to date is a neural network architecture called the transformer. Long story short, transformers use an encoder-decoder structure that encodes words into a latent space and decodes them into a translation, typo fix, or classification (you can think of an encoder-decoder as compressing a complicated feature space into a simpler one via a neural network, i.e. nonlinear function approximation). A key tool in the NLP space is something called attention, which learns which words to focus on and for how long (rather than hard-coding it into an engineered system).
A transformer combines these tools, along with a couple of other advancements that allow the models to be trained efficiently in parallel. Below is a diagram showing how data flows through a transformer.
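The attention mechanism mentioned above can be written down in a few lines. This is a minimal numpy sketch of scaled dot-product attention, the core operation inside a transformer; the shapes and random inputs are illustrative, and real implementations add multiple heads, masking, and learned projections.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query scores every key,
    the scores become a softmax distribution ("how much to focus on
    each word"), and the output is the weighted sum of the values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))    # 4 queries of dimension 8
K = rng.normal(size=(6, 8))    # 6 keys of the same dimension
V = rng.normal(size=(6, 16))   # 6 values of dimension 16
out, w = attention(Q, K, V)    # out: (4, 16), w: (4, 6)
```

Because the weights are learned from data rather than engineered, the model decides for itself which words matter for each prediction.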
A visualization from an awesome tutorial I found.
Why it matters
This is the first application I have seen where people could use this to replace engineering time. Front-end designers can drastically increase their speed with this tool. It will likely be sold to many existing companies, and new businesses will use it in the future to create valuable services. For now, it takes a creative person to find the best applications; the tool is certainly limited by us human designers, and it will soon be superseded by the next state-of-the-art model [More].
This is eyebrow-raising for more reasons because of OpenAI’s famous charter. In short: we will work towards AGI, and if another company looks to be getting there first, we will join them. The claim behind this product is that the funds will help them execute AI research, but their leadership has in the past withheld models out of fear that they are “too dangerous to share.” This fine line of AI danger will only get sharper.
Nerd corner: the training compute for this model, on the order of thousands of petaflop/s-days (what exactly does this mean?), amounts to over $12 million in training costs alone [Source]. That’s a bit of a cost to recoup in fees. I like to think about how this model compares to the shallow neural networks I use for regression tasks: it has over 100 million times the number of parameters. That is a totally different regime of function approximation. For the nerdy-nerds, the academic paper is here.
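A quick back-of-envelope for that comparison. GPT-3’s 175 billion parameters is from the paper; the shallow-network size below is a hypothetical stand-in for the kind of small regression model I mean.

```python
# Back-of-envelope parameter comparison. The shallow-network size is a
# hypothetical example (a couple of small dense layers), not a measured value.
gpt3_params = 175_000_000_000     # from the GPT-3 paper
shallow_net_params = 1_500        # illustrative small regression network
ratio = gpt3_params / shallow_net_params
print(f"{ratio:.1e}")             # prints 1.2e+08, i.e. over 100 million x
```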
Robotics & Generative Text
I requested access to the beta for robotics research. I am interested to see what level of planning a language model (big neural network) can achieve given context in the form of a game. Does language capture the basic intent in a game and the structure of a solution?
Longer term I think language integration into robotic rewards is of interest - it will allow humans who work with the robots to give the machines verbal tasks (verification of said tasks is a problem for another day).
Given an embedding of a game board (written, grid, other methods), say “where should I move.”
Given a description of an environment: “the block is on the ball which is to the right of the chair,” ask “is the ball above the chair?”
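To probe questions like these, the game context has to be serialized into text the model can read. Here is a minimal sketch of one way to do that for a tiny grid game; the board encoding and question format are my own illustration, not an established benchmark.

```python
# Hypothetical sketch: turning a small grid game into a text prompt
# a language model could complete. The encoding is illustrative only.
def board_to_prompt(board, question):
    """Serialize a grid of single-character cells plus a question
    into one prompt string, ending where the model should answer."""
    rows = ["".join(row) for row in board]
    return ("Board (X = agent, G = goal, . = empty):\n"
            + "\n".join(rows)
            + f"\nQuestion: {question}\nAnswer:")

board = [
    list("..G"),
    list(".X."),
    list("..."),
]
prompt = board_to_prompt(board, "where should I move?")
```

Whether the model’s completion reflects any real spatial understanding, rather than surface pattern-matching, is exactly the open question.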
This is a very rudimentary example, but I think links from commercialized machine learning fields, such as deep learning for vision and language, into robotics have high potential.
What you need to know:
A summary of the extent of the hack from the Wall Street Journal:
Twitter Inc. was hit with a widespread attack Wednesday that allowed hackers to take over an array of accounts including those of celebrities, politicians and billionaires such as Bill Gates, Kanye West, Joe Biden and Barack Obama, as well as Apple Inc. and other companies. The attack, which security experts called the most significant hacking incident in Twitter’s history, began before 4 p.m. EDT, when compromised accounts — many of them related to the digital currency bitcoin — began posting messages requesting money be sent to cryptocurrency accounts. The attacks quickly spread to additional, more prominent accounts, with the bogus messages sometimes receiving thousands of likes before they were taken down, only to be posted again a short time later, sometimes on the same account.
…the company took the extraordinary step of limiting posts from verified accounts with blue check marks, which Twitter generally designates for more prominent users. Twitter, late Wednesday, said it believed the hackers perpetrated the attack by targeting employees who had access to the company’s internal systems and tools.
Vice reported the method could have been bribery:
A Twitter insider was responsible for a wave of high profile account takeovers on Wednesday, according to leaked screenshots obtained by Motherboard and two sources who took over accounts… “We used a rep that literally done all the work for us,” one of the sources told Motherboard. The second source added they paid the Twitter insider. Motherboard granted the sources anonymity to speak candidly about a security incident.
A Twitter spokesperson told Motherboard that the company is still investigating whether the employee hijacked the accounts themselves or gave hackers access to the tool. The accounts were taken over using an internal tool at Twitter, according to the sources, as well as screenshots of the tool obtained by Motherboard.
A history of Twitter blips
Twitter doesn’t have the best history in security. In 2017, a contractor (a non-full-time employee) deactivated Trump’s account for a few minutes (it is suggested his account now has new protections), and in 2019 it was found that employees accessed information on Saudi Arabian dissidents.
These issues come down to the fact that employees have broad access to internal data (common practice in the technology industry), and that is going to change.
Geopolitical and Automation Implications
Many social media companies operate on the following personnel assumption:
The few bad apples will be outweighed by the vast majority of happy employees.
This assumption does not hold given how information travels and compounds on the internet. One bad employee can cause a lot of damage, so we are going to see changes in the future. The “damage” exposed by the hackers so far was ~$100,000 in bitcoin, but there could be a lot of leverage downstream from the rumored collection of all Twitter content, which includes private messages.
I think this Twitter breach will be the landmark case for a new set of regulations on a) who can access sensitive data at these companies and b) how they do so. The government likely should regulate both, but getting one would be a good starting point. Multiple companies have a zero-strike policy on data infractions, but when the cost of that zeroth strike is limited only by the creativity of the hacker, that is a big downside.
When a “social engineering attack” has the potential to destabilize the geopolitical order (think of the President’s account), it really fits the trend of how cyber-warfare will define our next decades. It’s weird to think most companies will be in a warfare state behind the scenes, but the public will only know when there’s a landed strike. Both sides of this war will be automated (and scary).
The snacks section of my post sits somewhere between a full analysis and a simple “I listened to this.” This week we have the launch of a fun new robot, multiple high-profile robotics and AI conferences, and more.
Stretch, the robot
Some Xooglers (ex-Googlers) have created a company called Hello Robot and introduced their new robot, Stretch. Stretch is designed to be simple and useful for multiple tasks. It is lean, cheap, durable, and sensor-packed. And it comes with cute advertising, below. [More]
It’s interesting to note that this company has (most likely) opted out of VC funding. Dry powder at venture capital firms is at an all-time high right now, and investments into automation have been growing exponentially (or so the few signals we have seem to say). This suggests they have a specific market in mind, don’t want the added platform services that VCs can offer, and/or have no need for the money.
In-depth, operational reporting on robotics startups is really sparse these days. I think these robots will be acquired by researchers to start with, due to the cheap price tag and functional approach. Anywho, it’s of note to see more players in the area. Within 5 years, people in Silicon Valley will probably be buying (expensive) robots for their homes (the markup is due to software), so give it another decade to trickle down to everyone else.
Could they be trying to replicate this video (from 2004) below? Do you think this robot is autonomous? It’s a famous history-point in some robotics circles.
The catch is that there was a human teleoperating it, and the footage is sped up 8x. Hopefully having cheaper platforms helps us get to true home autonomy. Although, a famous robotics researcher has rented Airbnbs to collect home-training data.
This week has been the Robotics: Science and Systems (RSS) conference and the International Conference on Machine Learning (ICML). I have been spending most of my time on the robotics side of the coin, but decided to learn something new: I went to the workshop on Self-Supervised Learning (which had an impressive lineup).
Self-supervised learning is the idea that robots can explore and label their own data. The huge potential upside here, in the age of lockdowns: no human oversight needed. Right now the robots don’t learn advanced behaviors like stacking objects into shapes, but they can learn to grasp and manipulate objects (or walk). This is a very young field that I will keep my eye on. It is the curiosity side of robot learning, and I want to see more on the lifelong side of things (improvements to peak performance over control-theory baselines).
Here are my notes from the workshop, and many nice slide screenshots - thanks Zoom.
Multiple levels to recommender systems
I am doing some research on how recommender systems affect more actions than intended in their design, and I came across a related paper today: “What are you optimizing for? Aligning Recommender Systems with Human Values.” This diagram takes it another step: unintended consequences for the groups affected by those actions. It is yet another instance of people realizing that when optimizing a reward function in machine learning, there will always be unmeasured effects.
Please subscribe if you want to see the issue analyzing the ethics of recommender systems (these systems are in so many major products, like Netflix, online shopping, and more). The diagram below is from the paper’s presentation.
The tangentially related
I am reading (newsletters/blogs):
An article that summarizes well why I went to Substack. The only problem for me is that I don’t have a strong following to begin with, but I am abiding by the philosophy that consistent quality will build one. Help me build this community.
More press along the core theme of this blog: we need better and new AI ethics.
A local’s view on how Hong Kong is “impossible.” Ever since visiting in November of 2019, I have been following HK closely. It looks like going back will be increasingly risky or different.
Human Compatible: Artificial Intelligence and the Problem of Control - Stuart Russell and Race After Technology: Abolitionist Tools for the New Jim Code - Ruha Benjamin raise different problems with a “standard model” of AI. A standard model is a formulation where agent A optimizes objective R. This is problematic because a single-minded agent may exploit other avenues (at risk to humans). Continuing this, there will be issues when assuming any moral / societal structure in an equality-focused AI method. We need curious methods, not fixed objectives; a post on rethinking robot (and AI agent) design is in the works.
I am listening to / watching:
Hopefully you find some of this interesting and fun. You can find more about the author here. Forwarded this? Subscribe here. It helps me a lot if you forward this to interested parties, or Tweet at me @natolambert. I write to learn and converse.