The Sun released this weak non-Earth directed coronal mass ejection (CME) on December 24, 2018.
The Aurora and the Sunrise: On the International Space Station (ISS), you can only admire an aurora until the sun rises. Then the background Earth becomes too bright. Unfortunately, after sunset, the rapid orbit of the ISS around the Earth means that sunrise is usually less than 47 minutes away. In the featured image, a green aurora is visible below the ISS – and on the horizon to the upper right, while sunrise approaches ominously from the upper left. Watching an aurora from space can be mesmerizing as its changing shape has been compared to a giant green amoeba. Auroras are composed of energetic electrons and protons from the Sun that impact the Earth’s magnetic field and then spiral down toward the Earth so fast that they cause atmospheric atoms and molecules to glow. The ISS orbits at nearly the same height as auroras, sometimes flying right through an aurora’s thin upper layers, an event that neither harms astronauts nor changes the shape of the aurora. via NASA
Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.
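That trial-and-error loop can be as simple as random search: the programmer supplies only a score, and the search stumbles its way to an answer. Here's a minimal toy sketch of the idea (illustrative only, not any particular ML algorithm from this post):

```python
import random

random.seed(0)  # make the toy run repeatable

def score(x):
    # The programmer only says what "good" means:
    # get x as close to 42 as possible.
    return -abs(x - 42)

# The algorithm is never told the answer; it just mutates
# its guess and keeps whatever scores better.
best = random.uniform(-100, 100)
for _ in range(10000):
    candidate = best + random.uniform(-1, 1)
    if score(candidate) > score(best):
        best = candidate

print(round(best))  # converges on 42
```

Note that nothing here constrains *how* the score gets maximized — which is exactly why the misspecified score functions in the anecdotes below go so entertainingly wrong.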
This often works really well – machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.
But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.
When machine learning algorithms solve problems in unexpected ways, programmers find them, okay yes, annoying sometimes, but often purely delightful.
So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The paper is well worth reading, as are the original references, but here are several of my favorite examples.
Bending the rules to win
First, there’s a long tradition of using simulated creatures to study how different forms of locomotion might have evolved, or to come up with new ways for robots to walk.
Why walk when you can flop? In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance.
[Image: Robot is simply a tower that falls over.]
Why jump when you can can-can? Another set of simulated robots were supposed to evolve into a form that could jump. But the programmer had originally defined jumping height as the height of the tallest block, so – once again – the robots evolved to be very tall. The programmer tried to solve this by defining jumping height as the height of the block that was originally the *lowest*. In response, the robot developed a long skinny leg that it could kick high into the air in a sort of robot can-can.
[Image: Tall robot flinging a leg into the air instead of jumping]
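Both reward bugs boil down to scoring a proxy instead of actual jumping. Here's a hypothetical reconstruction in a few lines — the "robots" are just lists of body-part heights, and all the names are mine, not the original simulation's:

```python
# Hypothetical sketch of the two buggy reward definitions.
# A "robot" is a list of block heights, before and after the trial.

def reward_v1(final_heights, initial_heights):
    # Bug #1: "jumping" = height of the tallest block.
    # A stationary tower scores perfectly.
    return max(final_heights)

def reward_v2(final_heights, initial_heights):
    # Bug #2: "jumping" = final height of whichever block
    # *started out* lowest. One long leg kicked into the air
    # scores perfectly, no jump required.
    lowest = initial_heights.index(min(initial_heights))
    return final_heights[lowest]

tower  = dict(initial_heights=[0, 5, 10], final_heights=[0, 5, 10])
cancan = dict(initial_heights=[0, 1, 2],  final_heights=[9, 1, 2])

print(reward_v1(**tower))   # 10 — the tower never moved at all
print(reward_v2(**cancan))  # 9  — the flung leg never left the ground
```

A reward that measured, say, the minimum height across *all* blocks at the peak of the motion would have been much harder to game — but that's exactly the kind of fix you only think of after the robots have shown you the loophole.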
Hacking the Matrix for superpowers
Potential energy is not the only energy source these simulated robots learned to exploit. It turns out that, like in real life, if an energy source is available, something will evolve to use it.
Floating-point rounding errors as an energy source: In one simulation, robots learned that small rounding errors in the math that calculated forces meant that they gained a tiny bit of extra energy every time they moved. They learned to twitch rapidly, generating lots of free energy that they could harness. The programmer noticed the problem when the robots started swimming extraordinarily fast.
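You can see the raw material for this exploit in a few lines of Python. Forces that should cancel exactly don't quite cancel in floating-point arithmetic (this is a general property of floats, not the original simulation's code):

```python
# Three forces that should sum to exactly zero... but don't,
# because 0.1, 0.2, and 0.3 have no exact binary representation.
forces = [0.1, 0.2, -0.3]
net = sum(forces)
print(net)  # ~5.55e-17: a tiny, nonzero leftover force
```

Each leftover is around 10^-17 — utterly negligible once. But an evolved creature twitching thousands of times per simulated second can harvest millions of these residues, and suddenly the rounding error is an engine.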
Harvesting energy from crashing into the floor: Another simulation had some problems with its collision detection math that robots learned to use. If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.
[Image: robot moving by vibrating into the floor]
Clap to fly: In another simulation, jumping bots learned to harness a different collision-detection bug that would propel them high into the air every time they crashed two of their own body parts together. Commercial flight would look a lot different if this worked in real life.
Discovering secret moves: Computer game-playing algorithms are really good at discovering the kind of Matrix glitches that humans usually learn to exploit for speed-running. An algorithm playing the old Atari game Q*bert discovered a previously-unknown bug where it could perform a very specific series of moves at the end of one level and instead of moving to the next level, all the platforms would begin blinking rapidly and the player would start accumulating huge numbers of points.
A Doom-playing algorithm also figured out a special combination of movements that would stop enemies from firing fireballs – but it only works in the algorithm’s hallucinated dream-version of Doom. Delightfully, you can play the dream-version here.
[Image: Q*bert player is accumulating a suspicious number of points, considering that it’s not doing much of anything]
Shooting the moon: In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.
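The paper doesn't spell out the numeric format involved, but integer wraparound is the classic way a huge value registers as a small one. Here's a sketch that simulates 32-bit two's-complement overflow in Python (whose native integers don't overflow on their own):

```python
def wrap_int32(x):
    # Simulate 32-bit two's-complement wraparound — a plausible
    # mechanism for the landing-force bug, not the verified one.
    return (x + 2**31) % 2**32 - 2**31

gentle_landing = 3
brutal_landing = 2**32 + 3  # wraps all the way around

print(wrap_int32(gentle_landing))  # 3
print(wrap_int32(brutal_landing))  # also 3 — the huge force scores as tiny
```

From the scoring function's point of view, the catastrophic landing and the feather-soft one are literally the same number.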
Something as apparently benign as a list-sorting algorithm could also solve problems in rather innocently sinister ways.
Well, it’s not unsorted: For example, there was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted.
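The loophole is vacuous truth: any reasonable "is it sorted?" check passes trivially on an empty list. A quick illustration (my own check, not the algorithm from the anecdote):

```python
def is_sorted(xs):
    # True when every element is <= its successor —
    # which is vacuously true when there are no elements.
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

print(is_sorted([3, 1, 2]))  # False
print(is_sorted([]))         # True: delete the list, pass the test
```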
Solving the Kobayashi Maru test: Another algorithm was supposed to minimize the difference between its own answers and the correct answers. It found where the answers were stored and deleted them, so it would get a perfect score.
How to win at tic-tac-toe: In another beautiful example, in 1997 some programmers built algorithms that could play tic-tac-toe remotely against each other on an infinitely large board. One programmer, rather than designing their algorithm’s strategy, let it evolve its own approach. Surprisingly, the algorithm suddenly began winning all its games. It turned out that the algorithm’s strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the new greatly-expanded board, the huge gameboard would cause it to run out of memory and crash, forfeiting the game.
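Why does one far-away move crash an opponent? A naive engine that materializes the full bounding box of all moves played so far needs memory proportional to that box's area. A back-of-the-envelope sketch (hypothetical reconstruction — the 1997 programs' actual board representation isn't described in detail):

```python
def cells_needed(moves):
    # Memory cost of naively allocating the bounding box
    # around every move played so far.
    xs = [x for x, y in moves]
    ys = [y for x, y in moves]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return width * height

normal_game = [(0, 0), (1, 1), (2, 0)]
exploit     = [(0, 0), (1, 1), (10**9, 10**9)]

print(cells_needed(normal_game))  # 6 cells — fine
print(cells_needed(exploit))      # over 10**18 cells — instant out-of-memory
```

An engine that stored moves sparsely (say, in a hash map keyed by coordinates) would shrug the exploit off — which is the general lesson of this whole post in miniature: the algorithm didn't beat its opponents at tic-tac-toe, it beat them at memory allocation.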
When machine learning solves problems, it can come up with solutions that range from clever to downright uncanny.
Biological evolution works this way, too – as any biologist will tell you, living organisms find the strangest solutions to problems, and the strangest energy sources to exploit. Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.
So as programmers we have to be very very careful that our algorithms are solving the problems that we meant for them to solve, not exploiting shortcuts. If there’s another, easier route toward solving a given problem, machine learning will likely find it.
Fortunately for us, “kill all humans” is really really hard. If “bake an unbelievably delicious cake” also solves the problem and is easier than “kill all humans”, then machine learning will go with cake.
“The most powerful determinant of whether a woman goes on in science might be whether anyone encourages her to go on. My freshman year at Yale, I earned a 32 on my first physics midterm. My parents urged me to switch majors. All they wanted was that I be able to earn a living until I married a man who could support me, and physics seemed unlikely to accomplish either goal. I trudged up Science Hill to ask my professor, Michael Zeller, to sign my withdrawal slip. I took the elevator to Professor Zeller’s floor, then navigated corridors lined with photos of the all-male faculty and notices for lectures whose titles struck me as incomprehensible. I knocked at my professor’s door and managed to stammer that I had gotten a 32 on the midterm and needed him to sign my drop slip. “Why?” he asked. He received D’s in two of his physics courses. Not on the midterms — in the courses. The story sounded like something a nice professor would invent to make his least talented student feel less dumb. In his case, the D’s clearly were aberrations. In my case, the 32 signified that I wasn’t any good at physics. “Just swim in your own lane,” he said. Seeing my confusion, he told me that he had been on the swimming team at Stanford. His stroke was as good as anyone’s. But he kept coming in second. “Zeller,” the coach said, “your problem is you keep looking around to see how the other guys are doing. Keep your eyes on your own lane, swim your fastest and you’ll win.” I gathered this meant he wouldn’t be signing my drop slip. “You can do it,” he said. “Stick it out.” I stayed in the course. Week after week, I struggled to do my problem sets, until they no longer seemed impenetrable. The deeper I now tunnel into my four-inch-thick freshman physics textbook, the more equations I find festooned with comet-like exclamation points and theorems whose beauty I noted with exploding novas of hot-pink asterisks. 
The markings in the book return me to a time when, sitting in my cramped dorm room, I suddenly grasped some principle that governs the way objects interact, whether here on earth or light years distant, and I marveled that such vastness and complexity could be reducible to the equation I had highlighted in my book. Could anything have been more thrilling than comprehending an entirely new way of seeing, a reality more real than the real itself? I earned a B in the course; the next semester I got an A. By the start of my senior year, I was at the top of my class, with the most experience conducting research. But not a single professor asked me if I was going on to graduate school. When I mentioned shyly to Professor Zeller that my dream was to apply to Princeton and become a theoretician, he shook his head and said that if you went to Princeton, you had better put your ego in your back pocket, because those guys were so brilliant and competitive that you would get that ego crushed, which made me feel as if I weren’t brilliant or competitive enough to apply. Not even the math professor who supervised my senior thesis urged me to go on for a Ph.D. I had spent nine months missing parties, skipping dinners and losing sleep, trying to figure out why waves — of sound, of light, of anything — travel in a spherical shell, like the skin of a balloon, in any odd-dimensional space, but like a solid bowling ball in any space of even dimension. When at last I found the answer, I knocked triumphantly at my adviser’s door. Yet I don’t remember him praising me in any way. I was dying to ask if my ability to solve the problem meant that I was good enough to make it as a theoretical physicist. But I knew that if I needed to ask, I wasn’t.”
— Eileen Pollack (via logicandgrace)