Everyone seems to be insisting on installing cameras all over their homes these days, which seems incongruous with the ongoing privacy crisis — but that’s a post for another time. Today, we’re talking about enabling those cameras to send high-definition video signals wirelessly without killing their little batteries. A new technique makes beaming video out more than 99 percent more efficient, possibly making batteries unnecessary altogether.
Cameras found in smart homes or wearables need to transmit HD video, but it takes a lot of power to process that video and then transmit the encoded data over Wi-Fi. Small devices leave little room for batteries, and they’ll have to be recharged frequently if they’re constantly streaming. Who’s got time for that?
The idea behind this new system, created by a University of Washington team led by prolific researcher Shyam Gollakota, isn’t fundamentally different from some others out there right now. Devices with low data rates, like a digital thermometer or motion sensor, can use something called backscatter to send a low-power signal consisting of a couple of bytes.
Backscatter is a way of sending a signal that requires very little power, because what’s actually transmitting the power is not the device that’s transmitting the data. A signal is sent out from one source, say a router or phone, and another antenna essentially reflects that signal, but modifies it. By having it blink on and off you could indicate 1s and 0s, for instance.
UW’s system attaches the camera’s output directly to the antenna, so the brightness of a pixel correlates directly with the length of the reflected signal. A short pulse means a dark pixel, a longer one a lighter pixel, and the longest length indicates white.
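The brightness-to-pulse-length idea can be sketched in a few lines. This is an illustration only — the specific numbers and the linear mapping are assumptions for the sketch, not the UW team’s actual analog design:

```python
def pixel_to_pulse_length(brightness, min_len=1.0, max_len=10.0):
    """Map an 8-bit pixel brightness (0-255) to a reflection pulse
    length in arbitrary units: dark pixels get short pulses, white
    pixels the longest."""
    if not 0 <= brightness <= 255:
        raise ValueError("brightness must be in 0-255")
    # Linear interpolation between the shortest and longest pulse
    return min_len + (max_len - min_len) * brightness / 255

print(pixel_to_pulse_length(0))    # darkest pixel: shortest pulse (1.0)
print(pixel_to_pulse_length(255))  # white pixel: longest pulse (10.0)
```

Because the pixel value drives the antenna directly, there is no digitizing or compressing step in between — which is where most of the power savings come from.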
Some clever manipulation of the video data by the team reduced the number of pulses necessary to send a full video frame, from sharing some data between pixels to using a “zigzag” scan (left to right, then right to left) pattern. To get color, each pixel needs to have its color channels sent in succession, but this too can be optimized.
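The “zigzag” scan itself is simple to picture: instead of jumping back to the left edge for every row, the read-out direction alternates, so consecutive pixels are always neighbors and their values (and thus pulse lengths) change gradually. A minimal sketch of the traversal order:

```python
def zigzag_scan(width, height):
    """Yield pixel coordinates in zigzag (boustrophedon) order:
    left-to-right on even rows, right-to-left on odd rows."""
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            yield (x, y)

# A 3x2 frame is read out as:
print(list(zigzag_scan(3, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
```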
Assembly and rendering of the video is accomplished on the receiving end, for example on a phone or monitor, where power is more plentiful.
In the end, a full-color HD signal at 60FPS can be sent with less than a watt of power, and a more modest but still very useful signal — say, 720p at 10FPS — can be sent for under 80 microwatts. That’s a huge reduction in power draw, achieved mainly by eliminating the analog-to-digital converter and on-chip compression entirely. At those levels, you can essentially pull all the power you need straight out of the air.
They put together a demonstration device with off-the-shelf components, though without custom chips it won’t reach those
microwatt power levels; still, the technique works as described. The prototype helped them determine what type of sensor and chip package would be necessary in a dedicated device.
Of course, it would be a bad idea to just blast video frames into the ether without any compression; luckily, the way the data is coded and transmitted can easily be modified to be meaningless to an observer. Essentially you’d just add an interfering signal known to both devices before transmission, and the receiver can subtract it.
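The masking idea can be sketched with a shared pseudorandom sequence standing in for the agreed-upon interfering signal — a toy model of the concept, not the paper’s actual scheme:

```python
import random

def make_mask(seed, n):
    """Both devices derive the same 'interference' from a shared seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def transmit(samples, seed):
    """Add the known interference to the signal before sending."""
    mask = make_mask(seed, len(samples))
    return [s + m for s, m in zip(samples, mask)]

def receive(received, seed):
    """The receiver, knowing the seed, subtracts the interference."""
    mask = make_mask(seed, len(received))
    return [r - m for r, m in zip(received, mask)]

pixels = [0.1, 0.5, 0.9]
recovered = receive(transmit(pixels, seed=42), seed=42)
# recovered matches pixels; an observer without the seed sees only noise
```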
Video is the first application the team thought of, but there’s no reason their technique for efficient, quick backscatter transmission couldn’t be used for non-video data.
The tech is already licensed to Jeeva Wireless, a startup founded by UW researchers (including Gollakota) a while back that’s already working on commercializing another low-power wireless device. You can read the details about the new system in their paper, presented last week at the Symposium on Networked Systems Design and Implementation.
IkeaBot is a project built by the Control Robotics Intelligence (CRI) group at NTU in Singapore. The team began by teaching robots to insert pins and manipulate IKEA parts; then, slowly, they began to figure out how to pit the robots against the furniture. The results, if you’ve ever fought with someone trying to put together a Billy, are heartening.
The assembly process from CRI is not quite that autonomous; “although all the steps were automatically planned and controlled, their sequence was hard-coded through a considerable engineering effort.” The researchers mention that they can “envision such a sequence being automatically determined from the assembly manual, through natural-language interaction with a human supervisor or, ultimately, from an image of the chair,” although we feel like they should have a chat with Ross Knepper, whose IkeaBot seemed to do just fine without any of that stuff.
In other words, the robots are semi-autonomous, but they never get flustered and can use basic heuristics to figure out next steps. The robots can now essentially assemble chairs in about 20 minutes, a feat that I doubt many of us could emulate. You can watch the finished dance here, in all its robotic glory.
The best part? Even robots get frustrated and fling parts around:
I, for one, welcome our IKEA chair manufacturing robotic overlords.
Although I do my best to minimize the trash produced by my lifestyle (blog posts notwithstanding), one thing I can’t really control, at least without carrying a spoon on my person at all times, is the necessity of using a disposable stick to stir my coffee. That could all change with the Stircle, a little platform that spins your drink around to mix it.
Now, of course this is ridiculous. And there are other things to worry about. But honestly, the scale of waste here is pretty amazing. Design house Amron Experimental says that 400 million stir sticks are used every day, and I have no reason to doubt that. My native Seattle probably accounts for a quarter of that.
So you need to get the sugar (or agave nectar) and cream (or almond milk) mixed in your iced americano. Instead of reaching for a stick and stirring vigorously for 10 or 15 seconds, you could instead place your cup in the Stircle (first noticed by New Atlas and a few other design blogs), which would presumably be built into the fixins table at your coffee shop.
Once you put your cup on the Stircle, it starts spinning — first one way, then the other, and so on, agitating your drink and achieving the goal of an evenly mixed beverage without using a wood or plastic stirrer. It’s electric, but I can imagine one being powered by a lever or button that compresses a spring. That would make it even greener.
The video shows that it probably gets that sugar and other low-lying mixers up into the upper strata of the drink, so I think we’re set there. And it looks as though it will take a lot of different sizes, including reusable tumblers. It clearly needs a cup with a lid, since otherwise the circling liquid will fly out in every direction, which means you have to be taking your coffee to go. That leaves out pretty much every time I go out for coffee in my neighborhood, where it’s served (to stay) in a mug or tall glass.
But a solution doesn’t have to fix everything to be clever or useful. This would be great at an airport, for instance, where I imagine every order is to go. Maybe they’ll put it in a bar, too, for extra smooth stirring of martinis.
Actually, I know that people in labs use automatic magnetic stirrers for their coffee. This would be a way to do that without appropriating lab property. Those things are pretty cool too, though.
You might remember Amron from one of their many previous clever designs; I happen to remember the Keybrid and Split Ring Key, both of which I used for a while. I’ll be honest, I don’t expect to see a Stircle in my neighborhood cafe any time soon, but I sure hope they show up in Starbucks stores around the world. We’re going to run out of those stirrer things sooner or later.
It’s almost time for SpaceX to launch NASA’s TESS, a space telescope that will search for exoplanets across nearly the entire night sky. The launch has been delayed more than once already: originally scheduled for March 20, it slipped to April 16 (Monday), then some minor issues pushed it to today — at 3:51 PM Pacific time, to be precise. You can watch the launch live below.
TESS, which stands for Transiting Exoplanet Survey Satellite, is basically a giant wide-angle camera (four of them, actually) that will snap pictures of the night sky from a wide, eccentric and never-before-tried orbit.
The technique it will use is fundamentally the same as that employed by NASA’s long-running and highly successful Kepler mission. When distant planets pass between us and their star, they cause a momentary decrease in that star’s brightness. TESS will monitor thousands of stars simultaneously for such “transits,” watching a single section of sky for a month straight before moving on to another.
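The transit signature is just a brief dip in a star’s light curve. A toy detector illustrates the principle — the threshold and baseline choice here are my own simplifications, not the mission’s actual pipeline:

```python
def find_transits(flux, threshold=0.99):
    """Flag indices where a star's normalized brightness dips below
    a fraction of its baseline -- the simple signature of a possible
    planetary transit."""
    baseline = sorted(flux)[len(flux) // 2]  # median as the baseline
    return [i for i, f in enumerate(flux) if f < baseline * threshold]

# A flat light curve with one shallow dip at index 3
light_curve = [1.0, 1.0, 1.0, 0.98, 1.0, 1.0, 1.0]
print(find_transits(light_curve))  # -> [3]
```

Real pipelines must also reject starspots, eclipsing binaries and instrument noise, which is why repeated, periodic dips are required before a candidate is taken seriously.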
Over its two-year mission, it will have imaged 85 percent of the sky — hundreds of times the area Kepler observed, and covering completely different stars: brighter ones that should yield more data.
TESS, which is about the size of a small car, will launch on top of a SpaceX Falcon 9 rocket. SpaceX will attempt to recover the first stage of the rocket by having it land on a drone ship, and the nose cone will, hopefully, get a gentle parachute-assisted splashdown in the Atlantic, where it too can be retrieved.
The feed below should go live 15 minutes before launch, or at about 3:35.
Hardware isn’t easy — especially if you decline to take advantage of the global manufacturing infrastructure, build everything in a flat in London and use only local labor and materials. But that’s what the creators of successful Kickstarter project Moon did, and they have no regrets.
Back in 2016, I got a pitch for the Moon, an accurate replica of our satellite around which a set of LEDs rotated, illuminating the face in perfect time with the actual phase. A cool idea, though for some reason or another I didn’t cover it, instead asking Alex du Preez, one of the creators, to hit me back later to talk about the challenges of crowdfunded, home-brewed hardware.
The project was a success, raising £145,393 — well over the £25,000 goal — and Alex and I chatted late last year while the team was wrapping up production and starting on a second run, which in fact they just recently wrapped up, as well.
It’s an interesting case study of a crowdfunded hardware project, not least because the Moon team made the unusual choice to keep everything local: from the resin casting of the moon itself to the chassis and electronics.
“At the time we wanted to make sure that we made them correctly, and that we didn’t spend a lot of our energy and money prototyping with a factory,” du Preez said. “We’ve seen a lot of Kickstarter campaigns go straight to China, to some manufacturing facility, and we were afraid we’d lose a lot of the quality of the product if we did that.”
The chief benefit, in addition to the good feeling they got by sourcing everything from no farther than the next town over, was the ability to talk directly to these people and explain or work through problems in person.
“We can just get on a train and go visit them,” du Preez said. “For instance, there’s a bent pipe which is the arm of the device — even that part alone, we worked with a pipe-bending company and went out there like three times to have conversations with the guy.”
Of course, they weren’t helpless themselves; the three people behind the project are designers and engineers who have helped launch crowdfunding campaigns before, though this one was the first they had done on their own.
“I think Oscar [Lhermitte, who led the project] probably worked two and a half or three years on this, from ideation all the way to manufacturing,” said du Preez. “He had this idea and he contacted NASA and asked for this topographical data to make the map. He came to us because he wanted some technical and engineering input.”
The decision to do it all in the U.K. wasn’t made any easier by the fact that it was a demanding piece of hardware, the team’s standards were high and, despite being a great success, $200,000 or so still isn’t a lot with which to build a unique, high-precision electronic device from scratch.
The whole operation was run out of a small apartment in London, and the team had to improvise quite a bit.
“We had this tiny little room the size of a kitchen we were producing these things out of,” du Preez recalled. “It wasn’t like a warehouse. And we were on the second floor — we’d get a delivery of like, a ton of metal, and we’d have to spend half a day hauling it up, then boxes would arrive and it would fill up the whole studio.”
They resisted the urge to get something off the shelf or ready-made from Shenzhen, choosing instead to rely on their own ingenuity (and that of nearby, puzzlingly specific artisans) to solve problems.
“One of the trickiest parts was that every single part is made with a different process,” he said. “If you want to make a piece of electronics in a plastic case,” for example a security camera or cheap Android phone, “it’s a lot quicker to develop and execute.”
Obviously the most important part to get right is the globe of the moon itself — and no one had ever made something quite like this before, so they had to figure out how to do it themselves.
“It’s quite large, so we can’t cast it in one solid piece,” du Preez explained. “It would be too heavy to ship. And it sinks — the material moves too much. So what you do is you make a mold, like a negative of the moon, and you pour the liquid inside it. And while the liquid is setting, you rotate it around, to make sure the inner surface is being coated by resin while it’s drying.”
In order to do this for their prototyping stage, they jury-rigged a solution from “wood, bicycle parts, and I think a sewing machine engine,” he said. “We had to put that together on the spot to keep costs down. We kind of replicated what we knew was already out there to test our materials and concepts. We knew if we could make this work, we just had to build or find a better one.”
As luck would have it, they did find someone — right up the tracks.
“We found this guy in Birmingham who basically has an industrial version of this; he makes molds and he has this big metal cage rotating around all day,” du Preez said. “The quality of his work is amazing.” And, of course, it’s just a short train trip away — relative to a trip to Guangzhou, anyway.
Attention to detail, especially regarding the globe, led to delays in shipping the Moon; they ended up about four months late.
Late arrivals are of course to be expected when it comes to Kickstarter projects, but du Preez said that the response of backers, both friendly and unfriendly, surprised him.
“It seemed quite binary. We had 541 backers, and I’d say only two were really pissed off about not having their moon, and they were irate. I mean they were fuming,” he said.
“But no one really got publicly angry with us. They’d just check in. Once they email you and you give them a response, they seem to be very understanding. As long as we kept the momentum going, people were okay with it.”
That said, four months late isn’t really that late. There are projects that have raised far more than Moon and were years late or never even shipped (full disclosure, I’ve backed a couple!). Du Preez offered some advice to would-be crowdfunders who want to keep the goodwill of their backers.
“It’s really important to understand your pricing, who’s going to manufacture it, all the way down to shipping. If you have no game plan for after Kickstarter you’re going to be in a tricky situation,” he said. “We had a bill of materials and priced everything out before we went to Kickstarter. And you need some kind of proof of concept to show that the product works. There are so many great hardware development platforms out there that I think that’s quite easy to do now.”
Their attention to detail and obvious pride in their work has resulted in a lasting business, du Preez told me; the company has attracted attention from Adam Savage, Mark Hamill and MOMA, while a second run of 250 has just completed and the team is looking into other projects along these lines.
You can track the team’s projects or order your own unit (though you may wish you’d gotten the early bird discount) over at the dedicated Moon website.
Skagen is a well-known maker of thin and uniquely Danish watches. Founded in 1989, the company is now part of the Fossil Group and, as such, has begun dabbling in both the analog with the Hagen and now Android Wear with the Falster. The Falster is unique in that it stuffs all of the power of a standard Android Wear device into a watch that mimics the chromed aesthetic of Skagen’s austere design while offering just enough features to make you a fashionable smartwatch wearer.
The Falster, which costs $275 and is available now, has a fully round digital OLED face, which means you can read the time at all times. When the watch wakes up, you see an ultra-bright, white-on-black time-telling color scheme, and a tap of the crown jumps into the various features, including Google Fit and the always clever Translate feature that lets you record a sentence and then show it to the person in front of you.
You can buy it with a leather or metal band and the mesh steel model costs $20 extra.
Sadly, in order to stuff the electronics into such a small case, Skagen did away with GPS, LTE connectivity and even a heart-rate monitor. In other words, if you were expecting a workout companion, the Falster isn’t the Android you’re looking for. However, if you’re looking for a bare-bones fashion smartwatch, Skagen ticks all the boxes.
What you do get from the Falster, however, is a low-cost, high-style Android Wear watch with most of the trimmings. I’ve worn this watch off and on for a few weeks now and, although I definitely miss the heart-rate monitor for workouts, the fact that this thing looks and acts like a normal watch 99% of the time makes it quite interesting. If obvious brand recognition, née ostentation, is your goal, the Apple Watch or any of the Samsung Gear line are more your style. This watch, made by a company famous for its Danish understatement, offers the opposite of that.
Skagen offers a few very basic watch faces with the Skagen branding at various points on the dial. I particularly like the list face which includes world time or temperature in various spots around the world, offering you an at-a-glance view of timezones. Like most Android Wear systems you can change the display by pressing and holding on the face.
It lasts about a day on one charge although busy days may run down the battery sooner as notifications flood the screen. The notification system – essentially a little icon that appears over the watch face – sometimes fails and instead shows a baffling grey square. This is the single annoyance I noticed, UI-wise, when it came to the Falster. It works with both Android smartphones and iOS.
What this watch boils down to is an improved fitness tracker and notification system. If you’re wearing, say, a Fitbit, something like the Skagen Falster offers a superior experience in a very chic package. Because the watch is fairly compact (at 42mm I won’t say it’s small, but it would work on a thinner wrist) it takes away a lot of the bulk of other smartwatches and, more important, doesn’t look like a smartwatch. Those of us who don’t want to look like we’re wearing robotic egg sacs on our wrists will enjoy that aspect of Skagen’s effort, even without all the trimmings we expect from a modern smartwatch.
Skagen, like so many other watch manufacturers, decided that if it couldn’t beat the digital revolution it would join it. The result is the Falster and, to a lesser degree, their analog collections. Whether or not traditional watchmakers will survive the 21st century is still up in the air but, as evidenced by this handsome and well-made watch, they’re at least giving it the old Danish try.
The devices are priced at Rs 9,999 ($154), and Rs 4,499 ($69), respectively, and Google confirmed that they are available for purchase online via Flipkart and offline through over 750 retailer stores, including Reliance Digital, Croma and Bajaj Electronics.
The Google smart speakers don’t cater to India’s multitude of local languages at this point, but the U.S. company said that they do understand distinctly Indian voices and “will respond to you with uniquely Indian contexts,” such as answering questions about local sports, cooking or TV shows.
For a limited time, Google is sweetening the deal for early customers, who will get six months of Google Play Music alongside offers for local streaming services Saavn and Gaana when they buy the Home or Home Mini.
Google Home and Home Mini were first announced at Google I/O in 2016. The company said recently that it has sold “tens of millions” of speakers, with more than seven million sales between October 2017 and January 18.
Still, it’s been a long time coming to India, which has allowed others to get into the market first. Amazon, which is pouring considerable resources into its India-based business to battle Flipkart, brought its rival Echo smart devices to India last October.
We’ve trained machine learning systems to identify objects, navigate streets and recognize facial expressions, but as difficult as they may be, they don’t even touch the level of sophistication required to simulate, for example, a dog. Well, this project aims to do just that — in a very limited way, of course. By observing the behavior of A Very Good Girl, this AI learned the rudiments of how to act like a dog.
Why do this? Well, although much work has been done to simulate the sub-tasks of perception like identifying an object and picking it up, little has been done in terms of “understanding visual data to the extent that an agent can take actions and perform tasks in the visual world.” In other words, act not as the eye, but as the thing controlling the eye.
And why dogs? Because they’re intelligent agents of sufficient complexity, “yet their goals and motivations are often unknown a priori.” In other words, dogs are clearly smart, but we have no idea what they’re thinking.
As an initial foray into this line of research, the team wanted to see if by monitoring the dog closely and mapping its movements and actions to the environment it sees, they could create a system that accurately predicted those movements.
In order to do so, they loaded up a Malamute named Kelp M. Redmon with a basic suite of sensors. There’s a GoPro camera on Kelp’s head, six inertial measurement units (on the legs, tail and trunk) to tell where everything is, a microphone and an Arduino that tied the data together.
They recorded many hours of activities — walking in various environments, fetching things, playing at a dog park, eating — syncing the dog’s movements to what it saw. The result is the Dataset of Ego-Centric Actions in a Dog Environment, or DECADE, which they used to train a new AI agent.
This agent, given certain sensory input — say a view of a room or street, or a ball flying past it — was to predict what a dog would do in that situation. Not to any serious level of detail, of course — but even just figuring out how to move its body and to where is a pretty major task.
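In the abstract, the learning task is: given what the camera sees, predict what the body did next. A toy sketch of that framing — nearest-neighbor lookup over made-up feature vectors and action labels, standing in for the team’s actual neural network trained on DECADE:

```python
import math

# Hypothetical training pairs: (visual feature vector, dog's next action).
# Both the features and the labels are invented for illustration.
training = [
    ((1.0, 0.0), "run"),   # e.g. squirrel spotted ahead
    ((0.0, 1.0), "stop"),  # e.g. obstacle directly in front
    ((0.5, 0.5), "turn"),
]

def predict_action(features):
    """Predict the next action as that of the nearest seen example."""
    return min(training, key=lambda pair: math.dist(pair[0], features))[1]

print(predict_action((0.9, 0.1)))  # -> "run"
```

The real model works on raw video frames and predicts continuous joint movements rather than discrete labels, but the input-to-action mapping is the same shape of problem.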
“It learns how to move the joints to walk, learns how to avoid obstacles when walking or running,” explained Hessam Bagherinezhad, one of the researchers, in an email. “It learns to run for the squirrels, follow the owner, track the flying dog toys (when playing fetch). These are some of the basic AI tasks in both computer vision and robotics that we’ve been trying to solve by collecting separate data for each task (e.g. motion planning, walkable surface, object detection, object tracking, person recognition).”
That can produce some rather complex data: For example, the dog model must know, just as the dog itself does, where it can walk when it needs to get from here to there. It can’t walk on trees, or cars, or (depending on the house) couches. So the model learns that as well, and this can be deployed separately as a computer vision model for finding out where a pet (or small legged robot) can get to in a given image.
This was just an initial experiment, the researchers say, with success but limited results. Others may consider bringing in more senses (smell is an obvious one) or seeing how a model produced from one dog (or many) generalizes to other dogs. They conclude: “We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world.”
When Luminar came out of stealth last year with its built-from-scratch lidar system, it seemed to beat established players like Velodyne at their own game — but at great expense and with no capability to build at scale. After the tech proved itself on the road, however, Luminar got to work making its device better, cheaper, and able to be assembled in minutes rather than hours.
“This year for us is all about scale. Last year it took a whole day to build each unit — they were being hand assembled by optics PhDs,” said Luminar’s wunderkind founder Austin Russell. “Now we’ve got a 136,000 square foot manufacturing center and we’re down to 8 minutes a unit.”
Lest you think the company has sacrificed quality for quantity, be it known that the production unit is about 30 percent lighter and more power efficient, can see a bit further (250 meters vs 200), and detect objects with lower reflectivity (think people wearing black clothes in the dark).
The secret — to just about the whole operation, really — is the sensor. Luminar’s lidar systems, like all others, fire out a beam of light and essentially time its return. That means you need a photosensitive surface that can discern just a handful of photons.
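“Timing its return” is the entire ranging principle: the light travels out and back, so the target distance is half the round-trip time multiplied by the speed of light. In code:

```python
C = 299_792_458  # speed of light in m/s

def distance_from_return_time(round_trip_seconds):
    """Lidar ranging: the pulse travels to the target and back, so
    the distance is half the round trip at the speed of light."""
    return C * round_trip_seconds / 2

# A pulse returning after ~1.67 microseconds came from ~250 m away,
# Luminar's claimed maximum range
print(distance_from_return_time(2 * 250 / C))  # ~250.0
```

At 250 meters the whole round trip takes under two microseconds, and only a handful of the emitted photons make it back — which is why the photodetector’s sensitivity matters so much.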
Most photosensors, like those found in digital cameras and in other lidar systems, use a silicon-based photodetector. Silicon is well-understood, cheap, and the fabrication processes are mature.
Luminar, however, decided to start from the ground up with its system, using an alloy called indium gallium arsenide, or InGaAs. An InGaAs-based photodetector works at a different frequency of light (1,550nm rather than ~900) and is far more efficient at capturing it. (Some physics here.)
The more light you’ve got, the better your sensor — that’s usually the rule. And so it is here; Luminar’s InGaAs sensor and a single laser emitter produced images tangibly superior to devices of a similar size and power draw, but with fewer moving parts.
The problem is that indium gallium arsenide is like the Dom Perignon of sensor substrates. It’s expensive as hell and designing for it is a highly specialized field. Luminar only got away with it by minimizing the amount of InGaAs used: only a tiny sliver of it is used where it’s needed, and they engineered around that rather than use the arrays of photodetectors found in many other lidar products. (This restriction goes hand in glove with the “fewer moving parts” and single laser method.)
Last year Luminar was working with a company called Black Forest Engineering to design these chips, and finding their paths inextricably linked (unless someone in the office wanted to volunteer to build InGaAs ASICs), Luminar bought them. The 30 employees at Black Forest, combined with the 200 hired since coming out of stealth, brings the company to 350 total.
By bringing the designers in house and building their own custom versions of not just the photodetector but also the various chips needed to parse and pass on the signals, they brought the cost of the receiver down from tens of thousands of dollars to… three dollars.
“We’ve been able to get rid of these expensive processing chips for timing and stuff,” said Russell. “We build our own ASIC. We only take like a speck of InGaAs and put it onto the chip. And we custom fab the chips.”
“This is something people have assumed there was no way you could ever scale it for production fleets,” he continued. “Well, it turns out it doesn’t actually have to be expensive!”
Sure — all it took was a bunch of geniuses, five years, and a seven-figure budget (and I’d be surprised if the $36M in seed funding was all they had to work with). But let’s not quibble.
It’s all being done with a view to the long road ahead, though. Last year the company demonstrated that its systems not only worked, but worked well, even if there were only a few dozen of them at first. And they could get away with it, since as Russell put it, “What everyone has been building out so far has been essentially an autonomous test fleet. But now everyone is looking into building an actual, solidified hardware platform that can scale to real world deployment.”
Some companies took a leap of faith, like Toyota and a couple other unnamed companies, even though it might have meant temporary setbacks.
“It’s a very high barrier to entry, but also a very high barrier to exit,” Russell pointed out. “Some of our partners, they’ve had to throw out tens of thousands of miles of data and redo a huge portion of their software stack to move over to our sensor. But they knew they had to do it eventually. It’s like ripping off the band-aid.”
We’ll soon see how the industry progresses — with steady improvement but also intense anxiety and scrutiny following the fatal crash of an Uber autonomous car, it’s difficult to speculate on the near future. But Luminar seems to be looking further down the road.
The record-setting score that settled the Donkey Kong arcade rivalry, made famous by the documentary The King of Kong, has been invalidated by Twin Galaxies, the de facto arbiter of arcade world records. What’s more, Billy Mitchell, the occasionally controversial player who set the scores, has been permanently banned from consideration for future records.
It’s a huge upset that calls into question decades of history. Will other similarly disputed scores get the ax? Are any old-school arcade legends safe?
Before anything, it should be noted that although this sounds like kind of a random niche issue, the classic gaming scene is huge and millions follow it closely and take it very seriously. Breaking a high score on a 30-year-old game or shaving a quarter of a second off a record time can and will be celebrated as if the player has won an Olympic medal. One can never underestimate the size or sincerity of online communities. Cheating is, of course, not tolerated.
With that said, it’s worth considering that Billy Mitchell’s case is unique. He is undoubtedly a highly skilled player and has been setting records since the ’80s. But, as anyone who watched The King of Kong will have learned, he’s also a bit shady and his Donkey Kong acumen is far from established.
The issue is simply that despite having provided tapes of games setting records — including being the first to break a million in Donkey Kong — no one has seen him play like that in person.
That may sound like a red flag, but in the speedrunning and record-setting community, a great deal of practice happens alone, in an empty arcade, or otherwise with no credible witnesses (though Twitch has changed that). You could set a world record while in the zone after getting home from work, but it doesn’t count unless it’s reviewed and accredited by a neutral party. Twin Galaxies is the largest organization performing that duty, and they take it very seriously indeed.
You may remember that at the end of The King of Kong, Mitchell reestablishes his supremacy over plucky local kid Steve Wiebe with a “direct capture” tape of a run scoring 1,047,200 points. There are no witnesses to this game. Shortly after this, he also recorded a 1,050,200 score, also unwitnessed. And just a week before being inducted into the International Video Game Hall of Fame in Iowa, he set records in both Donkey Kong (1,062,800) and Donkey Kong Jr.
Now here’s where things get dicey (and nerdy).
Jeremy Young, aka Xelnia, put together the official two-part complaint on Twin Galaxies. In one part, he raised the suspicions some already had regarding the evidence put forward for the last and highest score Mitchell set, at an arcade called Boomers.
As others had already pointed out, not only are the run itself and resulting score not shown in the video, but the referee involved is among the least reliable, and the timeline is unclear, among other things. Most damning, however, is that when Mitchell’s confederate ostentatiously “swaps out” the Donkey Kong board (so it can be verified elsewhere) for a Donkey Kong Jr. one (which Mitchell supposedly later set a record on), both PCBs were in fact the latter.
Twin Galaxies user Robert.F explained the differences in charming internet forum argot:
to a UN-trained train eye Dk and DKjr look the same and in fact they are vary similar, except for a few noticeable differences…the DK pcb has white text on the pcb and the Dk jr has banana yellow text printed on the board ,, the DK pcb is 1/2 digital and 1/2 Analog sound and there is a adjustment pot on the dk pcb for the Analog sound`s, The Dk Jr board is fully digital and has no Analog sound adjustment pot in the exact same position on the dkjr board, and the 3rd noticeable differences and you will see; it if you review the video carefully Dk has the same ROM socket lay out and the same number of sockets as a Dkjr pcb ,, But DKjr has one of them ROM socket empty ,,,,,,
But these circumstantial issues could be explained as a bit of confusion in the moment, a misspoken word in their excitement, and so on. Fortunately, that wasn’t the extent of the evidence.
As you may know, emulators are a type of application made to run old software (like arcade games) as closely as possible to how it ran on the original hardware. MAME is by far the most complex and perhaps the best-known emulator; this amazing app can emulate everything from Donkey Kong to much more recent games with complex 3D graphics. Of course, MAME runs aren’t accepted for world records — you could easily manipulate the software or even the game data itself. Real arcade hardware is required.
But MAME isn’t perfect; there are tiny differences in how it displays graphics — things you wouldn’t notice unless you were watching a game frame by frame looking for them in particular.
Which is exactly what people started doing with Mitchell’s no-witnesses, only-on-video scores.
It turns out that the original Donkey Kong PCBs had a specific method of rendering a scene during graphics transitions called a “sliding door effect,” distinctive in the pattern of how pixels are updated. Careful inspection of Mitchell’s tapes showed not a sliding door, but instead a distinctive artifact of MAME emulation whereby the frame is rendered in chunks according to how the data is loaded from memory.
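The kind of frame-by-frame scrutiny described above can be sketched in code. Below is a toy illustration — not the analysts’ actual tooling — of how one might classify a screen transition by looking at which pixel rows change between consecutive frames: a hardware-style “wipe” reveals a narrow, advancing band each frame, while a chunked, MAME-style update rewrites large regions at once. The frame format, function names, and the threshold heuristic are all assumptions made up for this sketch.

```python
import numpy as np

def changed_rows(prev, curr):
    """Return the indices of pixel rows that differ between two frames."""
    return np.nonzero((prev != curr).any(axis=1))[0]

def classify_transition(frames):
    """Crudely label a transition as a 'wipe' (a narrow band of rows
    updates each frame, as in the sliding-door effect) or a 'chunk'
    update (wide regions appear all at once).  `frames` is a list of
    2-D arrays representing successive video frames."""
    extents = []
    for prev, curr in zip(frames, frames[1:]):
        rows = changed_rows(prev, curr)
        if rows.size:
            # Height of the band of rows that changed in this frame step.
            extents.append(rows.max() - rows.min() + 1)
    if not extents:
        return "static"
    # Heuristic threshold: a wipe touches less than half the screen
    # height per frame; a chunked update rewrites more in one step.
    return "wipe" if max(extents) < frames[0].shape[0] // 2 else "chunk"
```

Running this on a synthetic top-down wipe (two new rows per frame) yields `"wipe"`, while a frame pair where the whole scene pops in at once yields `"chunk"` — the same qualitative distinction the investigators drew from the GIFs.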
You can see the similarity in the GIFs below, provided as evidence by Young.
First is footage of an actual machine taken at 60FPS. Note the diagonal “sliding door” that reveals the scene from the top left downwards:
Next, Mitchell’s 1,050,200 run:
Last, how MAME renders a similar scene:
See how the ladders come in all at once in that pattern, and there’s no sliding door? As you can tell, it’s something of a smoking gun. Certainly Twin Galaxies investigators thought so. In their conclusions, issued today on the forums, they wrote (emphasis theirs):
The taped Donkey Kong score performances of 1,047,200 (the King of Kong “tape”), 1,050,200 (the Mortgage Brokers score) that were historically used by Twin Galaxies to substantiate those scores and place them in the database were not produced by the direct feed output of an original unmodified Donkey Kong Arcade PCB.
They decline to go so far as saying they know it was MAME, but that’s a mere scruple — everyone understands it’s the most likely situation. Regardless, the very fact that Mitchell passed off non-authentic footage as real is more than enough to strike his scores and, as they also announce, ban him from further placement anywhere in the system.
Perhaps more importantly, Steve Wiebe, the underdog challenger in The King of Kong, has been elevated to become the first player to actually hit a million points in the game. Better late than never! Belated congratulations to Wiebe. (Wikipedia has already been updated.)
Mitchell, on the other hand, has remained out of sight during the investigation that has gone on these last few months, and has essentially been ruined for good in the arcade world. Even if he were to set a world record today (and existing record holders doubt he has the skill to do so based on reviewing his play), it would be tainted by years of proven deception. The community won’t forgive him.
And that’s the worry others are voicing: Will the investigators come for other scores that for years have been venerated but have not been verified as strictly as modern records are? Will, for example, any score without an accredited witness or reliable recording be removed from the lists?
In their decision, Twin Galaxies’ authorities write:
Twin Galaxies is dedicated to absolutely rooting out invalid scores from our historic database wherever we find them.
Our methodic approach has allowed many things to surface, not only related to this specific score, but other scores as well as some previously never-before-discussed video game related history.
We must repeat, the truth is the priority. That is the concern. Whatever it takes.
This dispute is closed, and a controversial but nevertheless legendary gaming figure is covered in shame (or should be, if he has any). Who will be next? Regardless of who falls, the community will no doubt continue to thrive; the passion for these old games is undying and, as new generations have shown, is not limited to an aging cohort of Gen-Xers striving to extend a bygone era of glory (though admittedly they are a big part of it).
If this strange saga interested you anywhere near as much as it interested me, go ahead and dive in. You might find you have a new hobby. Just don’t try to fake it. And by the way, the current top score in Donkey Kong is 1,247,700, set just two months ago by Robbie Lakeman. Good luck.