Sooner or later, our children will be raised by robots, so it’s natural that Disney, purveyor of both robots and child-related goods, would want to get ahead of that trend. A trio of studies from its Research division aim at understanding and improving how kids converse with and otherwise interact with robots and other reasonably smart machines.
The three studies were conducted together as a single experiment, with each part documented separately in papers posted today. The kids in the study (about 80 of them) proceeded through a series of short activities built around storytelling and spoken interaction, their progress carefully recorded by the experimenters.
First they were introduced (individually as they took part in the experiment, naturally) to a robot named Piper, which was controlled remotely (“wizarded”) by a puppeteer in another room, but had a set of recorded responses it drew from for different experimental conditions. The idea is that the robot should use what it knows to inform what it says and how it says it, but it’s not clear quite how that should work, especially with kids. As the researchers put it:
As human-robot dialog faces the challenges of long-term interaction, understanding how to use prior conversation to foster a sense of relationship is key because whether robots remember what we’ve said, as well as how and when they expose that memory, will contribute to how we feel about them.
After saying hi, kids participated in a collaborative storytelling activity, which was its own experiment. The researchers describe the reasoning behind this activity as follows:
Despite recent progress, AI remains imperfect in recognizing children’s speech and understanding the semantics of natural language. Imperfect speech recognition and natural language understanding imply that the robot may not respond to children in a semantically coherent manner. With these impeding factors, it remains an open question whether fluid collaborative child-robot storytelling is feasible or is perceived as valuable by children.
An experimenter, essentially sitting in for a theoretical collaborative AI, added characters to a story the two were improvising — in some cases according to the context of the story (“They found a kitten in the cave”), and in some cases randomly (“Add a kitten to the story”). The goal was to see which engaged kids more, and when each one was more feasible for an app or device to use.
Younger kids and boys stumbled when given contextual additions, presumably because they required some thought to understand and integrate — so it’s possible to be too responsive when interacting with them.
On the way out from the story activity, kids would stop by Piper again, who asked them about their story in one of three ways: a generic way, a way that acknowledged a character in the story, or a way that additionally added some feeling (e.g. “I hope the kitten got out of the cave okay”). Another activity followed (a collaborative game with a robot), after which a similar interaction took place with similarly varying responses.
Next came the third experiment, which is best summarized as “what would happen if Dora the Explorer could hear you answer her questions?”
As children begin to watch more television programming on systems that allow for interaction, such as tablets and videogame systems, there are different opportunities to engage them… We performed three studies to examine the effects of accurate program response times, repeating unanswered questions, and providing feedback on the children’s likelihood of response.
Instead of just waiting a couple of seconds during which a kid may or may not say anything, the show would wait (up to 10 seconds) for a response and then continue, or prompt them to answer again. Waiting and prompting definitely increased response rates, but there wasn’t much of an effect when feedback was included, for example pointing out a wrong answer.
After doing this activity, kids popped by Piper again to have another chat, then rated the robots on friendliness, smarts and so on.
What the researchers found with Piper was that older kids preferred, and were more responsive to, the more human-like responses from the robot that remembered previous interactions or choices — suggesting this basic social function is important in building rapport.
All this is important not actually for letting robots raise our kids as I jested above, but for making all human-computer interactions more natural — without overdoing it or making it creepy. No one wants their Alexa or Google Home to say “would you like to listen to the same playlist you did last week when you were feeling depressed and cooking a pizza while alone in the house?” But it could!
The papers also suggest that this kind of work is highly applicable in situations like speech therapy, where kids often engage in games like this to improve their understanding or diction. And it’s not hard to imagine the broader applications. A warmer, fuzzier, context-aware collaborative AI could have many benefits, and these early experiments are only the start of making that happen.
I’ve been following the multiple “easy” brewing systems available online for a while now and have never found one that truly spoke to me. Devices like the Minibrew seem great but price and shipping problems have always kept them out of my boozy little hands. So, rather than wait for the perfect automatic system, I decided to look at the Catalyst Fermentation System, a $199 carboy kit that promises to make brewing as easy as boiling some oats and hops and managing a trub.
Fermentation is a fairly simple process. At its core you create a “tea” or juice using sugar-rich ingredients and introduce yeast. The yeast eats the sugar, produces carbon dioxide and alcohol, and dies. In wine you try to drive out the CO2 and clarify the product as much as possible and with beer and other sparkling beverages you want to maintain the CO2 through the careful addition of extra sugar or gas in a keg. In my test case I ran a batch of Stone Pale Ale. The kit comes complete with grain, a cheesecloth bag, and three different hops to drop in at various times in order to get the right flavor profile. It also includes a sack of dry malt – the aforementioned sugar – that you mix together and boil in a big pot (not included) and then quickly cool before you pour it into the Catalyst.
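As a back-of-the-envelope illustration of what the yeast is doing, homebrewers usually estimate alcohol content from hydrometer readings taken before and after fermentation. The formula below is the common approximation, and the sample gravities are hypothetical round numbers, not measurements from this batch.

```python
def estimate_abv(original_gravity, final_gravity):
    # Common homebrewing approximation: the drop in specific gravity
    # reflects sugar the yeast converted into alcohol and CO2.
    return (original_gravity - final_gravity) * 131.25

# Hypothetical hydrometer readings for a pale ale (not from this batch):
og = 1.052  # wort gravity before pitching the yeast
fg = 1.012  # gravity after fermentation finishes
print(f"Estimated ABV: {estimate_abv(og, fg):.2f}%")  # Estimated ABV: 5.25%
```

The bigger the gap between the two readings, the more sugar the yeast ate, and the stronger the beer.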
Beer-making is easy in theory but difficult in practice. If your ingredients are poor or your sterilization is incomplete you can infect and ruin the batch. In fact, many brewers won’t use a system like the Catalyst because it is made of plastic rather than glass. Plastic scratches easily, they reckon, and a scratched vessel can harbor dormant bacteria and wild yeast that infect a batch.
I personally have never gone wrong with plastic. I’ve found that as long as you sanitize the entire system you can use almost anything in your brewing process. That said, I didn’t notice any scratching in my vessel after preparing a single batch of beer.
The Catalyst is very easy to use. After making and cooling the wort I transferred the liquid to the container, closed the airtight lid, added a bubbler, and attached the trub jar, which screws in under a 3-inch valve. The jar is designed to catch all of the sediment in the beer – the trub – including the spent yeast and water-logged hops. Because the jar can be closed off from the actual beer, you’re able to “rack” the beer – remove the sediment – simply by closing the valve and getting rid of the stuff in the jar. The company also suggests that, after initial racking, you can attach a smaller jar and grab some of the yeast for next time, thereby ensuring consistency between batches.
Unfortunately I didn’t get any pictures of my batch in action but essentially the vessel is very similar to the V-Vessel, a winemaking system that has a similar racking solution. The Catalyst, however, is designed primarily for beer so the trub is much larger and can hold more sediment. Once you clear the fermenter a few times by removing first the wet hops and then the yeast, you can add a funnel-like bottling attachment that lets you squirt the beer into bottles or kegs using an included hose.
I found little to dislike about the system except for the trub valve. I screwed in a jar as required and let the beer sit for a few days. However, when I came back I noticed the space around the lip of the jar was leaking a little, leaving a malty little puddle in my basement. I was able to stop it by screwing the jar in more tightly, but then that made it harder to remove the jar for racking. It wasn’t a major problem but it was a definite annoyance. I let the beer settle for about four weeks and then kegged the pale ale in a small keg. My friend connected the keg to his home dispensing system. The result? A solid, tasty beer without many off-tastes or issues.
I would love an automatic brewing system. As it stands, however, a kit like the Catalyst is the next best thing. It doesn’t take much skill or effort to make a batch of acceptable beer and, because the system is fairly self-contained, it forgives many of the sins committed by beginning brewers. I would argue that an inexpensive system like this is far better than some of the automatic systems out there – I’m particularly enamored of the Grandfather – simply because you learn more about the brewing process and you learn early on the difference between a successful brew and a bad one. However, as technology improves, I could see setting up an automated brew kit in the kitchen and getting fresh, tasty beer at a moment’s notice. The technology and price aren’t quite there yet, however, so until then something like the Catalyst is an excellent and inexpensive tool.
Mirrorless cameras. On paper, some of them seem perfect. They’re quick and powerful with lightweight bodies, but their main drawback has been equally light lens catalogs.
Having a variety of prime, telephoto, macro, sports lenses and the like, all with distinctive shooting characteristics, is what gives a lens catalog its charm.
However, Sony just took Nikon’s spot as number two in the full-frame camera industry — Canon holds pole position. Currently, only 24 full-frame E-mount lenses exist compared to the dozens that Canon has in each category. Still, it’s inching forward.
The Sony Alpha A9 is Sony’s first new mirrorless camera since taking that new perch. Is it really a show of the format’s maturation, worth paying the premium and jumping on board for? I think so.
If you want to know what makes the A9 different from other mirrorless cameras, and maybe even more importantly from the other, cheaper Sony Alpha cameras, then this is your section.
Five things really stand out about the A9: its blackout-free viewfinder, 20fps shutter, 693 autofocus points, port versatility and 5-axis stabilization for 4K video. The A9 is an absolute beast, in both raw tech (full-frame 24.2MP sensor) and styling (the body is composed of a magnesium alloy).
The 20fps burst rate and lack of blackout in the electronic viewfinder are the ultimate iteration of WYSIWYG (what you see is what you get). Not having blackout means your view of the scene is never interrupted by the shutter in the continuous shooting modes.
Instead, it has a quad-VGA OLED viewfinder that uses a shutter indicator at each corner of the screen, sort of like a screenshot or a vignette. Or as (only) I like to put it: looking through the A9 is like looking through the scope of a rifle.
As for the shutter speed: it’s fast, that’s it. Just be sure to have UHS-II SD cards that can keep up with writing that many images, both RAWs and JPEGs, without the need to stop for processing.
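To put that write load in perspective, here is a rough sketch of the math. The per-frame file sizes, buffer depth and card speed below are assumed round numbers for illustration, not official Sony specs.

```python
# Back-of-envelope math for a 20fps burst; all figures are assumptions.
fps = 20
raw_mb = 25.0    # assumed compressed RAW size per frame, in MB
jpeg_mb = 8.0    # assumed fine JPEG size per frame, in MB

# Data generated every second while the shutter is held down.
burst_mb_per_second = fps * (raw_mb + jpeg_mb)

# How long a fast card takes to drain a hypothetical full RAW buffer.
buffer_frames = 200          # assumed buffer depth, illustrative only
card_write_mb_s = 250.0      # optimistic UHS-II sustained write speed
drain_seconds = (buffer_frames * raw_mb) / card_write_mb_s

print(f"Burst generates ~{burst_mb_per_second:.0f} MB/s")      # ~660 MB/s
print(f"Clearing a full buffer takes ~{drain_seconds:.0f} s")  # ~20 s
```

Even a fast card can’t swallow a burst in real time; the camera’s internal buffer absorbs it and the card drains it afterward, which is why a slow card leaves you waiting.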
Hundreds of autofocus points mean you can rely on the camera’s computer for more precise focus on a subject in the frame. It also means you can track subjects (think action) or select a group of points, rather than just one, to keep a portion of the frame always in focus.
While overall usefulness varies on the type of shooter you are, you’ll always be paying respect to those 693 focus points.
Video on the A9 isn’t an afterthought, it’s a feature: you can shoot at 100Mbps for 4K 30p (25p)/24p recording, up to 50Mbps for full-HD 60p (50p)/30p (25p)/24p recording, or just go slow-motion at 120fps, at full HD. Because video is almost completely stabilized on the A9, you’d be comfortable shooting at crazy settings.
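For a sense of scale, those bitrates translate into storage roughly as follows. This is simple bits-to-bytes arithmetic, ignoring audio and container overhead.

```python
def mb_per_minute(mbps):
    # Megabits per second -> megabytes per minute (8 bits per byte).
    return mbps / 8 * 60

for label, rate in [("4K at 100 Mbps", 100), ("Full HD at 50 Mbps", 50)]:
    print(f"{label}: ~{mb_per_minute(rate):.0f} MB per minute")
# 4K at 100 Mbps: ~750 MB per minute
# Full HD at 50 Mbps: ~375 MB per minute
```

In other words, a 64GB card holds well under an hour and a half of 4K at the top bitrate, so plan card capacity accordingly.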
The awesome video team here at TechCrunch can’t even make immediate use of a feature like 4K, because web video compression is the bane of its existence. But even if you are shooting a short film and I am not, you can always super-sample 1080p videos by shooting them in 4K — just a thought.
The port selection on the A9 includes an Ethernet port. While it sounds crazy at first, it’s there for organizations that have to move hundreds of images over a network at high speed. Things look more normal with the audio-out/mic-in ports, micro-HDMI and micro-USB, as well as the dedicated port for RAW recordings.
I think the most gratifying point of the Sony A9 is this: it’s hard for it to take a bad photo. Okay, it’s possible if you’re the person who buys a camera and leaves it on auto mode — shame on you — but that aside, the sensor seems to make the best of every scenario.
Shots are strong at night, during the day, for portraits, urban settings, action scenes and even video. Colors are vibrant, details are sharp, shadows have depth, RAWs have lots of 14-bit flexibility in Lightroom, while the built-in filters can completely flip those characteristics. You get the idea.
All of this great performance comes wrapped in a control interface that makes you feel personally involved; there’s even a level indicator in the electronic viewfinder. No more slanted photos.
Finally, the 2.95-inch (3.0-type) wide TFT touchscreen: it’s a very crisp and vibrant 3,686,400-dot resolution. Respect.
Below are some basic scenes shot with the A9, so you can get a better idea of the quality and aesthetic. All shots were resized down for web viewing.
Very personal opinion here, but I feel the requirement of safety switches on the shooting mode, shutter mode and focus mode buttons makes it way too complex to switch up basic settings on the fly.
To make the situation weirder, because of their positions on the camera body, there’s no way to do it all single-handedly. Or maybe I shouldn’t complain and should start using custom setting profiles.
Battery life is improved over the A7 and A7 II, but the A9 won’t last you longer than about 400 stills, and the battery evaporates when shooting 4K video. Buy a few (authentic) spare batteries; you’ll need them.
For nearly $6,700 you can be at the edge of the camera industry. If the A7 and A7 II series of cameras before it are any indication, the A9 is a camera that will age well. Photographers and videographers don’t have many full-frame cameras that meet both of their needs in the middle, but the A9 is probably one of them.
If you really, really don’t think Sony’s lenses are gonna cut it, you can always buy an adapter to use Canon’s lenses, but you’ll see your continuous shooting drop down to 10fps. So, despite all the A9’s good graces, the crux of mirrorless cameras, a very wide lens portfolio, is still on its way.
However, if you’re feeling brave and have at least $4,500 to spare on a new camera system, then I can only imagine you’ll more than make up that number with fantastic photos and videos to match. It’s just what the A9 does.
If you’re anything like me, you spend a significant portion of the day wondering about the paths viruses take when they’re cruising around your internals. Luckily for us, a newly developed microscope from Duke researchers can show the exact path taken by the little critters (?), down to the micrometer.
The system, designed by a team led by assistant professor Kevin Welsher, isn’t like a traditional microscope. Instead of magnifying an image using natural or augmented light, it scans a laser through a small volume repeatedly and from multiple angles. This illuminates special fluorescent particles, the positions of which can be tracked over time.
Attach one of those particles to something else and you can track what it’s doing. It’s kind of like a mocap studio for microbiology. But until recently, those particles were too big to attach to viruses — imagine trying to do your Gollum impression with basketballs taped all over your body. Welsher’s team recently improved the power of the system enough that it can detect much smaller dots — and even fluorescent proteins built right into the virus’s system. The result, as you see up top, is quite a detailed little track!
I’m reminded of the old Family Circus cartoons, with Billy or whoever going all over the neighborhood, petting dogs, tracking mud on the neighbor’s porch and so on. Except Billy is a lentivirus, and the neighborhood is the soupy exterior of a cell membrane.
It’s not all just for kicks, of course: The goal is to be able to watch as a virus makes contact with a cell and does whatever it does to penetrate and infect it. That moment, so critical to understanding viral behavior, is poorly understood because it’s been nearly impossible to observe directly.
“What we are trying to investigate is the very first contacts of the virus with the cell surface — how it calls receptors, and how it sheds its envelope,” said Welsher in a Duke news release. “We want to watch that process in real time, and to do that, we need to be able to lock on to the virus right from the first moment.”
With this system, we’re a step closer to understanding one of the most sophisticated biological machines ever created. The team’s work is published this week in the journal of the Optical Society.
Researchers at the University of Calgary have released the latest version of their “Wearable Microsystem for Minimally Invasive, Pseudo-Continuous Blood Glucose Monitoring,” a watch-like wearable that “bites” you every few hours to draw blood and test your glucose levels.
The system uses a shape memory alloy actuator which contracts when heated and then snaps back into its original form. “When equipped with a small needle, the SMA-based actuators produced much greater penetration force into the skin than the bioelectric actuators and allowed the team to significantly miniaturize the device,” writes IEEE Spectrum.
“The idea is to have periodic, spontaneous and autonomous biting resulting in reliable blood testing,” said researcher Martin Mintchev. “It’s a very significant step in demonstrating autonomous contact with the capillary.”
The system can be used for diabetes management as well as regular genetic testing or any sort of blood analysis that needs to be done regularly. From the paper:
Unlike prevalent solutions which estimate blood glucose levels from interstitial fluids or tears, our design extracts a whole blood sample from a small lanced skin wound using a novel shape memory alloy (SMA)-based microactuator and directly measures the blood glucose level from the sample. In vitro characterization determined that the SMA microactuator produced penetration force of 225 gf, penetration depth of 3.55 mm, and consumed approximately 5.56 mW·h for triggering. The microactuation mechanism was also evaluated by extracting blood samples from the wrist of four human volunteers. A total of 19 out of 23 actuations successfully reached capillary vessels below the wrists producing blood droplets on the surface of the skin. The integrated potentiostat-based glucose sensing circuit of our e-Mosquito device also showed a good linear correlation (R2 = 0.9733) with measurements using standard blood glucose monitoring technology. These proof-of-concept studies demonstrate the feasibility of the e-Mosquito microsystem for autonomous intermittent blood glucose monitoring.
The e-Mosquito isn’t quite ready for prime time but it’s a fascinating move forward for folks who have to check their blood sugar regularly. A quick pinch by a cool watch might just be better than the current prick methods used by diabetics.
Dust off your Google Glasses, those who still have them — the $1,500 face computer is back in the spotlight today with a few updates.
Today, in its first update since September 2014, Google Glass got a “MyGlass” companion app update, some bug fixes and Bluetooth support. That means users on the new “XE23” version can now hook up mice, keyboards and other Bluetooth-enabled devices to their Glass.
The app update rolled out yesterday and, in an even bigger surprise, the firmware update for Glass came out today.
So, Glass is alive? Well, yes, but it never really died. Despite seeming to go the way of the Dodo (you can’t buy it anymore and Google shut down the website in 2015), it never really left us, it just “graduated” from Google X after failing to capture consumer attention. Google then quietly moved it into the enterprise. But, apparently, someone at Google is still working on the dork-inducing consumer version.
We don’t know why Google chose to release these two updates. It’s odd for an update to pop up after nearly three years — especially one without too much of a difference from the old version. But it shows Google has not completely forgotten about its optical-mounted wearable.
Swatch, the fashion watch for the masses, has created Swatch X You, a clever online watch “factory” that lets you pick a face, band, and extra doodads to truly customize your $65 to $85 watch. The service, available now, offers watches in two sizes: 34mm and 41mm.
I’ve asked the Swatch group for comment – I basically want to know who is buying Swatches these days besides bored people in airports – but until I hear back we can take a closer look at the project.
Basically you can choose from among five very basic watch bodies in various colors. You can mix and match the straps and various accessories including little jewels that attach to the band and even modify the strap holders to reflect special occasions or personal preferences. The prices start at about $65 and can go up past $100 with the right accessories.
Swatch partnered with Emersya to create the experience which then lets you rotate the watch in three dimensions to see the majesty of your creation. Swatch then builds and ships the watch.
This sort of service is unique for Swatch. Long dedicated to “designer” pieces that can (and do) rise in value, the company’s choice to go completely custom is long overdue and very important. Sites like Blancier have long offered custom watches at fairly acceptable prices, and that Swatch is just now getting into this market signals a need to expand beyond the traditional Swatch customer to the occasion buyer – parents buying custom watches for new grads or birthday kids – and to those who think of watches as solely fashion accessories. Either way it’s a clever and important move by a company with a lot to lose.
A new system by University of California, Santa Barbara researchers Yasamin Mostofi and Chitra R. Karanam uses two drones, a massive Wi-Fi antenna, and a little interpolation to literally see through solid walls.
The system is twofold: one drone blasts Wi-Fi through the structure and another picks up the signal. Then, working in tandem, the two drones fly around the solid structure until they map the differences in wave strength at different points. Using this information the researchers have been able to create a 3D model of a closed building.
In the video below you can see the drones flying around a brick structure. They cannot see inside. As the waves penetrate the brick, they are altered by the structures behind the wall. After a few passes the drones start mapping the entire structure in high resolution.
“Our proposed approach has enabled unmanned aerial vehicles to image details through walls in 3D with only WiFi signals,” said Mostofi. “This approach utilizes only Wi-Fi RSSI measurements, does not require any prior measurements in the area of interest and does not need objects to move to be imaged.”
The team was first able to create 2D models of objects using this technique but quickly graduated to 3D models. The system uses off-the-shelf devices including a simple Wi-Fi router and a Google Tango tablet. It also uses a Raspberry Pi and Wi-Fi card for the receiver. Drones talk to each other and act autonomously.
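The mapping idea can be sketched as a toy version of transmission tomography: each measurement reports the total attenuation a signal suffered along the line between transmitter and receiver, and back-projecting many such lines onto a grid makes the cells occupied by solid material stand out. The sketch below is a simplified 2D illustration of that principle, not the UCSB team’s actual algorithm; the grid size, ray count and attenuation model are all made up for demonstration.

```python
import random

# Toy 2D "through-wall" sketch: a hidden grid attenuates signals, and we
# back-project the measured loss along many transmitter -> receiver rays.
N = 20
truth = [[0.0] * N for _ in range(N)]
for x in range(8, 12):              # hidden solid block the "drones" can't see
    for y in range(8, 12):
        truth[x][y] = 1.0

def ray_cells(x0, y0, x1, y1, steps=200):
    """Grid cells sampled along the segment (x0, y0) -> (x1, y1)."""
    cells = []
    for i in range(steps):
        t = i / (steps - 1)
        x = min(N - 1, max(0, int(x0 + t * (x1 - x0))))
        y = min(N - 1, max(0, int(y0 + t * (y1 - y0))))
        cells.append((x, y))
    return cells

rng = random.Random(0)
acc = [[0.0] * N for _ in range(N)]   # accumulated back-projected loss
hits = [[0] * N for _ in range(N)]    # how many rays crossed each cell

for _ in range(500):
    # Transmitter on the left edge, receiver on the right edge.
    cells = ray_cells(0, rng.uniform(0, N - 1), N - 1, rng.uniform(0, N - 1))
    loss = sum(truth[x][y] for x, y in cells)   # simulated attenuation
    for x, y in cells:
        acc[x][y] += loss
        hits[x][y] += 1

recon = [[acc[x][y] / max(hits[x][y], 1) for y in range(N)] for x in range(N)]
# Cells inside the hidden block accumulate far more average loss than
# empty cells, so the block "appears" in the reconstruction.
```

Each individual ray only says "something solid is somewhere on this line"; it is the overlap of hundreds of rays from different positions that localizes the obstruction, which is why the drones fly so many passes.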
While you’re not going to get a Predator-like view of living things through walls – yet – this project does have a lot of potential for indoor mapping and emergency situations when you need to know what’s inside a building without breaching the door. The researchers expect some interesting archaeological applications as well.
Elon Musk says that he’s had “promising conversations” with L.A. Mayor Eric Garcetti, regarding the potential of a network of tunnels underneath the city that would allow for a high-speed transit network unburdened by surface traffic. That’s the vision Musk’s recently founded Boring Company hopes to make a reality, as illustrated by a concept video debuted by Musk at this year’s TED conference in Vancouver.
Garcetti name-checked Musk during an interview on ABC 7, the network’s L.A. affiliate. The Mayor suggested that improvements in tunnel-digging tech, including those being developed by Musk, might make it possible to create an express line to LAX airport from L.A.’s Union Station central ground transit hub.
Musk noted in his tweet that the permits required from cities and regulatory bodies are likely the most difficult part of making a network of interconnected underground tunnels a reality. The technology, he said, would likely be easier to achieve than permits, hence why discussions with regulators even at this very early stage are so important.
In May, Musk posted a short clip to Instagram of the first section of the inaugural tunnel being dug by The Boring Company, which is designed to span LAX to Culver City, Santa Monica, Westwood and Sherman Oaks. Eventually, he hopes to create a network with tunnels that cover all of LA. These will be used by surface vehicles, including individual passenger cars, which will be transported below and moved around the tunnel network at high-speed using sleds on rails, provided the final version resembles the concept video created by The Boring Co. to illustrate its designs.
Sphero continues its partnership with Disney today, with the launch of a new toy based on the Marvel superhero Spider-Man. But where BB-8 and Lightning McQueen could move around the room, Spider-Man is more stationary — his real power involves holding conversations.
The simplest thing this Spider-Man can do is tell jokes — he seems to have an infinite supply of eye-rollers. If you just ask him to chat, he’ll start a conversation about random topics like school or dating. And as Sphero co-founder and Chief Software Architect Adam Wilson put it, he’s also “a storyteller,” describing his adventures to kids and asking them to participate in key moments.
You can see a few of my interactions with Spider-Man in the video above. Users are encouraged to try out different prompts and discover new modes of interaction — though there were plenty of times where Spider-Man would answer a different question from the one I asked, or he would just sit there silently.
The toy includes expressive LCD eyes, a microphone, a speaker and an accelerometer — so he’ll offer enthusiastic commentary if you pick him up and pretend to fight with him. There’s even an infrared sensor, allowing Spider-Man to go into “guard mode,” warning off any intruders who enter his owner’s bedroom.
Aside from using third-party speech recognition technology, Wilson said Spider-Man’s conversational engine was built “from scratch” — in essence, he’s “a full Android device” inside a superhero-shaped toy. (While your main interactions will be through voice, you’ll also need either an iOS or an Android app to control him.)
Wilson also emphasized the importance of privacy and security. He said Spider-Man is only listening when the spider on his chest lights up, and the user’s voice is never stored or shared. (The security measures are certified by AppliedTrust.)
It’s worth noting that while Spider-Man’s launch is timed to just a few weeks before the release of Spider-Man: Homecoming on July 7, he isn’t supposed to represent the movie version of the character, and he’s not voiced by Homecoming actor Tom Holland. (The fact that Homecoming will be distributed by Sony Pictures, not Disney, may have something to do with the toy’s lack of movie ties.) Still, Wilson said this Spider-Man comes with more than “100 comic books worth of content” and will also offer “tons of Easter Eggs.”
As for price, the toy costs $149.99.