Scientists Want Your Slips of the Tongue


You know that feeling when you’re halfway through a sentence and can’t think of the next word you need? It’s a word you know, but you can’t quite bring it to mind. There’s a name for that phenomenon…what is it, again?

Oh right, the “tip of the tongue.”

Everyday failures in our speech, like forgetting a word or saying the wrong one, are great fodder for scientists who want to understand language. But they’re hard to study in the lab, because you can’t force someone to make a mistake. Most of the time, we speak just fine.

So University of Kansas psychologist Michael Vitevitch has created an online tool for anyone, anytime, to record their speech errors. It’s like an ongoing goof diary for the public. And he hopes that if enough people use it, the data collected will be useful to the researchers who want to learn more about our minds.

“Most things break at their weak points,” Vitevitch says, “and the systems involved in language processing are no different.” The errors we make in speaking reveal where the weakest links are in the process of turning thoughts into sounds. For example, one of those weak points is getting from the meaning of a word to the word itself—when we can’t make the leap, we have a tip-of-the-tongue problem.

This is the most fascinating type of error to Vitevitch. “You know a word and have used it in the past, but now that you need it, it stays just out of reach,” he says. These errors “are very telling about how faulty and transient our memory can be.” But they aren’t the only mistakes his new online tool will track.

The tool is called SpEDi, for “Speech Error Diary.” It collects three categories of mistakes: words that are misspoken, words that are misheard, and words on the tip of your tongue. Vitevitch describes how SpEDi works in Frontiers in Psychology.

New visitors to the SpEDi website will be prompted to register. They’ll create a profile that includes details like their education level and what languages they speak. (Errors by multilingual people are especially interesting, Vitevitch says. Rather than making the leap from one word-idea to one word-form, they have to choose between multiple forms.) Then, anytime users make one of these errors—or hear someone else screw up—they can record it.

They’ll be prompted to describe the error in detail. There’s more than one way to misspeak, of course. There’s the malapropism, where you use a word that sounds similar to the one you want but has a totally different meaning (as in the recent headline about the amphibious pitcher). There’s swapping sounds in adjacent words (if you swap the first letters, it’s a spoonerism, named for a certain Reverend Spooner who allegedly made a lot of them). There’s blending two words into one.

If you mishear the lyrics of a song, it’s a mondegreen. There are whole websites devoted to funny examples of these, but SpEDi is also interested in the non-musical (and unfunny) misheard words. And if you have a tip-of-the-tongue problem, the website will ask for the details of the missing word—even if you haven’t found it yet.

The website also gently asks if you’re sure this word exists, prompting on a seven-point scale: “How certain are you that you will be able to recall this word?” Finally, a space for additional notes lets you record “that the error occurred in a noisy environment,” Vitevitch notes in his paper, or “that an alcoholic beverage had been consumed shortly before the error was made.”
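As a rough illustration only (this is not SpEDi’s actual schema; every field name here is invented), a single diary entry with the kinds of details the article describes might be modeled like this:

```python
from dataclasses import dataclass
from typing import Optional

# The three error categories the article says SpEDi collects.
CATEGORIES = {"misspoken", "misheard", "tip-of-the-tongue"}

@dataclass
class SpeechErrorEntry:
    """One diary entry, loosely mirroring the fields described in the article."""
    category: str                  # one of the three error types
    description: str               # what was said or heard, or details of the missing word
    certainty: int = 4             # 1-7 scale: how sure you are you'll recall the word
    notes: Optional[str] = None    # context, e.g. "noisy environment" or "after a drink"

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not 1 <= self.certainty <= 7:
            raise ValueError("certainty must be on the 1-7 scale")

entry = SpeechErrorEntry(
    category="tip-of-the-tongue",
    description="couldn't retrieve the word 'mondegreen' in conversation",
    certainty=6,
    notes="noisy environment",
)
```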

Vitevitch is spreading the word about SpEDi on social media and to other language researchers. Anyone who registers for the site can download all the raw data it’s gathered so far, and use that data for their own research if they want. By opening up the diary to everyone, and leaving it open indefinitely, Vitevitch hopes to build a research tool that’s truly useful.

“I hope people will see that they don’t need to have a PhD to be involved in and contribute to science,” Vitevitch says. For this particular experiment, they only need to be people who have made a mistake. And we all make mistakes.

“Back in college I called my current girlfriend the name of my previous girlfriend,” Vitevitch recalls. “You only make that error once.”


Beyond Tupac: Can Hologram Concerts Take Off?


There’s little doubt that if Jimi Hendrix, Janis Joplin or Jim Morrison headlined a concert today, it would be the hottest ticket in town.

It could happen tomorrow.

Entertainment companies are spending big bucks to fit venues with holographic technology capable of resurrecting beloved musicians, comedians and even Jesus Christ. For all the futuristic glitz holograms exude, today’s notable holographic performances are still based on a 19th-century parlor trick. However, researchers around the world are working to bring holographic technology into the 21st century.

Pepper’s Ghost

John Henry Pepper was a British scientist and inventor who’s best known for making a “ghost” appear on stage during an 1862 demonstration at the Polytechnic Institute in London. Pepper fitted an angled pane of glass on stage to catch the light from a brightly lit actor hiding beneath the stage. The actor’s image was reflected off the angled glass and toward the audience. As a result, it looked as if a ghost was floating on stage.

It was fitting, then, that 150 years after Pepper’s demonstration, Tupac Shakur appeared on stage at Coachella in much the same way. A Mylar film was placed on the big stage at a 45-degree angle; a high-definition video feed was projected onto a reflective screen, and finally bounced off the Mylar film to create the illusion. Tupac’s head was digitally recreated, then placed atop a body double. Pepper’s Ghost had returned in a big way.
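The optics behind both versions of the trick reduce to the law of reflection: mirroring the hidden actor’s position across the 45-degree pane places an upright virtual image on stage. A minimal sketch, using made-up stage coordinates:

```python
import math

def reflect(point, normal):
    """Mirror a 2-D point across a line through the origin with the given normal."""
    nx, ny = normal
    mag = math.hypot(nx, ny)
    nx, ny = nx / mag, ny / mag            # normalize the mirror's normal vector
    px, py = point
    d = px * nx + py * ny                  # signed distance from point to mirror line
    return (px - 2 * d * nx, py - 2 * d * ny)

# x = horizontal distance upstage, y = height above the stage floor.
# A pane tilted 45 degrees along the line y = -x has normal (1, 1).
actor = (0.0, -1.0)                        # actor hidden 1 m below stage level
ghost = reflect(actor, (1.0, 1.0))
print(ghost)                               # ≈ (1.0, 0.0): an upright image, 1 m upstage
```

The point a meter below the stage maps to a point a meter behind the glass at stage level, which is why the “ghost” appears to stand among the performers.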

You can see variations of Pepper’s Ghost everywhere: teleprompter screens, Disney’s Haunted Mansion ride and Jimmy Kimmel Live. In fact, the setup is simple enough that you can easily make ghosts appear in your living room with a little effort.

High-Tech Holograms

Holograms are getting a 21st-century makeover, with labs around the world dedicated to advancing holographic technologies in myriad ways. Take, for instance, the Digital Nature Group in Japan. The DNG team combined femtosecond lasers, mirrors and cameras to produce holograms that you can actually touch. A femtosecond is a quadrillionth of a second, and the team’s laser transmits bursts that last 30 to 270 femtoseconds. The image that results is actually light emitted by the plasma created when the laser ionizes the air.
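A quick back-of-the-envelope calculation conveys just how brief those bursts are: even moving at light speed, a 30-femtosecond pulse spans only about nine micrometers, which hints at why the resulting plasma images are so small.

```python
C = 299_792_458        # speed of light in vacuum, m/s
FEMTO = 1e-15          # one femtosecond: a quadrillionth of a second

for pulse_fs in (30, 270):
    length_m = C * pulse_fs * FEMTO        # distance light travels during one burst
    print(f"{pulse_fs} fs pulse spans {length_m * 1e6:.1f} micrometers")
```

For comparison, a human hair is roughly 70 micrometers across.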

The result is a holographic image that feels a bit like sandpaper or a static shock. So far, though, the images produced are incredibly small. The DNG team is working on producing larger images with lasers, but the proof-of-concept study means all sorts of science-fiction computer displays may someday be possible. Think back to the movie “Minority Report,” and you can get a sense of the type of holographic, tangible display that might one day exist.

Commercial Tech

In the meantime, profit-seeking companies are buying up licenses to resurrect our favorite celebrities to put on timeless performances. Alkiviades David, founder of Hologram USA, bought the patent for the technology that created the Tupac hologram. He plans to put on shows featuring Ray Charles, Richard Pryor and Liberace, to name a few. Even living artists are embracing holograms to perform multiple live shows simultaneously, like Mariah Carey did in Europe a few years ago.

The trick now is to digitally reproduce dead icons in a way that is indistinguishable from the real person, as Vulture reports:

It’s entirely possible, even probable, that, at some point, David’s technology will be fully able to create and project a celebrity digital likeness that’s indistinguishable from the real thing, one that moves fluidly and organically and delivers unerringly consistent performances.

For now, holographic performances are still a 150-year-old illusion that’s yet to hit the big-time.

Electronics-Sniffing Dogs Help Solve Cybercrimes


A dog’s nose knows best, even in the digital age.

By now you’ve probably heard about the downfall of former Subway spokesman Jared Fogle, who has said he will plead guilty to having paid minors for sex and having obtained child pornography. But an interesting detail of the case is that justice was served thanks to dogs trained to sniff out electronics. From iPads to tiny memory cards, these dogs with a rare talent are finding themselves in high demand in an era rife with cybercrime.

Follow Your Nose

There are just three dogs in the United States trained to find electronic components with their noses. Bear, an electronics-sniffing black lab, helped officers locate 16 smartphones, 10 flash drives and six laptops during an 11-hour search last month of Fogle’s home. How are dogs able to find what to the rest of us has no smell whatsoever?

For some insight, we can look to Jack Hubball, a chemist who discovers the chemical compounds that dogs are eventually trained to find. He identified the so-called accelerants (gasoline, diesel, kerosene, etc.) dogs should focus on to sniff out arson, and helped train dogs to find narcotics and bombs.

To fight computer crimes, Hubball tested circuit boards, flash drives and other electronics components to isolate a single common chemical in each device, which police are keeping under wraps. Once the chemical was isolated, it was a matter of homing dogs’ sniffers in on the telltale compound, as Bloomberg reported:

After months of training, the dogs were able to detect the odor of the chemical in people’s hands, concrete blocks, metal boxes and clothing. The dogs also had to ignore distracting smells such as food and coffee.

The dogs have since been involved in numerous child pornography warrants, as well as other investigations where electronic documents were key evidence. After helping with the Fogle investigation, Bear’s trainer says he’s received some 30 inquiries from police who want to buy their own electronics-sniffing dog, the International Business Times reports.

Super Sniffers

Apart from electronics, dogs are putting their noses to work in a multitude of disciplines – from fighting crime to diagnosing diseases. In one of the largest studies of its kind, dogs detected the presence of prostate cancer with 98 percent accuracy in the urine of 600 test patients. Dogs can also detect lung cancer simply by sniffing patients’ breath. Dogs have been used to sniff out bed bugs, explosives, dead bodies, contamination in water and more.

In other words, if something smells fishy, it may be time to call on man’s best friend to lend a helping hand — or snout.


Cyber Warriors Need Not Be Soldiers

Throughout history, warriors of all cultures have trained their bodies to endure physical hardship and combat, whether they wielded swords and shields or carried guns and ammunition. In the 21st century, countries such as China and Estonia have recruited a new breed of warriors who fight as part of cyber militias rather than as official military personnel in uniform. Such cyber warriors are often civilians with high-tech jobs who spend their days tapping away at keyboards rather than practicing how to accurately shoot an assault rifle or pass fitness tests. A number of U.S. military officers and national security experts say that the United States also needs to begin recruiting tech-savvy civilians without requiring them to become traditional soldiers.

The U.S. military has trained a growing number of uniformed cyber warriors. But it faces a special challenge in recruiting a certain caliber of computer professionals who would rather work at high-paying jobs in Silicon Valley than enlist for military service. That issue came up during a session of the “Future of War” conference hosted by the New America Foundation on Feb. 25, when an audience member asked a panel of experts if the U.S. should “militarize” the country’s technological talent and resources in Silicon Valley.

“Somebody who is high on Coke, skittles and slinging code is not a good candidate for basic training,” said Brad Allenby, professor of engineering and ethics at Arizona State University, in response.

Giving Up Guns for Computers

U.S. military generals have previously echoed Allenby’s sentiment in less colorful terms. Lt Gen Robert Brown, commander of the U.S. Army Combined Arms Center at Fort Leavenworth, spoke about the challenge of recruiting individuals with cyber skills who were “not natural candidates for a military career” during a previous New America Foundation event in December, according to The Telegraph. The general talked about possibly lowering or removing the physical and combat training standards for cyber warriors.

“They grew up on Google and wear ponytails,” Lt Gen Brown said. “We need to look at ways to bring them into the Army without necessarily going through the same training procedures as our combat troops.”

One possible approach was voiced by Lt Gen Edward Cardon, commanding general for United States Army Cyber Command. Lt Gen Cardon talked about the long-term goal of recruiting 30 percent of the U.S. Army Cyber Command’s talent from civilians through a two-year career field. He spoke during another New America Foundation conference called “Cybersecurity for a New America” on Feb. 23.

The U.S. is not alone in trying to ease military training standards for cyber warrior recruits. The U.K.’s Ministry of Defence plans to waive physical fitness requirements on a case-by-case basis for a new cyber reservist unit made up of 500 computer professionals, according to The Telegraph. Such reservists will also be exempt from carrying weapons or being deployed abroad.

That U.K. approach may prove more appealing for computer experts than serving as a traditional military reservist. But Peter Singer, a strategist and senior fellow at the New America Foundation, suggested that the U.S. should consider a civilian militia approach that exists outside the traditional military. During the Future of War conference, Singer pointed to the Estonian Cyber Defense League as a leading example.

“The Estonia model is more like a militia model or a civil air patrol model where you’re pulling in experts but not dropping them into a formal military organization,” Singer said.

You’re in the Cyber Militia Now

Estonia formed the Estonian Cyber Defense League after suffering cyber attacks during a 2007 incident involving tensions with Russia. The hackers behind the 2007 cyber assault launched distributed denial-of-service (DDoS) attacks that exploited Estonia’s reliance on Internet service for a fully functional government and economy. The cyber attacks disrupted credit card and automatic teller transactions for several days, took down the government’s parliamentary email server and crippled the IT capabilities of government ministries.

As a paramilitary organization, the Estonian Cyber Defense League includes some of the country’s best information technology and security professionals from both the public and private sectors, according to LTC Scott Applegate, author of a 2012 paper titled “Leveraging Cyber Militias as a Force Multiplier in Cyber Operations.” Such a cyber militia provides motivated civilians with a way to serve in times of crisis without having to formally join the military.

Still, Singer observed that the U.S. Department of Defense and defense contractors might hesitate at the idea of a cyber militia intruding on their turf. He also pointed out that Silicon Valley remains largely distrustful of the U.S. national security establishment, especially after Edward Snowden’s leaks of National Security Agency files revealed widespread surveillance of the Internet and cellular networks. That culture gap means many computer professionals may not be so eager to heed Uncle Sam’s call to become cyber warriors.

The U.S. has already fallen behind the times in recruiting either a cyber reservist force or cyber militia. China has organized a civilian cyber militia to support its military since 2005, even earlier than the Estonian Cyber Defense League. The Chinese cyber militia consists of “workers with high-tech day jobs” who “focus on various aspects of military communications, electronic warfare, and computer network operations,” according to the annual report by the U.S.-China Economic and Security Review Commission released in 2012.

How Facebook Keeps Paul Walker’s Memory Alive

Picture a group of mourners surrounding a tombstone within a huge graveyard. Their voices blend together in a respectful murmur of words such as “Miss you so much” and “You always gonna be my brother.” Then the imaginary tombstone’s face displays an image of actor Vin Diesel, his head bowed as if in grief, standing opposite deceased actor Paul Walker. The tagline “One last ride” appears above the film title “Furious 7.”

The seventh film in the high-octane “Fast and Furious” franchise may appear to be an unlikely vehicle for memorializing Paul Walker, who died in a car crash on Nov. 30, 2013. But the real-life relationships of the close-knit “Fast and Furious” cast and the running theme in the films about the importance of family make “Furious 7” a surprisingly suitable vehicle for sending off Walker in style—especially given that the actor’s brothers helped finish filming the movie in his place. In that spirit, Walker’s official Facebook page has become an online memorial to the deceased actor that also promotes the newest film in the “Fast and Furious” franchise. Fans flock to Paul Walker’s page to both pay their respects and get hyped for the actor’s last appearance in the series.

“A lot of my research has found that postmortem social networking practices are not sanctified,” said Jed Brubaker, a Ph.D. candidate in the department of Informatics at the University of California, Irvine. “While there is a kind of reverence that enters into these spaces, it doesn’t have the overwhelming sense of a graveyard. Instead, what we find is a blend of a familiar social media genre form of communication intermixed with a more traditional funerary style similar to how you would talk at a funeral or gravesite.”

Brubaker has collaborated with Facebook in studying death and social media since last year, but has conducted research on such topics for the past six years. While his research hasn’t focused specifically on celebrities, he sees many similarities between the fan comments left on Paul Walker’s Facebook page and the comments left on the Facebook profiles of ordinary people who have passed on.

The rise of Facebook and other social networks has changed the way people approach death, Brubaker explains. When Facebook profiles linger on beyond the lives of their owners, they allow people to maintain a continuing relationship with the deceased. Friends and family can often still post to the dead person’s profile, send them messages and browse through the photos of the deceased. Facebook allows for such relationships with deceased Facebook users through its “memorialized account” feature and “legacy contact” option that allows people to designate someone to manage their memorialized account after they die.

“Social media is a place where the kinds of things you could do to interact with people while they were alive are still available after they die,” Brubaker said.

Paul Walker’s Facebook page isn’t exactly the same as any given person’s personal profile. For one thing, it’s a Facebook page tailored for businesses, brands and organizations rather than a personal Facebook profile. Walker seemed to personally post messages to the Facebook page every now and then. For example, his last message wished people a happy Thanksgiving. But most messages were posted by a team of publicists under the collective name “Team PW.”

The nature of relationships with a celebrity also makes a difference in how fans continue to interact with Walker’s Facebook page. It’s probably safe to assume that most fans never actually got the chance to meet Walker in person and form a personal connection. Fans grieved for Walker, but the nature of their relationship with him through social media didn’t necessarily change much.

“The ways that someone can be a fan are still available after Paul Walker has died,” Brubaker said. “In the case of a celebrity death, the individual relationship to the deceased hasn’t changed because they never met in real life.”

One thing that people can find “incredibly distressing” is when a memorialized Facebook profile posts a status update. Even if the family member or another legacy contact signs the message, the appearance of a dead person’s profile posting a message can still seem haunting to the deceased’s Facebook network, Brubaker explained. He added that the team managing Walker’s Facebook page seems to try to avoid such confusion by signing all messages with “Team PW” and avoiding first-person pronouns such as “I” or “we.”

The release of “Furious 7” and Walker’s continued social media presence have helped keep the actor in the public eye. “Fast and Furious” castmates such as Vin Diesel and Jordana Brewster have made emotional speeches remembering Walker in the run-up to the release of the latest film.

At the same time, Walker’s Facebook fan page maintains almost 40 million “likes” and continues to attract thousands of “likes” and “shares” for each new status update post. A survey commissioned by Variety last year showed that Walker was more influential than living Hollywood celebrities among Americans between the ages of 13 and 18.

Most deceased people’s Facebook profiles won’t attract nearly as much attention as the Facebook fan pages of celebrities. But just having that digital presence after death allows for each individual’s “fans”—family and friends—from all different geographic locations to continue celebrating and remembering the deceased for a long time afterward. Many family and friends return year after year for certain anniversaries or other events, Brubaker said.

“One thing I find interesting is how, by virtue of us all having pages or profiles that remain on Facebook or other social media platforms, we all get a micro form of celebrity in a sense,” Brubaker said. “That is a substantial difference you can see social media having on everyday American deaths.”

When Robot Personalities Mimic the Dead

Hollywood actress Audrey Hepburn and martial arts legend Bruce Lee are just a few of the dead celebrities who have been resurrected as digital avatars in TV commercials to sell products such as chocolate or whiskey. A Google patent raises a new possibility by describing robot personalities based upon the voices and behaviors of dead celebrities or loved ones. Such a vision may not necessarily come true, but it raises the question of whether people would feel comfortable dealing with a robot that actively mimics deceased people.

The patent awarded to Google on March 31, 2015, focuses on the idea of creating robot personalities that could be downloaded as software and transferred between different robots through an online service. It also describes the idea of creating customizable robot personalities tailored to the preferences of human users. That lays the groundwork for a future where robotic hardware could update and switch its software personalities based on the specific human customers it’s serving. The patent also covers the idea of a base personality that acts out different moods such as happiness, fear, surprise, and thoughtfulness. Google’s patent even uses well-known celebrities — such as a perplexed “Woody Allen robot” or a derisive “Rodney Dangerfield robot” — to describe a range of possible robot moods.

“The personality could be multifarious, in the sense of multiple personalities, which may be selected by the robot according to cues or circumstances, or a personality could be selected by a user (a human),” according to the Google patent. “The robot may be programmed to take on the personality of real-world people (e.g., behave based on the user, a deceased loved one, a celebrity and so on) so as to take on character traits of people to be emulated by a robot.”

To be fair, companies frequently patent ideas that never become commercial products for one reason or another. On the other hand, Google has gone on a notable robot-buying spree, snapping up at least eight robotics companies in six months last year. The technology giant clearly sees a big future in robotics one way or the other. And as the patent suggests, the company has put some thought into how future robots might socially interact with humans on a more regular basis.

Bring Out Your Dead

Customizable robot personalities represent a logical extension of smartphone assistants such as Apple’s Siri. But the “deceased loved one” and “celebrity” personality examples described by the Google patent almost certainly won’t meet with universal joy and acceptance if they ever become a reality. We only need to look at past commercials that resurrected deceased celebrities as computer-generated avatars to get some idea of people’s possible reactions, said Karl MacDorman, a robotics researcher at Indiana University. MacDorman has spent much of his research career studying the “uncanny valley,” an idea that describes how certain human-like figures in animated films or robotics can come off as appearing eerie or creepy.

The idea of using dead celebrities in commercials was alive and well even before the arrival of modern computer-generated imagery (CGI) techniques; older commercials simply combined old footage of the celebrities with new footage through computer compositing techniques. Remember John Wayne in all those Coors Light commercials? How about Fred Astaire dancing with a Dirt Devil vacuum cleaner? Or Audrey Hepburn being repurposed for selling Gap jeans?

More recently, the advancement of CGI technology has allowed advertising executives to direct the digital avatars of deceased celebrities in ways that they never acted before while still living. That has given rise to controversial cases such as a digital avatar of Orville Redenbacher cracking awkward jokes about MP3 players in a 2007 popcorn commercial. YouTube comments ranged from some people being impressed to others describing the digital avatar’s look as “creepy” or like a “zombie.”

MacDorman personally thought that the digital recreation of Redenbacher lacked authenticity, in part because the voice in the commercial failed to capture the real-life Redenbacher’s distinctive Indiana accent. But the robotics researcher also conducted an informal poll of about 20 people to gauge their reactions to the Orville Redenbacher commercial.

“Some people thought it was Orville Redenbacher, and it didn’t bother them at all,” MacDorman said. “Others could tell it was computer generated. Others thought the idea of resurrecting Orville Redenbacher was really sick. There was quite a diversity of opinion.”

In 2013, martial artist Bruce Lee was digitally resurrected for a Johnnie Walker whiskey ad. That commercial drew less controversy about the appearance of the digital avatar — perhaps because of the better CGI — but still drew disapproving comments about the use of Lee’s likeness to sell whiskey. Some people suggested that the whiskey commercial was disrespectful because Lee was a “health nut” who was never big on alcohol, according to Time.

Last year, actress Audrey Hepburn was given the digital avatar treatment in a Galaxy (Dove) chocolate commercial. That commercial was generally successful in winning over audiences, judging by the YouTube comments. Rather than having Hepburn hawk the product directly to customers, the commercial featured the digital Hepburn in a romantic scene vaguely reminiscent of some of her more famous Hollywood roles. It even plucked at heartstrings with the inclusion of the famous song “Moon River” sung by Hepburn in the 1961 film “Breakfast at Tiffany’s.” Altogether, the commercial wisely allowed Hepburn to stay in character, MacDorman said.

Maybe some future robot owners might find it amusing or even comforting to have their robot speak and behave like their favorite celebrity, dead or alive. Whether or not such a future might happen depends in large part upon how celebrities and their descendants — or whatever entity owns the right to their likeness — choose to participate in such projects. For example, fans of deceased comedian Robin Williams might be either relieved or disappointed to find out that he chose to restrict exploitation of his likeness for at least 25 years after his death, according to the Hollywood Reporter.

We Have the Technology

But individuals could still choose whether or not they would want robot personalities based on a “deceased loved one.” The technology may already exist for enabling a robot personality that can partially simulate a real-life person’s personality. A real-life person’s interactions with other people could provide behavioral data for developing a robot personality based on the person, MacDorman said. Ideally, the real-life person might even directly control the robot’s behavioral actions for a while so that the robot could build up a database of behavior. Existing software can already create a synthesized version of someone’s voice based on vocal samples.

The Google patent describes an example of how a personality program could vacuum up information from a person’s smartphone or laptop to create a new personality based on a living or dead person:

Adoption of a personality, or some personification attributes, could be more direct, such as a simple user command to adopt a character by name: “Be mom”; “Become Gwynneth”; “Adopt persona Beta.” The character (personality) may be a program already stored, or it could be something in the cloud. If the later, the robot would interact with the cloud to pull sufficient information regarding the “new” persona to thereby recreate a simulacrum for the robot. The information for the persona could also come from a user device. Such as, in response to a “Be mom” command, “mom” may not be known to the robot. The robot processor can then search user devices for information about “mom”… For example, the robot may be able to determine “mom’s” voice from recordings, and further how the user interacts with “mom” from text messages and recordings. A photograph of “mom” may result in a display for the monitor of FIG. 2C.
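The lookup order that passage describes (a stored program first, then the cloud, then the user’s own devices) could be sketched roughly as follows. Every function and data store here is hypothetical, illustrating the patent’s described flow rather than any actual Google API:

```python
def resolve_persona(name, local_store, cloud, user_devices):
    """Find persona data: stored program first, then the cloud, then user devices."""
    if name in local_store:                    # "a program already stored"
        return local_store[name]
    profile = cloud.get(name)                  # "something in the cloud"
    if profile is not None:
        return profile
    # Fall back to mining the user's own devices for recordings, messages
    # and photos associated with the named person.
    evidence = {"recordings": [], "messages": [], "photos": []}
    for device in user_devices:
        for kind in evidence:
            evidence[kind].extend(device.get(kind, {}).get(name, []))
    return evidence if any(evidence.values()) else None

# "Be mom": the robot knows nothing locally or in the cloud, so it searches
# the user's devices and finds a voice recording to build the persona from.
persona = resolve_persona(
    "mom",
    local_store={},
    cloud={},
    user_devices=[{"recordings": {"mom": ["voicemail_01.wav"]}}],
)
```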

Google may or may not choose to ever provide future robot owners with such robot personality options. But whatever the legal situation, such options will almost inevitably spark broader discussions among individual families and within society as a whole about resurrecting the dead in robotic form.

“While an individual may find comfort in having a robot or digital double impersonate a deceased loved one, others may well find this creepy, and the practice could be stigmatized,” MacDorman said.

X-Ray Vision of an Exploding Lithium Ion Battery

Lithium-ion batteries have drawn lots of headlines in recent years, and not in a good way. From the fires caused by the batteries in Boeing planes to exploding Tesla car batteries, these ubiquitous batteries powering our tech have shown they have a dark side.

These fiery events are extremely rare. However, scientists want to understand why these explosions occur, especially since phones, tablets and automobiles will be increasingly reliant on these batteries in the future.

This week, scientists took a big step toward discerning the root cause of these blow-ups by observing and recording a lithium-ion battery meltdown with X-ray imaging for the first time.

Thar She Blows!

The experimental design was quite simple: Scientists subjected two commercially available lithium-ion batteries to external heat. Then, they filmed the batteries using thermal and high-frequency X-ray imaging. The goal was to determine how, and when, a runaway reaction (aka, BOOM!) occurred.

Scientists found that lithium-ion batteries start to break down when temperatures reach 194 degrees Fahrenheit. As the temperature rose, thermal and chemical reactions inside the superheated battery produced gas pockets that deformed the spiral layers inside the cell.

This set up a domino effect that led to higher temperatures and more gas. The gases built up until there was nowhere to go but out. When the battery literally blew its lid, hot gas and molten material jetted out. Meanwhile, temperatures inside the cell hit 1,985 degrees Fahrenheit — hot enough to melt copper — researchers reported Tuesday in Nature Communications.

Making Batteries Safer

Scientists hope their findings help engineers better understand the explosive personality of lithium-ion batteries and design safeguards against disaster. Future research may look into other common assaults, such as stabbing and crushing the batteries, to see how, exactly, they fail.

Google Envisions Robot Remote Controls That Know Your Face

Cuddly robot toys such as Furby or AIBO the robot dog have won many human hearts and minds over the past decade. That may be why Google researchers envision the possibility of turning such robot toys into intelligent remote controls for home entertainment systems. But the idea of a teddy bear or doll constantly watching or listening in a home has already stirred some controversy about home privacy.

A Google patent application spotted by SmartUp, a legal technology firm, describes how an “anthropomorphic device” with hidden cameras for eyes and microphones for ears could automatically translate simple voice commands into actions that activate smart TVs, DVRs, DVD players and other devices. Instead of manually pushing buttons on remote controls or even a large universal remote control, people could simply tell their robot remote control to stream the latest episode of a favorite TV show through their Blu-ray player or Apple TV. The Google patent — filed in 2012 but published on the U.S. Patent and Trademark Office website on May 21, 2015 — suggests that the anthropomorphic device could simplify the process of accessing TV shows and movies through the growing swarm of home devices and online services. The patent also includes drawing concepts for the lovable robot toy as a teddy bear and a stuffed rabbit.

“There are at least some advantages to an anthropomorphic device taking on a familiar, toy-like, or ‘cute’ form…” according to the Google patent application. “Some users, especially young children, might find these forms to be attractive user interfaces. However, individuals of all ages may find interacting with these anthropomorphic devices to be more natural than interacting with traditional types of user interfaces.”

The device — let’s just call it “Teddy” — would work something like this. If Teddy detects a person in the room, it would look at that person so that its camera and microphones are pointed in his or her direction. The Teddy might simply recognize the person visually through its camera. Or it might turn its head in the direction of the person based on the sound of his or her voice. A person could also directly address Teddy by name or by using certain keywords, which would be Teddy’s cue to look in his or her direction. Teddy might even use video captured by its camera to read the lips of someone speaking, in case the audio coming in through the microphone is too soft or distorted.

The Teddy may not necessarily take the form of a physical toy; Google’s patent application also allows for the possibility of a hologram or a virtual avatar that only appears on a screen. But the main function of being able to translate voice commands into actions for coordinating home media devices would remain the same in any case. Google’s patent specifically describes the possibility of the Teddy device communicating with a “cloud-based” online server that could handle much of the computer processing. Alternatively, the Teddy device might be a more capable robot with its own self-contained computer processing power and data storage.

They See You When They’re Sleeping

It’s worth keeping in mind that companies file patents all the time which never translate into commercial products. Still, Google’s patent idea for a smart Teddy has already led to some alarm. Representatives for watchdog groups expressed worries about a cuddly device capable of constantly monitoring people with cameras and microphones. “The privacy concerns are clear when devices have the capacity to record conversations and log activity,” said Emma Carr, director of Big Brother Watch, in a BBC News interview. “When those devices are aimed specifically at children, then for many this will step over the creepy line.”

Google’s patent idea includes a description of how Teddy might still be listening or detecting movement even when it appears to be “asleep.” This makes sense if you want to save battery power for Teddy by having a “sleep mode” that still allows it to respond when needed, but it does admittedly come off sounding a bit eerie.

“It should be noted that while the anthropomorphic devices described herein may have eyes that can ‘close,’ or may be able to simulate ‘sleeping,’ the anthropomorphic devices may maintain their camera and microphones in an operational state,” the filing reads. “Thus, the anthropomorphic devices may be able to detect movement and sounds even when appearing to be asleep. Nonetheless, when in such a ‘sleep mode’ an anthropomorphic device may deactivate or limit at least some of its functionality in order to use less power.”

Other possibilities for Teddy include having a “profile” of each resident in a home. That would allow Teddy to tailor its actions and responses to individual residents, but it would need to store representative voice samples or possibly a facial picture so that it could recognize people by their voice or face. A separate Google patent application on robots with multiple personalities tailored to the preferences of individual people — even personalities based on dead celebrities or family members — could also theoretically come into play and allow a Teddy to change its behavior based on the person it’s interacting with.

How Robot Remote Controls Can Respect Privacy

The early concerns swirling around Google’s patent idea are similar to those that have arisen around existing smart devices designed for home entertainment. For example, Samsung’s Smart TVs have a voice recognition system that allows people to change the channel or volume level with voice commands. Such smart TVs already caused some controversy over fears that they were recording people’s living room conversations, which prompted Samsung to post a clarification about the data being stored. Similarly, Microsoft had to assuage privacy concerns over its Xbox One and Kinect accessory that can capture videos, photos, facial expressions and even read heart rates.

For the most part, Google’s patent idea for Teddy seems to mainly put a personable face on existing home entertainment devices; it’s the difference between interacting with a cuddly robot toy and a faceless remote control or device. It also hints at a possible future of homes filled with social robots designed to interact well with humans. Such social robots would likely have many, if not most, of the capabilities found in Google’s patent filing.

Certain design choices may lead people to see a Teddy device or social robots as behaving in a creepy manner, but they don’t necessarily compromise home privacy any more than the Microsoft Kinect or any existing devices that can silently monitor people’s behaviors. It’s up to companies to have transparent privacy policies that explain what a Teddy or any smart device can or can’t do. If companies also clearly allow customers to set privacy levels on devices, that may go a long way toward allaying fears about the future Teddy sitting on the couch.

UPDATE: A previous version of this post mistakenly described SmartUp as a law firm. I’ve corrected the post to reflect the fact that SmartUp is actually a legal technology firm that has a platform for connecting consumers with attorneys.

Encrypting Transactions With an App

The billions of credit card transactions each year in the United States rely on secure cryptographic keys based on random numbers, which require specialized hardware to generate. Now, European physicists have shown how smartphones like the one in your pocket can reliably produce random numbers based on the laws of quantum mechanics.

Modern smartphone cameras are sensitive enough to detect variations in light by just a few photons — the fundamental particles of light. Researchers at the University of Geneva exploited this sensitivity to help produce random numbers. They also capitalized on another aspect of light: The exact number of photons emitted by a source at any instant is fundamentally unpredictable.

Bruno Sanguinetti and his team used an 8-megapixel camera in a Nokia N9 smartphone to take a picture of a light-emitting diode. The camera’s image sensor recorded the number of photons hitting each pixel, recovering a slightly different number each time due to quantum uncertainties. The scientists then turned these inherently random photon counts into strings of random numbers suitable for creating cryptographic keys.

After one of the random strings of digits is created on the phone, it would be sent to the other party involved in the transaction (that is, the buyer or seller). The digits would function just like the cryptographic keys used today in credit card transactions. Only these would be even more secure because they’d be grounded in quantum uncertainties.
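Turning raw photon counts into trustworthy random bits requires a post-processing step, because the counts themselves can be slightly biased. Here is a minimal Python sketch of the idea, taking the least significant bit of each count and running it through the classic von Neumann extractor. The Geneva team used a more sophisticated hash-based extractor, and the photon counts below are invented for illustration:

```python
def photon_counts_to_bits(counts):
    """Take the least significant bit of each photon count.
    The LSB fluctuates with quantum shot noise, but may still be
    slightly biased, so we post-process with an extractor."""
    return [c & 1 for c in counts]

def von_neumann_extract(raw_bits):
    """Von Neumann extractor: turn a stream of possibly biased
    (but independent) bits into unbiased output bits.
    Pairs of input bits map as: (0,1) -> 0, (1,0) -> 1,
    (0,0) and (1,1) -> discarded."""
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Hypothetical per-pixel photon counts from one camera frame:
counts = [503, 498, 501, 500, 497, 502, 499, 501]
raw = photon_counts_to_bits(counts)   # [1, 0, 1, 0, 1, 0, 1, 1]
print(von_neumann_extract(raw))       # -> [1, 1, 1]
```

A real implementation conditions millions of pixel readings per frame through a cryptographic hash; this toy pairs-of-bits scheme just shows why biased quantum noise can still yield unbiased output.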

In an Octopus’s Garden with You (for Five Years)

Earlier this month, while I was busy taking screenshots of autofill suggestions for medical searches on Google, something shocking happened: Inkfish had its five-year anniversary.

Five years!!  It’s about twice the lifespan of a pet hamster. It’s closer to three times the lifespan of a common octopus. In that time I’ve written 519 posts, had three different online homes, and somehow managed to learn almost no HTML.

To mark the occasion, I dug through my analytics and found the most-viewed story from each year of Inkfish’s existence.

Year Five: The Women Who Stare at Babies

Year Four: Scientists Convince People Their Hands Are Rocks

Year Three: Enslaved Ants Get Even by Killing Captors’ Babies (and here’s the original version, at the blog network Field of Science)

Year Two: Human Dung Wins Interspecies Taste Test (original here)

In Year One I had a blogspot address, which no longer exists. My two most popular posts were tied with 78 pageviews. One was Daring to Discuss, a complaint about a column in the New York Times. The other was an announcement about a byline I had in National Geographic, which I think highlights how many of my readers were related to me.

Thanks to all of you who, even if we’re not related, are reading today. And thanks to Lisa Raffensperger for inviting me to blog at Discover. Five years ago I sat on a secondhand couch and decided it would be fun to share a few science stories with my friends. Everyone who’s given me a reason to keep doing this has my gratitude, in oceans.


PS: If you want to get me a blogday gift, you could follow me on Twitter or Facebook, or subscribe to Inkfish’s RSS feed. If you’re curious what else I’ve been up to since that magazine byline I bragged about five years ago, you can find some of my non-Inkfish writing at my website. Or, you know, cupcakes are good.

What Happens When 28,000 Volunteers Are Set Loose in the Virtual Serengeti

What’s a scientist to do with 1.2 million photos, most of grass but some containing valuable data about endangered animals? Turn the whole thing over to the public, if you’re the creators of Snapshot Serengeti. This project caught the attention of tens of thousands of volunteers. Now their work has produced a massive dataset that’s already helping scientists in a range of fields.

Most online citizen science involves a degree of tedium—counting craters, tracing kelp mats. But Snapshot Serengeti is part safari, part detective work. That may be why volunteers tore through the photos so eagerly.

The pictures came from 225 camera traps set up in a grid across 1,125 square kilometers of Serengeti National Park in Tanzania. The cameras have infrared sensors that are triggered by a combination of heat and motion. That means when an animal walks past, the camera snaps a quick burst of pictures.

The cameras were bolted onto trees or metal poles and surrounded by steel cases. Nevertheless, about 15 percent of the cameras had to be replaced each year after being damaged by weather or animals.

Between 2010 and 2013, the camera traps captured 1.2 million scenes. To sort through the overwhelming number of pictures, scientists turned them into an online game for citizen scientists. Snapshot Serengeti is hosted at the Zooniverse, a citizen science portal. (All the images uploaded to Snapshot Serengeti have now been classified, but you can still play around with it. And the cameras are still running, so aspiring classifiers should stay tuned for new pictures.)

Volunteers could classify a picture as empty if the camera had misfired on some branches or grass blades waving in the sun. That was the case for about three quarters of the photos. When an animal was present, users went through a quick guide to determine the most likely species. (What color or pattern does its fur have? What are its horns and tail shaped like? What might it be mistaken for?)

Animals could be classified as one of 48 different species (aardvark, porcupine, hippopotamus) or groups of species (rodent, miscellaneous bird). Users also reported how many animals they saw, what the animals were doing (moving? eating?), and whether any young were around.

The 28,000 registered Snapshot Serengeti users, along with about 40,000 unregistered users, classified more than 300,000 animal photos. Then scientists led by Alexandra Swanson at the University of Oxford used a “simple algorithm” to merge these classifications into a single consensus dataset. They designated each picture with the animal or animals that the most people had picked.

They also gave each image a score for uncertainty and difficulty. A photo of a furry haunch pressed against the camera lens, for example, might have high uncertainty because volunteers didn’t agree on how to classify it. A clear shot of two giraffes, on the other hand, would get more consistent answers.
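A plurality vote with an agreement-based uncertainty score can be sketched in a few lines of Python. This is one plausible way to do it, not necessarily the Snapshot Serengeti team’s exact algorithm, and the vote lists are invented:

```python
from collections import Counter
import math

def consensus(votes):
    """Merge volunteer classifications for one image into a consensus
    label plus uncertainty measures.

    votes: list of species labels chosen by individual volunteers.
    Returns (winning_label, fraction_agreeing, entropy)."""
    tally = Counter(votes)
    label, count = tally.most_common(1)[0]
    fraction = count / len(votes)
    # Shannon entropy of the vote distribution: 0 when everyone
    # agrees, larger when volunteers are split.
    entropy = -sum((n / len(votes)) * math.log2(n / len(votes))
                   for n in tally.values())
    return label, fraction, entropy

# A clear shot of giraffes vs. a blurry haunch pressed to the lens:
print(consensus(["giraffe"] * 9 + ["eland"]))
print(consensus(["wildebeest", "buffalo", "eland", "wildebeest", "zebra"]))
```

The second image would get a lower agreement fraction and higher entropy, flagging it as a candidate for expert review.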

But how accurate were the volunteers? Swanson and her coauthors created a smaller, “gold standard” set of images to find out. Experts classified 4,149 of the Snapshot Serengeti images. When they checked these classifications against the larger, volunteer dataset, the researchers saw that species IDs by citizen scientists were almost 97 percent accurate.

The researchers are making their dataset available to other scientists, and hope that it will be as useful as the photos are entertaining. Already, they say, their collaborators are using the data to work on automated species detection and classification—in other words, teaching computers to do the same tasks that the tens of thousands of volunteers did.

If you participated in Snapshot Serengeti, you can rest assured that your time (and my time) spent staring at warthogs and elands wasn’t wasted. Like these cheetahs, you’ve earned a nap.

This Website Wants to Guess Your Age

Engineers at Microsoft wanted to test their newly released face detection software, which guesses your age and sex, so they opened it up to the public. They hoped to get 50 respondents — and they got over 35,000 in a few hours. But be forewarned: the site will either make your day or ruin it.

So we decided to put it to the test.

A Little Background

It took a group of developers about a day to set up the online demonstration, but the facial recognition algorithm has been in development for much longer. The site was really meant to kickstart a conversation about the powerful potential of cloud-based software that can analyze data quickly.

Behind the scenes, Microsoft is working to perfect machine learning — as are most other tech giants — to better sort through the Internet’s countless photographs. In addition to facial recognition, Microsoft is also testing speech recognition and image processing algorithms to sift out the chaff in image searches.

So far, it seems like the algorithm needs a little work (Microsoft admits this as well). The algorithm, much to the pleasure or humiliation of thousands, adds decades to the clearly young and trims years off the obviously old.

This Was the Most Radical Shift in U.S. Music of the Last 50 Years

The Beatles are credited with igniting a rock ’n’ roll revolution when they toured the United States in 1964. I don’t want to spoil the party, but that revolution was well underway in the states long before the mop-top quartet arrived, and this is more than just a rumor. It’s science.

Researchers in the United Kingdom used big data analysis to build the first evolutionary history of popular music in the United States. They processed over 17,000 songs that appeared on the U.S. Billboard Hot 100 list from 1960 to 2010 to pinpoint style trends, musical diversity and the timing of revolutions.

According to their results, the single most radical change in American music had nothing to do with “the British Invasion.” Instead, it occurred much more recently, with the surge in popularity of hip-hop. 

Musical Theory

Researchers from Queen Mary University of London and Imperial College London used signal processing and text mining to analyze the musical properties of chart-toppers over five decades. Their system separated songs into groups based on properties such as patterns of chord changes, tone color and lyrics.

For example, songs with minor-seventh chords — used for harmonic color in funk and soul — peaked in the mid-1970s. A surge in loud, energetic lead guitars and aggressive percussion, seen in the late ’70s and early ’80s, mirrored the rise and fall of arena rock.
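The kind of trend analysis described here — a musical feature rising and falling across the decades — can be illustrated with a toy computation. The frequencies below are invented, not the study’s data:

```python
def peak_year(feature_freq_by_year):
    """Given {year: fraction of charting songs exhibiting a feature
    (e.g., minor-seventh chords)}, return the year the feature peaked.
    Illustrative only -- the input frequencies are made up."""
    return max(feature_freq_by_year, key=feature_freq_by_year.get)

# Invented frequencies sketching a mid-1970s funk/soul peak:
minor_sevenths = {1965: 0.10, 1970: 0.18, 1975: 0.31, 1980: 0.22, 1985: 0.12}
print(peak_year(minor_sevenths))  # -> 1975
```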

Musical Revolutions

Music in the United States changed continuously over those five decades, but researchers say the period was punctuated by three abrupt revolutions. The first, in 1964, saw the expansion of several different styles of music as well as the arrival of The Beatles and The Rolling Stones. But the Beatles and Stones didn’t cause a revolution.

Instead, researchers say, the groups’ music reflected existing trends toward the use of major chords, increased guitar aggression, and decreased use of mellow vocals. The Beatles and Stones didn’t change the game, but they sure benefited from it.

The second revolution occurred in 1983 as hair bands and arena rockers added a flood of songs with heavy percussion and vocals.

In 1991, however, the biggest revolution in American music occurred. Suddenly, more songs did away with musical chords, and songs with energetic speech skyrocketed.

Why? Hip-hop had officially arrived on the musical charts. In other words, John Lennon and Mick Jagger didn’t lead a musical revolution in the U.S., but Tupac, LL Cool J and other rappers did. Researchers published their analysis Tuesday in the journal Royal Society Open Science.
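Spotting a “revolution” in data like this amounts to change-point detection: finding the year where the mix of musical features shifts fastest. A toy sketch follows — the study used a more sophisticated novelty analysis, and the prevalence numbers below are invented:

```python
def biggest_shift(series_by_year, window=2):
    """Return the year with the largest jump between the average of the
    preceding `window` years and the following `window` years.
    A toy change-point detector, not the study's actual method."""
    years = sorted(series_by_year)
    best_year, best_gap = None, -1.0
    for i in range(window, len(years) - window):
        before = sum(series_by_year[y] for y in years[i - window:i]) / window
        after = sum(series_by_year[y] for y in years[i:i + window]) / window
        gap = abs(after - before)
        if gap > best_gap:
            best_year, best_gap = years[i], gap
    return best_year

# Invented "energetic speech" prevalence, jumping around 1991:
rap_freq = {1987: 0.05, 1988: 0.06, 1989: 0.07, 1990: 0.08,
            1991: 0.20, 1992: 0.24, 1993: 0.26, 1994: 0.27}
print(biggest_shift(rap_freq))  # -> 1991
```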

It’s Not as Bad as You Think

In terms of variety, 1984 was the blandest year between 1960 and 2010, and researchers attribute that to the dominance of genres such as new-wave, disco and hard rock. The emergence of hip-hop, and the decline of those genres, helped musical diversity bounce back.

However, despite what your hipster friends say, researchers found no evidence to back up the claim that a hegemonic recording industry oligopoly is today contributing to a decline in musical diversity. In fact, musical diversity has remained pretty consistent over the last five decades, they say.