Tag Archives: Artificial Intelligence

Stuff in the news 6/18/2013 – Communication

Vintage Telephone

Image courtesy of Daniel St. Pierre / FreeDigitalPhotos.net

  • Researchers in the UK used machine-learning algorithms to model online reading preferences. The results are somewhat disappointing for my public affairs friends. (A rough sketch of this kind of appeal-vs-topic analysis follows this list.)
    • “The research, led by Nello Cristianini, Professor of Artificial Intelligence, identified the most attractive keywords, as well as the least attractive ones, and explained the choices readers made… Professor Cristianini, speaking about the research, said: ‘We found significant inverse correlations between the appeal to users and the amount of attention devoted to public affairs… People are put off by public affairs and attracted by entertainment, crime, and other non-public affairs topics.’”
  • Looking for a robust social media management system? It may require some monetary investment, but check out Figure 8 in this SlideShare: A Strategy for Managing Social Media Proliferation
  • “In moments of political and military crisis, people want to control their media and connect with family and friends. And ruling elites respond by investing in broadcast media and censoring and surveilling digital networks.” – Why Governments Use Broadcast TV and Dissidents Use Twitter
    • Note to ruling elites (and PIOs, as well): You do not control the message on the interwebs. Get used to it. Engage or be overshadowed. Monitoring is a good thing, if you know what to do with the results.
    • Don’t really get this engagement thing? Here’s one place to start: 3 Models of Citizen Engagement – GovLoop
  • 5 (weird) Ways Government is Experimenting with Social Media
  • Do you really want to know what your cat thinks? Will Translation Devices Soon Allow Us To Talk With Animals? 
  • Use it only for good….  NSA-Style Intelligence Comes To Financial Services Communications
  • “Monitoring electronic conversations just got a lot more powerful with the alliance of Digital Reasoning and OP3Nvoice, announced at SIFMA today.

    Digital Reasoning, with roots in defense and intelligence, can search and understand structured and unstructured data and use it to build a view of underlying entities, facts, relationships and discover geospatial and temporal patterns. OP3Nvoice can monitor audio, video and text conversations across channels including fixed lines, mobile and Skype and search and locate conversations very fast.”

  • If the story is being reported accurately, this policy in River Bend, IL, makes so much more sense than just banning teachers and students from “friending each other”. The policy prohibits individual contact, including by phone, and requires teachers to go through parents or a mass messaging system for event cancellations. Have a policy and make it consistent. Social media is just a communication tool. School District Limiting Communication Between Staff and Students
  • In another school district, this one in Massachusetts, a teacher is advocating and implementing a comm plan. How about that? I have concerns about the notion of her students e-mailing her individually, and I probably would have opted for a collaborative space where they could post pictures instead, but the mere fact that she has a plan for everyone to follow is quite impressive:
    • “Communication protocols and systems create an almost seamless structure which leaves room for more responsive student service and coaching.”
    • Indeed.
  • Apparently, there’s a downside to more connectivity. Increased cell phone coverage tied to uptick in African violence: ‘Significantly and substantially increases the probability of violent conflict’
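To make the Bristol finding a bit more concrete, here is a minimal sketch of the kind of appeal-vs-topic analysis it describes: correlating how much public-affairs vocabulary a headline carries with how often readers click on it. The headlines, click counts, and keyword list below are invented for illustration; the actual study worked from large-scale reading data across news outlets.

```python
# Toy illustration of the appeal-vs-topic correlation described above.
# Headlines, click counts, and the keyword list are made up for the example;
# the real study used far richer features and much more data.
from scipy.stats import spearmanr

PUBLIC_AFFAIRS_KEYWORDS = {"budget", "policy", "election", "council", "tax"}

articles = [
    {"title": "City council debates budget and tax policy", "clicks": 120},
    {"title": "Celebrity couple spotted at film premiere",  "clicks": 940},
    {"title": "Police charge suspect in downtown robbery",  "clicks": 610},
    {"title": "Election results and policy fallout",        "clicks": 150},
]

def public_affairs_share(title: str) -> float:
    """Fraction of headline words drawn from the public-affairs keyword list."""
    words = title.lower().split()
    return sum(w in PUBLIC_AFFAIRS_KEYWORDS for w in words) / len(words)

shares = [public_affairs_share(a["title"]) for a in articles]
clicks = [a["clicks"] for a in articles]

rho, p_value = spearmanr(shares, clicks)
print(f"Spearman correlation between public-affairs share and clicks: {rho:.2f}")
# A negative rho mirrors the study's finding: the more public-affairs content,
# the lower the appeal to readers.
```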

On Land Destroyed by the Tsunami, Japan is Building a Futuristic Robot Farm

By Clay Dillow

Posted 01.06.2012 at 10:02 am


Devastation Wrought by the Tohoku Earthquake and Tsunami

U.S. Navy via Wikimedia

You have to hand it to the Japanese: last March’s Tohoku earthquake and associated tsunami wasn’t the first natural (or unnatural, for that matter) disaster to befall the island nation, but just as before, the country isn’t simply rebuilding. Instead, it’s rethinking and improving upon what was there before. The latest example: Japan’s agriculture ministry is building a fully robotic experimental farm on a swath of farmland inundated by the tsunami.

After salt is removed from the soil of the 600-acre plot, the agriculture ministry’s plan calls for unmanned tractors to work fields lit by LEDs that will keep insects at bay in lieu of pesticides. The robotic tractors will till, plant, and tend to rice, soybeans, wheat, and various fruits and vegetables that will then also be harvested by their robotic overseers.

The robo-farm, planned for a space in Miyagi prefecture roughly 200 miles north of Tokyo, is part of an effort to find smarter ways to reclaim Japan’s farmland–some 60,000 acres of which was fouled by the tsunami–and find more efficient ways to make use of the country’s limited agricultural space.

Getting more out of each square foot of agricultural real estate isn’t just a Japanese imperative, of course. As the global population grows, raising the per-acre yield of agricultural space becomes more and more crucial. Leave it to tech-savvy Japan to understand fundamentally that technology is the way forward in farming.

As such, the “Dream Project,” as the robo-farm initiative is known, will be built by partners like Panasonic, Hitachi, Fujitsu, NEC, and Sharp–technology companies most of us probably wouldn’t associate with agriculture. But perhaps we should start thinking that way. The Japanese certainly are.

[AFP]

First Cyborg Cerebellum | IdeaFeed | Big Think

Cerebellum

What’s the Latest Development?

Researchers at Tel Aviv University have successfully engineered a robotic cerebellum that functions in the brain of rats. First the scientists sought to understand what kind of signal a rat cerebellum sends when it receives stimuli, then they duplicated that response in the mechanical cerebellum they engineered. “Attaching the synthetic cerebellum to the rat, the scientists tried to condition it to blink at the sound of a tone. To get the rat to blink they first fired a puff of air at the rat when the tone sounded and then just sounded the tone.” When the motorized cerebellum was attached, the rat blinked. 

What’s the Big Idea?

The artificial cerebellum represents a higher order of brain-computer interface than what is currently experienced by users of advanced prosthetics that receive and execute orders from the brain. Since the cerebellum is but a part of the brain, the scientists had to engineer the “cerebellum to receive information from one part of the brain and send it back to another.” Scientists need to understand more about how the cerebellum functions before a test is performed on humans, but this recent experiment is good news for those with brain injuries.

Read it at Discovery News


A Small Step for a Robot, A Great Leap for Space Travel? | Think Tank | Big Think


What’s the Big Idea?

Robonaut is ascending its stairway to the moon in baby steps. Robonaut, aka R2, the first humanoid robot in space, was delivered to the International Space Station on space shuttle Discovery’s final flight this past February and finally powered up this week. “Sure wish I could move my head and look around,” Robonaut said in a tweet. (You can follow Robonaut’s progress on Twitter here: @Robonaut)

Sorry, R2, but that won’t happen until next week, when the robot will finally get to wiggle its fingers and move its arms and hands. R2 will still not be able to walk, as its legs are currently being designed and will not be attached to its torso until 2013.

 

And yet, who needs legs when you have a Centaur rover (above) as your whip? An improved set of wheels developed by GM and NASA, called the Centaur 2, was produced in 2010 and features prospecting sensors, excavation implements, and devices for converting planetary materials into usable products.

What’s the Significance?

While Robonaut may take a while to get its space legs, it is considerably cheaper than a human astronaut. And cheaper is the name of the game in 21st-century space exploration. While the U.S. space program has its sights on landing on an asteroid and making an eventual Mars landing, a permanent robot presence may be the most feasible option for future lunar exploration.

That is still a long way off, as the goal right now is for R2 to fulfill the mission it was primarily designed for–to serve as a human assistant, as explained in the video below.

Watch here:

Computer security: Blame game | The Economist

How to mimic human laxness with computers

TO ERR is human, but to foul things up completely takes a computer, or so the old saw goes. Although this may seem a little unfair to computers, a group of cybersecurity experts led by Jim Blythe of the University of Southern California are counting on there being at least some truth in the saying. They have created a system for testing computer-security networks by making computers themselves simulate the sorts of human error that leave networks vulnerable.

Mistakes by users are estimated to be responsible for as many as 60% of breaches of computer security. Repeated warnings about being vigilant, for example, often go unheeded as people fail to recognise the dangers of seemingly innocuous actions such as downloading files. On top of that, some “mistakes” are actually the result of deliberation. Users—both regular staff and members of the information-technology (IT) department, who should know better—often disable security features on their computers, because those features slow things down or make the computer more complicated to use.

Yet according to Dr Blythe, such human factors are often overlooked when security systems are tested. This is partly because it would be impractical to manipulate the behaviour of users in ways that would give meaningful results. He and his colleagues have therefore created a way of testing security systems with computer programs called cognitive agents. These agents’ motives and behaviours can be fine-tuned to mess things up with the same aplomb as a real employee. The difference is that what happened can be analysed precisely afterwards.
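The Economist doesn’t show the team’s agent code, but a minimal sketch of the general idea, assuming invented behaviours, motive parameters, and event names rather than anything from Dr Blythe’s actual system, might look like this:

```python
# Minimal sketch of a "cognitive agent" that injects human-like mistakes into a
# simulated workday, in the spirit of the testing approach described above.
# The motives, probabilities, and event names are illustrative assumptions.
import random
from dataclasses import dataclass, field

@dataclass
class CognitiveAgent:
    name: str
    # Tunable motives: higher impatience means security features get disabled
    # more often; higher carelessness means more risky downloads.
    impatience: float = 0.3
    carelessness: float = 0.2
    log: list = field(default_factory=list)

    def act(self) -> None:
        """Perform one simulated work step, possibly making a risky choice."""
        if random.random() < self.impatience:
            self.log.append("disabled antivirus to speed up a task")
        if random.random() < self.carelessness:
            self.log.append("downloaded an unverified attachment")
        self.log.append("completed routine work")

def run_simulation(agents, steps=100):
    """Run every agent for a number of steps and return only the risky events."""
    for _ in range(steps):
        for agent in agents:
            agent.act()
    return {a.name: [e for e in a.log if "routine" not in e] for a in agents}

if __name__ == "__main__":
    random.seed(42)
    staff = [CognitiveAgent("impatient_admin", impatience=0.6),
             CognitiveAgent("typical_user")]
    for name, risky in run_simulation(staff).items():
        print(f"{name}: {len(risky)} risky actions logged")
```

Tuning the motive parameters per agent, running many steps, and then inspecting the logs is roughly the “analyse precisely afterwards” step the article describes.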

Read more here: economist.com

My daughter is engaged to a robot

If your kid came home and told you she was planning to marry a robot, would you be accepting? Or do you think such a possibility is just folly? The author of this piece thinks we should take it seriously.

Guess Who’s Rolling to Dinner

August 13, 2011 by Mark Brady

In the 1967 movie Guess Who’s Coming to Dinner, Katharine Hepburn and Spencer Tracy have their white, liberal views tested when their daughter brings home Sidney Poitier as her new fiancé. Well, many parents raising children today are going to have their own views and values put to a similar test in the not-too-distant future. Only this time it won’t be a fiancé of a different color; he or she will be of a different techno-biological persuasion. Welcome to the singular world of … Loveotics.

Many parents already struggle with how much time their kids spend on the computer playing video games, living a Second Life and social networking. Just imagine what it’s going to be like when Johnny or Jane calls and breaks the news: they’ve decided to marry a bot. Sounds pretty far-fetched, doesn’t it? So do many developments in today’s world, until we raise our heads and our consciousness and begin to look around, drill down into the specifics, and discover astonishing things going on that we were completely unaware of.

Read the rest here: committedparent.wordpress.com

David Ferrucci, Lead Researcher for IBM’s Watson Project, moves beyond Jeopardy!


David Ferrucci speaks via satellite at a PBS panel

When IBM’s question-answering supercomputer Watson soundly defeated two Jeopardy! champions in February, it looked like curtains for humanity. Sure, computers had beaten people before—at chess, Scrabble, sometimes Go, among other games—but this was different. Jeopardy! is all about fighting through thickets of language—puns, idioms, homonyms, homophones, and other quirks of English that seem uniquely suited to humans. The fact that a computer could understand this wordplay, let alone thump some of the best people who’d ever played the game, felt like a moment of eclipse. And if, in a decade or two, the machines have taken over, we’ll have one man to thank: David Ferrucci, leader of the Semantic Analysis and Integration Department at IBM’s T.J. Watson Research Center.

Ferrucci, an artificial-intelligence researcher who specializes in teaching computers how to understand natural human language, has repeatedly downplayed the notion that Watson’s Jeopardy! victory portends humanity’s decline. Computers are getting better at understanding us, he says, but they still need a lot more training, and that training can only come from collaboration. As machines get better at finding information, Ferrucci says, they’ll “dialogue with the user trying to find out what they need,” and this back-and-forth will generate the precise answers that today’s search engines too rarely deliver.

Ferrucci says IBM is already in talks to implement parts of Watson for a few of its customers, but the really amazing stuff will take a few years to debut. In fields like medicine and law, humans—both professionals and the public—must sift through huge amounts of data to find answers to common problems. (Search Google for ways to treat your headache and you’re likely to come away thinking you’ve got a brain tumor.) A Watson-like machine would step in to do these “high-value” searches for us, Ferrucci says. Even more importantly, the computer might sit between you and your doctor as a kind of intelligent mediator. You’d enter your symptoms, the computer would dive deep into everything that’s known about your condition, and it would present possibilities to your doctor, including suggestions for follow-up questions she should ask you. “This would be something you’d use anywhere you’re trying to make high-value decisions,” Ferrucci says.
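The mediator scenario is only a vision in the article, but as a toy illustration of the workflow Ferrucci sketches (symptoms in, ranked possibilities and follow-up questions out), something like the following gives the flavour; the conditions, symptom lists, and questions are invented for the example and bear no relation to Watson’s actual pipeline.

```python
# Toy sketch of the "intelligent mediator" workflow: the patient reports
# symptoms, the system ranks candidate conditions by symptom overlap and
# suggests a follow-up question for the doctor. All data here is invented.
KNOWLEDGE_BASE = {
    "tension headache": {"symptoms": {"headache", "neck stiffness"},
                         "follow_up": "How many hours a day are spent at a screen?"},
    "migraine":         {"symptoms": {"headache", "nausea", "light sensitivity"},
                         "follow_up": "Is there any visual aura beforehand?"},
    "flu":              {"symptoms": {"fever", "headache", "fatigue"},
                         "follow_up": "Has anyone in the household been ill?"},
}

def rank_conditions(reported_symptoms):
    """Score each candidate condition by its overlap with reported symptoms."""
    reported = set(reported_symptoms)
    scored = []
    for condition, info in KNOWLEDGE_BASE.items():
        overlap = len(reported & info["symptoms"]) / len(info["symptoms"])
        scored.append((overlap, condition, info["follow_up"]))
    return sorted(scored, reverse=True)

for score, condition, question in rank_conditions(["headache", "nausea"]):
    print(f"{condition}: {score:.0%} match -- suggested follow-up: {question}")
```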

Slate’s list of the 25 Americans who combine inventive genius and practicality—our best real-world problem solvers. Read more about how we chose them.


Because Watson’s powers increase as computers get faster, and because it learns from its conversations with humans, it’s bound to keep getting better. Still, Ferrucci says that researchers continue to look for ways to teach machines language—the technology is still in its infancy, and thorny problems remain. “Language is just hard, and it’s hard for a fundamental reason—to the computer it’s just symbols, and for the human it’s a map to actual experiences,” Ferrucci says. Thanks to Watson, we’re finally getting closer to understanding each other.

Read a Q&A with David Ferrucci.

Read Ferrucci’s essay on the challenges of designing a computer that can understand human language.

Check out the rest of our technology Top Right:
Cynthia Breazeal, director of the Personal Robots Group at the MIT Media Lab.
Jeff Bezos, founder and CEO of Amazon.com.
Salman Khan, founder of Khan Academy.
Brian Tucker, president of GeoHazards International.


Farhad Manjoo is Slate’s technology columnist and the author of True Enough: Learning To Live in a Post-Fact Society. You can e-mail him at farhad.manjoo@slate.com

and follow him on Twitter.

Photo by Frederick M. Brown/Getty Images.


Rise of the Intelligent Machines (Part 1)

Will machines ever think?

 

Clockwork gears

The human mind is the most complex intelligence we know of. It represents the apex in a world filled with intellectual diversity. Perhaps because of this, most of us seem to find it exceedingly easy to dismiss any attempt to equate the behaviors of machines with intelligence.* After all, these are mere bits of metal and silicon, the descendants of clockwork dolls and mechanical calculators. But perhaps if we looked a little closer, we’d think differently.

 

We are far from the only biological intelligence on the planet. Primates and cetaceans are certainly intelligent, even sentient. Few of us would quibble with that. If we step back a little further, even “lower” mammals, birds, reptiles and amphibians display significant intelligence. Taken a bit further, we have little problem ascribing intelligence to insects and worms, even if it is, by our standards, rudimentary.

So how far back along our ancestral line can we take this? Among the earliest multi-celled animals, cnidarians had the first neural net, a precursor to the far more complex brains that would come later. Can we go further still? Single-celled organisms such as paramecia and cyanobacteria can move in response to light, heat, and chemical gradients. Is this intelligence? The better question might be: relative to what?

Read more here: pt5.psychologytoday.com