Awesome, not awesome.
“A team from AI pharma startup Insilico Medicine, working with researchers at the University of Toronto, took 21 days to create 30,000 designs for molecules that target a protein linked with fibrosis (tissue scarring). They synthesized six of these molecules in the lab and then tested two in cells; the most promising one was tested in mice. The researchers concluded it was potent against the protein and showed “drug-like” qualities. All in all, the process took just 46 days… [which is huge since] getting a new drug to market is hugely costly and time consuming: it can take 10 years and cost as much as $2.6 billion, with the vast majority of candidates failing at the testing stage” — Charlotte Jee, Writer Learn More from MIT Technology Review >
“Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists… AI developers are working to build systems that can detect and combat fake audio, but the voice-mimicking technology is evolving rapidly.” — Drew Harwell, Reporter Learn More from The Washington Post >
What we’re reading.
1/ A once secret Marine Corps program called “Sea Mob” is paving the way for the introduction of artificially intelligent weapons to the battlefield — ones that don’t require human intervention to kill people. Learn More from The Atlantic >
2/ We must intentionally code inclusivity into algorithms if we want to build technology that doesn’t oppress people who have already been oppressed. Learn More from WIRED >
3/ Two nuclear policy wonks advocate for the US giving control of its nukes to an artificial intelligence system that would act as a “dead hand,” returning fire automatically in the event that its human controllers are killed. Learn More from VICE >
4/ Today’s approach to training AI might limit what machines are capable of, and could ultimately prove more dangerous than helping them develop a conceptual understanding of the world. Learn More from The New York Times >
5/ Facebook’s leadership is worried that fake videos created with AI will be used to spread misinformation in an attempt to sway the next US presidential election. Learn More from MIT Technology Review >
6/ Machine learning systems designed to identify someone’s gender based on a photograph are so complex that their creators are struggling to understand why they keep failing. Learn More from Pew Research Center >
7/ A new AI system passed an 8th-grade-level exam with a grade of 90%, indicating that researchers have made a major breakthrough in the technology. Learn More from The New York Times >
What we’re building.
We’ve started writing a newsletter called Noteworthy in Tech.
Wake up every Sunday morning to the week’s most noteworthy stories in Tech waiting in your inbox. Read the Noteworthy in Tech newsletter >
Links from the community.
First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >
The secret “Sea Mob” weapon and AI on the battlefield was originally published in Machine Learnings on Medium, where people are continuing the conversation by highlighting and responding to this story.