We examine all the info-packed Monday announcements from this year’s Microsoft BUILD event. Plus, robocalls are still a thing and unfortunately they’re getting worse, and Spotify is now a publicly traded company.
Starring Tom Merritt, Sarah Lane, Roger Chang and Lamarr Wilson.
Using a Screen Reader? Click here
Multiple versions (ogg, video etc.) from Archive.org.
Please SUBSCRIBE HERE.
Subscribe through Apple Podcasts.
Follow us on Soundcloud.
A special thanks to all our supporters–without you, none of this would be possible.
If you are willing to support the show, you can give as little as 5 cents a day on Patreon. Thank you!
Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!
Big thanks to Mustafa A. from thepolarcat.com for the logo!
Thanks to Anthony Lemos of Ritual Misery for the expanded show notes!
Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit
To read the show notes in a separate page click here!
- Quick Hits
- (00:35) Fitbit gets quick replies and menstrual cycle tracking | cnet
- (00:55) Instagram code reveals upcoming music feature | tech crunch
- (01:15) Google’s IoT platform Android Things is open to all developers | engadget
- (01:50) A serious security vulnerability has been found in 7-Zip | pc gamer
- More Top Stories
- (02:15) ZTE asks U.S. Commerce Department to suspend business ban | reuters
- (04:10) Drive.ai will launch an autonomous ride-hailing service in Texas | the verge
- (07:45) The big music labels are selling big chunks of their Spotify stakes | recode
- (10:00) Yes, It’s Bad. Robocalls, and Their Scams, Are Surging. | the new york times
- Discussion Story (13:50): Microsoft Build
- IoT Edge
- Without its own phone OS, Microsoft now focuses on its Android Launcher and new ‘Your Phone’ experience | tech crunch
- Microsoft launches a unified API for all of its AI speech services | tech crunch
- Microsoft’s Project Ink Analysis lets developers add handwriting recognition to their apps | tech crunch
- Microsoft brings more AI smarts to the edge | tech crunch
- Microsoft and DJI team up to bring smarter drones to the enterprise | tech crunch
- Microsoft launches Project Brainwave, its deep learning acceleration platform | tech crunch
- Microsoft’s new IntelliCode is a smarter IntelliSense | tech crunch
- Microsoft Kinect lives on as a new sensor package for Azure | tech crunch
- Microsoft continues its quest to bring machine learning to every application | ars technica
- Microsoft shows off Alexa-Cortana integration, launches sign-up website for news | tech crunch
- Microsoft overhauls its conversational AI chatbot tools | tech crunch
- Microsoft commits $25M to its AI for Accessibility program | tech crunch
- Microsoft’s meeting room of the future is wild | the verge
- Microsoft Pay comes to Outlook, integrating Stripe, Braintree, Sage, Wave and more | tech crunch
- Microsoft brings its Visual Studio App Center lifecycle management tool to GitHub | tech crunch
- Microsoft taps mixed reality for better collaboration and user support | engadget
- Thing of the Day
- Messages of the Day
- (26:25) Robert – Augmented Reality
- Today’s Contributors
2 thoughts on “DTNS 3277 – Microsoft Edges Toward IoT and AI”
About the unverified story that the prototype autonomous car that hit and killed a jaywalking pedestrian was tuned to a high threshold to avoid false-positive detection of obstacles.
Here’s what I think is weird about that story (so maybe the story isn’t true).
In order to have autonomous driving you need a very capable system for visual recognition.
For comparison: some time ago Google demonstrated a trained AI that, when presented with any picture, could tell whether there was a dog in it and, if so, identify the breed.
An autonomous car must have a similar capability, but far more versatile. First and foremost, the system must be able to tell, in a fraction of a second, whether it’s looking at a human (walking, cycling) or something else, and within the category ‘something else’ it must be able to assess whether it needs to avoid a collision (and that is where tuning may come in).
But whatever you do, you need to prioritize recognition of a human being.
According to the story, the prototype autonomous system has a single tuning parameter governing the entire decision between ‘avoid collision if possible’ and ‘false positive’.
If true, that would mean the prototype autonomous system does not prioritize recognition of a human being. In that case the Uber prototype would be fundamentally flawed, and a deep redesign would be needed.
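The distinction being argued here, one global sensitivity knob versus thresholds that prioritize humans, can be sketched in a few lines. This is purely illustrative: the class names, confidence values, and thresholds below are invented for the example and do not describe any real autonomous-driving system.

```python
# Hypothetical sketch: per-class detection thresholds, so a single global
# sensitivity knob is not the only thing deciding whether the car reacts.
# All names and numbers are illustrative assumptions, not real system values.

DEFAULT_THRESHOLD = 0.80   # generic obstacles: tuned high to avoid false positives
CLASS_THRESHOLDS = {
    "pedestrian": 0.30,    # humans get a deliberately low threshold (prioritized)
    "cyclist": 0.30,
    "vehicle": 0.60,
}

def should_avoid(detections):
    """Return True if any detection clears its class-specific threshold.

    `detections` is a list of (class_name, confidence) pairs, as a
    visual-recognition system might emit for one frame.
    """
    for class_name, confidence in detections:
        threshold = CLASS_THRESHOLDS.get(class_name, DEFAULT_THRESHOLD)
        if confidence >= threshold:
            return True
    return False
```

In this sketch, a low-confidence pedestrian detection still triggers avoidance, while an equally low-confidence generic obstacle is filtered out as a likely false positive; a single shared threshold, as the story describes, cannot make that distinction.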
It was not unverified, it was anonymous. Multiple outlets confirmed it to their satisfaction but were not at liberty to name the source.
Also, the story did not say that collision avoidance was entirely based on that one function, but that the failure likely occurred because of the function’s sensitivity level.