Argon is one of the sponsors of Hack Cambridge 2017. The event runs over the weekend of 28-29 January 2017. We'll be fielding a team of engineers as mentors, and we're hoping to work with teams to create some exciting examples of machine learning.
There’s been a lot of buzz over the last year about machine learning and neural networks. It’s behind self-driving cars, improvements in speech recognition, automatic classification of images, website recommendation systems, many big-data analysis tasks and voice assistants such as Google Assistant or Siri.
There was also the announcement that AlphaGo had cracked the incredibly difficult task of winning at Go. There's a long way to go, but we are starting to understand how the brain works and to harness machine learning to create meaningful intelligent systems. As well as technical interest there is a lot of commercial activity, with many new start-ups and some high-profile exits.
Here at Argon we've long been fascinated by this area. We've been using it in image processing, for instance to give real-time feedback while composing a photo of your face. Think of taking a passport photo and all the things you mustn't do, like smiling or having any hair obscuring your eyes. See http://www.argondesign.com/case-studies/2016/feb/18/image-quality-analysis-machine-learning/.
One rather fun area is using neural networks to invent results in the style of training material. They can compose music in a particular style or create text that follows the style of the training works. There’s a nice introduction to this by Andrej Karpathy at http://karpathy.github.io/2015/05/21/rnn-effectiveness/.
We offer student internships and usually have a couple of undergraduate students working with us over the summer. Occasionally we offer work experience to younger people who have just finished their GCSEs. One such person was Tommy, who came and worked with us for two weeks last summer. We taught him about neural networks and helped him to build some examples that “created text in the style of”. You can see his results at http://www.argondesign.com/case-studies/2016/nov/23/neural-networks-investigation/.
For Hack Cambridge 2017 we’re putting forward the theme of using machine learning to create an audio sentence finisher. We'll bring along some Echo Dots to provide the voice I/O and our engineers will be pitching in to help teams who want to explore and create some interesting examples in this area.
The idea is to connect a speech-to-text engine to an LSTM machine learning network to a text-to-speech engine. I can then speak to the device for a while (or select a personality trained from a pre-existing corpus of text). Then I can say half a sentence and allow the "Sentence Finisher" to predict and say the rest.
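The three-stage pipeline described above can be sketched as plain Python with stub functions standing in for each stage. Everything here is hypothetical scaffolding: a real system would call an actual speech recogniser, a trained LSTM, and a speech synthesiser in place of the stubs.

```python
def speech_to_text(audio):
    """Stub: a real system would run a speech recogniser on the audio."""
    return audio["transcript"]  # pretend the audio object carries its transcript

def finish_sentence(half_sentence):
    """Stub: a trained LSTM would predict the continuation here."""
    return half_sentence + " ..."

def text_to_speech(text):
    """Stub: a real system would synthesise spoken audio here."""
    return {"spoken": text}

def sentence_finisher(audio):
    """Wire the three stages together: speech-to-text -> LSTM -> text-to-speech."""
    prompt = speech_to_text(audio)
    completed = finish_sentence(prompt)
    return text_to_speech(completed)
```

The point of the sketch is the wiring, not the stages themselves: each stage is independent, so a team can swap in a better model for any one of them without touching the other two.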
Perhaps, in the style of Trump, finish the sentence "I've never told anyone that I..." Or maybe it could be trained on the spoken contributions of Theresa May scraped from https://hansard.parliament.uk/ and finally tell us what Brexit means…
One simple way of doing the audio input, speech-to-text and then text-to-speech and audio output is to use a device like Amazon’s Echo Dot. You can add custom “skills” to the device which allow you to receive some text, run your own processing on it in the cloud and then get the device to speak your answer.
As an example, I made a simple skill called King James that completed sentences in the style of the King James Bible. I deliberately used a very basic sentence-generating algorithm: it analysed the corpus and built a data structure recording, for each word used, the words that follow it and their relative probabilities, then generated text one word at a time according to those probabilities. Here's a demo.
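The word-following algorithm described above is a first-order Markov chain, and a minimal version fits in a few lines. This is a sketch rather than the actual King James skill code, but it implements the same idea: count which words follow each word, then sample the next word in proportion to those counts.

```python
import random
from collections import Counter, defaultdict

def build_model(corpus):
    """For each word in the corpus, count which words follow it and how often."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(model, start, length=10, rng=None):
    """Walk the model, picking each next word with probability
    proportional to how often it followed the previous word."""
    rng = rng or random.Random()
    out = [start]
    for _ in range(length - 1):
        counter = model.get(out[-1])
        if not counter:
            break  # dead end: the last word never had a successor
        choices, weights = zip(*counter.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)
```

Because the model only ever looks one word back, it produces locally plausible but globally incoherent text, which is precisely the weakness a proper LSTM addresses.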
The I/O is great, but the sentence generator is awful. That's where the hackathon comes in. Use proper machine learning, not a simple probability model, and see what you can achieve!
We’ll be bringing the Echo Dots and some experience of machine learning and we're looking forward to helping you make something truly exciting.
See the post-Hack report: http://www.argondesign.com/news/2017/feb/6/after-hackathon/