Introducing Dr Radim Tylecek, our machine-vision specialist

Dr Radim Tylecek is one of the newest members of the Boundary team. As a machine-vision specialist, he is helping to develop our next product: an AI camera that will intelligently recognise threats and react to them.

The camera will be available in late 2021 and will integrate with all Boundary smart alarms.

Here we ask him about himself, the camera project, and what the future holds for machine vision (MV).

Why did you choose a career in machine vision?

Because MV is cool! As a keen photographer, I’ve always been interested in visual communication. This led me to wonder whether machines could ever understand images in a similar way to humans. Then I discovered there’s actually an area that deals with this, called machine vision.

Can you give us an outline of your career to date?

I finished my master’s in electrical engineering and informatics at the Czech Technical University in Prague, then pursued a PhD in artificial intelligence at the Centre for Machine Perception at the same university. I focused on MV research, but also taught related subjects. After completing my PhD in 2016, I moved to the University of Edinburgh to work at its School of Informatics as a research associate.

What have been your favourite previous projects, before Boundary?

Previously I helped build an autonomous gardening robot that uses cameras to navigate around the garden and trim bushes. The robot was aptly called TrimBot and used algorithms to find its way around the garden, detect bush shapes and then trim them. Bringing the robot to life was a challenge, but I got there in the end.

What is Machine Learning (ML)?

Traditional programming relies on knowledgeable developers to write specific instructions for the computer to perform the desired task, e.g. recognising oranges by encoding the rules that they must be round and orange in colour.

In contrast, machine learning does not require explicit instruction and instead learns such properties automatically from a set of examples, e.g. a dataset of orange images. The main goal of ML is to learn rather than be taught, so the application can handle arbitrary inputs from the real world.
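The orange example can be sketched in a few lines of Python. This is purely illustrative: the hand-written rule and the tiny nearest-centroid "model" below are stand-ins of my own invention, not anything Boundary actually uses, and the features (roundness, hue) are assumed for the sake of the example.

```python
# Traditional programming: the developer encodes the rule explicitly.
def is_orange_by_rule(roundness, hue):
    # "oranges must be round and have orange colour" (hue in degrees)
    return roundness > 0.8 and 20 <= hue <= 40

# Machine learning: learn the rule from labelled examples instead.
# A minimal nearest-centroid classifier stands in for a real model.
def train(examples):
    # examples: list of ((roundness, hue), label) pairs
    sums = {}
    for (r, h), label in examples:
        acc = sums.setdefault(label, [0.0, 0.0, 0])
        acc[0] += r
        acc[1] += h
        acc[2] += 1
    return {lbl: (a[0] / a[2], a[1] / a[2]) for lbl, a in sums.items()}

def predict(centroids, roundness, hue):
    # Assign the label of the closest learned centroid.
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - roundness) ** 2
                             + (centroids[lbl][1] - hue) ** 2)

dataset = [((0.9, 30), "orange"), ((0.85, 25), "orange"),
           ((0.3, 120), "not orange"), ((0.4, 200), "not orange")]
model = train(dataset)
print(predict(model, 0.88, 28))  # a round, orange-hued object -> "orange"
```

The point is the shift of effort: in the first function a human wrote the thresholds; in the second, the thresholds fall out of the data, so adding new examples changes behaviour without rewriting code.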

[Image: A traffic camera using AI to recognise cars]


Tell us about your role at Boundary.

My role is to develop algorithms for a smart outdoor camera that will be able to detect malicious activity at the boundary of your property. This will enable the security system to respond to burglars quickly, preventing them from carrying out any theft or damage.

How are the different areas of machine vision being used?

Video streams are used to detect motion in the camera’s field of view.

Once motion is detected, image recognition is used to understand what is moving – a person, animal, car or other object. We are particularly interested in people and their intentions, which means focusing on a person’s appearance, the objects they are carrying, and so on.

Finally, dynamic video analysis is used to classify the person’s behaviour as suspicious or anomalous. This is how the camera knows if the stranger is a burglar or the postie!
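The three stages above can be wired together in a simplified sketch. Everything here is a hypothetical stand-in, not the production pipeline: frames are plain 2D grids of pixel intensities, motion detection is naive frame differencing, and the recognition and behaviour stages are fixed-rule placeholders for real learned models. The thresholds are assumptions chosen for the toy data.

```python
MOTION_THRESHOLD = 10   # assumed per-pixel change threshold
MIN_CHANGED_PIXELS = 3  # assumed minimum changed pixels to count as motion

def detect_motion(prev_frame, frame):
    """Stage 1: frame differencing over the camera's field of view."""
    changed = sum(
        1
        for prev_row, row in zip(prev_frame, frame)
        for p, q in zip(prev_row, row)
        if abs(p - q) > MOTION_THRESHOLD
    )
    return changed >= MIN_CHANGED_PIXELS

def recognise(region):
    """Stage 2: stand-in for an image-recognition model that labels
    what is moving (person, animal, car...). Here: a fixed lookup."""
    labels = {"tall": "person", "small": "animal", "wide": "car"}
    return labels.get(region, "other")

def classify_behaviour(track):
    """Stage 3: stand-in for dynamic video analysis. A person lingering
    for many frames is flagged as suspicious; brief passes are not."""
    return "suspicious" if len(track) > 5 else "normal"

# Run the stages in sequence on one pair of frames.
prev = [[0, 0], [0, 0]]
curr = [[50, 60], [0, 70]]
if detect_motion(prev, curr):
    label = recognise("tall")              # pretend the detector saw a tall blob
    verdict = classify_behaviour([1] * 8)  # it lingered for 8 frames
    print(label, verdict)
```

The structure is the interesting part: each stage only runs when the previous one fires, which is how a real camera keeps the expensive recognition and behaviour models off most of the time.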

What’s the relationship between machine learning and AI?

As hinted above, ML enables an AI system to learn from experience, in a similar way to humans. A child learns what an orange is from their mother, who shows them the fruit repeatedly while saying “this is an orange”. This is known as supervised learning, but there are also approaches that require little or no supervision for certain tasks; for example, reinforcement learning can teach a machine to play games through repeated interactions with its environment.
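The reinforcement-learning idea can be shown with a toy: tabular Q-learning on a tiny “game” where an agent walks along positions 0 to 4 and only reaching position 4 pays a reward. Nobody labels the moves; the agent learns purely from repeated interaction. This is a minimal textbook sketch, not any particular library or product code, and the learning-rate, discount and exploration values are arbitrary assumptions.

```python
import random

random.seed(0)
N = 5                       # positions 0..4, reward only at position 4
ACTIONS = (-1, +1)          # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

for _ in range(500):        # repeated interactions with the environment
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit learned values, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt = min(max(s + a, 0), N - 1)
        reward = 1.0 if nxt == N - 1 else 0.0
        # The Q-learning update rule.
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, read off the greedy policy for each non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
print(policy)
```

After enough episodes, every state prefers the +1 action: the reward at the end has propagated backwards through the Q-table, which is the whole trick of learning from delayed feedback.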

How long do you expect it to take before you have a prototype?

It will be several months before the prototype system is ready. Its evolution into a product integrated with the existing Boundary alarm will take a similar amount of time. In total, we expect about a year of research and development.

There are other cameras that can recognise faces, how will Boundary’s camera be different?

Existing smart cameras feature image recognition, but they still tend to produce a large number of false alarms, which can be annoying for users. Our AI will go beyond image recognition, using machine learning and other MV techniques to reduce the false-positive rate.

Are there any challenges/barriers to executing the project?

We need to teach our camera using real-life footage, and it’s rare for burglaries, or attempted burglaries, to be caught on camera. It’s also difficult to obtain that footage (with permission to use it in our research). We are therefore grateful for all footage that our followers provide through the customer participation programme.

This is even more important for the latest deep learning approach, which relies on large datasets to train complex artificial neural networks.

Are there any uses other than protecting the home?

There’s the potential to use the camera for deliveries. Currently, with doorbell cameras, you have to open your app and tell the courier where to leave the package. The Boundary camera might be able to play a recorded message instead – without disturbing you with an alert. This is only a possibility though; the current focus is security!


Do you know of any current examples of ML being used in consumer products?

Nowadays, ML is the invisible driving force behind improvements to many electronic products. For example, your smartphone’s camera uses ML to give you the best-looking image. It’s also used in speech recognition, language translation and word prediction when you’re writing an email. It’s already omnipresent in your digital world.

What’s the next BIG breakthrough?

Machine perception plays a major role in self-driving cars. It’s also being developed for recognising diseases.

What I can see coming sooner is more augmented reality. The software is ready, we’re just waiting for an optical device, like a VR headset, that’s sophisticated enough to convincingly render a 3D world in front of you.

How many years are we away from a robot like the one in Ex Machina?


I think we will see a robot with similar motion capabilities within our lifetime; on that front, I have some faith in mechanical engineering and control electronics improving on current bulky robots like Atlas.

Regarding its artificial intelligence, I am quite sceptical that we will get anywhere near human-brain capability in the same timeframe. Currently, the largest deep neural networks have somewhat under a billion parameters, which corresponds roughly to a bee’s brain. The human brain has a million times more synapses than that, which seems hardly achievable with the conventional computing technology we have today. A breakthrough like large-scale quantum computing could change this – but I’d bet we won’t get there sooner than useful fusion power.

Have you got any heroes?

I will name Geoffrey Hinton, also known as the “godfather of AI and deep learning”, who laid the foundations for the current boom in artificial intelligence. I recently discovered he graduated from the University of Edinburgh.

We plan to launch our AI camera in 2021, integrating with the Boundary smart alarm. The alarm itself is launching later this year, in August.

The Boundary smart alarm is available to pre-order now. Delivery will be in early 2021.