How A Self-Driving Car Actually Works

The Sensors

The self-driving car in question is equipped with a suite of sensors that act as its eyes. The sensors take the form of several small radar devices, sometimes termed ‘radar nodes,’ and a spinning laser scanner known as lidar, a laser-based counterpart to radar that is described in more detail below.

The two types of sensor work together to scan the surroundings for objects that might get in the way. Specifically, they look for pedestrians, bicyclists, cars, curbs and traffic signs. The data from these sensors is then processed by an onboard computer to generate a three-dimensional map of the road ahead.

Lidar

Lidar is an acronym for light detection and ranging. It works by sending out short pulses of laser light and measuring how long each pulse takes to return after bouncing off an object. Since the speed of light is known, that round-trip time can be converted directly into a distance. The strength of the returned pulse also tells the sensor how reflective the target is, which helps distinguish, say, a painted lane marking from bare asphalt. A typical spinning unit such as Velodyne’s 16-laser VLP-16 sweeps its lasers through a full 360 degrees several times per second, producing hundreds of thousands of distance measurements (3D points) every second.
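To make the time-of-flight idea concrete, here is a minimal Python sketch (not taken from any lidar vendor’s SDK; the function names are my own) that converts a measured round-trip time and beam angles into a 3D point:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_round_trip(t_seconds: float) -> float:
    """Convert a pulse's round-trip time into a one-way distance.

    The pulse travels to the target and back, so we halve the path length.
    """
    return SPEED_OF_LIGHT * t_seconds / 2.0

def to_cartesian(distance_m: float, azimuth_deg: float, elevation_deg: float):
    """Turn a (range, azimuth, elevation) measurement into an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return x, y, z

# A pulse that returns after ~200 nanoseconds hit something about 30 m away.
d = range_from_round_trip(200e-9)
print(round(d, 1), to_cartesian(d, azimuth_deg=45.0, elevation_deg=1.0))
```

Repeat this for every laser at every rotation angle and you get the 3D point cloud the onboard computer uses to build its map of the road ahead.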

Radar

Self-driving cars rely on a handful of sensor types: radar, cameras, lidar and ultrasonic sensors, each with its own strengths and weaknesses. Radar is a good fit for autonomous vehicles because it works in rain, fog and darkness and directly measures both the distance and the relative speed of objects. It is also easy to protect: radar units can be mounted behind the bumper or body panels, so even a minor impact with something like a tree branch is unlikely to damage the sensor.

Many of today’s cars already have adaptive cruise control (ACC), which uses radar to maintain a safe following distance from the vehicle ahead without driver input. ACC works best when traffic is relatively light, but it is a natural stepping stone toward semi-autonomous driving: the car already has an accurate picture of what is around it and how it should adjust its speed.
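As a rough illustration of what ACC-style logic looks like (a simplified sketch, not any manufacturer’s actual controller; the gains and thresholds are made up), the car can adjust its speed based on how far the radar-measured gap deviates from a desired time headway:

```python
def acc_speed_command(own_speed_mps: float,
                      gap_m: float,
                      lead_speed_mps: float,
                      time_headway_s: float = 2.0,
                      min_gap_m: float = 5.0,
                      gain: float = 0.5) -> float:
    """Return an adjusted speed command for a very simplified ACC controller.

    The desired gap grows with speed (a constant time headway plus a
    standstill margin). If the measured radar gap is smaller than desired,
    the car slows down; if it is larger, it speeds back up toward the
    lead vehicle's speed.
    """
    desired_gap = min_gap_m + time_headway_s * own_speed_mps
    gap_error = gap_m - desired_gap
    # Blend toward the lead vehicle's speed, corrected by the gap error.
    command = lead_speed_mps + gain * gap_error
    return max(0.0, command)

# Example: we are doing 25 m/s, the radar reports a 40 m gap to a car doing 23 m/s,
# so the controller asks us to slow down until the gap opens back up.
print(acc_speed_command(own_speed_mps=25.0, gap_m=40.0, lead_speed_mps=23.0))
```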

Ultrasonics

Ultrasonic sensors measure distance by sending out high-frequency sound waves that bounce off nearby objects and return to the sensor; tracking how that distance changes over time also gives an estimate of closing speed. The technology is cheap and reliable, but its range is short and its resolution coarse. That makes ultrasonics well suited to low-speed tasks such as parking or detecting that you are about to bump into something, and much less useful than radar, lidar or cameras for spotting debris blocking a lane or a pedestrian further down the road.
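The ranging math is the same time-of-flight idea as lidar, just with the speed of sound instead of the speed of light. Here is a small sketch of how a parking sensor might turn echo time into a warning level (the thresholds are illustrative, not from any real parking-assist system):

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at ~20 °C

def echo_to_distance(echo_time_s: float) -> float:
    """Convert an ultrasonic echo's round-trip time into a distance in metres."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def parking_warning(distance_m: float) -> str:
    """Map a measured clearance to a warning level (thresholds are made up)."""
    if distance_m < 0.3:
        return "STOP"
    if distance_m < 1.0:
        return "fast beeps"
    return "clear"

# An echo that returns after 4 ms corresponds to roughly 0.7 m of clearance.
d = echo_to_distance(0.004)
print(round(d, 2), parking_warning(d))
```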

Camera

Cameras are used to recognise traffic lights, street signs, lane markings and pedestrians, as well as other vehicles. Combined with GPS and map data, camera imagery helps the vehicle pin down its position within its lane and relative to the edge of the road. Distances to other vehicles can be estimated from camera images and cross-checked against the other sensors.
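One common way a single camera can estimate distance is the pinhole-camera relationship: if you know roughly how tall an object really is and how tall it appears in pixels, the distance is approximately the focal length (in pixels) times the real height divided by the apparent height. A minimal sketch, with made-up example values:

```python
def distance_from_apparent_size(focal_length_px: float,
                                real_height_m: float,
                                pixel_height: float) -> float:
    """Estimate distance to an object of known size using the pinhole model.

    distance = focal length (in pixels) * real height / apparent height in pixels.
    """
    return focal_length_px * real_height_m / pixel_height

# A car roughly 1.5 m tall that spans 60 pixels, seen by a camera with a
# 1200-pixel focal length, is about 30 m away.
print(distance_from_apparent_size(focal_length_px=1200.0,
                                  real_height_m=1.5,
                                  pixel_height=60.0))
```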

Central computer

The central computer is the brain of the self-driving car. It is installed in a protected spot inside the vehicle and connected to every other component of the system. The brain of your self-driving car can be as small as a shoebox, and it is completely customizable and programmable: you can program it to do anything from reading you a bedtime story to playing music for your road trips.

How much computing power you need depends on which features you want the car to have; a larger, more capable computer can run more software and handle more sensors at once, so it gives you more options.

During development the computer can run from a bench power supply or a battery pack; once installed in the car, it draws power from the vehicle’s own electrical system.

Machine learning

Machine learning is a subfield of computer science that focuses on creating algorithms that allow computers to learn from data. Rather than explicitly programming a computer for every possible situation, machine learning lets computers find patterns in large amounts of data and improve their accuracy over time. For example, a machine learning algorithm can be trained on thousands of images of pedestrians walking around street corners. When that same algorithm is later shown a new image and has to decide whether a pedestrian is present, it applies the patterns it learned from those past examples to make the call.
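To give a flavour of what “training on thousands of pedestrian images” looks like in code, here is a deliberately tiny sketch using PyTorch. The random tensors stand in for real labelled camera crops, so this shows only the shape of the workflow, not a usable detector:

```python
import torch
import torch.nn as nn

# Stand-in data: 64 fake 3x64x64 image crops, each labelled
# 1 (pedestrian) or 0 (no pedestrian). Real systems use large labelled datasets.
images = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 2, (64,))

# A very small convolutional classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),  # two outputs: "pedestrian" vs "not"
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each pass over the data nudges the weights to reduce classification error;
# with real data and many epochs, accuracy improves over time.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```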

In self-driving cars, machine learning is used to identify and classify objects on the road, such as traffic lights or other cars, in real time. Machine learning enables self-driving cars to perceive their surroundings (e.g., locating other vehicles), plan (e.g., deciding when it is safe to change lanes), and act (e.g., steering into the new lane).
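The perceive-plan-act split maps naturally onto a simple control loop. The sketch below is purely illustrative; the function bodies are placeholders I invented, not any real autonomy stack:

```python
import time

def perceive(sensor_frame):
    """Placeholder: turn raw sensor data into a list of detected objects."""
    return [{"type": "car", "lane": "left", "gap_m": 12.0}]

def plan(detections, goal="change_lane_left"):
    """Placeholder: decide on a manoeuvre given detections and a goal."""
    left_gaps = [d["gap_m"] for d in detections if d.get("lane") == "left"]
    safe = all(gap > 20.0 for gap in left_gaps)  # illustrative threshold
    return "change_lane" if safe else "hold_lane"

def act(decision):
    """Placeholder: translate a decision into steering/throttle commands."""
    print(f"executing: {decision}")

# The real loop runs many times per second on the central computer.
for _ in range(3):
    detections = perceive(sensor_frame=None)
    decision = plan(detections)
    act(decision)
    time.sleep(0.1)  # real stacks run at a fixed, much faster rate
```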

Software and algorithms

It’s a happy day for you and your family! You just bought a brand new self-driving car! They say it can drive all by itself. What sorcery is this? Well, it’s really not that complicated, as long as you have the right software and algorithms. The sensors in the car send data to these programs, which tell the car what to do in order to drive safely.

When Google first started tinkering with self-driving cars back in 2009, they wrote their own simulator so they could test out their software and algorithms without risking any lives. Some of the challenges that Google faced were:

• figuring out how to handle stop signs (a simplified version of this logic is sketched after this list)

• making sure that the car knew not to run over pedestrians or dogs (who don’t always look where they are going)

• teaching the car how to park safely at different angles and on different surfaces
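As a taste of what “handling a stop sign” involves, here is a toy state machine, entirely my own simplification rather than Google’s actual logic, that a planner might step through:

```python
from enum import Enum, auto

class StopSignState(Enum):
    APPROACHING = auto()
    STOPPING = auto()
    WAITING = auto()
    PROCEEDING = auto()

def next_state(state, speed_mps, distance_to_line_m, wait_s, intersection_clear):
    """Advance a toy stop-sign state machine by one tick (thresholds illustrative)."""
    if state is StopSignState.APPROACHING and distance_to_line_m < 15.0:
        return StopSignState.STOPPING          # begin braking toward the line
    if state is StopSignState.STOPPING and speed_mps < 0.1:
        return StopSignState.WAITING           # we have come to a full stop
    if state is StopSignState.WAITING and wait_s >= 1.0 and intersection_clear:
        return StopSignState.PROCEEDING        # stopped long enough and it's clear
    return state

state = StopSignState.APPROACHING
state = next_state(state, speed_mps=8.0, distance_to_line_m=12.0, wait_s=0.0, intersection_clear=False)
state = next_state(state, speed_mps=0.0, distance_to_line_m=0.2, wait_s=0.0, intersection_clear=False)
state = next_state(state, speed_mps=0.0, distance_to_line_m=0.2, wait_s=1.5, intersection_clear=True)
print(state)  # StopSignState.PROCEEDING
```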

The software is smart enough that when it encounters an unusual situation, it can ask its human co-pilot what to do next (just like a real driver might). While researching this post I also ran into one of the field’s thornier questions: when every option carries some risk, for example braking hard in the lane versus swerving off the road, how should the software be programmed to choose? That trade-off is still actively debated by manufacturers, regulators and ethicists.

Apps for passengers and the general public

The applications on a self-driving car exist primarily for the benefit of passengers and the general public. Self-driving car apps let passengers connect their smartphones to the car to communicate with it or customize aspects of their ride, and they give the general public a way to interact with self-driving cars they encounter on the road or parked at the curb.

Passengers can use their phones, in conjunction with the car’s application system, to control certain features of the vehicle, such as climate control and music volume. Passengers riding in self-driving cars can also use smartphone apps to send ride invitations or pass navigation instructions directly from one phone or computer screen to another, with no verbal communication needed. Apps can even handle payment for rides through linked mobile wallets.
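As an illustration of what such an app-to-vehicle command might look like under the hood, here is a hypothetical HTTP request; the endpoint, fields and token below are all invented for this example, and no real vehicle API is being described:

```python
import json
import urllib.request

# Hypothetical endpoint and token; a real vehicle platform would define its own API.
VEHICLE_API = "https://example.com/vehicles/abc123/climate"
payload = {"target_temp_c": 21.5, "fan_speed": "auto"}

request = urllib.request.Request(
    VEHICLE_API,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <passenger-app-token>"},
    method="POST",
)
# urllib.request.urlopen(request)  # would send the command in a real integration
print(request.full_url, payload)
```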

Apps that interact with a driverless car’s systems also require the passenger’s permission before sending information about them (or about anything else) outside the vehicle: the occupants retain control over what data, if any, leaves the vehicle.

Self-driving cars are getting closer to being a reality.

If you’ve been keeping up with the news, then you know that self-driving cars are getting closer to being a reality. And this is big. Sure, it’s not as big as discovering fire or inventing cheese, but it’s still pretty big.

As revolutionary as the self-driving car is on its own, the implications for future tech are just as exciting. As self-driving cars get smarter and more reliable, we’ll see them learn from each other across a shared network. So if one car learns the hard way that a certain situation leads to an accident, that lesson can be shared across the network so the rest of the fleet avoids repeating it.

And then eventually they’ll overthrow us—just kidding! Maybe.
