Oftentimes, sci-fi movie storylines start with a robot that realizes its own role, and the roles of other similar robots in a colony.
The plot then progresses through collaboration between these robots as they deviate from their human-designed objectives, ranging from ignoring human safety to outright conspiracies for control.
Witnessing the current rapid development of AI, one might notice a major similarity between sci-fi and reality. This isn’t to suggest that the fate of artificial intelligence is destined to follow that path, but it seems that this stage of “autonomous collaboration” is already being planned.
Imagine an automated city with modules constantly communicating with each other: your car, the road, the traffic signal, and all the neighboring cars.
No need to check that noisy radio channel you hate while waiting for traffic congestion to dissolve, no red lights, and no time spent finding your spot in a parking lot: all of these tasks could be coordinated autonomously, without any human intervention.
The idea of autonomous vehicles (AV) collaboration depends on two very exciting concepts: vehicle-to-vehicle (V2V) communication and vehicle-to-infrastructure (V2I) communication.
Basically, the two provide the communication protocols vehicles need to talk to each other and to neighboring infrastructure in range. It’s like their own language: if an AV wants to change lanes while speeding up on a road, it knows which signal to send to which vehicle in order to arrange the maneuver.
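To make this “language” concrete, here is a minimal sketch of what a lane-change announcement might look like on the wire. The message fields and names are invented for illustration; real V2V stacks use standardized binary message sets rather than JSON:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class V2VMessage:
    """A hypothetical V2V payload announcing an intended maneuver."""
    sender_id: str
    intent: str          # e.g. "lane_change_left" (invented vocabulary)
    position_m: float    # longitudinal position on the road segment, metres
    speed_mps: float     # current speed, metres per second

def encode(msg: V2VMessage) -> bytes:
    # Serialize to bytes: on the air, everything ends up as 0s and 1s.
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(raw: bytes) -> V2VMessage:
    # Reconstruct the message on the receiving vehicle's side.
    return V2VMessage(**json.loads(raw.decode("utf-8")))

msg = V2VMessage("AV-42", "lane_change_left", 120.0, 27.5)
assert decode(encode(msg)) == msg  # round-trip survives intact
```

A real deployment would add authentication and timestamps, but the core idea is the same: a shared, machine-readable vocabulary of intents.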
A very exciting application, for instance, is collective perception, where vehicles share sensor readings and camera images with each other. Suppose an AV is computing its own local path, while a gigantic truck is blocking its camera from inferring what’s ahead on the road.
In such a case, the AV directly asks the truck—in fluent AV tongue—to share the state of the road ahead, as if the AV were seeing through the truck. All of this happens in the beautiful digital alphabet of just two digits—0 and 1!
In this article, we investigate the possibility of autonomous vehicle collaboration, its enabling technologies and challenges, and how it may affect our daily lives. We also go through some of the promising applications and track their current progress.
Autonomous driving
The term “autonomous vehicle” represents an area of research that’s fascinated the world since the turn of the century. Specifically, it’s been developed and optimized in the last 10 to 20 years.
This boom in autonomous driving technology is due to the exploitation of various sensor capabilities such as radar, LIDAR, and laser scanners, in addition to progress in real-time machine learning, accompanied by higher computational efficiency and faster hardware.
The primary idea behind self-driving cars is to use the aforementioned sensors to map a clear vision of the environment surrounding the vehicle. With the help of artificial intelligence and control algorithms, the vehicle can then navigate through the environment and perform certain tasks, such as object avoidance, local and global path planning, parking, and navigation. Cars are now capable of doing all of these tasks autonomously with little to no human intervention at all.
Although there are currently no fully-automated vehicles operating legally (as they are still under testing and development), it’s estimated that it shouldn’t take long for level 4 AVs to drive on the streets (see below for more on levels of autonomy).
Governments are working to ease the spread of AVs in their cities. Countries including the Netherlands, Singapore, Japan, and the UAE are paving the way for autonomous driving and passing laws to support the deployment of self-driving cars.
In addition, highway infrastructure and road safety standards are being adapted to suit AV technology.
Finally, consumers themselves are developing a largely accepting attitude towards AVs as an implicit part of everyday life. Uber’s self-driving taxis, for instance, seem to have grabbed the market’s attention as an innovative new mode of transportation.
This widening acceptance of autonomous driving comes from a certain level of trust that we’ve already witnessed. The giants of the automotive industry have already started adding autonomous features to regular vehicles, and these features have proven both helpful and safe. These capabilities are commonly classified into a six-level hierarchy of driving automation (levels 0 through 5):
- Level 0: No Automation. This is basically normal driving, where a human driver controls all aspects at all times.
- Level 1: Driver Assistance. A human driver is assisted with either steering or acceleration/deceleration by the driver assistance system, with the expectation that the human driver will perform all the remaining functions.
- Level 2: Partial Automation. The driver assistance system undertakes steering and acceleration/deceleration using information about the driving environment, with the expectation that the human driver will perform all the other driving tasks.
- Level 3: Conditional Automation. The automated driving system undertakes all dynamic aspects of driving, with the expectation that the human driver will respond appropriately to a request to intervene.
- Level 4: High Automation. The automated driving system undertakes all dynamic aspects of driving, even if a human driver does not respond appropriately to a request to intervene.
- Level 5: Full Automation. The automated driving system undertakes all dynamic aspects of driving, in all roadway and environmental conditions.
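The six levels above can be condensed into a small lookup table. The short descriptions below are paraphrases of the list for illustration, not official SAE wording:

```python
# Summary of the six levels of driving automation described above.
LEVELS = {
    0: ("No Automation", "human drives everything"),
    1: ("Driver Assistance", "system assists steering OR speed"),
    2: ("Partial Automation", "system handles steering AND speed"),
    3: ("Conditional Automation", "system drives; human must take over on request"),
    4: ("High Automation", "system drives even if human ignores takeover request"),
    5: ("Full Automation", "system drives in all conditions"),
}

def human_fallback_required(level: int) -> bool:
    """Levels 0 through 3 still rely on a human driver as the fallback."""
    return level <= 3

assert human_fallback_required(2)
assert not human_fallback_required(4)
```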
Until now, the market has only accommodated up to level 2 AVs: those that provide wider assistance to a human driver, such as cruise control, lane-keeping, and auto parking. The rest are still under development and testing, for performance monitoring and safety reasons.
For the rest of this article, we’ll assume that AVs are fully independent, with no need for human intervention. Although we haven’t reached that level yet, it’s better to start working through the concept as early as possible, so it can be ready for deployment (and our own comprehension) when the market feels confident enough to accept fully autonomous vehicles.
Connected autonomous vehicles
In order for AVs to be fully autonomous, they need to communicate—or more precisely, collaborate with each other on certain tasks. Hence, we come to the idea of connected autonomous vehicles (CAVs). CAVs are an area of research concerned with vehicle collaboration, where vehicles operate and make decisions both autonomously and cooperatively.
Let’s imagine a self-driving car on a highway with all the objects around the car sensed and projected on a precise map. The car has a goal of reaching a certain location as fast as possible while conserving the safety of itself, its passengers, and the surroundings. Replicating this car model gives us a neighborhood of AVs—a community!
On a very basic level, this is similar to human evolution, wherein Homo sapiens organized themselves into communities and families that share common interests. That structure demands communication in order to convey ideas and assign roles to each member of society.
Hence, the first issue to address here is communication—how will AVs be able to communicate their ideas optimally and securely?
Vehicular communication
Vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) are two existing communication protocols for smart transportation systems. In such communication networks, vehicles and roadside units form the communicating nodes, providing information such as safety warnings and traffic information.
These usually operate in the 5.9 GHz band, with a bandwidth of 75 MHz and an approximate range of 300 m. This technology might form the basis for the future deployment of CAV systems, where vehicles on the road can communicate and share information including traffic congestion, desired actions like a lane change or an overtake, or even precise locations, accelerations, and speeds, if needed.
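As a toy illustration of the range constraint mentioned above, a node could check whether a neighbor sits within the roughly 300 m communication radius before expecting a reply. This ignores real radio propagation effects (obstacles, interference) and is purely a geometric sketch:

```python
import math

DSRC_RANGE_M = 300.0  # approximate V2V/V2I range quoted above

def in_range(a: tuple, b: tuple, max_range: float = DSRC_RANGE_M) -> bool:
    """Return True when two nodes at (x, y) positions in metres
    are close enough to hear each other, by straight-line distance."""
    return math.dist(a, b) <= max_range

assert in_range((0, 0), (200, 100))      # ~224 m: within range
assert not in_range((0, 0), (300, 200))  # ~361 m: out of range
```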
The receiver might be another neighboring vehicle, or an infrastructure unit: a traffic signal, or a parking control unit for instance.
Vehicular cooperation
Now that we have all the information we need about the dynamic environment around us, we need to both collaborate and reciprocate. The main difference between the two is that in collaboration, all agents share a common goal, such as alleviating traffic congestion, and act accordingly regardless of their own private goals.
Reciprocation, however, is concerned with a more realistic case, where the collective goal of the group isn’t the first priority for individuals. Hence, it’s important for agents to encourage their neighbors and give them a reason to cooperate in the first place.
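The contrast between collaboration and reciprocation can be sketched as two toy utility functions. The numbers and the weighting scheme below are invented for illustration, not taken from any particular model:

```python
def collaborative_utility(private_gain: float, group_gain: float) -> float:
    # Pure collaboration: only the shared goal counts.
    return group_gain

def reciprocal_utility(private_gain: float, group_gain: float,
                       w: float = 0.5) -> float:
    # Reciprocation: an agent trades off its own goal against the group's.
    # w is the (hypothetical) weight it places on the collective goal.
    return (1 - w) * private_gain + w * group_gain

# Yielding a lane: bad for me (-1), good for traffic flow (+3).
assert collaborative_utility(-1, 3) > 0           # a collaborator yields
assert reciprocal_utility(-1, 3, w=0.2) < 0       # a selfish agent refuses
```

The design question for CAV networks is then how to set up incentives so that the reciprocal case still converges to cooperative behavior.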
Giving it a quick thought, you begin to realize how complex this is becoming. We now have multiple AVs, all with common and private goals, and we want them to peacefully optimize all of these goals without a single human intervention. The question to ask now is, who gives the order? In the case of traffic congestion, who has the authority to advise a vehicle to speed up or change lanes?
Centralized vs decentralized networks
To answer the previous question, we might come up with two different solutions. The first is to coordinate the CAV network in a centralized manner, where a single agent is assigned the role of master car and all the rest simply follow its orders. Alternatively, the infrastructure itself might take that role: a traffic signal, for example, might direct the vehicles in its area to cooperate on certain tasks. Although this might seem simple and effective, this solution has many cons.
First, which vehicle gets the rank of master? What is the territory of its control? What about boundary areas shared with neighboring masters? And if the master leaves the network, ensuring that its information is properly handed over to the next leader might be a heavy task.
Hence, this solution might be more applicable to partial autonomy, where only a number of tasks are assigned to a cooperating group (e.g. traffic congestion or auto parking).
This pushes us to explore the second solution, which is concerned with decentralized cooperation, where each AV is responsible for its own actions.
Sartre once said, “Man is condemned to be free; because once thrown into the world, he is responsible for everything he does”. We humans are free to choose our actions. This places a heavy burden on our choices, as each action might result in a sequence of actions worsening (or improving) a given situation.
Now that robots are becoming as complex as we are, they also face that problem. In addition, having multiple agents in the decision-making process is even harder. Implicitly, they all need to cooperate without directly taking orders from each other.
This field combines two giants of scientific achievement—game theory and reinforcement learning. As such, it’s still a very young area at the frontier of human knowledge, with few research proposals currently available.
Applications of CAVs
The promising field of CAVs is imagined to have a great impact on our daily lives. So let’s explore the possible applications that might emerge in the very near future:
1. Cooperative parking
Using both V2V and V2I communication, cars can cooperate with each other and with a central parking control unit to assign a suitable parking slot to each vehicle. Planning such a task is actually easier than you might expect and doesn’t require much learning or heavy AI machinery. Commonly implemented algorithms simply use Euclidean distance to find the nearest free slot for each vehicle.
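A minimal sketch of such a nearest-slot assignment, assuming each car and slot is just an (x, y) coordinate and cars are served greedily in arrival order (a real control unit would also handle contention, timing, and slot sizes):

```python
import math

def assign_slots(cars: list, slots: list) -> dict:
    """Greedily assign each car the nearest free slot by Euclidean distance.
    cars, slots: lists of (x, y) coordinates in metres.
    Returns {car_index: slot_index}."""
    free = set(range(len(slots)))
    assignment = {}
    for i, car in enumerate(cars):
        if not free:
            break  # more cars than slots: the rest go unassigned
        best = min(free, key=lambda j: math.dist(car, slots[j]))
        assignment[i] = best
        free.remove(best)
    return assignment

cars = [(0, 0), (10, 0)]
slots = [(9, 1), (1, 1)]
assert assign_slots(cars, slots) == {0: 1, 1: 0}  # each car gets its nearest slot
```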
2. Misbehavior detection
Here’s another cool application of CAVs—suppose a vehicle is causing problems on the road: breaking speed limits, performing dangerous maneuvers, etc. Once a car in the network notices such a behavior, it will send along that information to all the remaining ones. Consequently, they all take a joint action to avoid that bad vehicle or isolate it if possible.
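A toy version of this detect-and-broadcast loop might look as follows; the speed threshold and the set-based “broadcast” are invented simplifications for illustration:

```python
SPEED_LIMIT_MPS = 30.0  # hypothetical limit for this road segment

def flag_misbehaving(observations: dict) -> set:
    """observations: {vehicle_id: measured_speed_mps} as seen by one CAV.
    Returns the set of IDs it would report to its neighbours."""
    return {vid for vid, speed in observations.items()
            if speed > SPEED_LIMIT_MPS}

def broadcast_and_merge(local_flags: set, received_flags: set) -> set:
    # Each CAV unions its own detections with those relayed by neighbours,
    # so the whole network converges on the same blacklist.
    return local_flags | received_flags

mine = flag_misbehaving({"A": 25.0, "B": 41.0})
merged = broadcast_and_merge(mine, {"C"})  # "C" was flagged by another car
assert merged == {"B", "C"}
```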
3. Collective perception
Let’s consider a truck that blocks another vehicle’s vision. This can be dangerous for the blocked vehicle, as the truck might move at any time and leave it facing an unknown road.
For an AV connected with the truck, however, constant feedback of the environment ahead of the truck should be available for the AV to be prepared for any sudden change in that environment. Also, this could apply if an accident or a construction zone is noticed by one car—the information might be easily communicated to the rest in the network of CAVs.
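A simple way to picture this merging of viewpoints: each CAV unions the detections shared by a neighbour with its own, discarding near-duplicates. Representing obstacles as bare (x, y) points and using a fixed deduplication radius are simplifying assumptions:

```python
import math

def merge_perception(own_detections: list, shared_detections: list,
                     dedup_radius: float = 2.0) -> list:
    """Merge obstacle positions seen locally with those shared by another CAV,
    dropping shared points within dedup_radius metres of one we already have."""
    merged = list(own_detections)
    for point in shared_detections:
        if all(math.dist(point, known) > dedup_radius for known in merged):
            merged.append(point)
    return merged

behind_truck = [(50.0, 0.0)]          # obstacle the truck sees, we cannot
ours = [(10.0, 0.0), (49.0, 0.5)]     # we already see something near (50, 0)
assert merge_perception(ours, behind_truck) == ours  # duplicate filtered out
```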
4. Emergency vehicle clearance
This is an advantage CAV networks have over normal driving. If you’re driving on a highway and hear the sirens of an ambulance approaching from behind, your reaction might be to change lanes to let the ambulance pass without delay.
However, this might not always be the optimal action. Maybe the best action was to do nothing, because a neighboring car was better placed to speed up and let the ambulance through. A CAV network handles all of these procedures safely and optimally, generating solutions and actively executing them.
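One possible, deliberately simplistic clearance policy: vehicles in the ambulance’s lane move to the least occupied neighboring lane, while everyone else holds position (the “do nothing” case above). This is a sketch of the idea, not a validated traffic policy:

```python
from collections import Counter

def clearance_plan(vehicles: dict, ambulance_lane: int, n_lanes: int) -> dict:
    """vehicles: {vehicle_id: lane_index}. Vehicles blocking the ambulance
    lane are directed to the least occupied other lane; the rest hold."""
    counts = Counter(vehicles.values())
    other_lanes = [l for l in range(n_lanes) if l != ambulance_lane]
    plan = {}
    for vid, lane in vehicles.items():
        if lane == ambulance_lane:
            target = min(other_lanes, key=lambda l: counts[l])
            counts[target] += 1  # account for the car we just moved
            plan[vid] = f"move_to_lane_{target}"
        else:
            plan[vid] = "hold"
    return plan

plan = clearance_plan({"A": 1, "B": 0, "C": 2}, ambulance_lane=1, n_lanes=3)
assert plan["A"].startswith("move_to_lane_")
assert plan["B"] == plan["C"] == "hold"
```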
5. Intersection priority management
Traffic congestion is often a result of intersections. At an intersection, an entire road is put on hold for a minute or more to allow movement on the intersecting road. This costly operation is currently a necessity, witnessed daily.
But with CAVs, this fate is avoidable. CAVs may actually pass through intersections without stopping, because every car knows how to avoid collisions and knows the paths of neighboring cars, too. The benefit is even clearer at priority intersections such as T-intersections. In real life, T-intersections cause many accidents, but with CAVs, the problem should be solvable.
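One way to picture non-stop intersection crossing is reservation-based management, where each vehicle requests a time slot for the conflict zone. Here is a heavily simplified first-come sketch; the single shared conflict zone and the fixed crossing time are invented assumptions:

```python
def reserve_crossing(requests: dict, crossing_time: float = 2.0) -> dict:
    """requests: {vehicle_id: desired_arrival_time_s}.
    Grants each vehicle the earliest non-overlapping slot for a single
    shared conflict zone, in order of desired arrival."""
    schedule = {}
    next_free = 0.0
    for vehicle, arrival in sorted(requests.items(), key=lambda kv: kv[1]):
        start = max(arrival, next_free)  # wait only if the zone is busy
        schedule[vehicle] = start
        next_free = start + crossing_time
    return schedule

# Three cars approaching; B and C would otherwise conflict with A.
slots = reserve_crossing({"A": 0.0, "B": 1.0, "C": 1.5})
assert slots == {"A": 0.0, "B": 2.0, "C": 4.0}
```

Note that no vehicle ever fully stops in this model: each simply adjusts its speed so that it arrives at its granted slot time.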
Conclusion
The world is currently captivated by the development of autonomous vehicles. However, the next step is actually more interesting. Having a complete city where all roads, infrastructure, and autonomous vehicles are connected in real-time is such an exciting idea to think about.
CAVs could drastically improve our daily lives. The technologies proposed so far suggest a path toward total autonomy for AVs, which raises concerns about nefarious AI control of elements of our world, especially given the complexity of the problem: the more complicated a problem gets, the harder it is for humans to understand a machine’s proposed solution. Think of a chess game played by your computer. You don’t know why it left its queen to die… but it won the game nonetheless. The same applies to CAV decisions: they require our complete trust, even as we weigh that trust against the immediate consequences.
Written by: Abdulhady A. Feteiha
Edited by: Areeg Wael