originally posted by: strongfp
a reply to: FamCore
The radio stations here in Southern Ontario ripped apart self driving cars during the winter months, I remember listening in on an open lines about people bringing up very good points, like:
How will it know when the tires are slipping on black ice?
Anti-lock braking systems. Each wheel has a motion sensor, a flywheel, and a hydraulic valve attached to the braking system. If there is a difference between the speed of the flywheel and the wheel itself, the pressure on the brake is adjusted.
en.wikipedia.org...
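The slip check described above can be sketched in a few lines. This is a deliberately simplified illustration: the function names, the 20% slip threshold, and the crude halve-the-pressure response are all my assumptions, while a real ABS controller modulates hydraulic pressure many times per second.

```python
# Illustrative sketch of the slip comparison an ABS controller performs.
# Thresholds and the simple pressure-reduction logic are assumptions,
# not taken from any real controller.

def wheel_slip(vehicle_speed, wheel_speed):
    """Fraction by which the wheel lags the vehicle (0 = rolling freely)."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def brake_pressure_command(vehicle_speed, wheel_speed, requested_pressure,
                           slip_limit=0.2):
    """Back off brake pressure when slip exceeds the limit (e.g. black ice)."""
    if wheel_slip(vehicle_speed, wheel_speed) > slip_limit:
        return requested_pressure * 0.5  # release pressure so the wheel spins up
    return requested_pressure

print(brake_pressure_command(20.0, 19.0, 100.0))  # mild slip -> full pressure
print(brake_pressure_command(20.0, 10.0, 100.0))  # heavy slip -> reduced pressure
```

The point is that slip detection is a speed comparison, not "thinking": the controller never needs to know the ice is there, only that a wheel has stopped matching the vehicle's speed.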
How can it navigate deep, unplowed streets?
I guess they haven't got there yet, but that will rule out relying only on visual sensors at bumper level. I would imagine they will have a mix of sensors, perhaps even cameras that can filter out the wavelengths of light scattered by fog. There have been some experiments with windscreens that do this.
How would a self driving car deal with the 400 series highways, the busiest in the world? When there is traffic and it needs to pull over for emergency vehicles?
That shouldn't be too hard to program in. If the system detects one or more moving objects behind it, and those objects have flashing blue or red lights or are emitting a loud siren-like sound, then slow down and pull to the side so long as there are no obstacles in the way. Once those flashing vehicles have passed, move back onto the regular route.
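The rule described above can be sketched as a small decision function. Everything here is hypothetical: the field names, the 90 dB siren threshold, and the two action labels are illustrative stand-ins, and a real vehicle would fuse many sensor streams rather than consume tidy dictionaries.

```python
# Hypothetical yield-to-emergency-vehicle rule sketched from the post.
# Field names and thresholds are illustrative assumptions.

def plan_action(objects_behind):
    """objects_behind: list of dicts like
    {'closing': True, 'flashing_lights': True, 'siren_db': 95}"""
    for obj in objects_behind:
        emergency = obj.get("flashing_lights") or obj.get("siren_db", 0) > 90
        if obj.get("closing") and emergency:
            return "slow_and_yield"
    return "continue_route"

# An approaching vehicle with flashing lights triggers a yield.
print(plan_action([{"closing": True, "flashing_lights": True, "siren_db": 0}]))
# Ordinary traffic behind does not.
print(plan_action([{"closing": True, "flashing_lights": False, "siren_db": 60}]))
```

The hard engineering problem is not this decision rule but the perception underneath it: reliably classifying "flashing emergency lights" and "siren" from raw sensor data.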
The list goes on. These cars cannot think critically; maybe it might work on open freeways, but not in residential areas or in harsh weather. No way.
From my own experiments with AI, the best way to get a system to learn is to record the moves of expert drivers through black-box recorders. Then you have the AI system attempt to replicate the drivers' decisions. Every time there is a difference, you go back and look at the code to see what should have been done. Given enough test cases, it would eventually have the skill of an experienced driver with no claims on record.
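The replay-and-compare loop described above can be shown with a toy example. The lookup-table "policy" below is only a stand-in for whatever learning method is actually used; the function names and the log format are my assumptions.

```python
# Toy version of "learn from expert recordings": replay logged
# (situation, expert_action) pairs, compare the system's choice,
# and flag every disagreement for review.

def build_policy(expert_log):
    """Trivial 'policy': remember the expert's action per situation."""
    return {situation: action for situation, action in expert_log}

def find_disagreements(policy, test_log):
    """Return cases where the system's choice differs from the expert's."""
    mismatches = []
    for situation, expert_action in test_log:
        system_action = policy.get(situation, "unknown")
        if system_action != expert_action:
            mismatches.append((situation, expert_action, system_action))
    return mismatches

expert_log = [("green_light", "go"), ("red_light", "stop")]
policy = build_policy(expert_log)
# The amber_light case was never recorded, so it is flagged for review.
print(find_disagreements(policy, [("green_light", "go"),
                                  ("amber_light", "slow")]))
```

This also illustrates the weakness of the approach: the system only handles situations that appear in the recordings, which is exactly the unplowed-street problem raised earlier in the thread.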
originally posted by: JAY1980
a reply to: FamCore
So the accidents took place at around 10mph? Doesn't sound like very stable technology if it gets into accidents at 10mph, let alone barreling down the highway at 70mph. My opinion is it shouldn't be on the roadways yet if the technology isn't sound. It's jeopardizing others' safety.
originally posted by: rickymouse
a reply to: peck420
But when the light turns green, they can't see the distracted driver looking down at his Kindle as he goes through his red light.
originally posted by: FamCore
a reply to: JAY1980
It sounds like it wasn't the technology's "fault"; it was other vehicles with people driving them that caused the accidents. That is why in my OP I discussed the idea of having the Dept. of Transportation designate certain roads for self-driving cars, where regular vehicles cannot go (since we humans aren't able to interact properly with them).
originally posted by: LadyGreenEyes
I don't know bout anyone else, but if the car I am in plans to drive, his name had better be KITT! Otherwise, no way!
10mph collisions, and we can't hear what happened? Sure.....totally safe.....
originally posted by: stumason
originally posted by: LadyGreenEyes
I don't know bout anyone else, but if the car I am in plans to drive, his name had better be KITT! Otherwise, no way!
10mph collisions, and we can't hear what happened? Sure.....totally safe.....
We did hear what happened - they were all hit in the rear by other drivers. I am puzzled why people don't read the articles.
Delphi sent AP an accident report showing its car was hit, but Google has not made public any records, so both enthusiasts and critics of the emerging technology have only the company's word on what happened. The California Department of Motor Vehicles said it could not release details from accident reports.
originally posted by: peck420
originally posted by: rickymouse
Now, a car cannot think and see and evaluate the situation.
A professional race driver has a reaction time of approx 0.4 seconds.
A good street driver has a reaction time of approx 0.7 seconds.
My 10-year-old daily driver's electronic suite: 0.07 seconds.
Computers will "see", "evaluate" and "react" long before a human will.
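The reaction times quoted above translate into very different distances at highway speed. The sketch below just does the unit conversion; 70 mph is borrowed from the earlier post as a worked example, and the labels are taken from the figures quoted above.

```python
# Distance covered during the quoted reaction times, before any
# braking or steering even begins. 70 mph is used only as an example.

MPH_TO_MPS = 0.44704  # metres per second per mph

def reaction_distance_m(speed_mph, reaction_s):
    """Distance travelled (metres) during the reaction delay."""
    return speed_mph * MPH_TO_MPS * reaction_s

for label, t in [("race driver", 0.4),
                 ("good street driver", 0.7),
                 ("electronic suite", 0.07)]:
    d = reaction_distance_m(70, t)
    print(f"{label} ({t} s): {d:.1f} m before any response begins")
```

At 70 mph, a 0.7 s reaction delay covers roughly 22 m, while a 0.07 s delay covers about 2 m, which is the gap the post is pointing at.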
originally posted by: Soylent Green Is People
Sure, it may react more quickly, but the specific reaction made will be based on software instructions input by a human.
I think that one day autonomous automobiles will be the norm, and will be much safer, but it isn't as simple as saying "computers can react more quickly than a human can right now; therefore, they are definitely safer than a human driver right now."
I'm not saying that these four accidents were the fault of the autonomous car; I actually think they were not, but rather the fault of the other human driver. All I'm saying is that safe driving is about more than just reaction time. It's also about what you do once you react. They need to teach the cars to react properly, not just quickly.
originally posted by: Soylent Green Is People
The answer is that the course of action (reaction) that the computer will decide to make will be based upon what humans (fallible humans) tell it to make via its programming. Therein lies the issue. Sure -- it will react quickly, but that reaction will have been programmed by a human.
The computer-driven car (at least for the foreseeable future) will be performing specific reactions based on instructions from humans. If the autonomous car senses a danger, it will quickly follow a set of pre-arranged instructions that a human has input into the car's software.
No matter how quickly it reacts, that specific reaction will still be decided upon by the human software writer.
originally posted by: peck420
originally posted by: Soylent Green Is People
The answer is that the course of action (reaction) that the computer will decide to make will be based upon what humans (fallible humans) tell it to make via its programming. Therein lies the issue. Sure -- it will react quickly, but that reaction will have been programmed by a human.
The computer-driven car (at least for the foreseeable future) will be performing specific reactions based on instructions from humans. If the autonomous car senses a danger, it will quickly follow a set of pre-arranged instructions that a human has input into the car's software.
No matter how quickly it reacts, that specific reaction will still be decided upon by the human software writer.
So... it is bad because it will do exactly what driving instructors and safety boards try to get humans to do?
The fallacy in your argument is that you think humans make good drivers. We don't. The most common cause of an accident is human error, and the most common error is failing to follow the proper reaction sequence.
If all an automated car ever achieves (in terms of ability) is the reaction sequence already determined through years of research on human drivers, it will be far better than human drivers.