#autonomous driving
techploration · 2 months
Text
Self driving cars are an inevitability at this point.
Not just assisted driving, but fully autonomous driving. And it is going to fundamentally change our relationship with cars and transportation in general.
People are going to stop buying cars. The whole sales pitch of everyone buying their own self driving car is ludicrous— it fundamentally misses the opportunity that self driving actually presents.
Owning a car sucks. Having constant immediate access to transportation is a form of autonomy.
Both those things can be true. A car has to be maintained, insured, parked, replaced, protected, fueled— this whole laundry list of responsibilities just to maintain access to self-directed transportation.
What about all the perks of having a car, but none of the hassle? That’s what a self driving car offers. A car when you need it, where you need it, without having to worry about everything else that goes along with owning a car.
Because you won’t own the car.
How much time do you actually spend driving? How much time is your car just sitting there? Why worry about and pay for a car you’re not driving?
Your car is going to be a subscription service
Uber is already testing this basic model, but in a world of self driving cars it makes perfect sense. You don’t own a car. You have a Car Subscription, which means there is a car there to drive you when you need one— scheduled in advance or on demand. You pay for different subscription levels (pay per mile, unlimited, luxury, etc.)
Personalized public transportation
People will realize owning a car is actually a burden, and a fleet of self driving cars that take themselves for servicing and refueling is actually a world easier.
There are going to be two major downsides
First, you are going to be tracked. Not just where you’re going, but what you’re listening to and who you’re riding with on the way there. Think about it— you will not be able to go anywhere anonymously.
Owning a car will become suspicious— an expensive luxury that offers anonymity. It will be like having a pager in the 90s— associated with doctors and drug dealers. Bikes and motorcycles will thrive as the ‘socially acceptable non tracked transportation’.
Second major issue will be ads
The double edge sword of a self driving car is that it frees you up to do other things.
You think you are going to get to sit and enjoy life uninterrupted by ads during your morning commute? Your Hulu and Netflix are already synced— you buckle your seatbelt and your episode picks up where you left off. Spotify is connected. Your user profile instantly tailors the ride to your tastes.
Just watch a couple ads first
You can always pay extra to go ad free. You’re just sitting there anyways. Also means they can finally get rid of billboards (or at least move them to inside the car). Short on funds? Watch ads your whole ride for a discount.
Even shorter on funds? Well, we reached your destination, but the doors won’t unlock until you finish watching this two minute ad (and no closing your eyes)
51 notes · View notes
Text
Honda has to have one of the worst Lane Keep Assist Systems ever.
Like how tf are you, the computer, gonna push me into the rumble strips and then scream at me on your wittle screen about Lane Departure. Like, I know we're departing the lane. You're the one who took the curve too tight!
And don't even get me started on how awful the line tracking is. Like, oops, lane's getting wider. Better move over really fast and jerk the steering wheel really hard. Wait... what's this "interstate exit" you speak of? Better jerk the steering wheel back the other way and get back centered in the lane we were never supposed to leave.
Oh, the car in front of us is entering the turn lane? Better slam on the brakes and match its speed until it's been fully out of the main lane for a solid fifteen seconds.
And then there are the times it just gives you back control without warning. No audible chime at all. You'll be mid-turn on a curvy highway, and it'll just decide "nope, I'm done" and all of a sudden steering assist is disabled, and you're veering into the next lane, and then you realize the car can't see the lines anymore, so you have to jerk really hard back into your lane, and it's just ugh.
Remember when Honda said all of their cars would come standard with Level 5 autonomy by 2025? Lol.
6 notes · View notes
aifyit · 1 year
Text
The DARK SIDE of AI: Can we trust Self-Driving Cars?
Read our new blog and follow/subscribe for more such informative content. #artificialintelligence #ai #selfdrivingcars #tesla
Self-driving cars have been hailed as the future of transportation, promising safer roads and more efficient travel. They use artificial intelligence (AI) to navigate roads and make decisions on behalf of the driver, leading many to believe that they will revolutionize the way we commute. However, as with any technology, there is a dark side to AI-powered self-driving cars that must be…
16 notes · View notes
jcmarchi · 4 hours
Text
This tiny chip can safeguard user data while enabling efficient computing on a smartphone
New Post has been published on https://thedigitalinsider.com/this-tiny-chip-can-safeguard-user-data-while-enabling-efficient-computing-on-a-smartphone/
Health-monitoring apps can help people manage chronic diseases or stay on track with fitness goals, using nothing more than a smartphone. However, these apps can be slow and energy-inefficient because the vast machine-learning models that power them must be shuttled between a smartphone and a central memory server.
Engineers often speed things up using hardware that reduces the need to move so much data back and forth. While these machine-learning accelerators can streamline computation, they are susceptible to attackers who can steal secret information.
To reduce this vulnerability, researchers from MIT and the MIT-IBM Watson AI Lab created a machine-learning accelerator that is resistant to the two most common types of attacks. Their chip can keep a user’s health records, financial information, or other sensitive data private while still enabling huge AI models to run efficiently on devices.
The team developed several optimizations that enable strong security while only slightly slowing the device. Moreover, the added security does not impact the accuracy of computations. This machine-learning accelerator could be particularly beneficial for demanding AI applications like augmented and virtual reality or autonomous driving.
While implementing the chip would make a device slightly more expensive and less energy-efficient, that is sometimes a worthwhile price to pay for security, says lead author Maitreyi Ashok, an electrical engineering and computer science (EECS) graduate student at MIT.
“It is important to design with security in mind from the ground up. If you are trying to add even a minimal amount of security after a system has been designed, it is prohibitively expensive. We were able to effectively balance a lot of these tradeoffs during the design phase,” says Ashok.
Her co-authors include Saurav Maji, an EECS graduate student; Xin Zhang and John Cohn of the MIT-IBM Watson AI Lab; and senior author Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of EECS. The research will be presented at the IEEE Custom Integrated Circuits Conference.
Side-channel susceptibility
The researchers targeted a type of machine-learning accelerator called digital in-memory compute. A digital IMC chip performs computations inside a device’s memory, where pieces of a machine-learning model are stored after being moved over from a central server.
The entire model is too big to store on the device, but by breaking it into pieces and reusing those pieces as much as possible, IMC chips reduce the amount of data that must be moved back and forth.
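As a rough software analogy of that piece-by-piece reuse (the real chip computes inside its memory arrays, so this only illustrates the data-movement pattern; the tile size and shapes below are arbitrary):

```python
import numpy as np

def tiled_matvec(weights: np.ndarray, x: np.ndarray, tile_rows: int = 64) -> np.ndarray:
    """Compute weights @ x one block of rows at a time: only one tile of the
    model is 'moved' per step, and it is reused against the whole input
    before the next tile is fetched."""
    out = np.empty(weights.shape[0], dtype=weights.dtype)
    for start in range(0, weights.shape[0], tile_rows):
        tile = weights[start:start + tile_rows]   # the only weights transferred this step
        out[start:start + tile_rows] = tile @ x
    return out

W = np.random.randn(512, 256).astype(np.float32)
x = np.random.randn(256).astype(np.float32)
assert np.allclose(tiled_matvec(W, x), W @ x, atol=1e-3)
```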
But IMC chips can be susceptible to hackers. In a side-channel attack, a hacker monitors the chip’s power consumption and uses statistical techniques to reverse-engineer data as the chip computes. In a bus-probing attack, the hacker can steal bits of the model and dataset by probing the communication between the accelerator and the off-chip memory.
Digital IMC speeds computation by performing millions of operations at once, but this complexity makes it tough to prevent attacks using traditional security measures, Ashok says.
She and her collaborators took a three-pronged approach to blocking side-channel and bus-probing attacks.
First, they employed a security measure where data in the IMC are split into random pieces. For instance, a bit zero might be split into three bits that still equal zero after a logical operation. The IMC never computes with all pieces in the same operation, so a side-channel attack could never reconstruct the real information.
But for this technique to work, random bits must be added to split the data. Because digital IMC performs millions of operations at once, generating so many random bits would involve too much computing. For their chip, the researchers found a way to simplify computations, making it easier to effectively split data while eliminating the need for random bits.
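A minimal software sketch of that splitting idea, in the style of Boolean masking with XOR shares; this only illustrates the concept, since the chip described here implements it in hardware and, as noted, avoids generating large amounts of fresh randomness:

```python
import secrets

def split_into_shares(value: int, n_shares: int = 3, bits: int = 8) -> list[int]:
    """Split `value` into n random shares whose XOR equals `value`."""
    shares = [secrets.randbits(bits) for _ in range(n_shares - 1)]
    last = value
    for s in shares:
        last ^= s
    return shares + [last]

def recombine(shares: list[int]) -> int:
    out = 0
    for s in shares:
        out ^= s
    return out

secret = 0b10110010
shares = split_into_shares(secret)
assert recombine(shares) == secret
# Each individual share is uniformly random on its own, so a power trace that
# reflects only one share's activity reveals nothing about `secret`.
```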
Second, they prevented bus-probing attacks using a lightweight cipher that encrypts the model stored in off-chip memory. This lightweight cipher only requires simple computations. In addition, they only decrypted the pieces of the model stored on the chip when necessary.
Third, to improve security, they generated the key that decrypts the cipher directly on the chip, rather than moving it back and forth with the model. They generated this unique key from random variations in the chip that are introduced during manufacturing, using what is known as a physically unclonable function.
“Maybe one wire is going to be a little bit thicker than another. We can use these variations to get zeros and ones out of a circuit. For every chip, we can get a random key that should be consistent because these random properties shouldn’t change significantly over time,” Ashok explains.
They reused the memory cells on the chip, leveraging the imperfections in these cells to generate the key. This requires less computation than generating a key from scratch.
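A toy sketch of that key-derivation idea, assuming an SRAM-style PUF whose noisy power-up bits are stabilized by majority voting and then hashed into a key; real designs add error correction and helper data, and the chip described here reuses its existing memory cells rather than the simulated ones below:

```python
import hashlib
import random

CHIP_BIAS = random.Random(42)  # stand-in for one chip's fixed manufacturing variation
STABLE_BITS = [CHIP_BIAS.randint(0, 1) for _ in range(256)]

def read_puf_cells(flaky_prob: float = 0.05) -> list[int]:
    """One noisy readout: each cell usually powers up to its biased value."""
    return [b ^ (random.random() < flaky_prob) for b in STABLE_BITS]

def derive_key(votes: int = 9) -> bytes:
    """Majority-vote repeated readouts to suppress noise, then hash to a key."""
    tallies = [0] * len(STABLE_BITS)
    for _ in range(votes):
        for i, bit in enumerate(read_puf_cells()):
            tallies[i] += bit
    bits = [1 if t > votes // 2 else 0 for t in tallies]
    raw = bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
    return hashlib.sha256(raw).digest()

print(derive_key().hex())  # nearly always identical across runs despite per-readout noise
```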
“As security has become a critical issue in the design of edge devices, there is a need to develop a complete system stack focusing on secure operation. This work focuses on security for machine-learning workloads and describes a digital processor that uses cross-cutting optimization. It incorporates encrypted data access between memory and processor, approaches to preventing side-channel attacks using randomization, and exploiting variability to generate unique codes. Such designs are going to be critical in future mobile devices,” says Chandrakasan.
Safety testing
To test their chip, the researchers took on the role of hackers and tried to steal secret information using side-channel and bus-probing attacks.
Even after making millions of attempts, they couldn’t reconstruct any real information or extract pieces of the model or dataset. The cipher also remained unbreakable. By contrast, it took only about 5,000 samples to steal information from an unprotected chip.
The addition of security did reduce the energy efficiency of the accelerator, and it also required a larger chip area, which would make it more expensive to fabricate.
The team is planning to explore methods that could reduce the energy consumption and size of their chip in the future, which would make it easier to implement at scale.
“As it becomes too expensive, it becomes harder to convince someone that security is critical. Future work could explore these tradeoffs. Maybe we could make it a little less secure but easier to implement and less expensive,” Ashok says.
The research is funded, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and a Mathworks Engineering Fellowship.
0 notes
techdriveplay · 2 months
Text
Rivian Introduces R2, R3, and R3X Built on New Midsize Platform
LAGUNA BEACH, Calif., March 7, 2024 – Rivian today unveiled its new midsize platform, which underpins the R2 and R3 product lines. R2 is Rivian’s all-new midsize SUV delivering a combination of performance, capability and utility in a five-seat package optimized for big adventures and everyday use. The silhouette and face of R2 are distinctly Rivian. The powered rear glass fully drops into the…
0 notes
youtubemarketing1234 · 2 months
Text
youtube
Self-driving cars, also known as autonomous vehicles or driverless cars, are a revolutionary technological advancement poised to transform the way we commute, travel, and interact with our urban and rural environments. In this video, we'll discuss the progress made by Tesla in their development of self-driving cars, and how close we are to achieving this technology.
Self-driving cars are equipped with a range of sensors such as LiDAR, cameras, radar, and ultrasonic sensors. These sensors provide the vehicle with a comprehensive view of its surroundings, allowing it to perceive other vehicles, pedestrians, road signs, traffic lights, and obstacles in real-time.
The heart of self-driving cars lies in their AI systems. These AI algorithms process the data from sensors to make complex decisions and control the vehicle's movements. Machine learning and deep learning techniques are used to teach the AI system to recognize patterns, predict the behaviors of other road users, and respond appropriately to various scenarios.
The AI system of a self-driving car interfaces with the vehicle's control systems, including the steering, throttle, and brakes. It translates its decisions into precise actions to navigate the vehicle safely and efficiently.
Self-driving cars rely on high-definition maps to understand their location and the environment. These maps provide information about lane markings, road geometries, traffic signs, and more. Simultaneous Localization and Mapping technology is used to continuously update the vehicle's position within the mapped environment.
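As a drastically simplified illustration of that continuous position updating, here is a one-dimensional, Kalman-style fusion step; production localization stacks fuse many sensors with full SLAM or factor-graph estimators, so the names and numbers below are purely hypothetical:

```python
def fuse_position(predicted: float, predicted_var: float,
                  measured: float, measured_var: float) -> tuple[float, float]:
    """Blend the motion-model prediction with a map-based measurement,
    weighting each by how confident (low-variance) it is."""
    gain = predicted_var / (predicted_var + measured_var)
    fused = predicted + gain * (measured - predicted)
    fused_var = (1 - gain) * predicted_var
    return fused, fused_var

# Odometry predicts we are 105.0 m along the road (variance 2.0);
# matching lane markings against the HD map says 103.5 m (variance 0.5).
pos, var = fuse_position(105.0, 2.0, 103.5, 0.5)
print(round(pos, 2), round(var, 2))  # 103.8 0.4 — pulled toward the more certain source
```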
Communication technology plays a crucial role in the functioning of self-driving cars. These vehicles can exchange information with each other and with infrastructure elements like traffic lights and road sensors. This communication enhances safety and enables cooperative maneuvers.
Self-driving cars represent a technological frontier that holds the promise of safer, more efficient, and more accessible transportation. While there are hurdles to overcome, ongoing advancements in AI, sensor technology, and infrastructure development continue to push the boundaries of what's possible in the realm of autonomous vehicles.
Self-Driving Cars: How Close Are We?
1 note · View note
monahvee · 3 months
Text
Old Man Rant
Ever since I was a small kid, I held an affinity for cool and sometimes quirky cars. In addition to thumbing through hip hop magazines at Walgreens, I’d look through Motor Trend or Car and Driver faithfully. Really, any print magazine about cars or hip hop I’d view with no desire (or cash) to buy. Instead, I’d buy some gummy bears or a pack of gum as my way of being an actual patron. This of…
0 notes
danieldavidreitberg · 6 months
Text
Unlocking Tomorrow's World: Witness Robots Redefine Mobility! 🤖🚀 #TechInnovation #robotics #artificialintelligence Daniel Reitberg
0 notes
arjunvib · 6 months
Text
How important is deep learning in autonomous driving?
In the automated driving development journey, verification and validation (V&V) coverage is one of the most intensely discussed topics, especially scenario-based or data-driven validation. Building realistic scenarios is a key challenge: the critical requirement is to capture realistic road conditions over long routes, including traffic behaviors, weather, time of day, and more. Here are a few points on where deep learning fits in, followed by a small illustrative sketch:
Deep learning is used for perception, the first pillar of autonomous driving. This includes tasks such as object detection, lane detection, and traffic sign recognition.
Deep learning is used for planning, the second pillar of autonomous driving. This includes tasks such as path planning and obstacle avoidance. Deep learning algorithms can learn to predict the behavior of other vehicles and pedestrians.
Deep learning is used for control, the third pillar of autonomous driving. This includes tasks such as steering, acceleration, and braking.
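To make the three pillars concrete, here is a deliberately simplified sketch of how perception, planning, and control might chain together; in a real stack each stage is one or more deep networks, and every function, threshold, and class below is a hypothetical placeholder:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float   # distance ahead in our lane
    speed_mps: float

def perceive(sensor_frame) -> list[Obstacle]:
    """Perception stand-in: assume upstream detectors already produced obstacles."""
    return sensor_frame

def plan(obstacles: list[Obstacle], ego_speed: float, headway_s: float = 2.0) -> float:
    """Planning stand-in: choose a target speed that keeps a safe time gap."""
    if not obstacles:
        return ego_speed
    nearest = min(obstacles, key=lambda o: o.distance_m)
    safe_speed = nearest.distance_m / headway_s
    return min(ego_speed, safe_speed, nearest.speed_mps + 1.0)

def control(ego_speed: float, target_speed: float, kp: float = 0.5) -> float:
    """Control stand-in: proportional controller mapping speed error to throttle/brake."""
    return kp * (target_speed - ego_speed)   # positive = throttle, negative = brake

frame = [Obstacle(distance_m=30.0, speed_mps=8.0)]
target = plan(perceive(frame), ego_speed=15.0)
print(control(15.0, target))   # negative: brake to open the gap to the slower car
```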
Read the blog on KPIT, as they are leaders in this industry and have deep expertise in deep learning for autonomous driving and ADAS.
0 notes
jcmarchi · 14 days
Text
Enhancing AI Transparency and Trust with Composite AI
New Post has been published on https://thedigitalinsider.com/enhancing-ai-transparency-and-trust-with-composite-ai/
The adoption of Artificial Intelligence (AI) has increased rapidly across domains such as healthcare, finance, and legal systems. However, this surge in AI usage has raised concerns about transparency and accountability. Black-box AI models have repeatedly produced unintended consequences, including biased decisions and a lack of interpretability.
Composite AI is a cutting-edge approach to holistically tackling complex business problems. It achieves this by integrating multiple analytical techniques into a single solution. These techniques include Machine Learning (ML), deep learning, Natural Language Processing (NLP), Computer Vision (CV), descriptive statistics, and knowledge graphs.
Composite AI plays a pivotal role in enhancing interpretability and transparency. Combining diverse AI techniques enables human-like decision-making. Key benefits include:
reducing the necessity of large data science teams.
enabling consistent value generation.
building trust with users, regulators, and stakeholders.
Gartner has recognized Composite AI as one of the top emerging technologies with a high impact on business in the coming years. As organizations strive for responsible and effective AI, Composite AI stands at the forefront, bridging the gap between complexity and clarity.
The Need for Explainability
The demand for Explainable AI arises from the opacity of AI systems, which creates a significant trust gap between users and these algorithms. Users often have little insight into how AI-driven decisions are made, which leads to skepticism and uncertainty. Understanding why an AI system arrived at a specific outcome is important, especially when it directly impacts lives, such as medical diagnoses or loan approvals.
The real-world consequences of opaque AI include life-altering effects from incorrect healthcare diagnoses and the spread of inequalities through biased loan approvals. Explainability is essential for accountability, fairness, and user confidence.
Explainability also aligns with business ethics and regulatory compliance. Organizations deploying AI systems must adhere to ethical guidelines and legal requirements. Transparency is fundamental for responsible AI usage. By prioritizing explainability, companies demonstrate their commitment to doing what they deem right for users, customers, and society.
Transparent AI is not optional—it is a necessity now. Prioritizing explainability allows for better risk assessment and management. Users who understand how AI decisions are made feel more comfortable embracing AI-powered solutions, enhancing trust and compliance with regulations like GDPR. Moreover, explainable AI promotes stakeholder collaboration, leading to innovative solutions that drive business growth and societal impact.
Transparency and Trust: Key Pillars of Responsible AI
Transparency in AI is essential for building trust among users and stakeholders. Understanding the nuances between explainability and interpretability is fundamental to demystifying complex AI models and enhancing their credibility.
Explainability involves understanding why a model makes specific predictions by revealing influential features or variables. This insight empowers data scientists, domain experts, and end-users to validate and trust the model’s outputs, addressing concerns about AI’s “black box” nature.
Fairness and privacy are critical considerations in responsible AI deployment. Transparent models help identify and rectify biases that may impact different demographic groups unfairly. Explainability is important in uncovering such disparities, enabling stakeholders to take corrective actions.
Privacy is another essential aspect of responsible AI development, requiring a delicate balance between transparency and data privacy. Techniques like differential privacy introduce noise into data to protect individual privacy while preserving the utility of analysis. Similarly, federated learning ensures decentralized and secure data processing by training models locally on user devices.
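For example, a minimal sketch of the Laplace mechanism behind that noise-adding idea: a mean is released so that any single individual's record has only a bounded effect on the output (the dataset, bounds, and epsilon here are illustrative):

```python
import numpy as np

def private_mean(values, lower, upper, epsilon):
    """Release the mean of `values` with epsilon-differential privacy."""
    clipped = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(clipped)   # max change from altering one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

ages = [34, 29, 41, 52, 38, 45]
print(private_mean(ages, lower=18, upper=90, epsilon=1.0))  # true mean plus calibrated noise
```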
Techniques for Enhancing Transparency
Two key approaches are commonly employed to enhance transparency in machine learning: model-agnostic methods and interpretable models.
Model-Agnostic Techniques
Model-agnostic techniques like Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Anchors are vital in improving the transparency and interpretability of complex AI models. LIME is particularly effective at generating locally faithful explanations by simplifying complex models around specific data points, offering insights into why certain predictions are made.
SHAP utilizes cooperative game theory to explain global feature importance, providing a unified framework for understanding feature contributions across diverse instances. Conversely, Anchors provide rule-based explanations for individual predictions, specifying conditions under which a model’s output remains consistent, which is valuable for critical decision-making scenarios like autonomous vehicles. These model-agnostic methods enhance transparency by making AI-driven decisions more interpretable and trustworthy across various applications and industries.
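As an illustration of how such explanations might be produced in practice, the sketch below uses the open-source shap package on a tree ensemble; the dataset and model are arbitrary placeholders:

```python
# pip install shap scikit-learn
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)                   # SHAP's fast path for tree ensembles
shap_values = explainer.shap_values(data.data[:100])    # shape: (samples, features)

# Local view: how each feature pushed the first prediction up or down.
for name, contrib in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {contrib:+.2f}")

# Global view: the feature with the largest average absolute contribution.
print("most influential:", data.feature_names[int(np.abs(shap_values).mean(axis=0).argmax())])
```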
Interpretable Models
Interpretable models play a crucial role in machine learning, offering transparency and understanding of how input features influence model predictions. Linear models such as logistic regression and linear Support Vector Machines (SVMs) operate on the assumption of a linear relationship between input features and outputs, offering simplicity and interpretability.
Decision trees and rule-based models like CART and C4.5 are inherently interpretable due to their hierarchical structure, providing visual insights into specific rules guiding decision-making processes. Additionally, neural networks with attention mechanisms highlight relevant features or tokens within sequences, enhancing interpretability in complex tasks like sentiment analysis and machine translation. These interpretable models enable stakeholders to understand and validate model decisions, enhancing trust and confidence in AI systems across critical applications.
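For instance, a fitted decision tree can be dumped as human-readable threshold rules with scikit-learn alone; the dataset here is just a placeholder:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The fitted model itself is the explanation: a short, readable set of if/else rules.
print(export_text(tree, feature_names=load_iris().feature_names))
```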
Real-World Applications
Real-world applications of AI in healthcare and finance highlight the significance of transparency and explainability in promoting trust and ethical practices. In healthcare, interpretable deep learning techniques for medical diagnostics improve diagnostic accuracy and provide clinician-friendly explanations, enhancing understanding among healthcare professionals. Trust in AI-assisted healthcare involves balancing transparency with patient privacy and regulatory compliance to ensure safety and data security.
Similarly, transparent credit scoring models in the financial sector support fair lending by providing explainable credit risk assessments. Borrowers can better understand credit score factors, promoting transparency and accountability in lending decisions. Detecting bias in loan approval systems is another vital application, addressing disparate impact and building trust with borrowers. By identifying and mitigating biases, AI-driven loan approval systems promote fairness and equality, aligning with ethical principles and regulatory requirements. These applications highlight AI’s transformative potential when coupled with transparency and ethical considerations in healthcare and finance.
Legal and Ethical Implications of AI Transparency
In AI development and deployment, ensuring transparency carries significant legal and ethical implications under frameworks like General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA). These regulations emphasize the need for organizations to inform users about the rationale behind AI-driven decisions to uphold user rights and cultivate trust in AI systems for widespread adoption.
Transparency in AI enhances accountability, particularly in scenarios like autonomous driving, where understanding AI decision-making is vital for legal liability. Opaque AI systems pose ethical challenges due to their lack of transparency, making it morally imperative to make AI decision-making transparent to users. Transparency also aids in identifying and rectifying biases in training data.
Challenges in AI Explainability
Balancing model complexity with human-understandable explanations is a significant challenge in AI explainability. As AI models, particularly deep neural networks, become more complex, they tend to become less interpretable. Researchers are exploring hybrid approaches that combine complex architectures with interpretable components like decision trees or attention mechanisms to balance performance and transparency.
Another challenge is multi-modal explanations, where diverse data types such as text, images, and tabular data must be integrated to provide holistic explanations for AI predictions. Handling these multi-modal inputs presents challenges in explaining predictions when models process different data types simultaneously.
Researchers are developing cross-modal explanation methods to bridge the gap between modalities, aiming for coherent explanations considering all relevant data types. Furthermore, there is a growing emphasis on human-centric evaluation metrics beyond accuracy to assess trust, fairness, and user satisfaction. Developing such metrics is challenging but essential for ensuring AI systems align with user values.
The Bottom Line
In conclusion, integrating Composite AI offers a powerful approach to enhancing transparency, interpretability, and trust in AI systems across diverse sectors. Organizations can address the critical need for AI explainability by employing model-agnostic methods and interpretable models.
As AI continues to advance, embracing transparency ensures accountability and fairness and promotes ethical AI practices. Moving forward, prioritizing human-centric evaluation metrics and multi-modal explanations will be pivotal in shaping the future of responsible and accountable AI deployment.
0 notes
techdriveplay · 2 months
Text
The Green Revolution: Eco-Friendly Cars and Their Benefits
Eco-friendly cars have emerged not just as a trend but as a necessity to combat environmental challenges and reduce our carbon footprint.
In the 21st century, the automotive industry has seen a significant shift towards sustainability, marking the advent of the green revolution. Eco-friendly cars have emerged not just as a trend but as a necessity to combat environmental challenges and reduce our carbon footprint. This comprehensive guide delves into the world of eco-friendly cars, highlighting their benefits, innovations, and the…
0 notes
automotiveera · 7 months
Text
Electric Bus Charging Station Market Is Propelled by Supportive Government Policies
The electric bus charging station market is projected to grow at a significant CAGR. The market for electric buses is experiencing significant growth due to the increasing use of these buses in the public transit fleets, and the government’s supportive policies for the development of charging infrastructure for electric buses, further boosting the market.
The depot charging type category holds the largest share in terms of volume, driven by advantages such as ease of operation (similar to diesel bus depots) and lower installation costs compared to other charging types. In addition, public fleet operators prefer overnight charging for electric buses.
APAC was the leading contributor by revenue in the electric bus charging station market, driven by China's extensive use of electric buses. Meanwhile, North America is expected to be the fastest-growing market during the forecast period because of favorable government initiatives aimed at promoting electric mobility, which are leading to increased adoption of electric buses in the region.
The rising adoption of electric buses in public transportation fleets is a key driver behind the growth of the market, as the world shifts toward a low-carbon economy under the Kyoto Protocol, which has been ratified by 192 countries. Governments worldwide are implementing incentive programs, such as subsidies, tax rebates, and grants, to foster the development of electric bus charging infrastructure.
Additionally, the increasing demand for electric bus charging stations in private spaces is opening up significant growth prospects for market players. The competition within the hospitality sector is driving market growth, with many hospitality service providers now offering charging facilities for electric buses on their premises.
Moreover, numerous large multinational corporations are incorporating charging facilities into their employee welfare programs. This presents lucrative opportunities for manufacturers and installers of charging stations.
Hence, this industry is propelled by key initiatives from government and private players toward a safer environment and innovation in mobility solutions, further boosting the market.
0 notes
abhishekinfotech · 7 months
Link
The transportation industry is on the verge of a transformative revolution led by autonomous vehicles, including self-driving cars and drones. These once-futuristic technologies are now a reality, poised to profoundly impact transportation and logistics. In this article, we delve into their progress, challenges, and potential to reshape mobility and the movement of goods.
0 notes
aifyit · 8 months
Text
The Top 10 AI Innovations That Will Change Your Life (2023)
youtube
1 note · View note