
AI's rapid development presents a complex mix of ethical concerns that we must address as a technology-driven society. While AI offers tremendous potential for solving global challenges, its development raises critical questions about the environment, privacy, fairness, transparency, and human autonomy that require careful consideration and proactive solutions. Like any transformative technology, AI's impact will largely depend on how we choose to develop and implement it, making it crucial to address these ethical considerations as the technology continues to evolve.
An AI hallucination occurs when an AI system generates false or made-up information and presents it as true. Think of it like someone confidently explaining a dream as if it were a real memory. The details might sound convincing and fit together logically, but they aren't actually true.

For example, ChatGPT might confidently write a detailed response about a made-up historical event that never happened, or cite a non-existent research paper, or describe features of a product that don't exist. These aren't intentional lies - they happen because of how AI systems process and generate information based on patterns in training data.
AI hallucinations do happen, making it vital to:
- Verify AI-generated information from reliable sources
- Be especially careful with facts, figures, and citations
- Watch out for overly specific details about obscure topics
- Remember that AI systems can sound very confident even when they’re wrong

One reason it's so important to understand how AI systems are trained is that the training process shows how these tools can reflect and amplify biases present in their training data or introduced by their creators. For example:
- If a language model is trained primarily on text written by one demographic group, it may not represent diverse perspectives accurately.
- An image recognition system trained on a dataset with mostly light-skinned faces might perform poorly on darker skin tones.
- A hiring algorithm trained on historical data might perpetuate existing gender or racial biases in employment.
Consider this example, explained in the book Teaching with AI by C. Edward Watson and José Antonio Bowen (Watson & Bowen 18):
“When Stable Diffusion, an AI capable of creating photorealistic images, is asked to create images related to high-paying jobs, the images have lighter skin tones than when asked about lower-paying jobs, with three times as many men over women in the high-paying jobs. When asked for images of “doctors,” only 7% of the images generated are women, when 39% of US doctors are women: Stable Diffusion exaggerates existing inequities, which is apparent in images in the internet training set (Nicoletti & Bass, 2023). Images generated by other AI image creators also yield biases.
Adobe’s Firefly AI image generator tries to correct this by making the number of women or Black doctors proportional to the population of that group in the United States: half the images of doctors it generates are women, and 14% of the doctors are Black, even though only 6% of US doctors are Black (Howard, 2023). Firefly has been trained to increase the probability that a request for an image of a Supreme Court justice in 1960 will be a woman, even though Sandra Day O’Connor became the first woman appointed to the Court in 1981.
Bias can come from training data, but the well-intentioned Firefly examples highlight another set of potential problems: human reviewers who rate and provide feedback for the model’s output also have bias. If AIs can create images of the world as it could be or as it is, who gets to choose? Bias can also be hidden in network architecture, decoding algorithms, model goals, and perhaps more worrisome, in the undiscovered potential of these models.”
To address these issues, AI developers need to carefully consider data selection, model design, and ongoing testing for fairness and bias.
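To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The "historical hires" data and the scoring rule are invented purely for illustration (they are not drawn from the book or any real system): a toy hiring model trained only on skewed historical records ends up reproducing that skew, and a simple audit comparing selection rates across groups is one way ongoing testing can catch it.

```python
# A minimal, hypothetical sketch of how skewed training data produces a skewed
# model, and how a basic fairness audit can surface the problem. The data and
# scoring rule are invented for illustration only.

# Imagined historical hiring records: group A was hired far more often than group B.
historical = (
    [{"group": "A", "hired": True}] * 80
    + [{"group": "B", "hired": True}] * 20
    + [{"group": "A", "hired": False}] * 100
    + [{"group": "B", "hired": False}] * 100
)

def hire_rate(group):
    """A naive 'model' that simply learns each group's past hire rate."""
    rows = [r for r in historical if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

model = {g: hire_rate(g) for g in ("A", "B")}
print(model)  # ~0.44 for A vs ~0.17 for B: the model replays past bias

# A simple fairness audit: compare predicted selection rates for otherwise
# identical applicants from each group. A large gap is a signal to investigate.
THRESHOLD = 0.3
for g in ("A", "B"):
    print(g, "selected" if model[g] >= THRESHOLD else "rejected")
```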
The environmental concerns surrounding AI primarily involve massive energy consumption and the resulting carbon footprint.

Training large AI models requires an enormous amount of computing power, which translates to massive energy consumption. The larger the model and the more parameters it has, the more energy is needed for training. To put this in perspective, training a major AI model can consume as much electricity as hundreds of American homes use in a year. The carbon footprint is equally significant: training a single large AI model can produce as much carbon dioxide as five cars would emit over their entire lifetimes. This energy-intensive process is necessary for the AI to learn from its training data and develop its capabilities, but it comes at a substantial environmental cost.
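For a rough sense of scale, here is a back-of-the-envelope calculation using two hedged, approximate figures: one published estimate of roughly 1,300 MWh to train a GPT-3-scale model, and an average US household's annual electricity use of about 10,600 kWh. Newer and larger models can consume many times more.

```python
# A rough back-of-the-envelope sketch of the scale involved. Both figures are
# hedged estimates, not measurements: ~1,300 MWh is one published estimate for
# training a GPT-3-scale model, and ~10,600 kWh is roughly the average annual
# electricity use of a US household. Larger, newer models use many times more.

training_energy_kwh = 1_300_000    # assumed: ~1,300 MWh to train one large model
household_kwh_per_year = 10_600    # assumed: average US home, per year

homes = training_energy_kwh / household_kwh_per_year
print(f"Training one such model uses about {homes:.0f} US homes' electricity for a year")
```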
In addition, data centers that house AI systems require constant power for both computing and cooling, with many still relying on fossil fuels. These facilities also use significant amounts of water in their cooling systems. The environmental impact extends beyond just the training phase, as the ongoing operation of AI systems continues to consume substantial energy - every interaction with an AI tool requires computing power, and popular AI services serving millions of users daily have a significant cumulative energy footprint.
The manufacturing of specialized computer chips needed for AI systems presents another environmental challenge. These chips require rare earth minerals, whose mining can cause significant ecosystem damage. Additionally, these components often have relatively short lifespans and contribute to the growing problem of electronic waste.

While AI's training and operation do have a large carbon footprint, these systems can help create efficiencies and solutions whose environmental benefits extend far beyond that footprint. Think of it like investing energy now to build solar panels. There's an upfront environmental cost, but the long-term environmental benefits can outweigh it. The key is continuing to make AI systems more energy efficient while focusing their use on environmental solutions.
In the long run, AI might be able to help fight climate change and reduce environmental impact. AI systems can optimize energy grids to use more renewable energy efficiently, improve building energy usage, reduce waste in manufacturing, and design more efficient transportation routes. For example, Google's DeepMind AI reduced the energy used for data center cooling by 40% by optimizing the cooling systems.
In agriculture, AI helps optimize water usage and reduce pesticide use. In conservation, it tracks wildlife populations and monitors deforestation. In weather forecasting, it improves climate modeling and natural disaster prediction. In materials science, it speeds up research into new sustainable materials and better batteries.

It's true! AI uses a lot of energy and the environmental costs are real. But at this point, using AI is like driving a car: while your individual impact contributes to greenhouse gases, it's negligible compared to the Big Actors' footprint. AI is now deeply woven into the fabric of modern life – from securing your bank transactions to powering your social media feeds to delivering your search results. Choosing not to use Large Language Models (LLMs) like ChatGPT does more to limit your participation in today's digital world than it does to help the environment.
Real change will come from consumers demanding renewable energy adoption from the tech giants that power AI. And we're seeing progress: Microsoft has committed to being carbon negative by 2030, Google already matches 100% of its energy use with renewable purchases, and Amazon has pledged to power its operations with 100% renewable energy by 2025. As more data centers shift to clean energy sources, AI's environmental impact can dramatically decrease. The key is to maintain pressure on these companies to accelerate their transition to sustainable energy.
What are other ethical concerns related to AI?
Just as the Internet revolutionized human society with sweeping changes - bringing both unprecedented connectivity and serious challenges like cybercrime and misinformation - AI's rapid development presents a complex mix of transformative benefits and significant ethical concerns that we must address as a society:
Privacy and Data Collection
AI systems collect and use massive amounts of personal data, often without explicit consent, raising serious concerns about privacy rights and data ownership, especially with technologies like facial recognition that can track and predict individual behavior.
Creative Rights and Attribution
The ability of AI to generate art, music, writing, and other creative works based on existing artists' work raises serious concerns about copyright, fair compensation, and the future of creative professions, especially since many AI systems were trained on that work without explicit permission or compensation. The ethical issues are particularly complex because AI-generated art exists in a legal grey area where copyright laws haven't fully caught up with the technology.
Job Displacement
As AI automates tasks traditionally performed by humans, entire industries face disruption, requiring society to address both the economic impact of job losses and the need for workforce retraining and transition.
Control and Safety
The increasing power of AI systems raises critical questions about who controls their development and deployment, along with concerns about potential misuse and the need for robust safety measures and oversight.
Access and Inequality
The uneven distribution of AI tools and benefits risks widening existing social and economic gaps, creating a digital divide between those who can access and benefit from AI technology and those who cannot.
Military and Weapons Applications
The development of AI-powered autonomous weapons and military systems raises profound questions about human control over military decisions and the implications for international security.
As AI systems become more powerful and integrated into society, issues like privacy, unequal access, and the continued development of autonomous weapons will become even more vital to address. Many organizations and governments are working to develop ethical guidelines and regulations to address these concerns, and citizens can continue to advocate for thoughtful, comprehensive guidance and restrictions.

As a teacher, you can take several important steps to protect your students' privacy when using AI tools.
First up, never, ever put student information into an AI tool. You can describe a student generally without using names, dates, locations, or any other identifying information. This protects their data regardless of whether other safeguards are in place.
The second-best way to protect students is to use school-sanctioned AI tools like Khanmigo, School AI, or MagicSchool AI that comply with student privacy laws. These paid tools often pull from ChatGPT or other Large Language Models (LLMs) without feeding information back into AI training.
The third-best way to support student safety is to teach AI literacy. By informing yourself and your students about what AI is, how the systems are trained, what counts as personal information, and how to interact safely with AI tools, you will better equip students to protect themselves beyond the classroom.
