2026-05-07 15:25
A thin sliver of Earth’s edge is brightly illuminated against the vast darkness of space in this April 3, 2026, image taken during the Artemis II mission. Artemis II was the first crewed flight in a series of missions to test NASA’s human deep space capabilities, paving the way for future lunar surface missions.
See more imagery from the Artemis II mission.
Image credit: NASA
2026-05-07 14:44

Through NASA, a university-designed small spacecraft is paving the way to studying particles, known as neutrinos, that move through the universe at near-light speeds. The Solar Neutrino Astro-Particle PhYsics CubeSat, known as SNAPPY, launched at 12 a.m. PDT on Sunday aboard a SpaceX Falcon 9 rocket from Space Launch Complex 4 East at Vandenberg Space Force Base in California and was deployed by launch integrator Exolaunch.
The SNAPPY project will test a prototype solar neutrino detector in low Earth polar orbit. Weighing approximately half a pound, the prototype detector consists of four crystals and is encased in a shielding block made of epoxy loaded with tungsten dust to match the density of steel. The detector and a dedicated electronics stack for power and readout purposes are housed inside a CubeSat platform from Kongsberg NanoAvionics.
The idea behind SNAPPY was sparked by interest in NASA’s Parker Solar Probe mission. As the probe prepared to become the first spacecraft to fly through the Sun’s corona, Nick Solomey, a professor of mathematics, statistics, and physics at Wichita State University, was inspired knowing the spacecraft would pass an area where the solar neutrino flux, the rate of particles passing through a specific area, is nearly 1,000 times stronger than what reaches Earth.
“All life on Earth – past, present, and future – relies on the Sun,” remarked Solomey, whose career is centered on elementary particle physics. “We must work to understand this ball of energy to the best of our abilities because it’s what makes life on Earth possible.”
Neutrinos are believed to be the second most abundant fundamental particles in the universe and could help us better understand the structure of the universe, the origin of mass, and the core of the Sun itself. On Earth, neutrino detectors must be buried deep underground to isolate their extremely faint signals. Using what we learn from SNAPPY, a future mission may one day place a detector closer to the Sun, allowing scientists to observe and study solar neutrinos in a completely new way.
Before such a mission is possible, researchers must understand how a neutrino detector performs in space, and SNAPPY is designed to take the critical first step. This includes proving it can operate reliably in orbit and eliminating signatures from other activities, such as energy interactions, that could mimic a true neutrino interaction in space. These measurements will help scientists determine whether a future large detector positioned closer to the Sun is feasible.
Through NASA’s Innovative Advanced Concepts program, within the Space Technology Mission Directorate, SNAPPY was selected for a Phase I award in 2018, followed by a Phase II award in 2019, and a Phase III award in 2021, helping mature the project from its early studies through flight demonstration.
NASA’s Marshall Space Flight Center in Huntsville, Alabama, designed and built the dedicated electronic readout cards for the SNAPPY detector, and Wichita State University graduate students programmed the payload computer to interact with the electronics.
To date, 36 graduate and undergraduate students have had the opportunity to work on the SNAPPY project. This achievement reflects the dedication of experts across the agency and academia, including NASA Marshall, NASA’s Jet Propulsion Laboratory in Southern California, the University of Minnesota, the University of Michigan, and South Dakota State University.
To learn more, visit:
https://www.nasa.gov/about-niac/
2026-05-07 14:23
A team of researchers from Adelaide University and the SmartSat Cooperative Research Center in South Australia has successfully uploaded and demonstrated NASA and IBM’s open-source Prithvi Geospatial artificial intelligence (AI) foundation model aboard two in-orbit platforms, making it the first geospatial foundation model to be deployed in orbit. Trained on 13 years’ worth of data, Prithvi can facilitate a wide variety of Earth observation tasks.
By uploading a compressed version of Prithvi to the South Australian government’s Kanyini satellite and to the Thales Alenia Space IMAGIN-e (ISS Mounted Accessible Global Imaging Nod-e) payload aboard the International Space Station, the researchers tested the model’s flood and cloud detection performance across two different orbiting platforms and computing environments.
The team chose Prithvi for their research because of its strong generalization across Earth observation tasks, and because of its availability as an open-source model.
“If Prithvi weren’t open source, I would have to train my own foundation model,” said Dr. Andrew Du, the project’s lead researcher, who is a postdoctoral researcher at Adelaide University and an AI engineer at the SmartSat Cooperative Research Center. “Having that model openly available saved a lot of time and effort.”
A foundation model is an AI model trained on an enormous amount of unlabeled data, which allows the model to begin detecting patterns in the data that humans wouldn’t notice on their own. The model can then be fine-tuned for specific applications using much smaller amounts of labeled data.
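The pretrain-then-fine-tune pattern described above can be illustrated with a toy sketch. This is not Prithvi's actual architecture: a frozen random projection stands in for the pretrained backbone, and only a small linear decoder head is fit on a handful of labeled examples, which is the part that would actually be trained for a new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained foundation-model encoder: a fixed (frozen)
# feature extractor. In a real system this would be a large transformer
# backbone; here it is just a frozen random projection for illustration.
W_frozen = rng.normal(size=(16, 8))

def encode(x):
    """Frozen encoder: maps raw inputs to feature vectors."""
    return np.tanh(x @ W_frozen)

# Fine-tuning: fit ONLY a small linear decoder head on a handful of
# labeled examples, leaving the encoder untouched.
X_labeled = rng.normal(size=(20, 16))                  # small labeled dataset
y_labeled = (X_labeled.sum(axis=1) > 0).astype(float)  # toy binary labels

features = encode(X_labeled)
# Least-squares fit of the decoder weights (8 weights + 1 bias).
A = np.hstack([features, np.ones((len(features), 1))])
decoder, *_ = np.linalg.lstsq(A, y_labeled, rcond=None)

def predict(x):
    f = encode(x)
    return np.hstack([f, np.ones((len(f), 1))]) @ decoder

preds = (predict(X_labeled) > 0.5).astype(float)
accuracy = (preds == y_labeled).mean()
print(f"training accuracy with frozen encoder: {accuracy:.2f}")
```

The point of the sketch is the division of labor: the expensive pretrained encoder never changes, and adapting to a new task means fitting only the tiny decoder.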
“Prithvi is the first model of its kind to be deployed in orbit, and that demonstrates exactly why we make our AI models open source,” said Kevin Murphy, chief science data officer at NASA Headquarters in Washington, whose office led the collaboration that created Prithvi. “By sharing these tools with anyone who wants to use them, we accelerate scientific and technological development into the future.”
Developed by a team of data scientists from IBM and NASA’s IMPACT team within the Office of Data Science and Informatics at NASA’s Marshall Space Flight Center in Huntsville, Alabama, the Prithvi Geospatial model was trained on the Harmonized Landsat and Sentinel-2 dataset. This dataset compiles over a decade of global geospatial data from NASA’s Landsat and ESA (European Space Agency) Sentinel-2 satellites. Prithvi can be adapted for tasks such as mapping flood plains, monitoring disasters, and predicting crop yields.
Earth-observing satellites collect enormous amounts of data about our planet. Processing and analyzing the data in orbit before the satellite sends it back to Earth can help researchers gain insights more quickly. However, active satellites often can’t accept large software updates because of bandwidth limits, so the AI models they carry for data analysis tend to be lightweight and highly specialized.
Researchers can use the flexibility of a foundation model to facilitate a wide range of Earth observation tasks in one software architecture. If they want the model to take on a new task once the satellite is in orbit, they only need to upload a small extra decoder package – using far less bandwidth than uploading a whole new model to the satellite.
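As a back-of-the-envelope illustration of that bandwidth argument (the parameter counts below are assumed for illustration, not actual Prithvi or mission figures), compare shipping only a decoder head against re-uploading a full backbone:

```python
# Illustrative parameter counts (assumed, not actual Prithvi figures):
# a ~100M-parameter backbone vs. a small task-specific decoder head.
BYTES_PER_PARAM = 4  # float32

backbone_params = 100_000_000   # assumed full-model size
decoder_params = 2_000_000      # assumed task-decoder size

full_upload_mb = backbone_params * BYTES_PER_PARAM / 1e6
decoder_upload_mb = decoder_params * BYTES_PER_PARAM / 1e6

print(f"full model upload:   {full_upload_mb:.0f} MB")    # 400 MB
print(f"decoder-only upload: {decoder_upload_mb:.0f} MB") # 8 MB
print(f"bandwidth saved:     "
      f"{100 * (1 - decoder_params / backbone_params):.0f}%")
```

Under these assumed sizes, adding a new task costs about 2% of the bandwidth of replacing the whole model, which is what makes in-orbit task updates practical.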

Sending Prithvi to orbit is an early demonstration of how foundation models could transform Earth observation. In addition to data analysis, foundation models could eventually help scientists interact with the instruments collecting the data.
“A large language model is also a type of foundation model,” Du said. “In the future, this could allow operators to interact with satellites in natural language, asking questions about onboard data or system status and receiving responses in a conversational way.”
The NASA team behind Prithvi continues to work on open-source foundation models trained on NASA data. A heliophysics model, Surya, was released in 2025, and the team intends to create foundation models for planetary science, astrophysics, and biological and physical sciences as well.
The Prithvi Geospatial foundation model is funded by the Office of the Chief Science Data Officer within NASA’s Science Mission Directorate at NASA Headquarters in Washington. The Office of the Chief Science Data Officer advances scientific discovery through innovative applications and partnerships in data science, advanced analytics, and artificial intelligence. To learn more about NASA’s AI foundation models and other AI tools for science, visit:
https://science.nasa.gov/artificial-intelligence-science
By Lauren Leese
Web Content Strategist for the Office of the Chief Science Data Officer
2026-05-07 14:07
The four crew members of NASA’s Mars simulation marked 200 days of their 378-day Red Planet mission on May 7. Currently, the crew is in a simulated two‑week loss‑of‑signal period that mimics a Mars-Earth communications blackout when Mars moves behind the Sun. During this blackout, the crew works without contact with mission control, using preplanned procedures and available resources to complete tasks and handle any issues that may arise.
The CHAPEA (Crew Health and Performance Exploration Analog) mission 2 crew, commanded by Ross Elder and including medical officer Ellen Ellis, science officer Matthew Montgomery, and flight engineer James Spicer, entered the 3D-printed habitat at NASA’s Johnson Space Center in Houston on Oct. 19 last year. They will exit in about six months, on Oct. 31.
“I’m proud of the crew’s accomplishments over the past 200 days — facing each challenge with fortitude and finding new ways to improve our performance and efficiency daily,” said Ellis.
Now over halfway through the mission, the crew continues to provide NASA with valuable insights and data on how humans adapt to isolation, confinement, and resource limitations — all critical factors for future exploration of the Moon and Mars.
“We approach every day committed to doing our best work, whether we’re doing a simulated spacewalk, geology, exercise, a medical activity, or anything in between,” said Spicer. “What keeps us motivated is knowing that we’re contributing directly to NASA’s deep space exploration objectives.”
The crew has completed robotic operations, performed habitat maintenance, and grown crops inside the 1,700-square-foot habitat. Crew members also experience mission constraints such as delayed communications, limited supplies, and simulated equipment malfunctions. These realistic stressors are designed to help researchers better understand how crews perform under pressure during deep space missions.
“Having limited resources, be it tools, equipment, software, supplies, or no internet, really bounds what you have to solve problems,” said Montgomery. “Finding creative and clever solutions has been both challenging and rewarding.”
A key objective of NASA’s CHAPEA missions is to gather data on cognitive and physical performance during extended isolation. Researchers monitor how the crew adapts to the environment, manages stress, and maintains productivity. The data will help NASA refine mission planning, habitat design, and support systems for future long-duration missions.
“Extended-duration missions are relatively rare in NASA’s history to date,” said Sara Whiting, project scientist and mission manager at Johnson for NASA’s Human Research Program. “The operational lessons learned, along with the detailed health and performance data this crew is providing, come at the perfect time to inform the development of a sustainable lunar presence and longer-term objectives for crewed Mars missions.”
As NASA advances toward its long-term goal of human exploration of Mars, simulated missions like CHAPEA are essential to understanding how to keep astronauts healthy, safe, and mission-ready — both during the journey and on the surface of another world.
____
NASA’s Human Research Program
NASA’s Human Research Program pursues methods and technologies to support safe, productive human space travel. Through science conducted in laboratories, ground-based analogs, commercial missions, the International Space Station and Artemis missions, the program scrutinizes how spaceflight affects human bodies and behaviors. Such research drives the program’s quest to innovate ways that keep astronauts healthy and mission ready as human space exploration expands to the Moon, Mars, and beyond.
2026-05-07 10:00

A team of Cornell University students is turning heads within industry and the federal government with the results of its research into creating a national air transportation management system in which thousands of drones could safely operate together.
NASA is sponsoring their work through the University Student Research Challenge (USRC), which provides grants to college students interested in helping the agency realize its aeronautical research goals.
“Looking at new traffic management systems for drones is not new,” said Mehrnaz Sabet, a doctoral student in the field of information science who serves as principal investigator on the grant and leads the Cornell team. “In fact, NASA has led that effort for years.”
Now, through USRC, NASA is giving Sabet and her team the chance to offer up innovative approaches to drone safety by managing their movements in the air, taking advantage of their young minds and fresh ideas.
The ultimate benefit of Cornell’s research in this area is the full realization of advanced air mobility, an area of industry focus that includes everything from urban flying taxis and more robust disaster response aircraft to hot, fresh pizza delivered right to your door.
The work also underscores the value NASA places on maturing cutting-edge technologies and helping to develop its future workforce through initiatives like USRC.
“Sabet and her team have demonstrated versatile skills involving software, algorithms, hardware, sensors development, laboratory tests, simulations, and actual flight tests – a rare combination,” said Parimal Kopardekar, acting director of NASA’s Airspace Operations and Safety Program.
Currently, drone operators must file plans that fully describe the intended flight path of the drone with a traffic management service. Those plans are checked against others to ensure there will be no collisions – what Sabet calls strategic deconfliction.
The challenge is that today’s air traffic management system is limited in its ability to handle the growing number of aircraft taking to the sky. Adding thousands of drones to the mix during the coming years risks overburdening the system, Sabet said.
What is needed in the air is essentially what we have on the ground – where millions of people drive on a road every day, she said.
As a driver you might know your whole “trajectory,” or the path you’d follow to reach your destination. But you would never coordinate your plan with every other driver on the road before you leave. Instead, traffic laws and infrastructure such as stop lights and traffic signs allow you to deconflict with other cars as you go.
Drone operators will still have to file flight plans saying where they intend to go, but the idea is to incorporate that car-like flexibility into drone operating systems, allowing them to be adaptable during their journeys.
“We need to ensure all these different types of drones can tactically deconflict with each other so that it is safe for them to operate like cars do on the ground. And that missing piece – tactical deconfliction – is at the center of our project,” Sabet said.
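Tactical deconfliction can be sketched in miniature. The following is illustrative only, not the Cornell team's algorithm: two simulated drones on a head-on course extrapolate their trajectories a few seconds ahead, and when the predicted separation falls below a threshold, one of them initiates a climb, resolving the conflict in flight rather than in a pre-filed plan.

```python
import numpy as np

SAFE_SEP = 10.0   # assumed minimum separation, meters
LOOKAHEAD = 5.0   # seconds of straight-line trajectory prediction
DT = 0.5          # simulation step, seconds

# Two drones on a head-on course along the x-axis:
# position [x, y, z] in meters and velocity in m/s.
pos = np.array([[0.0, 0.0, 50.0], [100.0, 0.0, 50.0]])
vel = np.array([[5.0, 0.0, 0.0], [-5.0, 0.0, 0.0]])

def predicted_conflict(p, v):
    """True if straight-line extrapolation brings the pair within
    SAFE_SEP at any sampled time in the lookahead window."""
    for t in np.arange(0.0, LOOKAHEAD, DT):
        future = p + v * t
        if np.linalg.norm(future[0] - future[1]) < SAFE_SEP:
            return True
    return False

min_sep = np.inf
climbing = False
for _ in range(60):  # 30 seconds of flight
    if not climbing and predicted_conflict(pos, vel):
        climbing = True   # simple priority rule:
        vel[1, 2] = 3.0   # the second drone climbs to deconflict
    pos = pos + vel * DT
    min_sep = min(min_sep, float(np.linalg.norm(pos[0] - pos[1])))

print(f"closest approach with avoidance: {min_sep:.1f} m")
```

Without the avoidance step the two drones would meet head-on; with it, the predicted conflict triggers a climb early enough that they pass with separation above the threshold. A real system would also need coordination so both aircraft agree on who maneuvers.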

The key to the Cornell team’s research is the notion of integrating a simulated world with the real one to test and demonstrate how drones can learn to adapt to potentially hazardous conditions and make necessary corrections in their flight path on their own.
Knowing they could not go out and fly 100 drones at the same time to test their ideas for tactical deconfliction, the students decided to create an entirely virtual urban world to evaluate different high-volume traffic models, separation algorithms, and related data.
“Our first year of the project went into adapting and scaling that simulation engine and it all went very well,” Sabet said. “But we didn’t want to stick to a simulation. We wanted to see how the simulation translated to the real world, which mattered more.”
Still hampered by the limitations of how many drones they could operate and where they could fly – not many and basically in the middle of nowhere – they sought the best of both worlds, real and imagined.
“What we wound up doing was to embed the simulation into a real drone, so the drone thought it was flying in a dense urban environment although it was actually flying out in an open field where there wasn’t a real city in sight,” Sabet said.
This allowed the team to try out different traffic management tools and evaluate how drones might coordinate course corrections and avoid collisions with each other.
During the past year, they’ve taken the idea further by flying two real drones in the real world, each running the real-time simulation on board, allowing them to coordinate and “see” both simulated traffic and each other within the integrated test environment.
“We would then intentionally put them on a direct collision course to stress-test the detect and avoid and coordination models and see how well they react and coordinate the drone’s maneuvers to avoid hitting each other,” Sabet said.
Their success struck a chord with NASA experts in Unmanned Aircraft Systems Traffic Management (UTM).
“What’s impressive is that Cornell’s study included over 10,000 runs involving more than one million trajectories, and over 200,000 hours of experimentation to understand how multi-agent decentralized coordination would safely take place,” Kopardekar said.
Industry and the Federal Aviation Administration have also responded positively to this research and its potential. The team was asked to use its infrastructure and technology to virtually recreate an incident in 2025 in which a pair of drones collided with a stationary crane in Arizona. The team also showed how the accident could have been prevented.
The team was also asked to simulate recent, real-world fires in California to showcase how drones could better coordinate their movements both to provide situational awareness for public safety officials on the ground and to stay clear of fire-suppressing air tankers.
And according to the Cornell team, the FAA is interested in applying the project’s mix of virtual and real-world testing to evaluate drone operations under increasing levels of operational complexity.
“This kind of mixed-reality setup enables them to test drone operations in a way that was not possible before,” Sabet said.
Thanks to NASA’s support through USRC, the Cornell team will continue to expand their capabilities and manage increasingly complex advanced air mobility operations.
“Our goal is to build the foundational systems that enable safe, large-scale autonomy in the skies,” Sabet said.
USRC is an opportunity within NASA’s Transformative Aeronautics Concepts Program under the agency’s Aeronautics Research Mission Directorate.
Jim Banke is a veteran aviation and aerospace communicator with more than 40 years of experience as a writer, producer, consultant, and project manager based at Cape Canaveral, Florida. He is part of NASA Aeronautics' Strategic Communications Team and is Managing Editor for the Aeronautics topic on nasa.gov. In 2007 he was recognized with a Distinguished Public Service Medal, NASA's highest honor for a non-government employee.