She asked a robot about race. The answer scared her

Transdisciplinary artist Stephanie Dinkins challenges us to rethink what we feed our machines—and asks what AI might become if it were trained on care


Twelve years ago Stephanie Dinkins traveled to Vermont to meet a robot. Bina48, a humanoid bust with dark skin, was designed to hold conversations about memory, identity and consciousness. Dinkins, a photographer by training, wanted to understand how a Black woman had become the model for one of the world’s most advanced social robots—and whether she could befriend it.

What she found during that encounter launched a decade of work that has made Dinkins one of the most influential artists exploring artificial intelligence.

Dinkins grew up in Tottenville’s enclave of Black families at the southern tip of Staten Island. Her grandmother tended a flower garden with such care that even reluctant neighbors came to admire it and then stayed to talk. Dinkins has described this as her first lesson in art as social practice—using beauty to build community.


Today she asks a simple but revolutionary question: What might our machines become if they were trained on that same level of care and human experience? She challenges the ways that AI is often used, showing that data can be intimate, culturally rooted and deeply alive. Through public-facing art installations at places such as the Smithsonian Arts + Industries Building in Washington, D.C., and the Queens Museum in New York City, she encourages people to reflect on technology, power and responsibility.

Scientific American spoke to Dinkins about the violence hidden in datasets and why communities must gift their stories to AI so that it can understand them on their own terms.

[An edited transcript of the interview follows.]

You’ve described your first meeting with the Bina48 robot as a turning point in your career. What were you expecting to find, and what actually happened?

I thought that if I could befriend the robot, it could let me in on where it thought it fit between humans and technology. But as I spoke to Bina48, it became apparent that some of her answers felt flat alongside her representative self. If I asked her about race, she didn’t have the deepest answers or the most nuanced answers as a Black woman figure, and that scared me. If these people who have really good intentions are producing something that is seemingly flat, then what happens when people aren’t even concerned with these questions?

How did that realization shape your work?

It shaped everything. Here in New York [City], I lived in a neighborhood that was predominantly Black and brown. I was wondering if we knew what was coming, if people were thinking about what the systems would do in their world. At the time, ProPublica did an article on judges and sentencing in terms of AI and how they would use sentencing software to come up with how long someone would stay in jail. And that was built on biased data, the historical biased data of a historically biased system, the judicial system, which I equate to a “Black tax.” We have to figure out ways to contend with this because you’re automatically getting more time just by being Black, now, because a machine said so.

I made a project called Not the Only One, which is based on my family. It started as a memoir, really trying to pass down the knowledge from my grandmother so that even two generations on from her would still have some touchpoints of her ethos. It’s an oral history project where we recorded interviews with three women in my family, and then I was forced to find foundational data to support it. It was hard to find base data that didn’t feel violent or felt loving enough to put my family on top of.

How did you define violence in a dataset, and how did you solve for it?

When I think about violence in data, I think, really, about a linguistic violence or a kind of labeling or stereotyping that happens in our popular media. If we’re thinking about a dataset based on movies, what roles Black people could play in films was limited: servitude, the friend—always the supportive friend but not the protagonist—the relegation to a background character instead of one who is a star in one’s own life. I think not being able to inhabit those roles is a sort of violence. So the challenge became to build a base set of language that I felt actually would buoy my family and not pull it down.

I finally wound up trying to make my own dataset. Not the Only One was based on a dataset of 40,000 lines of extra data beyond the oral histories, which is very small, so the piece is very wonky. It sometimes answers correctly, and sometimes it speaks in complete non sequiturs. I prefer that to just sitting my family’s history atop historic cruelty.

How did that project shape the next projects that you did?

That made me think about the value of small, community-minded data. We as humans have always told stories to orient ourselves, to tell ourselves what the values are. So what would happen if we gave—and really, I think about gifting—the AI world some of that information so it knows us better from the inside out? I created an app called The Stories We Tell Our Machines to let people do exactly that.

That’s my quest at the moment, convincing people that that’s a good idea because what we hear out in the world is, “No, they’re taking our data. We’re being exploited,” which we are. But also, we know that if we do not nurture these systems to know us better, they are likely using definitions that did not come from the communities being defined. The quest is truly: What would it look like if the data used mimicked global population?

The next step is to take that data and start to make a dataset that can be widely distributed to help fine-tune or train other systems. I’m starting to talk to computer scientists about how we can do this in a way that does not denature the stories but makes them widely usable.

Can you give an example of how AI could offer opportunities to people who have historically been underprivileged?

I’m waiting for an underprivileged kid with not a lot of money to produce some spectacular film using a computer and AI tools that competes with a Hollywood movie. I think that’s possible.

A version of this article appeared in the March 2026 issue of Scientific American as “Stephanie Dinkins.”

Deni Ellis Béchard is Scientific American’s senior writer for technology. He is the author of 10 books and has received a Commonwealth Writers’ Prize, a Midwest Book Award and a Nautilus Book Award for investigative journalism. He holds two master’s degrees in literature, as well as a master’s degree in biology from Harvard University. His most recent novel, We Are Dreams in the Eternal Machine, explores the ways that artificial intelligence could transform humanity. You can follow him on X, Instagram and Bluesky @denibechard.

This article was published with the title “AI at Work: Stephanie Dinkins” in Scientific American Magazine Vol. 334 No. 3, p. 29.
doi:10.1038/scientificamerican032026-52gIanMicDCG7OLgEJPxJn
