The DSA Has No Plan for AI
The NY ChatBot Bill Demonstrates the Left's Lack of Leadership
by Alexei Gannon and Zachary Jones
This is a compilation of two essays from One Thousand Means.
An exclusive postscript by Alexei Gannon has been included in this edition.
What does the NY Chatbot Bill Tell us About the Left?
Zachary Jones
This week, a tweet from More Perfect Union made the wider AI policy community and New York City socialist movement aware of S7263, Socialist-in-Office Kristen Gonzalez’s chatbot use bill that would prohibit AI models from being used to provide responses covered by a list of licensed professions in New York State, including medicine, dentistry, pharmacy, social work, and psychology.
I am a member of the New York City Democratic Socialists of America, this country’s most vibrant left-wing organization, the mass political movement that lifted a pragmatic social-democrat into the mayoralty and is busy decimating the Democratic political machines that have held back the dynamism of the greatest city in the world for decades. We have on our side elected representatives pursuing legislation to ensure that our government acts with ambition and drive, free from the constraints of bureaucratic proceduralism. The DSA’s House the Future campaign is pushing for legislation that establishes a state social housing authority that would have complete preemption from local zoning, and it is because of Zohran Mamdani that New York is moving towards reform of its sclerotic system of environmental review.
There are, unfortunately, some pretty serious and incredibly stupid blind spots.
I have been aware of Gonzalez’s bill for some time, as it was introduced on April 7th, 2025, but it has now been placed on the Senate floor, meaning a full chamber vote is imminent. Before sponsoring this bill, Gonzalez, a former tech worker, proposed S9381, a general chatbot liability bill covering misleading and harmful information and requiring disclosure that users are interacting with AI. S9381 was far less dangerous, but it unfortunately died in committee. S7263 goes far beyond liability for misinformation, and now includes the following language to amend the General Business Law:
A proprietor of a chatbot shall not permit such chatbot to provide any substantive response, information, or advice, or take any action which, if taken by a natural person, would constitute a crime under section sixty-five hundred twelve or sixty-five hundred thirteen of the education law
This covers the following professions: medicine, dentistry & dental hygiene, veterinary medicine and animal health technology, physical therapy and physical therapist assistants, pharmacy, nursing, podiatry, optometry, engineering, land surveying & geology, architecture, psychology, social work, mental health counseling, marriage and family therapy, creative arts therapy, psychoanalysis, and, separately, attorney practice. Notably, engineering here refers to licensed professional engineering work, including structural calculations, site assessments, or safety-critical design decisions, and proprietor refers to whoever “owns, operates or deploys a chatbot system used to interact with users”, which applies to any organization that decides to use a model in its operations, including non-profits, legal aid groups, or even government agencies.
The bill’s structure utilizes the logic of existing unauthorized practice law rather than creating a new regulatory category, cross-referencing criminal statutes (Ed. Law §§6512–6513, Judiciary Law Article 15) rather than defining the prohibited conduct independently. The proprietor of the chatbot is liable if the output would constitute a crime if produced by a human; the bill goes even further by omitting any disclaimer defense that would shift reliance risk to the user, generating a strict liability standard on the output’s content.
Unlike the earlier S9381, which allowed proprietors to escape liability by correcting information and curing harm within a 30-day period, S7263 has no remediation window. Its enforcement mechanism is a private right of action with fee-shifting on willful violations, meaning that the defendant pays the plaintiff’s attorneys’ fees and costs. This makes low-value cases worth filing and will inundate any organization deploying a large language model in the listed fields, or even adjacent ones, with suits.
The lack of a definition of a “substantive response” provides no guide to action and will create a chilling effect on AI use in general, which is likely the intention. However, we can draw out some use-cases that would likely be prohibited:
A legal aid nonprofit using AI to help tenants facing eviction understand their rights: a chatbot that helps low-income tenants identify whether their landlord’s behavior violates the housing code, explains the timeline of an eviction proceeding, or helps them draft a response to a notice. Under S7263, this would constitute unauthorized practice of law under Judiciary Law Article 15. The nonprofit itself would be liable as the proprietor, with no disclaimer defense available and no cure period. The people harmed are tenants who can’t afford an attorney. As One Thousand Means has written before, AI could present solutions to the access-to-justice problems that prevent the working-class people the DSA was founded to fight for from navigating the legal system.
A rural pharmacy or community health center using AI to flag dangerous drug interactions. A patient in a medically underserved area submits their medication list and an AI tool flags that combining two of their prescriptions poses a serious interaction risk, advising them to consult their doctor. This is a substantive response in the domain of pharmacy and medicine. The bill would expose the health center to strict liability and plaintiff-side fee-shifting lawsuits for deploying the tool, even though the alternative is that the interaction goes unnoticed entirely.
A free mental health triage chatbot that helps people in crisis find appropriate care. A tool like this might ask someone about their symptoms, suggest that what they’re describing sounds like it could be a panic attack rather than a heart attack, and direct them to an appropriate provider or crisis line. This falls squarely within psychology, social work, and mental health counseling.
If any supporters of the bill would like to contest the idea that all of the above examples are obviously good, they should feel free to do so.
This bill would make New York the most restrictive AI environment in the world, driving out startups building AI-assisted tools in covered fields and reducing New York’s leverage to actually regulate with an intelligently-designed framework.
A couple other notable problems include:
The open-source question seems unresolvable. If someone self-hosts Mistral and uses it to get legal information, who is the proprietor? This especially harms organizations more likely to deploy open-source models: non-profits and academic institutions.
If it passes, AI companies will weaponize it politically to militate against any and all regulation.
The NYC-DSA Tech Action Working Group has been catastrophically absent from the central questions of transformative AI: existential risk and capital ownership. Strangely, I am sympathetic to their ‘Principles on Technology’:
Technology must belong to and serve the people. This means building technologies that directly improve the material conditions of society, decrease human suffering, uphold human dignity and strengthen the solidarities that exist among us.
So true! You will notice that this bill does not guarantee the improvement of material conditions and in fact just bans a series of use-cases!
What causes a DSA working group to help craft a bill so obviously counter to the material interests of working people? What causes a DSA elected to support it?
Quite clearly, this is raw rent seeking by the credentialed professionals who compose the base of the DSA in New York City, a class that I am proud to belong to. The careful reader may note that this bill does not cover all licensed professions in New York State, excluding landscape architecture, public accountancy, and professional midwifery. Are these professions not deserving of protection under this framework? Or are there simply not many midwives in our mass socialist organization? This bill recruits AI policy as an enforcement arm of existing professional licensing cartels that restrict supply and inflate prices. The working class pays here, while the therapists collect $300/hour.
Rent-seeking, parochial licensing protections do not protect workers when open confrontation with capital emerges. In a slide discussing Gonzalez’s NY AI Act, a separate piece of safety legislation, the working group lists among its considerations “what other AI use cases should be prohibited?” Instead of engaging with a broader social struggle for collective steering of our economy, this bill represents a left that has retreated into the defensive and economically destructive posture of conservatism, of a back-against-the-wall fear of any and all change. The logic that undergirds this bill would justify preventing the automation of any and all work, freezing the world in amber.
What is continually strange to me is that the same individuals supporting this bill are also skeptical of existential risk. Gonzalez was, according to several people involved, the hardest Democratic vote to secure for Alex Bores’s RAISE Act, which the NYC-DSA Tech Action Working Group noted at the time had ‘effective altruist connections’.
While Bernie Sanders visits Constellation Research Center and meets with Eliezer Yudkowsky and Daniel Kokotajlo to discuss existential risk, much of the left retreats from the technology entirely. This profoundly unserious approach ignores the real risks that the growth of AI capabilities presents to human welfare and human life, discarding them in favor of parochial rent seeking and generic opposition to growth and to new technologies developed after 2012.
There is an alternative. We could have a truly progressive left that recognizes that the automation of human labor for the benefit of all is one of our central objectives, while working towards building the scientific and state capacity to foreclose existential risk and ensure a flourishing future.
At last, human hands, free from the plow!
Please call your state assembly members and state senators to oppose this bill, especially if you are a member of the Democratic Socialists of America and are represented by a Socialist-in-Office.
The Left Must Plan For AI
Alexei Gannon
We should not be haunted by the specter of being automated out of work. We should be excited by that. But the reason we’re not excited by it is because we live in a society where if you don’t have a job, you are left to die. And that is, at its core, our problem.
Alexandria Ocasio-Cortez, 2019
Automation is on the way
Last year, Amazon’s one-millionth robot was deployed under the direction of DeepFleet, an automated system that coordinates robot swarms to run unmanned fulfillment factories. The length of software-engineering tasks LLMs like ChatGPT can complete has been doubling year-over-year; we are now at four hours. Will AI take your job? Probably not yet, but entry-level employment is already declining in AI-exposed industries. In Q3, without any increase in hiring in the US economy, GDP increased at an annualized rate of 4%.
Sources: Employment from Amazon SEC filings. Robot deployment from Amazon disclosures (2019, 2025); intermediate years linearly interpolated.
It has become commonplace for leftists to express skepticism regarding the capabilities of LLMs: a general belief in technological “enshittification”, in which scientific development has been hijacked by financial capital to capture attention and generate hype for speculative assets. This view is basically true in the context of cryptocurrency and social media. However, given evidence of improving capabilities of AI and robotics alongside increasing deployment, we must address the possibility of mass automation. Those skeptical of human-level artificial general intelligence often balk at this premise, but even sub-human models that can complete domain-specific tasks like driving or coding may cause layoffs sooner rather than later. In other words, you don’t need fully general intelligence to achieve general automation. Debating whether AI can automate many jobs or all jobs is meaningful, but in either case it becomes clearer by the year that large parts of the economy can and will be automated.
Socialists must have a plan to direct an automated economy towards the benefit of all people. How could the threat of AI make clear the shared interests of working people and build a durable political coalition? How can we minimize the risk this technology poses to humans and the environment? What policies are sufficient to prevent an inegalitarian concentration of power?
If the left lacks a plan, the right will implement theirs.
Who will control this technology?
Artificial intelligence (AI) and robotics are going to have a profound and transformative impact on our country and the entire world. The question is not whether these technologies will advance. They will. The question is: Who will control this technology? Who will benefit from it? And who will be left behind?
Bernie Sanders, 2025
The Trump administration, along with the barons of technology and finance, is engaged in an open conspiracy to keep the benefits of automation to themselves. Venture capitalists like Marc Andreessen are funneling their capital into military contractors and political action committees to target politicians who threaten AI development. In return, President Trump has issued an executive order to entirely pre-empt all state AI regulation. A federal pre-emption on regulation hasn’t happened for oil, for tobacco, for any normal technology. While the working class might use automation to escape work, the oligarchs want automation to escape the worker.
Neoliberal visions of an automated economy allow for an unacceptable concentration of power and have no real understanding of political conflict. In this trajectory, those with equity in frontier AI companies will live like planetary aristocrats while some measly portion of future profits is doled out as UBI to a permanent underclass. The first mistake is ignoring that inequality is undesirable in and of itself. The second mistake is assuming the politicians and techno-capitalists who control AI will act benevolently towards society solely because improving the welfare of others would come at little cost. In fact, the history of powerful people doing evil-at-cost is quite rich! The solution to the concentration of wealth and power AI portends is exactly what socialist politics has always demanded: the reorganization of the economy directed from the bottom up.
Luckily, the leftist position on a post-scarcity future is already popular. Those who sincerely believe in artificial general intelligence have independently come to the conclusion that a mass redistribution of wealth might be necessary. If you poll the American public on who should own AI in a world where AI replaces human labor, 44% agree every citizen should own an equal share (27% are unsure). This is, conditional on mass automation, near-majority support for full-on communism. Pessimism about AI lines the Rust Belt; working-class voters who have swung towards the right still understand what it’s like to see their entire town laid off. Tech workers are starting to know the same feeling. There is an open niche for an AI-forward economic populist politics to unite the working class from California to Pennsylvania.
Unfortunately, establishment Democrats lack the backbone to stand up to the tech oligarchs. As a paradigmatic example, Governor of California and 2028 frontrunner Gavin Newsom has vetoed bills that would have regulated how chatbots interact with children, how employers use AI for hiring and workplace decisions, and mandated safety audits for frontier models. Even the RAISE Act—a bill introduced by a relative moderate, Alex Bores, that requires basic transparency and risk-mitigation from frontier AI companies without threatening profit or ownership in the slightest—had its main mechanism of enforcement cut by New York Governor Kathy Hochul with minimal public backlash. Corporate Democrats are too dependent on large donations to fight back against the AI PACs. Without pressure from the organized left, these politicians will continue to roll over for big tech.
The right is in bed with oligarchy. The center lacks the means to combat corporate money. It is the left, and only the left, that can offer a positive future in response to automation.
Towards the Abolition of Poverty and Labor
Proposals beginning to address automation have come from erstwhile progressives and technocrats: a wealth tax, worker appointees to AI company boards, regulated development of foundation models. It is shocking that these ostensibly socialist policies aren’t coming from socialist politicians.
While leftist critiques of AI exist, most approach the issue from frameworks that do not recognize transformative automation as a real and desirable possibility.
For example, data center construction is often critiqued on the basis that, either:
Data centers are merely speculative assets;
The production of data centers is inadmissible due to climate effects.
Specifically: data centers hit capacity as soon as they go online, water demand is only a problem in places already undergoing drought, and carbon emissions from data centers could be minimized by expanding the grid. We can envision solutions to these problems that do not demand a moratorium on data center construction: AI companies paying for clean expansions to the public grid, building where there’s enough water for cooling, etc.
But truthfully, these critiques are symptoms of a broader worldview that precludes the possibility of a post-scarcity society:
Improvements in AI, and in most technology, are not real;
Economic growth is fundamentally at-odds with climate change.
I disagree. Socialism must aspire towards the abolition of both poverty and—ultimately—labor. Poverty deprives people of the bare necessities, and the coercion of labor deprives people of their autonomy. To achieve this, socialism must produce equitably distributed clean economic growth, which is only possible through technological advancement. It is the moral obligation of socialist politics to achieve this future.
In the same breath that we reject tech oligarchy, we reject the scarcity of the present. We must not be caught on our backfoot. The left has long dared to envision a world without poverty and work; now we fight for that world at its cusp.
Post-script
It has been one month since the original publication of this article.
The estimated length of software-engineering tasks LLMs can complete is now twelve hours. Members of METR, the independent organization responsible for such measurements, say that there is a 3-10% chance that AI research and development is automated by the end of the year.
A floor or two above METR’s office, Bernie Sanders met with Eliezer Yudkowsky to discuss the existential risks of artificial superintelligence. It has become apparent that, above carbon emissions or water use or any parochial environmentalist concern, X-risk is the basis of Sen. Sanders’ proposed data center moratorium.
Meanwhile, the Trump administration has declared Anthropic a supply-chain risk for refusing to give the state unilateral authority to use Claude for autonomous kills and domestic surveillance, while simultaneously using the technology for autonomous target selection in Iran.
There are decades where nothing happens, and then there are weeks where decades happen. I am worried that this will be a decade where centuries happen.
Think seriously about what it would imply for general automation to be achieved in your lifetime. What if it might be achieved in ten years? What politics does this open up? The horizon has crept up behind us.