Experts discuss gains and pitfalls of AI in summit at Stanford Faculty Club

By Ritu Jha

Leaders of the tech world spoke in favor of regulations and guardrails for AI and related innovations at a summit on AI hosted recently by Indiaspora.

The summit drew a host of experts and industry leaders who gave insightful presentations and held meaningful discussions. Participants from diverse backgrounds came together to share ideas, best practices, and real-world applications of AI for social good.

The summit, “Beyond Buzzwords: Can AI Be a Force for Good,” held April 26 at the Stanford Faculty Club, focused on topics ranging from enhancing access to education and healthcare to promoting environmental sustainability and combating inequality. Participants examined how AI-driven solutions can drive meaningful change and foster a more equitable and sustainable future for all.

“With a lot of focus on the negative aspects of AI, we felt there needed to be a discussion on what is possible for social good. We had a great set of speakers talk about ethics, gender equality, sustainability, and more. Of course, there is a lot of apprehension about jobs, but it’s a bit early to tell what will be lost and what new opportunities will be gained,” MR Rangaswami, founder of Indiaspora, told indica.

“This was one of the most popular summits we have hosted – we were sold out at 250 attendees and had a waitlist. We also had to deal with gatecrashers. AI is the hottest topic of last year and continues to be so this year,” Rangaswami added.

He said that over 50 speakers across various disciplines of AI “showed the impact of our diaspora!”

“Indiaspora provides a platform for non-profits like Karya and Adalat to tell their story to an influential audience who can not only help with funding but also provide mentorship,” Rangaswami said, explaining how Indiaspora supports such non-profits.

Talking about the panel ‘AI & Trust/Ethics/Policy’ and his opinion on AI regulation, Rangaswami said, “This session was intended to create a good debate that brought out all the challenges and issues. We have a long way to go before we see meaningful regulations.”

The opening keynote was delivered by Romesh Wadhwani, philanthropist, founder and chairman of the Wadhwani Group, SymphonyAI, and SAI Group, and chairman of ConcertAI, who spoke about the use of AI at the Wadhwani Foundation and how he funded the building of a social-good enterprise, GENIE AI.

The idea is to use AI in a variety of ways, including co-pilots, to act as a counselor to the student. “There’s a whole range of co-pilots that we’re rolling out with the student beneficiary at the center,” he said in his address. “We have a variety of co-pilots because we have entrepreneurship programs in colleges, called the Ignite program. We have an entrepreneurship program for startups called the Liftoff program. We have entrepreneurship programs for accelerating small business growth. Each of these, we are developing to increase the scale by 10x.”

Another major keynote speaker at the event was Vishal Sikka, who took a close look at AI’s impact and the regulations needed to govern new technology innovations. “Regulation of AI is a very necessary, and a very important thing. The government in India has a really good set of capabilities around AI already. I think that the regulatory framework that they have come up with is necessary. Now, how far it goes and what happens exactly, that time will tell. But I think overall regulation in AI is a necessary and a good thing,” Sikka told indica.

When asked to comment on whether regulations may impede growth and innovation, he said: “Technological innovation is also dangerous. It has many negative consequences, hallucinations, deep fakes, misinformation, disinformation, and things like that. We need to make sure that there is an appropriate regulatory framework around the use of AI.”

Speaking in favor of regulations for AI and allied technologies, Sikka said: “I was shocked by what one of the panelists said about how regulation is a bad thing. These technological advances are incredibly powerful and can be damaging. Yes, they should be regulated. We get our hair cut only by trained barbers. We need to understand and govern these things. We need to learn and teach, especially in India. Otherwise, the 20% or 30% productivity improvement would have devastating consequences for the service industry. The biggest challenge is that this technology is not a mystery to us. It is, in fact, something that we understand and can use, like other tools that we have learned to use over the millennia.”

Rohini Chakravarti, managing partner at Newbuild Venture Capital, echoed Sikka’s sentiments on AI regulations: “This is an area that needs to be regulated. I think we have learned from the experience of social media. Most of the arguments against regulation say, let’s see how the technology develops, and then we get regulated. But the social media experience has taught us that if you do that, you may not be able to regulate it afterward,” Chakravarti told indica.

“How do you now say that you should own your data, and Facebook should forget all your data today? It’s not possible to put in a regulation like that. ‘Forget-me’ types of regulation have not worked because the data is not stored that way. Most of these systems don’t know what to forget and what to erase,” she said. She added that it is very hard to regulate data after it has been captured and put into these large systems, because they may not even track where your data exists; that has been one of the lessons from the social media experience. And when you hand data over to these large public surveillance systems, they do whatever they want with that information; they own the data.

They own all of this data on consumers; the consumers don’t own it themselves. There is no implicit or explicit contract on what they are allowed to do with it and what they owe you. “European regulators have been trying with GDPR, but it is insufficient. I think re-instrumenting all of social media to do that is not possible. So now if you come to AI, that is the concern,” she added.

Utkarsh Saxena, founder of Adalat AI, a SaaS non-profit, practiced law in India for many years and saw first-hand the delays and backlogs in courts and how much they affect the poorest and most vulnerable populations. “The judicial process has become a punishment, and I wanted to do something about it. I launched a non-profit organization that builds technological and AI solutions for courtrooms to help them expedite case processes to reduce traditional delays,” Saxena told indica.

Saxena is part of a few tech accelerators in Silicon Valley. “We get a lot of guidance and support here. The Indian diaspora is quite invested in using technology to solve problems in India, and we work with them to get tech and other support for our work.”

“Dialect is a big challenge. The problem is that a lot of the AI models being built in the West are for English-speaking people. No one’s focusing on Indian languages. We want to use AI models that work on Indian languages and contribute to improving them through our work, so that speech-to-text models can help solve judicial delays and other development problems in India across language barriers. We are a young organization, currently partnering with five states in the south and north of India. We have been operational for less than a year. The dream is to be in half the courts of India by the end of 2025, and by 2026 to be in every courtroom in India,” Saxena said.

Adalat AI doesn’t charge money. “We provide these solutions for free to support courts to solve the problem. Our point of contact is typically the court system. Our biggest challenge is fundraising because we are a non-profit organization and tech talent is expensive. We want to be an ethical AI enterprise. We want to respect security, things like biases and models, and therefore be very conscious about how we’re using technology and ensuring that it’s inclusive and includes all sections of the population: poor, rich, and linguistically different. We’re using a simple speech-to-text model. It’s made things possible that were impossible five years ago.”

Navrina Singh, founder and CEO of Credo AI, an AI governance software company that helps organizations identify risks in AI systems, achieve compliance, and adopt generative AI, advises the White House and President Biden as part of the National AI Advisory Committee, which is responsible for looking holistically across the United States and figuring out how to put in place the right guardrails in service of its citizens.

“There is a lot of excitement around adopting artificial intelligence, and we are seeing the capabilities rise for this technology. But as you can imagine, that comes with a lot of inherent risks. Governments across the world are thinking about how we can make sure that these technologies are in service of humanity. They are looking for the right guardrails, regulations, and standards to ensure that the companies using these systems are serving the end users,” she told indica.

Europe recently passed the EU AI Act, a very comprehensive regulatory framework focused on AI and risk. “It will come into effect this June. The companies that want to operate and develop AI within Europe will have anywhere from six to 16 months to comply. If you don’t comply with it, and if you put a high-risk application out in the market, severe penalties will be associated with it. One of the core things most countries are recognizing is that they need to have these regulatory frameworks in place so that they can minimize harm and unintended consequences to end users,” Singh said.

“This is an international phenomenon. Canada, Singapore, and India have a big focus on the foundation model and its governance.” Singh says the US might not have a single regulation. “In the US, we generally don’t have one federal regulation, but we are already seeing a lot of state and local regulations show up. As an example, in Colorado, there’s already a regulation called SB 169 that requires all insurance companies to do impact assessments.

Similarly, in New York, there’s a regulation called Local Law 144. If you are a company in New York using third-party HR systems based on machine learning, you need to do a fairness audit and publish the outcomes of that audit. In California, Governor Newsom recently published an executive order on how to make sure these foundation models are benchmarked and evaluated appropriately.”

“Artificial intelligence can have a massive impact. There is a big focus on how these countries collaborate to ensure there are interoperable standards. And that’s why a lot of work is happening now to set up AI safety institutes across different countries. The US just established one a couple of months back. The UK and Korea have established AI safety institutes. There is a collaboration among all these AI safety institutes to establish standards for AI.”
