Help shape SOCAP24’s content! Vote for your favorite SOCAP Open session ideas.

A better way to build: Creating enduring impact through responsible innovation

Jama Adams, Responsible Innovation Labs

If there is a better way to build, we will find it. That is the hallmark of responsible innovation. Currently, the startup and venture capital ecosystem is facing a unique opportunity: to shift how technology is built and commercially deployed.

Some quick background on the organizing speaker: Responsible Innovation Labs (RIL) is a nonprofit community of leading VCs and startups making responsible innovation the essential mindset and operational norm for tech companies and their investors. We equip startups to build — and win — through responsible innovation by driving improvements in norms, incentives, and behaviors. As a result, we can shape an optimistic, competitive, high-growth startup ecosystem with companies that endure precisely because of the responsible innovation they’ve implemented. RIL launched the first industry-driven, pro-innovation Responsible AI framework for startups and their investors, with support from the Department of Commerce and 100+ signatories. The open-source Responsible AI framework offers tools, resources, and best practices for integrating responsible innovation into technology, and it was developed in consultation with industry stakeholders, government and policymakers, civil society, and academia and researchers.

Most large-scale regulatory attention has focused on large tech incumbents or on highly resourced frontier foundation models. Relatively little attention has been paid to the numerous AI startups in the earliest stages of company building, so guidance for these startups and their investors has tended to be impractical to implement. Yet these are the very companies that will define our future. This is where RIL comes in, offering actionable guidance for startups on ethically designing AI. The framework provides practical tools, moving the conversation away from “We need responsible AI” and toward “How exactly can we make AI responsible?”

In the session, we will explain how organizations can build a culture of ethical AI building and investment:

1. How to secure organizational buy-in and implement internal governance processes via a stakeholder forum.
2. How to foster trust through transparency by documenting how AI systems are built and adopted; prioritizing user privacy, safety, and security; and disclosing appropriate information.
3. Methods to forecast AI risks and benefits—including limitations of a new system and its potential harms—and use those assessments to inform product development and risk mitigation efforts.
4. The importance of auditing and documenting system outputs (including bias and discrimination), as well as conducting adversarial testing (like red teaming) to identify potential risks.
5. How to adopt feedback mechanisms to make regular, ongoing, and effective improvements to maintain a responsible AI strategy.
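As one concrete illustration of the auditing practice in item 4, a bias audit can begin as simply as comparing positive-decision rates across demographic groups in a log of model outputs. The sketch below is an illustrative Python example, not part of the RIL framework; the data and the demographic-parity metric are assumptions chosen for clarity.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Largest difference in positive-outcome rates across groups.

    `records` is a list of (group, decision) pairs, where decision
    is 1 for a positive outcome (e.g. loan approved) and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    # Per-group rate of positive decisions
    rates = {g: positives[g] / totals[g] for g in totals}
    # Gap between the most- and least-favored groups
    return max(rates.values()) - min(rates.values()), rates

# Toy audit log: (demographic group, model decision)
log = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
       ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(log)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

A large gap does not by itself prove discrimination, but flagging it early is exactly the kind of documented, repeatable check that makes later adversarial testing and mitigation tractable.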

We can also outline resources that startups can turn to for help. For example, how can startups and investors determine what to include in a values statement? What exercises can startups and investors employ in stakeholder forums? How can founders build a network of reliable AI advisors? Likewise, we can share how investors can implement a culture of ethical innovation by investing in the startups that will create positive impact at scale with AI and by helping the startups they invest in innovate responsibly.

The session will also feature the CEO of Humanitas, which makes the internet more inclusive through AI-powered tools built for social impact. Humanitas builds a deep understanding of the work of nonprofit and social sector organizations and uses machine learning techniques and training datasets to help them digitally transform, enhancing their fundraising, operations, and corporate partnerships. As a startup building and launching AI tools for social impact – and one that has implemented resources from RIL’s framework – Humanitas can share firsthand experience of what works most effectively.

And although AI is top-of-mind for everybody, we are thinking forward: What emerging technologies will demand responsible innovation next? Where do tech and policy convene? We hope the audience steps away with the understanding that though AI is the tipping point of today, the questions of responsible innovation and impact investment are not limited to the discourse surrounding AI. Instead, responsible innovation is a mindset and norm that endures — cultivating it begins now.

Track

AI = Accelerating Impact

Format

Fireside chat (2 speakers)

Speakers

  • Name: Gaurab Bansal
    Title: Executive Director
    Organization: Responsible Innovation Labs
  • Name: Philip Chow
    Title: CEO (Chief Executive Officer)
    Organization: Humanitas

