By James Bessen, Stephen Impink, Robert Seamans
Artificial Intelligence startups use training data as direct inputs in product development. These firms must balance numerous trade-offs between ethical issues and data access without substantive guidance from regulators or existing judicial precedent. We survey these startups to determine what actions they have taken to address these ethical issues and the consequences of those actions. We find that 58% of these startups have established a set of AI principles. Startups are more likely to establish ethical AI principles if they have data-sharing relationships with high-technology firms, have been impacted by privacy regulations, or have prior (non-seed) funding from institutional investors. Lastly, startups with data-sharing relationships with high-technology firms and prior regulatory experience with the General Data Protection Regulation are more likely to take costly steps, such as dropping training data or turning down business, to adhere to their ethical AI policies.
Download the full working paper here.