From: lexfridman

The role of utility functions in artificial intelligence (AI) has evolved significantly over the years. They remain fundamental to defining what AI systems aim to achieve, but there has been a shift from asking how to optimize these functions toward asking what they should represent in the first place.

Evolution of Utility Functions

In previous editions of the textbook "Artificial Intelligence: A Modern Approach," the focus was mainly on maximizing expected utility: AI was defined through a framework in which one specifies a utility function, and the book provides numerous techniques to optimize it [00:02:50].
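The classical framing described above can be sketched in a few lines. This is a minimal illustration, not from the book; the actions, outcome probabilities, and utility values are invented for the example.

```python
# Sketch of the classical decision-theoretic framing: specify a utility
# function over outcomes, then choose the action maximizing expected utility.
# All actions, probabilities, and utilities below are illustrative assumptions.

ACTIONS = {
    # action: list of (probability, utility) pairs over possible outcomes
    "take_umbrella": [(0.3, 70), (0.7, 80)],   # rain / no rain
    "leave_umbrella": [(0.3, 0), (0.7, 100)],
}

def expected_utility(outcomes):
    """Expected utility: sum of probability-weighted utilities."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Pick the action whose outcome distribution maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))
```

Here `best_action` returns `"take_umbrella"` (expected utility 77 versus 70); the hard part, as the discussion emphasizes, is not this maximization step but deciding what the utility numbers should be.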

Optimization and Definition

Optimization was initially viewed as the primary challenge, but it is increasingly recognized that defining the utility function—the actual aims of the AI—can be more complex [00:02:55].

Contemporary Perspectives

There’s a growing emphasis on not just how to optimize what AI systems do, but on understanding and specifying what should be done in the first place. This involves probing philosophical questions and societal impacts as part of integrating human values into utility functions [00:03:02].

Philosophical Dimensions

AI researchers are increasingly focusing on how utility functions intersect with ethical and societal concerns. Issues of fairness, bias, and the aggregation of utilities are challenging areas being explored in current AI research [00:03:34].

Learning Human Values

One approach to better encoding human-like values is inverse reinforcement learning, in which an AI observes human actions and infers the underlying intentions. This method faces challenges, however: people often act suboptimally or even self-destructively, and such behavior should not be emulated by AI [00:03:25].
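The core idea, and the pitfall just mentioned, can be shown with a heavily simplified sketch. Real inverse reinforcement learning algorithms (e.g. max-entropy IRL) are far more involved; the state names and demonstration data here are invented, and "visitation frequency equals value" is a deliberately crude stand-in for the inference step.

```python
# Toy sketch of the idea behind inverse reinforcement learning: infer what
# a demonstrator values from the behavior they exhibit. The crude assumption
# here is that states visited more often are valued more.

from collections import Counter

# Hypothetical observed human trajectories: sequences of visited states.
demonstrations = [
    ["home", "cafe", "office"],
    ["home", "cafe", "gym", "office"],
    ["home", "office"],
]

def inferred_state_values(trajectories):
    """Estimate relative state value from visitation frequency."""
    counts = Counter(s for traj in trajectories for s in traj)
    total = sum(counts.values())
    return {state: c / total for state, c in counts.items()}

values = inferred_state_values(demonstrations)
# Frequently visited states get high inferred value -- which is exactly why
# suboptimal or self-destructive behavior (harmful states visited often)
# misleads this kind of inference.
```

The sketch makes the failure mode concrete: if the demonstrations included frequent visits to a harmful state, this inference would conclude the human values it.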

Challenges and Trade-offs

A major challenge is articulating utility functions that capture what a society, or any collection of agents, collectively desires [00:02:55]. Further complexities arise in balancing trade-offs, such as when fairness across protected classes becomes theoretically impossible to achieve along all dimensions at once [00:06:24].

"You can't have everything, but the analysis certainly can't tell you where should we make that trade-off point, but nevertheless we can, as humans, deliberate where that trade-off should be." — Peter Norvig [00:06:44]

Conclusion

As AI systems become increasingly sophisticated, the importance of clearly defined utility functions becomes more pronounced. This involves not only technical optimizations but also philosophical inquiries into what AI systems should aim to achieve. Utility functions remain a pivotal aspect of AI design, influencing how AI systems interact with human values and societal norms.