Fortune | FORTUNE · 20 hours ago
How AI is impacting lawyers, auditors, and accountants holds lessons for us all

This article focuses on the "Future of Professionals" conference at Oxford University, examining the adoption and impact of AI in professional services. A report finds that professional services firms stand out on AI return on investment, but face challenges around accuracy and data security. The article also looks at AI's effect on career development and legal practice, and at the responsibility, principles-to-practice, and goals gaps companies may face when deploying AI. It further covers AI applications in auditing and law, the technology's potential implications for government and the courts, and the importance of combining AI with human judgment, along with the growing need for AI expertise.

🤔 Professional services firms appear better positioned than other sectors on AI adoption: more than half report a return on their use of AI, largely thanks to well-defined AI strategies and solid governance structures.

⚠️ AI adoption faces challenges, including concerns about accuracy and data security. Firms need to address how responsibility is allocated among model developers, application builders, and end users, and how to turn "Responsible AI" principles into practice.

💡 In auditing, AI can help auditors review every transaction, but KPMG stresses that it does not deploy AI where human judgment is required, such as setting materiality thresholds or warranty reserves for new products.

🧑‍💼 AI is reshaping career paths and the talent mix inside professional services firms. Attracting and retaining AI specialists is becoming critical, especially in law and government, where promotion tracks for AI experts still need to be worked out.

Because of this, it was interesting to hear the discussion yesterday at a conference on the "Future of Professionals" at Oxford University's Saïd Business School. The conference was sponsored by Thomson Reuters, in part to coincide with the publication of a report it commissioned on trends in professionals' use of AI.

That report, based on a global survey of 2,275 professionals in February and March, found that professional services firms seem to be finding a return on their AI investment at a higher rate than in other sectors. Slightly more than half—53%—of the respondents said their firm had found at least one AI use case that was earning a return, which is about twice what other, broader surveys have tended to find.

Not surprisingly, Thomson Reuters found it was the professional firms where AI usage was part of a well-defined strategy and that had implemented governance structures around AI implementation that were most likely to see gains from the technology. Interestingly, among firms where AI adoption was less structured, 64% of those surveyed still reported ROI from at least one use case, which may reflect how powerful and time-saving these tools can be even when used by individuals to improve their own workflows.

The biggest factors holding back AI use cases, the respondents said, included concerns about inaccuracy (with 50% of those surveyed noting this was a problem) and data security (42%). For more on how law firms are using AI, check out this feature from my Fortune colleague Jeff John Roberts.

Mind the gaps

Here are a few tidbits from the conference worth highlighting:

Mari Sako, the Oxford professor of management studies who helped organize the conference, talked about the three gaps that professionals needed to watch out for in trying to manage AI implementation: One was the responsibility gap between model developers, application builders, and end users of AI models. Who bears responsibility for the model’s accuracy and possible harms?

A second was the principles to practice gap. Businesses enact high-minded “Responsible AI” principles but then the teams building or deploying AI products struggle to operationalize them. One reason this happens is that first gap—it means that teams building AI applications may not have visibility into the data used to train a model they are deploying or detailed information about how it may perform. This can make it hard to apply AI principles about transparency and mitigating bias, among other things.

Finally, she said, there is a goals gap. Is everyone in the business aligned about why AI is being used in the first place? Is it for human augmentation or automation? Is it operational efficiency or revenue growth? Is the goal to be more accurate than a human, or simply to come close to human performance at a lower cost? What role should environmental sustainability play in these decisions? All good questions.

Not a substitute for human judgment

Ian Freeman, a partner at KPMG UK, talked about his firm's increasing use of AI tools to help auditors. In the past, auditors had to rely on sampling transactions, trying to apply more scrutiny to those that presented a bigger business risk. With AI, it is now possible to run a screen on every single transaction. The riskiest transactions should still get the most scrutiny, and AI can help identify them. Freeman said AI could also help more junior auditors understand the rationale for probing certain transactions, and that AI models could handle a lot of routine financial analysis.
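The shift Freeman describes, from sampling to scoring every transaction and surfacing the riskiest for human review, can be sketched in a few lines. This is a hypothetical illustration with made-up heuristic flags (amount outliers, round numbers, weekend postings), not KPMG's actual system:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    txn_id: str
    amount: float
    posted_on_weekend: bool

def risk_scores(txns):
    """Score every transaction (not a sample) with simple heuristic flags."""
    amounts = [t.amount for t in txns]
    mu, sigma = mean(amounts), stdev(amounts)
    scores = {}
    for t in txns:
        score = 0.0
        # Unusually large relative to the rest of the population
        if sigma > 0:
            score += max(0.0, (t.amount - mu) / sigma)
        # Suspiciously round amounts are a classic audit red flag
        if t.amount % 1000 == 0:
            score += 1.0
        # Postings outside normal business days
        if t.posted_on_weekend:
            score += 0.5
        scores[t.txn_id] = score
    return scores

def flag_for_review(txns, top_k=3):
    """Return the riskiest transactions for human scrutiny."""
    scores = risk_scores(txns)
    return sorted(txns, key=lambda t: scores[t.txn_id], reverse=True)[:top_k]
```

A real system would use learned anomaly-detection models rather than hand-written rules, but the workflow is the same: machines rank everything, humans judge the top of the list.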

But he said KPMG had a policy of not deploying AI in situations that called for human judgment. Auditing is full of such cases, such as deciding on materiality thresholds, making a call about whether a client has submitted enough evidence to justify a particular accounting treatment, or deciding on appropriate warranty reserves for a new product. That sounds sensible, but I also wonder whether AI models could act as tutors or digital mentors to junior auditors, helping them develop that professional judgment. That seems like it might be a good use case for AI, too.

A senior partner from a large law firm (parts of the conference were conducted under the Chatham House Rule, so I can't name them) noted that many corporate legal departments are embracing AI faster than law firms—something the Thomson Reuters survey also showed—and that this disparity was putting pressure on the firms. Corporate counsel are demanding that external lawyers be more transparent about their AI usage—and, critically, putting pressure on legal bills on the theory that many legal tasks can now be done in far fewer billable hours.

Changing career paths and the need for AI expertise

AI may also change how professional services firms think about career paths within their business, and even who leads these firms, several lawyers at the conference said. AI expertise is increasingly important to how these firms operate. Yet it is difficult to attract the talent these businesses need if "non-qualified" technical experts (the term simply denotes an employee who has not been admitted to the bar, but its pejorative connotations are hard to escape) know they will always be treated as second-class compared with the client-facing lawyers, and are ineligible for promotion to the highest ranks of the firm's management.

Michael Buenger, executive vice president and chief operating officer at the National Center for State Courts in the U.S., said that if large law firms had trouble attracting and retaining AI expertise, the situation was far worse for governments. And he pointed out that judges and juries were increasingly being asked to rule on evidence, particularly video evidence, but also other kinds of documentary evidence, that might be AI manipulated, but without access to independent expertise to help them make calls about what has been altered by AI and how. If not addressed, he said, this could seriously undermine faith in the courts to deliver justice.

There were lots more insights from the conference, but that’s all we have space for today. Here’s more AI news.

Note: The essay above was written and edited by humans. The news items below are curated by the newsletter author. Short summaries of the relevant stories were created using AI. These summaries were then edited and fact-checked by the author, who also wrote the blurb headlines. This entire newsletter was then further edited by additional humans.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Then join me at the Ritz-Carlton, Millenia in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. This year’s theme is The Age of Intelligence. We will be joined by leading executives from DBS Bank, Walmart, OpenAI, Arm, Qualcomm, Standard Chartered, Temasek, and our founding partner Accenture, plus many others, along with key government ministers from Singapore and the region, top academics, investors and analysts. We will dive deep into the latest on AI agents, examine the data center build-out in Asia, explore how to create AI systems that produce business value, and talk about how to ensure AI is deployed responsibly and safely. You can apply to attend here and, as loyal Eye on AI readers, I’m able to offer complimentary tickets to the event. Just use the discount code BAI100JeremyK when you check out.

AI IN THE NEWS

Senate strips 10-year moratorium on state AI laws from Trump tax bill. The U.S. Senate voted 99-1 to remove the controversial measure from President Donald Trump’s landmark “Big Beautiful Bill.” The restrictions had been supported by Silicon Valley tech companies and venture capitalists as well as their allies in the Trump administration. Bipartisan opposition to the moratorium—led by Sen. Marsha Blackburn—centered on preserving state-level protections like Tennessee’s Elvis Act, which protects citizens from unauthorized use of their voice or likeness, including in AI-generated content. Critics warned that in the absence of federal AI regulation, the ban on state-level laws would leave U.S. citizens with no protection from AI harms at all. But tech companies argue that the increasing patchwork of state-level AI regulation is unworkable, hampering AI progress. Read more from Bloomberg News here.

Meta announced new AI leadership team and key hires from rival AI labs. Meta CEO Mark Zuckerberg sent a memo to employees formally announcing the creation of Meta Superintelligence Labs, a new organization uniting the company’s foundational AI model, product, and Fundamental AI Research (FAIR) teams under a single umbrella. Scale AI founder and CEO Alexandr Wang—who is joining Meta as part of a $14.3 billion investment into Scale—will have the title “chief AI officer” and will co-lead the new Superintelligence unit along with former GitHub CEO Nat Friedman. Zuckerberg also announced the hiring of 11 prominent AI researchers from OpenAI, Google DeepMind, and Anthropic. You can read more about Meta’s AI talent raid from Wired here.

Cloudflare begins blocking AI web-crawlers by default. Internet content delivery provider Cloudflare announced it has begun blocking AI companies’ web crawlers from accessing website content by default. Owners of the websites can choose to unblock specific crawlers—such as those Google uses to build its search index—or even opt for a “pay per crawl” option that will allow them to monetize the scraping of their content. With around 16% of global internet traffic passing through Cloudflare, the change could significantly impact AI development. (Full disclosure: Fortune is one of the initial participants in the Cloudflare crawler initiative.) Read more from CNBC here.
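The policy Cloudflare describes, block AI crawlers by default, with per-site allowlists and a pay-per-crawl option, amounts to a simple decision table. Here is a hypothetical sketch of that logic (the crawler names are real, publicly documented user agents; the function and policy format are my invention, not Cloudflare's API):

```python
# Known AI-training crawler user agents (a small, illustrative subset).
DEFAULT_AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot"}

def crawl_decision(user_agent, site_policy):
    """Return 'allow', 'block', or 'charge' for an incoming crawler request.

    site_policy is a dict with optional 'allow' and 'pay_per_crawl' sets,
    representing the per-site overrides a website owner can configure.
    """
    if user_agent in site_policy.get("allow", set()):
        return "allow"      # owner explicitly unblocked this crawler
    if user_agent in site_policy.get("pay_per_crawl", set()):
        return "charge"     # owner monetizes scraping of their content
    if user_agent in DEFAULT_AI_CRAWLERS:
        return "block"      # AI crawlers are blocked unless opted in
    return "allow"          # ordinary traffic (browsers, search) passes
```

The notable design choice is the flipped default: previously a crawler was allowed unless a site opted out (e.g., via robots.txt); under the new scheme AI crawlers are denied unless the site opts in.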

EYE ON AI RESEARCH

Even better than House? Microsoft has unveiled an AI system for medical diagnoses that it claims can accurately diagnose complex cases four times more accurately than individual human doctors (under certain conditions—more on that in a sec.) The “Microsoft AI Diagnostic Orchestrator” (MAI-DxO—gotta love those AI acronyms) consists of five AI “agents” that each have a distinct role to play in scouring the medical literature, hypothesizing what the patient’s condition might be, ordering tests to eliminate possibilities, and even trying to optimize these tests to derive the most useful information at the least cost. These five “AI doctors” then engage in a process Microsoft is dubbing “chain of debate,” where they collaborate and critique one another, ultimately arriving at a diagnosis.

In trials involving 304 real-world cases from the New England Journal of Medicine, MAI-DxO achieved an 85.5% success rate, compared to about 20% for human doctors. Microsoft tried powering the system with different AI models from OpenAI, Google, Meta, Anthropic, and DeepSeek, but found it worked best when using OpenAI’s o3 model (Microsoft is a major investor in OpenAI, sells OpenAI's models through its cloud service, and depends on OpenAI for many of its own AI offerings). As for the poor performance of the human docs, it is important to note that in the test they were not allowed to consult either medical textbooks or colleagues.
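The "chain of debate" pattern, multiple agents propose answers, see one another's proposals, and converge, can be sketched as a toy consensus loop. This is a deterministic stand-in for illustration only: Microsoft's actual MAI-DxO agents are LLM-backed, play specialized roles (hypothesis generation, test ordering, cost control), and critique each other in natural language, none of which is modeled here:

```python
from collections import Counter

def chain_of_debate(agents, case, rounds=3):
    """Toy consensus loop: each 'agent' (a function from case facts to a
    candidate diagnosis) proposes an answer; in each round, agents that are
    not marked confident defer to the current majority; the majority answer
    after the final round wins."""
    proposals = [agent(case) for agent in agents]
    for _ in range(rounds):
        majority, _count = Counter(proposals).most_common(1)[0]
        # "Critique" step, stubbed: an agent keeps its answer only if the
        # case facts mark it as confident; otherwise it defers.
        proposals = [
            p if case.get("confident", {}).get(i) else majority
            for i, p in enumerate(proposals)
        ]
    return Counter(proposals).most_common(1)[0][0]
```

The interesting engineering question, which this toy version sidesteps, is how to keep dissenting agents from simply collapsing into the majority view, since a useful debate needs holdouts that force the group to justify its answer.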

Nonetheless, Microsoft AI CEO Mustafa Suleyman said the system could transform healthcare—although the company also said MAI-DxO is just a research project and is not yet being turned into a product. You can read more from the Financial Times here.

FORTUNE ON AI

Mark Zuckerberg overhauled Meta’s entire AI org in a risky, multi-billion dollar bet on ‘superintelligence’ —by Sharon Goldman

Longtime Bessemer investor Mary D’Onofrio, who backed Anthropic and Canva, leaves for Crosslink Capital —by Allie Garfinkle

Ford CEO says new technologies like AI are leaving many workers behind, and companies need a plan —by Jessica Mathews

Commentary: When your AI assistant writes your performance review: A glimpse into the future of work —by David Ferrucci

AI CALENDAR

July 8-11: AI for Good Global Summit, Geneva

July 13-19: International Conference on Machine Learning (ICML), Vancouver

July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.

July 26-28: World Artificial Intelligence Conference (WAIC), Shanghai. 

Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

Oct. 6-10: World AI Week, Amsterdam

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

BRAIN FOOD

AI tries to run a vending machine business. Hilarity ensues, Part Deux. A month ago in the research section of this newsletter, I wrote about research from Andon Labs about what happens when you try to have various AI models run a simulated vending machine business. Now, Anthropic teamed up with Andon Labs to test one of its latest models, Claude 3.7 Sonnet, to see how it did running a real-life vending machine in Anthropic’s San Francisco office. The answer, as it turns out, is not well at all. As Anthropic writes in its blog on the experiment, “If Anthropic were deciding today to expand into the in-office vending market, we would not hire [Claude 3.7 Sonnet].”

The model made a lot of mistakes—like telling customers to send payment to a Venmo account that didn’t exist (it had hallucinated the account)—and also made a lot of poor business decisions, like offering far too many discounts (including an Anthropic employee discount in a location where 99% of the customers were Anthropic employees), failing to seize a good arbitrage opportunity, and failing to increase prices in response to high demand.

The entire Anthropic blog makes for fun reading. And the experiment makes it clear that AI agents are probably nowhere near ready for a lot of complex, multi-step tasks carried out over long time periods.
