
Ms. Buyer is a regular columnist for THE BULLETIN of the Bar Association of Erie County and a contributor to No Jitter. She has previously written numerous commentaries on telecommunications law for other legal and telecommunications publications, including The Daily Record, Communications Convergence and Computer Telephony, among others. Her articles cover a broad range of current telecommunications issues, including federal and state telecommunications policy, litigation, wireless technologies, spectrum policy, FCC initiatives, and industry consolidation. Martha Buyer has also contributed to the ABA Journal Report.

Thursday
Aug 01, 2024

Reliance on AI Tools Continues to Carry Significant Risks

It’s important to know that AI outputs are best used as contributing factors and not relied upon solely for decision making.
“An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.” appeared in The New York Times on July 18th and describes the downsides of using AI for life-or-death situations. The article details an effort by Spanish authorities to rely upon an AI algorithm to categorize and assess the likelihood that previous victims of domestic violence would be assaulted again at the hands of their spouses.

Click to read more ...

Wednesday
Jun 12, 2024

AI in the Work Environment? Be Mindful

As AI-infused tools become part of the employee experience – from hiring to scheduling to performance assessment – it’s important to assess the risks inherent in these tools.
For several years, I’ve been thinking and writing about the ethical use of artificial intelligence, both within the enterprise and beyond. While these topics remain of paramount interest to me and of importance to all, given AI’s rapid deployment at various levels of the enterprise, it seems prudent to share some guidance on AI use in the context of employment, particularly from the perspective of the employer.

Click to read more ...

Monday
May 13, 2024

Net Neutrality Is On Its Way Back

However, what “net neutrality” means in a world where the internet has changed dramatically since the rules were first adopted is still up for clarification.
As has been anticipated since Democrats took over the majority of Commissioner slots on the FCC, the network neutrality rules, originally brought to bear during the Obama Administration and then removed during the Trump years, will be back. Regardless of how you feel about the regulatory burden that this transition will again place upon the ISPs, one thing that is indisputable is that the internet as regulated in 2015 is a completely different animal now than it was then. While it used to be easy to understand the technology’s evolution as analog moved to digital, the word “internet,” which became the very definition of a lifeline during the pandemic, now has so many different meanings that defining what the new 400 pages of net neutrality rules are designed to regulate may become its own cottage industry. But there is no doubt that the internet has evolved into a utility requiring a greater level of regulation and oversight than currently exists.

Click to read more ...

Tuesday
Apr 30, 2024

Considerations for AI Product Acquisition

It’s a bit like buying a car (sort of).
A recent article in the Columbia Law Review entitled “AI Systems as State Actors” contained this stunning and important quote from authors Kate Crawford and Jason Schultz:

…When challenged, many state governments have disclaimed any knowledge or ability to understand, explain or remedy problems created by AI systems that they have procured from third parties. The general position has been ‘we cannot be responsible for something we don’t understand.’ This means that algorithmic systems are contributing to the process of government decision-making without any mechanisms of accountability or liability.

My first reaction to this quote was horror, but once I got a grip, my initial concerns were only slightly mollified. While the article was published in 2019, it nonetheless raises a valid and timely point: Where does liability fall when reliance on an AI-powered system causes harm? The short answer is “everywhere,” but that’s not really a useful answer.

Click to read more ...