
Daniel Dennett (1942-2024)


Daniel Dennett, professor emeritus of philosophy at Tufts University, well-known for his work in philosophy of mind and a wide range of other philosophical areas, has died.

Professor Dennett wrote extensively about issues related to philosophy of mind and cognitive science, especially consciousness. He is also recognized as having made significant contributions to the concept of intentionality and debates on free will. Some of Professor Dennett’s books include Content and Consciousness (1969), Brainstorms: Philosophical Essays on Mind and Psychology (1981), The Intentional Stance (1987), Consciousness Explained (1992), Darwin’s Dangerous Idea (1995), Breaking the Spell (2006), and From Bacteria to Bach and Back: The Evolution of Minds (2017). He published a memoir last year entitled I’ve Been Thinking. There are also several books about him and his ideas. You can learn more about his work here.

Professor Dennett held a position at Tufts University for nearly all his career. Prior to this, he held a position at the University of California, Irvine from 1965 to 1971. He also held visiting positions at Oxford, Harvard, Pittsburgh, and other institutions during his time at Tufts University. Professor Dennett was awarded his PhD from the University of Oxford in 1965 and his undergraduate degree in philosophy from Harvard University in 1963.

Professor Dennett was the recipient of several awards and prizes including the Jean Nicod Prize, the Mind and Brain Prize, and the Erasmus Prize. He also held a Fulbright Fellowship, two Guggenheim Fellowships, and a Fellowship at the Center for Advanced Study in Behavioral Sciences. An outspoken atheist, Professor Dennett was dubbed one of the “Four Horsemen of New Atheism”. He was also a Fellow of the Committee for Skeptical Inquiry, an honored Humanist Laureate of the International Academy of Humanism, and was named Humanist of the Year by the American Humanist Association.

The following interview with Professor Dennett was recorded last year:

(via Eric Schliesser)


Related: “Philosophers: Stop Being Self-Indulgent and Start Being Like Daniel Dennett, says Daniel Dennett”. (Other DN posts on Dennett can be found here.)

Obituaries elsewhere:

The post Daniel Dennett (1942-2024) first appeared on Daily Nous.


Garden-path incidents; saying hypotheses out loud


Barb’s story

It’s 12 noon on a Minneapolis Wednesday, which means Barb can be found at Quang. As the waiter sets down Barb’s usual order (#307, the Bun Chay, extra spicy), Barb’s nostrils catch the heavenly aroma of peanuts and scallions and red chiles. A wave of calm moves through her. Barb pulls her chair forward, cracks apart her wooden chopsticks, and…her pager goes off.

After cursing under her breath, she dutifully reads the message:

Error rate for `environment:production` exceeds 100 msg/s

Welp.

Barb grabs one quick bite of spring roll as she flags down the waiter for a to-go box. Opening Slack on her phone, she declares an incident, joins the Zoom call, and hurries back up Nicollet Ave. and around the corner, toward her apartment.

Five minutes later, finally sitting at her laptop, Barb is pretty sure she knows what the problem is. The error flooding the logs is:

object 'nil' has no method 'reimport!'

That looks like a straightforward app bug to her, and wouldn’t you know it? Right before these errors started cropping up, there was a deploy to the Rails app by a newish dev named Alice, who, according to her Slack profile, is based in Seattle. Barb asks this ‘Alice’ to join the incident Zoom.

– Hi, this is Alice. What’d I do?
– Thanks for joining, Alice. I’m incident commander for this spike of production errors. It looks like you deployed a change at 16:46 UTC and a bunch of errors started happening. Can you revert that change please?
– Sure, no problem. I’ll put together the revert PR now.

Five minutes later, Alice’s PR is approved. Alice clicks “Merge.” The pair begin the anxious but familiar 15-minute wait for CI to pass, all the while greeting and informing the bewildered latecomers who straggle into the call.

Alice’s story

Alice stares blankly at the white rectangle on her monitor. She spent her first hour getting yesterday’s frontend bugfixes pushed out, and now it’s time to start her PowerPoint. She’ll be working on this PowerPoint for the rest of the morning, probably through lunch, and all afternoon.

Alice shuts her eyes and heaves a dismal sigh. Alice fucking hates PowerPoint. But she can’t put it off anymore. So she dons her headphones, cracks her knuckles, and… gets an urgent Slack message:

Morning, Alice – we’ve got a production incident involving a spike of errors, and it looks like it coincides with a deploy of yours. Can you jump on https://zoom.globocorp.co/z/123456789… when you have a moment please?

As she waits for Zoom to load, Alice feels something almost like relief. At least she doesn’t have to work on that goddamn PowerPoint yet.

– Hi, this is Alice. What’d I do?
– Thanks for joining, Alice. I’m incident commander for this spike of production errors. It looks like you deployed a change at 16:46 UTC and a bunch of errors started happening. Can you revert that change please?
– Sure, no problem. I’ll put together the revert PR now.

Alice quickly whips up that PR and gets it approved. She spends the next 15 minutes waiting for CI to pass, while absent-mindedly writing the first slide of her PowerPoint. By the time the tests are green, she has typed out and deleted 4 different titles.

The real story

This incident seems to have gone about as well as it could, considering. Alice was on the call within 7 minutes of the alert, and a PR was ready 5 minutes later. It would be great if CI were faster, or even better if CI could be skipped for a revert. They’ll talk about that at the post-mortem.

However, nobody in the call yet knows what really happened. What really happened is this:

    • Alice’s 16:46 UTC deploy was the first to pick up the latest Docker image.
    • The new Docker image includes an update to a software dependency.
    • The updated dependency has a bug that only shows up in production.
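
The dependency drift described in the bullets above is invisible in a review of Alice’s diff, because no application code changed. As a hypothetical sketch (the package names and versions below are invented, not from this incident), one quick rule-out is to diff the dependency manifests captured from the two image builds, for example via `pip freeze` run inside each image:

```python
def changed_dependencies(old, new):
    """Diff two {package: version} manifests and return only the
    packages whose pinned version differs between the two builds."""
    return {
        pkg: (old.get(pkg), new.get(pkg))
        for pkg in old.keys() | new.keys()
        if old.get(pkg) != new.get(pkg)
    }

# Hypothetical manifests from the images before and after the deploy:
before = {"rails-shim": "1.4.2", "importer": "2.0.0"}
after = {"rails-shim": "1.4.2", "importer": "2.1.0"}
print(changed_dependencies(before, after))  # {'importer': ('2.0.0', '2.1.0')}
```

A non-empty diff here would point suspicion at the updated dependency rather than at the application change that happened to ride along with it.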

But instead of knowing any of that, Alice and Barb are sitting here for 15 minutes waiting for CI to run, so they can deploy a fix that won’t even work.

This is a garden-path incident. Barb has what she feels is a strong signal from the telemetry, which points toward a bug in Alice’s code. Alice has what she feels is a strong signal, which is that Barb seems very confident in her conclusion. But they’ve been led up the garden path, and as a consequence, this incident will run longer than it needs to.

How this could all have been avoided

Imagine instead that Barb and Alice are both in the habit of saying their hypotheses out loud.

When Alice joins the call, Barb says:

– Thanks for joining, Alice. I’m incident commander for this spike of production errors. It looks like you deployed a change at 16:46 UTC and a bunch of errors started happening. My hypothesis is that your change triggered this spike of errors. Can you revert the change please?

Instead of letting Alice infer that the cause of the error spike is already known to be her deploy, Barb acknowledges the limits of her certainty. She has a hypothesis, not a definitive diagnosis. This gives Alice the opportunity to respond with something like:

– Well, are the errors from the backend or the frontend? Because my change was frontend-only.

And just like that, Alice and Barb have stepped back from the garden path. Instead of waiting around for a useless CI cycle, they can continue straight away with diagnosis.

Note that, even if Barb doesn’t state her hypothesis, things will still be okay as long as Alice does:

– Hi, this is Alice. What’d I do?
– Thanks for joining, Alice. I’m incident commander for this spike of production errors. It looks like you deployed a change at 16:46 UTC and a bunch of errors started happening. Can you revert that change please?
– Sure, no problem. I’ll put together the revert PR now. Just to be clear, the hypothesis is that my frontend-only changeset is somehow causing these nil-has-no-method errors in the backend?
– Uh, did you say frontend-only?

Again, Barb and Alice have gotten themselves off the garden path. That means this technique – stating your hypothesis and asking for rule-outs – is something you can start doing unilaterally today to make your team better at troubleshooting.
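
As a sketch of what a rule-out can look like in practice, here is a minimal, hypothetical helper (the structured log format and service names are invented, not from this incident) that tallies errors by service, so a hypothesis like “the errors are coming from the backend” can be checked directly instead of inferred from deploy timing:

```python
from collections import Counter

def tally_errors_by_service(log_lines):
    """Count error-level log lines per service, given structured
    'service=<name> level=<level> ...' lines (hypothetical format)."""
    counts = Counter()
    for line in log_lines:
        fields = dict(
            part.split("=", 1) for part in line.split() if "=" in part
        )
        if fields.get("level") == "error":
            counts[fields.get("service", "unknown")] += 1
    return counts

logs = [
    "service=backend level=error msg=nil_no_method",
    "service=backend level=error msg=nil_no_method",
    "service=frontend level=info msg=page_view",
]
print(tally_errors_by_service(logs))  # Counter({'backend': 2})
```

If every error in the spike tallies under the backend, a frontend-only deploy is ruled out in seconds, well before a revert PR clears CI.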

Another thing you can do to make your team better at troubleshooting is employ Clinical Troubleshooting against your next head-scratcher of a bug.






Protocol


New Mexico’s state senate took up a startling amendment in 1995 — it would have required psychologists to dress up as wizards when providing expert testimony on a defendant’s competency:

When a psychologist or psychiatrist testifies during a defendant’s competency hearing, the psychologist or psychiatrist shall wear a cone-shaped hat that is not less than two feet tall. The surface of the hat shall be imprinted with stars and lightning bolts. Additionally, a psychologist or psychiatrist shall be required to don a white beard that is not less than 18 inches in length and shall punctuate crucial elements of his testimony by stabbing the air with a wand. Whenever a psychologist or psychiatrist provides expert testimony regarding a defendant’s competency, the bailiff shall contemporaneously dim the courtroom lights and administer two strikes to a Chinese gong.

The measure had received unanimous approval in the senate and was headed for the house of representatives when sponsor Duncan Scott explained that he’d intended it as satire — he felt that too many mental health practitioners had been acting as expert witnesses. It was withdrawn and never signed into law.


Author granted copyright over book with AI-generated text—with a twist


Last October, I received an email with a hell of an opening line: “I fired a nuke at the US Copyright Office this morning.”

The message was from Elisa Shupe, a 60-year-old retired US Army veteran who had just filed a copyright registration for a novel she’d recently self-published. She’d used OpenAI's ChatGPT extensively while writing the book. Her application was an attempt to compel the US Copyright Office to overturn its policy on work made with AI, which generally requires would-be copyright holders to exclude machine-generated elements.

That initial shot didn’t detonate—a week later, the USCO rejected Shupe’s application—but she ultimately won out. The agency changed course earlier this month after Shupe appealed, granting her copyright registration for AI Machinations: Tangled Webs and Typed Words, a work of autofiction self-published on Amazon under the pen name Ellen Rae.

The novel draws from Shupe’s eventful life, including her advocacy for more inclusive gender recognition. Its registration provides a glimpse of how the USCO is grappling with artificial intelligence, especially as more people incorporate AI tools into creative work. It is among the first creative works to receive a copyright for the arrangement of AI-generated text.

“We’re seeing the Copyright Office struggling with where to draw the line,” intellectual property lawyer Erica Van Loon, a partner at Nixon Peabody, says. Shupe’s case highlights some of the nuances of that struggle—because the approval of her registration comes with a significant caveat.

The USCO’s notice granting Shupe copyright registration of her book does not recognize her as the author of the whole text, as is conventional for written works. Instead she is considered the author of the “selection, coordination, and arrangement of text generated by artificial intelligence.” This means no one can copy the book without permission, but the actual sentences and paragraphs themselves are not copyrighted and could theoretically be rearranged and republished as a different book.

The agency backdated the copyright registration to October 10, the day that Shupe originally attempted to register her work. It declined to comment on this story. “The Copyright Office does not comment on specific copyright registrations or pending applications for registration,” Nora Scheland, an agency spokesperson, says. President Biden’s executive order on AI last fall asked the US Patent and Trademark Office to make recommendations on copyright and AI to the White House in consultation with the Copyright Office, including on the “scope of protection for works produced using AI.”

Although Shupe’s limited copyright registration is notable, she originally asked the USCO to open a more significant path to copyright recognition for AI-generated material. “I seek to copyright the AI-assisted and AI-generated material under an ADA exemption for my many disabilities,” she wrote in her original copyright application.

Shupe believes fervently that she was only able to complete her book with the assistance of generative AI tools. She says she has been assessed as 100 percent disabled by the Department of Veterans Affairs and struggles to write due to cognitive impairment related to conditions including bipolar disorder, borderline personality disorder, and a brain stem malformation.

She is proud of the finished work and sees working with a text generator as a different but no less worthwhile method of expressing thoughts. “You don't just hit ‘generate’ and get something worthy of publishing. That may come in the future, but we're still far from it,” she says, noting that she spent upwards of 14 hours a day working on her draft.

After her initial registration was refused, Shupe connected with Jonathan Askin, founder of the Brooklyn Law Incubator and Policy Clinic at Brooklyn Law School, which takes pro bono cases centered on emerging tech and policy questions. Askin and Brooklyn Law student Sofia Vescovo began working on Shupe’s case and filed an appeal with the USCO in January.

The appeal built on Shupe’s argument about her disabilities, saying she should be granted copyright because she used ChatGPT as an assistive technology to communicate, comparing her use of OpenAI’s chatbot to an amputee using a prosthetic leg. The appeal claimed that the USCO “discriminated against her because of her disability.”

The Brooklyn Law appeal also claimed that Shupe should be granted copyright for compiling the book—that is, doing the work of selecting and organizing the snippets of AI-generated text. It provided an exhaustive log of how Shupe prompted ChatGPT, showing the custom commands she created and the edits she made.

It includes a side-by-side comparison of the unedited machine output and the final version of Shupe’s book. On a sentence level, she adjusted almost every line in some way, from changes in word choice to structure. One example describing a character in the novel: “Mark eyed her, a complex mix of concern and annoyance evident in his gaze” becomes “Mark studied her, his gaze reflecting both worry and irritation.”

The appeal cites another recent AI copyright decision about the graphic novel Zarya of the Dawn, which incorporates AI-generated images created with Midjourney. In February 2023, author Kris Kashtanova was granted copyright to the selection and arrangement of AI-generated images in the work, even though they were denied copyright on the specific images themselves.

When the USCO granted Shupe’s request for copyright, it did not address the disability argument put forth but agreed with the appeal’s other argument: Shupe could be considered the author of the “selection, coordination, and arrangement of text generated by artificial intelligence,” the agency wrote. That gives her authorship of the work overall, prohibiting unauthorized wholecloth reproduction of the entire book, but not copyright protection over the actual sentences of the novel.

“Overall, we are extremely satisfied,” says Vescovo. The team felt that copyrighting the book’s compilation would provide peace of mind against out-and-out reproduction of the work. “We really wanted to make sure we could get her this protection right now.” The Brooklyn Law team hopes Shupe’s approval can serve as a blueprint for other people experimenting with AI text generation who want some copyright protection.

“I’m going to take this as a win for now,” Shupe says, even though she knows that “in some ways, it’s a compromise.” She maintains that the way she uses ChatGPT more closely resembles a collaboration than an automated output and that she should be able to copyright the actual text of the book.

Matthew Sag, a professor of law and artificial intelligence at Emory University, calls what the USCO granted Shupe “thin copyright”—protection against full-fledged duplication of materials that doesn’t stop someone from rearranging the paragraphs into a different story. “This is the same kind of copyright you would get in an anthology of poetry that you didn’t write,” Sag says.

Erica Van Loon agrees. “It’s hard to imagine something more narrow,” she says.

Shupe is part of a larger movement to make copyright law friendlier to AI and the people who use it. The Copyright Office, which both administers the copyright registration system and advises Congress, the judiciary system, and other governmental agencies on copyright matters, plays a central role in determining how works that use AI are treated.

Although it continues to define authorship as an exclusively human endeavor, the USCO has demonstrated openness to registering works that incorporate AI elements. The USCO said in February that it has granted registration to over 100 works with AI incorporated; a search by WIRED found over 200 copyright registration applications explicitly disclosing AI elements, including books, songs, and visual artworks.

One such application came from Tyler Partin, who works for a chemical manufacturer. He recently registered a tongue-in-cheek song he created about a coworker but excluded lyrics that he spun up using ChatGPT from his registration. Partin sees the text generator as a tool but ultimately doesn’t think he should take credit for its output. Instead, he applied only for the music rather than the accompanying words. “I didn’t do that work,” he says.

But others share Shupe’s perspective and believe that AI-generated materials should be registrable. Some high-profile attempts to register AI-generated artworks have resulted in USCO refusals, like artist Matthew Allen’s effort to get his award-winning artwork Théâtre D’opéra Spatial copyrighted last year. AI researcher Stephen Thaler has been on a mission for years to prove that the AI system he invented deserves copyright protections of its own.

Thaler is currently appealing a ruling in the US last year that rebuffed his attempt to obtain copyright on behalf of his machine. Ryan Abbott, the lead attorney on the case, founded the Artificial Inventor Project, a group of intellectual property lawyers who file test cases seeking legal protections for AI-generated works.

Abbott is a supporter of Shupe’s mission, although he’s not a member of her legal team. He isn’t happy that the copyright registration excludes the AI-generated work itself. “We all see it as a very big problem,” he says.

Shupe and her legal helpers don’t have plans to push the ADA argument further by contesting the USCO’s decision, but it’s an issue that is far from settled. “The best path is probably to lobby Congress for an addition to the ADA statute,” says Askin. “There's a potential for us to draft some legislation or testimony to try to move Congress in that direction.”

Shupe’s qualified victory is still a significant marker in how the Copyright Office is grappling with what it means to be an author in the age of AI. She hopes going public with her efforts will reduce what she sees as a stigma against using AI as a creative tool. Her metaphorical nuke didn’t go off, but she has nonetheless advanced her cause. “I haven't been this excited since I unboxed a Commodore 64 back in the 1980s and, after a lot of noise, connected to a distant computer,” she says.

This story originally appeared on wired.com.



Saturday Morning Breakfast Cereal - K




Hovertext:
SMBC is the 74-almost funniest webcomic.


Today's News:

If you were a patreon subscriber, you would be seeing my magnum opus at this very moment.


How cheap, outsourced labour in Africa is shaping AI English


    The word "delve" has been getting a lot of attention recently as an example of something that might be an indicator of ChatGPT generated content.

    One example: articles on medical research site PubMed now use “delve” 10 to 100 times more than a few years ago!
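
As an illustrative sketch of how that kind of frequency claim can be measured (the example abstracts below are invented, not real PubMed data), one can compute the per-year share of abstracts containing “delve” and its inflections:

```python
import re
from collections import Counter  # not needed here, shown for clarity of intent

def delve_rate(abstracts_by_year):
    """Fraction of abstracts per year containing 'delve' (any inflection)."""
    rates = {}
    for year, abstracts in abstracts_by_year.items():
        hits = sum(
            1 for a in abstracts
            if re.search(r"\bdelv(e|es|ed|ing)\b", a, re.IGNORECASE)
        )
        rates[year] = hits / len(abstracts)
    return rates

# Invented mini-corpus, purely to show the shape of the computation:
corpus = {
    2020: ["We examine the effect of...", "This study investigates..."],
    2024: ["We delve into...", "Here we delve deeper...", "We report..."],
}
print(delve_rate(corpus))  # {2020: 0.0, 2024: 0.6666666666666666}
```

Dividing the later rate by the earlier one gives the kind of “10 to 100 times more” multiplier cited above, provided the per-year corpora are large enough to be meaningful.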

Nigerian Twitter recently took offense at Paul Graham's suggestion that "delve" is a sign of bad writing. It turns out Nigerian formal writing has a subtly different vocabulary.

    Alex Hern theorizes that the underlying cause may be related. Companies like OpenAI frequently outsource data annotation to countries like Nigeria that have excellent English skills and low wages. RLHF (reinforcement learning from human feedback) involves annotators comparing and voting on the "best" responses from the models.

Are they teaching models to favour Nigerian English? It's a pretty solid theory!
