Artificial Listicles

One of the things I’m noticing more and more with automated robot writing is its tendency to summarize key points in lists. Take, for example, the following prompt:

What was The Beatles’ most significant achievement?

I’m trying to ask ChatGPT something that doesn’t require a list in order to provide an answer, but I get one anyway:

*Screenshot: ChatGPT output with four list items.*

OK, sure. I suppose a list is one possible way to answer that question, but it didn’t have to be. Maybe we can try something a little more yes-or-no:

Are web components a good use case for a widget that displays a table of contents?

*Screenshot: ChatGPT output with four list items.*

There’s the answer right up front! That’s all I need, yet the bot jabbers on and starts listing things out. It’s not that every query leads to a list, but I find that many of them do.

Now I find myself giving the side eye to articles that make liberal use of lists. It’s become a sort of bad smell that makes me wince at first glance. Many of the articles I review for Smashing Magazine lean heavily on lists, too. It’s a bit unfortunate, but understandable, that I now feel the need to take extra steps to confirm that a list is original and distinguishable from the multitudes of autogenerated artificial listicles published daily.

✏️ Handwritten by Geoff Graham on January 25, 2024

Comments

  1. January 25, 2024

    I don’t think the mobile app behaves like this. I remember that the last time I used the mobile app (Android), I wanted a more detailed answer and it always gave a fairly short one.

  2. January 26, 2024

    I think the “list with headings” technique is an SEO strategy. I wonder if ChatGPT has consumed so much online SEO-optimised writing that it’s picked up this technique?

