<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "https://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.2" xml:lang="en">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">1832</journal-id>
      <journal-title-group>
        <journal-title>Journal of Cultural Analytics</journal-title>
      </journal-title-group>
      <issn pub-type="epub">2371-4549</issn>
      <publisher>
        <publisher-name>Center for Digital Humanities, Princeton University</publisher-name>
      </publisher>
      <self-uri xlink:href="https://culturalanalytics.org/">Website: Journal of Cultural Analytics</self-uri>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="publisher-id">17212</article-id>
      <article-id pub-id-type="doi">10.22148/001c.17212</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>Commentary</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Can GPT-3 Pass a Writer’s Turing Test?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name>
            <surname>Elkins</surname>
            <given-names>Katherine</given-names>
          </name>
          <xref ref-type="aff" rid="author-aff-1">
            <sup>1</sup>
          </xref>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Chun</surname>
            <given-names>Jon</given-names>
          </name>
          <xref ref-type="aff" rid="author-aff-1">
            <sup>1</sup>
          </xref>
        </contrib>
      </contrib-group>
      <aff id="author-aff-1">
        <label>1</label>
        <institution-wrap>
          <institution content-type="edu">Kenyon College</institution>
        </institution-wrap>
        <institution-wrap>
          <institution-id institution-id-type="ROR">https://ror.org/04ckqgs57</institution-id>
        </institution-wrap>
      </aff>
      <pub-date publication-format="electronic" date-type="pub" iso-8601-date="2020-09-14">
        <day>14</day>
        <month>9</month>
        <year>2020</year>
      </pub-date>
      <pub-date publication-format="electronic" date-type="collection" iso-8601-date="2021-09-02">
        <year>2020</year>
      </pub-date>
      <volume>5</volume>
      <issue seq="4">2</issue>
      <issue-title>Articles in 2020</issue-title>
      <elocation-id>17212</elocation-id>
      <permissions>
        <license license-type="open-access">
          <ali:license_ref xmlns:ali="http://www.niso.org/schemas/ali/1.0/">
              http://creativecommons.org/licenses/by/4.0
            </ali:license_ref>
          <license-p>
              This is an open access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0">Creative Commons Attribution License (4.0)</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
            </license-p>
        </license>
      </permissions>
      <self-uri content-type="pdf" xlink:href="https://culturalanalytics.org/article/17212.pdf"/>
      <self-uri content-type="xml" xlink:href="https://culturalanalytics.org/article/17212.xml"/>
      <self-uri content-type="json" xlink:href="https://culturalanalytics.org/article/17212.json"/>
      <self-uri content-type="html" xlink:href="https://culturalanalytics.org/article/17212"/>
      <abstract>
        <p>Until recently, the field of natural language generation relied on formalized grammar systems, small-scale statistical models, and lengthy sets of heuristic rules. This older technology was limited and brittle: it could remix language into word-salad poems or chat with humans within narrowly defined topics. Very large-scale statistical language models have since dramatically advanced the field, and GPT-3 is just one example. It internalizes the rules of language without explicit programming. Instead, much like a human child, GPT-3 learns language through repeated exposure, albeit on a far larger scale. Lacking explicit rules, it can sometimes fail at the simplest of linguistic tasks, yet it can also excel at more difficult ones, such as imitating an author or waxing philosophical.</p>
      </abstract>
      <kwd-group>
        <kwd>creative writing</kwd>
        <kwd>philosophy of language</kwd>
        <kwd>theory</kwd>
        <kwd>language models</kwd>
        <kwd>gpt</kwd>
        <kwd>ai</kwd>
      </kwd-group>
    </article-meta>
  </front>
</article>
