wordnik/nyt-first-said

A Twitter bot that tracks when the New York Times publishes a word for the first time in history. Running at @NYT_first_said.

It also powers a sibling bot, @NYT_said_where, which replies to each tweet with a few words of source-text context and a link to the article.

The code takes some steps to throw away uninteresting words like proper nouns and URLs, but it still picks up a lot of typos and nonsense, so sanitization is an ongoing process.
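For a feel of what that filtering involves, here's a minimal sketch; the function name and exact rules are illustrative, not the repo's actual heuristics:

    def passes_heuristics(word: str) -> bool:
        """Illustrative filter only; the real rules live in simple_scrape.py."""
        if not word.isalpha():    # drops URLs, numbers, and punctuation fragments
            return False
        if word != word.lower():  # drops capitalized words, i.e. likely proper nouns
            return False
        if len(word) < 3:         # very short strings are usually noise
            return False
        return True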

Some points of inspiration are Allison Parrish's @everyword bot, and the NewsDiffs editorial change archiving software.

Basic architecture

NYT-first-said is essentially a single script, run once an hour as a cron job on a small VPS.

nyt.py is a BeautifulSoup parser adapted from the NewsDiffs source code.
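As a rough illustration of what such a parser does (the CSS selector here is a guess, not the one nyt.py actually uses):

    import requests
    from bs4 import BeautifulSoup

    def article_text(url: str) -> str:
        """Fetch an article and join its paragraph text."""
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        paragraphs = soup.select("section[name=articleBody] p")  # hypothetical selector
        return " ".join(p.get_text(" ", strip=True) for p in paragraphs)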

Redis holds a list of scraped URLs and previously seen words (to reduce load on the NYT API). It also holds a count of recently tweeted words, to avoid blasting out too many tweets in a short period of time.
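A sketch of how that caching might look; the key names, window, and limit here are invented for illustration:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def already_seen(word: str) -> bool:
        """Membership test against the set of previously seen words."""
        return bool(r.sismember("seen_words", word))

    def under_rate_limit(max_per_hour: int = 10) -> bool:
        """Count recent tweets in a rolling one-hour window."""
        count = r.incr("tweets_this_hour")
        if count == 1:
            r.expire("tweets_this_hour", 3600)  # window resets after an hour
        return count <= max_per_hour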

api_check.py uses the NYT article_search API to check through all of digitized NYT history, to be confident that this really is the first occurrence of a word. The API returns weird 500s for some words; if you know why, let me know.
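In outline, the check is a search for the word across the full archive, something like the following (the endpoint is the public Article Search URL; the env. var. and surrounding logic are simplified assumptions):

    import os
    import requests

    SEARCH_URL = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

    def seen_in_archive(word: str) -> bool:
        """True if the Article Search API finds any prior hit for the word."""
        resp = requests.get(
            SEARCH_URL,
            params={"q": f'"{word}"', "api-key": os.environ["NYT_API_KEY"]},
            timeout=30,
        )
        resp.raise_for_status()  # this is where the mysterious 500s would surface
        return resp.json()["response"]["meta"]["hits"] > 0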

simple_scrape.py checks for new article URLs, retrieves the article text using nyt.py, splits it into words, and then determines whether each word is fit to tweet using (in this order) some heuristics to discard unwanted kinds of words, uniqueness in our local Redis instance, and finally uniqueness against the article_search API. If all of these checks pass, it tweets the word and replies with the context and link.
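Chaining the sketch functions from above gives the overall shape, cheapest checks first (tweet and reply_with_context are hypothetical helpers standing in for the real tweeting code):

    def maybe_tweet(word: str, sentence: str, url: str) -> None:
        """Run the checks in order; bail out at the first failure."""
        if not passes_heuristics(word):
            return
        if already_seen(word):
            return
        r.sadd("seen_words", word)  # remember the word either way
        if seen_in_archive(word):
            return
        if under_rate_limit():
            tweet(word)                        # hypothetical helper
            reply_with_context(sentence, url)  # hypothetical helper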

Also check out @nyt-finally-said, a cool sibling bot that cross-references these words with the Google n-gram dataset!

Running locally

  • Create a .env file in the project root that looks like this:

      S3A=<Your S3-like archive.org key from https://archive.org/account/s3.php>
      WORDNIK_API_KEY=<Your Wordnik API key>
    
  • Set up a virtual Python environment:

    • python3 -m venv venv
    • source venv/bin/activate
  • Install Python packages with pip install -r requirements.txt.

  • Download data needed by textblob with make install-textblob.

  • Run docker compose up to get a Redis service running on port 6379.

    • If you want to inspect the persistent storage, it's on the host machine at /var/lib/docker/volumes/nyt-first-said_redis_data/.
  • Run . tools/init-aws.sh (the space after the . is important) to set the AWS_PROFILE env. var. If you haven't already, use aws configure --profile wordnik (see https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles) to create configuration and credential files for the wordnik profile.

  • Run the main program with python simple_scrape.py

AWS lambda

There is also a lambda in this project that listens for sentence objects to drop into the nyt-said-sentences bucket. The lambda uploads those objects to Elasticsearch.
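A minimal sketch of such an S3-triggered handler, assuming the bucket notification is already wired up; the index name and the ES_ENDPOINT env. var. are assumptions, not the project's actual configuration:

    import os
    import urllib.request

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """Read each dropped sentence object and POST it to Elasticsearch."""
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]  # e.g. nyt-said-sentences
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            req = urllib.request.Request(
                os.environ["ES_ENDPOINT"] + "/sentences/_doc",  # hypothetical index name
                data=body,
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)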

Setting up

To run the parts of the project that use AWS from your computer, you need to set up a file at ~/.aws/credentials that has your access keys in it like so:

[default]
aws_access_key_id = <access key>
aws_secret_access_key = <secret>

aws tool

To deploy the lambda from your computer, you'll need to install the aws command line tool.

Deploying

You can deploy changes to the lambda on AWS with the push-sentences-to-elastic Makefile target.

That will create a zip file with the relevant files, then push it to AWS with the aws command line tool.

Tests

Run tests with make run-test.

Exploratory tools

You can try out the NYT parser with python try_parser.py <NYT url>.
