I have been preparing a couple of talks that I have to give in the next few weeks, and I needed some pictures of the people working at Signal to have some nice images of the team and the company in general. Although we have some of them stored online, I realised that our Twitter account had some of the best pictures, especially from the early days of the company. Around the same time, I was reading a blog post about mining Twitter data with Python, written by my good friend and ex-colleague (at Queen Mary), Dr. Marco Bonzanini. These two events together seemed like a good excuse to build a little tool in Python to download the pictures that a Twitter account has published, and that is the main focus of this post. I hope you find it useful; I definitely have…
Marco’s post explains very well how to register a Twitter app, a necessary step to be able to use the Twitter API, and how to set up tweepy to return JSON. For the sake of completeness, the code used for this purpose is shown below, but I encourage you to visit the original post for a detailed explanation.
```python
import tweepy
from tweepy import OAuthHandler
import json

consumer_key = 'YOUR-CONSUMER-KEY'
consumer_secret = 'YOUR-CONSUMER-SECRET'
access_token = 'YOUR-ACCESS-TOKEN'
access_secret = 'YOUR-ACCESS-SECRET'

@classmethod
def parse(cls, api, raw):
    status = cls.first_parse(api, raw)
    setattr(status, 'json', json.dumps(raw))
    return status

# Status() is the data model for a tweet
tweepy.models.Status.first_parse = tweepy.models.Status.parse
tweepy.models.Status.parse = parse
# User() is the data model for a user profile
tweepy.models.User.first_parse = tweepy.models.User.parse
tweepy.models.User.parse = parse
# You need to do this for all the models you use

auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth)
```
After this point, we can access the Twitter API in a Pythonic way through the variable api, which greatly simplifies the coding process and produces more readable and elegant code. Just to reiterate our goal: we want to get all the pictures that have been published by a specific Twitter user. This involves the following steps:
- Get all the tweets from a user
- Filter those that contain images and get their full paths
- Download the images
1. Getting the tweets from a user
Listing the tweets from a given user can be done using the method user_timeline, which allows us to specify the screen_name (i.e., the Twitter handle) and the number of tweets we want to get (up to a maximum of 200). It also allows finer-grained filtering, such as including retweets or replies. In our case, we want 200 tweets that were directly created by the user (i.e., no retweets or replies):
```python
tweets = api.user_timeline(screen_name='miguelmalvarez', count=200,
                           include_rts=False, exclude_replies=True)
```
This very simple code provides us with the most recent tweets from an account (mine in this case). However, it doesn’t allow us to get more than 200 of them. This type of problem is usually solved with pagination; however, the real-time nature of Twitter makes that approach unreliable, as new tweets shift the pages while we read them. For this reason, the Twitter API uses cursoring: we specify, via max_id, the id of the most recent tweet we want to receive, and as a result we get up to 200 tweets older than the one before it. This is explained in detail in the documentation about Working with Timelines, and it is implemented by the following code:
```python
tweets = api.user_timeline(screen_name='miguelmalvarez', count=200,
                           include_rts=False, exclude_replies=True)
last_id = tweets[-1].id

while True:
    # max_id is inclusive, so ask for tweets strictly older than the
    # oldest one we already have
    more_tweets = api.user_timeline(screen_name='miguelmalvarez', count=200,
                                    include_rts=False, exclude_replies=True,
                                    max_id=last_id - 1)
    if len(more_tweets) == 0:
        # There are no more tweets
        break
    last_id = more_tweets[-1].id
    tweets = tweets + more_tweets
```
This code stores all the tweets by a specific user in the variable tweets. Now, we are ready to filter those with images.
2. Obtaining the full path for the images
We now have all the tweets (actually, the maximum the API supports is the most recent 3,200) from a given user, and we want to keep only those that contain a media file. In order to do this, we need to understand the structure returned by the user_timeline call and the way the API deals with entities. We should look at the field media to find any multimedia content within a tweet; from there, we can access the URL of each specific media attachment via media_url. This is probably easier to understand in code:
```python
media_files = set()
for status in tweets:
    # 'media' is absent when the tweet has no attachments
    media = status.entities.get('media', [])
    if len(media) > 0:
        media_files.add(media[0]['media_url'])
```
This implementation assumes that either each tweet has at most one media attachment or we only care about the first one. Also, we do not check its type, so we may get the URL of any multimedia content, such as images or videos. All these assumptions are acceptable for my purposes and for this blog post. At this stage, we have the URLs of all the multimedia content stored in the variable media_files.
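If we wanted to lift those assumptions, the filtering could be extended along these lines. The helper below is illustrative rather than part of the original tool: it keeps every attachment in a tweet's entities and skips anything whose type is not 'photo':

```python
def photo_urls(entities):
    """Return the URLs of all photo attachments in a tweet's entities.

    `entities` is the dict the API exposes for a single tweet;
    attachments whose type is not 'photo' (e.g. videos) are skipped.
    """
    return [m['media_url']
            for m in entities.get('media', [])
            if m.get('type') == 'photo']
```

Inside the loop above we would then write media_files.update(photo_urls(status.entities)) instead of the single add.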
3. Downloading the images
Downloading the files can be easily achieved in Python using the third-party wget library:
```python
import wget

for media_file in media_files:
    wget.download(media_file)
```
This will download all the images (or any other multimedia content) into the current folder. More advanced solutions could create a new folder and move the files there, as well as filter them by their specific type (image, video, audio, …).
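As a sketch of that improvement (the folder names and the extension mapping are my own choices, not anything prescribed by the API), we could route each file into a subfolder based on its extension:

```python
import os

# Illustrative mapping from file extension to a target subfolder
MEDIA_FOLDERS = {'.jpg': 'images', '.jpeg': 'images', '.png': 'images',
                 '.gif': 'images', '.mp4': 'videos'}

def target_path(media_url, base_dir='downloads'):
    """Build a destination path for a media URL, grouped by type."""
    name = os.path.basename(media_url)
    ext = os.path.splitext(name)[1].lower()
    return os.path.join(base_dir, MEDIA_FOLDERS.get(ext, 'other'), name)
```

Before each download we would create the folder (e.g., with os.makedirs) and pass the result to wget via wget.download(media_file, out=target_path(media_file)).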
I think this blog post shows a very simple, yet quite powerful, piece of functionality: downloading the pictures from a Twitter account. In addition to allowing me to get some pictures for my future talks, it shows how to use part of the Twitter API through the tweepy library.
I have suggested multiple improvements throughout the post that I will probably implement at some point in the future. Nonetheless, I invite anyone who wants to extend this little tool to submit a pull request on GitHub, where all the code is available.