How to get Tweets using Python and Twitter API v2


By Udisha Alok

In this blog, we continue to explore the premium search features of the Twitter API using the API interface from Tweepy, check out the rate limits and how to deal with them, and get the local trends on Twitter.

We will also look at the Client interface provided by the Tweepy library for the Twitter API v2 and how to work with it to get different types of data from Twitter.

In the previous blog of this mini-series, we covered the methods available with the Twitter API v1 interface for getting different kinds of data from Twitter. We also looked at the types of access levels for the Twitter API.

Premium search
Premium search is a subscription API provided by Twitter. There are two products available with this API:

  • Search Tweets: 30-day endpoint
  • Search Tweets: Full-archive endpoint

To begin using any of the subscription APIs, you need to set up a dev environment ⁽¹⁾ for the endpoint.


Search last 30 days

Twitter provides the premium Search Tweets: 30-Day API, which gives you access to tweets posted within the last 30 days. You can search this database, and tweets matching your query are returned.

This feature can be accessed using the search_30_day() method of the API class.


Search the full archive

We can search the full archive using the search_full_archive() method. We can specify the dates and times from and to which we want to search the archive.


Rate limits

Twitter is an invaluable source of big data, accessed by millions of developers worldwide daily. It imposes utilization limits to make the API scalable and reliable. These limits on the usage depend on your authentication method.

There are limits on the number of requests made in a specific time interval. These are called rate limits ⁽²⁾.

So how do we deal with these limits?
How do you check your current rate limit status?
And what should your app do if it breaches the rate limit: terminate, or wait for the limit to be replenished?

Let us explore these options.


How to check the rate limit status?

The api.rate_limit_status() method returns the number of API requests still available before the limit is reached for the current rate-limit window (15 minutes for most v1.1 endpoints). If you provide credentials for a user, the method returns the rate limit status for that user; otherwise, it returns the rate limit status for the requester's IP address.


How to set the app to wait till the rate limit is replenished?

When we initialize the API class object after authentication, we can set it up so that it waits for the rate limits to get replenished.


Get the local trends

Trends are an important feature of Twitter. So how can we see for which locations Twitter currently provides trending topics?

The available_trends() method returns the WOE (Where On Earth) id and other human-readable information for the locations for which Twitter has trending information.

Let us now see how we can get the trending topics on Twitter for a particular location, be it a city or a country. For this, we need the WOE id of the location. You can get this from here ⁽³⁾.

Once you have the WOE id of the place, getting the local trends is just a line of code.


Tweepy client for Twitter API v2

Tweepy provides the API interface for the Twitter API v1.1. For the v2 API, Tweepy provides the Client interface. This is available from Tweepy v4.0 onwards, so you may need to upgrade your Tweepy installation if you installed it a while back.

You can do this by simply running the following command:

pip install tweepy --upgrade

Client authentication

Authentication is similar to that for the API class, except that you need your project's bearer token to authenticate the client.


Get the user name for a particular user ID using client


Get the user ID for a particular user name using client


Get the user names for multiple user IDs using client

Let us now fetch the details for multiple user ids. We will fetch only some user fields ⁽⁴⁾.

Using a similar approach, you can also try to fetch the user ids for multiple users. Try it out!


Get Tweet(s) with Tweet Id(s) using client

We can fetch a tweet given its tweet id using the get_tweet() method.

What if we want to get the tweets for multiple tweet ids? The approach is similar to the above.


Get a user’s followers using client

Would you like to now check the followers a user has? Let us see how you can do that.


Get users that the user follows using client

And who does this user follow? You may be interested to know that too.


Get a user’s Tweets using client

Similar to how we fetched a user’s tweet using API, we can fetch a user’s tweets using the client’s get_users_tweets() method. By default, we will have values only for the tweet id and the text in the response. If we want to access the other tweet fields ⁽⁵⁾, we will have to specify them separately, as shown below.


Get Tweets that a user liked using client


Get users who Retweeted a Tweet using client


Search recent Tweets using client

The search_recent_tweets() method returns tweets from the last seven days that match the given search query. You can also specify the start and end times for the search.

By default, the search results will be in the form of a response containing the tweet ID and tweet text. Please note that the max_results parameter can only have a value between 10 and 100.


Get Tweet count for a search query using client

How many tweets about Elon Musk were there recently in English which were not retweets? Let us see how we can get a count of such tweets using the get_recent_tweets_count() method.

Head to this ⁽⁶⁾ link if you would like to know more about building queries for searching tweets.


Pagination in client

We have mentioned earlier that the max_results parameter while searching tweets can have a maximum value of 100. So what do we do if we need more than 100 tweets? The answer is pagination.

Pagination in Client is similar to how we used Cursors for the API. Let us see an example of how we can fetch 1000 tweets. We have taken the same query that we used in the previous section.

Want another example? Here you go.

We have covered some of the important methods to fetch data from Twitter. There are many more to explore. If you would like to read about all the methods that are available, please read the official Client documentation ⁽⁷⁾.

The methods for the Client return a Response object with the results. In the subsequent sections, we will look at how we can save the search results to different data formats for analysis.


Using expansions to get user and media information

The Twitter API provides expansions ⁽⁸⁾ as a means to expand the payload we get from a search for a user or tweet lookup. Expansions help include additional data in the same response without the need for separate queries.

We can expand on the attachments.media_keys to view the media object, author_id to view the user object, and referenced_tweets.id to view the Tweet object the originally requested Tweet was referencing. The expanded objects are nested in the includes object.

Let us look at an example to get the user and media information using expansion:


Writing the search results to a text file

Let us now save the search results of a query to a text file.


Putting the search results into a DataFrame

Pandas DataFrames are great for working with a large amount of data. So let us save our search results as a Pandas DataFrame. If you would like to save this data in a csv file, you can use the Pandas to_csv() method.


Twitter API v2 GitHub

We have attempted to cover most of the common use cases for pulling Twitter data. If you want to explore the Twitter API v2 further, this Twitter Developer Platform Resources ⁽⁹⁾ repository is a great place to start. This repository has many sample codes ⁽¹⁰⁾ for the versatile functionality offered by the Twitter API v2.


Conclusion

In this two-part blog series, we explored how we can pull data from Twitter using the API and Client interfaces of the Tweepy library. We also looked at Twitter APIs and their functionalities.

Now that we have nicely organised data in Pandas DataFrames, we can perform natural language processing on it. Want to learn more? Check out our course on Twitter Sentiment Analysis and go live trading!

Till then, happy coding!


Disclaimer: All investments and trading in the stock market involve risk. Any decisions to place trades in the financial markets, including trading in stock or options or other financial instruments is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article is for informational purposes only.
