Avoid unstable and unreliable model coefficients with this comprehensive guide to checking for multicollinearity in Python using seaborn and statsmodels. Learn about multicollinearity and how to use the variance inflation factor (VIF) and correlation coefficients.
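As a quick sketch of the VIF side (using statsmodels' variance_inflation_factor on a made-up feature table; your own DataFrame goes in its place):

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Made-up predictors -- swap in your own feature DataFrame
X = pd.DataFrame({
    "hours": [1, 2, 3, 4, 5, 6],
    "score": [52, 57, 61, 68, 74, 79],   # nearly collinear with hours
    "breaks": [0, 1, 1, 2, 2, 3],
})

# Add an intercept column so each VIF is computed against a fitted constant
X_const = add_constant(X)

vif = pd.DataFrame({
    "feature": X_const.columns,
    "VIF": [variance_inflation_factor(X_const.values, i)
            for i in range(X_const.shape[1])],
})
print(vif)  # VIFs above ~5-10 usually flag problematic multicollinearity
```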
Testing for heteroskedasticity (also spelled heteroscedasticity) is essential when running various regression models. For example, one of the main assumptions of OLS is that there is constant variance (homoscedasticity) among the residuals or errors of your linear regression model. Learn how to run and interpret White's test for heteroskedasticity using statsmodels.
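A minimal sketch with statsmodels' het_white, run on simulated data where the error variance grows with x:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

# Simulated data with non-constant error variance by construction
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
y = 2 * x + rng.normal(0, x)  # noise scale grows with x

model = sm.OLS(y, sm.add_constant(x)).fit()

# White's test: the null hypothesis is homoscedasticity
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(model.resid, model.model.exog)
print(f"LM p-value: {lm_pvalue:.4f}")  # a small p-value is evidence of heteroskedasticity
```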
In this post, we’ll review seaborn’s catplot() function, which is helpful for creating different kinds of plots to help you analyze and understand the relationships between continuous and categorical variables. We’ll go over how to use catplot() and some tips for customizing the appearance and layout of your plots.
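For a taste, here's catplot() on seaborn's built-in tips dataset; the kind parameter is what switches between plot families:

```python
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")  # example dataset that ships with seaborn

# kind= accepts "strip", "box", "violin", "bar", "count", and more
g = sns.catplot(data=tips, x="day", y="total_bill", hue="sex",
                kind="box", height=4, aspect=1.5)
g.set_axis_labels("Day of week", "Total bill ($)")
plt.show()
```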
BeautifulSoup is a Python package designed for parsing HTML and turning the markup code into something navigable and searchable. Easy scraping can improve your life tremendously: here, I was using it to assemble a list of on-sale wines at my local wine store. We also use the Requests package to grab the URL (taking bets on when requests is going to be baked into the standard library).
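The skeleton of that kind of scrape looks like this (the URL and CSS class here are hypothetical stand-ins for the real store page):

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical URL and class name -- adjust for the page you're scraping
url = "https://example.com/wines/on-sale"
response = requests.get(url, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# find_all returns every matching tag; get_text pulls out the readable text
for item in soup.find_all("div", class_="product-name"):
    print(item.get_text(strip=True))
```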
In this post, we’ll be going over two ways to perform linear regression via ordinary least squares (OLS) estimation with the statsmodels library. Get a detailed summary of your model fit and access useful summary statistics with these simple functions.
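In sketch form, the two ways (formula API vs. array API) look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
df = pd.DataFrame({"x": np.arange(20.0)})
df["y"] = 3 * df["x"] + rng.normal(0, 2, 20)

# Way 1: the R-style formula API (adds the intercept for you)
fit_formula = smf.ols("y ~ x", data=df).fit()

# Way 2: the array API -- you add the intercept column yourself
fit_arrays = sm.OLS(df["y"], sm.add_constant(df[["x"]])).fit()

print(fit_formula.summary())  # detailed model summary
print(fit_arrays.params)      # individual statistics are attributes
print(fit_arrays.rsquared)
```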
Snowpark allows developers to use familiar languages and coding styles to run code directly on Snowflake compute.
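A minimal Snowpark for Python sketch (the connection parameters are placeholders; the DataFrame operations compile to SQL that executes on Snowflake):

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

# Placeholder connection parameters -- fill in your own account details
connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

# Lazy DataFrame: the filter and select run as SQL on Snowflake compute
orders = (session.table("ORDERS")
          .filter(col("AMOUNT") > 100)
          .select("CUSTOMER_ID", "AMOUNT"))
orders.show()
```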
In the following example, we create a day_of_week() function to demonstrate the use of the match and case statements, Python's equivalent to the switch statement.
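A minimal version of that function might look like this (the post's exact mapping may differ):

```python
def day_of_week(day_number: int) -> str:
    # match/case (Python 3.10+) dispatches on a value, switch-style
    match day_number:
        case 1:
            return "Monday"
        case 2:
            return "Tuesday"
        case 3:
            return "Wednesday"
        case 4:
            return "Thursday"
        case 5:
            return "Friday"
        case 6:
            return "Saturday"
        case 7:
            return "Sunday"
        case _:  # the wildcard plays the role of switch's default
            return "Invalid day number"

print(day_of_week(3))  # Wednesday
```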
This code demonstrates how to use the ProcessPoolExecutor and ThreadPoolExecutor classes from the concurrent.futures module to run multiple threads and processes concurrently or in parallel to save you time.
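The basic pattern, with a sleep standing in for real work:

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def slow_task(n):
    time.sleep(1)  # stand-in for I/O or CPU-heavy work
    return n * n

if __name__ == "__main__":
    # Threads: best for I/O-bound work
    with ThreadPoolExecutor(max_workers=4) as executor:
        print(list(executor.map(slow_task, range(4))))  # ~1s, not ~4s

    # Processes: best for CPU-bound work, since each one sidesteps the GIL
    with ProcessPoolExecutor(max_workers=4) as executor:
        print(list(executor.map(slow_task, range(4))))
```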
In this article, we will look at an example of how to use vectorized operations instead of for loops in Python to save time.
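The shape of the comparison:

```python
import time
import numpy as np

values = np.random.rand(1_000_000)

# For loop: one Python-level operation per element
start = time.perf_counter()
total = 0.0
for v in values:
    total += v * 2
print(f"loop:       {time.perf_counter() - start:.4f}s")

# Vectorized: a single NumPy call loops in C over the whole array
start = time.perf_counter()
total = (values * 2).sum()
print(f"vectorized: {time.perf_counter() - start:.4f}s")
```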
NumPy arrays are stored in contiguous blocks of memory, which allows NumPy to take advantage of vectorization and other optimization techniques. Python lists are stored as individual objects in memory, which makes them less efficient and performant than NumPy arrays for numerical data.
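You can see the contiguous layout directly on a small array:

```python
import numpy as np

arr = np.arange(6, dtype=np.int64).reshape(2, 3)

# The data lives in one contiguous C-ordered buffer...
print(arr.flags["C_CONTIGUOUS"])  # True
# ...described by strides: bytes to step per row and per column
print(arr.strides)                # (24, 8) for a 2x3 int64 array

# A list of lists, by contrast, is an array of pointers to separately
# allocated Python objects -- there's no single buffer to vectorize over
lst = [[0, 1, 2], [3, 4, 5]]
```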
One useful but not well-understood Python tip for data science is the use of generator expressions. Generator expressions are similar to list comprehensions, but they are more memory efficient because they do not create a new list object in memory.
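Side by side:

```python
import sys

# List comprehension: builds the entire list in memory up front
squares_list = [n * n for n in range(1_000_000)]

# Generator expression: same syntax with parentheses, but values are produced lazily
squares_gen = (n * n for n in range(1_000_000))

print(sys.getsizeof(squares_list))  # megabytes
print(sys.getsizeof(squares_gen))   # a couple hundred bytes

# Drop it anywhere an iterable is expected
print(sum(n * n for n in range(1_000_000)))
```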
Caching is a technique for storing the results of expensive computations so that they can be quickly retrieved later. In Python, you can actually use functools.lru_cache(), which stands for least recently used (LRU) cache, to easily add caching to a function.
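The classic demonstration is a recursive Fibonacci, which goes from exponential to linear time with one decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # maxsize=None caches every distinct call
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))          # instant, since each subproblem is computed once
print(fib.cache_info())  # hits, misses, and current cache size
```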
This handy tool allows you to efficiently add and remove items from the beginning or end of a list, making it a valuable addition to your Python toolkit.
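The description matches collections.deque from the standard library; assuming that's the tool in question, a quick sketch:

```python
from collections import deque

d = deque([2, 3, 4])

# O(1) appends and pops at either end (lists are O(n) at the front)
d.appendleft(1)
d.append(5)
print(d)   # deque([1, 2, 3, 4, 5])

d.popleft()
d.pop()
print(d)   # deque([2, 3, 4])
```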
OpenAI has released a powerful API to use with their pre-trained models. This includes generative AI solutions like text completion and natural language processing, without the need to train models locally or work with heavyweight machines. This canvas example is designed to show you how to get started in Python.
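As a rough starting point (this sketch uses the pre-1.0 openai library's completions interface with a placeholder model name; newer library versions use a client object instead):

```python
import os
import openai  # pre-1.0 interface

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # placeholder model choice
    prompt="Explain multicollinearity in one sentence.",
    max_tokens=60,
)
print(response["choices"][0]["text"].strip())
```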
Einblick Tools make data manipulation faster. This first Tools series explores a sequence of Concat, Sort, and Join operations to manipulate and enrich customer data.
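The Tools themselves are drag-and-drop operators, but a rough pandas equivalent of that Concat, Sort, Join sequence (with invented column names) looks like:

```python
import pandas as pd

# Invented customer tables for illustration
us = pd.DataFrame({"customer_id": [1, 2], "spend": [120, 80]})
eu = pd.DataFrame({"customer_id": [3, 4], "spend": [95, 150]})
segments = pd.DataFrame({"customer_id": [1, 2, 3, 4],
                         "segment": ["A", "B", "A", "C"]})

# Concat: stack the two sources into one table
customers = pd.concat([us, eu], ignore_index=True)

# Sort: order by spend, highest first
customers = customers.sort_values("spend", ascending=False)

# Join: enrich each customer with a segment label
enriched = customers.merge(segments, on="customer_id", how="left")
print(enriched)
```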
Getting Twitter data into your Python analysis is easy with the Tweepy library. In this Tools post, we cover a crash course on how to find tweets related to a given hashtag and pull them in (plus how to do a quick sentiment analysis).
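In outline (credentials are placeholders; the search method name varies by Tweepy version: search_tweets in v4, search in older releases):

```python
import tweepy
from textblob import TextBlob  # quick off-the-shelf sentiment scores

# Placeholder credentials from the Twitter developer portal
auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

for tweet in tweepy.Cursor(api.search_tweets, q="#python", lang="en").items(10):
    polarity = TextBlob(tweet.text).sentiment.polarity  # -1 (negative) to +1 (positive)
    print(f"{polarity:+.2f}  {tweet.text[:80]}")
```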
Here's a quick guide to using SQLite in Python: load pandas DataFrames to SQL, manipulate that data, and read it back.
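The whole round trip fits in a few lines:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("example.db")  # creates the file if it doesn't exist

df = pd.DataFrame({"name": ["Ada", "Grace"], "score": [95, 98]})

# Write the DataFrame to a SQL table
df.to_sql("scores", conn, if_exists="replace", index=False)

# Manipulate the data with SQL...
conn.execute("UPDATE scores SET score = score + 1 WHERE name = 'Ada'")
conn.commit()

# ...and read it back into pandas
print(pd.read_sql("SELECT * FROM scores ORDER BY score DESC", conn))
conn.close()
```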
Use pydeck to bring the power of Uber’s open source deck.gl to Python and create stunning map visualizations.
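A small sketch with made-up points:

```python
import pandas as pd
import pydeck as pdk

# Made-up points to plot
df = pd.DataFrame({"lat": [40.7128, 40.7306], "lon": [-74.0060, -73.9866]})

layer = pdk.Layer(
    "ScatterplotLayer",
    data=df,
    get_position=["lon", "lat"],
    get_radius=200,
    get_fill_color=[255, 0, 0],
)
view = pdk.ViewState(latitude=40.72, longitude=-74.0, zoom=11)

# Renders an interactive deck.gl map you can open in a browser
pdk.Deck(layers=[layer], initial_view_state=view).to_html("map.html")
```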
Use the Pushshift API and Reddit API to create novel datasets by pulling Reddit data into pandas DataFrames, then easily transition to NLP and ML analysis of those datasets.
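The Pushshift side can be as simple as one GET request (note that the service's availability has varied over time; the subreddit here is just an example):

```python
import requests
import pandas as pd

url = "https://api.pushshift.io/reddit/search/submission/"
params = {"subreddit": "dataisbeautiful", "size": 100}  # example query

data = requests.get(url, params=params, timeout=30).json()["data"]
df = pd.DataFrame(data)[["title", "score", "num_comments", "created_utc"]]
print(df.head())
```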
Let's face it: the Snowflake web uploader is painful to use. Here's my script to take a CSV or the results of a Python notebook, and write it to your Snowflake database.
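The core of such a script, using the Snowflake connector's write_pandas helper (connection details are placeholders, and the target table is assumed to exist):

```python
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

df = pd.read_csv("data.csv")

# Placeholder connection details
conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)

# Bulk-load the DataFrame into an existing table
success, nchunks, nrows, _ = write_pandas(conn, df, "MY_TABLE")
print(f"Loaded {nrows} rows in {nchunks} chunk(s): {success}")
conn.close()
```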
Combine the data wrangling power of the Python ecosystem and the map visualization strengths of the leaflet.js library through folium.
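Getting a first interactive map on screen takes only a few lines:

```python
import folium

# Center the map and drop a marker
m = folium.Map(location=[48.8584, 2.2945], zoom_start=15)
folium.Marker(
    location=[48.8584, 2.2945],
    popup="Eiffel Tower",
    tooltip="Click me",
).add_to(m)

m.save("map.html")  # open in a browser to pan and zoom the leaflet.js map
```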
Define the max, min, and dimensions of the table to generate, and create a Pandas dataframe with random values inside.
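For example:

```python
import numpy as np
import pandas as pd

low, high = 0, 100   # min and max values
rows, cols = 5, 4    # dimensions of the table

df = pd.DataFrame(
    np.random.uniform(low, high, size=(rows, cols)),
    columns=[f"col_{i}" for i in range(cols)],
)
print(df)
```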