AutoML
The built-in AutoML cell was removed in version 5.0.0. Existing AutoML cells remain in your canvases as read-only cells.
AutoML Package
In lieu of our AutoML cell, we released the Einblick easyml package, which builds on sklearn, TPOT, and shap to streamline building an ML model.
Get started with the following code snippets:
!pip install git+https://github.com/einblick-ai/helpful-functions.git#subdirectory=easyml_einblick
from easyml_einblick import easyml_einblick
## First, instantiate the ML object with input parameters (dataframe, name_of_target_variable, how_long_to_search, regression_or_classification)
ml = easyml_einblick(train_df,"target_column",0.5,"regression")
## Then, trigger data preprocessing:
ml.preprocess()
## You can then start model training:
ml.train()
## And apply the model to a new dataframe:
ml.apply_model(df2)
Read more on our GitHub.
Explainer Alternative
shap is an open-source Python library that allows you to quickly build visuals to explain machine learning models. Using easyml, just run:
## Get explainability:
ml.explain()
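If you prefer to build the explainability plots yourself, you can also call shap directly on a fitted model. Below is a minimal sketch using a stand-in scikit-learn model; the model, dataset, and column names are illustrative and are not the object easyml trains internally.
import pandas as pd
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
## Fit a small stand-in model (placeholder -- not the model easyml produces)
X, y = make_regression(n_samples=200, n_features=4, random_state=0)
X = pd.DataFrame(X, columns=["f1", "f2", "f3", "f4"])
model = RandomForestRegressor(random_state=0).fit(X, y)
## Compute per-feature contributions and draw a global summary plot
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)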
Inputs
Users must provide the training data, the column to predict, and all the features used to predict the outcome.
- Target: the attribute we want to predict (e.g. sales)
- Features: the attributes the model uses to predict the target (e.g. location, month)
- Training Set: the data that establishes the relationship between the target and the features
- Test Set (optional): data used to evaluate how well the trained model performs on unseen data. The target is either absent or ignored in the test set so that the model can make predictions without "looking at the answer key." A minimal splitting example follows this list.
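For illustration, here is one common way to carve a dataframe into a training set and a test set with pandas and scikit-learn. The dataframe and column names (sales, location, month) are made up for this sketch.
import pandas as pd
from sklearn.model_selection import train_test_split
## Toy dataframe: location and month are features, sales is the target
df = pd.DataFrame({
    "location": ["NYC", "BOS", "NYC", "BOS", "NYC", "BOS"],
    "month": [1, 1, 2, 2, 3, 3],
    "sales": [120, 80, 130, 90, 125, 95],
})
## Hold out a third of the rows as a test set the model never sees while training
train_df, test_df = train_test_split(df, test_size=1/3, random_state=42)
## Keep the test target aside ("the answer key") and use it only for scoring
test_features = test_df.drop(columns=["sales"])
test_target = test_df["sales"]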
Tasks
Einblick has two main types of modeling tasks:
- Regression: Use input feature variables to predict a numeric value. Questions like "how much / how many" are generally regression tasks, where the model attempts to predict a quantity.
- Classification: Use input feature variables to predict a label. This is used when the predicted outcome falls into one of several distinct categorical classes; the model uses patterns in the data to estimate the likelihood of each class and returns the label that best fits the outcome. A short sketch contrasting the two task types follows this list.
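As a rough illustration of the difference, here is a sketch using plain scikit-learn (not easyml); the data is made up.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
X = np.array([[1], [2], [3], [4], [5], [6]])  ## a single feature, e.g. month
## Regression: the target is a quantity ("how much"), so the prediction is a number
y_amount = np.array([100, 110, 125, 130, 145, 150])
reg = LinearRegression().fit(X, y_amount)
print(reg.predict([[7]]))  ## a numeric estimate (around 160)
## Classification: the target is a label, so the prediction is one of the known classes
y_label = np.array(["low", "low", "low", "high", "high", "high"])
clf = LogisticRegression().fit(X, y_label)
print(clf.predict([[7]]))  ## a class label ("high"), not a number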
Metrics
A scoring metric is used to evaluate a model's performance on data. It is essentially a formula that determines how much to penalize a model when it is incorrect.
For example, let's say we have a dataset with 90 False values and 10 True values. A model that guesses False for all values will have 90% accuracy (one type of metric), since it guesses correctly for 90% of the data. However, the F1-score (a different metric) will be 0, since none of the True values were predicted correctly.
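You can reproduce this exact situation with scikit-learn's metric functions (a quick sketch, separate from easyml):
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
## 90 False values, 10 True values, and a model that always guesses False
y_true = np.array([False] * 90 + [True] * 10)
y_pred = np.array([False] * 100)
print(accuracy_score(y_true, y_pred))             ## 0.9 -- looks impressive
print(f1_score(y_true, y_pred, zero_division=0))  ## 0.0 -- no True value was found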
Different metrics allow you to change what is prioritized when models are evaluated. Some of the most common metrics include:
Regression
- RMSE [root mean squared error]: standard regression metric that penalizes outliers strongly
- MAE [mean absolute error]: useful when outliers should not be penalized strongly; see the short example after this list
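A quick sketch of both with scikit-learn (the numbers are made up; note how the single large error dominates RMSE but not MAE):
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
y_true = np.array([100, 110, 125, 130])
y_pred = np.array([102, 108, 120, 170])  ## the last prediction is badly off
mae = mean_absolute_error(y_true, y_pred)           ## 12.25 -- errors counted linearly
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  ## ~20.2 -- the outlier dominates
print(mae, rmse)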
Classification
- Accuracy: Useful if all labels should be treated equally, as it is a simple statement of what % of predictions were correct
- Precision: Among the population that we predict to be in class "X," precision asks what % are actually truly "X." For instance, if we are trying to identify targets for high cost treatment, we want to come close to guaranteeing each identified positive is a true positive before investing.
- Recall: Among the population that is actually class "X," what % of them were identified by the model's predictions. This is used when we need to capture as large a % of a given class as possible, usually because treatment is cheap (email marketing) or because any missed observation carries a huge cost (deadly disease).
- F1, F1-macro, F1-micro: A blend of precision and recall, these are useful when some classes are more common than others (e.g. fraud, rare disease detection); a short precision/recall example follows this list.
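To make the precision/recall distinction concrete, here is a small sketch with scikit-learn (made-up labels, unrelated to easyml output):
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score
## 1 = fraud, 0 = legitimate; the model flags 4 transactions, 3 of which really are fraud
y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])
print(precision_score(y_true, y_pred))  ## 0.75 -- of the flagged cases, 75% are truly fraud
print(recall_score(y_true, y_pred))     ## 0.6  -- 3 of the 5 real fraud cases were caught
print(f1_score(y_true, y_pred))         ## ~0.67 -- harmonic mean of precision and recall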