XGBoost with Python and Scikit-Learn
@kmlknta21

Nice. It is helpful to run in a Jupyter Notebook. Thank you.

@malambomutila

From the Feature Importance graph, Delicassen has the highest F score. Doesn't this mean that Delicassen was the most important feature, as opposed to Grocery, which was fourth best?
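
For anyone checking this, here is a minimal sketch (synthetic data, not the notebook's Wholesale customers dataset) of why the ranking depends on which importance type XGBoost is asked for: `plot_importance` defaults to `importance_type='weight'` (the split-count "F score"), which can order features differently from `'gain'`.

```python
# Minimal sketch, assuming synthetic data rather than the notebook's
# Wholesale customers dataset. The "F score" in plot_importance is the
# default importance_type='weight' (how often a feature is split on);
# ranking by 'gain' (average loss reduction) can give a different order.
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=6, random_state=42)
model = xgb.XGBClassifier(n_estimators=50, random_state=42).fit(X, y)

fig, axes = plt.subplots(1, 2, figsize=(12, 4))
xgb.plot_importance(model, importance_type='weight', ax=axes[0],
                    title='weight (F score)')
xgb.plot_importance(model, importance_type='gain', ax=axes[1],
                    title='gain')
plt.tight_layout()
plt.show()
```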

@ajitbalakrishnan

No answer to @malambomutila's comment?

@Jason2Brownlee

Excellent case study!

I was only able to get an accuracy of about 93% with XGBoost.

@p-dot-max

Thanks a lot

@peet-droid

I like how you described each parameter's meaning. I had no idea you could use dropout via DART.
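
If anyone wants to try it, here is a minimal sketch (the hyperparameter values are illustrative, not from the notebook) of enabling the DART booster, which applies dropout to trees during boosting:

```python
# Minimal sketch with illustrative hyperparameters: DART applies dropout
# to the tree ensemble during boosting, unlike the default 'gbtree'.
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)

model = xgb.XGBClassifier(
    booster='dart',    # Dropouts meet Multiple Additive Regression Trees
    rate_drop=0.1,     # fraction of trees dropped in each boosting round
    skip_drop=0.5,     # probability of skipping dropout for a round
    n_estimators=100,
)
model.fit(X, y)
```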

@icebeartellsnolies

> From the Feature Importance graph, Delicassen has the highest F score. Doesn't this mean that Delicassen was the most important feature, as opposed to Grocery, which was fourth best?

Yes, you are right, @malambomutila.

@snailcoder

nice work

@raman118

Your code is the GOAT.

@aaron-galligan

Small correction: under "Command line parameters", I think reg:logistic is meant for classification problems with probabilities and binary:logistic is for classification problems with only a decision, not the other way around. Great notebook, cheers.
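
For what it's worth, the output ranges are easy to probe directly. A minimal sketch with synthetic data (per the XGBoost docs, both reg:logistic and binary:logistic pass predictions through a sigmoid, so they stay in [0, 1], while binary:logitraw returns the raw margin score):

```python
# Minimal sketch, synthetic data: compare the output ranges of the
# objectives. reg:logistic and binary:logistic both yield values in
# [0, 1]; binary:logitraw yields raw scores before the logistic transform.
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, random_state=0)
dtrain = xgb.DMatrix(X, label=y)

for objective in ('reg:logistic', 'binary:logistic', 'binary:logitraw'):
    booster = xgb.train({'objective': objective}, dtrain, num_boost_round=10)
    preds = booster.predict(dtrain)
    print(objective, preds.min(), preds.max())
```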
