import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

datasets = load_diabetes()
x = datasets.data
y = datasets.target
x_train, x_test, y_train, y_test = train_test_split(
    x, y, train_size=0.8, shuffle=True, random_state=123)
from xgboost import XGBRegressor
model = XGBRegressor()
model.fit(x_train, y_train)
print(model, ':', model.feature_importances_)  # shows the importance of each feature
import matplotlib.pyplot as plt
from xgboost.plotting import plot_importance
plot_importance(model)
plt.show()
plot_importance works only with XGBoost models; in this plot, f7 has the lowest importance.