Machine Learning/Models (with codes)

Logistic Regression Model (w/code)

metamong 2022. 4. 25.

๐Ÿ‘‹๐Ÿป ์ €๋ฒˆ ์‹œ๊ฐ„์— Logistic Regression Model์ด ๋ฌด์—‡์ธ์ง€, ๊ธฐ์ดˆ ๊ฐœ๋…์— ๋Œ€ํ•ด์„œ ํ•™์Šตํ–ˆ๋‹ค. (↓↓)

 

Logistic Regression Model (concepts)


sh-avid-learner.tistory.com

๐Ÿ™ ์ด์   python์œผ๋กœ model์„ ์ง์ ‘ ๊ตฌํ˜„ํ•ด๋ณด๋ ค ํ•œ๋‹ค!

 

- as we learned, this is the model for solving binary classification problems -

 

++ using scikit-learn ++

 

¶ sklearn.linear_model.LogisticRegression documentation ¶

https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression

 

class sklearn.linear_model.LogisticRegression(penalty='l2', *, dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='lbfgs', max_iter=100, multi_class='auto', verbose=0, warm_start=False, n_jobs=None, l1_ratio=None)

 

> The many hyperparameter combinations that go into a logistic regression model deserve their own look later! (they're hard;; 😢) <

For now, let's cover just a few (a combined usage sketch follows the list).

 

class_weight (default None): when the ratio between the two target classes is severely imbalanced, this option re-weights training so the model learns the classes more evenly, which can improve performance. Pass 'balanced', or a dict giving the weight you want for each class.

 

solver (default 'lbfgs'): the optimization algorithm used to fit the model. The default, lbfgs, is a limited-memory version of the classic BFGS algorithm, and you can pick whichever algorithm best suits your problem. Note that lbfgs works with the l2 penalty. There are a lot of difficult algorithms here, so we'll skip the details for now. (Probably worth digging into later?)

 

warm_start (default False): when set to True, the results of a previous fit are reused as the starting point for the next fit. In other words, training accumulates across calls.

 

max_iter (default 100): the chosen solver iterates repeatedly to produce a result, and this parameter sets how many iterations it may run. There are cases where running somewhat more than 100 iterations actually improves performance!

 

verbose: an argument that prints the training process, for when you want to watch what the model is doing
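
To see how these arguments fit together, here's a minimal instantiation sketch - the values are illustrative, not recommendations:

#illustrative hyperparameter combination (values are examples only)
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(
    class_weight='balanced',  # weight classes inversely to their frequency
    solver='lbfgs',           # default solver; works with the l2 penalty
    max_iter=300,             # allow more iterations than the default 100
    warm_start=True,          # the next fit() starts from the current solution
    verbose=1                 # print optimizer progress
)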

 

> key methods <

 

→ ํƒ€ ๋ชจ๋ธ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ fitting - predicting์„ ์œ„ํ•ด fit & predict method ์กด์žฌ

the score method reports accuracy (one of the classification evaluation metrics)

feed features into the predict_proba method and it returns the probability of belonging to each binary class! That is, the class with the larger probability becomes the prediction (see the sketch below)
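
A minimal sketch of these methods on made-up toy data (the arrays below are invented purely for illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression

#tiny invented dataset: one feature, binary target
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)  # fit
print(clf.predict([[2.5]]))           # predicted class
print(clf.predict_proba([[2.5]]))     # [P(class 0), P(class 1)] - the larger wins
print(clf.score(X, y))                # accuracy on the given data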


<Logistic Regression hands-on example (w/scikit-learn) (the usual procedure, slightly modified)>

 

① 'Choose a class of model by importing the appropriate estimator class from Scikit-Learn.'

decide which class of model to use!

(ex) ์„ ํ˜•๋ชจ๋ธ์„ ๋งŒ๋“ค๊ณ  ์‹ถ์œผ๋ฉด sklearn์˜ linear_model์„ importํ•˜๋ฉด ๋จ!)

 

② 'Choose model hyperparameters by instantiating this class with desired values.'

 ๋ชจ๋ธ์˜ ์ธ์ž๋ฅผ ์„ค์ •ํ•˜๋Š”๊ฒŒ โŒ (ML term์—์„œ ํ†ต์šฉ๋˜๋Š” ๋ถ€๋ถ„ - ํ—ท๊ฐˆ๋ฆฌ์ง€ ๋ง๊ธฐ)

→ once the model class is chosen, this is the step of picking which specific model within that class to use

(ex) ์„ ํ˜•๋ชจ๋ธ์„ import ํ–ˆ๋‹ค๋ฉด ๋‹ค์–‘ํ•œ ์„ ํ˜• ๋ชจ๋ธ ์ค‘ ํ•œ ์ข…๋ฅ˜์˜ model์„ import ํ•œ๋‹ค! - ์˜ˆ๋ฅผ ๋“ค์–ด import LinearRegression)

→ cross-validation techniques are sometimes used in this model-selection step, as sketched below
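
For example, a minimal cross-validation sketch (X and y are assumed to be an already-prepared features matrix and target vector):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

#5-fold cross-validated accuracy; X and y assumed prepared beforehand
scores = cross_val_score(LogisticRegression(max_iter=300), X, y, cv=5)
print(scores.mean())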

 

③ 'Arrange data into a features matrix and target vector following the discussion above. (+ additionally, split into train/val/test)'

 ์œ„ 1.์—์„œ ๋ฐฐ์šด ๋Œ€๋กœ ๋ชจ๋ธ์— ๋“ค์–ด๊ฐˆ data ๋‘ ์ข…๋ฅ˜ X์™€ y ์ค€๋น„!

 

④ 'Check the metric values against a baseline model'

→ the model we build later has to beat some reference performance, so build a baseline model first and check its score

 

⑤ 'Fit the model to your data by calling the fit() method of the model instance.'

→ now fit the given X and y to the model, and the model is complete!

 

⑥ 'Repeat the hyperparameter tuning of step ⑤ over and over to improve performance on the validation data'

→ before finally feeding new data or the test data into the model, keep running the hyperparameter-tuning loop until you're satisfied, so the model performs at its best (a sketch follows)
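
One common way to run this loop is a grid search; a minimal sketch, assuming the scaled training data prepared in the example below (the grid values are illustrative):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

#illustrative grid: C (inverse regularization strength) and class_weight
param_grid = {'C': [0.01, 0.1, 1, 10], 'class_weight': [None, 'balanced']}
search = GridSearchCV(LogisticRegression(max_iter=300), param_grid, cv=5)
search.fit(X_train_scaled, y_train)
print(search.best_params_, search.best_score_)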

 

⑦ 'Apply the Model to new data (test data)'

≫ For supervised learning, often we predict labels for unknown data using the predict() method.

≫ For unsupervised learning, we often transform or infer properties of the data using the transform() or predict() method.

→ ์ง€๋„ํ•™์Šต, ๋น„์ง€๋„ํ•™์Šต ํ•™์Šต ์ข…๋ฅ˜์— ๋”ฐ๋ผ ์•ฝ๊ฐ„ ๋‹ค๋ฅด๋‹ค! - ํ•˜ํŠผ ์ƒˆ๋กœ์šด data๋ฅผ ์ง‘์–ด๋„ฃ์–ด ์™„์„ฑํ•œ ๋ชจ๋ธ์— ์˜๊ฑฐํ•ด ์˜ˆ์ธก๊ฐ’์„ ์ƒ์„ฑํ•˜์ž!


<Example>

 

Q. ์‹ ์ธ NBA ๋†๊ตฌ์„ ์ˆ˜์˜ ์—ฌ๋Ÿฌ ์Šคํƒฏ์„ ํŒŒ์•…ํ•ด ์ด๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ํ–ฅํ›„ 5๋…„ ๋‚ด์— ์ถœ์ „์ด ๊ฐ€๋Šฅํ•  ์ง€์˜ ์—ฌ๋ถ€๋ฅผ ํŒ๋ณ„ํ•˜๋Š” ๋กœ์ง€์Šคํ‹ฑ ํšŒ๊ท€ ๋ถ„๋ฅ˜๊ธฐ๋ฅผ ๊ตฌ์ถ•ํ•˜์ž

 

A. STEP-BY-STEP

 

①② We've already decided on a logistic regression model!

 

โ‘ข ๋ฐ์ดํ„ฐ - google dataset ์ฐธ์กฐ

 

> the features are various numeric basketball stats (only the name column is excluded)

> target์€ 5๋…„ ๋‚ด์— ์ถœ์ „ ์˜ˆ์ƒํ•œ๋‹ค๋ฉด 1, ์•„๋‹ˆ๋ฉด 0์ธ binary class ํ˜•ํƒœ

 

> preprocessing + train/test split + X/y split

(no validation set, since this is a simple example)

 

import pandas as pd

dataset = pd.read_csv('./data/nba_logreg.csv')

dataset.dropna(inplace=True) #drop rows with missing values
dataset.drop(columns=['Name'],inplace=True) #drop the name column

#train, test
from sklearn.model_selection import train_test_split
train, test = train_test_split(dataset, random_state=2)

target = 'TARGET_5Yrs'

X_train = train.drop(columns=target) 
y_train = train[target] 

X_test = test.drop(columns=target) 
y_test = test[target]

 

> scaling - using StandardScaler

🗽 Why scaling is recommended - in logistic regression, when the features are measured in wildly different units, large-unit and small-unit features inevitably contribute to the target unevenly. Also, when every feature is on its own scale, the solver we chose above converges more slowly during optimization (i.e., gradient descent takes longer to converge), which can end up hurting the model's performance.

🤸‍♂️ So we should use scaling to put every feature on the same unit range, giving the algorithm optimal conditions so the logistic model converges quickly and accurately!

 

-- ๋” ์ž์„ธํ•œ ์‚ฌํ•ญ์€ normalization, standardization, regularization posting ์ฐธ๊ณ ํ•˜๊ธฐ -- 

 

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train) #fit the scaler on train data only
X_test_scaled = scaler.transform(X_test)       #apply the same transform to test data

 

④ Check the baseline model's performance - using the majority class

 

#build the baseline: always predict the majority class
major = y_train.mode()[0]

y_pred_baseline = [major] * len(y_train)

from sklearn.metrics import accuracy_score
print("baseline model accuracy: ", accuracy_score(y_train, y_pred_baseline))
#baseline model accuracy:  0.6204819277108434
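
(The same majority-class baseline could also be built with scikit-learn's DummyClassifier - a quick sketch:)

from sklearn.dummy import DummyClassifier

#always predicts the most frequent class seen in y_train
dummy = DummyClassifier(strategy='most_frequent').fit(X_train_scaled, y_train)
print(dummy.score(X_train_scaled, y_train)) #should match the baseline accuracy above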

 

⑤ fitting

> No particular argument tuning was done. (step ⑥ hyperparameter tuning against validation data is skipped)

 

#LR
from sklearn.linear_model import LogisticRegression

logistic = LogisticRegression()

# fit the model on the scaled training data
logistic.fit(X_train_scaled, y_train)

 

⑦ predicting - testing

 

# predict on the test set
pred_lr = logistic.predict(X_test_scaled)

print("LR accuracy: ", accuracy_score(y_test, pred_lr))
#LR accuracy:  0.7027027027027027

 

🤹 Modelling result: the baseline accuracy was about 0.62, while the model's accuracy comes out around 0.70 (performance improved!)

 

- logistic model effectiveness demonstrated! -

 

++ analyzing the logistic coefficients ++

 

# plot logistic model coefficients
import matplotlib.pyplot as plt

coefficients = pd.Series(logistic.coef_[0], X_train.columns)
plt.figure(figsize=(6,6))
coefficients.sort_values().plot.barh()
plt.show()

 

- logistic coefficients -

 

⛹️‍♂️ Interpreting logistic coefficients - when the corresponding X feature increases by one unit, the odds (p/(1-p)) are multiplied by e raised to that coefficient.

(scaling์„ ๊ฑฐ์น˜๊ณ  ๋‚œ ๋’ค์ด๋ฏ€๋กœ feature๋ณ„๋กœ ๋ชจ๋‘ ๋‹จ์œ„๊ฐ€ ๊ฐ™์•„์„œ coefficients ๊ฐ„ ๋น„๊ต๊ฐ€ ๊ฐ€๋Šฅ!)

๐Ÿ‘ ๋”ฐ๋ผ์„œ coefficient๊ฐ’์ด 0์ด์ƒ ์–‘์ˆ˜์ด๋ฉด Odds ๋น„๊ฐ€ 1์ด์ƒ = ์ฆ‰ p > (1-p)๋กœ ์„ฑ๊ณตํ™•๋ฅ ์— ๋” ๊ธฐ์—ฌํ•˜๋Š” feature

๐Ÿ‘ ๋ฐ˜๋Œ€๋กœ 0 ๋ฏธ๋งŒ ์Œ์ˆ˜์ธ feature๋Š” Odds ๋น„๊ฐ€ 1์ดํ•˜ = ์ฆ‰ p < (1-p)๋กœ ์‹คํŒจํ™•๋ฅ , ์ฆ‰ class 0์— ๋” ๊ธฐ์—ฌํ•˜๋Š” feature

 

→ ๋”ฐ๋ผ์„œ ์œ„ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ 3P Made, GP, FGA feature๊ฐ€ ์‹ ์ธ ํ”Œ๋ ˆ์ด์–ด๊ฐ€ 5๋…„๋™์•ˆ ์ž˜ ๋‚˜๊ฐ„๋‹ค๋Š” ์ ์— ํฐ ๊ธฐ์—ฌ. ํŠนํžˆ 3P Made, ์ฆ‰ 3์  ์„ฑ๊ณต์—ฌ๋ถ€๊ฐ€ ์ง€๋Œ€ํ•œ ์˜ํ–ฅ์„ ๋ฏธ์นœ๋‹ค๋Š” ๊ฑธ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Œ!

3PA is the feature that influences the 5-year-career outcome most strongly in the opposite direction (it's not that it has no influence - it influences it in reverse)


* ๋ฐ์ดํ„ฐ ์ถœ์ฒ˜) https://data.world/exercises/logistic-regression-exercise-1

* ์ถœ์ฒ˜1) coefficients https://soobarkbar.tistory.com/12 / https://dive-into-ds.tistory.com/44

* ์ถœ์ฒ˜2) scaling ํ•„์š”์„ฑ https://stats.stackexchange.com/questions/48360/is-standardization-needed-before-fitting-logistic-regression
