対戦ゲームデータ分析甲子園

Aim for victory in the "Another" battle!


Results from training separate models per mode (for reference)

Results when training per lobby-mode


1. Background

As anyone who has played Splatoon 2 knows, the game has the following modes
(modes irrelevant to this competition are excluded):

  • Turf War (ナワバリバトル)
  • Splat Zones (ガチエリア)
  • Clam Blitz (ガチアサリ)
  • Rainmaker (ガチホコ)
  • Tower Control (ガチヤグラ)

Each mode has its own win condition, and player behavior changes so much that each mode is effectively a different game.
Which weapon combinations are favorable or unfavorable may also differ by mode.
Hence the motivation for this post: wouldn't training on the data separately per mode be more accurate?


2. What I did

I compare the following two training setups and check which one scores higher:

  1. Train on all the data at once
  2. Split the training data by mode and train a separate model on each

For the model, I used the shared "LightGBMを使ったBase line" (LightGBM baseline).

Note that for (2) I trained on each mode's data separately and merged the predictions at the end (brute force).
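In outline, the brute-force part of (2) is just a split–train–concatenate loop. A minimal sketch, assuming the mode column is still available to split on and a predict_fn with the same signature as the predict() function defined later in this post:

import pandas as pd

def predict_per_mode(train, test, predict_fn):
    # Train one model per mode, then stitch the per-mode test
    # predictions back together by id (the "brute force" merge).
    parts = []
    for mode in train['mode'].unique():
        tr = train[train['mode'] == mode]
        te = test[test['mode'] == mode]
        ids = te[['id']].copy()
        _, pred = predict_fn(tr.drop(['y', 'id', 'mode'], axis=1), tr['y'],
                             te.drop(['id', 'mode'], axis=1))
        ids['y'] = pred
        parts.append(ids)
    return pd.concat(parts).sort_values('id')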


3. Results

Option (2) gave the higher Public score:

  • Public score for the all-data model: 0.536344
  • Public score for the merged per-mode predictions: 0.541567

Looking at the CV scores, Turf War in particular was high, at CV = 0.5996257033039711.
Separating it from the other modes seems to have paid off.

However, as described in Section 4 (1), there is a problem: missing values appear in the training data.
So please treat the figures above as reference values.


4. Issues

(1) Missing values appear in the training data

No missing values appeared when training on all the data at once, but splitting the training data by mode produces some.
The NaNs appear inside the for loop of change_to_target2(); I have not yet pinned down the cause.
If you notice anything, I would appreciate a comment.
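For what it's worth, one plausible mechanism (my guess; I have not confirmed it on the competition data): in out-of-fold target encoding, a category that occurs only in a fold's validation part has no mean computed from that fold's training part, so .map() returns NaN, and splitting the data by mode makes such rare categories more likely. A minimal reproduction:

import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

# Toy data: category 'C' occurs exactly once, so in one fold it is
# held out for validation and absent from the training part.
df = pd.DataFrame({'weapon': ['A', 'A', 'B', 'B', 'C'],
                   'y':      [1,   0,   1,   1,   0]})

tmp = np.repeat(np.nan, len(df))
kf = KFold(n_splits=5, shuffle=True, random_state=71)
for tr_idx, va_idx in kf.split(df):
    target_mean = df.iloc[tr_idx].groupby('weapon')['y'].mean()
    # In the fold where 'C' is held out, target_mean has no entry
    # for 'C', so map() yields NaN for that row.
    tmp[va_idx] = df['weapon'].iloc[va_idx].map(target_mean)

print(np.isnan(tmp).sum())  # 1 -- the encoded column contains a NaN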


5. Ideas for improvement

(1) Problems with the method above

While the method above scored a high CV for Turf War, the CVs for the ranked modes (Splat Zones through Tower Control) were lower, Tower Control especially.
One possible cause is that splitting the data shrinks each model's training set (a quick way to check the per-mode sample counts is sketched after the list).

  • Turf War: CV_score:0.5996257033039711
  • Splat Zones: CV_score:0.5319582472095907
  • Clam Blitz: CV_score:0.5350757493959036
  • Rainmaker: CV_score:0.528967034800994
  • Tower Control: CV_score:0.5206471494607088
  • Average: CV_score:0.5432547768342336
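To check the data-shrinkage hypothesis, one can count the training rows each per-mode model actually gets (a minimal sketch; it assumes the raw train_data.csv, since the mode column is dropped during preprocessing below):

import pandas as pd

# How many training rows does each per-mode model get after the split?
train = pd.read_csv("../data/train_data.csv")
print(train['mode'].value_counts())
print(train['lobby-mode'].value_counts())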

(2) Proposed improvement

Train the ranked modes (Splat Zones through Tower Control) together as a single "Ranked Battle" model, which keeps the training set large.
Turf War had a good CV, so keep training it separately.
The two groups can be told apart by the "lobby-mode" column of the training data:

  • regular: Regular Battle = Turf War
  • gachi: Ranked Battle = Splat Zones through Tower Control

(3) Results of the improvement

As shown below, the overall CV went up.
The Ranked Battle CV barely improved, though, so the benefit is debatable.

  • Regular Battle: CV_score:0.5996257033039711
  • Ranked Battle: CV_score:0.5277763340574645
  • Average: CV_score:0.543471347270915

6. Closing

Thank you, Oregin, for sharing the LightGBM model.
Having a baseline model makes comparisons like this much easier; it was a great help.


# Import libraries
import pandas as pd
import numpy as np
import re
import matplotlib.pyplot as plt
import seaborn as sns
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
import warnings
warnings.filterwarnings('ignore')

# Load the data
train = pd.read_csv("../data/train_data.csv")
test = pd.read_csv('../data/test_data.csv')

Preprocessing

# Replace every missing value with -1
def fill_all_null(df):
    for col_name in df.columns[df.isnull().sum()!=0]:
        df[col_name] = df[col_name].fillna(-1)

# Fill the missing values in the train and test data
fill_all_null(train)
fill_all_null(test)
# Define the target-encoding function
def change_to_target2(train_df, test_df, input_column_name, output_column_name):
    # Fill NaNs (as strings) so they form their own category
    train_df[input_column_name] = train_df[input_column_name].fillna('-1')
    test_df[input_column_name] = test_df[input_column_name].fillna('-1')

    kf = KFold(n_splits=5, shuffle=True, random_state=71)
    #=========================================================#
    c = input_column_name
    # Mean of y per category, computed over the whole training data
    data_tmp = pd.DataFrame({c: train_df[c], 'target': train_df['y']})
    target_mean = data_tmp.groupby(c)['target'].mean()

    # Replace the test-data categories with those means
    test_df[output_column_name] = test_df[c].map(target_mean)

    # Array that will hold the encoded training values
    tmp = np.repeat(np.nan, train_df.shape[0])

    for i, (train_index, test_index) in enumerate(kf.split(train_df)):  # one pass per fold
        # Mean of the target per category on this fold's training part
        target_mean = data_tmp.iloc[train_index].groupby(c)['target'].mean()
        # Store the encoded values for this fold's validation part
        tmp[test_index] = train_df[c].iloc[test_index].map(target_mean)

    # Replace the original variable with the encoded values
    train_df[output_column_name] = tmp
    #========================================================#

# Drop 'period', 'game-ver', 'mode', and 'lobby': they are unlikely to strongly affect the outcome
train = train.drop(labels=['period', 'game-ver', 'mode', 'lobby'], axis=1)
test = test.drop(labels=['period', 'game-ver', 'mode', 'lobby'], axis=1)

# Split the data by lobby-mode (regular / gachi);
# .copy() avoids chained-assignment issues when columns are added later
train_regular = train[train['lobby-mode'] == "regular"].copy()
train_gachi   = train[train['lobby-mode'] == "gachi"].copy()

test_regular = test[test['lobby-mode'] == "regular"].copy()
test_gachi   = test[test['lobby-mode'] == "gachi"].copy()

# List the object-dtype columns
object_col_list = train.select_dtypes(include=object).columns

# Target-encode every object column
for col in object_col_list:
    change_to_target2(train_regular, test_regular, col, "enc_"+col)
    change_to_target2(train_gachi,   test_gachi,   col, "enc_"+col)

# Drop the pre-encoding columns
train_regular = train_regular.drop(object_col_list,axis=1)
train_gachi   = train_gachi.drop(object_col_list,axis=1)

test_regular  = test_regular.drop(object_col_list,axis=1)
test_gachi    = test_gachi.drop(object_col_list,axis=1)

# Keep the id columns so the predictions can be merged later
test_regular_id = test_regular[['id']].copy()  # nested list so the result is a DataFrame
test_gachi_id   = test_gachi[['id']].copy()

# Drop the 'id' column
train_regular = train_regular.drop('id',axis=1)
train_gachi   = train_gachi.drop('id',axis=1)

test_regular  = test_regular.drop('id',axis=1)
test_gachi    = test_gachi.drop('id',axis=1)

Checking the data

# Check the training data for missing values (prints the number of NaN cells)
print(train_regular.isnull().values.sum())
print(train_gachi.isnull().values.sum())

# Check the test data for missing values (prints the number of NaN cells)
print(test_regular.isnull().values.sum())
print(test_gachi.isnull().values.sum())
2
0
0
2

(train_regular and test_gachi each contain two NaN cells: this is the problem described in Section 4 (1).)

Preparing for training

# Split the training data into features and target
target_regular = train_regular['y']
train_x_regular = train_regular.drop('y',axis=1)

target_gachi = train_gachi['y']
train_x_gachi = train_gachi.drop('y',axis=1)

# LightGBM parameters
params = {
    # binary classification
    'objective': 'binary',
    # loss function: binary logloss
    #'metric': 'auc',
    'metric': 'binary_logloss',
    # maximum number of iterations
    'num_iterations' : 1000,
    # early-stopping patience
    'early_stopping_rounds' : 100,
}

Training and prediction

def predict(train_x, target, test):

    # k-fold cross-validation for training & prediction (K=10)
    FOLD_NUM = 10
    kf = KFold(n_splits=FOLD_NUM, shuffle=True,
               random_state=42)

    # per-fold validation scores
    scores = []

    # accumulated test predictions
    pred_cv = np.zeros(len(test.index))

    # number of boosting rounds for LightGBM
    num_round = 10000

    for i, (tdx, vdx) in enumerate(kf.split(train_x, target)):
        print(f'Fold : {i}')
        # split into training and validation parts
        X_train, X_valid, y_train, y_valid = train_x.iloc[tdx], train_x.iloc[vdx], target.values[tdx], target.values[vdx]
        lgb_train = lgb.Dataset(X_train, y_train)
        lgb_valid = lgb.Dataset(X_valid, y_valid)

        # train
        model = lgb.train(params, lgb_train, num_boost_round=num_round,
                          valid_names=["train", "valid"], valid_sets=[lgb_train, lgb_valid],
                          verbose_eval=100)

        # predict on the validation part and round to win/lose (0 or 1)
        va_pred = np.round(model.predict(X_valid, num_iteration=model.best_iteration))

        # compute the accuracy score
        score_ = accuracy_score(y_valid, va_pred)

        # store this fold's validation score
        scores.append(score_)

        # predict on the test data
        submission = model.predict(test, num_iteration=model.best_iteration)

        # accumulate the test predictions divided by the fold count
        # (equivalent to averaging the per-fold predictions)
        pred_cv += submission/FOLD_NUM

    # round the final test predictions to win/lose (0 or 1)
    pred_cv = np.round(pred_cv)

    # report the mean accuracy as the CV score
    print('')
    print('################################')
    print('CV_score:' + str(np.mean(scores)))
    print()

    return scores, pred_cv


# Collect the scores for each lobby-mode
total_scores = []

# Run the prediction for each lobby-mode
results = predict(train_x_regular, target_regular, test_regular)
total_scores.append(results[0])
test_regular_id['y'] = results[1]

results = predict(train_x_gachi,   target_gachi,   test_gachi)
total_scores.append(results[0])
test_gachi_id['y'] = results[1]

# Print the overall accuracy score
print('---------------------------------')
print('total CV_score:'+ str(np.mean(total_scores)))
Fold : 0
[I 2020-09-27 04:38:19,970] A new study created in memory with name: no-name-4ce5d708-0a14-4cfc-bc23-b74b1ae8bc8d
feature_fraction, val_score: inf:   0%|          | 0/7 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000784 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.574988	valid's binary_logloss: 0.650038
[200]	train's binary_logloss: 0.51996	valid's binary_logloss: 0.651275
Early stopping, best iteration is:
[137]	train's binary_logloss: 0.552891	valid's binary_logloss: 0.649537
feature_fraction, val_score: 0.649537:  14%|#4        | 1/7 [00:00<00:04,  1.41it/s][I 2020-09-27 04:38:20,696] Trial 0 finished with value: 0.6495366523489564 and parameters: {'feature_fraction': 0.7}. Best is trial 0 with value: 0.6495366523489564.
feature_fraction, val_score: 0.649537:  14%|#4        | 1/7 [00:00<00:04,  1.41it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000262 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.579394	valid's binary_logloss: 0.650899
Early stopping, best iteration is:
[92]	train's binary_logloss: 0.584248	valid's binary_logloss: 0.650087
feature_fraction, val_score: 0.649537:  29%|##8       | 2/7 [00:01<00:03,  1.58it/s][I 2020-09-27 04:38:21,152] Trial 1 finished with value: 0.6500867396462259 and parameters: {'feature_fraction': 0.5}. Best is trial 0 with value: 0.6495366523489564.
feature_fraction, val_score: 0.649537:  29%|##8       | 2/7 [00:01<00:03,  1.58it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000583 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.583335	valid's binary_logloss: 0.652707
Early stopping, best iteration is:
[88]	train's binary_logloss: 0.590815	valid's binary_logloss: 0.652179
feature_fraction, val_score: 0.649537:  43%|####2     | 3/7 [00:01<00:02,  1.77it/s][I 2020-09-27 04:38:21,556] Trial 2 finished with value: 0.652179314698687 and parameters: {'feature_fraction': 0.4}. Best is trial 0 with value: 0.6495366523489564.
feature_fraction, val_score: 0.649537:  43%|####2     | 3/7 [00:01<00:02,  1.77it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000873 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.573386	valid's binary_logloss: 0.65332
Early stopping, best iteration is:
[72]	train's binary_logloss: 0.592163	valid's binary_logloss: 0.652672
feature_fraction, val_score: 0.649537:  57%|#####7    | 4/7 [00:02<00:01,  1.90it/s][I 2020-09-27 04:38:21,994] Trial 3 finished with value: 0.6526718623826693 and parameters: {'feature_fraction': 0.8}. Best is trial 0 with value: 0.6495366523489564.
feature_fraction, val_score: 0.649537:  57%|#####7    | 4/7 [00:02<00:01,  1.90it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000825 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.573309	valid's binary_logloss: 0.653575
Early stopping, best iteration is:
[67]	train's binary_logloss: 0.59526	valid's binary_logloss: 0.652553
feature_fraction, val_score: 0.649537:  71%|#######1  | 5/7 [00:02<00:01,  1.58it/s][I 2020-09-27 04:38:22,880] Trial 4 finished with value: 0.6525525222524156 and parameters: {'feature_fraction': 0.8999999999999999}. Best is trial 0 with value: 0.6495366523489564.
feature_fraction, val_score: 0.649537:  71%|#######1  | 5/7 [00:02<00:01,  1.58it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.006985 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.576836	valid's binary_logloss: 0.653247
Early stopping, best iteration is:
[70]	train's binary_logloss: 0.596318	valid's binary_logloss: 0.651989
feature_fraction, val_score: 0.649537:  86%|########5 | 6/7 [00:03<00:00,  1.79it/s][I 2020-09-27 04:38:23,260] Trial 5 finished with value: 0.6519889736642206 and parameters: {'feature_fraction': 0.6}. Best is trial 0 with value: 0.6495366523489564.
feature_fraction, val_score: 0.649537:  86%|########5 | 6/7 [00:03<00:00,  1.79it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000890 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.571148	valid's binary_logloss: 0.654809
Early stopping, best iteration is:
[46]	train's binary_logloss: 0.611505	valid's binary_logloss: 0.653731
feature_fraction, val_score: 0.649537: 100%|##########| 7/7 [00:03<00:00,  1.96it/s][I 2020-09-27 04:38:23,655] Trial 6 finished with value: 0.6537305766184812 and parameters: {'feature_fraction': 1.0}. Best is trial 0 with value: 0.6495366523489564.
feature_fraction, val_score: 0.649537: 100%|##########| 7/7 [00:03<00:00,  1.91it/s]
num_leaves, val_score: 0.649537:   0%|          | 0/20 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000620 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.57822	valid's binary_logloss: 0.649469
Early stopping, best iteration is:
[85]	train's binary_logloss: 0.587383	valid's binary_logloss: 0.649227
num_leaves, val_score: 0.649227:   5%|5         | 1/20 [00:00<00:09,  2.10it/s][I 2020-09-27 04:38:24,142] Trial 7 finished with value: 0.6492270930479551 and parameters: {'num_leaves': 30}. Best is trial 7 with value: 0.6492270930479551.
num_leaves, val_score: 0.649227:   5%|5         | 1/20 [00:00<00:09,  2.10it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000836 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.514021	valid's binary_logloss: 0.655753
Early stopping, best iteration is:
[46]	train's binary_logloss: 0.579148	valid's binary_logloss: 0.653741
num_leaves, val_score: 0.649227:  10%|#         | 2/20 [00:01<00:08,  2.04it/s][I 2020-09-27 04:38:24,666] Trial 8 finished with value: 0.6537410492395737 and parameters: {'num_leaves': 60}. Best is trial 7 with value: 0.6492270930479551.
num_leaves, val_score: 0.649227:  10%|#         | 2/20 [00:01<00:08,  2.04it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.003688 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.635569	valid's binary_logloss: 0.650064
[200]	train's binary_logloss: 0.617663	valid's binary_logloss: 0.647802
[300]	train's binary_logloss: 0.603448	valid's binary_logloss: 0.648074
Early stopping, best iteration is:
[204]	train's binary_logloss: 0.617119	valid's binary_logloss: 0.647726
num_leaves, val_score: 0.647726:  15%|#5        | 3/20 [00:01<00:08,  2.08it/s][I 2020-09-27 04:38:25,128] Trial 9 finished with value: 0.6477259914776615 and parameters: {'num_leaves': 8}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  15%|#5        | 3/20 [00:01<00:08,  2.08it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.003613 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.295966	valid's binary_logloss: 0.669876
Early stopping, best iteration is:
[28]	train's binary_logloss: 0.505319	valid's binary_logloss: 0.658492
num_leaves, val_score: 0.647726:  20%|##        | 4/20 [00:02<00:11,  1.33it/s][I 2020-09-27 04:38:26,505] Trial 10 finished with value: 0.6584916550281572 and parameters: {'num_leaves': 229}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  20%|##        | 4/20 [00:02<00:11,  1.33it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.003890 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.614941	valid's binary_logloss: 0.650113
[200]	train's binary_logloss: 0.584546	valid's binary_logloss: 0.649317
[300]	train's binary_logloss: 0.558777	valid's binary_logloss: 0.650116
Early stopping, best iteration is:
[244]	train's binary_logloss: 0.57318	valid's binary_logloss: 0.649088
num_leaves, val_score: 0.647726:  25%|##5       | 5/20 [00:03<00:11,  1.29it/s][I 2020-09-27 04:38:27,335] Trial 11 finished with value: 0.6490878119710577 and parameters: {'num_leaves': 15}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  25%|##5       | 5/20 [00:03<00:11,  1.29it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000790 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.646526	valid's binary_logloss: 0.653926
[200]	train's binary_logloss: 0.634088	valid's binary_logloss: 0.649382
[300]	train's binary_logloss: 0.62524	valid's binary_logloss: 0.648291
[400]	train's binary_logloss: 0.617685	valid's binary_logloss: 0.648421
Early stopping, best iteration is:
[324]	train's binary_logloss: 0.623449	valid's binary_logloss: 0.648061
num_leaves, val_score: 0.647726:  30%|###       | 6/20 [00:04<00:10,  1.34it/s][I 2020-09-27 04:38:28,014] Trial 12 finished with value: 0.6480614052342452 and parameters: {'num_leaves': 5}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  30%|###       | 6/20 [00:04<00:10,  1.34it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.008792 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.414675	valid's binary_logloss: 0.664247
Early stopping, best iteration is:
[26]	train's binary_logloss: 0.571385	valid's binary_logloss: 0.659552
num_leaves, val_score: 0.647726:  35%|###5      | 7/20 [00:05<00:09,  1.37it/s][I 2020-09-27 04:38:28,706] Trial 13 finished with value: 0.6595522413168151 and parameters: {'num_leaves': 122}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  35%|###5      | 7/20 [00:05<00:09,  1.37it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000826 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.470281	valid's binary_logloss: 0.657387
Early stopping, best iteration is:
[37]	train's binary_logloss: 0.570263	valid's binary_logloss: 0.654921
num_leaves, val_score: 0.647726:  40%|####      | 8/20 [00:05<00:08,  1.36it/s][I 2020-09-27 04:38:29,454] Trial 14 finished with value: 0.654921475007968 and parameters: {'num_leaves': 85}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  40%|####      | 8/20 [00:05<00:08,  1.36it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004565 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.635569	valid's binary_logloss: 0.650064
[200]	train's binary_logloss: 0.617663	valid's binary_logloss: 0.647802
[300]	train's binary_logloss: 0.603448	valid's binary_logloss: 0.648074
Early stopping, best iteration is:
[204]	train's binary_logloss: 0.617119	valid's binary_logloss: 0.647726
num_leaves, val_score: 0.647726:  45%|####5     | 9/20 [00:06<00:08,  1.24it/s][I 2020-09-27 04:38:30,428] Trial 15 finished with value: 0.6477259914776615 and parameters: {'num_leaves': 8}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  45%|####5     | 9/20 [00:06<00:08,  1.24it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000847 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.291012	valid's binary_logloss: 0.673269
Early stopping, best iteration is:
[36]	train's binary_logloss: 0.469537	valid's binary_logloss: 0.661474
num_leaves, val_score: 0.647726:  50%|#####     | 10/20 [00:07<00:09,  1.07it/s][I 2020-09-27 04:38:31,655] Trial 16 finished with value: 0.661474106398792 and parameters: {'num_leaves': 233}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  50%|#####     | 10/20 [00:07<00:09,  1.07it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.007106 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.35312	valid's binary_logloss: 0.66936
Early stopping, best iteration is:
[30]	train's binary_logloss: 0.529202	valid's binary_logloss: 0.659327
num_leaves, val_score: 0.647726:  55%|#####5    | 11/20 [00:08<00:08,  1.12it/s][I 2020-09-27 04:38:32,457] Trial 17 finished with value: 0.6593265762527931 and parameters: {'num_leaves': 171}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  55%|#####5    | 11/20 [00:08<00:08,  1.12it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.003451 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.516771	valid's binary_logloss: 0.654055
Early stopping, best iteration is:
[73]	train's binary_logloss: 0.545672	valid's binary_logloss: 0.65329
num_leaves, val_score: 0.647726:  60%|######    | 12/20 [00:09<00:06,  1.25it/s][I 2020-09-27 04:38:33,041] Trial 18 finished with value: 0.6532896945252986 and parameters: {'num_leaves': 59}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  60%|######    | 12/20 [00:09<00:06,  1.25it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000421 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.642488	valid's binary_logloss: 0.652362
[200]	train's binary_logloss: 0.628687	valid's binary_logloss: 0.649023
[300]	train's binary_logloss: 0.617877	valid's binary_logloss: 0.649151
Early stopping, best iteration is:
[279]	train's binary_logloss: 0.620188	valid's binary_logloss: 0.648624
num_leaves, val_score: 0.647726:  65%|######5   | 13/20 [00:10<00:05,  1.31it/s][I 2020-09-27 04:38:33,727] Trial 19 finished with value: 0.6486244754568631 and parameters: {'num_leaves': 6}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  65%|######5   | 13/20 [00:10<00:05,  1.31it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.009248 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.423908	valid's binary_logloss: 0.659234
Early stopping, best iteration is:
[44]	train's binary_logloss: 0.527404	valid's binary_logloss: 0.655427
num_leaves, val_score: 0.647726:  70%|#######   | 14/20 [00:11<00:05,  1.07it/s][I 2020-09-27 04:38:35,066] Trial 20 finished with value: 0.6554267059223545 and parameters: {'num_leaves': 116}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  70%|#######   | 14/20 [00:11<00:05,  1.07it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004318 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.669306	valid's binary_logloss: 0.670416
[200]	train's binary_logloss: 0.659732	valid's binary_logloss: 0.660321
[300]	train's binary_logloss: 0.654518	valid's binary_logloss: 0.655319
[400]	train's binary_logloss: 0.651377	valid's binary_logloss: 0.652506
[500]	train's binary_logloss: 0.649375	valid's binary_logloss: 0.650671
[600]	train's binary_logloss: 0.648029	valid's binary_logloss: 0.649863
[700]	train's binary_logloss: 0.647077	valid's binary_logloss: 0.649248
[800]	train's binary_logloss: 0.64636	valid's binary_logloss: 0.649015
[900]	train's binary_logloss: 0.645776	valid's binary_logloss: 0.649025
[1000]	train's binary_logloss: 0.645301	valid's binary_logloss: 0.649111
Did not meet early stopping. Best iteration is:
[1000]	train's binary_logloss: 0.645301	valid's binary_logloss: 0.649111
num_leaves, val_score: 0.647726:  75%|#######5  | 15/20 [00:12<00:05,  1.02s/it][I 2020-09-27 04:38:36,284] Trial 21 finished with value: 0.649110563220017 and parameters: {'num_leaves': 2}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  75%|#######5  | 15/20 [00:12<00:05,  1.02s/it][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.011497 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.560366	valid's binary_logloss: 0.653399
[200]	train's binary_logloss: 0.496525	valid's binary_logloss: 0.653927
Early stopping, best iteration is:
[123]	train's binary_logloss: 0.544369	valid's binary_logloss: 0.65141
num_leaves, val_score: 0.647726:  80%|########  | 16/20 [00:13<00:03,  1.10it/s][I 2020-09-27 04:38:36,924] Trial 22 finished with value: 0.6514101116698465 and parameters: {'num_leaves': 38}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  80%|########  | 16/20 [00:13<00:03,  1.10it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.005527 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.669306	valid's binary_logloss: 0.670416
[200]	train's binary_logloss: 0.659732	valid's binary_logloss: 0.660321
[300]	train's binary_logloss: 0.654518	valid's binary_logloss: 0.655319
[400]	train's binary_logloss: 0.651377	valid's binary_logloss: 0.652506
[500]	train's binary_logloss: 0.649375	valid's binary_logloss: 0.650671
[600]	train's binary_logloss: 0.648029	valid's binary_logloss: 0.649863
[700]	train's binary_logloss: 0.647077	valid's binary_logloss: 0.649248
[800]	train's binary_logloss: 0.64636	valid's binary_logloss: 0.649015
[900]	train's binary_logloss: 0.645776	valid's binary_logloss: 0.649025
[1000]	train's binary_logloss: 0.645301	valid's binary_logloss: 0.649111
Did not meet early stopping. Best iteration is:
[1000]	train's binary_logloss: 0.645301	valid's binary_logloss: 0.649111
num_leaves, val_score: 0.647726:  85%|########5 | 17/20 [00:14<00:03,  1.12s/it][I 2020-09-27 04:38:38,549] Trial 23 finished with value: 0.6491105632200171 and parameters: {'num_leaves': 2}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  85%|########5 | 17/20 [00:14<00:03,  1.12s/it][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.005133 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.53935	valid's binary_logloss: 0.652897
Early stopping, best iteration is:
[92]	train's binary_logloss: 0.546487	valid's binary_logloss: 0.652502
num_leaves, val_score: 0.647726:  90%|######### | 18/20 [00:15<00:01,  1.04it/s][I 2020-09-27 04:38:39,143] Trial 24 finished with value: 0.6525017174749979 and parameters: {'num_leaves': 48}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  90%|######### | 18/20 [00:15<00:01,  1.04it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.003834 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.669306	valid's binary_logloss: 0.670416
[200]	train's binary_logloss: 0.659732	valid's binary_logloss: 0.660321
[300]	train's binary_logloss: 0.654518	valid's binary_logloss: 0.655319
[400]	train's binary_logloss: 0.651377	valid's binary_logloss: 0.652506
[500]	train's binary_logloss: 0.649375	valid's binary_logloss: 0.650671
[600]	train's binary_logloss: 0.648029	valid's binary_logloss: 0.649863
[700]	train's binary_logloss: 0.647077	valid's binary_logloss: 0.649248
[800]	train's binary_logloss: 0.64636	valid's binary_logloss: 0.649015
[900]	train's binary_logloss: 0.645776	valid's binary_logloss: 0.649025
[1000]	train's binary_logloss: 0.645301	valid's binary_logloss: 0.649111
Did not meet early stopping. Best iteration is:
[1000]	train's binary_logloss: 0.645301	valid's binary_logloss: 0.649111
num_leaves, val_score: 0.647726:  95%|#########5| 19/20 [00:16<00:01,  1.02s/it][I 2020-09-27 04:38:40,291] Trial 25 finished with value: 0.649110563220017 and parameters: {'num_leaves': 2}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726:  95%|#########5| 19/20 [00:16<00:01,  1.02s/it][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000543 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.468108	valid's binary_logloss: 0.654681
Early stopping, best iteration is:
[69]	train's binary_logloss: 0.512521	valid's binary_logloss: 0.652052
num_leaves, val_score: 0.647726: 100%|##########| 20/20 [00:17<00:00,  1.05it/s][I 2020-09-27 04:38:41,075] Trial 26 finished with value: 0.652051918071773 and parameters: {'num_leaves': 86}. Best is trial 9 with value: 0.6477259914776615.
num_leaves, val_score: 0.647726: 100%|##########| 20/20 [00:17<00:00,  1.15it/s]
bagging, val_score: 0.647726:   0%|          | 0/10 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.003535 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.63651	valid's binary_logloss: 0.650977
[200]	train's binary_logloss: 0.620185	valid's binary_logloss: 0.652728
Early stopping, best iteration is:
[150]	train's binary_logloss: 0.627724	valid's binary_logloss: 0.650131
bagging, val_score: 0.647726:  10%|#         | 1/10 [00:00<00:04,  1.99it/s][I 2020-09-27 04:38:41,600] Trial 27 finished with value: 0.650131071613353 and parameters: {'bagging_fraction': 0.4586199697400575, 'bagging_freq': 3}. Best is trial 27 with value: 0.650131071613353.
bagging, val_score: 0.647726:  10%|#         | 1/10 [00:00<00:04,  1.99it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000853 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.635382	valid's binary_logloss: 0.651145
[200]	train's binary_logloss: 0.61788	valid's binary_logloss: 0.647841
[300]	train's binary_logloss: 0.603222	valid's binary_logloss: 0.647911
[400]	train's binary_logloss: 0.589945	valid's binary_logloss: 0.647612
Early stopping, best iteration is:
[340]	train's binary_logloss: 0.597917	valid's binary_logloss: 0.647006
bagging, val_score: 0.647006:  20%|##        | 2/10 [00:01<00:05,  1.33it/s][I 2020-09-27 04:38:42,919] Trial 28 finished with value: 0.6470063208679462 and parameters: {'bagging_fraction': 0.9958662267298061, 'bagging_freq': 7}. Best is trial 28 with value: 0.6470063208679462.
bagging, val_score: 0.647006:  20%|##        | 2/10 [00:01<00:05,  1.33it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.003784 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.635259	valid's binary_logloss: 0.651142
[200]	train's binary_logloss: 0.617565	valid's binary_logloss: 0.64962
Early stopping, best iteration is:
[195]	train's binary_logloss: 0.618361	valid's binary_logloss: 0.649343
bagging, val_score: 0.647006:  30%|###       | 3/10 [00:02<00:04,  1.45it/s][I 2020-09-27 04:38:43,462] Trial 29 finished with value: 0.6493431744945917 and parameters: {'bagging_fraction': 0.9799916779746923, 'bagging_freq': 7}. Best is trial 28 with value: 0.6470063208679462.
bagging, val_score: 0.647006:  30%|###       | 3/10 [00:02<00:04,  1.45it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000779 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.635241	valid's binary_logloss: 0.651269
[200]	train's binary_logloss: 0.618036	valid's binary_logloss: 0.649672
[300]	train's binary_logloss: 0.603932	valid's binary_logloss: 0.649408
[400]	train's binary_logloss: 0.590656	valid's binary_logloss: 0.648685
Early stopping, best iteration is:
[368]	train's binary_logloss: 0.594891	valid's binary_logloss: 0.648347
bagging, val_score: 0.647006:  40%|####      | 4/10 [00:03<00:04,  1.37it/s][I 2020-09-27 04:38:44,294] Trial 30 finished with value: 0.6483471798474496 and parameters: {'bagging_fraction': 0.9986614103818544, 'bagging_freq': 7}. Best is trial 28 with value: 0.6470063208679462.
bagging, val_score: 0.647006:  40%|####      | 4/10 [00:03<00:04,  1.37it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.007385 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.63573	valid's binary_logloss: 0.649902
[200]	train's binary_logloss: 0.618646	valid's binary_logloss: 0.650492
Early stopping, best iteration is:
[170]	train's binary_logloss: 0.623387	valid's binary_logloss: 0.648591
bagging, val_score: 0.647006:  50%|#####     | 5/10 [00:03<00:03,  1.52it/s][I 2020-09-27 04:38:44,781] Trial 31 finished with value: 0.6485905878341305 and parameters: {'bagging_fraction': 0.7328660882461872, 'bagging_freq': 5}. Best is trial 28 with value: 0.6470063208679462.
bagging, val_score: 0.647006:  50%|#####     | 5/10 [00:03<00:03,  1.52it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000825 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.635272	valid's binary_logloss: 0.650422
[200]	train's binary_logloss: 0.61747	valid's binary_logloss: 0.648941
[300]	train's binary_logloss: 0.602462	valid's binary_logloss: 0.650582
Early stopping, best iteration is:
[227]	train's binary_logloss: 0.613139	valid's binary_logloss: 0.648451
bagging, val_score: 0.647006:  60%|######    | 6/10 [00:04<00:02,  1.49it/s][I 2020-09-27 04:38:45,485] Trial 32 finished with value: 0.6484510872456424 and parameters: {'bagging_fraction': 0.7900181913598195, 'bagging_freq': 1}. Best is trial 28 with value: 0.6470063208679462.
bagging, val_score: 0.647006:  60%|######    | 6/10 [00:04<00:02,  1.49it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003362 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.636504	valid's binary_logloss: 0.650927
[200]	train's binary_logloss: 0.620547	valid's binary_logloss: 0.651909
Early stopping, best iteration is:
[115]	train's binary_logloss: 0.633633	valid's binary_logloss: 0.6507
bagging, val_score: 0.647006:  70%|#######   | 7/10 [00:05<00:02,  1.40it/s][I 2020-09-27 04:38:46,298] Trial 33 finished with value: 0.6506995210705391 and parameters: {'bagging_fraction': 0.47267376312879716, 'bagging_freq': 5}. Best is trial 28 with value: 0.6470063208679462.
bagging, val_score: 0.647006:  70%|#######   | 7/10 [00:05<00:02,  1.40it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.005528 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.635448	valid's binary_logloss: 0.651834
[200]	train's binary_logloss: 0.617764	valid's binary_logloss: 0.650603
[300]	train's binary_logloss: 0.603475	valid's binary_logloss: 0.652326
Early stopping, best iteration is:
[226]	train's binary_logloss: 0.614069	valid's binary_logloss: 0.649886
bagging, val_score: 0.647006:  80%|########  | 8/10 [00:05<00:01,  1.47it/s][I 2020-09-27 04:38:46,894] Trial 34 finished with value: 0.649885537943031 and parameters: {'bagging_fraction': 0.876875549392964, 'bagging_freq': 7}. Best is trial 28 with value: 0.6470063208679462.
bagging, val_score: 0.647006:  80%|########  | 8/10 [00:05<00:01,  1.47it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000544 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.635851	valid's binary_logloss: 0.649962
[200]	train's binary_logloss: 0.6186	valid's binary_logloss: 0.647048
[300]	train's binary_logloss: 0.603846	valid's binary_logloss: 0.649943
Early stopping, best iteration is:
[207]	train's binary_logloss: 0.617528	valid's binary_logloss: 0.646928
bagging, val_score: 0.646928:  90%|######### | 9/10 [00:06<00:00,  1.56it/s][I 2020-09-27 04:38:47,444] Trial 35 finished with value: 0.6469279610503994 and parameters: {'bagging_fraction': 0.6178680364111143, 'bagging_freq': 3}. Best is trial 35 with value: 0.6469279610503994.
bagging, val_score: 0.646928:  90%|######### | 9/10 [00:06<00:00,  1.56it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000490 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.507173 -> initscore=0.028695
[LightGBM] [Info] Start training from score 0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.635909	valid's binary_logloss: 0.648293
[200]	train's binary_logloss: 0.618922	valid's binary_logloss: 0.649973
Early stopping, best iteration is:
[149]	train's binary_logloss: 0.627266	valid's binary_logloss: 0.647531
bagging, val_score: 0.646928: 100%|##########| 10/10 [00:06<00:00,  1.68it/s][I 2020-09-27 04:38:47,929] Trial 36 finished with value: 0.6475307026936266 and parameters: {'bagging_fraction': 0.5956628615183736, 'bagging_freq': 2}. Best is trial 35 with value: 0.6469279610503994.
bagging, val_score: 0.646928: 100%|##########| 10/10 [00:06<00:00,  1.46it/s]
feature_fraction_stage2, val_score: 0.646928:   0%|          | 0/6 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 13186, number of negative: 12813
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000428 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
feature_fraction_stage2 (Trials 37-42; each trial trains on 25,999 rows / 26 features, with early stopping after 100 rounds without improvement):

- Trial 37: feature_fraction=0.748 -> valid binary_logloss 0.649321
- Trial 38: feature_fraction=0.652 -> valid binary_logloss 0.650017
- Trial 39: feature_fraction=0.684 -> valid binary_logloss 0.646928 (best)
- Trial 40: feature_fraction=0.780 -> valid binary_logloss 0.647565
- Trial 41: feature_fraction=0.620 -> valid binary_logloss 0.648991
- Trial 42: feature_fraction=0.716 -> valid binary_logloss 0.649321

Best is trial 39 with value 0.6469279610503994; stage val_score: 0.646928.
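The staged progress bars in this log (feature_fraction, feature_fraction_stage2, regularization_factors, min_data_in_leaf) match the output of Optuna's stepwise LightGBM tuner, which fixes one group of hyperparameters at a time. A minimal sketch of the kind of call that produces such a log, assuming `optuna.integration.lightgbm` and hypothetical `X_tr`/`y_tr`/`X_va`/`y_va` splits (an illustration, not the baseline's exact code):

```python
# Sketch (assumption): staged hyperparameter search with Optuna's
# drop-in replacement for lightgbm.train.
import optuna.integration.lightgbm as olgb

params = {'objective': 'binary', 'metric': 'binary_logloss'}
dtrain = olgb.Dataset(X_tr, y_tr)  # X_tr/y_tr etc. are hypothetical splits
dvalid = olgb.Dataset(X_va, y_va)

booster = olgb.train(
    params, dtrain,
    valid_sets=[dtrain, dvalid], valid_names=['train', 'valid'],
    early_stopping_rounds=100,  # matches "don't improve for 100 rounds"
    verbose_eval=100,           # matches the [100]/[200]/[300] eval lines
)
print(booster.params)  # best parameters found across all stages
```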
regularization_factors (Trials 43-62, 20 trials on the same data):

- Trial 43: lambda_l1=5.3e-01, lambda_l2=2.3e-05 -> valid binary_logloss 0.647758
- Trial 44: lambda_l1=1.6e-08, lambda_l2=6.41 -> 0.645506
- Trial 45: lambda_l1=1.8e-08, lambda_l2=8.81 -> 0.646489
- Trial 46: lambda_l1=2.3e-08, lambda_l2=4.59 -> 0.646435
- Trial 47: lambda_l1=1.3e-08, lambda_l2=9.85 -> 0.644762
- Trial 48: lambda_l1=1.3e-08, lambda_l2=9.99 -> 0.645207
- Trial 49: lambda_l1=1.5e-08, lambda_l2=8.71 -> 0.646619
- Trial 50: lambda_l1=1.1e-08, lambda_l2=7.31 -> 0.646564
- Trial 51: lambda_l1=1.7e-08, lambda_l2=9.90 -> 0.644789
- Trial 52: lambda_l1=2.2e-08, lambda_l2=7.40 -> 0.646398
- Trial 53: lambda_l1=2.0e-08, lambda_l2=9.40 -> 0.643798 (best)
- Trial 54: lambda_l1=1.6e-08, lambda_l2=9.63 -> 0.644531
- Trial 55: lambda_l1=1.0e-08, lambda_l2=8.01 -> 0.645998
- Trial 56: lambda_l1=1.3e-06, lambda_l2=1.9e-02 -> 0.646927
- Trial 57: lambda_l1=5.8e-07, lambda_l2=1.5e-01 -> 0.647417
- Trial 58: lambda_l1=5.2e-07, lambda_l2=3.1e-01 -> 0.648499
- Trial 59: lambda_l1=1.2e-07, lambda_l2=1.00 -> 0.648367
- Trial 60: lambda_l1=1.2e-08, lambda_l2=1.5e-07 -> 0.646928
- Trial 61: lambda_l1=1.0e-08, lambda_l2=5.61 -> 0.646130
- Trial 62: lambda_l1=1.5e-08, lambda_l2=10.0 -> 0.647044

Best is trial 53 with value 0.6437977363337001; val_score improves from 0.646928 to 0.643798.
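lambda_l1 and lambda_l2 are LightGBM's L1 and L2 penalties on leaf weights. In all of the stronger trials here lambda_l1 is effectively zero, so the gain comes from heavy L2 shrinkage. A hedged sketch of what fixing the stage winners looks like in a plain `lgb.train` call, with rounded values and reusing the hypothetical `dtrain`/`dvalid` from the sketch above:

```python
import lightgbm as lgb

# Hypothetical manual re-run with the stage winners so far:
# feature_fraction from trial 39, regularization from trial 53.
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'feature_fraction': 0.684,
    'lambda_l1': 2.0e-08,  # effectively unregularized in L1
    'lambda_l2': 9.4,      # strong L2 shrinkage on leaf weights
}
model = lgb.train(params, dtrain, valid_sets=[dvalid],
                  early_stopping_rounds=100, verbose_eval=100)
```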
min_data_in_leaf (Trials 63-67):

- Trial 63: min_child_samples=50 -> valid binary_logloss 0.645943
- Trial 64: min_child_samples=10 -> 0.645162
- Trial 65: min_child_samples=25 -> 0.643974 (best of this stage)
- Trial 66: min_child_samples=5 -> 0.645499
- Trial 67: min_child_samples=100 -> 0.645531

None of these beats the running best, so val_score stays at 0.643798 (trial 53).
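min_data_in_leaf is the last stage of the search, after which the fold's best score and the combined parameter set can be read off the tuner object. A minimal sketch, assuming the `LightGBMTuner` class from `optuna.integration.lightgbm` and the hypothetical `dtrain`/`dvalid` from above:

```python
from optuna.integration.lightgbm import LightGBMTuner

# Sketch (assumption): the object form of the same staged search.
tuner = LightGBMTuner(
    {'objective': 'binary', 'metric': 'binary_logloss'},
    dtrain, valid_sets=[dtrain, dvalid], valid_names=['train', 'valid'],
    early_stopping_rounds=100, verbose_eval=100,
)
tuner.run()
print(tuner.best_score)   # e.g. 0.6437977... for the fold above
print(tuner.best_params)  # winning feature_fraction, lambda_l2, min_child_samples, ...
```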
Fold : 1
A new Optuna study is created in memory, and the stepwise search restarts from scratch on this fold.
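The "Fold : N" lines suggest the tuner is re-run once per cross-validation fold, each run with its own fresh study. A hedged sketch of such a loop, where `X` and `y` are hypothetical numeric feature and target arrays, and the fold count and seed are assumptions (the seed matches the one used in change_to_target2):

```python
import optuna.integration.lightgbm as olgb
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=71)
for fold, (tr_idx, va_idx) in enumerate(kf.split(X)):
    print(f'Fold : {fold}')
    dtrain = olgb.Dataset(X[tr_idx], y[tr_idx])
    dvalid = olgb.Dataset(X[va_idx], y[va_idx])
    booster = olgb.train(
        {'objective': 'binary', 'metric': 'binary_logloss'},
        dtrain, valid_sets=[dtrain, dvalid], valid_names=['train', 'valid'],
        early_stopping_rounds=100, verbose_eval=100,
    )  # each call creates a fresh in-memory study, as in the log above
```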
feature_fraction (Trials 0-6 of the new study; this fold trains on 13,151 positive / 12,848 negative examples, 4,237 bins, initscore 0.023310):

- Trial 0: feature_fraction=0.7 -> valid binary_logloss 0.659163
- Trial 1: feature_fraction=0.5 -> 0.656730
- Trial 2: feature_fraction=0.6 -> 0.658574
- Trial 3: feature_fraction=0.4 -> 0.655992 (best)
- Trial 4: feature_fraction=0.9 -> 0.657544
- Trial 5: feature_fraction=1.0 -> 0.658926
- Trial 6: feature_fraction=0.8 -> 0.657826

Best is trial 3 with value 0.6559924908506093; stage val_score: 0.655992.
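Every trial in these logs repeats the same "Auto-choosing row-wise/col-wise multi-threading" warning. It is harmless, but it can be silenced by fixing the strategy up front; a small sketch using standard LightGBM parameters:

```python
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'force_row_wise': True,  # skip the row-wise/col-wise overhead test
    'verbosity': -1,         # suppress the [Info]/[Warning] lines as well
}
```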
num_leaves (Trials 7-20 of 20 shown; the log is truncated mid-stage):

- Trial 7: num_leaves=10 -> valid binary_logloss 0.656291
- Trial 8: num_leaves=238 -> 0.663462
- Trial 9: num_leaves=116 -> 0.660374
- Trial 10: num_leaves=244 -> 0.665738
- Trial 11: num_leaves=3 -> 0.653553 (best)
- Trial 12: num_leaves=14 -> 0.657287
- Trial 13: num_leaves=97 -> 0.664088
- Trial 14: num_leaves=62 -> 0.662191
- Trial 15: num_leaves=180 -> 0.666502
- Trial 16: num_leaves=183 -> 0.663430
- Trial 17: num_leaves=55 -> 0.657473
- Trial 18: num_leaves=162 -> 0.663828
- Trial 19: num_leaves=51 -> 0.659503
- Trial 20: num_leaves=8 -> 0.655485

Small trees dominate this stage: the large num_leaves trials overfit quickly (train logloss drops to around 0.31 while valid logloss worsens), and num_leaves=3 is the best so far at val_score 0.653553. The tuning log continues below.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.643142	valid's binary_logloss: 0.658677
[200]	train's binary_logloss: 0.62998	valid's binary_logloss: 0.656522
[300]	train's binary_logloss: 0.620354	valid's binary_logloss: 0.655925
Early stopping, best iteration is:
[297]	train's binary_logloss: 0.620584	valid's binary_logloss: 0.655776
num_leaves, val_score: 0.653553:  75%|#######5  | 15/20 [00:11<00:03,  1.58it/s][I 2020-09-27 04:39:26,995] Trial 21 finished with value: 0.655776407729715 and parameters: {'num_leaves': 6}. Best is trial 11 with value: 0.6535525819216924.
num_leaves, val_score: 0.653553:  75%|#######5  | 15/20 [00:11<00:03,  1.58it/s][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000245 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.639989	valid's binary_logloss: 0.657478
[200]	train's binary_logloss: 0.62545	valid's binary_logloss: 0.65644
Early stopping, best iteration is:
[140]	train's binary_logloss: 0.633249	valid's binary_logloss: 0.655769
num_leaves, val_score: 0.653553:  80%|########  | 16/20 [00:12<00:02,  1.77it/s][I 2020-09-27 04:39:27,399] Trial 22 finished with value: 0.6557689832038824 and parameters: {'num_leaves': 7}. Best is trial 11 with value: 0.6535525819216924.
num_leaves, val_score: 0.653553:  80%|########  | 16/20 [00:12<00:02,  1.77it/s][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004511 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.575815	valid's binary_logloss: 0.658024
[200]	train's binary_logloss: 0.522157	valid's binary_logloss: 0.657932
Early stopping, best iteration is:
[154]	train's binary_logloss: 0.544614	valid's binary_logloss: 0.656365
num_leaves, val_score: 0.653553:  85%|########5 | 17/20 [00:13<00:02,  1.40it/s][I 2020-09-27 04:39:28,462] Trial 23 finished with value: 0.6563652624863247 and parameters: {'num_leaves': 34}. Best is trial 11 with value: 0.6535525819216924.
num_leaves, val_score: 0.653553:  85%|########5 | 17/20 [00:13<00:02,  1.40it/s][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.009938 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.647155	valid's binary_logloss: 0.660323
[200]	train's binary_logloss: 0.635295	valid's binary_logloss: 0.655924
Early stopping, best iteration is:
[188]	train's binary_logloss: 0.63642	valid's binary_logloss: 0.655643
num_leaves, val_score: 0.653553:  90%|######### | 18/20 [00:13<00:01,  1.59it/s][I 2020-09-27 04:39:28,891] Trial 24 finished with value: 0.655643287434117 and parameters: {'num_leaves': 5}. Best is trial 11 with value: 0.6535525819216924.
num_leaves, val_score: 0.653553:  90%|######### | 18/20 [00:13<00:01,  1.59it/s][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000491 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.488408	valid's binary_logloss: 0.664489
Early stopping, best iteration is:
[78]	train's binary_logloss: 0.517222	valid's binary_logloss: 0.661867
num_leaves, val_score: 0.653553:  95%|#########5| 19/20 [00:14<00:00,  1.53it/s][I 2020-09-27 04:39:29,600] Trial 25 finished with value: 0.6618666948452248 and parameters: {'num_leaves': 84}. Best is trial 11 with value: 0.6535525819216924.
num_leaves, val_score: 0.653553:  95%|#########5| 19/20 [00:14<00:00,  1.53it/s][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000236 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.580318	valid's binary_logloss: 0.659611
Early stopping, best iteration is:
[67]	train's binary_logloss: 0.601799	valid's binary_logloss: 0.658994
num_leaves, val_score: 0.653553: 100%|##########| 20/20 [00:14<00:00,  1.71it/s][I 2020-09-27 04:39:30,031] Trial 26 finished with value: 0.6589942104081719 and parameters: {'num_leaves': 32}. Best is trial 11 with value: 0.6535525819216924.
num_leaves, val_score: 0.653553: 100%|##########| 20/20 [00:14<00:00,  1.35it/s]
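The stage names running through this log (num_leaves, bagging, feature_fraction_stage2, regularization_factors) match the stepwise search of Optuna's LightGBM integration, which tunes one parameter group at a time and carries the best values forward. The actual invocation is not shown in this output, so the following is only a minimal sketch of how such a run is typically started, assuming `X`/`y` hold the preprocessed features and target built earlier in the notebook; the 100-round early stopping and the [100]/[200]/... progress lines in the log correspond to the last two keyword arguments.

```python
import lightgbm as lgb
import optuna.integration.lightgbm as olgb  # stepwise tuner (assumed here)
from sklearn.model_selection import train_test_split

# Assumption: X = preprocessed feature matrix, y = 0/1 target column "y".
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=71)
dtrain = lgb.Dataset(X_tr, label=y_tr)
dvalid = lgb.Dataset(X_val, label=y_val)

params = {"objective": "binary", "metric": "binary_logloss"}

# Searches feature_fraction -> num_leaves -> bagging -> feature_fraction_stage2
# -> regularization_factors -> min_child_samples, one stage at a time.
tuner = olgb.LightGBMTuner(
    params,
    dtrain,
    valid_sets=[dvalid],
    num_boost_round=1000,
    early_stopping_rounds=100,  # "don't improve for 100 rounds" in the log
    verbose_eval=100,           # prints progress every 100 iterations
)
tuner.run()
```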
bagging search (trials 27-36, 10 trials):

- Trial 27: bagging_fraction=0.810, bagging_freq=2 -> 0.651395
- Trial 28: bagging_fraction=0.828, bagging_freq=2 -> 0.651255 (stage best)
- Trial 29: bagging_fraction=0.842, bagging_freq=2 -> 0.651858
- Trial 30: bagging_fraction=0.828, bagging_freq=2 -> 0.653138
- Trial 31: bagging_fraction=0.838, bagging_freq=2 -> 0.653683
- Trial 32: bagging_fraction=0.824, bagging_freq=2 -> 0.652210
- Trial 33: bagging_fraction=0.813, bagging_freq=2 -> 0.653468
- Trial 34: bagging_fraction=0.833, bagging_freq=2 -> 0.652485
- Trial 35: bagging_fraction=0.695, bagging_freq=3 -> 0.653911
- Trial 36: bagging_fraction=0.975, bagging_freq=5 -> 0.652984

Row subsampling clearly helps: the best bagging trial (28) improves val_score from 0.653553 to 0.651255.
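For reference, the two parameters searched in this stage control LightGBM's row subsampling: `bagging_fraction` is the share of rows drawn for each tree, and `bagging_freq` is how often (in boosting iterations) that sample is redrawn. Below is a minimal sketch of training a single model directly with the best values from trial 28, reusing `dtrain`/`dvalid` from the sketch above; an illustration, not the tuner's own code.

```python
import lightgbm as lgb

params = {
    "objective": "binary",
    "metric": "binary_logloss",
    "bagging_fraction": 0.828,  # ~83% of the rows per tree (best trial above)
    "bagging_freq": 2,          # redraw the row sample every 2 iterations
}
model = lgb.train(
    params,
    dtrain,                     # from the previous sketch
    num_boost_round=1000,
    valid_sets=[dvalid],
    early_stopping_rounds=100,
    verbose_eval=100,
)
```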
feature_fraction_stage2 search (trials 37-39, 3 trials):

- Trial 37: feature_fraction=0.416 -> 0.652763
- Trial 38: feature_fraction=0.448 -> 0.651784
- Trial 39: feature_fraction=0.480 -> 0.651784

None of the three beats the running best, so val_score stays at 0.651255 from the bagging stage.
regularization_factors, val_score: 0.651255:   0%|          | 0/20 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000460 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632512	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.62981	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627227	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628658	valid's binary_logloss: 0.651255
regularization_factors, val_score: 0.651255:   5%|5         | 1/20 [00:01<00:23,  1.23s/it][I 2020-09-27 04:39:45,848] Trial 40 finished with value: 0.6512546032948433 and parameters: {'lambda_l1': 3.7407304516474708e-06, 'lambda_l2': 0.0003236321792160619}. Best is trial 40 with value: 0.6512546032948433.
regularization_factors, val_score: 0.651255:   5%|5         | 1/20 [00:01<00:23,  1.23s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000479 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632512	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.62981	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627227	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628658	valid's binary_logloss: 0.651255
regularization_factors, val_score: 0.651255:  10%|#         | 2/20 [00:02<00:23,  1.33s/it][I 2020-09-27 04:39:47,419] Trial 41 finished with value: 0.6512546004999574 and parameters: {'lambda_l1': 1.58423580909119e-06, 'lambda_l2': 0.00037329518153918976}. Best is trial 41 with value: 0.6512546004999574.
regularization_factors, val_score: 0.651255:  10%|#         | 2/20 [00:02<00:23,  1.33s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000390 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632512	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.62981	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627227	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628658	valid's binary_logloss: 0.651255
regularization_factors, val_score: 0.651255:  15%|#5        | 3/20 [00:04<00:22,  1.33s/it][I 2020-09-27 04:39:48,734] Trial 42 finished with value: 0.6512545924218361 and parameters: {'lambda_l1': 1.3626757884196743e-06, 'lambda_l2': 0.0005160151192105561}. Best is trial 42 with value: 0.6512545924218361.
regularization_factors, val_score: 0.651255:  15%|#5        | 3/20 [00:04<00:22,  1.33s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000381 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632512	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.62981	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627227	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628658	valid's binary_logloss: 0.651255
regularization_factors, val_score: 0.651255:  20%|##        | 4/20 [00:05<00:20,  1.29s/it][I 2020-09-27 04:39:49,939] Trial 43 finished with value: 0.6512546024195541 and parameters: {'lambda_l1': 1.826384761461434e-06, 'lambda_l2': 0.000339404582931327}. Best is trial 42 with value: 0.6512545924218361.
regularization_factors, val_score: 0.651255:  20%|##        | 4/20 [00:05<00:20,  1.29s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000442 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632512	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.62981	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627227	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628658	valid's binary_logloss: 0.651255
regularization_factors, val_score: 0.651255:  25%|##5       | 5/20 [00:07<00:21,  1.42s/it][I 2020-09-27 04:39:51,672] Trial 44 finished with value: 0.6512545971181739 and parameters: {'lambda_l1': 9.830713052753886e-07, 'lambda_l2': 0.00043290604666443164}. Best is trial 42 with value: 0.6512545924218361.
regularization_factors, val_score: 0.651255:  25%|##5       | 5/20 [00:07<00:21,  1.42s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000564 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632512	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.62981	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627227	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628658	valid's binary_logloss: 0.651255
regularization_factors, val_score: 0.651255:  30%|###       | 6/20 [00:08<00:19,  1.36s/it][I 2020-09-27 04:39:52,891] Trial 45 finished with value: 0.6512546006093322 and parameters: {'lambda_l1': 7.770048321005081e-07, 'lambda_l2': 0.00037155770796455103}. Best is trial 42 with value: 0.6512545924218361.
regularization_factors, val_score: 0.651255:  30%|###       | 6/20 [00:08<00:19,  1.36s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000386 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632512	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.62981	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627227	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628658	valid's binary_logloss: 0.651255
regularization_factors, val_score: 0.651255:  35%|###5      | 7/20 [00:09<00:17,  1.32s/it][I 2020-09-27 04:39:54,130] Trial 46 finished with value: 0.65125458769479 and parameters: {'lambda_l1': 5.65299654936304e-07, 'lambda_l2': 0.0005992895040580498}. Best is trial 46 with value: 0.65125458769479.
regularization_factors, val_score: 0.651255:  35%|###5      | 7/20 [00:09<00:17,  1.32s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000244 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632512	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.62981	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627227	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628658	valid's binary_logloss: 0.651255
regularization_factors, val_score: 0.651255:  40%|####      | 8/20 [00:11<00:17,  1.42s/it][I 2020-09-27 04:39:55,765] Trial 47 finished with value: 0.651254582843981 and parameters: {'lambda_l1': 5.113147117976975e-07, 'lambda_l2': 0.0006849316707389379}. Best is trial 47 with value: 0.651254582843981.
regularization_factors, val_score: 0.651255:  40%|####      | 8/20 [00:11<00:17,  1.42s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000452 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632512	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.62981	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627227	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628658	valid's binary_logloss: 0.651255
regularization_factors, val_score: 0.651255:  45%|####5     | 9/20 [00:12<00:14,  1.35s/it][I 2020-09-27 04:39:56,969] Trial 48 finished with value: 0.6512545741011545 and parameters: {'lambda_l1': 3.656814382369128e-07, 'lambda_l2': 0.0008392662361736358}. Best is trial 48 with value: 0.6512545741011545.
regularization_factors, val_score: 0.651255:  45%|####5     | 9/20 [00:12<00:14,  1.35s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000266 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632513	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.629811	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627228	valid's binary_logloss: 0.651696
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628659	valid's binary_logloss: 0.651254
regularization_factors, val_score: 0.651254:  50%|#####     | 10/20 [00:13<00:13,  1.31s/it][I 2020-09-27 04:39:58,184] Trial 49 finished with value: 0.65125447186593 and parameters: {'lambda_l1': 4.097168681166982e-08, 'lambda_l2': 0.002642450368328453}. Best is trial 49 with value: 0.65125447186593.
regularization_factors, val_score: 0.651254:  50%|#####     | 10/20 [00:13<00:13,  1.31s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000234 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657642	valid's binary_logloss: 0.663104
[200]	train's binary_logloss: 0.647161	valid's binary_logloss: 0.6561
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653789
[400]	train's binary_logloss: 0.638567	valid's binary_logloss: 0.652995
Early stopping, best iteration is:
[378]	train's binary_logloss: 0.639273	valid's binary_logloss: 0.65285
regularization_factors, val_score: 0.651254:  55%|#####5    | 11/20 [00:14<00:11,  1.26s/it][I 2020-09-27 04:39:59,310] Trial 50 finished with value: 0.6528496531447342 and parameters: {'lambda_l1': 1.8926948433638843e-08, 'lambda_l2': 0.18218516904146864}. Best is trial 49 with value: 0.65125447186593.
regularization_factors, val_score: 0.651254:  55%|#####5    | 11/20 [00:14<00:11,  1.26s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000499 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657637	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642211	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.638597	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635489	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632513	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.629811	valid's binary_logloss: 0.651852
[800]	train's binary_logloss: 0.627228	valid's binary_logloss: 0.651697
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628659	valid's binary_logloss: 0.651254
regularization_factors, val_score: 0.651254:  60%|######    | 12/20 [00:15<00:10,  1.25s/it][I 2020-09-27 04:40:00,559] Trial 51 finished with value: 0.651254491688544 and parameters: {'lambda_l1': 9.464855403688449e-08, 'lambda_l2': 0.0022922152533675836}. Best is trial 49 with value: 0.65125447186593.
regularization_factors, val_score: 0.651254:  60%|######    | 12/20 [00:15<00:10,  1.25s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000381 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657638	valid's binary_logloss: 0.663101
[200]	train's binary_logloss: 0.647154	valid's binary_logloss: 0.656096
[300]	train's binary_logloss: 0.642213	valid's binary_logloss: 0.653702
[400]	train's binary_logloss: 0.6386	valid's binary_logloss: 0.652862
[500]	train's binary_logloss: 0.635493	valid's binary_logloss: 0.652354
[600]	train's binary_logloss: 0.632518	valid's binary_logloss: 0.652123
[700]	train's binary_logloss: 0.629817	valid's binary_logloss: 0.651851
[800]	train's binary_logloss: 0.627236	valid's binary_logloss: 0.651695
Early stopping, best iteration is:
[743]	train's binary_logloss: 0.628666	valid's binary_logloss: 0.651254
regularization_factors, val_score: 0.651254:  65%|######5   | 13/20 [00:17<00:09,  1.34s/it][I 2020-09-27 04:40:02,095] Trial 52 finished with value: 0.651253816244402 and parameters: {'lambda_l1': 1.1047734679593184e-08, 'lambda_l2': 0.014255881410398455}. Best is trial 52 with value: 0.651253816244402.
regularization_factors, val_score: 0.651254:  65%|######5   | 13/20 [00:17<00:09,  1.34s/it][LightGBM] [Info] Number of positive: 13151, number of negative: 12848
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000248 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4237
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505827 -> initscore=0.023310
[LightGBM] [Info] Start training from score 0.023310
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657638	valid's binary_logloss: 0.663102
[200]	train's binary_logloss: 0.647156	valid's binary_logloss: 0.656097
[300]	train's binary_logloss: 0.642216	valid's binary_logloss: 0.653703
[400]	train's binary_logloss: 0.638607	valid's binary_logloss: 0.652863
[500]	train's binary_logloss: 0.6355	valid's binary_logloss: 0.652355
[600]	train's binary_logloss: 0.632662	valid's binary_logloss: 0.652694
Early stopping, best iteration is:
[531]	train's binary_logloss: 0.634567	valid's binary_logloss: 0.652228
regularization_factors, val_score: 0.651254:  70%|#######   | 14/20 [00:19<00:09,  1.53s/it][I 2020-09-27 04:40:04,065] Trial 53 finished with value: 0.6522275662505942 and parameters: {'lambda_l1': 1.4618448056501589e-08, 'lambda_l2': 0.04375470564403219}. Best is trial 52 with value: 0.651253816244402.
[Trials 54-59 condensed: lambda_l1 stayed near 1e-08 (a single trial at 0.76 scored worse) while lambda_l2 varied between 0.014 and 0.044; best was Trial 55, value 0.6512532800647542, parameters {'lambda_l1': 1.1653169345025846e-08, 'lambda_l2': 0.02381661498148718}. Per-trial LightGBM training logs omitted.]
regularization_factors, val_score: 0.651253: 100%|##########| 20/20 [00:27<00:00,  1.36s/it]
[Trials 60-64 condensed: min_child_samples in {5, 10, 25, 50, 100}; best was Trial 63, value 0.6511117328857268, parameters {'min_child_samples': 10}. Per-trial LightGBM training logs omitted.]
min_data_in_leaf, val_score: 0.651112: 100%|##########| 5/5 [00:06<00:00,  1.21s/it]
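These stepwise progress bars (feature_fraction → num_leaves → bagging → regularization_factors → min_data_in_leaf), each wrapping ordinary Optuna trials, are characteristic of Optuna's LightGBM integration, which tunes one parameter group at a time instead of searching the whole space at once. Below is a minimal sketch of how a log like the above is typically produced, assuming the tuner is invoked once per fold (the `Fold : 2` marker that follows suggests an outer CV loop); the dataset variable names are hypothetical and this is not the author's exact code:

# Minimal sketch (assumption, not the author's code): optuna's LightGBM
# integration is a drop-in for lightgbm.train() that tunes feature_fraction,
# num_leaves, bagging, regularization_factors and min_data_in_leaf stepwise.
import lightgbm as lgb
import optuna.integration.lightgbm as opt_lgb

params = {'objective': 'binary', 'metric': 'binary_logloss'}

# X_tr / y_tr / X_va / y_va: one fold's train/validation split (hypothetical names)
dtrain = lgb.Dataset(X_tr, label=y_tr)
dvalid = lgb.Dataset(X_va, label=y_va)

booster = opt_lgb.train(
    params,
    dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],   # matches the "train's/valid's binary_logloss" labels
    num_boost_round=1000,             # matches "Did not meet early stopping ... [1000]"
    early_stopping_rounds=100,        # matches "don't improve for 100 rounds"
    verbose_eval=100,                 # one metric line every 100 iterations
    # (2020-era API; recent LightGBM versions take early stopping / log period via callbacks)
)
print(booster.params)                 # best parameters found by the stepwise search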
Fold : 2
[I 2020-09-27 04:40:18,017] A new study created in memory with name: no-name-75b2cbed-db22-45df-bfc2-635871fa6cfc
[Trials 0-6 condensed: feature_fraction in {0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}; best was Trial 6, value 0.6551672639343036, parameters {'feature_fraction': 0.4}. Per-trial LightGBM training logs omitted.]
feature_fraction, val_score: 0.655167: 100%|##########| 7/7 [00:04<00:00,  1.52it/s]
[Trials 7-26 condensed: num_leaves in [2, 248]; the smallest trees won, and num_leaves=2 ran the full 1000 rounds without early stopping; best was Trial 10, value 0.6531478306778341, parameters {'num_leaves': 2}. Per-trial LightGBM training logs omitted.]
num_leaves, val_score: 0.653148: 100%|##########| 20/20 [00:17<00:00,  1.14it/s]
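A side note on the repeated "[Warning] Auto-choosing row-wise multi-threading ..." lines: LightGBM benchmarks row- and column-wise threading at the start of every trial, and the warning itself says how to silence it. If that overhead matters, one layout can be pinned in the params dict; which one is faster depends on the data, so this is optional:

params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'force_row_wise': True,   # pin the threading layout; skips the auto-test warned about above
    # or 'force_col_wise': True if memory is tight, as the warning suggests
}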
[Trials 27-34 condensed: bagging_fraction in [0.40, 0.83] with bagging_freq in {1, 5, 7}; best so far was Trial 29, value 0.65225160633841, parameters {'bagging_fraction': 0.4737096463396763, 'bagging_freq': 7}. Per-trial LightGBM training logs for these trials omitted; the next trial's log follows.]
bagging, val_score: 0.652252:  80%|########  | 8/10 [00:07<00:01,  1.15it/s][LightGBM] [Info] Number of positive: 13149, number of negative: 12850
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000704 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505750 -> initscore=0.023002
[LightGBM] [Info] Start training from score 0.023002
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.666735	valid's binary_logloss: 0.669024
[200]	train's binary_logloss: 0.656104	valid's binary_logloss: 0.660308
[300]	train's binary_logloss: 0.650686	valid's binary_logloss: 0.655603
[400]	train's binary_logloss: 0.648444	valid's binary_logloss: 0.653985
[500]	train's binary_logloss: 0.646852	valid's binary_logloss: 0.653461
Early stopping, best iteration is:
[470]	train's binary_logloss: 0.647262	valid's binary_logloss: 0.652979
bagging, val_score: 0.652252:  90%|######### | 9/10 [00:08<00:00,  1.21it/s][I 2020-09-27 04:40:48,677] Trial 35 finished with value: 0.6529787611145468 and parameters: {'bagging_fraction': 0.40896873900368547, 'bagging_freq': 7}. Best is trial 29 with value: 0.65225160633841.
bagging, val_score: 0.652252:  90%|######### | 9/10 [00:08<00:00,  1.21it/s][LightGBM] [Info] Number of positive: 13149, number of negative: 12850
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.010501 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505750 -> initscore=0.023002
[LightGBM] [Info] Start training from score 0.023002
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.667557	valid's binary_logloss: 0.670365
[200]	train's binary_logloss: 0.656945	valid's binary_logloss: 0.660765
[300]	train's binary_logloss: 0.651474	valid's binary_logloss: 0.656544
[400]	train's binary_logloss: 0.648605	valid's binary_logloss: 0.653988
[500]	train's binary_logloss: 0.646848	valid's binary_logloss: 0.653526
[600]	train's binary_logloss: 0.645737	valid's binary_logloss: 0.653035
Early stopping, best iteration is:
[588]	train's binary_logloss: 0.645878	valid's binary_logloss: 0.652666
bagging, val_score: 0.652252: 100%|##########| 10/10 [00:09<00:00,  1.03it/s][I 2020-09-27 04:40:49,991] Trial 36 finished with value: 0.65266565454767 and parameters: {'bagging_fraction': 0.5936827208546511, 'bagging_freq': 7}. Best is trial 29 with value: 0.65225160633841.
bagging, val_score: 0.652252: 100%|##########| 10/10 [00:09<00:00,  1.03it/s]
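Every trial re-prints the same `[LightGBM] [Warning] Auto-choosing ...` block because LightGBM benchmarks row-wise vs. column-wise histogram construction for each new Dataset. As the warning itself suggests, fixing the mode in the parameters removes both the overhead and the log noise. A minimal sketch; only `force_row_wise` comes from the warning text, the other keys simply mirror the binary-logloss setup seen in the log:

# Fix the histogram construction mode up front so LightGBM skips the
# row-wise/col-wise benchmark (and its repeated warning) on every trial.
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'force_row_wise': True,  # or 'force_col_wise': True if memory is tight
}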
feature_fraction_stage2, val_score: 0.652252:   0%|          | 0/3 [00:00<?, ?it/s]
[I 2020-09-27 04:40:50,883] Trial 37 finished with value: 0.6522110872949437 and parameters: {'feature_fraction': 0.41600000000000004}. Best is trial 37 with value: 0.6522110872949437.
[I 2020-09-27 04:40:51,765] Trial 38 finished with value: 0.6523684425922491 and parameters: {'feature_fraction': 0.44800000000000006}. Best is trial 37 with value: 0.6522110872949437.
[I 2020-09-27 04:40:52,662] Trial 39 finished with value: 0.6523684425922491 and parameters: {'feature_fraction': 0.48000000000000004}. Best is trial 37 with value: 0.6522110872949437.
feature_fraction_stage2, val_score: 0.652211: 100%|##########| 3/3 [00:02<00:00,  1.12it/s]
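Trials 38 and 39 report exactly the same logloss (0.6523684425922491) despite different `feature_fraction` values. A plausible explanation, which is an assumption on my part and not stated in the log, is that LightGBM turns the fraction into an integer count of the 26 used features, so nearby fractions can end up selecting the identical subset under the same per-trial seed:

# Hypothetical check (assumes rounding to an integer feature count):
# 0.448 and 0.480 collapse to the same count for 26 features.
for ff in (0.41600000000000004, 0.44800000000000006, 0.48000000000000004):
    print(ff, round(26 * ff))  # -> 11, 12, 12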
regularization_factors, val_score: 0.652211:   0%|          | 0/20 [00:00<?, ?it/s]
[I 2020-09-27 04:40:54,031] Trial 40 finished with value: 0.6522109977626771 and parameters: {'lambda_l1': 0.00014164147515972837, 'lambda_l2': 0.005481479902021038}. Best is trial 40 with value: 0.6522109977626771.
[I 2020-09-27 04:40:54,895] Trial 41 finished with value: 0.6522109708484218 and parameters: {'lambda_l1': 0.00010302212398099651, 'lambda_l2': 0.007995173172021716}. Best is trial 41 with value: 0.6522109708484218.
[I 2020-09-27 04:40:55,766] Trial 42 finished with value: 0.6522109575004142 and parameters: {'lambda_l1': 5.703427719344543e-05, 'lambda_l2': 0.01133484756475216}. Best is trial 42 with value: 0.6522109575004142.
[I 2020-09-27 04:40:56,793] Trial 43 finished with value: 0.6522109461124184 and parameters: {'lambda_l1': 7.037388835726531e-05, 'lambda_l2': 0.009023034240515858}. Best is trial 43 with value: 0.6522109461124184.
[I 2020-09-27 04:40:57,991] Trial 44 finished with value: 0.6522108862289743 and parameters: {'lambda_l1': 4.695264971426557e-05, 'lambda_l2': 0.012990997644376091}. Best is trial 44 with value: 0.6522108862289743.
[I 2020-09-27 04:40:58,893] Trial 45 finished with value: 0.6522109405266442 and parameters: {'lambda_l1': 6.628550474869429e-05, 'lambda_l2': 0.00939838589902522}. Best is trial 44 with value: 0.6522108862289743.
[I 2020-09-27 04:40:59,810] Trial 46 finished with value: 0.6522109149315195 and parameters: {'lambda_l1': 5.01427220376058e-05, 'lambda_l2': 0.011108962252046103}. Best is trial 44 with value: 0.6522108862289743.
[I 2020-09-27 04:41:00,882] Trial 47 finished with value: 0.6522108308126086 and parameters: {'lambda_l1': 4.076885797493761e-05, 'lambda_l2': 0.016633645127292034}. Best is trial 47 with value: 0.6522108308126086.
[I 2020-09-27 04:41:01,942] Trial 48 finished with value: 0.6522107665018237 and parameters: {'lambda_l1': 4.1633476557442786e-05, 'lambda_l2': 0.020110745479315996}. Best is trial 48 with value: 0.6522107665018237.
[I 2020-09-27 04:41:02,794] Trial 49 finished with value: 0.6522161063945356 and parameters: {'lambda_l1': 1.0048108252149604e-05, 'lambda_l2': 0.04595411901882225}. Best is trial 48 with value: 0.6522107665018237.
[I 2020-09-27 04:41:03,645] Trial 50 finished with value: 0.6520932820946773 and parameters: {'lambda_l1': 5.634084468530463e-06, 'lambda_l2': 2.653747794681544}. Best is trial 50 with value: 0.6520932820946773.
[I 2020-09-27 04:41:04,925] Trial 51 finished with value: 0.6523945935028854 and parameters: {'lambda_l1': 7.71584097888483e-06, 'lambda_l2': 4.4305490881206495}. Best is trial 50 with value: 0.6520932820946773.
[I 2020-09-27 04:41:06,349] Trial 52 finished with value: 0.6523351333047752 and parameters: {'lambda_l1': 7.422157693379738e-06, 'lambda_l2': 0.7332370123602728}. Best is trial 50 with value: 0.6520932820946773.
[I 2020-09-27 04:41:07,202] Trial 53 finished with value: 0.6523123313741976 and parameters: {'lambda_l1': 1.6736112176387592, 'lambda_l2': 3.1184950223667215e-07}. Best is trial 50 with value: 0.6520932820946773.
[I 2020-09-27 04:41:08,102] Trial 54 finished with value: 0.6522110253635155 and parameters: {'lambda_l1': 0.0016228255112486223, 'lambda_l2': 8.007222095393149e-05}. Best is trial 50 with value: 0.6520932820946773.
[I 2020-09-27 04:41:09,376] Trial 55 finished with value: 0.6522110831327725 and parameters: {'lambda_l1': 8.98931163949704e-08, 'lambda_l2': 0.0002694496140730187}. Best is trial 50 with value: 0.6520932820946773.
[I 2020-09-27 04:41:10,268] Trial 56 finished with value: 0.6522296062194786 and parameters: {'lambda_l1': 1.272264047860099e-06, 'lambda_l2': 0.29472834124394065}. Best is trial 50 with value: 0.6520932820946773.
[I 2020-09-27 04:41:11,137] Trial 57 finished with value: 0.6522109880071963 and parameters: {'lambda_l1': 0.0023192016556508074, 'lambda_l2': 0.0008178552198766434}. Best is trial 50 with value: 0.6520932820946773.
[I 2020-09-27 04:41:12,010] Trial 58 finished with value: 0.6522142117867799 and parameters: {'lambda_l1': 0.0007602407297528348, 'lambda_l2': 0.11500147374550874}. Best is trial 50 with value: 0.6520932820946773.
[I 2020-09-27 04:41:13,360] Trial 59 finished with value: 0.6522109875781879 and parameters: {'lambda_l1': 1.3713971348787328e-06, 'lambda_l2': 0.0013479533247432005}. Best is trial 50 with value: 0.6520932820946773.
regularization_factors, val_score: 0.652093: 100%|##########| 20/20 [00:20<00:00,  1.03s/it]
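The 20 regularization trials sample `lambda_l1` and `lambda_l2` across roughly seven orders of magnitude (from about 9e-8 up to about 4.4), and almost all of them land within 1e-4 of the unregularized score; only trial 50's strong L2 penalty (lambda_l2 ≈ 2.65) produces a real improvement (0.652093 vs. 0.652211). That spread is consistent with a log-uniform search space. A standalone sketch of such a space; the bounds and the dummy objective are assumptions, not taken from the log:

import optuna

# Sketch of a log-uniform search over both penalties, as the spread of
# sampled values in the log suggests (bounds are assumed).
def objective(trial):
    lambda_l1 = trial.suggest_loguniform('lambda_l1', 1e-8, 10.0)
    lambda_l2 = trial.suggest_loguniform('lambda_l2', 1e-8, 10.0)
    # In the real run these would go into the LightGBM params and the
    # validation binary_logloss would be returned; placeholder here so
    # the sketch runs standalone.
    return lambda_l1 + lambda_l2

study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=20)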
min_data_in_leaf, val_score: 0.652093:   0%|          | 0/5 [00:00<?, ?it/s]
[I 2020-09-27 04:41:14,186] Trial 60 finished with value: 0.6522671792511908 and parameters: {'min_child_samples': 50}. Best is trial 60 with value: 0.6522671792511908.
[I 2020-09-27 04:41:14,982] Trial 61 finished with value: 0.6522824409223347 and parameters: {'min_child_samples': 25}. Best is trial 60 with value: 0.6522671792511908.
[I 2020-09-27 04:41:15,775] Trial 62 finished with value: 0.6520002540523853 and parameters: {'min_child_samples': 10}. Best is trial 62 with value: 0.6520002540523853.
[I 2020-09-27 04:41:17,010] Trial 63 finished with value: 0.6524841574858531 and parameters: {'min_child_samples': 100}. Best is trial 62 with value: 0.6520002540523853.
[I 2020-09-27 04:41:17,796] Trial 64 finished with value: 0.6520002540523853 and parameters: {'min_child_samples': 5}. Best is trial 62 with value: 0.6520002540523853.
min_data_in_leaf, val_score: 0.652000: 100%|##########| 5/5 [00:04<00:00,  1.13it/s]
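The stage order above (feature_fraction, num_leaves, bagging, feature_fraction_stage2, regularization_factors, min_data_in_leaf) and the "A new study created in memory" line at each fold are characteristic of Optuna's stepwise LightGBM tuner (optuna.integration.lightgbm), which tunes one parameter group at a time instead of searching the joint space. A minimal, self-contained sketch of how a log like this is produced, with synthetic data standing in for the competition features; all names and values here are illustrative, not the notebook's actual code:

import numpy as np
import lightgbm as lgb
import optuna.integration.lightgbm as olgb

# Synthetic stand-in for one CV fold of the real feature matrix.
rng = np.random.RandomState(71)
X = rng.rand(2000, 26)
y = (X[:, 0] + 0.5 * rng.rand(2000) > 0.75).astype(int)
dtrain = lgb.Dataset(X[:1600], label=y[:1600])
dvalid = lgb.Dataset(X[1600:], label=y[1600:], reference=dtrain)

params = {'objective': 'binary', 'metric': 'binary_logloss'}

# One tuner (one in-memory study) per fold; it steps through the same
# stage names seen in the progress bars above.
tuner = olgb.LightGBMTuner(
    params,
    dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],
    num_boost_round=10000,
    early_stopping_rounds=100,
    verbose_eval=100,
)
tuner.run()
print(tuner.best_score, tuner.best_params)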
Fold : 3
[I 2020-09-27 04:41:17,860] A new study created in memory with name: no-name-2f38a2a7-25fd-480a-a0ca-a8d760f2fdde
feature_fraction, val_score: inf:   0%|          | 0/7 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 13145, number of negative: 12854
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004894 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505596 -> initscore=0.022386
[LightGBM] [Info] Start training from score 0.022386
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.572639	valid's binary_logloss: 0.657549
Early stopping, best iteration is:
[69]	train's binary_logloss: 0.593294	valid's binary_logloss: 0.65643
feature_fraction, val_score: 0.656430:  14%|#4        | 1/7 [00:00<00:03,  1.89it/s][I 2020-09-27 04:41:18,402] Trial 0 finished with value: 0.6564300006090903 and parameters: {'feature_fraction': 0.8}. Best is trial 0 with value: 0.6564300006090903.
feature_fraction, val_score: 0.656430:  14%|#4        | 1/7 [00:00<00:03,  1.89it/s][LightGBM] [Info] Number of positive: 13145, number of negative: 12854
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000471 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505596 -> initscore=0.022386
[LightGBM] [Info] Start training from score 0.022386
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.582961	valid's binary_logloss: 0.657331
[200]	train's binary_logloss: 0.532881	valid's binary_logloss: 0.663509
Early stopping, best iteration is:
[105]	train's binary_logloss: 0.579867	valid's binary_logloss: 0.657057
feature_fraction, val_score: 0.656430:  29%|##8       | 2/7 [00:01<00:02,  1.88it/s][I 2020-09-27 04:41:18,934] Trial 1 finished with value: 0.6570566397890953 and parameters: {'feature_fraction': 0.4}. Best is trial 0 with value: 0.6564300006090903.
feature_fraction, val_score: 0.656430:  29%|##8       | 2/7 [00:01<00:02,  1.88it/s][LightGBM] [Info] Number of positive: 13145, number of negative: 12854
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000466 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505596 -> initscore=0.022386
[LightGBM] [Info] Start training from score 0.022386
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.578663	valid's binary_logloss: 0.658469
Early stopping, best iteration is:
[94]	train's binary_logloss: 0.582397	valid's binary_logloss: 0.657964
Optuna tuning log, feature_fraction stage (7 trials; trials 0 and 1 fall before this part of the log). Every trial trains on 25,999 rows (13,145 positive / 12,854 negative, 26 used features) and stops early after 100 rounds without improvement on the validation set:

- Trial 2: feature_fraction=0.5 → val binary_logloss 0.657964
- Trial 3: feature_fraction=1.0 → 0.660046
- Trial 4: feature_fraction=0.6 → 0.656858
- Trial 5: feature_fraction=0.7 → 0.656920
- Trial 6: feature_fraction=0.9 → 0.660277

Best of the stage: trial 0, val_score 0.656430.
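A log with exactly these stage names (feature_fraction, num_leaves, bagging, feature_fraction_stage2, regularization_factors) is what Optuna's LightGBM integration (LightGBMTuner) produces. The original tuning cell is not part of this log, but a minimal sketch of the kind of call that generates such output looks like the following; the split variables `tr_x`, `tr_y`, `va_x`, `va_y` are assumptions, not the author's code:

```python
# Minimal sketch (not the author's exact code): stepwise LightGBM tuning
# with Optuna. Assumes an existing train/validation split
# (tr_x, tr_y, va_x, va_y) of the preprocessed feature matrix.
import optuna.integration.lightgbm as opt_lgb

dtrain = opt_lgb.Dataset(tr_x, label=tr_y)
dvalid = opt_lgb.Dataset(va_x, label=va_y)

params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
}

# opt_lgb.train() runs the LightGBMTuner, which sweeps feature_fraction,
# num_leaves, bagging, feature_fraction_stage2 and regularization_factors
# as separate stages, the same labels that appear in this log.
booster = opt_lgb.train(
    params,
    dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],
    num_boost_round=1000,        # the log caps out at iteration [1000]
    early_stopping_rounds=100,   # "don't improve for 100 rounds"
)

print(booster.params)  # parameters after all tuning stages
```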
num_leaves stage (20 trials):

- Trial 7: num_leaves=199 → 0.661617
- Trial 8: num_leaves=220 → 0.663982
- Trial 9: num_leaves=197 → 0.658535
- Trial 10: num_leaves=12 → 0.656843
- Trial 11: num_leaves=8 → 0.657048
- Trial 12: num_leaves=12 → 0.656843
- Trial 13: num_leaves=89 → 0.659615
- Trial 14: num_leaves=90 → 0.659102
- Trial 15: num_leaves=66 → 0.658021
- Trial 16: num_leaves=144 → 0.658413
- Trial 17: num_leaves=38 → 0.657578
- Trial 18: num_leaves=143 → 0.661242
- Trial 19: num_leaves=248 → 0.662160
- Trial 20: num_leaves=47 → 0.656890
- Trial 21: num_leaves=2 → 0.654744
- Trial 22: num_leaves=17 → 0.657658
- Trial 23: num_leaves=3 → 0.654226 (best)
- Trial 24: num_leaves=101 → 0.660699
- Trial 25: num_leaves=48 → 0.656588
- Trial 26: num_leaves=120 → 0.662727

Best of the stage: trial 23 (num_leaves=3), val_score 0.654226. The per-iteration loglosses show a clear overfitting pattern: at num_leaves=248 the train logloss fell to 0.27 by iteration 100 while the validation logloss climbed to 0.68 (early stopping rolled back to iteration 22), whereas num_leaves=2 never triggered early stopping within the 1,000-round budget and, together with num_leaves=3, generalized best.
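Each trial above is just one ordinary LightGBM training run with the candidate value plugged in. As a hedged illustration, reusing the hypothetical `dtrain`/`dvalid` from the earlier sketch, trial 23 corresponds to roughly:

```python
import lightgbm as lgb

# One tuner trial == one plain training run with the candidate value.
# Sketch of trial 23 (num_leaves=3). The feature_fraction carried over
# from the earlier stage is not visible in this log, so it is omitted.
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'num_leaves': 3,   # candidate value of trial 23
    'verbose': -1,
}
booster = lgb.train(
    params,
    dtrain,
    num_boost_round=1000,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],
    early_stopping_rounds=100,  # lightgbm>=4 uses callbacks=[lgb.early_stopping(100)]
)
# The log reports valid binary_logloss 0.654226 for this trial.
print(booster.best_score['valid']['binary_logloss'])
```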
bagging stage (10 trials):

- Trial 27: bagging_fraction=0.742, bagging_freq=4 → 0.655229
- Trial 28: bagging_fraction=0.747, bagging_freq=4 → 0.654327
- Trial 29: bagging_fraction=0.985, bagging_freq=7 → 0.655055
- Trial 30: bagging_fraction=0.558, bagging_freq=3 → 0.656058
- Trial 31: bagging_fraction=0.952, bagging_freq=7 → 0.653850 (best)
- Trial 32: bagging_fraction=0.928, bagging_freq=7 → 0.654112
- Trial 33: bagging_fraction=0.950, bagging_freq=7 → 0.654145
- Trial 34: bagging_fraction=0.996, bagging_freq=7 → 0.654176
- Trial 35: bagging_fraction=0.991, bagging_freq=7 → 0.654177
- Trial 36: bagging_fraction=0.943, bagging_freq=7 → 0.654396

Best of the stage: trial 31 (bagging_fraction≈0.952, bagging_freq=7), val_score 0.653850.
feature_fraction_stage2 (6 trials, an evenly spaced refinement grid over feature_fraction):

- Trial 37: feature_fraction=0.752 → 0.654387
- Trial 38: feature_fraction=0.720 → 0.653747 (best)
- Trial 39: feature_fraction=0.848 → 0.654813
- Trial 40: feature_fraction=0.880 → 0.654136
- Trial 41: feature_fraction=0.816 → 0.653850
- Trial 42: feature_fraction=0.784 → 0.654387

Best of the stage: trial 38 (feature_fraction=0.72), val_score 0.653747.
regularization_factors, val_score: 0.653747:   0%|          | 0/20 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 13145, number of negative: 12854
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000984 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505596 -> initscore=0.022386
[LightGBM] [Info] Start training from score 0.022386
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657307	valid's binary_logloss: 0.665041
[200]	train's binary_logloss: 0.646783	valid's binary_logloss: 0.657725
[300]	train's binary_logloss: 0.641449	valid's binary_logloss: 0.65506
[400]	train's binary_logloss: 0.637619	valid's binary_logloss: 0.654377
[500]	train's binary_logloss: 0.63434	valid's binary_logloss: 0.653884
Early stopping, best iteration is:
[488]	train's binary_logloss: 0.63474	valid's binary_logloss: 0.653747
regularization_factors, val_score: 0.653747:   5%|5         | 1/20 [00:00<00:17,  1.11it/s][I 2020-09-27 04:41:59,955] Trial 43 finished with value: 0.6537470788623818 and parameters: {'lambda_l1': 1.164724543974479e-08, 'lambda_l2': 6.922005451695355e-06}. Best is trial 43 with value: 0.6537470788623818.
regularization_factors, val_score: 0.653747:   5%|5         | 1/20 [00:00<00:17,  1.11it/s][LightGBM] [Info] Number of positive: 13145, number of negative: 12854
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000880 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505596 -> initscore=0.022386
[LightGBM] [Info] Start training from score 0.022386
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657307	valid's binary_logloss: 0.665041
[200]	train's binary_logloss: 0.646783	valid's binary_logloss: 0.657725
[300]	train's binary_logloss: 0.641449	valid's binary_logloss: 0.65506
[400]	train's binary_logloss: 0.637619	valid's binary_logloss: 0.654377
[500]	train's binary_logloss: 0.63434	valid's binary_logloss: 0.653884
Early stopping, best iteration is:
[488]	train's binary_logloss: 0.63474	valid's binary_logloss: 0.653747
regularization_factors, val_score: 0.653747:  10%|#         | 2/20 [00:01<00:16,  1.11it/s][I 2020-09-27 04:42:00,859] Trial 44 finished with value: 0.6537470788688746 and parameters: {'lambda_l1': 1.2101619670742207e-08, 'lambda_l2': 3.569458567905685e-06}. Best is trial 43 with value: 0.6537470788623818.
regularization_factors, val_score: 0.653747:  10%|#         | 2/20 [00:01<00:16,  1.11it/s][LightGBM] [Info] Number of positive: 13145, number of negative: 12854
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000804 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505596 -> initscore=0.022386
[LightGBM] [Info] Start training from score 0.022386
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657307	valid's binary_logloss: 0.665041
[200]	train's binary_logloss: 0.646783	valid's binary_logloss: 0.657725
[300]	train's binary_logloss: 0.641449	valid's binary_logloss: 0.65506
[400]	train's binary_logloss: 0.637619	valid's binary_logloss: 0.654377
[500]	train's binary_logloss: 0.63434	valid's binary_logloss: 0.653884
Early stopping, best iteration is:
[488]	train's binary_logloss: 0.63474	valid's binary_logloss: 0.653747
regularization_factors, val_score: 0.653747:  15%|#5        | 3/20 [00:02<00:15,  1.08it/s][I 2020-09-27 04:42:01,825] Trial 45 finished with value: 0.653747078870228 and parameters: {'lambda_l1': 1.0924605897873326e-08, 'lambda_l2': 3.7438094337593657e-06}. Best is trial 43 with value: 0.6537470788623818.
regularization_factors, val_score: 0.653747:  15%|#5        | 3/20 [00:02<00:15,  1.08it/s][LightGBM] [Info] Number of positive: 13145, number of negative: 12854
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.009826 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505596 -> initscore=0.022386
[LightGBM] [Info] Start training from score 0.022386
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657307	valid's binary_logloss: 0.665041
[200]	train's binary_logloss: 0.646783	valid's binary_logloss: 0.657725
[300]	train's binary_logloss: 0.641449	valid's binary_logloss: 0.65506
[400]	train's binary_logloss: 0.637619	valid's binary_logloss: 0.654377
[500]	train's binary_logloss: 0.63434	valid's binary_logloss: 0.653884
Early stopping, best iteration is:
[488]	train's binary_logloss: 0.63474	valid's binary_logloss: 0.653747
regularization_factors stage (trials 46-62, abridged): trials that drew negligible penalties (lambda_l1 around 1e-8, lambda_l2 between 1e-6 and 1e-3) reproduced essentially the same model, early-stopping at iteration 488 with valid binary_logloss 0.653747 and differing only far beyond the displayed precision. Trials 56-58 drew lambda_l2 in the 0.007-0.02 range and behaved differently: Trial 56 ({'lambda_l1': 1.11e-05, 'lambda_l2': 0.00716}) kept improving until iteration 750 and set the stage best at 0.653307, while Trials 57 and 58 scored 0.653564 and 0.654014. Trials 59-62 returned to roughly 0.653747.
regularization_factors, val_score: 0.653307: 100%|##########| 20/20 [00:22<00:00,  1.11s/it]
min_data_in_leaf stage (5 trials, abridged); values are valid binary_logloss:
- Trial 63: 0.654841 (min_child_samples=5)
- Trial 64: 0.654649 (min_child_samples=25)
- Trial 65: 0.653768 (min_child_samples=10)
- Trial 66: 0.653668 (min_child_samples=50)
- Trial 67: 0.653401 (min_child_samples=100), best of the stage but not enough to beat the carried-over 0.653307
min_data_in_leaf, val_score: 0.653307: 100%|##########| 5/5 [00:05<00:00,  1.15s/it]
Fold : 4
[I 2020-09-27 04:42:26,996] A new study created in memory with name: no-name-95b73563-c9f0-498a-9add-fe0ff51242f2
feature_fraction stage (7 trials, abridged); values are valid binary_logloss:
- Trial 0: 0.665516 (feature_fraction=1.0)
- Trial 1: 0.661166 (feature_fraction=0.8)
- Trial 2: 0.660561 (feature_fraction=0.6)
- Trial 3: 0.660867 (feature_fraction=0.9)
- Trial 4: 0.661927 (feature_fraction=0.7)
- Trial 5: 0.661106 (feature_fraction=0.5)
- Trial 6: 0.660443 (feature_fraction=0.4), best of the stage
feature_fraction, val_score: 0.660443: 100%|##########| 7/7 [00:04<00:00,  1.75it/s]
num_leaves, val_score: 0.660443:   0%|          | 0/20 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004658 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.356495	valid's binary_logloss: 0.671539
Early stopping, best iteration is:
[41]	train's binary_logloss: 0.499664	valid's binary_logloss: 0.664498
num_leaves, val_score: 0.660443:   5%|5         | 1/20 [00:00<00:16,  1.14it/s][I 2020-09-27 04:42:31,893] Trial 7 finished with value: 0.6644984899613434 and parameters: {'num_leaves': 190}. Best is trial 7 with value: 0.6644984899613434.
num_leaves, val_score: 0.660443:   5%|5         | 1/20 [00:00<00:16,  1.14it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000609 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.526375	valid's binary_logloss: 0.665053
Early stopping, best iteration is:
[67]	train's binary_logloss: 0.561219	valid's binary_logloss: 0.662832
num_leaves, val_score: 0.660443:  10%|#         | 2/20 [00:01<00:14,  1.28it/s][I 2020-09-27 04:42:32,449] Trial 8 finished with value: 0.6628317798276503 and parameters: {'num_leaves': 60}. Best is trial 8 with value: 0.6628317798276503.
num_leaves, val_score: 0.660443:  10%|#         | 2/20 [00:01<00:14,  1.28it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000379 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.608789	valid's binary_logloss: 0.659931
[200]	train's binary_logloss: 0.573948	valid's binary_logloss: 0.661564
Early stopping, best iteration is:
[106]	train's binary_logloss: 0.606284	valid's binary_logloss: 0.659493
num_leaves, val_score: 0.659493:  15%|#5        | 3/20 [00:01<00:11,  1.48it/s][I 2020-09-27 04:42:32,881] Trial 9 finished with value: 0.6594930122196023 and parameters: {'num_leaves': 19}. Best is trial 9 with value: 0.6594930122196023.
num_leaves, val_score: 0.659493:  15%|#5        | 3/20 [00:01<00:11,  1.48it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000374 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.62851	valid's binary_logloss: 0.661694
[200]	train's binary_logloss: 0.606786	valid's binary_logloss: 0.660582
Early stopping, best iteration is:
[177]	train's binary_logloss: 0.611571	valid's binary_logloss: 0.660135
num_leaves, val_score: 0.659493:  20%|##        | 4/20 [00:02<00:10,  1.56it/s][I 2020-09-27 04:42:33,444] Trial 10 finished with value: 0.6601347890202464 and parameters: {'num_leaves': 11}. Best is trial 9 with value: 0.6594930122196023.
num_leaves, val_score: 0.659493:  20%|##        | 4/20 [00:02<00:10,  1.56it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002220 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.623153	valid's binary_logloss: 0.659612
[200]	train's binary_logloss: 0.598247	valid's binary_logloss: 0.660773
Early stopping, best iteration is:
[106]	train's binary_logloss: 0.621362	valid's binary_logloss: 0.659491
num_leaves, val_score: 0.659491:  25%|##5       | 5/20 [00:03<00:10,  1.41it/s][I 2020-09-27 04:42:34,315] Trial 11 finished with value: 0.6594905167648981 and parameters: {'num_leaves': 13}. Best is trial 11 with value: 0.6594905167648981.
num_leaves, val_score: 0.659491:  25%|##5       | 5/20 [00:03<00:10,  1.41it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000240 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.633624	valid's binary_logloss: 0.661805
[200]	train's binary_logloss: 0.6154	valid's binary_logloss: 0.661626
Early stopping, best iteration is:
[118]	train's binary_logloss: 0.629667	valid's binary_logloss: 0.661053
num_leaves, val_score: 0.659491:  30%|###       | 6/20 [00:03<00:08,  1.61it/s][I 2020-09-27 04:42:34,733] Trial 12 finished with value: 0.6610531437027433 and parameters: {'num_leaves': 9}. Best is trial 11 with value: 0.6594905167648981.
num_leaves, val_score: 0.659491:  30%|###       | 6/20 [00:03<00:08,  1.61it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000443 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.489932	valid's binary_logloss: 0.664356
Early stopping, best iteration is:
[42]	train's binary_logloss: 0.574496	valid's binary_logloss: 0.661322
num_leaves, val_score: 0.659491:  35%|###5      | 7/20 [00:04<00:07,  1.63it/s][I 2020-09-27 04:42:35,325] Trial 13 finished with value: 0.6613221980695407 and parameters: {'num_leaves': 83}. Best is trial 11 with value: 0.6594905167648981.
num_leaves, val_score: 0.659491:  35%|###5      | 7/20 [00:04<00:07,  1.63it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000456 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.668466	valid's binary_logloss: 0.674773
[200]	train's binary_logloss: 0.658603	valid's binary_logloss: 0.667149
[300]	train's binary_logloss: 0.653224	valid's binary_logloss: 0.663743
[400]	train's binary_logloss: 0.649988	valid's binary_logloss: 0.661832
[500]	train's binary_logloss: 0.647929	valid's binary_logloss: 0.661043
[600]	train's binary_logloss: 0.646563	valid's binary_logloss: 0.660633
[700]	train's binary_logloss: 0.645608	valid's binary_logloss: 0.660633
[800]	train's binary_logloss: 0.644889	valid's binary_logloss: 0.660438
[900]	train's binary_logloss: 0.644316	valid's binary_logloss: 0.66034
[1000]	train's binary_logloss: 0.64384	valid's binary_logloss: 0.660316
Did not meet early stopping. Best iteration is:
[1000]	train's binary_logloss: 0.64384	valid's binary_logloss: 0.660316
num_leaves, val_score: 0.659491:  40%|####      | 8/20 [00:05<00:09,  1.28it/s][I 2020-09-27 04:42:36,492] Trial 14 finished with value: 0.6603164467173235 and parameters: {'num_leaves': 2}. Best is trial 11 with value: 0.6594905167648981.
num_leaves, val_score: 0.659491:  40%|####      | 8/20 [00:05<00:09,  1.28it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000395 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.481541	valid's binary_logloss: 0.666981
Early stopping, best iteration is:
[44]	train's binary_logloss: 0.566666	valid's binary_logloss: 0.662735
num_leaves, val_score: 0.659491:  45%|####5     | 9/20 [00:06<00:08,  1.35it/s][I 2020-09-27 04:42:37,143] Trial 15 finished with value: 0.6627351233501201 and parameters: {'num_leaves': 88}. Best is trial 11 with value: 0.6594905167648981.
num_leaves, val_score: 0.659491:  45%|####5     | 9/20 [00:06<00:08,  1.35it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000349 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.3973	valid's binary_logloss: 0.672802
Early stopping, best iteration is:
[29]	train's binary_logloss: 0.560753	valid's binary_logloss: 0.66553
num_leaves, val_score: 0.659491:  50%|#####     | 10/20 [00:07<00:09,  1.08it/s][I 2020-09-27 04:42:38,498] Trial 16 finished with value: 0.6655299653769075 and parameters: {'num_leaves': 152}. Best is trial 11 with value: 0.6594905167648981.
num_leaves, val_score: 0.659491:  50%|#####     | 10/20 [00:07<00:09,  1.08it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000373 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.299677	valid's binary_logloss: 0.679053
Early stopping, best iteration is:
[28]	train's binary_logloss: 0.514613	valid's binary_logloss: 0.667796
num_leaves, val_score: 0.659491:  55%|#####5    | 11/20 [00:08<00:08,  1.03it/s][I 2020-09-27 04:42:39,559] Trial 17 finished with value: 0.6677957520143049 and parameters: {'num_leaves': 256}. Best is trial 11 with value: 0.6594905167648981.
num_leaves, val_score: 0.659491:  55%|#####5    | 11/20 [00:08<00:08,  1.03it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000315 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.5506	valid's binary_logloss: 0.661771
Early stopping, best iteration is:
[46]	train's binary_logloss: 0.602349	valid's binary_logloss: 0.660582
num_leaves, val_score: 0.659491:  60%|######    | 12/20 [00:08<00:06,  1.23it/s][I 2020-09-27 04:42:40,010] Trial 18 finished with value: 0.6605823886498043 and parameters: {'num_leaves': 47}. Best is trial 11 with value: 0.6594905167648981.
num_leaves, val_score: 0.659491:  60%|######    | 12/20 [00:09<00:06,  1.23it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000360 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.573596	valid's binary_logloss: 0.659418
Early stopping, best iteration is:
[95]	train's binary_logloss: 0.576854	valid's binary_logloss: 0.658933
num_leaves, val_score: 0.658933:  65%|######5   | 13/20 [00:09<00:04,  1.40it/s][I 2020-09-27 04:42:40,490] Trial 19 finished with value: 0.6589331658992789 and parameters: {'num_leaves': 35}. Best is trial 19 with value: 0.6589331658992789.
num_leaves, val_score: 0.658933:  65%|######5   | 13/20 [00:09<00:04,  1.40it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000370 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.433641	valid's binary_logloss: 0.669498
Early stopping, best iteration is:
[30]	train's binary_logloss: 0.5737	valid's binary_logloss: 0.666057
num_leaves, val_score: 0.658933:  70%|#######   | 14/20 [00:10<00:04,  1.31it/s][I 2020-09-27 04:42:41,370] Trial 20 finished with value: 0.666056567776029 and parameters: {'num_leaves': 123}. Best is trial 19 with value: 0.6589331658992789.
num_leaves, val_score: 0.658933:  70%|#######   | 14/20 [00:10<00:04,  1.31it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.011203 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.505981 -> initscore=0.023925
[LightGBM] [Info] Start training from score 0.023925
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.578104	valid's binary_logloss: 0.662017
Early stopping, best iteration is:
[72]	train's binary_logloss: 0.596911	valid's binary_logloss: 0.661574
num_leaves, val_score: 0.658933:  75%|#######5  | 15/20 [00:10<00:03,  1.41it/s][I 2020-09-27 04:42:41,958] Trial 21 finished with value: 0.6615737018014829 and parameters: {'num_leaves': 33}. Best is trial 19 with value: 0.6589331658992789.
num_leaves, val_score: 0.658933:  75%|#######5  | 15/20 [00:10<00:03,  1.41it/s][LightGBM] [Info] Number of positive: 13155, number of negative: 12844
num_leaves search (trials 22-26; 16-20 of 20 in this stage). Each trial trains on the same fold split: 25999 rows, 26 features, 13155 positive / 12844 negative.
Trial 22: num_leaves=2   -> valid binary_logloss 0.660316 (no early stop; best iteration 1000)
Trial 23: num_leaves=31  -> 0.660443 (early stop at iteration 81)
Trial 24: num_leaves=66  -> 0.660965 (early stop at iteration 42)
Trial 25: num_leaves=27  -> 0.659530 (early stop at iteration 96)
Trial 26: num_leaves=106 -> 0.664334 (early stop at iteration 45)
num_leaves stage finished. Best is trial 19 with value 0.658933; val_score: 0.658933.
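The stage names in these logs (feature_fraction, num_leaves, bagging, feature_fraction_stage2, regularization_factors, min_data_in_leaf) are the fixed stepwise schedule of Optuna's LightGBM integration, which tunes one group of hyperparameters at a time. As a reference, here is a minimal sketch of how such a run is typically launched with the library versions of the time (optuna 2.x / LightGBM 3.x); the variable names and the simple hold-out split are illustrative assumptions, not the exact code behind this log:

import lightgbm as lgb
import optuna.integration.lightgbm as olgb  # stepwise LightGBM tuner
from sklearn.model_selection import train_test_split

# Illustrative hold-out split (the run above uses KFold, one study per fold).
# select_dtypes keeps the sketch self-contained; the baseline encodes
# categorical columns before this point.
X = train.drop(columns=['y']).select_dtypes(include='number')
y = train['y']
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=71)

dtrain = lgb.Dataset(X_tr, label=y_tr)
dvalid = lgb.Dataset(X_val, label=y_val, reference=dtrain)

params = {'objective': 'binary', 'metric': 'binary_logloss'}

# These arguments match the log's behaviour: progress printed every
# 100 rounds, early stopping after 100 rounds without improvement,
# and at most 1000 boosting rounds per trial.
booster = olgb.train(
    params, dtrain,
    valid_sets=[dtrain, dvalid], valid_names=['train', 'valid'],
    num_boost_round=1000,
    early_stopping_rounds=100,
    verbose_eval=100,
)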
bagging search (trials 27-36, 10 trials):
Trial 27: bagging_fraction=0.652, bagging_freq=6 -> 0.662968 (iteration 82)
Trial 28: bagging_fraction=0.991, bagging_freq=1 -> 0.661013 (iteration 70)
Trial 29: bagging_fraction=0.428, bagging_freq=1 -> 0.665767 (iteration 39)
Trial 30: bagging_fraction=0.965, bagging_freq=7 -> 0.661816 (iteration 84)
Trial 31: bagging_fraction=0.461, bagging_freq=4 -> 0.658928 (iteration 60)  <- best
Trial 32: bagging_fraction=0.409, bagging_freq=4 -> 0.662306 (iteration 36)
Trial 33: bagging_fraction=0.586, bagging_freq=4 -> 0.661311 (iteration 44)
Trial 34: bagging_fraction=0.807, bagging_freq=3 -> 0.661403 (iteration 62)
Trial 35: bagging_fraction=0.523, bagging_freq=5 -> 0.660744 (iteration 57)
Trial 36: bagging_fraction=0.786, bagging_freq=2 -> 0.664242 (iteration 73)
bagging stage finished. Best is trial 31 with value 0.658928; val_score improves to 0.658928.
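One housekeeping note: the "Auto-choosing row-wise multi-threading" warning repeats before every trial because LightGBM re-tests the histogram layout for each new Dataset. As the warning text itself says, pinning the layout in the base params removes the overhead test (an optional tweak; the original run left it at the default):

params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    # Skip the per-trial row-wise/col-wise overhead test, as the
    # warning suggests; use force_col_wise=True instead if memory is tight.
    'force_row_wise': True,
}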
feature_fraction_stage2 search (trials 37-39, 3 trials):
Trial 37: feature_fraction=0.448 -> 0.662845 (iteration 40)
Trial 38: feature_fraction=0.416 -> 0.662397 (iteration 34)  <- best of stage
Trial 39: feature_fraction=0.480 -> 0.662845 (iteration 40)
feature_fraction_stage2 finished. No trial beat the incumbent; val_score stays at 0.658928.
regularization_factors search (trials 40-59, 20 trials; every trial early-stopped at iteration 60):
Trial 40: lambda_l1=2.8e-04, lambda_l2=2.3e-01 -> 0.662873
Trial 41: lambda_l1=1.1e-08, lambda_l2=1.2e-08 -> 0.658905
Trial 42: lambda_l1=1.5e-08, lambda_l2=2.2e-08 -> 0.658927
Trial 43: lambda_l1=1.1e-08, lambda_l2=1.3e-08 -> 0.658927
Trial 44: lambda_l1=1.0e-08, lambda_l2=1.2e-08 -> 0.658949
Trial 45: lambda_l1=1.1e-08, lambda_l2=1.8e-08 -> 0.658904
Trial 46: lambda_l1=1.1e-08, lambda_l2=1.5e-08 -> 0.658928
Trial 47: lambda_l1=1.3e-08, lambda_l2=2.2e-08 -> 0.658904
Trial 48: lambda_l1=2.5e-08, lambda_l2=2.0e-08 -> 0.658927
Trial 49: lambda_l1=1.3e-08, lambda_l2=2.3e-08 -> 0.658925
Trial 50: lambda_l1=1.4e-08, lambda_l2=2.6e-07 -> 0.658927
Trial 51: lambda_l1=1.7e-08, lambda_l2=4.3e-08 -> 0.658927
Trial 52: lambda_l1=1.4e-08, lambda_l2=3.7e-07 -> 0.658950
Trial 53: lambda_l1=1.8e-07, lambda_l2=9.8e-07 -> 0.658949
Trial 54: lambda_l1=8.5e-07, lambda_l2=1.2e-08 -> 0.658904  <- best (0.65890366)
Trial 55: lambda_l1=2.1e-06, lambda_l2=1.3e-08 -> 0.658927
Trial 56: lambda_l1=7.4e-07, lambda_l2=1.1e-08 -> 0.658949
Trial 57: lambda_l1=3.0e-07, lambda_l2=1.1e-05 -> 0.658949
Trial 58: lambda_l1=1.2e-07, lambda_l2=1.4e-07 -> 0.658927
Trial 59: lambda_l1=1.4e-05, lambda_l2=1.1e-08 -> 0.658949
regularization_factors finished. Best is trial 54; val_score: 0.658904.
min_data_in_leaf search (trials 60-64, 5 trials):
Trial 60: min_child_samples=5   -> 0.663449 (iteration 60)
Trial 61: min_child_samples=25  -> 0.661178 (iteration 60)  <- best of stage
Trial 62: min_child_samples=100 -> 0.662389 (iteration 66)
Trial 63: min_child_samples=10  -> 0.663345 (iteration 60)
Trial 64: min_child_samples=50  -> 0.663074 (iteration 59)
min_data_in_leaf finished. No trial beat the incumbent; tuning for this fold ends at val_score 0.658904.
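With min_data_in_leaf done, this fold's tuning settles at val_score 0.658904. If you want the winning configuration back out of the tuner rather than just the trained booster, the underlying LightGBMTuner object exposes it; a sketch under the same assumptions as the earlier snippet (dtrain, dvalid, params defined there):

# Same stepwise search as olgb.train(), but keeping the tuner object around.
tuner = olgb.LightGBMTuner(
    params, dtrain,
    valid_sets=[dtrain, dvalid], valid_names=['train', 'valid'],
    num_boost_round=1000, early_stopping_rounds=100, verbose_eval=100,
)
tuner.run()
print(tuner.best_score)   # best valid binary_logloss, e.g. 0.658904 for this fold
print(tuner.best_params)  # tuned feature_fraction, num_leaves, lambda_l1/l2, ...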
Fold : 5
[I 2020-09-27 04:43:06,989] A new study created in memory with name: no-name-b151d15a-5a47-42dc-ace0-4fe96bb3c92c
feature_fraction search (trials 0-6; the tuner tries 0.4 through 1.0 in steps of 0.1). This fold's split: 25999 rows, 26 features, 12813 positive / 13186 negative.
Trial 0: feature_fraction=0.6 -> valid binary_logloss 0.652612 (iteration 60)
Trial 1: feature_fraction=1.0 -> 0.651258 (iteration 114)
Trial 2: feature_fraction=0.4 -> 0.652673 (iteration 83)
Trial 3: feature_fraction=0.8 -> 0.652985 (iteration 117)
Trial 4: feature_fraction=0.7 -> 0.651848 (iteration 78)
Trial 5: feature_fraction=0.9 -> 0.651017 (iteration 76)  <- best
Trial 6: feature_fraction=0.5 -> 0.651283 (iteration 137)
feature_fraction stage finished. Best is trial 5 with value 0.651017; val_score: 0.651017.
num_leaves search (trials 7-10; 1-4 of 20 in this stage):
Trial 7:  num_leaves=55  -> 0.655742 (iteration 48)
Trial 8:  num_leaves=164 -> 0.653400 (iteration 40)
Trial 9:  num_leaves=5   -> 0.651220 (iteration 292)
Trial 10: num_leaves=215 -> 0.658243 (iteration 27)
None of these beat the incumbent; val_score: 0.651017.
num_leaves, val_score: 0.651017:  20%|##        | 4/20 [00:04<00:20,  1.30s/it][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000907 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.669067	valid's binary_logloss: 0.671099
[200]	train's binary_logloss: 0.659436	valid's binary_logloss: 0.661559
[300]	train's binary_logloss: 0.654191	valid's binary_logloss: 0.656836
[400]	train's binary_logloss: 0.651007	valid's binary_logloss: 0.65419
[500]	train's binary_logloss: 0.64896	valid's binary_logloss: 0.652841
[600]	train's binary_logloss: 0.647587	valid's binary_logloss: 0.652048
[700]	train's binary_logloss: 0.646612	valid's binary_logloss: 0.651613
[800]	train's binary_logloss: 0.645876	valid's binary_logloss: 0.651451
[900]	train's binary_logloss: 0.645296	valid's binary_logloss: 0.651421
Early stopping, best iteration is:
[887]	train's binary_logloss: 0.645365	valid's binary_logloss: 0.651313
num_leaves, val_score: 0.651017:  25%|##5       | 5/20 [00:06<00:19,  1.29s/it][I 2020-09-27 04:43:17,415] Trial 11 finished with value: 0.6513127631312726 and parameters: {'num_leaves': 2}. Best is trial 9 with value: 0.6512203104712043.
num_leaves, val_score: 0.651017:  25%|##5       | 5/20 [00:06<00:19,  1.29s/it][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001145 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.487358	valid's binary_logloss: 0.659662
Early stopping, best iteration is:
[35]	train's binary_logloss: 0.581991	valid's binary_logloss: 0.657768
num_leaves, val_score: 0.651017:  30%|###       | 6/20 [00:06<00:15,  1.10s/it][I 2020-09-27 04:43:18,063] Trial 12 finished with value: 0.6577682570812166 and parameters: {'num_leaves': 73}. Best is trial 9 with value: 0.6512203104712043.
num_leaves, val_score: 0.651017:  30%|###       | 6/20 [00:06<00:15,  1.10s/it][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001036 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.607358	valid's binary_logloss: 0.650129
[200]	train's binary_logloss: 0.572939	valid's binary_logloss: 0.650833
Early stopping, best iteration is:
[137]	train's binary_logloss: 0.593957	valid's binary_logloss: 0.649769
num_leaves, val_score: 0.649769:  35%|###5      | 7/20 [00:07<00:12,  1.07it/s][I 2020-09-27 04:43:18,630] Trial 13 finished with value: 0.649768864866138 and parameters: {'num_leaves': 17}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  35%|###5      | 7/20 [00:07<00:12,  1.07it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001092 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.41967	valid's binary_logloss: 0.663995
Early stopping, best iteration is:
[33]	train's binary_logloss: 0.552391	valid's binary_logloss: 0.657367
num_leaves, val_score: 0.649769:  40%|####      | 8/20 [00:08<00:12,  1.07s/it][I 2020-09-27 04:43:20,007] Trial 14 finished with value: 0.6573672765256382 and parameters: {'num_leaves': 114}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  40%|####      | 8/20 [00:08<00:12,  1.07s/it][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000964 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.262043	valid's binary_logloss: 0.668022
Early stopping, best iteration is:
[31]	train's binary_logloss: 0.470728	valid's binary_logloss: 0.656929
num_leaves, val_score: 0.649769:  45%|####5     | 9/20 [00:10<00:13,  1.26s/it][I 2020-09-27 04:43:21,707] Trial 15 finished with value: 0.6569287641382702 and parameters: {'num_leaves': 256}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  45%|####5     | 9/20 [00:10<00:13,  1.26s/it][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000921 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.534752	valid's binary_logloss: 0.65496
Early stopping, best iteration is:
[63]	train's binary_logloss: 0.569855	valid's binary_logloss: 0.652951
num_leaves, val_score: 0.649769:  50%|#####     | 10/20 [00:11<00:10,  1.07s/it][I 2020-09-27 04:43:22,329] Trial 16 finished with value: 0.6529514815127009 and parameters: {'num_leaves': 48}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  50%|#####     | 10/20 [00:11<00:10,  1.07s/it][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004780 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.382444	valid's binary_logloss: 0.663277
Early stopping, best iteration is:
[36]	train's binary_logloss: 0.523542	valid's binary_logloss: 0.654707
num_leaves, val_score: 0.649769:  55%|#####5    | 11/20 [00:12<00:10,  1.12s/it][I 2020-09-27 04:43:23,589] Trial 17 finished with value: 0.6547072669652428 and parameters: {'num_leaves': 140}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  55%|#####5    | 11/20 [00:12<00:10,  1.12s/it][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003413 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.448357	valid's binary_logloss: 0.661818
Early stopping, best iteration is:
[46]	train's binary_logloss: 0.538243	valid's binary_logloss: 0.657096
num_leaves, val_score: 0.649769:  60%|######    | 12/20 [00:13<00:08,  1.08s/it][I 2020-09-27 04:43:24,567] Trial 18 finished with value: 0.6570959414584372 and parameters: {'num_leaves': 95}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  60%|######    | 12/20 [00:13<00:08,  1.08s/it][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.010468 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.589787	valid's binary_logloss: 0.652218
Early stopping, best iteration is:
[97]	train's binary_logloss: 0.591353	valid's binary_logloss: 0.652064
num_leaves, val_score: 0.649769:  65%|######5   | 13/20 [00:13<00:06,  1.09it/s][I 2020-09-27 04:43:25,099] Trial 19 finished with value: 0.6520641906266819 and parameters: {'num_leaves': 24}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  65%|######5   | 13/20 [00:13<00:06,  1.09it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000894 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.332931	valid's binary_logloss: 0.669943
Early stopping, best iteration is:
[24]	train's binary_logloss: 0.541056	valid's binary_logloss: 0.658153
num_leaves, val_score: 0.649769:  70%|#######   | 14/20 [00:14<00:05,  1.02it/s][I 2020-09-27 04:43:26,213] Trial 20 finished with value: 0.6581527972889681 and parameters: {'num_leaves': 181}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  70%|#######   | 14/20 [00:14<00:05,  1.02it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.010464 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.645865	valid's binary_logloss: 0.655865
[200]	train's binary_logloss: 0.633609	valid's binary_logloss: 0.652415
[300]	train's binary_logloss: 0.624725	valid's binary_logloss: 0.651323
Early stopping, best iteration is:
[292]	train's binary_logloss: 0.625397	valid's binary_logloss: 0.65122
num_leaves, val_score: 0.649769:  75%|#######5  | 15/20 [00:15<00:04,  1.13it/s][I 2020-09-27 04:43:26,886] Trial 21 finished with value: 0.6512203104712043 and parameters: {'num_leaves': 5}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  75%|#######5  | 15/20 [00:15<00:04,  1.13it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000872 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.579189	valid's binary_logloss: 0.654451
[200]	train's binary_logloss: 0.527779	valid's binary_logloss: 0.653698
Early stopping, best iteration is:
[151]	train's binary_logloss: 0.551963	valid's binary_logloss: 0.653462
num_leaves, val_score: 0.649769:  80%|########  | 16/20 [00:16<00:04,  1.00s/it][I 2020-09-27 04:43:28,167] Trial 22 finished with value: 0.6534616704278683 and parameters: {'num_leaves': 28}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  80%|########  | 16/20 [00:16<00:04,  1.00s/it][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.011614 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.472661	valid's binary_logloss: 0.658505
Early stopping, best iteration is:
[28]	train's binary_logloss: 0.589272	valid's binary_logloss: 0.658288
num_leaves, val_score: 0.649769:  85%|########5 | 17/20 [00:17<00:02,  1.13it/s][I 2020-09-27 04:43:28,784] Trial 23 finished with value: 0.658288148145066 and parameters: {'num_leaves': 81}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  85%|########5 | 17/20 [00:17<00:02,  1.13it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000925 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.645865	valid's binary_logloss: 0.655865
[200]	train's binary_logloss: 0.633609	valid's binary_logloss: 0.652415
[300]	train's binary_logloss: 0.624725	valid's binary_logloss: 0.651323
Early stopping, best iteration is:
[292]	train's binary_logloss: 0.625397	valid's binary_logloss: 0.65122
num_leaves, val_score: 0.649769:  90%|######### | 18/20 [00:18<00:01,  1.23it/s][I 2020-09-27 04:43:29,433] Trial 24 finished with value: 0.6512203104712043 and parameters: {'num_leaves': 5}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  90%|######### | 18/20 [00:18<00:01,  1.23it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004987 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.565987	valid's binary_logloss: 0.655697
Early stopping, best iteration is:
[59]	train's binary_logloss: 0.596142	valid's binary_logloss: 0.65418
num_leaves, val_score: 0.649769:  95%|#########5| 19/20 [00:18<00:00,  1.34it/s][I 2020-09-27 04:43:30,009] Trial 25 finished with value: 0.6541801455639907 and parameters: {'num_leaves': 34}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769:  95%|#########5| 19/20 [00:18<00:00,  1.34it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.010666 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.512496	valid's binary_logloss: 0.657066
Early stopping, best iteration is:
[78]	train's binary_logloss: 0.536548	valid's binary_logloss: 0.654935
num_leaves, val_score: 0.649769: 100%|##########| 20/20 [00:19<00:00,  1.38it/s][I 2020-09-27 04:43:30,691] Trial 26 finished with value: 0.6549352289587036 and parameters: {'num_leaves': 59}. Best is trial 13 with value: 0.649768864866138.
num_leaves, val_score: 0.649769: 100%|##########| 20/20 [00:19<00:00,  1.03it/s]
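The num_leaves sweep shows the usual complexity trade-off: with the large trees (num_leaves of 100 or more) the train logloss falls below 0.3 by iteration 100 while the validation logloss worsens, and the winner is a small tree (num_leaves=17). To pin the values found so far manually, a sketch along these lines would do; `params`, `dtrain`, `dvalid` are carried over from the sketch above, and the numbers come from the best trials of each stage:

# Fix the best values found so far and retrain (sketch).
params.update({
    'feature_fraction': 0.9,  # best of the feature_fraction stage (trial 5)
    'num_leaves': 17,         # best of this stage (trial 13)
})
booster = lgb.train(params, dtrain, valid_sets=[dvalid],
                    num_boost_round=1000,
                    early_stopping_rounds=100, verbose_eval=100)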
bagging, val_score 0.649769: 10/10 trials done
  Trial 27: bagging_fraction=0.403, bagging_freq=7 -> 0.655144
  Trial 28: bagging_fraction=0.956, bagging_freq=1 -> 0.652081
  Trial 29: bagging_fraction=0.547, bagging_freq=4 -> 0.651804
  Trial 30: bagging_fraction=0.971, bagging_freq=7 -> 0.650679
  Trial 31: bagging_fraction=0.999, bagging_freq=7 -> 0.650382  (stage best)
  Trial 32: bagging_fraction=0.994, bagging_freq=7 -> 0.651677
  Trial 33: bagging_fraction=0.847, bagging_freq=6 -> 0.650520
  Trial 34: bagging_fraction=0.846, bagging_freq=7 -> 0.653287
  Trial 35: bagging_fraction=0.831, bagging_freq=5 -> 0.652367
  Trial 36: bagging_fraction=0.999, bagging_freq=6 -> 0.651848
  No trial beat the running val_score 0.649769, so bagging is effectively left off.
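bagging_fraction subsamples the training rows, re-drawing them every bagging_freq boosting iterations. Here even the best setting (trial 31, bagging_fraction close to 1.0, i.e. almost no subsampling) does not improve on the running val_score, which suggests row subsampling adds little on this data. For completeness, the equivalent manual setting would be a sketch like this, continuing the same assumed `params` dict:

# Row subsampling as in trial 31 (sketch): use ~99.9% of the rows,
# re-sampled every 7 boosting iterations.
params.update({
    'bagging_fraction': 0.9985,
    'bagging_freq': 7,
})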
feature_fraction_stage2, val_score 0.649769: 6/6 trials done
  Trial 37: feature_fraction=0.980 -> 0.650662
  Trial 38: feature_fraction=0.852 -> 0.650991
  Trial 39: feature_fraction=0.948 -> 0.650662
  Trial 40: feature_fraction=0.820 -> 0.651188
  Trial 41: feature_fraction=0.916 -> 0.651968
  Trial 42: feature_fraction=0.884 -> 0.649769  (stage best, matching the running val_score)
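Incidentally, every trial re-prints the same "Auto-choosing row-wise/col-wise multi-threading" warning. As the log itself suggests, fixing the strategy once removes the repeated overhead test and quiets the output (a sketch on the same assumed `params` dict):

# Silence the per-trial multi-threading auto-detection seen in the log.
params['force_row_wise'] = True
# or, as the warning notes, force_col_wise=True instead when memory is tight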
regularization_factors, val_score 0.648670: 15/20 trials
  Trial 43: lambda_l1=1.68e-01, lambda_l2=6.91e-07 -> 0.649364  (new best)
  Trial 44: lambda_l1=3.13e-01, lambda_l2=3.35e-07 -> 0.651426
  Trial 45: lambda_l1=7.77e-06, lambda_l2=6.01e-03 -> 0.648670  (new best)
  Trial 46: lambda_l1=4.06e-07, lambda_l2=1.69e-01 -> 0.649709
  Trial 47: lambda_l1=1.91e-07, lambda_l2=2.20e-01 -> 0.648913
  Trial 48: lambda_l1=4.23e-07, lambda_l2=3.40e-01 -> 0.648762
  Trial 49: lambda_l1=1.88e-07, lambda_l2=3.30e-01 -> 0.650489
  Trial 50: lambda_l1=4.67e-07, lambda_l2=2.00e-01 -> 0.650108
  Trial 51: lambda_l1=7.52e-07, lambda_l2=8.99e-02 -> 0.650518
  Trial 52: lambda_l1=7.31e-07, lambda_l2=1.87e-02 -> 0.651003
  Trial 53: lambda_l1=2.29e-04, lambda_l2=8.69e-05 -> 0.649769
  Trial 54: lambda_l1=1.43e-03, lambda_l2=5.33e-06 -> 0.650697
  Trial 55: lambda_l1=1.16e-04, lambda_l2=5.90e-04 -> 0.649769
  Trial 56: lambda_l1=3.27e-05, lambda_l2=3.07e-03 -> 0.649674
  Trial 57: lambda_l1=5.62e-06, lambda_l2=7.37e+00 -> 0.650515
regularization_factors, val_score: 0.648670:  75%|#######5  | 15/20 [00:11<00:03,  1.31it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007481 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.60768	valid's binary_logloss: 0.651246
[200]	train's binary_logloss: 0.573394	valid's binary_logloss: 0.653045
Early stopping, best iteration is:
[127]	train's binary_logloss: 0.597664	valid's binary_logloss: 0.650697
regularization_factors, val_score: 0.648670:  80%|########  | 16/20 [00:12<00:02,  1.38it/s][I 2020-09-27 04:43:54,076] Trial 58 finished with value: 0.6506971138019576 and parameters: {'lambda_l1': 2.3099984324631693e-08, 'lambda_l2': 0.0023773905893283293}. Best is trial 45 with value: 0.6486701923102753.
regularization_factors, val_score: 0.648670:  80%|########  | 16/20 [00:12<00:02,  1.38it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001747 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.61046	valid's binary_logloss: 0.65107
[200]	train's binary_logloss: 0.581203	valid's binary_logloss: 0.651453
Early stopping, best iteration is:
[171]	train's binary_logloss: 0.588919	valid's binary_logloss: 0.650336
regularization_factors, val_score: 0.648670:  85%|########5 | 17/20 [00:13<00:02,  1.15it/s][I 2020-09-27 04:43:55,278] Trial 59 finished with value: 0.6503360249834589 and parameters: {'lambda_l1': 1.2365048737402212e-05, 'lambda_l2': 3.1664647322579405}. Best is trial 45 with value: 0.6486701923102753.
regularization_factors, val_score: 0.648670:  85%|########5 | 17/20 [00:13<00:02,  1.15it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000874 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.607694	valid's binary_logloss: 0.6515
[200]	train's binary_logloss: 0.573475	valid's binary_logloss: 0.651154
Early stopping, best iteration is:
[133]	train's binary_logloss: 0.595863	valid's binary_logloss: 0.650792
regularization_factors, val_score: 0.648670:  90%|######### | 18/20 [00:13<00:01,  1.27it/s][I 2020-09-27 04:43:55,883] Trial 60 finished with value: 0.6507917448366202 and parameters: {'lambda_l1': 1.4432590771728913e-08, 'lambda_l2': 0.01029217837602668}. Best is trial 45 with value: 0.6486701923102753.
regularization_factors, val_score: 0.648670:  90%|######### | 18/20 [00:13<00:01,  1.27it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000902 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.607359	valid's binary_logloss: 0.650129
[200]	train's binary_logloss: 0.572942	valid's binary_logloss: 0.650864
Early stopping, best iteration is:
[137]	train's binary_logloss: 0.593958	valid's binary_logloss: 0.649769
regularization_factors, val_score: 0.648670:  95%|#########5| 19/20 [00:14<00:00,  1.34it/s][I 2020-09-27 04:43:56,523] Trial 61 finished with value: 0.6497687138706673 and parameters: {'lambda_l1': 0.00010150982895717758, 'lambda_l2': 0.0005002352741242428}. Best is trial 45 with value: 0.6486701923102753.
regularization_factors, val_score: 0.648670:  95%|#########5| 19/20 [00:14<00:00,  1.34it/s][LightGBM] [Info] Number of positive: 12813, number of negative: 13186
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000964 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4239
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.492827 -> initscore=-0.028695
[LightGBM] [Info] Start training from score -0.028695
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.607358	valid's binary_logloss: 0.650129
[200]	train's binary_logloss: 0.57294	valid's binary_logloss: 0.650849
Early stopping, best iteration is:
[137]	train's binary_logloss: 0.593957	valid's binary_logloss: 0.649769
regularization_factors, val_score: 0.648670: 100%|##########| 20/20 [00:15<00:00,  1.42it/s][I 2020-09-27 04:43:57,130] Trial 62 finished with value: 0.6497688040658035 and parameters: {'lambda_l1': 1.4113098720716957e-05, 'lambda_l2': 0.00022029341576689507}. Best is trial 45 with value: 0.6486701923102753.
regularization_factors, val_score: 0.648670: 100%|##########| 20/20 [00:15<00:00,  1.32it/s]
min_data_in_leaf, val_score: 0.648670: 100%|##########| 5/5 [00:03<00:00,  1.31it/s]
[training log condensed: min_data_in_leaf stage, trials 63-67 with min_child_samples in {5, 10, 25, 50, 100}; best was trial 65 (min_child_samples=50, value 0.6489009066838253), which did not beat this fold's best of 0.648670]
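For reference, the stage names in the log (feature_fraction, num_leaves, bagging, regularization_factors, min_data_in_leaf) are the search steps of Optuna's LightGBM integration, which tunes one hyperparameter group at a time. Below is a minimal sketch of a call that produces this kind of log; it is not the author's exact script. `X` and `y` stand for the preprocessed features and target, the split is illustrative, and the `early_stopping_rounds`/`verbose_eval` keywords match the 2020-era LightGBM API (newer LightGBM moved these into callbacks).

import lightgbm as lgb
import optuna.integration.lightgbm as lgb_tuner
from sklearn.model_selection import train_test_split

# Illustrative setup: X and y are assumed to be the preprocessed
# feature matrix and target; the actual fold construction is not
# visible in this log.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=71)
dtrain = lgb.Dataset(X_tr, label=y_tr)
dvalid = lgb.Dataset(X_val, label=y_val, reference=dtrain)

params = {"objective": "binary", "metric": "binary_logloss", "verbosity": -1}

# The tuner sweeps feature_fraction -> num_leaves -> bagging ->
# feature_fraction (stage 2) -> regularization_factors -> min_data_in_leaf,
# printing one progress bar per stage, as seen in the log above.
booster = lgb_tuner.train(
    params,
    dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=["train", "valid"],
    num_boost_round=1000,       # the log shows training capped at [1000]
    early_stopping_rounds=100,  # "don't improve for 100 rounds"
    verbose_eval=100,           # evaluation printed every 100 iterations
)
print(booster.params)  # the tuned hyperparameters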
Fold : 6
[I 2020-09-27 04:44:01,016] A new study created in memory with name: no-name-d7eb95f7-845c-40a3-9583-af27505411ce
feature_fraction, val_score: 0.654569: 100%|##########| 7/7 [00:05<00:00,  1.19it/s]
[training log condensed: feature_fraction stage, trials 0-6 over {0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}; best was trial 3 (feature_fraction=0.4, value 0.654569007042944)]
num_leaves, val_score: 0.650075: 100%|##########| 20/20 [00:16<00:00,  1.21it/s]
[training log condensed: num_leaves stage, trials 7-26 over values from 2 to 254; small trees won, with trial 22 (num_leaves=2, value 0.6500754523975805) the best]
bagging, val_score: 0.649721:  70%|#######   | 7/10 [00:09<00:04,  1.41s/it]
[training log condensed: bagging stage, trials 27-33; best so far is trial 29 (bagging_fraction=0.6147135266937416, bagging_freq=6, value 0.6497207980729593); the log is truncated mid-stage]
[200]	train's binary_logloss: 0.658729	valid's binary_logloss: 0.66187
[300]	train's binary_logloss: 0.653368	valid's binary_logloss: 0.656513
[400]	train's binary_logloss: 0.650201	valid's binary_logloss: 0.654227
[500]	train's binary_logloss: 0.648173	valid's binary_logloss: 0.652343
[600]	train's binary_logloss: 0.646868	valid's binary_logloss: 0.651436
[700]	train's binary_logloss: 0.645933	valid's binary_logloss: 0.650838
[800]	train's binary_logloss: 0.645188	valid's binary_logloss: 0.650137
[900]	train's binary_logloss: 0.644607	valid's binary_logloss: 0.64995
[1000]	train's binary_logloss: 0.644106	valid's binary_logloss: 0.649864
Did not meet early stopping. Best iteration is:
[1000]	train's binary_logloss: 0.644106	valid's binary_logloss: 0.649864
bagging, val_score: 0.649721:  80%|########  | 8/10 [00:11<00:03,  1.51s/it][I 2020-09-27 04:44:35,075] Trial 34 finished with value: 0.6498641722707447 and parameters: {'bagging_fraction': 0.9222411833757059, 'bagging_freq': 6}. Best is trial 29 with value: 0.6497207980729593.
bagging, val_score: 0.649721:  80%|########  | 8/10 [00:11<00:03,  1.51s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000737 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.668902	valid's binary_logloss: 0.671648
[200]	train's binary_logloss: 0.65921	valid's binary_logloss: 0.661972
[300]	train's binary_logloss: 0.653923	valid's binary_logloss: 0.65701
[400]	train's binary_logloss: 0.650779	valid's binary_logloss: 0.654505
[500]	train's binary_logloss: 0.648744	valid's binary_logloss: 0.652567
[600]	train's binary_logloss: 0.647377	valid's binary_logloss: 0.651548
[700]	train's binary_logloss: 0.646422	valid's binary_logloss: 0.651081
[800]	train's binary_logloss: 0.645708	valid's binary_logloss: 0.650642
[900]	train's binary_logloss: 0.64514	valid's binary_logloss: 0.650392
[1000]	train's binary_logloss: 0.644665	valid's binary_logloss: 0.650089
Did not meet early stopping. Best iteration is:
[1000]	train's binary_logloss: 0.644665	valid's binary_logloss: 0.650089
bagging, val_score: 0.649721:  90%|######### | 9/10 [00:13<00:01,  1.50s/it][I 2020-09-27 04:44:36,546] Trial 35 finished with value: 0.6500890749178282 and parameters: {'bagging_fraction': 0.9922180140725776, 'bagging_freq': 2}. Best is trial 29 with value: 0.6497207980729593.
bagging, val_score: 0.649721:  90%|######### | 9/10 [00:13<00:01,  1.50s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000379 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.66868	valid's binary_logloss: 0.671198
[200]	train's binary_logloss: 0.658732	valid's binary_logloss: 0.661411
[300]	train's binary_logloss: 0.653393	valid's binary_logloss: 0.656638
[400]	train's binary_logloss: 0.650183	valid's binary_logloss: 0.653991
[500]	train's binary_logloss: 0.648188	valid's binary_logloss: 0.652694
[600]	train's binary_logloss: 0.646874	valid's binary_logloss: 0.651422
[700]	train's binary_logloss: 0.645949	valid's binary_logloss: 0.650978
[800]	train's binary_logloss: 0.645235	valid's binary_logloss: 0.650342
[900]	train's binary_logloss: 0.644629	valid's binary_logloss: 0.649845
[1000]	train's binary_logloss: 0.644118	valid's binary_logloss: 0.649971
Did not meet early stopping. Best iteration is:
[1000]	train's binary_logloss: 0.644118	valid's binary_logloss: 0.649971
bagging, val_score: 0.649721: 100%|##########| 10/10 [00:15<00:00,  1.61s/it][I 2020-09-27 04:44:38,424] Trial 36 finished with value: 0.6499714458134913 and parameters: {'bagging_fraction': 0.9261093121309201, 'bagging_freq': 7}. Best is trial 29 with value: 0.6497207980729593.
bagging, val_score: 0.649721: 100%|##########| 10/10 [00:15<00:00,  1.50s/it]
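The stage names in these progress bars (feature_fraction, bagging, feature_fraction_stage2, regularization_factors, min_data_in_leaf) match the stepwise search order of Optuna's LightGBM tuner, so the logs above appear to come from its drop-in train() wrapper rather than hand-written Optuna objectives. Below is a minimal sketch of such a run under that assumption; the toy X/y data and the dtrain/dvalid split are illustrative stand-ins, not the baseline's actual code, and the kwargs follow the 2020-era optuna.integration.lightgbm API.

# Minimal sketch of an Optuna stepwise LightGBM tuning run (assumption:
# optuna.integration.lightgbm produced the logs above; the data is a toy stand-in).
import numpy as np
import lightgbm as lgb
import optuna.integration.lightgbm as lgb_o
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 26))    # stand-in for the 26 engineered features
y = rng.integers(0, 2, size=1000)  # stand-in for the win/lose target

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=71)
dtrain = lgb.Dataset(X_tr, label=y_tr)
dvalid = lgb.Dataset(X_val, label=y_val, reference=dtrain)

params = {'objective': 'binary', 'metric': 'binary_logloss'}

# Tunes feature_fraction -> num_leaves -> bagging -> feature_fraction_stage2
# -> regularization_factors -> min_data_in_leaf, carrying the best val_score forward.
booster = lgb_o.train(
    params, dtrain,
    valid_sets=[dtrain, dvalid], valid_names=['train', 'valid'],
    num_boost_round=1000,        # matches the "[1000]" cap in the logs
    early_stopping_rounds=100,   # matches "don't improve for 100 rounds"
    verbose_eval=100,            # matches the per-100-round logloss lines
)
print(booster.params)            # tuned parameters for this fold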
feature_fraction_stage2, val_score: 0.649721:   0%|          | 0/3 [00:00<?, ?it/s]
(Per-trial training logs condensed; all three trials early-stopped around round 811 at binary_logloss ≈ 0.650.)
[I 2020-09-27 04:44:39,673] Trial 37 finished with value: 0.6503821331647123 and parameters: {'feature_fraction': 0.48000000000000004}. Best is trial 37 with value: 0.6503821331647123.
[I 2020-09-27 04:44:41,097] Trial 38 finished with value: 0.6502911144681769 and parameters: {'feature_fraction': 0.41600000000000004}. Best is trial 38 with value: 0.6502911144681769.
[I 2020-09-27 04:44:42,520] Trial 39 finished with value: 0.6503821331647123 and parameters: {'feature_fraction': 0.44800000000000006}. Best is trial 38 with value: 0.6502911144681769.
feature_fraction_stage2, val_score: 0.649721: 100%|##########| 3/3 [00:04<00:00,  1.36s/it]
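feature_fraction_stage2 re-searches feature_fraction on a finer grid around the earlier stage's winner; here the three candidates (0.416, 0.448, 0.480, spaced 0.032 apart) all scored about 0.6503, and none beat the running best of 0.649721, so val_score is unchanged and the stage result is discarded.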
regularization_factors, val_score: 0.649721:   0%|          | 0/20 [00:00<?, ?it/s]
(Per-trial training logs condensed. Trials 40-47 and 51-56, all with near-zero lambda_l1/lambda_l2, reproduced the incumbent binary_logloss of ≈0.6497208; trials 49, 50, 57, and 59, with stronger penalties, scored between 0.649774 and 0.649835.)
[I 2020-09-27 04:44:55,828] Trial 48 finished with value: 0.6496273811877121 and parameters: {'lambda_l1': 0.0008017384871250849, 'lambda_l2': 2.2446245169734426e-07}. Best is trial 48 with value: 0.6496273811877121.
[I 2020-09-27 04:45:10,790] Trial 58 finished with value: 0.6496276632112322 and parameters: {'lambda_l1': 0.0038892159453483817, 'lambda_l2': 3.162000398193428e-08}. Best is trial 48 with value: 0.6496273811877121.
regularization_factors, val_score: 0.649627: 100%|##########| 20/20 [00:30<00:00,  1.51s/it]
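This stage samples lambda_l1 and lambda_l2 across many orders of magnitude. Almost every trial with near-zero penalties reproduced the incumbent 0.649721, and only trial 48's small L1 term improved it, to 0.649627, so that pair is kept. Illustratively, the kept values would enter the parameter dict as below (a hypothetical continuation of the sketch above, not the baseline's code):

# Hypothetical: fold the stage-winning penalties (trial 48) into the params.
params.update({
    'lambda_l1': 0.0008017384871250849,   # L1 leaf-weight penalty from trial 48
    'lambda_l2': 2.2446245169734426e-07,  # L2 penalty, effectively zero here
})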
min_data_in_leaf, val_score: 0.649627:   0%|          | 0/5 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000494 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.667564	valid's binary_logloss: 0.669844
[200]	train's binary_logloss: 0.657207	valid's binary_logloss: 0.660602
[300]	train's binary_logloss: 0.652009	valid's binary_logloss: 0.655779
[400]	train's binary_logloss: 0.649064	valid's binary_logloss: 0.653022
[500]	train's binary_logloss: 0.647474	valid's binary_logloss: 0.651157
[600]	train's binary_logloss: 0.646465	valid's binary_logloss: 0.650811
[700]	train's binary_logloss: 0.645712	valid's binary_logloss: 0.650417
[800]	train's binary_logloss: 0.645041	valid's binary_logloss: 0.650168
[900]	train's binary_logloss: 0.644428	valid's binary_logloss: 0.650337
Early stopping, best iteration is:
[811]	train's binary_logloss: 0.644942	valid's binary_logloss: 0.65
min_data_in_leaf, val_score: 0.649627:  20%|##        | 1/5 [00:01<00:05,  1.26s/it][I 2020-09-27 04:45:13,958] Trial 60 finished with value: 0.6500004379994676 and parameters: {'min_child_samples': 100}. Best is trial 60 with value: 0.6500004379994676.
min_data_in_leaf, val_score: 0.649627:  20%|##        | 1/5 [00:01<00:05,  1.26s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000400 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
min_data_in_leaf stage, trials 60-64 (5 trials). Each trial trained on 25999 rows and 26 used features (4241 bins; 12849 positives / 13150 negatives), up to 1000 boosting rounds with early stopping after 100 rounds without validation improvement. Trial 60 finished earlier in the log with valid binary_logloss 0.650000.

- Trial 61: min_child_samples=50 -> valid binary_logloss 0.650197 (best iteration 811)
- Trial 62: min_child_samples=5 -> 0.649909 (best iteration 811)
- Trial 63: min_child_samples=10 -> 0.649764 (best iteration 811)
- Trial 64: min_child_samples=25 -> 0.649925 (did not meet early stopping; best iteration 1000)

Stage best: Trial 63 (0.649764); the fold's running val_score of 0.649627 was not improved.
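The stage names and trial counts in this log (feature_fraction: 7, num_leaves: 20, bagging: 10, feature_fraction_stage2: 6, min_data_in_leaf: 5) match Optuna's stepwise LightGBM tuner, so the loop behind it presumably looks something like the sketch below. This is a minimal sketch under that assumption, not the exact code used; `X_train`/`y_train` are hypothetical stand-ins for the feature matrix and target built earlier.

```python
# Minimal sketch, assuming optuna.integration.lightgbm produced these logs.
# X_train / y_train are hypothetical names for the prepared features/target.
import optuna.integration.lightgbm as opt_lgb
from sklearn.model_selection import train_test_split

X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, random_state=71)
dtrain = opt_lgb.Dataset(X_tr, y_tr)
dvalid = opt_lgb.Dataset(X_val, y_val, reference=dtrain)

tuner = opt_lgb.LightGBMTuner(
    {"objective": "binary", "metric": "binary_logloss"},
    dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=["train", "valid"],
    num_boost_round=1000,       # matches the 1000-round cap in the log
    early_stopping_rounds=100,  # "don't improve for 100 rounds"
)
tuner.run()
print(tuner.best_params)  # merged winners of all tuning stages
print(tuner.best_score)   # best valid binary_logloss (the "val_score" above)
```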
Fold : 7
[I 2020-09-27 04:45:19,501] A new study created in memory.

feature_fraction stage, trials 0-6 (7 trials). Each trial trained on 25999 rows and 26 used features (4238 bins; 12849 positives / 13150 negatives), with the same early-stopping setup as above.

- Trial 0: feature_fraction=0.6 -> valid binary_logloss 0.657355 (best iteration 69)
- Trial 1: feature_fraction=0.5 -> 0.657517 (best iteration 101)
- Trial 2: feature_fraction=0.9 -> 0.656834 (best iteration 81)
- Trial 3: feature_fraction=0.8 -> 0.655904 (best iteration 131)
- Trial 4: feature_fraction=0.7 -> 0.656196 (best iteration 91)
- Trial 5: feature_fraction=1.0 -> 0.657006 (best iteration 86)
- Trial 6: feature_fraction=0.4 -> 0.657081 (best iteration 100)

Stage best: Trial 3 (feature_fraction=0.8, val_score 0.655904).
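Each trial's lines follow the same pattern: LightGBM prints train/valid logloss every 100 iterations and stops once the validation score has not improved for 100 rounds, then reports the best iteration. A minimal sketch of one such trial with plain lightgbm (`dtrain`/`dvalid` as in the hypothetical sketch above; the parameter value is just the stage winner read off the log):

```python
import lightgbm as lgb

# One trial's training loop as it appears in the log: evaluate every 100
# iterations, stop after 100 rounds without validation improvement.
booster = lgb.train(
    {"objective": "binary", "metric": "binary_logloss", "feature_fraction": 0.8},
    dtrain,
    num_boost_round=1000,
    valid_sets=[dtrain, dvalid],
    valid_names=["train", "valid"],
    early_stopping_rounds=100,
    verbose_eval=100,  # produces the "[100] ... [200] ..." lines
)
print(booster.best_iteration)                         # e.g. 131 for Trial 3
print(booster.best_score["valid"]["binary_logloss"])  # e.g. 0.655904
```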
num_leaves stage, trials 7-26 (20 trials), searched with the stage-1 winner (feature_fraction=0.8) carried over.

- Trial 7: num_leaves=85 -> 0.655259
- Trial 8: num_leaves=46 -> 0.659740
- Trial 9: num_leaves=19 -> 0.656220
- Trial 10: num_leaves=201 -> 0.663895
- Trial 11: num_leaves=118 -> 0.660544
- Trial 12: num_leaves=112 -> 0.659094
- Trial 13: num_leaves=256 -> 0.654899
- Trial 14: num_leaves=244 -> 0.662293
- Trial 15: num_leaves=81 -> 0.658471
- Trial 16: num_leaves=189 -> 0.664223
- Trial 17: num_leaves=166 -> 0.659529
- Trial 18: num_leaves=254 -> 0.662143
- Trial 19: num_leaves=64 -> 0.657805
- Trial 20: num_leaves=150 -> 0.664823
- Trial 21: num_leaves=97 -> 0.658385
- Trial 22: num_leaves=4 -> 0.656027
- Trial 23: num_leaves=222 -> 0.658710
- Trial 24: num_leaves=37 -> 0.656731
- Trial 25: num_leaves=138 -> 0.659883
- Trial 26: num_leaves=86 -> 0.657041

Stage best: Trial 13 (num_leaves=256, 0.654899), which became the fold's running val_score.
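Each stage's winner is frozen before the next parameter is searched, so from here on the trials run with feature_fraction=0.8 and num_leaves=256 in place. Assuming the tuner sketch above was used, the merged result can be read back after `tuner.run()`; `tuner` is the hypothetical object from that sketch:

```python
# Hypothetical continuation of the tuner sketch: the stage winners
# (feature_fraction=0.8, num_leaves=256, ...) accumulate into best_params.
best_params = tuner.best_params
best_booster = tuner.get_best_booster()  # booster trained with those parameters
y_pred = best_booster.predict(X_val)     # win probabilities for the valid fold
```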
bagging stage, trials 27-36 (10 trials), searching bagging_fraction (row-subsample ratio) and bagging_freq (how often the subsample is redrawn).

- Trial 27: bagging_fraction=0.521, bagging_freq=5 -> 0.670457
- Trial 28: bagging_fraction=0.993, bagging_freq=1 -> 0.659897
- Trial 29: bagging_fraction=0.946, bagging_freq=7 -> 0.665808
- Trial 30: bagging_fraction=0.420, bagging_freq=1 -> 0.669487
- Trial 31: bagging_fraction=0.751, bagging_freq=4 -> 0.663430
- Trial 32: bagging_fraction=0.716, bagging_freq=7 -> 0.665100
- Trial 33: bagging_fraction=0.556, bagging_freq=2 -> 0.667015
- Trial 34: bagging_fraction=0.831, bagging_freq=3 -> 0.661059
- Trial 35: bagging_fraction=0.401, bagging_freq=5 -> 0.667398
- Trial 36: bagging_fraction=0.615, bagging_freq=6 -> 0.667929

Stage best: Trial 28 (0.659897); no bagging setting beat the incumbent, so val_score stayed at 0.654899.
feature_fraction_stage2, trials 37-42 (6 trials), a finer search around the stage-1 winner of 0.8.

- Trial 37: feature_fraction=0.720 -> 0.660533
- Trial 38: feature_fraction=0.816 -> 0.654899
- Trial 39: feature_fraction=0.880 -> 0.664407
- Trial 40: feature_fraction=0.752 -> 0.662438
- Trial 41: feature_fraction=0.784 -> 0.662438
- Trial 42: feature_fraction=0.848 -> 0.666858

Stage best: Trial 38 (0.654899), matching the incumbent val_score exactly. Trials 40 and 41 also scored identically to each other, presumably because with only 26 features both fractions round to the same number of sampled columns.
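For reference, the two factors searched in the next stage enter LightGBM's leaf values roughly as follows: lambda_l1 soft-thresholds the gradient sum and lambda_l2 shrinks the denominator. A simplified sketch of the standard GBDT leaf-output formula, for illustration only (not the library's exact code):

```python
import math

def leaf_output(sum_grad: float, sum_hess: float,
                lambda_l1: float, lambda_l2: float) -> float:
    # leaf = -sign(G) * max(|G| - lambda_l1, 0) / (H + lambda_l2)
    thresholded = math.copysign(max(abs(sum_grad) - lambda_l1, 0.0), sum_grad)
    return -thresholded / (sum_hess + lambda_l2)
```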
regularization_factors stage, trials 43 onward, searching lambda_l1 and lambda_l2 across several orders of magnitude.

- Trial 43: lambda_l1=2.4e-06, lambda_l2=0.13 -> 0.660888
- Trial 44: lambda_l1=6.4, lambda_l2=2.9e-08 -> 0.661900
- Trial 45: lambda_l1=1.6, lambda_l2=1.1e-08 -> 0.660235
- Trial 46: lambda_l1=3.0e-08, lambda_l2=2.2e-04 -> 0.654870 (new best; val_score improved to 0.654870)
- Trial 47: lambda_l1=5.7e-08, lambda_l2=2.8e-04 -> 0.654998
- Trial 48: lambda_l1=8.6e-08, lambda_l2=1.5e-04 -> 0.654961
- Trial 49: lambda_l1=1.0e-08, lambda_l2=1.7e-04 -> 0.655013

The next trial's output is cut off here:

[100]	train's binary_logloss: 0.263723	valid's binary_logloss: 0.666224
Early stopping, best iteration is:
[37]	train's binary_logloss: 0.445139	valid's binary_logloss: 0.654928
regularization_factors, val_score: 0.654870:  40%|####      | 8/20 [00:14<00:20,  1.72s/it][I 2020-09-27 04:46:23,853] Trial 50 finished with value: 0.6549280045293944 and parameters: {'lambda_l1': 1.4571400606028584e-08, 'lambda_l2': 0.00018545051962322963}. Best is trial 46 with value: 0.654869889386949.
regularization_factors, val_score: 0.654870:  40%|####      | 8/20 [00:14<00:20,  1.72s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001114 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.263723	valid's binary_logloss: 0.666238
Early stopping, best iteration is:
[37]	train's binary_logloss: 0.445139	valid's binary_logloss: 0.654958
regularization_factors, val_score: 0.654870:  45%|####5     | 9/20 [00:16<00:20,  1.83s/it][I 2020-09-27 04:46:25,937] Trial 51 finished with value: 0.6549580786078956 and parameters: {'lambda_l1': 1.1439375782625129e-08, 'lambda_l2': 0.00020040952933499966}. Best is trial 46 with value: 0.654869889386949.
regularization_factors, val_score: 0.654870:  45%|####5     | 9/20 [00:16<00:20,  1.83s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004784 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264844	valid's binary_logloss: 0.665542
Early stopping, best iteration is:
[43]	train's binary_logloss: 0.421453	valid's binary_logloss: 0.654899
regularization_factors, val_score: 0.654870:  50%|#####     | 10/20 [00:17<00:17,  1.71s/it][I 2020-09-27 04:46:27,383] Trial 52 finished with value: 0.6548985889877852 and parameters: {'lambda_l1': 1.3842789551680254e-08, 'lambda_l2': 5.6495755460565416e-05}. Best is trial 46 with value: 0.654869889386949.
regularization_factors, val_score: 0.654870:  50%|#####     | 10/20 [00:17<00:17,  1.71s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004465 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264846	valid's binary_logloss: 0.665605
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.441545	valid's binary_logloss: 0.654867
regularization_factors, val_score: 0.654867:  55%|#####5    | 11/20 [00:19<00:16,  1.80s/it][I 2020-09-27 04:46:29,389] Trial 53 finished with value: 0.6548668432849638 and parameters: {'lambda_l1': 1.189642528157414e-08, 'lambda_l2': 5.203275530782603e-06}. Best is trial 53 with value: 0.6548668432849638.
regularization_factors, val_score: 0.654867:  55%|#####5    | 11/20 [00:19<00:16,  1.80s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000851 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264846	valid's binary_logloss: 0.665516
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.441545	valid's binary_logloss: 0.654838
regularization_factors, val_score: 0.654838:  60%|######    | 12/20 [00:21<00:13,  1.73s/it][I 2020-09-27 04:46:30,946] Trial 54 finished with value: 0.6548376800578837 and parameters: {'lambda_l1': 2.1670289548196517e-06, 'lambda_l2': 1.661838347245463e-06}. Best is trial 54 with value: 0.6548376800578837.
regularization_factors, val_score: 0.654838:  60%|######    | 12/20 [00:21<00:13,  1.73s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000858 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264846	valid's binary_logloss: 0.66563
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.441545	valid's binary_logloss: 0.654905
regularization_factors, val_score: 0.654838:  65%|######5   | 13/20 [00:23<00:12,  1.84s/it][I 2020-09-27 04:46:33,033] Trial 55 finished with value: 0.6549047704174789 and parameters: {'lambda_l1': 1.3047485080865155e-05, 'lambda_l2': 1.1388372443834088e-06}. Best is trial 54 with value: 0.6548376800578837.
regularization_factors, val_score: 0.654838:  65%|######5   | 13/20 [00:23<00:12,  1.84s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000859 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264846	valid's binary_logloss: 0.665482
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.441545	valid's binary_logloss: 0.654874
regularization_factors, val_score: 0.654838:  70%|#######   | 14/20 [00:24<00:10,  1.75s/it][I 2020-09-27 04:46:34,599] Trial 56 finished with value: 0.6548744228748046 and parameters: {'lambda_l1': 1.677207413460934e-06, 'lambda_l2': 2.3547411212794384e-06}. Best is trial 54 with value: 0.6548376800578837.
regularization_factors, val_score: 0.654838:  70%|#######   | 14/20 [00:24<00:10,  1.75s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000869 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264846	valid's binary_logloss: 0.665557
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.441545	valid's binary_logloss: 0.654891
regularization_factors, val_score: 0.654838:  75%|#######5  | 15/20 [00:27<00:09,  1.87s/it][I 2020-09-27 04:46:36,725] Trial 57 finished with value: 0.65489106531928 and parameters: {'lambda_l1': 2.808990582697963e-06, 'lambda_l2': 3.680142856828128e-06}. Best is trial 54 with value: 0.6548376800578837.
regularization_factors, val_score: 0.654838:  75%|#######5  | 15/20 [00:27<00:09,  1.87s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.005001 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264846	valid's binary_logloss: 0.665537
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.441545	valid's binary_logloss: 0.654883
regularization_factors, val_score: 0.654838:  80%|########  | 16/20 [00:28<00:07,  1.75s/it][I 2020-09-27 04:46:38,214] Trial 58 finished with value: 0.6548833147276146 and parameters: {'lambda_l1': 3.548471771976571e-06, 'lambda_l2': 1.3069611448432158e-06}. Best is trial 54 with value: 0.6548376800578837.
regularization_factors, val_score: 0.654838:  80%|########  | 16/20 [00:28<00:07,  1.75s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000875 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264846	valid's binary_logloss: 0.665518
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.441545	valid's binary_logloss: 0.654891
regularization_factors, val_score: 0.654838:  85%|########5 | 17/20 [00:30<00:05,  1.86s/it][I 2020-09-27 04:46:40,318] Trial 59 finished with value: 0.654890562836266 and parameters: {'lambda_l1': 3.6573101362801625e-06, 'lambda_l2': 1.3655499334616953e-06}. Best is trial 54 with value: 0.6548376800578837.
regularization_factors, val_score: 0.654838:  85%|########5 | 17/20 [00:30<00:05,  1.86s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000836 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.263781	valid's binary_logloss: 0.676626
Early stopping, best iteration is:
[41]	train's binary_logloss: 0.42877	valid's binary_logloss: 0.654741
regularization_factors, val_score: 0.654741:  90%|######### | 18/20 [00:32<00:03,  1.78s/it][I 2020-09-27 04:46:41,916] Trial 60 finished with value: 0.6547406780914222 and parameters: {'lambda_l1': 8.237457561826076e-05, 'lambda_l2': 7.033731015507948e-07}. Best is trial 60 with value: 0.6547406780914222.
regularization_factors, val_score: 0.654741:  90%|######### | 18/20 [00:32<00:03,  1.78s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000852 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264845	valid's binary_logloss: 0.66557
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.441546	valid's binary_logloss: 0.654905
regularization_factors, val_score: 0.654741:  95%|#########5| 19/20 [00:34<00:01,  1.87s/it][I 2020-09-27 04:46:43,993] Trial 61 finished with value: 0.6549054519516266 and parameters: {'lambda_l1': 8.016334404835216e-05, 'lambda_l2': 8.515948497846046e-07}. Best is trial 60 with value: 0.6547406780914222.
regularization_factors, val_score: 0.654741:  95%|#########5| 19/20 [00:34<00:01,  1.87s/it][LightGBM] [Info] Number of positive: 12849, number of negative: 13150
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000852 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4238
[LightGBM] [Info] Number of data points in the train set: 25999, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494211 -> initscore=-0.023156
[LightGBM] [Info] Start training from score -0.023156
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.264846	valid's binary_logloss: 0.665533
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.441545	valid's binary_logloss: 0.65488
regularization_factors, val_score: 0.654741: 100%|##########| 20/20 [00:35<00:00,  1.78s/it][I 2020-09-27 04:46:45,569] Trial 62 finished with value: 0.6548801766746073 and parameters: {'lambda_l1': 6.807709893234408e-07, 'lambda_l2': 1.3368946882298463e-06}. Best is trial 60 with value: 0.6547406780914222.
regularization_factors, val_score: 0.654741: 100%|##########| 20/20 [00:35<00:00,  1.79s/it]
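The stepwise phase names seen in this log (feature_fraction → num_leaves → ... → regularization_factors → min_data_in_leaf) and the tqdm-style progress lines match Optuna's LightGBM integration from this era. A minimal sketch of the kind of call that produces such a log, assuming the features are already numeric (e.g. target-encoded as in the preprocessing above); the split variable names `X_tr`/`X_va`/`y_tr`/`y_va` are mine, not from the notebook:

```
import lightgbm as lgb
import optuna.integration.lightgbm as lgb_tuner
from sklearn.model_selection import train_test_split

# train / 'y' as loaded earlier in this notebook
X = train.drop('y', axis=1)
y = train['y']
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=71)

params = {'objective': 'binary', 'metric': 'binary_logloss'}

# The tuner sweeps one parameter group at a time (feature_fraction,
# num_leaves, ..., regularization_factors, min_data_in_leaf), fixing the
# best value of each group before moving on -- the phase order above.
tuner = lgb_tuner.LightGBMTuner(
    params,
    lgb.Dataset(X_tr, label=y_tr),
    valid_sets=[lgb.Dataset(X_va, label=y_va)],
    num_boost_round=10000,
    early_stopping_rounds=100,  # "Training until validation scores don't improve for 100 rounds"
)
tuner.run()
print(tuner.best_score, tuner.best_params)
```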
[min_data_in_leaf step, 5 trials (63-67), condensed.]
min_child_samples=25 -> 0.659997, 5 -> 0.663118, 50 -> 0.663340, 10 -> 0.662608, 100 -> 0.659107; none of the five beat the carried-over best of 0.654741. The min_child_samples=100 trial also emitted a long repeated run of "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf" (collapsed here): with so large a per-leaf minimum, many boosting iterations find no admissible split.
min_data_in_leaf, val_score: 0.654741: 100%|##########| 5/5 [00:10<00:00,  2.09s/it]
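The recurring "[LightGBM] [Warning] Auto-choosing row-wise/col-wise multi-threading ..." lines throughout this log are LightGBM re-benchmarking both histogram-building modes on every trial. As the warning itself suggests, pinning the mode in the params removes that per-trial overhead; a one-line sketch:

```
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'force_row_wise': True,  # skip the row-wise/col-wise auto-test on each trial
    # (or force_col_wise=True instead, if memory is the constraint)
}
```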
Fold : 8
[I 2020-09-27 04:46:56,086] A new study created in memory with name: no-name-0ac6def8-9fc0-44ca-a007-4dfc936cce3b
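"Fold : 8" marks the start of the next cross-validation fold; each fold gets its own fresh in-memory Optuna study ("A new study created in memory"), so the hyperparameter search restarts from scratch per fold. A sketch of the enclosing loop, under my assumptions: `tune_one_fold()` is a hypothetical wrapper around the tuner call sketched earlier, and the fold count is illustrative ("Fold : 8" only tells us there are at least nine):

```
from sklearn.model_selection import KFold

kf = KFold(n_splits=10, shuffle=True, random_state=71)  # n_splits assumed
for fold, (tr_idx, va_idx) in enumerate(kf.split(X)):
    print(f'Fold : {fold}')
    # a new study per fold keeps each search independent,
    # at the cost of repeating the full stepwise sweep every time
    tune_one_fold(X.iloc[tr_idx], y.iloc[tr_idx], X.iloc[va_idx], y.iloc[va_idx])
```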
[feature_fraction step for fold 8, 7 trials (0-6), condensed.]
The step tries the seven values 0.4-1.0 in 0.1 increments on this fold's 26,000 rows / 26 features: 0.4 -> 0.656832, 0.5 -> 0.654900, 0.6 -> 0.652230, 0.7 -> 0.654880, 0.8 -> 0.655846, 0.9 -> 0.656087, 1.0 -> 0.654073. Best is trial 6 with value: 0.6522296496116577 and parameters: {'feature_fraction': 0.6}.
feature_fraction, val_score: 0.652230: 100%|##########| 7/7 [00:04<00:00,  1.65it/s]
[num_leaves step for fold 8, condensed; the log is truncated partway through this step.]
Trials 7-21: num_leaves=200 -> 0.658257, 251 -> 0.663543, 3 -> 0.650237, 3 -> 0.650237, 10 -> 0.652178, 8 -> 0.654200, 76 -> 0.656420, 71 -> 0.657338, 5 -> 0.650977, 71 -> 0.657338, 141 -> 0.659320, 42 -> 0.656096, 128 -> 0.658274, 37 -> 0.654988, 3 -> 0.650237. The tiny trees dominate: every num_leaves=3 run improves steadily for roughly 700 iterations (best at iteration 694, validation logloss 0.650237), while the 100+ leaf configurations overfit and early-stop around iteration 35-45 with clearly worse validation loss.
num_leaves, val_score: 0.650237:  75%|#######5  | 15/20 [00:13<00:04,  1.18it/s]
num_leaves, val_score: 0.650237:  80%|########  | 16/20 [00:14<00:03,  1.08it/s][I 2020-09-27 04:47:14,641] Trial 22 finished with value: 0.6509770662129648 and parameters: {'num_leaves': 5}. Best is trial 10 with value: 0.6502369304922299.
num_leaves, val_score: 0.650237:  80%|########  | 16/20 [00:14<00:03,  1.08it/s][LightGBM] [Info] Number of positive: 12855, number of negative: 13145
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.019722 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4242
[LightGBM] [Info] Number of data points in the train set: 26000, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494423 -> initscore=-0.022309
[LightGBM] [Info] Start training from score -0.022309
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.565177	valid's binary_logloss: 0.655115
Early stopping, best iteration is:
[72]	train's binary_logloss: 0.585373	valid's binary_logloss: 0.65339
num_leaves, val_score: 0.650237:  85%|########5 | 17/20 [00:14<00:02,  1.20it/s][I 2020-09-27 04:47:15,254] Trial 23 finished with value: 0.6533900763638031 and parameters: {'num_leaves': 36}. Best is trial 10 with value: 0.6502369304922299.
num_leaves, val_score: 0.650237:  85%|########5 | 17/20 [00:14<00:02,  1.20it/s][LightGBM] [Info] Number of positive: 12855, number of negative: 13145
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004882 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4242
[LightGBM] [Info] Number of data points in the train set: 26000, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494423 -> initscore=-0.022309
[LightGBM] [Info] Start training from score -0.022309
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.565177	valid's binary_logloss: 0.655115
Early stopping, best iteration is:
[72]	train's binary_logloss: 0.585373	valid's binary_logloss: 0.65339
num_leaves, val_score: 0.650237:  90%|######### | 18/20 [00:15<00:01,  1.36it/s][I 2020-09-27 04:47:15,759] Trial 24 finished with value: 0.6533900763638031 and parameters: {'num_leaves': 36}. Best is trial 10 with value: 0.6502369304922299.
num_leaves, val_score: 0.650237:  90%|######### | 18/20 [00:15<00:01,  1.36it/s][LightGBM] [Info] Number of positive: 12855, number of negative: 13145
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.009941 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4242
[LightGBM] [Info] Number of data points in the train set: 26000, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494423 -> initscore=-0.022309
[LightGBM] [Info] Start training from score -0.022309
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.668764	valid's binary_logloss: 0.672745
[200]	train's binary_logloss: 0.65923	valid's binary_logloss: 0.663108
[300]	train's binary_logloss: 0.654067	valid's binary_logloss: 0.658349
[400]	train's binary_logloss: 0.651004	valid's binary_logloss: 0.655269
[500]	train's binary_logloss: 0.649032	valid's binary_logloss: 0.653548
[600]	train's binary_logloss: 0.647704	valid's binary_logloss: 0.652575
[700]	train's binary_logloss: 0.646761	valid's binary_logloss: 0.651831
[800]	train's binary_logloss: 0.646051	valid's binary_logloss: 0.651481
[900]	train's binary_logloss: 0.645487	valid's binary_logloss: 0.651115
[1000]	train's binary_logloss: 0.645019	valid's binary_logloss: 0.651086
Did not meet early stopping. Best iteration is:
[1000]	train's binary_logloss: 0.645019	valid's binary_logloss: 0.651086
num_leaves, val_score: 0.650237:  95%|#########5| 19/20 [00:16<00:00,  1.12it/s][I 2020-09-27 04:47:17,024] Trial 25 finished with value: 0.6510861135332809 and parameters: {'num_leaves': 2}. Best is trial 10 with value: 0.6502369304922299.
num_leaves, val_score: 0.650237:  95%|#########5| 19/20 [00:16<00:00,  1.12it/s][LightGBM] [Info] Number of positive: 12855, number of negative: 13145
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.005016 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4242
[LightGBM] [Info] Number of data points in the train set: 26000, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494423 -> initscore=-0.022309
[LightGBM] [Info] Start training from score -0.022309
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.424526	valid's binary_logloss: 0.659895
Early stopping, best iteration is:
[33]	train's binary_logloss: 0.556295	valid's binary_logloss: 0.654704
num_leaves, val_score: 0.650237: 100%|##########| 20/20 [00:17<00:00,  1.20it/s][I 2020-09-27 04:47:17,712] Trial 26 finished with value: 0.6547044966874118 and parameters: {'num_leaves': 119}. Best is trial 10 with value: 0.6502369304922299.
num_leaves, val_score: 0.650237: 100%|##########| 20/20 [00:17<00:00,  1.15it/s]
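For context, this stage-by-stage format (num_leaves → bagging → feature_fraction_stage2 → regularization_factors → min_data_in_leaf) is how Optuna's stepwise LightGBM tuner reports its search. The original tuning cell is not shown in this post, so the following is only a minimal sketch of how such a run is typically launched; X_tr, y_tr, X_val, y_val are hypothetical names for a train/validation split.

# Minimal sketch (assumed, not the author's exact cell).
import optuna.integration.lightgbm as lgb_o  # Optuna's stepwise LightGBM tuner

params = {'objective': 'binary', 'metric': 'binary_logloss'}

# X_tr, y_tr, X_val, y_val: hypothetical train/validation split
dtrain = lgb_o.Dataset(X_tr, y_tr)
dvalid = lgb_o.Dataset(X_val, y_val)

# train() sweeps one hyperparameter group per stage, which produces
# exactly the stage order seen in the log above.
booster = lgb_o.train(
    params, dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],
    num_boost_round=1000,        # trial 25 reaches this cap without early stopping
    early_stopping_rounds=100,   # "don't improve for 100 rounds" in the log
    verbose_eval=100,            # logloss printed every 100 iterations
)
print(booster.params)  # tuned parameters after all stages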
bagging stage (trials 27-36, 10 trials):

- Trial 27: bagging_fraction=0.996, bagging_freq=6 → 0.650827
- Trial 28: bagging_fraction=0.451, bagging_freq=1 → 0.650231
- Trial 29: bagging_fraction=0.410, bagging_freq=1 → 0.650844
- Trial 30: bagging_fraction=0.406, bagging_freq=1 → 0.652011
- Trial 31: bagging_fraction=0.649, bagging_freq=3 → 0.649682
- Trial 32: bagging_fraction=0.691, bagging_freq=3 → 0.649953
- Trial 33: bagging_fraction=0.701, bagging_freq=3 → 0.650278
- Trial 34: bagging_fraction=0.670, bagging_freq=3 → 0.649988
- Trial 35: bagging_fraction=0.695, bagging_freq=3 → 0.649063
- Trial 36: bagging_fraction=0.685, bagging_freq=3 → 0.650751

Best of stage: trial 35, val_score 0.649063.
feature_fraction_stage2 stage (trials 37-42, 6 trials):

- Trial 37: feature_fraction=0.616 → 0.649063
- Trial 38: feature_fraction=0.648 → 0.648813
- Trial 39: feature_fraction=0.552 → 0.649149
- Trial 40: feature_fraction=0.584 → 0.650770
- Trial 41: feature_fraction=0.680 → 0.648669
- Trial 42: feature_fraction=0.520 → 0.649149

Best of stage: trial 41, val_score 0.648669.
regularization_factors stage (trials 43-62, 20 trials; lambda_l1/lambda_l2 searched over several orders of magnitude, with scores converging around 0.6485):

- Trial 43: lambda_l1=4.2e-08, lambda_l2=0.323 → 0.649607
- Trial 44: lambda_l1=7.66, lambda_l2=1.7e-07 → 0.649722
- Trial 45: lambda_l1=0.0382, lambda_l2=1.3e-08 → 0.649128
- Trial 46: lambda_l1=0.0617, lambda_l2=1.1e-08 → 0.649897
- Trial 47: lambda_l1=7.8e-05, lambda_l2=4.8e-05 → 0.648685
- Trial 48: lambda_l1=4.7e-07, lambda_l2=2.0e-04 → 0.648669
- Trials 49-54: lambda_l1 ≈ 1.8e-07 to 1.2e-06, lambda_l2 ≈ 2.2e-04 to 1.3e-03 → 0.648669 to 0.648685
- Trial 55: lambda_l1=4.5e-06, lambda_l2=0.00314 → 0.648669
- Trial 56: lambda_l1=9.7e-06, lambda_l2=0.0114 → 0.648475
- Trial 57: lambda_l1=1.3e-05, lambda_l2=0.0245 → 0.649488
- Trial 58: lambda_l1=5.6e-06, lambda_l2=0.00707 → 0.648475
- Trials 59-61: lambda_l1 ≈ 1.1e-05 to 1.9e-05, lambda_l2 ≈ 0.013 to 0.018 → 0.648474
- Trial 62: lambda_l1=1.5e-05, lambda_l2=0.0235 → 0.648474

Best of stage: trial 62, val_score 0.648474.
min_data_in_leaf phase, trials 63-67 (none improved on the running best of 0.648474, so the tuner kept the previous setting):
Trial 63: value 0.648879 (min_child_samples=100)
Trial 64: value 0.648808 (min_child_samples=25) (best of the phase)
Trial 65: value 0.649123 (min_child_samples=10)
Trial 66: value 0.648959 (min_child_samples=50)
Trial 67: value 0.649459 (min_child_samples=5)
min_data_in_leaf, val_score: 0.648474: 100%|##########| 5/5 [00:05<00:00,  1.04s/it]
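The phase names in these logs (feature_fraction, num_leaves, bagging, feature_fraction_stage2, regularization_factors, min_data_in_leaf) follow the fixed stepwise schedule of Optuna's LightGBM integration, so the output is consistent with a LightGBMTuner run. For reference, here is a minimal sketch of one fold's tuning under that assumption; X_tr, y_tr, X_va, y_va are hypothetical per-fold arrays, not variables from this notebook.

# Sketch only: reproduces the kind of stepwise log shown above.
# Assumes Optuna's LightGBM integration (circa optuna 2.x / lightgbm 3.x).
import lightgbm as lgb
import optuna.integration.lightgbm as opt_lgb

params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
}
dtrain = lgb.Dataset(X_tr, label=y_tr)  # X_tr/y_tr: hypothetical fold data
dvalid = lgb.Dataset(X_va, label=y_va)  # X_va/y_va: hypothetical validation split

tuner = opt_lgb.LightGBMTuner(
    params,
    dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],  # matches "train's"/"valid's" in the log
    num_boost_round=10000,
    early_stopping_rounds=100,       # "Training until validation scores don't improve for 100 rounds"
    verbose_eval=100,                # evaluation lines printed every 100 iterations, as above
)
tuner.run()

print(tuner.best_score)   # best valid binary_logloss (the "val_score" in the progress bars)
print(tuner.best_params)  # the tuned hyperparameters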
Fold : 9
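The "Fold : N" marker comes from an outer cross-validation loop, and each fold starts a fresh in-memory Optuna study (the "A new study created in memory" line just below). A sketch of that pattern follows; n_splits=10 is an assumption based on the fold indices reaching 9, and X and the split settings are placeholders.

# Hypothetical outer loop: one fresh tuning study per fold.
from sklearn.model_selection import KFold

kf = KFold(n_splits=10, shuffle=True)  # 10 folds assumed from "Fold : 9"
for fold, (tr_idx, va_idx) in enumerate(kf.split(X)):
    print(f'Fold : {fold}')
    # slice the data with tr_idx/va_idx, build the Datasets,
    # and run LightGBMTuner as in the sketch above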
[I 2020-09-27 04:48:00,469] A new study created in memory with name: no-name-7b517b46-d774-4255-8991-4b95254b4cf4
feature_fraction phase, trials 0-6 (this fold: 26,000 training rows, 12,844 positive / 13,156 negative, 26 features):
Trial 0: value 0.661775 (feature_fraction=0.9)
Trial 1: value 0.662116 (feature_fraction=0.5)
Trial 2: value 0.663133 (feature_fraction=0.4)
Trial 3: value 0.662756 (feature_fraction=1.0)
Trial 4: value 0.660855 (feature_fraction=0.8) (best)
Trial 5: value 0.661109 (feature_fraction=0.7)
Trial 6: value 0.663434 (feature_fraction=0.6)
feature_fraction, val_score: 0.660855: 100%|##########| 7/7 [00:04<00:00,  1.46it/s]
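Throughout these runs LightGBM warned that it was auto-choosing between row-wise and col-wise multi-threading before training. As the warning text itself suggests, pinning the strategy in the parameters removes that start-up overhead:

# Skip LightGBM's row-wise/col-wise auto-detection (the source of the
# repeated "Auto-choosing ... multi-threading" warnings).
params['force_col_wise'] = True  # or 'force_row_wise'; col-wise uses less memory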
num_leaves phase, trials 7-26 (small trees won clearly; large leaf counts overfit, e.g. num_leaves=241 drove train logloss down to 0.278 by iteration 100 while valid logloss rose to 0.683):
Trial 7:  value 0.660855 (num_leaves=31)
Trial 8:  value 0.662419 (num_leaves=227)
Trial 9:  value 0.661027 (num_leaves=8)
Trial 10: value 0.667966 (num_leaves=241)
Trial 11: value 0.662634 (num_leaves=25)
Trial 12: value 0.664193 (num_leaves=104)
Trial 13: value 0.664721 (num_leaves=90)
Trial 14: value 0.666590 (num_leaves=167)
Trial 15: value 0.665493 (num_leaves=50)
Trial 16: value 0.666590 (num_leaves=167, repeat of trial 14)
Trial 17: value 0.662506 (num_leaves=60)
Trial 18: value 0.659947 (num_leaves=4)
Trial 19: value 0.661824 (num_leaves=7)
Trial 20: value 0.659984 (num_leaves=3)
Trial 21: value 0.660001 (num_leaves=5)
Trial 22: value 0.660510 (num_leaves=9)
Trial 23: value 0.662506 (num_leaves=60, repeat of trial 17)
Trial 24: value 0.659947 (num_leaves=4, repeat of trial 18) (best)
Trial 25: value 0.663355 (num_leaves=32)
Trial 26: value 0.665241 (num_leaves=85)
num_leaves, val_score: 0.659947: 100%|##########| 20/20 [00:17<00:00,  1.13it/s]
bagging phase, trials 27-36 (bagging_fraction around 0.86 with bagging_freq=3 worked best):
Trial 27: value 0.659231 (bagging_fraction=0.827, bagging_freq=3)
Trial 28: value 0.659046 (bagging_fraction=0.851, bagging_freq=3)
Trial 29: value 0.657825 (bagging_fraction=0.864, bagging_freq=3) (best)
Trial 30: value 0.658154 (bagging_fraction=0.856, bagging_freq=3)
Trial 31: value 0.659011 (bagging_fraction=0.864, bagging_freq=3)
Trial 32: value 0.659137 (bagging_fraction=0.868, bagging_freq=3)
Trial 33: value 0.658088 (bagging_fraction=0.871, bagging_freq=3)
Trial 34: value 0.659495 (bagging_fraction=0.916, bagging_freq=3)
Trial 35: value 0.660106 (bagging_fraction=0.525, bagging_freq=2)
Trial 36: value 0.658486 (bagging_fraction=0.753, bagging_freq=6)
bagging, val_score: 0.657825: 100%|##########| 10/10 [00:08<00:00,  1.17it/s]
feature_fraction_stage2 phase, trials 37-40 (refining feature_fraction around the stage-1 optimum of 0.8):
Trial 37: value 0.658836 (feature_fraction=0.848)
Trial 38: value 0.657825 (feature_fraction=0.816) (best)
Trial 39: value 0.659183 (feature_fraction=0.880)
Trial 40: value 0.659476 (feature_fraction=0.784)
feature_fraction_stage2, val_score: 0.657825:  67%|######6   | 4/6 [00:03<00:01,  1.12it/s][LightGBM] [Info] Number of positive: 12844, number of negative: 13156
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.005048 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 26000, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494000 -> initscore=-0.024001
[LightGBM] [Info] Start training from score -0.024001
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.650002	valid's binary_logloss: 0.663921
[200]	train's binary_logloss: 0.639037	valid's binary_logloss: 0.659453
[300]	train's binary_logloss: 0.632258	valid's binary_logloss: 0.658764
Early stopping, best iteration is:
[263]	train's binary_logloss: 0.634669	valid's binary_logloss: 0.658316
feature_fraction_stage2, val_score: 0.657825:  83%|########3 | 5/6 [00:04<00:00,  1.21it/s][I 2020-09-27 04:48:35,831] Trial 41 finished with value: 0.6583158730564244 and parameters: {'feature_fraction': 0.7200000000000001}. Best is trial 38 with value: 0.6578254229578112.
feature_fraction_stage2, val_score: 0.657825:  83%|########3 | 5/6 [00:04<00:00,  1.21it/s][LightGBM] [Info] Number of positive: 12844, number of negative: 13156
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.004831 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4241
[LightGBM] [Info] Number of data points in the train set: 26000, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.494000 -> initscore=-0.024001
[LightGBM] [Info] Start training from score -0.024001
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.650087	valid's binary_logloss: 0.664287
[200]	train's binary_logloss: 0.639299	valid's binary_logloss: 0.66052
[300]	train's binary_logloss: 0.632483	valid's binary_logloss: 0.659821
Early stopping, best iteration is:
[265]	train's binary_logloss: 0.634723	valid's binary_logloss: 0.659476
feature_fraction_stage2, val_score: 0.657825: 100%|##########| 6/6 [00:04<00:00,  1.29it/s][I 2020-09-27 04:48:36,497] Trial 42 finished with value: 0.6594764202643758 and parameters: {'feature_fraction': 0.7520000000000001}. Best is trial 38 with value: 0.6578254229578112.
feature_fraction_stage2, val_score: 0.657825: 100%|##########| 6/6 [00:04<00:00,  1.24it/s]
regularization_factors (20 trials):
  Trial 43: lambda_l1=6.4e+00, lambda_l2=1.1e-02 -> 0.659099
  Trial 44: lambda_l1=2.7e-08, lambda_l2=1.0e-07 -> 0.657825
  Trial 45: lambda_l1=1.0e-08, lambda_l2=1.6e-08 -> 0.657825
  Trial 46: lambda_l1=1.0e-08, lambda_l2=1.1e-08 -> 0.657826
  Trial 47: lambda_l1=1.0e-08, lambda_l2=1.4e-08 -> 0.657826
  Trial 48: lambda_l1=1.4e-08, lambda_l2=1.4e-08 -> 0.657826
  Trial 49: lambda_l1=1.4e-08, lambda_l2=4.7e-08 -> 0.657825
  Trial 50: lambda_l1=5.2e-06, lambda_l2=5.8e-06 -> 0.657825
  Trial 51: lambda_l1=1.6e-05, lambda_l2=4.6e-06 -> 0.657825  <- best
  Trial 52: lambda_l1=2.7e-06, lambda_l2=2.4e-06 -> 0.657826
  Trial 53: lambda_l1=3.3e-07, lambda_l2=9.9e-07 -> 0.657826
  Trial 54: lambda_l1=6.7e-03, lambda_l2=2.4e-07 -> 0.658147
  Trial 55: lambda_l1=6.3e-04, lambda_l2=8.9e-05 -> 0.657825
  Trial 56: lambda_l1=2.4e-03, lambda_l2=1.7e-04 -> 0.658173
  Trial 57: lambda_l1=1.1e-04, lambda_l2=1.8e-04 -> 0.657825
  Trial 58: lambda_l1=1.4e-04, lambda_l2=2.1e-04 -> 0.657825
  Trial 59: lambda_l1=1.4e-04, lambda_l2=1.8e-04 -> 0.657826
  Trial 60: lambda_l1=1.5e-04, lambda_l2=7.1e-03 -> 0.657826
  Trial 61: lambda_l1=1.5e-03, lambda_l2=4.1e-05 -> 0.657826
  Trial 62: lambda_l1=3.3e-05, lambda_l2=1.8e-03 -> 0.657826
regularization_factors done, val_score: 0.657825 (best: Trial 51)
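Across these 20 trials the sampled lambda_l1/lambda_l2 values span roughly 1e-08 to 1e-02, yet the validation logloss only moves in the sixth or seventh decimal place; the winning Trial 51 values are effectively "no regularization". If you wanted to freeze them into the parameter dict, it would look like this (values copied from Trial 51 above; a sketch, not the baseline's actual code):

```python
# Pin the tuned regularization strengths from Trial 51. Both are ~0,
# i.e. L1/L2 regularization barely matters for this model/data combination.
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'lambda_l1': 1.558360532559241e-05,  # Trial 51
    'lambda_l2': 4.619266691410038e-06,  # Trial 51
}
```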
min_data_in_leaf (5 trials):
  Trial 63: min_child_samples=25  -> 0.658294
  Trial 64: min_child_samples=5   -> 0.658604
  Trial 65: min_child_samples=50  -> 0.658920
  Trial 66: min_child_samples=100 -> 0.658133  <- best of stage
  Trial 67: min_child_samples=10  -> 0.658470
min_data_in_leaf done, val_score: 0.657825 (no improvement over Trial 51)

################################
CV_score:0.6150646280811062
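The staged search above (feature_fraction → num_leaves → bagging → feature_fraction_stage2 → regularization_factors → min_data_in_leaf) is the fixed step order of Optuna's LightGBM tuner integration, which the baseline appears to use. A minimal sketch of driving one such run, assuming the optuna.integration.lightgbm API of that era; X, y and the split variables are placeholders, not the baseline's actual names:

```python
import lightgbm as lgb
import optuna.integration.lightgbm as opt_lgb  # stepwise LightGBM tuner
from sklearn.model_selection import train_test_split

# Placeholder split: X/y stand in for the feature matrix and the 'y' target.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=71)

params = {'objective': 'binary', 'metric': 'binary_logloss', 'verbosity': -1}
dtrain = lgb.Dataset(X_tr, label=y_tr)
dvalid = lgb.Dataset(X_val, label=y_val)

# Runs the fixed stage order seen in the log: feature_fraction -> num_leaves
# -> bagging -> feature_fraction_stage2 -> regularization_factors
# -> min_data_in_leaf, carrying the best setting forward between stages.
booster = opt_lgb.train(
    params, dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],  # the "train's"/"valid's" labels above
    num_boost_round=10000,
    early_stopping_rounds=100,       # "don't improve for 100 rounds"
    verbose_eval=100,                # the [100]/[200]/... logloss lines
)
print(booster.params)  # tuned hyperparameters after all six stages
```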

Fold : 0
A new Optuna study created in memory.
feature_fraction (7 trials, training fold: 93,025 rows / 26 features):
  Trial 0: feature_fraction=0.7 -> 0.689632
  Trial 1: feature_fraction=0.4 -> 0.689783
  Trial 2: feature_fraction=1.0 -> 0.689680
  Trial 3: feature_fraction=0.6 -> 0.689021  <- best
  Trial 4: feature_fraction=0.9 -> 0.689420
  Trial 5: feature_fraction=0.8 -> 0.689828
  Trial 6: feature_fraction=0.5 -> 0.689546
feature_fraction done, val_score: 0.689021 (best: Trial 3)
num_leaves (20 trials):
  Trial 7:  num_leaves=168 -> 0.689631
  Trial 8:  num_leaves=122 -> 0.689618
  Trial 9:  num_leaves=6   -> 0.689534
  Trial 10: num_leaves=253 -> 0.690794
  Trial 11: num_leaves=22  -> 0.689420
  Trial 12: num_leaves=2   -> 0.689244
  Trial 13: num_leaves=80  -> 0.690049
  Trial 14: num_leaves=234 -> 0.691482
  Trial 15: num_leaves=55  -> 0.689734
  Trial 16: num_leaves=181 -> 0.690909
  Trial 17: num_leaves=90  -> 0.690276
  Trial 18: num_leaves=38  -> 0.689049  <- best of stage so far
  ...
Overall val_score remains 0.689021 through Trial 18: the large-leaf trials (e.g. 168, 234, 253 leaves) overfit, with train logloss falling fast while valid logloss worsens, so they early-stop within about 20 iterations.
Early stopping, best iteration is:
[39]	train's binary_logloss: 0.672949	valid's binary_logloss: 0.68919
num_leaves, val_score: 0.689021:  65%|######5   | 13/20 [00:17<00:08,  1.17s/it][I 2020-09-27 04:49:22,101] Trial 19 finished with value: 0.6891896975164848 and parameters: {'num_leaves': 47}. Best is trial 18 with value: 0.6890491443530252.
num_leaves, val_score: 0.689021:  65%|######5   | 13/20 [00:17<00:08,  1.17s/it][LightGBM] [Info] Number of positive: 46608, number of negative: 46417
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.017299 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501027 -> initscore=0.004106
[LightGBM] [Info] Start training from score 0.004106
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.59964	valid's binary_logloss: 0.69374
Early stopping, best iteration is:
[14]	train's binary_logloss: 0.674012	valid's binary_logloss: 0.690104
num_leaves, val_score: 0.689021:  70%|#######   | 14/20 [00:18<00:06,  1.10s/it][I 2020-09-27 04:49:23,062] Trial 20 finished with value: 0.6901038597516463 and parameters: {'num_leaves': 130}. Best is trial 18 with value: 0.6890491443530252.
num_leaves, val_score: 0.689021:  70%|#######   | 14/20 [00:18<00:06,  1.10s/it][LightGBM] [Info] Number of positive: 46608, number of negative: 46417
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.016669 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501027 -> initscore=0.004106
[LightGBM] [Info] Start training from score 0.004106
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.657163	valid's binary_logloss: 0.690788
Early stopping, best iteration is:
[43]	train's binary_logloss: 0.674068	valid's binary_logloss: 0.689599
num_leaves, val_score: 0.689021:  75%|#######5  | 15/20 [00:19<00:05,  1.01s/it][I 2020-09-27 04:49:23,851] Trial 21 finished with value: 0.6895994157631041 and parameters: {'num_leaves': 40}. Best is trial 18 with value: 0.6890491443530252.
num_leaves, val_score: 0.689021:  75%|#######5  | 15/20 [00:19<00:05,  1.01s/it][LightGBM] [Info] Number of positive: 46608, number of negative: 46417
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.016387 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501027 -> initscore=0.004106
[LightGBM] [Info] Start training from score 0.004106
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.632347	valid's binary_logloss: 0.692318
Early stopping, best iteration is:
[23]	train's binary_logloss: 0.673649	valid's binary_logloss: 0.689645
num_leaves, val_score: 0.689021:  80%|########  | 16/20 [00:19<00:03,  1.05it/s][I 2020-09-27 04:49:24,677] Trial 22 finished with value: 0.6896448463652016 and parameters: {'num_leaves': 76}. Best is trial 18 with value: 0.6890491443530252.
num_leaves, val_score: 0.689021:  80%|########  | 16/20 [00:19<00:03,  1.05it/s][LightGBM] [Info] Number of positive: 46608, number of negative: 46417
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.020197 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501027 -> initscore=0.004106
[LightGBM] [Info] Start training from score 0.004106
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.656137	valid's binary_logloss: 0.689636
Early stopping, best iteration is:
[96]	train's binary_logloss: 0.657272	valid's binary_logloss: 0.689437
num_leaves, val_score: 0.689021:  85%|########5 | 17/20 [00:21<00:03,  1.10s/it][I 2020-09-27 04:49:26,120] Trial 23 finished with value: 0.6894373899731695 and parameters: {'num_leaves': 42}. Best is trial 18 with value: 0.6890491443530252.
num_leaves, val_score: 0.689021:  85%|########5 | 17/20 [00:21<00:03,  1.10s/it][LightGBM] [Info] Number of positive: 46608, number of negative: 46417
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.016258 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501027 -> initscore=0.004106
[LightGBM] [Info] Start training from score 0.004106
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.608119	valid's binary_logloss: 0.691911
Early stopping, best iteration is:
[30]	train's binary_logloss: 0.659223	valid's binary_logloss: 0.689435
num_leaves, val_score: 0.689021:  90%|######### | 18/20 [00:22<00:02,  1.09s/it][I 2020-09-27 04:49:27,187] Trial 24 finished with value: 0.6894354954462946 and parameters: {'num_leaves': 116}. Best is trial 18 with value: 0.6890491443530252.
num_leaves, val_score: 0.689021:  90%|######### | 18/20 [00:22<00:02,  1.09s/it][LightGBM] [Info] Number of positive: 46608, number of negative: 46417
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.018058 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501027 -> initscore=0.004106
[LightGBM] [Info] Start training from score 0.004106
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68615	valid's binary_logloss: 0.689604
[200]	train's binary_logloss: 0.682919	valid's binary_logloss: 0.689378
Early stopping, best iteration is:
[158]	train's binary_logloss: 0.684169	valid's binary_logloss: 0.689219
num_leaves, val_score: 0.689021:  95%|#########5| 19/20 [00:23<00:01,  1.08s/it][I 2020-09-27 04:49:28,242] Trial 25 finished with value: 0.6892189395196165 and parameters: {'num_leaves': 5}. Best is trial 18 with value: 0.6890491443530252.
num_leaves, val_score: 0.689021:  95%|#########5| 19/20 [00:23<00:01,  1.08s/it][LightGBM] [Info] Number of positive: 46608, number of negative: 46417
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.016101 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501027 -> initscore=0.004106
[LightGBM] [Info] Start training from score 0.004106
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.651145	valid's binary_logloss: 0.690351
Early stopping, best iteration is:
[34]	train's binary_logloss: 0.674412	valid's binary_logloss: 0.689378
num_leaves, val_score: 0.689021: 100%|##########| 20/20 [00:24<00:00,  1.16s/it][I 2020-09-27 04:49:29,591] Trial 26 finished with value: 0.6893783362619839 and parameters: {'num_leaves': 49}. Best is trial 18 with value: 0.6890491443530252.
num_leaves, val_score: 0.689021: 100%|##########| 20/20 [00:24<00:00,  1.24s/it]
bagging, val_score: 0.688719: 100%|##########| 10/10 [00:11<00:00,  1.18s/it]
[bagging stage, trials 27-36 (per-trial logs omitted): bagging_fraction searched between 0.648 and 0.994 with bagging_freq 1-7; best was trial 33 (bagging_fraction=0.829, bagging_freq=7), improving val_score from 0.689021 to 0.688719]
feature_fraction_stage2, val_score: 0.688719: 100%|##########| 6/6 [00:08<00:00,  1.37s/it]
[feature_fraction_stage2 stage, trials 37-42 (per-trial logs omitted): feature_fraction refined over 0.52-0.68; best was trial 37 (feature_fraction=0.616), which only matched the incumbent val_score 0.688719]
regularization_factors, val_score: 0.688719: 100%|##########| 20/20 [00:24<00:00,  1.20s/it]
[regularization_factors stage, trials 43-62 (per-trial logs omitted): best was trial 54 (lambda_l1 ≈ 1.0e-08, lambda_l2 ≈ 1.0e-08, i.e. effectively no regularization), leaving val_score at 0.688719; larger penalties such as trial 43 (lambda_l1 ≈ 1.13) scored worse]
min_data_in_leaf, val_score: 0.688719:   0%|          | 0/5 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 46608, number of negative: 46417
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.013241 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501027 -> initscore=0.004106
[LightGBM] [Info] Start training from score 0.004106
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.664243	valid's binary_logloss: 0.689415
Early stopping, best iteration is:
[56]	train's binary_logloss: 0.674277	valid's binary_logloss: 0.689398
min_data_in_leaf stage, trials 63-67 (condensed; each trial: 93,025 training rows, 26 features, early stopping after 100 rounds without improvement):
  Trial 63: min_child_samples=25  -> valid binary_logloss 0.689398
  Trial 64: min_child_samples=5   -> 0.689265
  Trial 65: min_child_samples=10  -> 0.689227
  Trial 66: min_child_samples=50  -> 0.689297
  Trial 67: min_child_samples=100 -> 0.689201 (best of this stage)
No trial here improved on the earlier stages, so this fold's best val_score stays at 0.688719.
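The stage names in this log (feature_fraction -> num_leaves -> bagging -> feature_fraction_stage2 -> regularization_factors -> min_data_in_leaf) match the stepwise search order of Optuna's LightGBM integration (LightGBMTuner), which tunes one parameter group at a time and carries the best val_score forward between stages. A minimal sketch of how such a log can be produced, assuming that integration and the 2020-era keyword arguments (the synthetic X/y below stand in for the real features):

# Sketch: stepwise hyperparameter tuning with Optuna's LightGBM integration (assumed tooling)
import numpy as np
import optuna.integration.lightgbm as olgb

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(800, 5)), rng.integers(0, 2, size=800)
X_va, y_va = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)

params = {"objective": "binary", "metric": "binary_logloss"}
tuner = olgb.LightGBMTuner(
    params,
    olgb.Dataset(X_tr, y_tr),
    valid_sets=[olgb.Dataset(X_va, y_va)],
    early_stopping_rounds=100,  # matches "don't improve for 100 rounds" in the log
    verbose_eval=100,           # matches the loss lines printed every 100 iterations
)
tuner.run()  # runs the feature_fraction -> ... -> min_data_in_leaf stages in order
print(tuner.best_score, tuner.best_params)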
Fold : 1
A new Optuna study is created in memory for this fold, and the stepwise search starts over from the feature_fraction stage.

feature_fraction stage, trials 0-6 (93,025 training rows, 26 features):
  Trial 0: feature_fraction=0.5 -> 0.689532
  Trial 1: feature_fraction=0.6 -> 0.689468
  Trial 2: feature_fraction=0.7 -> 0.690079
  Trial 3: feature_fraction=1.0 -> 0.689979
  Trial 4: feature_fraction=0.9 -> 0.689748
  Trial 5: feature_fraction=0.4 -> 0.689014 (best)
  Trial 6: feature_fraction=0.8 -> 0.689914
Best val_score after this stage: 0.689014 (trial 5).
num_leaves stage, trials 7-26:
  Trial 7:  num_leaves=152 -> 0.690335
  Trial 8:  num_leaves=103 -> 0.690519
  Trial 9:  num_leaves=128 -> 0.690547
  Trial 10: num_leaves=6   -> 0.688886
  Trial 11: num_leaves=19  -> 0.689042
  Trial 12: num_leaves=3   -> 0.688550
  Trial 13: num_leaves=6   -> 0.688886 (same as trial 10)
  Trial 14: num_leaves=43  -> 0.689500
  Trial 15: num_leaves=64  -> 0.690163
  Trial 16: num_leaves=247 -> 0.690873
  Trial 17: num_leaves=2   -> 0.688477 (best; ran the full 1,000 rounds without early stopping)
  Trial 18: num_leaves=195 -> 0.691352
  Trial 19: num_leaves=66  -> 0.690055
  Trial 20: num_leaves=6   -> 0.688886 (same as trial 10)
  Trial 21: num_leaves=3   -> 0.688550 (same as trial 12)
  Trial 22: num_leaves=39  -> 0.690019
  Trial 23: num_leaves=42  -> 0.689541
  Trial 24: num_leaves=3   -> 0.688550 (same as trial 12)
  Trial 25: num_leaves=81  -> 0.689286
  Trial 26: num_leaves=26  -> 0.689845
Best val_score after this stage: 0.688477 (trial 17).
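For context when reading these numbers: the winning settings are tiny trees (num_leaves=2-3), and every validation logloss sits just below ln 2 ≈ 0.6931, which is the score of constantly predicting 0.5 on a near-balanced binary target (here 46,746 positives vs. 46,279 negatives). The predictive signal is weak, so strong regularization wins. A quick standalone check of that baseline:

# binary_logloss of always predicting p = 0.5 equals ln(2), regardless of the labels
import numpy as np

y = np.array([0, 1, 1, 0, 1])   # any 0/1 labels
p = np.full(y.shape, 0.5)
logloss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(logloss, np.log(2))        # both ~= 0.693147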
bagging stage, trials 27-36:
  Trial 27: bagging_fraction=0.9057, bagging_freq=1 -> 0.688266
  Trial 28: bagging_fraction=0.9342, bagging_freq=1 -> 0.688350
  Trial 29: bagging_fraction=0.9473, bagging_freq=1 -> 0.688386
  Trial 30: bagging_fraction=0.9478, bagging_freq=1 -> 0.688430
  Trial 31: bagging_fraction=0.9500, bagging_freq=1 -> 0.688391
  Trial 32: bagging_fraction=0.9393, bagging_freq=1 -> 0.688354
  Trial 33: bagging_fraction=0.9386, bagging_freq=1 -> 0.688457
  Trial 34: bagging_fraction=0.9271, bagging_freq=1 -> 0.688263
  Trial 35: bagging_fraction=0.7978, bagging_freq=1 -> 0.688186 (best)
  Trial 36: bagging_fraction=0.7512, bagging_freq=4 -> 0.688299
Best val_score after this stage: 0.688186 (trial 35).
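For reference, bagging_fraction and bagging_freq are plain LightGBM parameters: the fraction of rows sampled for each tree, and how often (in iterations) the bag is redrawn. A self-contained toy run with values near the winning trial 35 (synthetic data, illustrative only):

# Toy example of LightGBM row bagging; data and parameter values are illustrative
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=2.0, size=1000) > 0).astype(int)

params = {
    "objective": "binary",
    "metric": "binary_logloss",
    "bagging_fraction": 0.8,  # sample 80% of rows per tree (trial 35 used ~0.798)
    "bagging_freq": 1,        # redraw the bag every iteration
    "verbose": -1,
}
booster = lgb.train(params, lgb.Dataset(X, y), num_boost_round=50)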
feature_fraction_stage2, trials 37-39 (a finer grid around the stage-1 winner of 0.4):
  Trial 37: feature_fraction=0.416 -> 0.688190 (best of this stage)
  Trial 38: feature_fraction=0.448 -> 0.688254
  Trial 39: feature_fraction=0.480 -> 0.688254 (identical score to trial 38)
None of these beat the running best, so val_score stays at 0.688186.
regularization_factors stage (20 trials planned; trials 40-42 shown here):
  Trial 40: lambda_l1=4.74e-06, lambda_l2=0.01106 -> 0.6881861 (best of these three)
  Trial 41: lambda_l1=2.62e-06, lambda_l2=0.01280 -> 0.6881861
  Trial 42: lambda_l1=2.11e-06, lambda_l2=0.01127 -> 0.6881861
These trials are effectively ties at the sixth decimal, and val_score is still 0.688186.
regularization_factors, val_score: 0.688186:  15%|#5        | 3/20 [00:07<00:44,  2.59s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000812 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  20%|##        | 4/20 [00:10<00:42,  2.64s/it][I 2020-09-27 04:51:40,183] Trial 43 finished with value: 0.6881861188350943 and parameters: {'lambda_l1': 2.871077466536222e-06, 'lambda_l2': 0.01863739043244412}. Best is trial 40 with value: 0.6881860967213117.
regularization_factors, val_score: 0.688186:  20%|##        | 4/20 [00:10<00:42,  2.64s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000761 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  25%|##5       | 5/20 [00:13<00:39,  2.65s/it][I 2020-09-27 04:51:42,840] Trial 44 finished with value: 0.6881860987513834 and parameters: {'lambda_l1': 1.3228363853250858e-06, 'lambda_l2': 0.011769235325842804}. Best is trial 40 with value: 0.6881860967213117.
regularization_factors, val_score: 0.688186:  25%|##5       | 5/20 [00:13<00:39,  2.65s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000756 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  30%|###       | 6/20 [00:15<00:37,  2.66s/it][I 2020-09-27 04:51:45,544] Trial 45 finished with value: 0.6881861035517631 and parameters: {'lambda_l1': 1.083504577056653e-06, 'lambda_l2': 0.013413611446410094}. Best is trial 40 with value: 0.6881860967213117.
regularization_factors, val_score: 0.688186:  30%|###       | 6/20 [00:15<00:37,  2.66s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000815 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686174	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  35%|###5      | 7/20 [00:18<00:34,  2.64s/it][I 2020-09-27 04:51:48,142] Trial 46 finished with value: 0.688186150142476 and parameters: {'lambda_l1': 3.3314910180930466e-06, 'lambda_l2': 0.029327749830602143}. Best is trial 40 with value: 0.6881860967213117.
regularization_factors, val_score: 0.688186:  35%|###5      | 7/20 [00:18<00:34,  2.64s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000818 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  40%|####      | 8/20 [00:21<00:31,  2.65s/it][I 2020-09-27 04:51:50,799] Trial 47 finished with value: 0.6881860757482499 and parameters: {'lambda_l1': 2.3526621184608058e-06, 'lambda_l2': 0.003868598725242898}. Best is trial 47 with value: 0.6881860757482499.
regularization_factors, val_score: 0.688186:  40%|####      | 8/20 [00:21<00:31,  2.65s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000808 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  45%|####5     | 9/20 [00:23<00:29,  2.65s/it][I 2020-09-27 04:51:53,467] Trial 48 finished with value: 0.6881860645612168 and parameters: {'lambda_l1': 3.130361329176902e-06, 'lambda_l2': 2.312087319678873e-05}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  45%|####5     | 9/20 [00:23<00:29,  2.65s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000816 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  50%|#####     | 10/20 [00:26<00:26,  2.65s/it][I 2020-09-27 04:51:56,110] Trial 49 finished with value: 0.6881860867937425 and parameters: {'lambda_l1': 0.0016839304778010226, 'lambda_l2': 1.0716481604934569e-06}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  50%|#####     | 10/20 [00:26<00:26,  2.65s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000807 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689854	valid's binary_logloss: 0.690628
[200]	train's binary_logloss: 0.688573	valid's binary_logloss: 0.689878
[300]	train's binary_logloss: 0.687778	valid's binary_logloss: 0.689241
[400]	train's binary_logloss: 0.687211	valid's binary_logloss: 0.688892
[500]	train's binary_logloss: 0.686781	valid's binary_logloss: 0.688591
[600]	train's binary_logloss: 0.686449	valid's binary_logloss: 0.688474
[700]	train's binary_logloss: 0.686174	valid's binary_logloss: 0.688261
[800]	train's binary_logloss: 0.685939	valid's binary_logloss: 0.688313
Early stopping, best iteration is:
[739]	train's binary_logloss: 0.686081	valid's binary_logloss: 0.688233
regularization_factors, val_score: 0.688186:  55%|#####5    | 11/20 [00:29<00:24,  2.69s/it][I 2020-09-27 04:51:58,894] Trial 50 finished with value: 0.6882329753342665 and parameters: {'lambda_l1': 0.09692234973226965, 'lambda_l2': 8.775766596471004e-07}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  55%|#####5    | 11/20 [00:29<00:24,  2.69s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000815 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  60%|######    | 12/20 [00:31<00:21,  2.68s/it][I 2020-09-27 04:52:01,543] Trial 51 finished with value: 0.6881861075194643 and parameters: {'lambda_l1': 0.0032449073251714445, 'lambda_l2': 3.2716223509922217e-06}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  60%|######    | 12/20 [00:31<00:21,  2.68s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000769 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  65%|######5   | 13/20 [00:34<00:18,  2.67s/it][I 2020-09-27 04:52:04,186] Trial 52 finished with value: 0.6881860698451091 and parameters: {'lambda_l1': 0.0003848047657014028, 'lambda_l2': 9.591675343195672e-05}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  65%|######5   | 13/20 [00:34<00:18,  2.67s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000847 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  70%|#######   | 14/20 [00:37<00:15,  2.65s/it][I 2020-09-27 04:52:06,789] Trial 53 finished with value: 0.6881860703621502 and parameters: {'lambda_l1': 0.0004165194013850262, 'lambda_l2': 0.0001302229603956366}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  70%|#######   | 14/20 [00:37<00:15,  2.65s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000813 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  75%|#######5  | 15/20 [00:42<00:17,  3.49s/it][I 2020-09-27 04:52:12,308] Trial 54 finished with value: 0.6881860706975759 and parameters: {'lambda_l1': 0.00045861316138919045, 'lambda_l2': 5.257878461523017e-05}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  75%|#######5  | 15/20 [00:42<00:17,  3.49s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001772 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  80%|########  | 16/20 [00:47<00:15,  3.79s/it][I 2020-09-27 04:52:16,741] Trial 55 finished with value: 0.6881860672245419 and parameters: {'lambda_l1': 0.00019089434047484262, 'lambda_l2': 8.205565377180615e-05}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  80%|########  | 16/20 [00:47<00:15,  3.79s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000880 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  85%|########5 | 17/20 [00:50<00:10,  3.55s/it][I 2020-09-27 04:52:19,720] Trial 56 finished with value: 0.6881860669984095 and parameters: {'lambda_l1': 0.00017461333741205492, 'lambda_l2': 7.853596223254834e-05}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  85%|########5 | 17/20 [00:50<00:10,  3.55s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.008942 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  90%|######### | 18/20 [00:53<00:06,  3.38s/it][I 2020-09-27 04:52:22,709] Trial 57 finished with value: 0.6881860668888051 and parameters: {'lambda_l1': 0.0001634804643071105, 'lambda_l2': 9.030958952909047e-05}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  90%|######### | 18/20 [00:53<00:06,  3.38s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000820 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186:  95%|#########5| 19/20 [00:56<00:03,  3.29s/it][I 2020-09-27 04:52:25,792] Trial 58 finished with value: 0.6881860653671973 and parameters: {'lambda_l1': 5.5880426121768656e-05, 'lambda_l2': 5.7285984514290246e-05}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186:  95%|#########5| 19/20 [00:56<00:03,  3.29s/it][LightGBM] [Info] Number of positive: 46746, number of negative: 46279
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001313 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93025, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502510 -> initscore=0.010040
[LightGBM] [Info] Start training from score 0.010040
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689852	valid's binary_logloss: 0.690627
[200]	train's binary_logloss: 0.688571	valid's binary_logloss: 0.689877
[300]	train's binary_logloss: 0.687776	valid's binary_logloss: 0.68924
[400]	train's binary_logloss: 0.687205	valid's binary_logloss: 0.688859
[500]	train's binary_logloss: 0.686782	valid's binary_logloss: 0.68858
[600]	train's binary_logloss: 0.686448	valid's binary_logloss: 0.688462
[700]	train's binary_logloss: 0.686171	valid's binary_logloss: 0.688201
Early stopping, best iteration is:
[699]	train's binary_logloss: 0.686173	valid's binary_logloss: 0.688186
regularization_factors, val_score: 0.688186: 100%|##########| 20/20 [00:59<00:00,  3.27s/it][I 2020-09-27 04:52:28,997] Trial 59 finished with value: 0.6881860649508688 and parameters: {'lambda_l1': 3.502289401416132e-05, 'lambda_l2': 1.1183876599807051e-05}. Best is trial 48 with value: 0.6881860645612168.
regularization_factors, val_score: 0.688186: 100%|##########| 20/20 [00:59<00:00,  2.97s/it]
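The recurring "[Warning] Auto-choosing row-wise multi-threading" messages in each run are LightGBM timing which histogram-building layout is faster before training starts. As the warning text itself suggests, pinning the choice removes that small overhead; a minimal sketch follows (which flag to pin is an assumption and depends on the machine's memory).

# Pin the multi-threading layout so LightGBM skips the auto-detection overhead
# flagged in the warnings above (assumption: row-wise; use force_col_wise=True
# instead if memory is tight, as the warning text recommends).
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'force_row_wise': True,
}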
min_data_in_leaf, val_score: 0.688186 (5/5 trials completed)
[I 2020-09-27 04:52:31,924] Trial 60 finished with value: 0.6882475377052306 and parameters: {'min_child_samples': 25}. Best is trial 60 with value: 0.6882475377052306.
[I 2020-09-27 04:52:34,798] Trial 61 finished with value: 0.688205087532443 and parameters: {'min_child_samples': 50}. Best is trial 61 with value: 0.688205087532443.
[I 2020-09-27 04:52:37,667] Trial 62 finished with value: 0.688232682196116 and parameters: {'min_child_samples': 10}. Best is trial 61 with value: 0.688205087532443.
[I 2020-09-27 04:52:41,438] Trial 63 finished with value: 0.6882121446078117 and parameters: {'min_child_samples': 5}. Best is trial 61 with value: 0.688205087532443.
[I 2020-09-27 04:52:44,170] Trial 64 finished with value: 0.6883621338709048 and parameters: {'min_child_samples': 100}. Best is trial 61 with value: 0.688205087532443.
(training traces omitted: best iterations ranged from 531 up to the 1000-round cap, and none of the candidates improved on the stage-entry val_score of 0.688186)
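Note that the tuner's val_score is a binary_logloss, while the CV_score values quoted earlier in this article are accuracies. A minimal sketch of turning a tuned fold booster into that kind of score, assuming the hypothetical booster and X_val/y_val split from the earlier sketch and a 0.5 decision threshold:

# Convert the tuned booster's fold predictions into an accuracy-style CV score
# (assumptions: `booster`, `X_val`, `y_val` come from the earlier sketch).
from sklearn.metrics import accuracy_score

# Predicted win probabilities at the early-stopped iteration.
y_pred_proba = booster.predict(X_val, num_iteration=booster.best_iteration)
# Threshold at 0.5 to get 0/1 predictions, then score against the fold labels.
y_pred = (y_pred_proba > 0.5).astype(int)
print('Fold accuracy:', accuracy_score(y_val, y_pred))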
Fold : 2
[I 2020-09-27 04:52:44,427] A new study created in memory with name: no-name-4beb476a-e1cd-441a-9d78-1d4a1c5ec6fe
feature_fraction, val_score: 0.688306 (7/7 trials completed)
[I 2020-09-27 04:52:45,997] Trial 0 finished with value: 0.689343019442042 and parameters: {'feature_fraction': 0.7}. Best is trial 0 with value: 0.689343019442042.
[I 2020-09-27 04:52:47,648] Trial 1 finished with value: 0.6885538121639901 and parameters: {'feature_fraction': 0.6}. Best is trial 1 with value: 0.6885538121639901.
[I 2020-09-27 04:52:49,556] Trial 2 finished with value: 0.6886536923224554 and parameters: {'feature_fraction': 0.4}. Best is trial 1 with value: 0.6885538121639901.
[I 2020-09-27 04:52:50,831] Trial 3 finished with value: 0.6883061543757897 and parameters: {'feature_fraction': 0.5}. Best is trial 3 with value: 0.6883061543757897.
[I 2020-09-27 04:52:51,651] Trial 4 finished with value: 0.6896333450899451 and parameters: {'feature_fraction': 0.8}. Best is trial 3 with value: 0.6883061543757897.
[I 2020-09-27 04:52:52,738] Trial 5 finished with value: 0.6895740757388882 and parameters: {'feature_fraction': 1.0}. Best is trial 3 with value: 0.6883061543757897.
[I 2020-09-27 04:52:53,795] Trial 6 finished with value: 0.6887687471521096 and parameters: {'feature_fraction': 0.8999999999999999}. Best is trial 3 with value: 0.6883061543757897.
(training traces omitted: each fold-2 run trained on 93,026 rows / 26 features and early-stopped between iterations 39 and 128)
num_leaves, val_score: 0.688306 (trials 7-17 of 20 shown; the log breaks off mid-stage)
[I 2020-09-27 04:52:54,861] Trial 7 finished with value: 0.6887216519277777 and parameters: {'num_leaves': 63}. Best is trial 7 with value: 0.6887216519277777.
[I 2020-09-27 04:52:56,310] Trial 8 finished with value: 0.6910909448392947 and parameters: {'num_leaves': 247}. Best is trial 7 with value: 0.6887216519277777.
[I 2020-09-27 04:52:57,203] Trial 9 finished with value: 0.6894087377385544 and parameters: {'num_leaves': 36}. Best is trial 7 with value: 0.6887216519277777.
[I 2020-09-27 04:52:58,582] Trial 10 finished with value: 0.6904319926657629 and parameters: {'num_leaves': 243}. Best is trial 7 with value: 0.6887216519277777.
[I 2020-09-27 04:52:59,794] Trial 11 finished with value: 0.6902240196479881 and parameters: {'num_leaves': 155}. Best is trial 7 with value: 0.6887216519277777.
[I 2020-09-27 04:53:00,935] Trial 12 finished with value: 0.689485631745598 and parameters: {'num_leaves': 141}. Best is trial 7 with value: 0.6887216519277777.
[I 2020-09-27 04:53:04,147] Trial 13 finished with value: 0.6883565901301231 and parameters: {'num_leaves': 2}. Best is trial 13 with value: 0.6883565901301231.
[I 2020-09-27 04:53:05,090] Trial 14 finished with value: 0.6891638737730074 and parameters: {'num_leaves': 16}. Best is trial 13 with value: 0.6883565901301231.
[I 2020-09-27 04:53:06,262] Trial 15 finished with value: 0.6904995062634406 and parameters: {'num_leaves': 90}. Best is trial 13 with value: 0.6883565901301231.
[I 2020-09-27 04:53:07,528] Trial 16 finished with value: 0.6899575658354835 and parameters: {'num_leaves': 175}. Best is trial 13 with value: 0.6883565901301231.
[I 2020-09-27 04:53:08,520] Trial 17 finished with value: 0.6897051455072504 and parameters: {'num_leaves': 99}. Best is trial 13 with value: 0.6883565901301231.
(training traces omitted: larger num_leaves values overfit quickly and early-stopped within a few dozen iterations; the log is truncated during the next trial's run)
[200]	train's binary_logloss: 0.683128	valid's binary_logloss: 0.689035
[300]	train's binary_logloss: 0.680538	valid's binary_logloss: 0.68913
Early stopping, best iteration is:
[237]	train's binary_logloss: 0.682127	valid's binary_logloss: 0.688927
num_leaves, val_score: 0.688306:  60%|######    | 12/20 [00:15<00:09,  1.22s/it][I 2020-09-27 04:53:09,653] Trial 18 finished with value: 0.6889270132361176 and parameters: {'num_leaves': 5}. Best is trial 13 with value: 0.6883565901301231.
num_leaves, val_score: 0.688306:  60%|######    | 12/20 [00:15<00:09,  1.22s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001555 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.566389	valid's binary_logloss: 0.69472
Early stopping, best iteration is:
[21]	train's binary_logloss: 0.655891	valid's binary_logloss: 0.689891
num_leaves, val_score: 0.688306:  65%|######5   | 13/20 [00:17<00:08,  1.23s/it][I 2020-09-27 04:53:10,888] Trial 19 finished with value: 0.6898906845392911 and parameters: {'num_leaves': 199}. Best is trial 13 with value: 0.6883565901301231.
num_leaves, val_score: 0.688306:  65%|######5   | 13/20 [00:17<00:08,  1.23s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000971 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.649368	valid's binary_logloss: 0.68967
Early stopping, best iteration is:
[86]	train's binary_logloss: 0.65419	valid's binary_logloss: 0.689079
num_leaves, val_score: 0.688306:  70%|#######   | 14/20 [00:18<00:07,  1.17s/it][I 2020-09-27 04:53:11,922] Trial 20 finished with value: 0.689079067247648 and parameters: {'num_leaves': 53}. Best is trial 13 with value: 0.6883565901301231.
num_leaves, val_score: 0.688306:  70%|#######   | 14/20 [00:18<00:07,  1.17s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001094 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.616545	valid's binary_logloss: 0.691175
Early stopping, best iteration is:
[24]	train's binary_logloss: 0.668053	valid's binary_logloss: 0.690395
num_leaves, val_score: 0.688306:  75%|#######5  | 15/20 [00:19<00:05,  1.10s/it][I 2020-09-27 04:53:12,870] Trial 21 finished with value: 0.6903952269669797 and parameters: {'num_leaves': 105}. Best is trial 13 with value: 0.6883565901301231.
num_leaves, val_score: 0.688306:  75%|#######5  | 15/20 [00:19<00:05,  1.10s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000957 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.560451	valid's binary_logloss: 0.695619
Early stopping, best iteration is:
[21]	train's binary_logloss: 0.654473	valid's binary_logloss: 0.690005
num_leaves, val_score: 0.688306:  80%|########  | 16/20 [00:20<00:04,  1.16s/it][I 2020-09-27 04:53:14,161] Trial 22 finished with value: 0.6900048045676193 and parameters: {'num_leaves': 209}. Best is trial 13 with value: 0.6883565901301231.
num_leaves, val_score: 0.688306:  80%|########  | 16/20 [00:20<00:04,  1.16s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000964 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.683466	valid's binary_logloss: 0.688908
[200]	train's binary_logloss: 0.678323	valid's binary_logloss: 0.688588
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.681889	valid's binary_logloss: 0.688484
num_leaves, val_score: 0.688306:  85%|########5 | 17/20 [00:21<00:03,  1.08s/it][I 2020-09-27 04:53:15,053] Trial 23 finished with value: 0.68848442653872 and parameters: {'num_leaves': 8}. Best is trial 13 with value: 0.6883565901301231.
num_leaves, val_score: 0.688306:  85%|########5 | 17/20 [00:21<00:03,  1.08s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000948 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.677505	valid's binary_logloss: 0.68888
[200]	train's binary_logloss: 0.667867	valid's binary_logloss: 0.689195
Early stopping, best iteration is:
[101]	train's binary_logloss: 0.677397	valid's binary_logloss: 0.688863
num_leaves, val_score: 0.688306:  90%|######### | 18/20 [00:22<00:02,  1.01s/it][I 2020-09-27 04:53:15,914] Trial 24 finished with value: 0.6888633696872527 and parameters: {'num_leaves': 15}. Best is trial 13 with value: 0.6883565901301231.
num_leaves, val_score: 0.688306:  90%|######### | 18/20 [00:22<00:02,  1.01s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000957 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.687323	valid's binary_logloss: 0.689724
[200]	train's binary_logloss: 0.684864	valid's binary_logloss: 0.68927
[300]	train's binary_logloss: 0.682878	valid's binary_logloss: 0.688942
[400]	train's binary_logloss: 0.681166	valid's binary_logloss: 0.688902
Early stopping, best iteration is:
[348]	train's binary_logloss: 0.682019	valid's binary_logloss: 0.688739
num_leaves, val_score: 0.688306:  95%|#########5| 19/20 [00:23<00:01,  1.18s/it][I 2020-09-27 04:53:17,497] Trial 25 finished with value: 0.688738728369025 and parameters: {'num_leaves': 4}. Best is trial 13 with value: 0.6883565901301231.
num_leaves, val_score: 0.688306:  95%|#########5| 19/20 [00:23<00:01,  1.18s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000982 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.638699	valid's binary_logloss: 0.691078
Early stopping, best iteration is:
[34]	train's binary_logloss: 0.669389	valid's binary_logloss: 0.689921
num_leaves, val_score: 0.688306: 100%|##########| 20/20 [00:24<00:00,  1.09s/it][I 2020-09-27 04:53:18,377] Trial 26 finished with value: 0.6899211705265006 and parameters: {'num_leaves': 69}. Best is trial 13 with value: 0.6883565901301231.
num_leaves, val_score: 0.688306: 100%|##########| 20/20 [00:24<00:00,  1.23s/it]
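The stage names in this log (num_leaves, bagging, feature_fraction_stage2, regularization_factors, min_data_in_leaf) are exactly the stepwise sweep performed by Optuna's LightGBM integration, so the tuning above can presumably be reproduced with LightGBMTuner. A minimal sketch under that assumption; the variable names (X_train, y_train, X_valid, y_valid) are illustrative, not from the original notebook:

# Sketch: stepwise hyperparameter tuning with Optuna's LightGBMTuner
# (assumed to be what produced the log above).
import lightgbm as lgb
import optuna.integration.lightgbm as opt_lgb

params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
}
dtrain = lgb.Dataset(X_train, label=y_train)  # X_train/y_train: assumed split
dvalid = lgb.Dataset(X_valid, label=y_valid)

# The tuner sweeps feature_fraction, num_leaves, bagging,
# feature_fraction_stage2, regularization_factors and min_data_in_leaf
# in order, matching the stage names seen in the log.
tuner = opt_lgb.LightGBMTuner(
    params,
    dtrain,
    valid_sets=[dtrain, dvalid],
    early_stopping_rounds=100,  # matches "don't improve for 100 rounds"
    verbose_eval=100,           # matches the per-100-iteration loss lines
)
tuner.run()

print(tuner.best_score)   # best validation binary_logloss
print(tuner.best_params)  # the tuned parameter set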
bagging search, trials 27-36 (running best val_score: 0.688306):

- Trial 27: bagging_fraction=0.918, bagging_freq=7 -> 0.689037
- Trial 28: bagging_fraction=0.422, bagging_freq=1 -> 0.689340
- Trial 29: bagging_fraction=0.403, bagging_freq=4 -> 0.690250
- Trial 30: bagging_fraction=0.730, bagging_freq=1 -> 0.689363
- Trial 31: bagging_fraction=0.994, bagging_freq=7 -> 0.688819 (stage best)
- Trial 32: bagging_fraction=0.632, bagging_freq=4 -> 0.689362
- Trial 33: bagging_fraction=0.625, bagging_freq=6 -> 0.689318
- Trial 34: bagging_fraction=0.840, bagging_freq=2 -> 0.689193
- Trial 35: bagging_fraction=0.529, bagging_freq=3 -> 0.690020
- Trial 36: bagging_fraction=0.784, bagging_freq=5 -> 0.689508

Stage best: trial 31 (bagging_fraction=0.994, bagging_freq=7) at 0.688819. Fractions close to 1.0, i.e. almost no row subsampling, work best here, and no bagging trial improved on the running best of 0.688306.
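As an aside, the "[LightGBM] [Warning] Auto-choosing row-wise multi-threading" block is printed once per trial, and the message itself tells us the fix: pin the layout so LightGBM skips the probing step. A one-line addition to the params dict used above (sketch):

# Pin the histogram layout to skip the per-trial auto-probe warning.
params['force_row_wise'] = True  # or 'force_col_wise': True if memory is tight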
feature_fraction_stage2 search, trials 37-42 (fine-tuning feature_fraction around its earlier optimum):

- Trial 37: feature_fraction=0.516 -> 0.688306 (stage best)
- Trial 38: feature_fraction=0.452 -> 0.689468
- Trial 39: feature_fraction=0.580 -> 0.689477
- Trial 40: feature_fraction=0.420 -> 0.688613
- Trial 41: feature_fraction=0.548 -> 0.688368
- Trial 42: feature_fraction=0.484 -> 0.688306

Trials 37 and 42 both reproduce the running best of 0.688306 (identical runs, best iteration 128), so feature_fraction in the 0.48-0.52 range is where this model settles.
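Since trials 37 and 42 already hit the overall best, there is no need to retrain at the end: if the tuner sketch above is used, the booster behind the best trial can be retrieved directly. A sketch, assuming the tuner object from earlier (get_best_booster() is Optuna's accessor for it; X_test is the assumed test matrix):

# Reuse the best booster found during tuning instead of retraining.
best_booster = tuner.get_best_booster()
pred = best_booster.predict(X_test)  # win-probability predictions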
regularization_factors, val_score: 0.688306:   0%|          | 0/20 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000960 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.66526	valid's binary_logloss: 0.689325
Early stopping, best iteration is:
[74]	train's binary_logloss: 0.670714	valid's binary_logloss: 0.688927
regularization_factors, val_score: 0.688306:   5%|5         | 1/20 [00:00<00:16,  1.13it/s][I 2020-09-27 04:53:35,096] Trial 43 finished with value: 0.6889268680203512 and parameters: {'lambda_l1': 3.489683036490915e-05, 'lambda_l2': 0.03596499685458891}. Best is trial 43 with value: 0.6889268680203512.
regularization_factors, val_score: 0.688306:   5%|5         | 1/20 [00:00<00:16,  1.13it/s][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000943 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.668189	valid's binary_logloss: 0.688592
Early stopping, best iteration is:
[76]	train's binary_logloss: 0.672479	valid's binary_logloss: 0.688382
regularization_factors, val_score: 0.688306:  10%|#         | 2/20 [00:01<00:16,  1.11it/s][I 2020-09-27 04:53:36,045] Trial 44 finished with value: 0.6883820148603388 and parameters: {'lambda_l1': 4.2910298459529574, 'lambda_l2': 4.4841181697512024e-08}. Best is trial 44 with value: 0.6883820148603388.
regularization_factors, val_score: 0.688306:  10%|#         | 2/20 [00:01<00:16,  1.11it/s][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001239 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  15%|#5        | 3/20 [00:02<00:16,  1.04it/s][I 2020-09-27 04:53:37,146] Trial 45 finished with value: 0.6883061543637848 and parameters: {'lambda_l1': 1.6123080908302226e-08, 'lambda_l2': 1.1923474224643993e-07}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  15%|#5        | 3/20 [00:02<00:16,  1.04it/s][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001032 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  20%|##        | 4/20 [00:04<00:16,  1.01s/it][I 2020-09-27 04:53:38,271] Trial 46 finished with value: 0.6883061543726559 and parameters: {'lambda_l1': 1.4997584226077242e-08, 'lambda_l2': 1.9309033302845346e-08}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  20%|##        | 4/20 [00:04<00:16,  1.01s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000984 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  25%|##5       | 5/20 [00:05<00:15,  1.05s/it][I 2020-09-27 04:53:39,412] Trial 47 finished with value: 0.6883061543725224 and parameters: {'lambda_l1': 2.1537994864162705e-08, 'lambda_l2': 1.7013032790342447e-08}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  25%|##5       | 5/20 [00:05<00:15,  1.05s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000950 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  30%|###       | 6/20 [00:06<00:14,  1.06s/it][I 2020-09-27 04:53:40,511] Trial 48 finished with value: 0.6883061543697307 and parameters: {'lambda_l1': 4.5182334372625207e-08, 'lambda_l2': 1.1900726497967697e-08}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  30%|###       | 6/20 [00:06<00:14,  1.06s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000959 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  35%|###5      | 7/20 [00:07<00:13,  1.07s/it][I 2020-09-27 04:53:41,604] Trial 49 finished with value: 0.6883061543737345 and parameters: {'lambda_l1': 1.214350265927941e-08, 'lambda_l2': 1.243915863528419e-08}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  35%|###5      | 7/20 [00:07<00:13,  1.07s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000942 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  40%|####      | 8/20 [00:08<00:12,  1.08s/it][I 2020-09-27 04:53:42,706] Trial 50 finished with value: 0.6883061543740199 and parameters: {'lambda_l1': 1.0018700830284507e-08, 'lambda_l2': 1.1810016327451072e-08}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  40%|####      | 8/20 [00:08<00:12,  1.08s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000961 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  45%|####5     | 9/20 [00:09<00:12,  1.10s/it][I 2020-09-27 04:53:43,836] Trial 51 finished with value: 0.6883061543739415 and parameters: {'lambda_l1': 1.034818635491755e-08, 'lambda_l2': 1.1390198269989985e-08}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  45%|####5     | 9/20 [00:09<00:12,  1.10s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000968 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  50%|#####     | 10/20 [00:10<00:11,  1.12s/it][I 2020-09-27 04:53:45,004] Trial 52 finished with value: 0.6883061543730142 and parameters: {'lambda_l1': 1.67980865791993e-08, 'lambda_l2': 1.4273245882473324e-08}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  50%|#####     | 10/20 [00:10<00:11,  1.12s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000947 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  55%|#####5    | 11/20 [00:11<00:10,  1.11s/it][I 2020-09-27 04:53:46,102] Trial 53 finished with value: 0.6883061543727123 and parameters: {'lambda_l1': 1.54156161065481e-08, 'lambda_l2': 1.961580255914531e-08}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  55%|#####5    | 11/20 [00:11<00:10,  1.11s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001597 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  60%|######    | 12/20 [00:12<00:08,  1.10s/it][I 2020-09-27 04:53:47,171] Trial 54 finished with value: 0.688306154369646 and parameters: {'lambda_l1': 2.6426720558168387e-08, 'lambda_l2': 3.5118264244271155e-08}. Best is trial 45 with value: 0.6883061543637848.
regularization_factors, val_score: 0.688306:  60%|######    | 12/20 [00:12<00:08,  1.10s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000949 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  65%|######5   | 13/20 [00:14<00:07,  1.10s/it][I 2020-09-27 04:53:48,280] Trial 55 finished with value: 0.6883061543276774 and parameters: {'lambda_l1': 6.8998257219367e-08, 'lambda_l2': 5.65558107194084e-07}. Best is trial 55 with value: 0.6883061543276774.
regularization_factors, val_score: 0.688306:  65%|######5   | 13/20 [00:14<00:07,  1.10s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001012 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  70%|#######   | 14/20 [00:15<00:06,  1.13s/it][I 2020-09-27 04:53:49,474] Trial 56 finished with value: 0.6883061540275088 and parameters: {'lambda_l1': 7.521323057192409e-07, 'lambda_l2': 3.7320052285004596e-06}. Best is trial 56 with value: 0.6883061540275088.
regularization_factors, val_score: 0.688306:  70%|#######   | 14/20 [00:15<00:06,  1.13s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000962 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  75%|#######5  | 15/20 [00:16<00:05,  1.13s/it][I 2020-09-27 04:53:50,589] Trial 57 finished with value: 0.6883061539600913 and parameters: {'lambda_l1': 1.1451365202695885e-06, 'lambda_l2': 4.1525999441678045e-06}. Best is trial 57 with value: 0.6883061539600913.
regularization_factors, val_score: 0.688306:  75%|#######5  | 15/20 [00:16<00:05,  1.13s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000947 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.646099	valid's binary_logloss: 0.689036
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.659459	valid's binary_logloss: 0.688306
regularization_factors, val_score: 0.688306:  80%|########  | 16/20 [00:17<00:04,  1.12s/it][I 2020-09-27 04:53:51,694] Trial 58 finished with value: 0.6883061537046381 and parameters: {'lambda_l1': 1.87654882489576e-06, 'lambda_l2': 6.539895147651026e-06}. Best is trial 58 with value: 0.6883061537046381.
regularization_factors, val_score: 0.688306:  80%|########  | 16/20 [00:17<00:04,  1.12s/it][LightGBM] [Info] Number of positive: 46729, number of negative: 46297
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000888 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.502322 -> initscore=0.009288
[LightGBM] [Info] Start training from score 0.009288
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665128	valid's binary_logloss: 0.688811
regularization_factors stage (log condensed): trials 59-62 probed lambda_l1/lambda_l2 values on the order of 1e-06 to 5e-06 against this fold's split (93,026 rows: 46,729 positive / 46,297 negative, 26 features). Every run early-stopped at iteration 128 with train binary_logloss 0.659459 and valid binary_logloss 0.688306. Best: trial 58, value 0.6883061537046381. Stage finished at 20/20 trials, val_score 0.688306.
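The "[Warning] Auto-choosing ... multi-threading" block repeats on every trial because LightGBM re-tests the threading layout for each booster. As the warning itself suggests, fixing the layout up front removes it; the sketch below uses documented LightGBM parameters, not anything from the notebook's actual code.

# Pin the multi-threading layout so LightGBM skips the per-booster test,
# and lower verbosity so the Optuna progress bars stay readable.
params = {
    "objective": "binary",
    "metric": "binary_logloss",
    "force_col_wise": True,  # skip the row-wise/col-wise auto-test
    "verbosity": -1,         # suppress [Info]/[Warning] output
}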
min_data_in_leaf stage (log condensed): five trials swept min_child_samples over {5, 10, 25, 50, 100}. Best: trial 65 with min_child_samples=25, early-stopped at iteration 98 with valid binary_logloss 0.6881665249243496. Stage finished at 5/5 trials, val_score 0.688167.
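The stage names in these logs (feature_fraction, num_leaves, bagging, feature_fraction_stage2, regularization_factors, min_data_in_leaf) are the fixed search order of Optuna's LightGBM integration, which tunes one parameter group at a time. A minimal sketch of how such a run is typically launched; the synthetic data and variable names are placeholders, not the notebook's actual code.

import numpy as np
import optuna.integration.lightgbm as opt_lgb
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; the real input is the preprocessed train/test frames.
rng = np.random.default_rng(71)
X = rng.normal(size=(2000, 26))
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=71)

dtrain = opt_lgb.Dataset(X_tr, label=y_tr)
dvalid = opt_lgb.Dataset(X_val, label=y_val, reference=dtrain)

params = {"objective": "binary", "metric": "binary_logloss"}

# The tuner sweeps feature_fraction -> num_leaves -> bagging ->
# feature_fraction_stage2 -> regularization_factors -> min_data_in_leaf,
# exactly the stage names that appear in the log.
booster = opt_lgb.train(
    params,
    dtrain,
    valid_sets=[dtrain, dvalid],
    early_stopping_rounds=100,  # matches "don't improve for 100 rounds"
    verbose_eval=100,           # matches the per-100-iteration log lines
)
print(booster.params)  # parameters after the stepwise search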
Fold : 3
(A new in-memory Optuna study is created for this fold.)
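The "Fold : 3" marker and the fresh in-memory study show that the tuner is re-run from scratch for every CV fold. A sketch of that outer loop, reusing opt_lgb, params, X, and y from the snippet above; the fold numbering is illustrative.

from sklearn.model_selection import KFold

# One tuner run per fold: each opt_lgb.train() call builds a fresh
# in-memory Optuna study, matching the "A new study created in memory" lines.
kf = KFold(n_splits=5, shuffle=True, random_state=71)
for fold, (tr_idx, va_idx) in enumerate(kf.split(X)):
    print(f"Fold : {fold}")
    dtr = opt_lgb.Dataset(X[tr_idx], label=y[tr_idx])
    dva = opt_lgb.Dataset(X[va_idx], label=y[va_idx], reference=dtr)
    opt_lgb.train(params, dtr, valid_sets=[dtr, dva],
                  early_stopping_rounds=100, verbose_eval=100)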
feature_fraction stage (log condensed): seven trials swept feature_fraction over {0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} on this fold's split (93,026 rows: 46,662 positive / 46,364 negative, 26 features). Best: trial 4 with feature_fraction=0.8, early-stopped at iteration 194 with valid binary_logloss 0.6888944409719155. Stage finished at 7/7 trials, val_score 0.688894.
num_leaves stage (log condensed): twenty trials swept num_leaves from 2 to 252. Large values early-stopped within the first few dozen iterations as the train/valid gap widened (e.g. num_leaves=252 at iteration 9), while the smallest values needed hundreds of iterations. Best: trial 21 with num_leaves=10, early-stopped at iteration 194 with valid binary_logloss 0.6888838019748574. Stage finished at 20/20 trials, val_score 0.688884.
bagging stage (log condensed): ten trials sampled bagging_fraction in roughly [0.41, 1.0] and bagging_freq in {1, 2, 4, 7}. Best: trial 34 with bagging_fraction=0.7616456433874508 and bagging_freq=4, early-stopped at iteration 241 with valid binary_logloss 0.6881176904104821. Stage finished at 10/10 trials, val_score 0.688118.
feature_fraction_stage2, val_score: 0.688118:   0%|          | 0/6 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 46662, number of negative: 46364
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001509 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501602 -> initscore=0.006407
[LightGBM] [Info] Start training from score 0.006407
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.680962	valid's binary_logloss: 0.689027
[200]	train's binary_logloss: 0.673815	valid's binary_logloss: 0.689025
Early stopping, best iteration is:
[122]	train's binary_logloss: 0.679212	valid's binary_logloss: 0.688791
feature_fraction_stage2, val_score: 0.688118:  17%|#6        | 1/6 [00:01<00:05,  1.04s/it][I 2020-09-27 04:54:49,838] Trial 37 finished with value: 0.6887910798116761 and parameters: {'feature_fraction': 0.88}. Best is trial 37 with value: 0.6887910798116761.
feature_fraction_stage2, val_score: 0.688118:  17%|#6        | 1/6 [00:01<00:05,  1.04s/it][LightGBM] [Info] Number of positive: 46662, number of negative: 46364
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.008410 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501602 -> initscore=0.006407
[LightGBM] [Info] Start training from score 0.006407
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.681123	valid's binary_logloss: 0.688594
[200]	train's binary_logloss: 0.674146	valid's binary_logloss: 0.688526
Early stopping, best iteration is:
[128]	train's binary_logloss: 0.679071	valid's binary_logloss: 0.688123
feature_fraction_stage2, val_score: 0.688118:  33%|###3      | 2/6 [00:02<00:04,  1.03s/it][I 2020-09-27 04:54:50,833] Trial 38 finished with value: 0.6881231868561525 and parameters: {'feature_fraction': 0.7200000000000001}. Best is trial 38 with value: 0.6881231868561525.
feature_fraction_stage2, val_score: 0.688118:  33%|###3      | 2/6 [00:02<00:04,  1.03s/it][LightGBM] [Info] Number of positive: 46662, number of negative: 46364
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001586 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501602 -> initscore=0.006407
[LightGBM] [Info] Start training from score 0.006407
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.681076	valid's binary_logloss: 0.689153
[200]	train's binary_logloss: 0.674066	valid's binary_logloss: 0.688436
[300]	train's binary_logloss: 0.667548	valid's binary_logloss: 0.688896
Early stopping, best iteration is:
[241]  train's binary_logloss: 0.671442  valid's binary_logloss: 0.688118
feature_fraction_stage2: 6/6 trials (each trial also re-prints LightGBM's "Auto-choosing col-/row-wise multi-threading" overhead warning)
  Trial 39: 0.688118 (feature_fraction=0.816)  <- best
  Trial 40: 0.688872 (feature_fraction=0.784)
  Trial 41: 0.688872 (feature_fraction=0.752)
  Trial 42: 0.688999 (feature_fraction=0.848)
Best is Trial 39 with value 0.6881176904104821; stage val_score: 0.688118.
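Every trial in these logs re-runs LightGBM's layout benchmark and prints the "Auto-choosing col-wise/row-wise multi-threading" warning. As the warning text itself suggests, pinning the layout in the parameter dict removes that per-trial overhead. A minimal sketch using the standard LightGBM options (not something the baseline notebook does):

```python
import lightgbm as lgb

params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    # Pin the histogram-building layout so LightGBM skips the
    # col-wise vs. row-wise benchmark (and its warning) every run.
    # Col-wise uses less memory; row-wise is often faster on many rows.
    'force_col_wise': True,
}
```

Either flag works; the log alternates between recommending `force_col_wise=true` and `force_row_wise=true` depending on which layout the benchmark happened to pick.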
regularization_factors: 20/20 trials (each trial retrains the same model with 100-round early stopping)
  Trial 43: 0.68879341 (lambda_l1=3.85e-02, lambda_l2=1.20e-08)
  Trial 44: 0.68887791 (lambda_l1=1.63e-08, lambda_l2=9.98e+00)
  Trial 45: 0.68811766 (lambda_l1=7.53e-08, lambda_l2=6.70e-04)
  Trial 46: 0.68811767 (lambda_l1=2.80e-08, lambda_l2=4.05e-04)
  Trial 47: 0.68811767 (lambda_l1=1.56e-08, lambda_l2=4.85e-04)
  Trial 48: 0.68811767 (lambda_l1=1.18e-08, lambda_l2=4.51e-04)
  Trial 49: 0.68811766 (lambda_l1=1.36e-08, lambda_l2=5.42e-04)
  Trial 50: 0.68811765 (lambda_l1=1.54e-08, lambda_l2=8.42e-04)
  Trial 51: 0.68811766 (lambda_l1=1.15e-08, lambda_l2=6.10e-04)
  Trial 52: 0.68811765 (lambda_l1=1.02e-08, lambda_l2=7.95e-04)
  Trial 53: 0.68811765 (lambda_l1=1.67e-08, lambda_l2=8.59e-04)
  Trial 54: 0.68811762 (lambda_l1=1.32e-08, lambda_l2=1.41e-03)  <- best
  Trial 55: 0.68845976 (lambda_l1=1.19e-08, lambda_l2=7.78e-03)
  Trial 56: 0.68867623 (lambda_l1=1.04e-06, lambda_l2=1.11e-02)
  Trial 57: 0.68871147 (lambda_l1=2.06e-07, lambda_l2=4.23e-03)
  Trial 58: 0.68811769 (lambda_l1=1.11e-06, lambda_l2=2.72e-05)
  Trial 59: 0.68811769 (lambda_l1=1.46e-07, lambda_l2=9.04e-06)
  Trial 60: 0.68853559 (lambda_l1=1.31e-08, lambda_l2=7.17e-03)
  Trial 61: 0.68811767 (lambda_l1=1.02e-08, lambda_l2=4.13e-04)
  Trial 62: 0.68811764 (lambda_l1=1.23e-08, lambda_l2=1.03e-03)
Best is Trial 54 with value 0.6881176183828548; stage val_score: 0.688118.
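For reference, the training pattern behind each of these trials is plain `lgb.train` with 100-round early stopping and evaluation printed every 100 iterations, with only `lambda_l1`/`lambda_l2` changing between trials. A minimal sketch, assuming the 2020-era LightGBM API (`early_stopping_rounds` and `verbose_eval` as keyword arguments; current versions use callbacks instead) and a fold split `X_tr, y_tr, X_va, y_va` standing in for the notebook's variables:

```python
import lightgbm as lgb

params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'lambda_l1': 1.32e-08,  # values from this stage's best trial (Trial 54)
    'lambda_l2': 1.41e-03,
}

dtrain = lgb.Dataset(X_tr, label=y_tr)
dvalid = lgb.Dataset(X_va, label=y_va, reference=dtrain)

model = lgb.train(
    params,
    dtrain,
    num_boost_round=10000,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],  # matches "train's/valid's binary_logloss" in the log
    early_stopping_rounds=100,       # "Training until validation scores don't improve for 100 rounds"
    verbose_eval=100,                # the [100]/[200]/[300] evaluation lines
)
```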
min_data_in_leaf: 5/5 trials
  Trial 63: 0.688843 (min_child_samples=25)
  Trial 64: 0.689261 (min_child_samples=5)
  Trial 65: 0.688898 (min_child_samples=50)
  Trial 66: 0.688731 (min_child_samples=10)
  Trial 67: 0.688301 (min_child_samples=100)  <- best of stage
Best is Trial 67 with value 0.6883012831237107; stage val_score stays 0.688118 (no trial beat the running best).
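The stage names running through this log (feature_fraction, num_leaves, bagging, feature_fraction_stage2, regularization_factors, min_data_in_leaf) are the fixed stepwise schedule of Optuna's LightGBM tuner, and the `Fold : 4` marker just below suggests the tuner is re-run once per CV fold. A minimal sketch of that setup, assuming optuna 2.x's `optuna.integration.lightgbm` wrapper; `train_x`/`train_y` are illustrative names, not the notebook's:

```python
import lightgbm as lgb
import optuna.integration.lightgbm as opt_lgb
from sklearn.model_selection import KFold

params = {'objective': 'binary', 'metric': 'binary_logloss'}

kf = KFold(n_splits=5, shuffle=True, random_state=71)
for fold, (tr_idx, va_idx) in enumerate(kf.split(train_x)):
    print(f'Fold : {fold}')
    dtrain = lgb.Dataset(train_x.iloc[tr_idx], label=train_y.iloc[tr_idx])
    dvalid = lgb.Dataset(train_x.iloc[va_idx], label=train_y.iloc[va_idx])
    # One Optuna study per fold; the wrapper sweeps the parameter
    # groups in a fixed order, keeping the best value found so far.
    booster = opt_lgb.train(
        params, dtrain,
        valid_sets=[dtrain, dvalid], valid_names=['train', 'valid'],
        early_stopping_rounds=100, verbose_eval=100,
    )
    print(booster.params)  # tuned parameters for this fold
```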
Fold : 4
[I 2020-09-27 04:55:31,677] A new study created in memory with name: no-name-9f8068ee-5ed7-40e8-b31b-fe76cdedea06
feature_fraction: 7/7 trials (fold 4)
  Trial 0: 0.689789 (feature_fraction=0.6)
  Trial 1: 0.690440 (feature_fraction=0.4)
  Trial 2: 0.689555 (feature_fraction=0.5)
  Trial 3: 0.689614 (feature_fraction=0.7)
  Trial 4: 0.690379 (feature_fraction=0.8)
  Trial 5: 0.689224 (feature_fraction=1.0)  <- best
  Trial 6: 0.689393 (feature_fraction=0.9)
Best is Trial 5 with value 0.689223941248766; stage val_score: 0.689224.
num_leaves: 17/20 trials so far
  Trial 7:  0.691123 (num_leaves=200)
  Trial 8:  0.690967 (num_leaves=165)
  Trial 9:  0.690291 (num_leaves=104)
  Trial 10: 0.689263 (num_leaves=4)
  Trial 11: 0.689263 (num_leaves=4)
  Trial 12: 0.689439 (num_leaves=21)
  Trial 13: 0.689656 (num_leaves=71)
  Trial 14: 0.692043 (num_leaves=242)
  Trial 15: 0.689923 (num_leaves=50)
  Trial 16: 0.689963 (num_leaves=119)
  Trial 17: 0.688951 (num_leaves=3)  <- best so far
  Trial 18: 0.690967 (num_leaves=165)
  Trial 19: 0.689909 (num_leaves=76)
  Trial 20: 0.689428 (num_leaves=38)
  Trial 21: 0.689263 (num_leaves=4)
  Trial 22: 0.689263 (num_leaves=4)
  Trial 23: 0.689554 (num_leaves=67)
Best so far is Trial 17 with value 0.6889511525087615; stage val_score: 0.688951.
num_leaves, val_score: 0.688951:  85%|########5 | 17/20 [00:21<00:03,  1.17s/it][LightGBM] [Info] Number of positive: 46664, number of negative: 46362
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001626 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501623 -> initscore=0.006493
[LightGBM] [Info] Start training from score 0.006493
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.660898	valid's binary_logloss: 0.690623
Early stopping, best iteration is:
[44]	train's binary_logloss: 0.675571	valid's binary_logloss: 0.689922
num_leaves, val_score: 0.688951:  90%|######### | 18/20 [00:22<00:02,  1.09s/it][I 2020-09-27 04:56:00,277] Trial 24 finished with value: 0.6899216528961626 and parameters: {'num_leaves': 33}. Best is trial 17 with value: 0.6889511525087615.
num_leaves, val_score: 0.688951:  90%|######### | 18/20 [00:22<00:02,  1.09s/it][LightGBM] [Info] Number of positive: 46664, number of negative: 46362
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002059 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501623 -> initscore=0.006493
[LightGBM] [Info] Start training from score 0.006493
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.614335	valid's binary_logloss: 0.691319
Early stopping, best iteration is:
[19]	train's binary_logloss: 0.67155	valid's binary_logloss: 0.689698
num_leaves, val_score: 0.688951:  95%|#########5| 19/20 [00:23<00:01,  1.08s/it][I 2020-09-27 04:56:01,311] Trial 25 finished with value: 0.6896983317380696 and parameters: {'num_leaves': 96}. Best is trial 17 with value: 0.6889511525087615.
num_leaves, val_score: 0.688951:  95%|#########5| 19/20 [00:23<00:01,  1.08s/it][LightGBM] [Info] Number of positive: 46664, number of negative: 46362
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001688 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501623 -> initscore=0.006493
[LightGBM] [Info] Start training from score 0.006493
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.58378	valid's binary_logloss: 0.69401
Early stopping, best iteration is:
[20]	train's binary_logloss: 0.662024	valid's binary_logloss: 0.690131
num_leaves, val_score: 0.688951: 100%|##########| 20/20 [00:24<00:00,  1.11s/it][I 2020-09-27 04:56:02,500] Trial 26 finished with value: 0.6901310985346112 and parameters: {'num_leaves': 143}. Best is trial 17 with value: 0.6889511525087615.
num_leaves, val_score: 0.688951: 100%|##########| 20/20 [00:24<00:00,  1.24s/it]
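The stage names in this log (feature_fraction → num_leaves → bagging → feature_fraction_stage2 → regularization_factors → min_data_in_leaf) are the fixed search order of Optuna's stepwise LightGBM tuner. The tuning code itself is not shown in this excerpt, but a log like this could be produced by a sketch along the following lines (variable names are placeholders, not the notebook's):

```python
# Minimal sketch, assuming Optuna's LightGBM integration produced this log.
import optuna.integration.lightgbm as lgb_o
from sklearn.model_selection import train_test_split

X = train.drop('y', axis=1)  # assumed already numeric after feature engineering
y = train['y']
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=71)

dtrain = lgb_o.Dataset(X_tr, label=y_tr)
dvalid = lgb_o.Dataset(X_val, label=y_val)

params = {'objective': 'binary', 'metric': 'binary_logloss'}

# Tunes feature_fraction, num_leaves, bagging, feature_fraction_stage2,
# regularization_factors and min_data_in_leaf, in that fixed order.
booster = lgb_o.train(
    params, dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],  # matches the train's/valid's logloss lines
    early_stopping_rounds=100,       # "don't improve for 100 rounds"
    verbose_eval=100,                # the [100]/[200]/... progress lines
)
print(booster.params)                # parameters chosen by the tuner
```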
bagging stage (10 trials, same training setup):

- Trial 27: bagging_fraction=0.869, bagging_freq=3 → 0.688644
- Trial 28: bagging_fraction=0.893, bagging_freq=3 → 0.689089
- Trial 29: bagging_fraction=0.925, bagging_freq=3 → 0.688904
- Trial 30: bagging_fraction=0.907, bagging_freq=3 → 0.688973
- Trial 31: bagging_fraction=0.912, bagging_freq=3 → 0.688907
- Trial 32: bagging_fraction=0.925, bagging_freq=3 → 0.688989
- Trial 33: bagging_fraction=0.726, bagging_freq=1 → 0.688510 (stage best)
- Trial 34: bagging_fraction=0.575, bagging_freq=1 → 0.689001
- Trial 35: bagging_fraction=0.765, bagging_freq=6 → 0.688862
- Trial 36: bagging_fraction=0.725, bagging_freq=7 → 0.688848

bagging stage result: best is Trial 33 (bagging_fraction≈0.726, bagging_freq=1), val_score 0.688510.
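Incidentally, the "Auto-choosing row-wise multi-threading" warning that LightGBM prints at the start of every trial only concerns a few milliseconds of setup overhead. As the message itself suggests, it can be silenced by fixing the choice in the parameters, for example:

```python
# force_row_wise is a documented LightGBM parameter; setting it skips the
# row-wise/col-wise auto-detection (and the warning it prints).
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'force_row_wise': True,
}
```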
feature_fraction_stage2 (3 trials; the tuner re-searches feature_fraction on a finer grid around the value chosen in the first stage):

- Trial 37: feature_fraction=0.984 → 0.688510 (stage best)
- Trial 38: feature_fraction=0.920 → 0.688545
- Trial 39: feature_fraction=0.952 → 0.688541

feature_fraction_stage2 result: no improvement; val_score stays at 0.688510.
regularization_factors (20 trials, Trials 40–59): lambda_l1 and lambda_l2 were sampled over roughly 1e-8 to 0.57, but every trial landed in the narrow band 0.688510–0.688721. The stage best, Trial 58 (lambda_l1 ≈ 1.6e-06, lambda_l2 ≈ 7.1e-08), amounts to effectively zero regularization and does not improve on val_score 0.688510.
min_data_in_leaf, val_score: 0.688510:   0%|          | 0/5 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 46664, number of negative: 46362
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002224 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4688
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501623 -> initscore=0.006493
[LightGBM] [Info] Start training from score 0.006493
Training until validation scores don't improve for 100 rounds
Optuna/LightGBM stepwise tuning log (condensed to one line per trial; per-iteration logloss traces omitted).

min_data_in_leaf stage, 5 trials (60-64). Each trial trains on 93,026 rows and 26 features (46,664 positive / 46,362 negative in this fold's split), with early stopping once the validation logloss fails to improve for 100 rounds.

  Trial 60: min_child_samples=25   -> valid logloss 0.688417  (stage best)
  Trial 61: min_child_samples=50   -> 0.688720
  Trial 62: min_child_samples=100  -> 0.688957
  Trial 63: min_child_samples=10   -> 0.688615
  Trial 64: min_child_samples=5    -> 0.688722

Stage result: val_score 0.688417 (Trial 60, min_child_samples=25).
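The stage names in this log — feature_fraction, num_leaves, bagging, feature_fraction_stage2, regularization_factors, min_data_in_leaf — are exactly the stepwise order used by Optuna's LightGBM tuner, and a fresh study is created for every fold, so the tuner was presumably re-run inside the KFold loop. A minimal sketch under that assumption (the synthetic data and the dtrain/dvalid names are illustrative, not taken from the baseline notebook):

import numpy as np
import lightgbm as lgb
import optuna.integration.lightgbm as olgb

# Synthetic stand-in data so the sketch runs on its own.
X = np.random.rand(1000, 26)
y = (np.random.rand(1000) > 0.5).astype(int)
dtrain = lgb.Dataset(X[:800], label=y[:800])
dvalid = lgb.Dataset(X[800:], label=y[800:], reference=dtrain)

params = {"objective": "binary", "metric": "binary_logloss"}

# The tuner optimizes one parameter group per stage, printing progress
# prefixes like "feature_fraction, val_score: ..." as seen in this log.
# Scoring uses the last valid set ("valid"); the train set is listed too
# so both logloss curves appear in the output, matching the log above.
tuner = olgb.LightGBMTuner(
    params,
    dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=["train", "valid"],
    early_stopping_rounds=100,  # "Training until validation scores don't improve for 100 rounds"
    verbose_eval=100,           # logloss printed every 100 iterations
)
tuner.run()
print(tuner.best_score, tuner.best_params)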
Fold : 5
A new Optuna study is created in memory for this fold (93,026 rows, 26 features; 46,417 positive / 46,609 negative).

feature_fraction stage, 7 trials (0-6):
  Trial 0: feature_fraction=0.9  -> valid logloss 0.690560
  Trial 1: feature_fraction=0.7  -> 0.690559
  Trial 2: feature_fraction=0.6  -> 0.690182
  Trial 3: feature_fraction=0.5  -> 0.690316
  Trial 4: feature_fraction=0.8  -> 0.689984  (stage best)
  Trial 5: feature_fraction=1.0  -> 0.690385
  Trial 6: feature_fraction=0.4  -> 0.690765

Stage result: val_score 0.689984 (Trial 4, feature_fraction=0.8).
num_leaves stage, 20 trials (7-26):
  Trial  7: num_leaves=246 -> 0.692101
  Trial  8: num_leaves=228 -> 0.691604
  Trial  9: num_leaves=254 -> 0.692480
  Trial 10: num_leaves=7   -> 0.690199
  Trial 11: num_leaves=14  -> 0.690353
  Trial 12: num_leaves=110 -> 0.691131
  Trial 13: num_leaves=148 -> 0.691214
  Trial 14: num_leaves=72  -> 0.691475
  Trial 15: num_leaves=185 -> 0.691909
  Trial 16: num_leaves=37  -> 0.690303
  Trial 17: num_leaves=187 -> 0.691752
  Trial 18: num_leaves=99  -> 0.690866
  Trial 19: num_leaves=66  -> 0.690841
  Trial 20: num_leaves=166 -> 0.691657
  Trial 21: num_leaves=4   -> 0.690069  (stage best)
  Trial 22: num_leaves=34  -> 0.690073
  Trial 23: num_leaves=43  -> 0.690524
  Trial 24: num_leaves=8   -> 0.690209
  Trial 25: num_leaves=40  -> 0.690265
  Trial 26: num_leaves=75  -> 0.691243

Large num_leaves values overfit almost immediately (train logloss drops toward 0.53 while the validation logloss worsens, so early stopping triggers within roughly 10-20 rounds). Even the stage best (Trial 21) does not beat the carried-over 0.689984, so val_score is unchanged.
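In other words, the stepwise tuner adopts a stage's winning value only when it improves the running best score; a simplified sketch of that rule (my paraphrase, not the tuner's actual code):

# Running best carried over from the feature_fraction stage.
best_score = 0.689984

# Winning num_leaves trial of this stage (Trial 21).
stage_score, stage_value = 0.690069, 4

# Lower binary_logloss is better; the stage winner loses here,
# so num_leaves keeps its previous value and val_score stays put.
if stage_score < best_score:
    best_score = stage_score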
bagging stage, 10 trials (27-36), tuning bagging_fraction and bagging_freq:
  Trial 27: bagging_fraction=0.7745, bagging_freq=3 -> 0.690011  (stage best)
  Trial 28: bagging_fraction=0.7836, bagging_freq=3 -> 0.690781
  Trial 29: bagging_fraction=0.4766, bagging_freq=1 -> 0.690369
  Trial 30: bagging_fraction=0.9933, bagging_freq=7 -> 0.690305
  Trial 31: bagging_fraction=0.7357, bagging_freq=4 -> 0.690885
  Trial 32: bagging_fraction=0.9606, bagging_freq=4 -> 0.690387
  Trial 33: bagging_fraction=0.5501, bagging_freq=1 -> 0.690222
  Trial 34: bagging_fraction=0.8542, bagging_freq=6 -> 0.690703
  Trial 35: bagging_fraction=0.6073, bagging_freq=3 -> 0.690026
  Trial 36: bagging_fraction=0.6180, bagging_freq=3 -> 0.690270

Again the stage best (Trial 27, 0.690011) falls short of 0.689984, so bagging stays at its defaults and val_score is unchanged.
feature_fraction_stage2 stage, 6 trials (37-42), refining feature_fraction around the stage-1 winner:
  Trial 37: feature_fraction=0.752 -> 0.690386
  Trial 38: feature_fraction=0.816 -> 0.689984  (stage best; the run is identical to Trial 4's at 0.8, presumably because both fractions select the same subset size out of 26 features)
  Trial 39: feature_fraction=0.880 -> 0.690560
  Trial 40: feature_fraction=0.784 -> 0.690386
  Trial 41: feature_fraction=0.720 -> 0.690024
  Trial 42: feature_fraction=0.848 -> 0.690557

Stage result: val_score stays at 0.689984.
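The six stage-2 values look like an evenly spaced grid centred on the stage-1 winner of 0.8. A quick check of that reading (my inference from the logged trials, not documented tuner behaviour):

import numpy as np

# Hypothesis: six values around 0.8 with step 0.032.
grid = np.round(np.arange(0.72, 0.88 + 1e-9, 0.032), 3)
print(grid)  # [0.72, 0.752, 0.784, 0.816, 0.848, 0.88] -- matches Trials 37-42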
regularization_factors stage, 20 trials (43 onward), tuning lambda_l1 and lambda_l2; the first eight:
  Trial 43: lambda_l1=0.00343, lambda_l2=2.95e-06 -> 0.6899675
  Trial 44: lambda_l1=0.00466, lambda_l2=1.39e-06 -> 0.6899674
  Trial 45: lambda_l1=0.00525, lambda_l2=1.44e-06 -> 0.6899674  (best so far)
  Trial 46: lambda_l1=0.01003, lambda_l2=8.69e-07 -> 0.6900297
  Trial 47: lambda_l1=0.00259, lambda_l2=2.63e-06 -> 0.6899676
  Trial 48: lambda_l1=0.00402, lambda_l2=2.28e-06 -> 0.6899675
  Trial 49: lambda_l1=0.00334, lambda_l2=2.83e-06 -> 0.6899675
  Trial 50: lambda_l1=0.00332, lambda_l2=2.73e-06 -> 0.6899675

Light L1 regularization nudges val_score down to 0.689967 (Trial 45), the first improvement since the feature_fraction stage.
regularization_factors, val_score: 0.689967:  40%|####      | 8/20 [00:07<00:12,  1.00s/it][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001603 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.663071	valid's binary_logloss: 0.690424
Early stopping, best iteration is:
[79]	train's binary_logloss: 0.667888	valid's binary_logloss: 0.689968
regularization_factors, val_score: 0.689967:  45%|####5     | 9/20 [00:08<00:10,  1.01it/s][I 2020-09-27 04:57:57,415] Trial 51 finished with value: 0.6899675231589405 and parameters: {'lambda_l1': 0.0034231822486881607, 'lambda_l2': 2.5597586648530185e-06}. Best is trial 45 with value: 0.6899674025098815.
regularization_factors, val_score: 0.689967:  45%|####5     | 9/20 [00:08<00:10,  1.01it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001568 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.663071	valid's binary_logloss: 0.690424
Early stopping, best iteration is:
[79]	train's binary_logloss: 0.667888	valid's binary_logloss: 0.689968
regularization_factors, val_score: 0.689967:  50%|#####     | 10/20 [00:09<00:09,  1.02it/s][I 2020-09-27 04:57:58,387] Trial 52 finished with value: 0.6899675209429271 and parameters: {'lambda_l1': 0.0034569782502191274, 'lambda_l2': 2.163106922428234e-06}. Best is trial 45 with value: 0.6899674025098815.
regularization_factors, val_score: 0.689967:  50%|#####     | 10/20 [00:09<00:09,  1.02it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001571 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.663071	valid's binary_logloss: 0.690424
Early stopping, best iteration is:
[79]	train's binary_logloss: 0.667889	valid's binary_logloss: 0.689968
regularization_factors, val_score: 0.689967:  55%|#####5    | 11/20 [00:10<00:08,  1.02it/s][I 2020-09-27 04:57:59,369] Trial 53 finished with value: 0.6899675043589648 and parameters: {'lambda_l1': 0.003707901920209718, 'lambda_l2': 1.70752737693983e-06}. Best is trial 45 with value: 0.6899674025098815.
regularization_factors, val_score: 0.689967:  55%|#####5    | 11/20 [00:10<00:08,  1.02it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001536 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.66306	valid's binary_logloss: 0.690447
Early stopping, best iteration is:
[79]	train's binary_logloss: 0.667889	valid's binary_logloss: 0.689967
regularization_factors, val_score: 0.689967:  60%|######    | 12/20 [00:11<00:07,  1.01it/s][I 2020-09-27 04:58:00,373] Trial 54 finished with value: 0.6899674606296653 and parameters: {'lambda_l1': 0.004368898887817153, 'lambda_l2': 1.2458711221699426e-06}. Best is trial 45 with value: 0.6899674025098815.
regularization_factors, val_score: 0.689967:  60%|######    | 12/20 [00:11<00:07,  1.01it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001596 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.662937	valid's binary_logloss: 0.690557
Early stopping, best iteration is:
[47]	train's binary_logloss: 0.675852	valid's binary_logloss: 0.690271
regularization_factors, val_score: 0.689967:  65%|######5   | 13/20 [00:12<00:06,  1.05it/s][I 2020-09-27 04:58:01,228] Trial 55 finished with value: 0.6902710837967893 and parameters: {'lambda_l1': 0.07746158051498105, 'lambda_l2': 3.231480194675918e-08}. Best is trial 45 with value: 0.6899674025098815.
regularization_factors, val_score: 0.689967:  65%|######5   | 13/20 [00:12<00:06,  1.05it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.008971 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.664683	valid's binary_logloss: 0.690143
Early stopping, best iteration is:
[52]	train's binary_logloss: 0.675081	valid's binary_logloss: 0.689647
regularization_factors, val_score: 0.689647:  70%|#######   | 14/20 [00:13<00:05,  1.08it/s][I 2020-09-27 04:58:02,088] Trial 56 finished with value: 0.6896473829748281 and parameters: {'lambda_l1': 5.946750740071561e-06, 'lambda_l2': 2.6997857156008718}. Best is trial 56 with value: 0.6896473829748281.
regularization_factors, val_score: 0.689647:  70%|#######   | 14/20 [00:13<00:05,  1.08it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001490 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.665021	valid's binary_logloss: 0.691158
Early stopping, best iteration is:
[47]	train's binary_logloss: 0.676401	valid's binary_logloss: 0.6902
regularization_factors, val_score: 0.689647:  75%|#######5  | 15/20 [00:14<00:04,  1.11it/s][I 2020-09-27 04:58:02,943] Trial 57 finished with value: 0.6902003315084454 and parameters: {'lambda_l1': 5.758495952716345e-07, 'lambda_l2': 3.755109192731262}. Best is trial 56 with value: 0.6896473829748281.
regularization_factors, val_score: 0.689647:  75%|#######5  | 15/20 [00:14<00:04,  1.11it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001569 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.663022	valid's binary_logloss: 0.690361
Early stopping, best iteration is:
[48]	train's binary_logloss: 0.675429	valid's binary_logloss: 0.689984
regularization_factors, val_score: 0.689647:  80%|########  | 16/20 [00:15<00:03,  1.14it/s][I 2020-09-27 04:58:03,771] Trial 58 finished with value: 0.6899844790092887 and parameters: {'lambda_l1': 2.0582999452425637e-05, 'lambda_l2': 0.0005026756920447598}. Best is trial 56 with value: 0.6896473829748281.
regularization_factors, val_score: 0.689647:  80%|########  | 16/20 [00:15<00:03,  1.14it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001565 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.663021	valid's binary_logloss: 0.690361
Early stopping, best iteration is:
[48]	train's binary_logloss: 0.675428	valid's binary_logloss: 0.689984
regularization_factors, val_score: 0.689647:  85%|########5 | 17/20 [00:16<00:02,  1.16it/s][I 2020-09-27 04:58:04,597] Trial 59 finished with value: 0.6899844953569758 and parameters: {'lambda_l1': 6.0992831053207785e-05, 'lambda_l2': 4.252592969567293e-08}. Best is trial 56 with value: 0.6896473829748281.
regularization_factors, val_score: 0.689647:  85%|########5 | 17/20 [00:16<00:02,  1.16it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001929 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.663089	valid's binary_logloss: 0.691109
Early stopping, best iteration is:
[43]	train's binary_logloss: 0.676731	valid's binary_logloss: 0.690043
regularization_factors, val_score: 0.689647:  90%|######### | 18/20 [00:16<00:01,  1.17it/s][I 2020-09-27 04:58:05,432] Trial 60 finished with value: 0.6900427743024334 and parameters: {'lambda_l1': 0.268567393042738, 'lambda_l2': 0.0003824677053871511}. Best is trial 56 with value: 0.6896473829748281.
regularization_factors, val_score: 0.689647:  90%|######### | 18/20 [00:16<00:01,  1.17it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001538 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.663047	valid's binary_logloss: 0.690658
Early stopping, best iteration is:
[48]	train's binary_logloss: 0.675429	valid's binary_logloss: 0.689984
regularization_factors, val_score: 0.689647:  95%|#########5| 19/20 [00:17<00:00,  1.17it/s][I 2020-09-27 04:58:06,282] Trial 61 finished with value: 0.6899844746373079 and parameters: {'lambda_l1': 0.0005260392810081732, 'lambda_l2': 2.7967911031858336e-07}. Best is trial 56 with value: 0.6896473829748281.
regularization_factors, val_score: 0.689647:  95%|#########5| 19/20 [00:17<00:00,  1.17it/s][LightGBM] [Info] Number of positive: 46417, number of negative: 46609
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001556 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498968 -> initscore=-0.004128
[LightGBM] [Info] Start training from score -0.004128
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.663199	valid's binary_logloss: 0.690198
Early stopping, best iteration is:
[52]	train's binary_logloss: 0.674513	valid's binary_logloss: 0.68975
regularization_factors, val_score: 0.689647: 100%|##########| 20/20 [00:18<00:00,  1.15it/s][I 2020-09-27 04:58:07,180] Trial 62 finished with value: 0.6897502601908707 and parameters: {'lambda_l1': 0.03166566393209921, 'lambda_l2': 3.6505882507400447e-05}. Best is trial 56 with value: 0.6896473829748281.
regularization_factors, val_score: 0.689647: 100%|##########| 20/20 [00:18<00:00,  1.08it/s]
min_data_in_leaf, val_score: 0.689647: 100%|##########| 5/5 [00:04<00:00,  1.08it/s]
(Per-trial output omitted: 5 trials over min_child_samples in {5, 10, 25, 50, 100}. Best is trial 63 with value: 0.6900565147753162 and parameters: {'min_child_samples': 10}; no trial improved on the stage-entry val_score of 0.689647.)
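The stage names and trial counts in these logs (feature_fraction: 7, num_leaves: 20, bagging: 10, regularization_factors: 20, min_data_in_leaf: 5) match Optuna's stepwise LightGBM tuner, and the "A new study created in memory" lines suggest one tuning run per CV fold. Below is a minimal sketch of how such logs are typically produced; the variable names, fold count, and exact arguments are assumptions for illustration, not the author's code.

```python
# Sketch only: assumes Optuna's stepwise LightGBM tuner run once per CV fold.
import lightgbm as lgb
import optuna.integration.lightgbm as lgb_o
from sklearn.model_selection import KFold

X = train.drop(columns=['y'])   # `train` as loaded earlier in the notebook
y = train['y']

params = {'objective': 'binary', 'metric': 'binary_logloss'}

kf = KFold(n_splits=10, shuffle=True, random_state=71)  # fold count assumed
for fold, (tr_idx, va_idx) in enumerate(kf.split(X)):
    print(f'Fold : {fold}')
    dtrain = lgb.Dataset(X.iloc[tr_idx], y.iloc[tr_idx])
    dvalid = lgb.Dataset(X.iloc[va_idx], y.iloc[va_idx], reference=dtrain)
    # The tuner sweeps feature_fraction, num_leaves, bagging,
    # regularization_factors and min_data_in_leaf one stage at a time,
    # printing one progress bar per stage, as in the log above.
    booster = lgb_o.train(
        params,
        dtrain,
        valid_sets=[dtrain, dvalid],
        valid_names=['train', 'valid'],   # matches "train's"/"valid's" in the log
        num_boost_round=1000,             # some trials run the full 1000 rounds
        early_stopping_rounds=100,        # "don't improve for 100 rounds"
        verbose_eval=100,                 # logloss printed every 100 iterations
    )
```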
Fold : 6
[I 2020-09-27 04:58:11,870] A new study created in memory with name: no-name-dcbdec6e-2ceb-4d1e-86b5-bfbd5414d073
feature_fraction, val_score: 0.688771: 100%|##########| 7/7 [00:07<00:00,  1.08s/it]
(Per-trial output omitted: 7 trials over feature_fraction in {0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}. Best is trial 0 with value: 0.6887706335922763 and parameters: {'feature_fraction': 0.5}.)
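An aside on the log noise: every trial re-prints LightGBM's "Auto-choosing row-wise/col-wise multi-threading" warning because the data layout is re-tested on each training run. As the warning text itself says, fixing the layout in the params removes that overhead. A one-line sketch (where this lands in the author's params dict is an assumption):

```python
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'force_row_wise': True,  # skip the per-run layout test; use
                             # 'force_col_wise': True instead if memory is tight
}
```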
num_leaves, val_score: 0.688255: 100%|##########| 20/20 [00:26<00:00,  1.35s/it]
(Per-trial output omitted: 20 trials over num_leaves values from 2 to 254. Best is trial 18 with value: 0.6882551844633495 and parameters: {'num_leaves': 2}.)
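On this fold the num_leaves search settles on 2, the smallest possible tree: trial 18 needs the full 1000 boosting rounds and still beats every deeper configuration, which suggests the signal is weak enough that shallow, heavily boosted trees generalize best. To inspect a stage's trial history without scrolling raw logs, the underlying Optuna study can be dumped to a dataframe. A sketch, assuming a study handle is kept (pass one explicitly via LightGBMTuner's study argument; the "no-name" in-memory study created by default is hard to reach afterwards):

```python
import optuna

# Create the study up front and hand it to the tuner so the trial history
# (every "Trial N finished with value ..." line above) stays accessible.
study = optuna.create_study(direction='minimize')
# ... run optuna.integration.lightgbm.LightGBMTuner(..., study=study).run() ...

df = study.trials_dataframe()            # one row per trial: number, value, params
print(df.sort_values('value').head())    # best trials first
```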
- bagging (10 trials): trial 27 with bagging_fraction ≈ 0.7285 and bagging_freq = 5 improved val_score to 0.687857; the remaining trials (bagging_fraction 0.72–0.94, bagging_freq 4–7) all landed between 0.68794 and 0.68819.
- feature_fraction_stage2 (6 trials): feature_fraction = 0.484 and 0.516 both reproduced the incumbent 0.687857 exactly; the other values tried (0.42–0.58) were slightly worse, so val_score did not move.
regularization_factors, val_score: 0.687857:   0%|          | 0/20 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000956 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689827	valid's binary_logloss: 0.690607
[200]	train's binary_logloss: 0.688566	valid's binary_logloss: 0.689648
[300]	train's binary_logloss: 0.687757	valid's binary_logloss: 0.688831
[400]	train's binary_logloss: 0.687199	valid's binary_logloss: 0.688357
[500]	train's binary_logloss: 0.686766	valid's binary_logloss: 0.688172
[600]	train's binary_logloss: 0.686415	valid's binary_logloss: 0.688061
Early stopping, best iteration is:
[555]	train's binary_logloss: 0.686571	valid's binary_logloss: 0.68802
regularization_factors, val_score: 0.687857:   5%|5         | 1/20 [00:02<00:41,  2.18s/it][I 2020-09-27 04:59:28,683] Trial 43 finished with value: 0.6880201782378496 and parameters: {'lambda_l1': 0.09717855595894546, 'lambda_l2': 2.599101232080336e-05}. Best is trial 43 with value: 0.6880201782378496.
regularization_factors, val_score: 0.687857:   5%|5         | 1/20 [00:02<00:41,  2.18s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001017 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.689864	valid's binary_logloss: 0.690597
[200]	train's binary_logloss: 0.688608	valid's binary_logloss: 0.68959
[300]	train's binary_logloss: 0.687806	valid's binary_logloss: 0.688794
[400]	train's binary_logloss: 0.687261	valid's binary_logloss: 0.688287
[500]	train's binary_logloss: 0.686828	valid's binary_logloss: 0.688121
[600]	train's binary_logloss: 0.686479	valid's binary_logloss: 0.68802
Early stopping, best iteration is:
[552]	train's binary_logloss: 0.686646	valid's binary_logloss: 0.68794
regularization_factors, val_score: 0.687857:  10%|#         | 2/20 [00:04<00:39,  2.19s/it][I 2020-09-27 04:59:30,884] Trial 44 finished with value: 0.6879401204038632 and parameters: {'lambda_l1': 2.27626769111772e-08, 'lambda_l2': 3.9029986997360395}. Best is trial 44 with value: 0.6879401204038632.
regularization_factors, val_score: 0.687857:  10%|#         | 2/20 [00:04<00:39,  2.19s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000975 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  15%|#5        | 3/20 [00:06<00:39,  2.30s/it][I 2020-09-27 04:59:33,437] Trial 45 finished with value: 0.6878571684254513 and parameters: {'lambda_l1': 6.262982988092863e-08, 'lambda_l2': 4.447887374533072e-08}. Best is trial 45 with value: 0.6878571684254513.
regularization_factors, val_score: 0.687857:  15%|#5        | 3/20 [00:06<00:39,  2.30s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000939 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  20%|##        | 4/20 [00:09<00:37,  2.37s/it][I 2020-09-27 04:59:35,984] Trial 46 finished with value: 0.6878571684251785 and parameters: {'lambda_l1': 1.0695178382125108e-08, 'lambda_l2': 1.4853383476880933e-08}. Best is trial 46 with value: 0.6878571684251785.
regularization_factors, val_score: 0.687857:  20%|##        | 4/20 [00:09<00:37,  2.37s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000993 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  25%|##5       | 5/20 [00:12<00:36,  2.43s/it][I 2020-09-27 04:59:38,551] Trial 47 finished with value: 0.6878571684252762 and parameters: {'lambda_l1': 1.8418017989423836e-08, 'lambda_l2': 1.1880874452318282e-08}. Best is trial 46 with value: 0.6878571684251785.
regularization_factors, val_score: 0.687857:  25%|##5       | 5/20 [00:12<00:36,  2.43s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000984 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  30%|###       | 6/20 [00:14<00:34,  2.48s/it][I 2020-09-27 04:59:41,131] Trial 48 finished with value: 0.687857168425301 and parameters: {'lambda_l1': 1.8864997222383136e-08, 'lambda_l2': 1.1856973045132024e-08}. Best is trial 46 with value: 0.6878571684251785.
regularization_factors, val_score: 0.687857:  30%|###       | 6/20 [00:14<00:34,  2.48s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000984 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  35%|###5      | 7/20 [00:17<00:32,  2.53s/it][I 2020-09-27 04:59:43,792] Trial 49 finished with value: 0.6878571685137861 and parameters: {'lambda_l1': 2.3008656218442958e-05, 'lambda_l2': 1.0076176570249505e-08}. Best is trial 46 with value: 0.6878571684251785.
regularization_factors, val_score: 0.687857:  35%|###5      | 7/20 [00:17<00:32,  2.53s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000984 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  40%|####      | 8/20 [00:19<00:30,  2.54s/it][I 2020-09-27 04:59:46,353] Trial 50 finished with value: 0.6878571684048133 and parameters: {'lambda_l1': 1.0325498267148577e-05, 'lambda_l2': 9.965999410553949e-06}. Best is trial 50 with value: 0.6878571684048133.
regularization_factors, val_score: 0.687857:  40%|####      | 8/20 [00:19<00:30,  2.54s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001025 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  45%|####5     | 9/20 [00:22<00:28,  2.55s/it][I 2020-09-27 04:59:48,931] Trial 51 finished with value: 0.6878571684428725 and parameters: {'lambda_l1': 1.7588444362019025e-05, 'lambda_l2': 8.259367220543736e-06}. Best is trial 50 with value: 0.6878571684048133.
regularization_factors, val_score: 0.687857:  45%|####5     | 9/20 [00:22<00:28,  2.55s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000992 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  50%|#####     | 10/20 [00:25<00:25,  2.57s/it][I 2020-09-27 04:59:51,534] Trial 52 finished with value: 0.6878571684244191 and parameters: {'lambda_l1': 5.589768247317659e-07, 'lambda_l2': 5.973648036426743e-07}. Best is trial 50 with value: 0.6878571684048133.
regularization_factors, val_score: 0.687857:  50%|#####     | 10/20 [00:25<00:25,  2.57s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000986 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  55%|#####5    | 11/20 [00:27<00:23,  2.56s/it][I 2020-09-27 04:59:54,086] Trial 53 finished with value: 0.6878571684026484 and parameters: {'lambda_l1': 1.4374237878755214e-06, 'lambda_l2': 4.663535296869216e-06}. Best is trial 53 with value: 0.6878571684026484.
regularization_factors, val_score: 0.687857:  55%|#####5    | 11/20 [00:27<00:23,  2.56s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000982 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  60%|######    | 12/20 [00:30<00:20,  2.57s/it][I 2020-09-27 04:59:56,659] Trial 54 finished with value: 0.6878571684225362 and parameters: {'lambda_l1': 4.5280682776478785e-06, 'lambda_l2': 3.232830506287648e-06}. Best is trial 53 with value: 0.6878571684026484.
regularization_factors, val_score: 0.687857:  60%|######    | 12/20 [00:30<00:20,  2.57s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001020 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  65%|######5   | 13/20 [00:32<00:17,  2.56s/it][I 2020-09-27 04:59:59,214] Trial 55 finished with value: 0.6878571684052713 and parameters: {'lambda_l1': 2.2260343593678247e-06, 'lambda_l2': 4.664732251084378e-06}. Best is trial 53 with value: 0.6878571684026484.
regularization_factors, val_score: 0.687857:  65%|######5   | 13/20 [00:32<00:17,  2.56s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001004 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  70%|#######   | 14/20 [00:35<00:15,  2.56s/it][I 2020-09-27 05:00:01,765] Trial 56 finished with value: 0.6878571684225463 and parameters: {'lambda_l1': 2.731664642116516e-06, 'lambda_l2': 2.242416999990266e-06}. Best is trial 53 with value: 0.6878571684026484.
regularization_factors, val_score: 0.687857:  70%|#######   | 14/20 [00:35<00:15,  2.56s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000988 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  75%|#######5  | 15/20 [00:37<00:12,  2.56s/it][I 2020-09-27 05:00:04,326] Trial 57 finished with value: 0.6878571684168346 and parameters: {'lambda_l1': 2.4032192398197603e-06, 'lambda_l2': 2.868804045681902e-06}. Best is trial 53 with value: 0.6878571684026484.
regularization_factors, val_score: 0.687857:  75%|#######5  | 15/20 [00:37<00:12,  2.56s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000974 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  80%|########  | 16/20 [00:40<00:10,  2.56s/it][I 2020-09-27 05:00:06,891] Trial 58 finished with value: 0.6878571684147985 and parameters: {'lambda_l1': 3.739599166500671e-06, 'lambda_l2': 3.8681688367300176e-06}. Best is trial 53 with value: 0.6878571684026484.
regularization_factors, val_score: 0.687857:  80%|########  | 16/20 [00:40<00:10,  2.56s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000993 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857:  85%|########5 | 17/20 [00:42<00:07,  2.56s/it][I 2020-09-27 05:00:09,442] Trial 59 finished with value: 0.6878571672506346 and parameters: {'lambda_l1': 2.703708966445098e-06, 'lambda_l2': 0.00019305898786142614}. Best is trial 59 with value: 0.6878571672506346.
regularization_factors, val_score: 0.687857:  85%|########5 | 17/20 [00:42<00:07,  2.56s/it][LightGBM] [Info] Number of positive: 46279, number of negative: 46747
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000960 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497485 -> initscore=-0.010062
[LightGBM] [Info] Start training from score -0.010062
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68983	valid's binary_logloss: 0.690577
[200]	train's binary_logloss: 0.688554	valid's binary_logloss: 0.689588
[300]	train's binary_logloss: 0.687741	valid's binary_logloss: 0.688856
[400]	train's binary_logloss: 0.687189	valid's binary_logloss: 0.688313
[500]	train's binary_logloss: 0.686756	valid's binary_logloss: 0.688084
[600]	train's binary_logloss: 0.686406	valid's binary_logloss: 0.688043
[700]	train's binary_logloss: 0.68613	valid's binary_logloss: 0.688014
Early stopping, best iteration is:
[671]	train's binary_logloss: 0.686204	valid's binary_logloss: 0.687857
regularization_factors, val_score: 0.687857: 100%|##########| 20/20 [00:50<00:00, 2.53s/it]
Trial 60: value 0.6878571657826678, parameters {'lambda_l1': 9.271384341210084e-07, 'lambda_l2': 0.0004314431407377575}
Trial 61: value 0.6878571600694793, parameters {'lambda_l1': 8.018982307617414e-07, 'lambda_l2': 0.0013639030286262217}
Trial 62: value 0.6878571590590913, parameters {'lambda_l1': 5.337492288420711e-07, 'lambda_l2': 0.0015292766949980237}
Best is trial 62 with value: 0.6878571590590913.
(This fold: 93,026 training rows, 46,279 positive / 46,747 negative, 26 used features; the runs early-stop at iteration 671 with valid binary_logloss 0.687857.)
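As an aside, Trials 60–62 differ only from the eighth decimal place of the validation loss, so L1/L2 regularization has essentially no effect on this fold. If you wanted to fix the tuned values by hand, it would just be the following sketch (`params` is my placeholder for the dict passed to `lgb.train`; the values are Trial 62's from the log):

```python
# Hypothetical: pin the tuned regularization strengths (Trial 62's values).
# lambda_l1 / lambda_l2 are LightGBM's L1 / L2 penalties on leaf weights.
params.update({
    'lambda_l1': 5.337492288420711e-07,  # effectively zero L1
    'lambda_l2': 0.0015292766949980237,  # tiny L2
})
```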
min_data_in_leaf, val_score: 0.687857: 100%|##########| 5/5 [00:13<00:00, 2.60s/it]
Trial 63: value 0.6879830688856906, parameters {'min_child_samples': 25}
Trial 64: value 0.687949637690665, parameters {'min_child_samples': 10}
Trial 65: value 0.687952199445917, parameters {'min_child_samples': 5}
Trial 66: value 0.6879239560726775, parameters {'min_child_samples': 50}
Trial 67: value 0.6880111488280128, parameters {'min_child_samples': 100}
Best is trial 66 with value: 0.6879239560726775.
(Same fold; best iterations between 551 and 555, and no trial improves on the running val_score of 0.687857.)
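The phase names and trial counts in these logs (feature_fraction: 7, num_leaves: 20, bagging: 10, feature_fraction_stage2: 6, regularization_factors: 20, min_data_in_leaf: 5) match Optuna's stepwise LightGBM tuner, which the baseline appears to run once per fold. A minimal sketch of that kind of call, assuming the 2020-era optuna/lightgbm APIs (newer lightgbm versions moved `early_stopping_rounds`/`verbose_eval` into callbacks) and with synthetic data standing in for one fold:

```python
# Minimal reconstruction (not the baseline's exact cell) of the stepwise tuning
# that produces logs like the above, via Optuna's LightGBM integration.
import numpy as np
import lightgbm as lgb
import optuna.integration.lightgbm as lgb_tuner

rng = np.random.default_rng(71)
X = rng.normal(size=(2000, 26))    # placeholder for the 26 features
y = rng.integers(0, 2, size=2000)  # placeholder binary target

dtrain = lgb.Dataset(X[:1600], label=y[:1600])
dvalid = lgb.Dataset(X[1600:], label=y[1600:], reference=dtrain)

params = {'objective': 'binary', 'metric': 'binary_logloss'}
tuner = lgb_tuner.LightGBMTuner(
    params, dtrain,
    valid_sets=[dtrain, dvalid],
    early_stopping_rounds=100,  # "Training until validation scores don't improve for 100 rounds"
    verbose_eval=100,           # the per-100-iteration logloss lines
)
tuner.run()  # feature_fraction -> num_leaves -> bagging -> ... -> min_data_in_leaf
print(tuner.best_score, tuner.best_params)
```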
Fold : 7
[I 2020-09-27 05:00:30,278] A new study created in memory with name: no-name-ad8f4155-35d8-4bc1-80b9-1a44d36999f7
feature_fraction, val_score: 0.688962: 100%|##########| 7/7 [00:05<00:00, 1.22it/s]
Trial 0: value 0.6898590878905122, parameters {'feature_fraction': 0.5}
Trial 1: value 0.6896011115289385, parameters {'feature_fraction': 0.4}
Trial 2: value 0.6892558819835889, parameters {'feature_fraction': 1.0}
Trial 3: value 0.6896254736016139, parameters {'feature_fraction': 0.8999999999999999}
Trial 4: value 0.6891302495722187, parameters {'feature_fraction': 0.6}
Trial 5: value 0.6889623887000853, parameters {'feature_fraction': 0.7}
Trial 6: value 0.6891408826755168, parameters {'feature_fraction': 0.8}
Best is trial 5 with value: 0.6889623887000853.
(This fold: 93,026 training rows, 46,298 positive / 46,728 negative, 26 used features; every trial early-stops within its first 100 boosting rounds.)
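Two notes on this phase. `feature_fraction` is the share of the 26 features sampled per tree, and 0.7 (about 18 columns) wins here. Also, the repeated "Auto-choosing ... multi-threading" warning is harmless; if the log noise bothers you, it can be silenced by pinning the strategy up front, as the log itself hints (an optional tweak, not part of the baseline; `params` as before):

```python
# Optional: fix the threading strategy so LightGBM stops benchmarking it
# (and re-printing the warning) at the start of every trial.
params['force_row_wise'] = True  # or 'force_col_wise', per the hint in the log
```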
num_leaves, val_score: 0.688542: 100%|##########| 20/20 [00:20<00:00, 1.03s/it]
Trial 7: value 0.6910358844305945, parameters {'num_leaves': 211}
Trial 8: value 0.6897684410879633, parameters {'num_leaves': 81}
Trial 9: value 0.6899614185647561, parameters {'num_leaves': 215}
Trial 10: value 0.6889216574404491, parameters {'num_leaves': 6}
Trial 11: value 0.6885421835424413, parameters {'num_leaves': 4}
Trial 12: value 0.6888528740217242, parameters {'num_leaves': 8}
Trial 13: value 0.6885421835424413, parameters {'num_leaves': 4}
Trial 14: value 0.6895472826915808, parameters {'num_leaves': 69}
Trial 15: value 0.6893008615554372, parameters {'num_leaves': 5}
Trial 16: value 0.689534420595521, parameters {'num_leaves': 63}
Trial 17: value 0.6909774333414871, parameters {'num_leaves': 139}
Trial 18: value 0.6890011375101475, parameters {'num_leaves': 35}
Trial 19: value 0.6910149125364163, parameters {'num_leaves': 136}
Trial 20: value 0.6901693018069993, parameters {'num_leaves': 103}
Trial 21: value 0.6893008615554372, parameters {'num_leaves': 5}
Trial 22: value 0.6886345495846673, parameters {'num_leaves': 32}
Trial 23: value 0.6894608609879804, parameters {'num_leaves': 41}
Trial 24: value 0.6895760118681855, parameters {'num_leaves': 39}
Trial 25: value 0.6910055064236426, parameters {'num_leaves': 167}
Trial 26: value 0.6892917155718229, parameters {'num_leaves': 27}
Best is trial 11 with value: 0.6885421835424413.
(Same fold; every trial early-stops, with best iterations between 16 and 259.)
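num_leaves=4 winning is consistent with how hard this problem is: the validation logloss sits near ln 2 ≈ 0.693 (coin-flip level for balanced classes), so large trees mostly memorize the training set. Compare trial 7 against the winner in this sketch (`params` again my placeholder):

```python
# The search strongly favors tiny trees. From the log:
#   num_leaves=211 -> train logloss ~0.553, valid ~0.691 (overfit)
#   num_leaves=4   -> train logloss ~0.684, valid ~0.6885 (best)
params['num_leaves'] = 4
```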
bagging, val_score: 0.687729: 100%|##########| 10/10 [00:16<00:00, 1.60s/it]
Trial 27: value 0.6883135095365117, parameters {'bagging_fraction': 0.8631686996360303, 'bagging_freq': 3}
Trial 28: value 0.6883067782443618, parameters {'bagging_fraction': 0.8747424021051551, 'bagging_freq': 3}
Trial 29: value 0.6886270116658745, parameters {'bagging_fraction': 0.8767026255324745, 'bagging_freq': 3}
Trial 30: value 0.6882368432610401, parameters {'bagging_fraction': 0.8508850031981232, 'bagging_freq': 3}
Trial 31: value 0.6883329240332864, parameters {'bagging_fraction': 0.8593267514380051, 'bagging_freq': 3}
Trial 32: value 0.6885496852605667, parameters {'bagging_fraction': 0.8612598431625776, 'bagging_freq': 3}
Trial 33: value 0.6882931441850038, parameters {'bagging_fraction': 0.8671876569591457, 'bagging_freq': 3}
Trial 34: value 0.6885490341442905, parameters {'bagging_fraction': 0.8804920945207426, 'bagging_freq': 3}
Trial 35: value 0.6877285343949435, parameters {'bagging_fraction': 0.6813927760377331, 'bagging_freq': 3}
Trial 36: value 0.6882307476214391, parameters {'bagging_fraction': 0.6088933564022119, 'bagging_freq': 6}
Best is trial 35 with value: 0.6877285343949435.
(Same fold; best iterations between 186 and 456.)
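This phase gives the first real improvement for this fold (0.688542 → 0.687729). For reference, row subsampling in LightGBM takes two parameters together, as in this sketch (`params` still my placeholder; the values are Trial 35's from the log):

```python
# bagging_fraction is the share of rows drawn for each bag; bagging_freq is how
# often (in iterations) the bag is redrawn. Both must be set for it to kick in.
params.update({'bagging_fraction': 0.6813927760377331, 'bagging_freq': 3})
```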
feature_fraction_stage2, val_score: 0.687729:   0%|          | 0/6 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 46298, number of negative: 46728
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001488 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497689 -> initscore=-0.009245
[LightGBM] [Info] Start training from score -0.009245
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.687028	valid's binary_logloss: 0.688811
[200]	train's binary_logloss: 0.684577	valid's binary_logloss: 0.68832
[300]	train's binary_logloss: 0.682495	valid's binary_logloss: 0.688292
Early stopping, best iteration is:
[229]	train's binary_logloss: 0.683926	valid's binary_logloss: 0.688061
feature_fraction_stage2, val_score: 0.687729:  17%|#6        | 1/6 [00:01<00:06,  1.31s/it][I 2020-09-27 05:01:13,995] Trial 37 finished with value: 0.6880605102479459 and parameters: {'feature_fraction': 0.7799999999999999}. Best is trial 37 with value: 0.6880605102479459.
feature_fraction_stage2, val_score: 0.687729:  17%|#6        | 1/6 [00:01<00:06,  1.31s/it][LightGBM] [Info] Number of positive: 46298, number of negative: 46728
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.007740 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.497689 -> initscore=-0.009245
[LightGBM] [Info] Start training from score -0.009245
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.68703	valid's binary_logloss: 0.688776
[200]	train's binary_logloss: 0.68453	valid's binary_logloss: 0.688164
[300]	train's binary_logloss: 0.682427	valid's binary_logloss: 0.687911
[400]	train's binary_logloss: 0.680592	valid's binary_logloss: 0.688176
Early stopping, best iteration is:
[347]	train's binary_logloss: 0.681525	valid's binary_logloss: 0.687729
feature_fraction_stage2, val_score: 0.687729:  33%|###3      | 2/6 [00:03<00:05,  1.43s/it][I 2020-09-27 05:01:15,700] Trial 38 finished with value: 0.6877285343949435 and parameters: {'feature_fraction': 0.6839999999999999}. Best is trial 38 with value: 0.6877285343949435.
(feature_fraction_stage2, trials 39–42: repeated [LightGBM] Info/Warning blocks and per-100-round logloss lines omitted)
[I 2020-09-27 05:01:19,668] Trial 41 finished with value: 0.6875532234561516 and parameters: {'feature_fraction': 0.652}. Best is trial 41 with value: 0.6875532234561516.
feature_fraction_stage2, val_score: 0.687553: 100%|##########| 6/6 [00:08<00:00,  1.37s/it]
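The omitted [LightGBM] warnings above ("Auto-choosing col-wise multi-threading, the overhead of testing was ...") recur on every trial. As the message itself suggests, fixing the histogram layout in the parameters skips that detection step. A minimal sketch, assuming you want to silence the warning (this parameter dict is illustrative, not from the original notebook):

# Fixing the layout up front skips LightGBM's col-wise/row-wise
# auto-detection and removes the recurring "overhead of testing" warning.
params = {
    'objective': 'binary',
    'metric': 'binary_logloss',
    'force_col_wise': True,  # or 'force_row_wise': True if memory allows
}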
(regularization_factors, trials 43–62: 20 near-identical runs omitted; nearly all trials tied at val_score 0.687553)
[I 2020-09-27 05:01:33,731] Trial 52 finished with value: 0.687553223456156 and parameters: {'lambda_l1': 1.0265554714929266e-08, 'lambda_l2': 1.0447547951042816e-08}. Best is trial 52 with value: 0.687553223456156.
regularization_factors, val_score: 0.687553: 100%|##########| 20/20 [00:25<00:00,  1.30s/it]
(min_data_in_leaf, trials 63–67: repeated runs omitted)
[I 2020-09-27 05:01:52,557] Trial 66 finished with value: 0.6875238059643457 and parameters: {'min_child_samples': 25}. Best is trial 66 with value: 0.6875238059643457.
min_data_in_leaf, val_score: 0.687524: 100%|##########| 5/5 [00:07<00:00,  1.44s/it]
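The stage names in these logs (feature_fraction, num_leaves, feature_fraction_stage2, regularization_factors, min_data_in_leaf) and the one-study-per-fold pattern match Optuna's stepwise LightGBM tuner. A minimal sketch of a per-fold loop that would produce logs of this shape, assuming optuna.integration.lightgbm is the tuner in use; N_SPLITS and the X/y split are illustrative assumptions, and the preprocessing/target encoding from earlier is taken as already applied:

# Sketch only: assumes these logs come from Optuna's stepwise LightGBM tuner.
import optuna.integration.lightgbm as lgb_tuner
from sklearn.model_selection import KFold

X = train.drop(columns=['y'])  # illustrative: features after the encoding above
y = train['y']
N_SPLITS = 10  # assumption: fold indices 7 and 8 appear in the log

kf = KFold(n_splits=N_SPLITS, shuffle=True, random_state=71)
for fold, (tr_idx, va_idx) in enumerate(kf.split(X)):
    print(f'Fold : {fold}')
    dtrain = lgb_tuner.Dataset(X.iloc[tr_idx], y.iloc[tr_idx])
    dvalid = lgb_tuner.Dataset(X.iloc[va_idx], y.iloc[va_idx])
    # One Optuna study per fold; the tuner sweeps feature_fraction,
    # num_leaves, bagging, feature_fraction_stage2, regularization_factors
    # and min_data_in_leaf in that fixed order.
    booster = lgb_tuner.train(
        {'objective': 'binary', 'metric': 'binary_logloss'},
        dtrain,
        valid_sets=[dtrain, dvalid],
        valid_names=['train', 'valid'],
        num_boost_round=1000,       # matches "Did not meet early stopping" at [1000]
        early_stopping_rounds=100,  # matches "don't improve for 100 rounds"
        verbose_eval=100,           # matches the [100], [200], ... printouts
    )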
Fold : 8
[I 2020-09-27 05:01:54,160] A new study created in memory with name: no-name-f83c045e-ae9d-45af-86f2-b4f88b2fc4c1
(feature_fraction, trials 0–6 of Fold 8: repeated runs omitted)
[I 2020-09-27 05:01:57,810] Trial 3 finished with value: 0.6894460749095064 and parameters: {'feature_fraction': 0.7}. Best is trial 3 with value: 0.6894460749095064.
feature_fraction, val_score: 0.689446: 100%|##########| 7/7 [00:06<00:00,  1.10it/s]
(num_leaves, trials 7–21: repeated runs omitted; the pasted log breaks off mid-stage)
[I 2020-09-27 05:02:02,118] Trial 7 finished with value: 0.6884528114970678 and parameters: {'num_leaves': 5}. Best is trial 7 with value: 0.6884528114970678.
num_leaves, val_score: 0.688453:  75%|#######5  | 15/20 [00:20<00:06,  1.28s/it]
[800]	train's binary_logloss: 0.68642	valid's binary_logloss: 0.689474
[900]	train's binary_logloss: 0.686246	valid's binary_logloss: 0.689462
[1000]	train's binary_logloss: 0.686095	valid's binary_logloss: 0.689411
Did not meet early stopping. Best iteration is:
[1000]	train's binary_logloss: 0.686095	valid's binary_logloss: 0.689411
num_leaves, val_score: 0.688453:  80%|########  | 16/20 [00:25<00:09,  2.43s/it][I 2020-09-27 05:02:26,077] Trial 22 finished with value: 0.6894108839720557 and parameters: {'num_leaves': 2}. Best is trial 7 with value: 0.6884528114970678.
num_leaves, val_score: 0.688453:  80%|########  | 16/20 [00:25<00:09,  2.43s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.016684 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.65797	valid's binary_logloss: 0.690213
Early stopping, best iteration is:
[38]	train's binary_logloss: 0.675789	valid's binary_logloss: 0.689489
num_leaves, val_score: 0.688453:  85%|########5 | 17/20 [00:26<00:05,  1.96s/it][I 2020-09-27 05:02:26,941] Trial 23 finished with value: 0.6894891049398579 and parameters: {'num_leaves': 39}. Best is trial 7 with value: 0.6884528114970678.
num_leaves, val_score: 0.688453:  85%|########5 | 17/20 [00:26<00:05,  1.96s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.011681 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.632096	valid's binary_logloss: 0.691354
Early stopping, best iteration is:
[28]	train's binary_logloss: 0.670154	valid's binary_logloss: 0.690155
num_leaves, val_score: 0.688453:  90%|######### | 18/20 [00:27<00:03,  1.64s/it][I 2020-09-27 05:02:27,818] Trial 24 finished with value: 0.6901553410011205 and parameters: {'num_leaves': 75}. Best is trial 7 with value: 0.6884528114970678.
num_leaves, val_score: 0.688453:  90%|######### | 18/20 [00:27<00:03,  1.64s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.012016 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.669171	valid's binary_logloss: 0.689591
Early stopping, best iteration is:
[78]	train's binary_logloss: 0.67319	valid's binary_logloss: 0.68921
num_leaves, val_score: 0.688453:  95%|#########5| 19/20 [00:28<00:01,  1.42s/it][I 2020-09-27 05:02:28,738] Trial 25 finished with value: 0.6892103907051438 and parameters: {'num_leaves': 24}. Best is trial 7 with value: 0.6884528114970678.
num_leaves, val_score: 0.688453:  95%|#########5| 19/20 [00:28<00:01,  1.42s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.018748 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.631577	valid's binary_logloss: 0.692299
Early stopping, best iteration is:
[26]	train's binary_logloss: 0.671068	valid's binary_logloss: 0.690199
num_leaves, val_score: 0.688453: 100%|##########| 20/20 [00:29<00:00,  1.26s/it][I 2020-09-27 05:02:29,625] Trial 26 finished with value: 0.6901985363739374 and parameters: {'num_leaves': 77}. Best is trial 7 with value: 0.6884528114970678.
num_leaves, val_score: 0.688453: 100%|##########| 20/20 [00:29<00:00,  1.46s/it]
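For reference, the stage names above (num_leaves, bagging, feature_fraction_stage2, regularization_factors, min_data_in_leaf) and the `Trial N finished` lines match the output of the stepwise tuner in Optuna's LightGBM integration. Below is a minimal sketch of how such a run is typically launched, not the code actually used for this log; `X`, `y`, the split ratio, and the keyword arguments reflect the 2020-era Optuna/LightGBM signatures:

```python
import lightgbm as lgb
import optuna.integration.lightgbm as opt_lgb
from sklearn.model_selection import train_test_split

# X, y: the preprocessed feature matrix and target (assumed names)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=71)
dtrain = lgb.Dataset(X_tr, label=y_tr)
dvalid = lgb.Dataset(X_val, label=y_val, reference=dtrain)

params = {"objective": "binary", "metric": "binary_logloss"}

# Tunes feature_fraction -> num_leaves -> bagging -> feature_fraction_stage2
# -> regularization_factors -> min_data_in_leaf, in that order
tuner = opt_lgb.LightGBMTuner(
    params,
    dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=["train", "valid"],
    num_boost_round=1000,
    early_stopping_rounds=100,  # "don't improve for 100 rounds" in the log
    verbose_eval=100,           # evaluation printout every 100 iterations
)
tuner.run()
print(tuner.best_score, tuner.best_params)
```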
bagging stage (10 trials, trials 27–36), tuning bagging_fraction and bagging_freq:

- Trial 27: bagging_fraction=0.6153, bagging_freq=7 → 0.688258
- Trial 28: bagging_fraction=0.6026, bagging_freq=7 → 0.688374
- Trial 29: bagging_fraction=0.6029, bagging_freq=7 → 0.688390
- Trial 30: bagging_fraction=0.5939, bagging_freq=7 → 0.688429
- Trial 31: bagging_fraction=0.5958, bagging_freq=7 → 0.688622
- Trial 32: bagging_fraction=0.6011, bagging_freq=7 → 0.688429
- Trial 33: bagging_fraction=0.6161, bagging_freq=7 → 0.688470
- Trial 34: bagging_fraction=0.6004, bagging_freq=7 → 0.688588
- Trial 35: bagging_fraction=0.7778, bagging_freq=7 → 0.688619
- Trial 36: bagging_fraction=0.4581, bagging_freq=4 → 0.688364

Stage best: trial 27 (bagging_fraction≈0.615, bagging_freq=7), val_score 0.688258, a small improvement over the previous best of 0.688453.
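Each trial also re-runs LightGBM's row-wise/col-wise multi-threading benchmark, producing the repeated overhead warning seen in the raw log. As the warning text itself suggests, fixing the strategy in the parameters removes both the benchmark and the warning; for example:

```python
params = {
    "objective": "binary",
    "metric": "binary_logloss",
    # Skip LightGBM's per-run col-wise vs. row-wise benchmark.
    # Use force_row_wise=True instead if memory is plentiful and the
    # benchmark kept choosing row-wise.
    "force_col_wise": True,
}
```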
feature_fraction_stage2 stage (6 trials, trials 37–42), a finer search over feature_fraction:

- Trial 37: feature_fraction=0.748 → 0.688072
- Trial 38: feature_fraction=0.716 → 0.688072 (same training curve and score as trial 37)
- Trial 39: feature_fraction=0.684 → 0.688258
- Trial 40: feature_fraction=0.780 → 0.688618
- Trial 41: feature_fraction=0.620 → 0.688789
- Trial 42: feature_fraction=0.652 → 0.688579

Stage best: trial 37 (feature_fraction=0.748), val_score 0.688072.
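The `[I 2020-09-27 ...]` lines in the raw log are Optuna's INFO-level logging, and the `[LightGBM] [Info]` / `[Warning]` lines are LightGBM's own. Both can be quieted to keep notebook output readable; a small sketch using standard settings (not specific to this notebook):

```python
import optuna

# Silence Optuna's per-trial "[I ...] Trial N finished ..." INFO lines
optuna.logging.set_verbosity(optuna.logging.WARNING)

# Silence LightGBM's [Info]/[Warning] lines via the model parameters
params = {"objective": "binary", "metric": "binary_logloss", "verbosity": -1}
```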
regularization_factors stage (20 trials, trials 43–62), tuning lambda_l1 and lambda_l2:

- Trial 43: lambda_l1=4.86e-2, lambda_l2=2.49e-1 → 0.688150
- Trial 44: lambda_l1=1.08e-1, lambda_l2=5.82e-1 → 0.688392
- Trial 45: lambda_l1=1.19e-6, lambda_l2=5.59e-3 → 0.688072
- Trial 46: lambda_l1=5.11e-8, lambda_l2=5.24e-3 → 0.688072
- Trial 47: lambda_l1=1.72e-8, lambda_l2=7.52e-3 → 0.688085
- Trials 48–62: lambda_l1 in roughly 1e-8 to 6e-5, lambda_l2 in roughly 1e-8 to 4e-4 → all 0.688072, differing only beyond the seventh decimal place

Stage best: trial 62 (lambda_l1≈5.88e-5, lambda_l2≈1.17e-8), val_score 0.6880722251720996, about 8e-10 below the previous stage best (0.6880722259887746). At these magnitudes the regularization terms barely affect the model, so the stage gives effectively no gain.
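With the score pinned at 0.688072 for most of this stage, the sampled lambda values are too small to change the model, and plateaus like this are easiest to spot by inspecting the trials as a table. A minimal sketch, assuming the tuner is built around an explicitly created study (the `study` keyword and the `params_*` column prefix follow Optuna's conventions):

```python
import optuna
import optuna.integration.lightgbm as opt_lgb

study = optuna.create_study(direction="minimize")
tuner = opt_lgb.LightGBMTuner(params, dtrain, valid_sets=[dtrain, dvalid], study=study)
tuner.run()

# One row per trial; hyperparameters appear as params_* columns
df = study.trials_dataframe()
print(df[["number", "value", "params_lambda_l1", "params_lambda_l2"]].tail(20))
```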
min_data_in_leaf stage (5 trials; the log is cut off after trial 66), tuning min_child_samples:

- Trial 63: min_child_samples=100 → 0.688886
- Trial 64: min_child_samples=25 → 0.688516
- Trial 65: min_child_samples=50 → 0.688100
- Trial 66: min_child_samples=5 → 0.688417

Best so far in this stage: trial 65 (min_child_samples=50), val_score 0.688100; none of these trials improved on the running best val_score of 0.688072.
min_data_in_leaf, val_score: 0.688072:  80%|########  | 4/5 [00:05<00:01,  1.32s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.008044 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.686007	valid's binary_logloss: 0.689624
[200]	train's binary_logloss: 0.682678	valid's binary_logloss: 0.688328
[300]	train's binary_logloss: 0.679893	valid's binary_logloss: 0.688247
[400]	train's binary_logloss: 0.677249	valid's binary_logloss: 0.688678
Early stopping, best iteration is:
[311]	train's binary_logloss: 0.679598	valid's binary_logloss: 0.688186
min_data_in_leaf, val_score: 0.688072: 100%|##########| 5/5 [00:06<00:00,  1.38s/it][I 2020-09-27 05:03:27,278] Trial 67 finished with value: 0.6881857897062641 and parameters: {'min_child_samples': 10}. Best is trial 65 with value: 0.6880998602504527.
min_data_in_leaf, val_score: 0.688072: 100%|##########| 5/5 [00:06<00:00,  1.36s/it]
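A note on reading this output: the staged progress bars (feature_fraction → num_leaves → bagging → feature_fraction_stage2 → regularization_factors → min_data_in_leaf) are the stepwise hyperparameter search of Optuna's LightGBM tuner, and the "Fold : N" headers show that the search is re-run for every cross-validation fold. The sketch below is an assumption about the kind of call that produces such logs, not this notebook's exact code; every variable name in it is a placeholder.

# Minimal sketch (assumed, not the notebook's exact code) of an Optuna
# LightGBM tuner run that prints staged logs like the ones above.
# With 2020-era library versions, optuna.integration.lightgbm wraps
# lightgbm and tunes one hyperparameter group at a time.
import numpy as np
import optuna.integration.lightgbm as opt_lgb

# Placeholder data standing in for one fold's train/validation split
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1000, 26)), rng.integers(0, 2, 1000)
X_valid, y_valid = rng.normal(size=(200, 26)), rng.integers(0, 2, 200)

dtrain = opt_lgb.Dataset(X_train, label=y_train)
dvalid = opt_lgb.Dataset(X_valid, label=y_valid)

# opt_lgb.train is the tuning wrapper: it sequentially searches
# feature_fraction, num_leaves, bagging, feature_fraction_stage2,
# regularization_factors and min_data_in_leaf, emitting one progress
# bar per stage and one "Trial N finished ..." line per trial.
model = opt_lgb.train(
    {'objective': 'binary', 'metric': 'binary_logloss'},
    dtrain,
    valid_sets=[dtrain, dvalid],
    valid_names=['train', 'valid'],   # the "train's/valid's binary_logloss" labels
    verbose_eval=100,                 # log every 100 rounds, as in the output above
    early_stopping_rounds=100,        # "don't improve for 100 rounds"
)
print(model.params)  # the tuned hyperparameters for this fold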
Fold : 9
[I 2020-09-27 05:03:27,416] A new study created in memory with name: no-name-e6a1e2b2-b26c-4a35-b26d-98523bba8f2b
feature_fraction, val_score: 0.689149 (7/7 trials)
[I 2020-09-27 05:03:28,547] Trial 0 finished with value: 0.6891486028436887 and parameters: {'feature_fraction': 0.5}. Best is trial 0 with value: 0.6891486028436887.
[I 2020-09-27 05:03:29,885] Trial 1 finished with value: 0.6894894237035651 and parameters: {'feature_fraction': 0.8999999999999999}. Best is trial 0 with value: 0.6891486028436887.
[I 2020-09-27 05:03:30,757] Trial 2 finished with value: 0.6896170710885305 and parameters: {'feature_fraction': 0.4}. Best is trial 0 with value: 0.6891486028436887.
[I 2020-09-27 05:03:31,639] Trial 3 finished with value: 0.6897107894464999 and parameters: {'feature_fraction': 1.0}. Best is trial 0 with value: 0.6891486028436887.
[I 2020-09-27 05:03:32,641] Trial 4 finished with value: 0.6896732000011675 and parameters: {'feature_fraction': 0.8}. Best is trial 0 with value: 0.6891486028436887.
[I 2020-09-27 05:03:33,477] Trial 5 finished with value: 0.6894210990140921 and parameters: {'feature_fraction': 0.7}. Best is trial 0 with value: 0.6891486028436887.
[I 2020-09-27 05:03:34,239] Trial 6 finished with value: 0.6894553081219641 and parameters: {'feature_fraction': 0.6}. Best is trial 0 with value: 0.6891486028436887.
num_leaves, val_score: 0.689088 (20/20 trials)
[I 2020-09-27 05:03:35,508] Trial 7 finished with value: 0.6908299824401397 and parameters: {'num_leaves': 204}. Best is trial 7 with value: 0.6908299824401397.
[I 2020-09-27 05:03:36,684] Trial 8 finished with value: 0.6909207765265465 and parameters: {'num_leaves': 202}. Best is trial 7 with value: 0.6908299824401397.
[I 2020-09-27 05:03:37,591] Trial 9 finished with value: 0.6901469432883655 and parameters: {'num_leaves': 89}. Best is trial 9 with value: 0.6901469432883655.
[I 2020-09-27 05:03:39,636] Trial 10 finished with value: 0.6893392852593333 and parameters: {'num_leaves': 2}. Best is trial 10 with value: 0.6893392852593333.
[I 2020-09-27 05:03:40,838] Trial 11 finished with value: 0.6893514061615624 and parameters: {'num_leaves': 5}. Best is trial 10 with value: 0.6893392852593333.
[I 2020-09-27 05:03:41,643] Trial 12 finished with value: 0.6891638627451019 and parameters: {'num_leaves': 15}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:42,610] Trial 13 finished with value: 0.6901469432883656 and parameters: {'num_leaves': 89}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:43,427] Trial 14 finished with value: 0.6897233852217974 and parameters: {'num_leaves': 58}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:44,806] Trial 15 finished with value: 0.6904857400051433 and parameters: {'num_leaves': 254}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:45,833] Trial 16 finished with value: 0.6908843676817171 and parameters: {'num_leaves': 160}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:46,509] Trial 17 finished with value: 0.6893229030060266 and parameters: {'num_leaves': 44}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:47,497] Trial 18 finished with value: 0.6904513066474659 and parameters: {'num_leaves': 136}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:48,195] Trial 19 finished with value: 0.6895546123306635 and parameters: {'num_leaves': 30}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:49,159] Trial 20 finished with value: 0.6901453602236771 and parameters: {'num_leaves': 91}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:49,991] Trial 21 finished with value: 0.6896105933356692 and parameters: {'num_leaves': 40}. Best is trial 12 with value: 0.6891638627451019.
[I 2020-09-27 05:03:51,133] Trial 22 finished with value: 0.689088142033773 and parameters: {'num_leaves': 6}. Best is trial 22 with value: 0.689088142033773.
[I 2020-09-27 05:03:52,425] Trial 23 finished with value: 0.6895057052158261 and parameters: {'num_leaves': 4}. Best is trial 22 with value: 0.689088142033773.
[I 2020-09-27 05:03:53,255] Trial 24 finished with value: 0.6897160545946952 and parameters: {'num_leaves': 72}. Best is trial 22 with value: 0.689088142033773.
[I 2020-09-27 05:03:54,104] Trial 25 finished with value: 0.6894258959836473 and parameters: {'num_leaves': 28}. Best is trial 22 with value: 0.689088142033773.
[I 2020-09-27 05:03:55,150] Trial 26 finished with value: 0.6899541998715152 and parameters: {'num_leaves': 117}. Best is trial 22 with value: 0.689088142033773.
bagging, val_score: 0.688665 (10/10 trials)
[I 2020-09-27 05:03:56,212] Trial 27 finished with value: 0.6894817163491044 and parameters: {'bagging_fraction': 0.9223383259961062, 'bagging_freq': 3}. Best is trial 27 with value: 0.6894817163491044.
[I 2020-09-27 05:03:57,220] Trial 28 finished with value: 0.6889445920177824 and parameters: {'bagging_fraction': 0.40058272142371876, 'bagging_freq': 7}. Best is trial 28 with value: 0.6889445920177824.
[I 2020-09-27 05:03:58,234] Trial 29 finished with value: 0.6889992638135699 and parameters: {'bagging_fraction': 0.4181787978927057, 'bagging_freq': 7}. Best is trial 28 with value: 0.6889445920177824.
[I 2020-09-27 05:03:59,225] Trial 30 finished with value: 0.688830953603927 and parameters: {'bagging_fraction': 0.40441196806803215, 'bagging_freq': 7}. Best is trial 30 with value: 0.688830953603927.
[I 2020-09-27 05:04:00,231] Trial 31 finished with value: 0.688974864123655 and parameters: {'bagging_fraction': 0.4053863270403766, 'bagging_freq': 7}. Best is trial 30 with value: 0.688830953603927.
[I 2020-09-27 05:04:00,968] Trial 32 finished with value: 0.6890288131482014 and parameters: {'bagging_fraction': 0.4233037496739684, 'bagging_freq': 7}. Best is trial 30 with value: 0.688830953603927.
[I 2020-09-27 05:04:01,669] Trial 33 finished with value: 0.6889732179547475 and parameters: {'bagging_fraction': 0.4054479427416367, 'bagging_freq': 7}. Best is trial 30 with value: 0.688830953603927.
[I 2020-09-27 05:04:02,643] Trial 34 finished with value: 0.6886646750149633 and parameters: {'bagging_fraction': 0.4052909923895454, 'bagging_freq': 7}. Best is trial 34 with value: 0.6886646750149633.
[I 2020-09-27 05:04:03,602] Trial 35 finished with value: 0.6890068245461886 and parameters: {'bagging_fraction': 0.40351986086829467, 'bagging_freq': 7}. Best is trial 34 with value: 0.6886646750149633.
[I 2020-09-27 05:04:04,460] Trial 36 finished with value: 0.6892703778547441 and parameters: {'bagging_fraction': 0.5684326008067807, 'bagging_freq': 5}. Best is trial 34 with value: 0.6886646750149633.
feature_fraction_stage2, val_score: 0.688665 (6/6 trials)
[I 2020-09-27 05:04:05,490] Trial 37 finished with value: 0.6886646750149633 and parameters: {'feature_fraction': 0.516}. Best is trial 37 with value: 0.6886646750149633.
[I 2020-09-27 05:04:06,195] Trial 38 finished with value: 0.6889230860549193 and parameters: {'feature_fraction': 0.45199999999999996}. Best is trial 37 with value: 0.6886646750149633.
[I 2020-09-27 05:04:07,161] Trial 39 finished with value: 0.6887381070029943 and parameters: {'feature_fraction': 0.5479999999999999}. Best is trial 37 with value: 0.6886646750149633.
[I 2020-09-27 05:04:08,093] Trial 40 finished with value: 0.6891552510228472 and parameters: {'feature_fraction': 0.42}. Best is trial 37 with value: 0.6886646750149633.
[I 2020-09-27 05:04:09,134] Trial 41 finished with value: 0.6889333127717522 and parameters: {'feature_fraction': 0.58}. Best is trial 37 with value: 0.6886646750149633.
[I 2020-09-27 05:04:10,124] Trial 42 finished with value: 0.6886646750149633 and parameters: {'feature_fraction': 0.484}. Best is trial 37 with value: 0.6886646750149633.
regularization_factors, val_score: 0.688631 (4/20 trials)
[I 2020-09-27 05:04:11,228] Trial 43 finished with value: 0.6886306062996863 and parameters: {'lambda_l1': 0.05593044522293832, 'lambda_l2': 0.003082206943612091}. Best is trial 43 with value: 0.6886306062996863.
[I 2020-09-27 05:04:11,991] Trial 44 finished with value: 0.6888752025477913 and parameters: {'lambda_l1': 0.1570663292442244, 'lambda_l2': 0.0010606198676237839}. Best is trial 43 with value: 0.6886306062996863.
[I 2020-09-27 05:04:12,715] Trial 45 finished with value: 0.6889959749240171 and parameters: {'lambda_l1': 1.4927818545611396e-06, 'lambda_l2': 7.613862377908525}. Best is trial 43 with value: 0.6886306062996863.
[I 2020-09-27 05:04:13,760] Trial 46 finished with value: 0.6892218212122403 and parameters: {'lambda_l1': 5.48610084980033, 'lambda_l2': 0.00024090694580484778}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  20%|##        | 4/20 [00:03<00:15,  1.05it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.004926 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685369	valid's binary_logloss: 0.688941
[200]	train's binary_logloss: 0.681626	valid's binary_logloss: 0.689259
Early stopping, best iteration is:
[174]	train's binary_logloss: 0.682563	valid's binary_logloss: 0.688665
regularization_factors, val_score: 0.688631:  25%|##5       | 5/20 [00:04<00:14,  1.02it/s][I 2020-09-27 05:04:14,825] Trial 47 finished with value: 0.6886646716779263 and parameters: {'lambda_l1': 0.0011713348102023722, 'lambda_l2': 1.4326223733321923e-08}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  25%|##5       | 5/20 [00:04<00:14,  1.02it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001038 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685369	valid's binary_logloss: 0.688941
[200]	train's binary_logloss: 0.681626	valid's binary_logloss: 0.689259
Early stopping, best iteration is:
[174]	train's binary_logloss: 0.682563	valid's binary_logloss: 0.688665
regularization_factors, val_score: 0.688631:  30%|###       | 6/20 [00:05<00:14,  1.07s/it][I 2020-09-27 05:04:16,093] Trial 48 finished with value: 0.6886646717918603 and parameters: {'lambda_l1': 0.0011323335612472105, 'lambda_l2': 1.2449590491320676e-08}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  30%|###       | 6/20 [00:05<00:14,  1.07s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.009231 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685369	valid's binary_logloss: 0.688941
[200]	train's binary_logloss: 0.681626	valid's binary_logloss: 0.689259
Early stopping, best iteration is:
[174]	train's binary_logloss: 0.682563	valid's binary_logloss: 0.688665
regularization_factors, val_score: 0.688631:  35%|###5      | 7/20 [00:06<00:13,  1.05s/it][I 2020-09-27 05:04:17,084] Trial 49 finished with value: 0.6886646722615538 and parameters: {'lambda_l1': 0.0009667778650190732, 'lambda_l2': 1.4964282342181056e-08}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  35%|###5      | 7/20 [00:06<00:13,  1.05s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001006 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685369	valid's binary_logloss: 0.688941
[200]	train's binary_logloss: 0.681626	valid's binary_logloss: 0.689259
Early stopping, best iteration is:
[174]	train's binary_logloss: 0.682563	valid's binary_logloss: 0.688665
regularization_factors, val_score: 0.688631:  40%|####      | 8/20 [00:07<00:12,  1.04s/it][I 2020-09-27 05:04:18,110] Trial 50 finished with value: 0.6886646726282439 and parameters: {'lambda_l1': 0.000834005476706058, 'lambda_l2': 1.1339953988984425e-08}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  40%|####      | 8/20 [00:07<00:12,  1.04s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001002 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685369	valid's binary_logloss: 0.688941
[200]	train's binary_logloss: 0.681626	valid's binary_logloss: 0.689259
Early stopping, best iteration is:
[174]	train's binary_logloss: 0.682563	valid's binary_logloss: 0.688665
regularization_factors, val_score: 0.688631:  45%|####5     | 9/20 [00:09<00:11,  1.04s/it][I 2020-09-27 05:04:19,146] Trial 51 finished with value: 0.6886646724473121 and parameters: {'lambda_l1': 0.0008969854861021092, 'lambda_l2': 1.1117633466286682e-08}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  45%|####5     | 9/20 [00:09<00:11,  1.04s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.013191 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685369	valid's binary_logloss: 0.688941
[200]	train's binary_logloss: 0.681626	valid's binary_logloss: 0.689259
Early stopping, best iteration is:
[174]	train's binary_logloss: 0.682563	valid's binary_logloss: 0.688665
regularization_factors, val_score: 0.688631:  50%|#####     | 10/20 [00:09<00:10,  1.01s/it][I 2020-09-27 05:04:20,098] Trial 52 finished with value: 0.6886646724367362 and parameters: {'lambda_l1': 0.0009016551190739346, 'lambda_l2': 1.0925317395658214e-08}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  50%|#####     | 10/20 [00:09<00:10,  1.01s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000952 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685369	valid's binary_logloss: 0.688941
[200]	train's binary_logloss: 0.681626	valid's binary_logloss: 0.689259
Early stopping, best iteration is:
[174]	train's binary_logloss: 0.682563	valid's binary_logloss: 0.688665
regularization_factors, val_score: 0.688631:  55%|#####5    | 11/20 [00:10<00:09,  1.01s/it][I 2020-09-27 05:04:21,108] Trial 53 finished with value: 0.6886646727381388 and parameters: {'lambda_l1': 0.0007965344637915165, 'lambda_l2': 1.3659640884572439e-08}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  55%|#####5    | 11/20 [00:10<00:09,  1.01s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.011699 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685369	valid's binary_logloss: 0.688941
[200]	train's binary_logloss: 0.681626	valid's binary_logloss: 0.689259
Early stopping, best iteration is:
[174]	train's binary_logloss: 0.682563	valid's binary_logloss: 0.688665
regularization_factors, val_score: 0.688631:  60%|######    | 12/20 [00:11<00:07,  1.01it/s][I 2020-09-27 05:04:22,056] Trial 54 finished with value: 0.6886646716630922 and parameters: {'lambda_l1': 0.0011776384915221255, 'lambda_l2': 1.0468618620438098e-08}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  60%|######    | 12/20 [00:11<00:07,  1.01it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000972 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685369	valid's binary_logloss: 0.688941
[200]	train's binary_logloss: 0.681589	valid's binary_logloss: 0.689058
Early stopping, best iteration is:
[174]	train's binary_logloss: 0.68257	valid's binary_logloss: 0.688662
regularization_factors, val_score: 0.688631:  65%|######5   | 13/20 [00:12<00:06,  1.00it/s][I 2020-09-27 05:04:23,068] Trial 55 finished with value: 0.6886618433996927 and parameters: {'lambda_l1': 0.00344335006832809, 'lambda_l2': 1.0707613854300203e-08}. Best is trial 43 with value: 0.6886306062996863.
regularization_factors, val_score: 0.688631:  65%|######5   | 13/20 [00:12<00:06,  1.00it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000963 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685283	valid's binary_logloss: 0.689021
[200]	train's binary_logloss: 0.68165	valid's binary_logloss: 0.688997
Early stopping, best iteration is:
[172]	train's binary_logloss: 0.682609	valid's binary_logloss: 0.688401
regularization_factors, val_score: 0.688401:  70%|#######   | 14/20 [00:13<00:05,  1.00it/s][I 2020-09-27 05:04:24,069] Trial 56 finished with value: 0.6884008811372169 and parameters: {'lambda_l1': 0.013564067517829166, 'lambda_l2': 0.0028990672498837708}. Best is trial 56 with value: 0.6884008811372169.
regularization_factors, val_score: 0.688401:  70%|#######   | 14/20 [00:13<00:05,  1.00it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001378 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685305	valid's binary_logloss: 0.689126
[200]	train's binary_logloss: 0.681644	valid's binary_logloss: 0.689459
Early stopping, best iteration is:
[187]	train's binary_logloss: 0.68211	valid's binary_logloss: 0.688878
regularization_factors, val_score: 0.688401:  75%|#######5  | 15/20 [00:14<00:05,  1.02s/it][I 2020-09-27 05:04:25,134] Trial 57 finished with value: 0.6888781933882588 and parameters: {'lambda_l1': 0.06847066607771893, 'lambda_l2': 0.005100079808171161}. Best is trial 56 with value: 0.6884008811372169.
regularization_factors, val_score: 0.688401:  75%|#######5  | 15/20 [00:14<00:05,  1.02s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000979 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685375	valid's binary_logloss: 0.689304
[200]	train's binary_logloss: 0.681532	valid's binary_logloss: 0.689352
Early stopping, best iteration is:
[173]	train's binary_logloss: 0.682496	valid's binary_logloss: 0.688928
regularization_factors, val_score: 0.688401:  80%|########  | 16/20 [00:15<00:04,  1.01s/it][I 2020-09-27 05:04:26,139] Trial 58 finished with value: 0.6889282775916249 and parameters: {'lambda_l1': 0.04206907981702274, 'lambda_l2': 0.008049472807870911}. Best is trial 56 with value: 0.6884008811372169.
regularization_factors, val_score: 0.688401:  80%|########  | 16/20 [00:15<00:04,  1.01s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001050 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685283	valid's binary_logloss: 0.689021
[200]	train's binary_logloss: 0.68165	valid's binary_logloss: 0.688997
Early stopping, best iteration is:
[172]	train's binary_logloss: 0.682609	valid's binary_logloss: 0.688401
regularization_factors, val_score: 0.688401:  85%|########5 | 17/20 [00:19<00:05,  1.68s/it][I 2020-09-27 05:04:29,386] Trial 59 finished with value: 0.6884009077923067 and parameters: {'lambda_l1': 0.01818292826675875, 'lambda_l2': 4.793765360259705e-07}. Best is trial 56 with value: 0.6884008811372169.
regularization_factors, val_score: 0.688401:  85%|########5 | 17/20 [00:19<00:05,  1.68s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001800 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685284	valid's binary_logloss: 0.689021
[200]	train's binary_logloss: 0.681652	valid's binary_logloss: 0.688997
Early stopping, best iteration is:
[172]	train's binary_logloss: 0.68261	valid's binary_logloss: 0.688401
regularization_factors, val_score: 0.688401:  90%|######### | 18/20 [00:20<00:03,  1.51s/it][I 2020-09-27 05:04:30,486] Trial 60 finished with value: 0.6884009815019274 and parameters: {'lambda_l1': 0.022406843035860227, 'lambda_l2': 2.0488930606948696e-06}. Best is trial 56 with value: 0.6884008811372169.
regularization_factors, val_score: 0.688401:  90%|######### | 18/20 [00:20<00:03,  1.51s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001595 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685284	valid's binary_logloss: 0.689021
[200]	train's binary_logloss: 0.681652	valid's binary_logloss: 0.688997
Early stopping, best iteration is:
[172]	train's binary_logloss: 0.682611	valid's binary_logloss: 0.688401
regularization_factors, val_score: 0.688401:  95%|#########5| 19/20 [00:21<00:01,  1.39s/it][I 2020-09-27 05:04:31,591] Trial 61 finished with value: 0.6884010228456562 and parameters: {'lambda_l1': 0.024768099089603052, 'lambda_l2': 5.962389419495749e-06}. Best is trial 56 with value: 0.6884008811372169.
regularization_factors, val_score: 0.688401:  95%|#########5| 19/20 [00:21<00:01,  1.39s/it][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001134 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685378	valid's binary_logloss: 0.688831
[200]	train's binary_logloss: 0.681663	valid's binary_logloss: 0.688838
Early stopping, best iteration is:
[182]	train's binary_logloss: 0.682288	valid's binary_logloss: 0.688653
regularization_factors, val_score: 0.688401: 100%|##########| 20/20 [00:22<00:00,  1.32s/it][I 2020-09-27 05:04:32,742] Trial 62 finished with value: 0.6886532224197758 and parameters: {'lambda_l1': 0.03746482418394038, 'lambda_l2': 1.895168169019993e-06}. Best is trial 56 with value: 0.6884008811372169.
regularization_factors, val_score: 0.688401: 100%|##########| 20/20 [00:22<00:00,  1.13s/it]
min_data_in_leaf, val_score: 0.688401:   0%|          | 0/5 [00:00<?, ?it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000979 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685537	valid's binary_logloss: 0.689132
Early stopping, best iteration is:
[95]	train's binary_logloss: 0.68572	valid's binary_logloss: 0.689056
min_data_in_leaf, val_score: 0.688401:  20%|##        | 1/5 [00:00<00:03,  1.30it/s][I 2020-09-27 05:04:33,525] Trial 63 finished with value: 0.6890556326625765 and parameters: {'min_child_samples': 100}. Best is trial 63 with value: 0.6890556326625765.
min_data_in_leaf, val_score: 0.688401:  20%|##        | 1/5 [00:00<00:03,  1.30it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000984 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685507	valid's binary_logloss: 0.688911
Early stopping, best iteration is:
[86]	train's binary_logloss: 0.68607	valid's binary_logloss: 0.688678
min_data_in_leaf, val_score: 0.688401:  40%|####      | 2/5 [00:01<00:02,  1.32it/s][I 2020-09-27 05:04:34,254] Trial 64 finished with value: 0.68867811407117 and parameters: {'min_child_samples': 50}. Best is trial 64 with value: 0.68867811407117.
min_data_in_leaf, val_score: 0.688401:  40%|####      | 2/5 [00:01<00:02,  1.32it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001032 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685329	valid's binary_logloss: 0.688996
[200]	train's binary_logloss: 0.681584	valid's binary_logloss: 0.689202
Early stopping, best iteration is:
[168]	train's binary_logloss: 0.682673	valid's binary_logloss: 0.688789
min_data_in_leaf, val_score: 0.688401:  60%|######    | 3/5 [00:02<00:01,  1.20it/s][I 2020-09-27 05:04:35,256] Trial 65 finished with value: 0.6887885221160116 and parameters: {'min_child_samples': 25}. Best is trial 64 with value: 0.68867811407117.
min_data_in_leaf, val_score: 0.688401:  60%|######    | 3/5 [00:02<00:01,  1.20it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001002 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685286	valid's binary_logloss: 0.689236
Early stopping, best iteration is:
[87]	train's binary_logloss: 0.685815	valid's binary_logloss: 0.68908
min_data_in_leaf, val_score: 0.688401:  80%|########  | 4/5 [00:03<00:00,  1.27it/s][I 2020-09-27 05:04:35,953] Trial 66 finished with value: 0.6890800984264625 and parameters: {'min_child_samples': 10}. Best is trial 64 with value: 0.68867811407117.
min_data_in_leaf, val_score: 0.688401:  80%|########  | 4/5 [00:03<00:00,  1.27it/s][LightGBM] [Info] Number of positive: 46363, number of negative: 46663
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000956 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 4689
[LightGBM] [Info] Number of data points in the train set: 93026, number of used features: 26
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.498388 -> initscore=-0.006450
[LightGBM] [Info] Start training from score -0.006450
Training until validation scores don't improve for 100 rounds
[100]	train's binary_logloss: 0.685281	valid's binary_logloss: 0.688938
Early stopping, best iteration is:
[96]	train's binary_logloss: 0.685434	valid's binary_logloss: 0.688858
min_data_in_leaf, val_score: 0.688401: 100%|##########| 5/5 [00:03<00:00,  1.29it/s][I 2020-09-27 05:04:36,693] Trial 67 finished with value: 0.6888576677735015 and parameters: {'min_child_samples': 5}. Best is trial 64 with value: 0.68867811407117.
min_data_in_leaf, val_score: 0.688401: 100%|##########| 5/5 [00:03<00:00,  1.27it/s]
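
The staged progress bars above (feature_fraction_stage2, regularization_factors, min_data_in_leaf) match the fixed stepwise search order of Optuna's LightGBM tuner, so the log most likely comes from that integration. As a reference only, here is a minimal sketch of such a run on synthetic stand-in data; this is an assumption about the tooling (circa-2020 Optuna/LightGBM keyword arguments), not the notebook's actual code, and all variable names are illustrative.

# Minimal sketch: stepwise hyperparameter tuning with Optuna's LightGBM
# integration (assumed tooling; early_stopping_rounds/verbose_eval are the
# circa-2020 LightGBM keyword arguments).
import numpy as np
import optuna.integration.lightgbm as olgb
from sklearn.model_selection import train_test_split

# Synthetic stand-in data, only so the sketch runs end to end.
rng = np.random.default_rng(71)
X = rng.normal(size=(2000, 26))
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=71)

params = {
    "objective": "binary",
    "metric": "binary_logloss",
    "force_row_wise": True,  # silences the "Auto-choosing ... multi-threading" warning
}
dtrain = olgb.Dataset(X_tr, label=y_tr)
dvalid = olgb.Dataset(X_va, label=y_va)

# The tuner sweeps feature_fraction, num_leaves, bagging,
# feature_fraction_stage2, regularization_factors and min_data_in_leaf
# in that fixed order, which matches the stage names in the log above.
booster = olgb.train(
    params, dtrain,
    valid_sets=[dtrain, dvalid], valid_names=["train", "valid"],
    num_boost_round=10000,
    early_stopping_rounds=100,  # "Training until validation scores don't improve for 100 rounds"
    verbose_eval=100,
)
print(booster.params)  # the tuned parameter set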

################################
CV_score:0.5369671988207919

---------------------------------
total CV_score:0.5760159134509492

Create the submission file

# Concatenate the per-mode predictions
# (test_regular_id / test_gachi_id: each mode's ids paired with their predicted y)
test_all = pd.concat([test_regular_id, test_gachi_id])

# Sort by the id column
test_all = test_all.sort_values('id')

# Write the submission file
test_all.to_csv("submission.csv", index=False)

Attached data

  • train_mode_LightGBM.ipynb
  • train_mode_LightGBM_2.ipynb
  • train_lobbymode_LightGBM.ipynb

    sylk

    The missing-value issue in 4-(1) might come from the rank columns such as A1-rank.

    As has come up a few times, rank holds a player's ガチマッチ skill rating and does not exist in ナワバリバトル, so those columns are entirely null there. I'm not sure whether a feature containing nothing but nulls can even be target-encoded, but that might be it.
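
If that hypothesis needs verifying, a quick check along these lines would do it (a sketch against the raw CSVs, before fill_all_null() replaces NaN with -1; the column names follow the discussion above):

# Columns that are entirely null within regular-match (ナワバリバトル) rows.
regular_rows = train[train["lobby-mode"] == "regular"]
all_null_cols = regular_rows.columns[regular_rows.isnull().all()]
print(all_null_cols.tolist())  # expected to include rank columns such as "A1-rank"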

    t_mochizuki0

    Thank you for the quick comment. Sorry for the missing detail: the gaps appear in the weapon columns such as A2-weapon (after conversion to numeric values), where just a handful of entries end up blank. No such gaps occurred before I split the training data by mode, so I suspect the split itself causes the trouble, but the root cause is still unconfirmed...

    cha_kabu

    I've never played Splatoon, so these ideas are a great help. Thank you.

    About the missing values: my guess is that once you split by mode, some weapon types no longer have enough rows, and when those rows are further split into folds, a given weapon can (depending on the seed) vanish from the training side entirely, so the mean needed for target encoding cannot be computed.

    The following blog post was helpful, for reference: https://blog.amedama.jp/entry/target-mean-encoding-types

    That said, the problem should only hit rarely used weapon types, and there look to be workarounds, such as encoding per team rather than per player (I haven't tried it, so I can't say whether it helps...).
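
The failure mode cha_kabu describes can be reproduced in isolation. In the sketch below (toy data, not the competition's), the category "C" has a single row, so whenever that row lands in the validation split, the training folds contain no "C" and .map() returns NaN, the same pattern used inside the post's target-encoding function; the last line shows one common remedy, falling back to the global mean:

import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

# Toy data: weapon "C" appears only once, mimicking a rarely used weapon
# after the mode split.
df = pd.DataFrame({"weapon": ["A"] * 4 + ["B"] * 4 + ["C"],
                   "y": [1, 0, 1, 0, 0, 1, 0, 1, 1]})

kf = KFold(n_splits=3, shuffle=True, random_state=71)
oof = np.repeat(np.nan, len(df))
for train_index, test_index in kf.split(df):
    # Category means computed on the training folds only
    target_mean = df.iloc[train_index].groupby("weapon")["y"].mean()
    oof[test_index] = df["weapon"].iloc[test_index].map(target_mean)

print(np.isnan(oof).sum())  # 1: the lone "C" row could not be encoded

# One common remedy: fall back to the global mean for unseen categories.
oof = np.where(np.isnan(oof), df["y"].mean(), oof)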

    t_mochizuki0

    Thank you. Given that it happens when the data are scarce, that is most likely the cause. I'll experiment with the blog post as a reference.

    The gaps do indeed occur for weapons that look rarely used, so I'll ignore them for now and first build the pipeline through to producing the final predictions. Thank you very much.

    t_mochizuki0

    I set the missing-value problem aside and built everything through to the final submission data.
    Submitting the result, the Public score improved over training on all the data at once.

    • Public score when training on all data at once: 0.536344
    • Public score when combining the per-mode results: 0.541567

    I have updated the code shown above.
    It's a brute-force implementation since this was an experiment, but I hope it serves as a reference.

    I'll look into the missing values at a slower pace...

    t_mochizuki0

    About the data (and training) split: splitting into the four modes ガチエリア〜ガチヤグラ seems too fine-grained and shrinks the training data, and some modes end up with a low CV (ガチヤグラ in particular).
    I therefore tried splitting into just two groups, レギュラーマッチ (= ナワバリバトル) and ガチマッチ (ガチエリア〜ガチヤグラ), as sketched below.
    The Public score is slightly higher this way, so I'm making this the final version. Thank you.
    (I'll look into the missing-value problem at a slower pace.)
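
For reference, that two-way split can be done directly on the lobby-mode column described in the post (a sketch; the DataFrame names are illustrative):

# "regular" = レギュラーマッチ (ナワバリバトル), "gachi" = ガチマッチ (the four ranked modes)
train_regular = train[train["lobby-mode"] == "regular"].reset_index(drop=True)
train_gachi = train[train["lobby-mode"] == "gachi"].reset_index(drop=True)
test_regular = test[test["lobby-mode"] == "regular"].reset_index(drop=True)
test_gachi = test[test["lobby-mode"] == "gachi"].reset_index(drop=True)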

    cha_kabu

    I implemented this with your post as a reference, thank you. I had also just arrived at the same approach.

    I can't say for certain, but I suspect the low CV outside ナワバリ also reflects that, compared with ナワバリ, differences in player level have almost no effect there (= the outcome is harder to predict?).

    For reference, attached are histograms of the A-team/B-team total-level ratio for y=1 (blue) and y=0 (red), for ナワバリ and ヤグラ.

    The top entrants must have found features that work outside ナワバリ as well... I keep adding weapon-related features, but nothing improves.

    ![c93c1935-2a24-4124-9656-c5e892b8ec11.png](https://probspace-stg.s3-ap-northeast-1.amazonaws.com/uploads/user/680f6a9666875ad4cf3fb97e7dde0aa5/images/UserImage_130/c93c1935-2a24-4124-9656-c5e892b8ec11.png =200x) ![89553337-0b26-4ca0-95a9-dc95a92276e8.png](https://probspace-stg.s3-ap-northeast-1.amazonaws.com/uploads/user/680f6a9666875ad4cf3fb97e7dde0aa5/images/UserImage_131/89553337-0b26-4ca0-95a9-dc95a92276e8.png =200x)

    yktsnd

    I'm learning a lot from this post.
    One thing I noticed, so let me comment.

    You report
    レギュラーマッチ: CV_score:0.5996257033039711
    ガチマッチ: CV_score:0.5277763340574645
    平均: CV_score:0.5637010186807178
    and compute the final score as np.mean(total_scores), but that is just the two scores divided by 2 (an unweighted mean).
    Since レギュラーマッチ and ガチマッチ have different numbers of matches, the correct approach is either to weight by match count or to concatenate the predictions and compute accuracy_score once.
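
Both of the fixes suggested above are small. A runnable sketch on synthetic stand-in arrays (all names illustrative: oof_* are out-of-fold predicted probabilities, y_* the true labels), showing that for plain accuracy the match-count-weighted mean agrees with concatenating first and scoring once:

import numpy as np
from sklearn.metrics import accuracy_score

# Synthetic stand-ins so the sketch runs; replace with the notebook's arrays.
rng = np.random.default_rng(71)
y_regular, y_gachi = rng.integers(0, 2, 300), rng.integers(0, 2, 700)
oof_regular, oof_gachi = rng.random(300), rng.random(700)

# Option 1: weight each mode's CV accuracy by its number of matches.
acc_regular = accuracy_score(y_regular, (oof_regular > 0.5).astype(int))
acc_gachi = accuracy_score(y_gachi, (oof_gachi > 0.5).astype(int))
n_r, n_g = len(y_regular), len(y_gachi)
cv_weighted = (acc_regular * n_r + acc_gachi * n_g) / (n_r + n_g)

# Option 2: concatenate the out-of-fold predictions and score once.
y_all = np.concatenate([y_regular, y_gachi])
pred_all = (np.concatenate([oof_regular, oof_gachi]) > 0.5).astype(int)
cv_total = accuracy_score(y_all, pred_all)

# For plain accuracy the two agree (a weighted mean of per-mode accuracies
# is exactly the pooled accuracy).
assert abs(cv_weighted - cv_total) < 1e-12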

    t_mochizuki0

    Thank you for pointing this out. Taking the mean weighted by match count gives 0.543471347270915, so I have corrected the explanation in the main text accordingly. Thank you very much.
