Friday, May 29, 2020

Machine Learning Integration with DevOps





Today, I want to share a new project of mine with you all.

I created an automated system so that whenever a developer writes machine learning code in our system in Git and then just commits it, the developer's job is done. Everything that is left, from training the model to deploying it, happens automatically.

But you probably have a question in mind: how? How do all these things work with just one commit?

After many failed attempts, I finally created this project. It took a lot of effort, but I made it very simple, so all of you can understand it easily, even if you do not have much knowledge of Python. I will explain in a minute what I mean. Just read this full blog...



Here, first, the developer writes the code in our system in Git and just commits it. After the commit, the code is automatically pushed to GitHub. How?? With the help of Git hooks.


As you can see above, I made a hook named post-commit. That is why, whenever the developer commits, the code automatically gets pushed to GitHub.
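
If you want to recreate this hook yourself, here is a minimal sketch of it (assuming your remote is named origin and your branch is master; adjust both for your repo, and make the file executable with chmod +x .git/hooks/post-commit):

code:
vi .git/hooks/post-commit

#!/bin/sh
# Runs automatically after every commit
# Assumes remote "origin" and branch "master"
git push origin master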

After the push, the code shows up on GitHub.




As soon as the code comes to GitHub, a webhook I set on GitHub notifies Jenkins. Because my Jenkins is running on a local system, I use ngrok here to expose it to the public internet.
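
If you are setting this up yourself, the flow is roughly like this (assuming Jenkins listens on its default port 8080; yours may differ):

code:
# Expose the local Jenkins to the public internet
ngrok http 8080

# Then, on GitHub: repo >> Settings >> Webhooks >> Add webhook,
# and use the public URL that ngrok prints as the payload URL:
#   https://<your-ngrok-id>.ngrok.io/github-webhook/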



After that, the main work starts from here...

Here, Jenkins first copies that code and dataset into a folder.


Here, I use
build trigger >>> GitHub hook trigger for GITScm polling
because I made a webhook on GitHub.

code:

sudo cp -rf * /root/mlops

After Jenkins copies the code into the folder, I want a fresh environment where all the requirements of this file already exist, like Keras, TensorFlow, etc. To make this simple and fast, I use the Docker concept here. I made a Dockerfile for this. Here is the Dockerfile.


code:
vi Dockerfile

# Base image with Python 3.6
FROM python:3.6.10

# All project files live in this directory inside the image
WORKDIR /usr/project

COPY . .

# Install Keras, TensorFlow and the other dependencies
RUN pip install -r requirement.txt

# Run the training script by default
CMD ["python", "./mymodel.py"]

...........................................................................................
Here is the requirement.txt file:

matplotlib
numpy
pandas
scipy
tensorflow==1.15.0
keras==2.3.1
...........................................................................................

After writing this, you build a Docker image from it, so run:

docker build -t project:v1 .

If at the end you see that your image was built successfully, your environment is ready for execution.

Now you just have to start that environment. But we don't want anything manual, so what do we do? We create a pipeline: the first Jenkins job hands off to a second one, with the image above already built on the system beforehand.
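
One simple way to chain the two jobs is a sketch like this (the job names here are placeholders for whatever you named your own jobs):

code:
# Jenkins >> job2 >> Configure >> Build Triggers >>
#   "Build after other projects are built" >> Projects to watch: job1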

Now we create our second Jenkins job.


code:
# Launch the right environment based on which library the code uses
if sudo grep -q keras /root/mlops/mymodel.py
then
    # Remove any stale container before starting a fresh training run
    if sudo docker ps -a | grep keras_env
    then
        sudo docker rm -f keras_env
    fi
    sudo docker run -t -v /root/mlops:/usr/src/myapp -w /usr/src/myapp --name keras_env project:v1 python mymodel.py
fi

if sudo grep -q sklearn /root/mlops/mymodel.py
then
    if sudo docker ps -a | grep sklearn_env
    then
        sudo docker rm -f sklearn_env
    fi
    sudo docker run -t -v /root/mlops:/usr/src/myapp -w /usr/src/myapp --name sklearn_env project:v2 python mymodel.py
fi

With the help of this code, I create a new environment according to the requirements, and right after the environment starts, it automatically begins to train the model. But I wrote the training code in such a way that it changes the parameters automatically according to the accuracy. I used a very basic concept for the tweak, so you can also understand very easily what I did exactly.

Here is the code >>>

mymodel.py file

from keras.datasets import mnist
from keras.utils import np_utils
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras import backend as K
import struct
import numpy as np

def read_idx(filename):
    """Credit: https://gist.github.com/tylerneylon"""
    with open(filename, 'rb') as f:
        zero, data_type, dims = struct.unpack('>HBB', f.read(4))
        shape = tuple(struct.unpack('>I', f.read(4))[0] for d in range(dims))
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)
x_train = read_idx("/content/drive/My Drive/Copy of train-images-idx3-ubyte")
y_train = read_idx("/content/drive/My Drive/Copy of train-labels-idx1-ubyte")
x_test = read_idx("/content/drive/My Drive/Copy of t10k-images-idx3-ubyte")
y_test = read_idx("/content/drive/My Drive/Copy of t10k-labels-idx1-ubyte")


# Starting number of epochs (this also drives how many extra layers get added)
epochs = 2


# Training Parameters
batch_size = 128
 

# Lets store the number of rows and columns
img_rows = x_train[0].shape[0]
img_cols = x_train[1].shape[0]

# Getting our data in the right 'shape' needed for Keras
# We need to add a 4th dimension to our data, thereby changing
# our original image shape of (60000,28,28) to (60000,28,28,1)
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)

# store the shape of a single image 
input_shape = (img_rows, img_cols, 1)

# change our image type to float32 data type
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

# Normalize our data by changing the range from (0 to 255) to (0 to 1)
x_train /= 255
x_test /= 255

print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# Now we one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)

# Let's count the number columns in our hot encoded matrix 
print ("Number of Classes: " + str(y_test.shape[1]))
num_classes = y_test.shape[1]
num_pixels = x_train.shape[1] * x_train.shape[2]
# Keep retraining until the accuracy target is met
while 1:
# create model
  model = Sequential()

  model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
  model.add(BatchNormalization())
  # Each retry raises `epochs` by 1, so one more Conv2D block gets added
  j = 1
  while epochs >= j:
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(BatchNormalization())
    j += 1
  model.add(MaxPooling2D(pool_size=(2, 2)))
  model.add(Dropout(0.25))

  model.add(Flatten())
  # Add Dense blocks the same way, but cap them via the j >= 3 break below
  j = 1
  while epochs >= j:
      j += 1
      model.add(Dense(128, activation='relu'))
      model.add(BatchNormalization())
      if j >= 3:
        break
  model.add(Dropout(0.5))
  model.add(Dense(num_classes, activation='softmax'))

  model.compile(loss = 'categorical_crossentropy',
              optimizer = keras.optimizers.Adadelta(),
              metrics = ['accuracy'])

  print(model.summary())
  history = model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

  score = model.evaluate(x_test, y_test, verbose=0)
  print('Test loss:', score[0])
  print('Test accuracy:', score[1])
  f = open("result.txt", "a+")
  f.write("\n No of epochs used:"+str(epochs))
  f.write("\n Accuracy of model : "+str(score[1]))
  f.seek(0)   # rewind so the read below returns the whole log
  print(f.read())
  f.close()
  if score[1] < 0.915:
        epochs += 1
        f = open("result.txt", "a+")
        f.write("\n\n\nAgain train the model with add some more extra layers")
        f.close()
        continue
  else:
        model.save("mymodel.h5")
        break


If you look carefully, here I just wrote a condition to tweak our model. Basically, what I do is this: until my accuracy goes above 91.5%, it changes the hyperparameters automatically. It changes them in such a way that each time it creates the model again, it adds one more layer than the previous run: one more Conv2D layer, and one more Dense layer with 128 neurons (until the Dense cap is reached).

Q) Here I restrict the 128-neuron Dense layers so that it does not add more than 3. But why???
Because, as I saw while training the model, increasing the number of Dense layers does not have much impact on accuracy. And if they can't improve the accuracy, why add them?


Q) Here I do not restrict the Conv2D layers, and I do not add other kinds of layers. But why???
Because, as I saw during training, the best impact on accuracy comes from changing the Conv2D layers with different filters.


It's the easiest way to change the hyperparameters automatically that I have found!!!
I found and tried more than 3 methods to change the hyperparameters automatically, but believe me, the one I showed above is the easiest ever. If you know about CNNs, you will easily understand what I did exactly.

Let me show you a demo of how you can add layers automatically in my model.

Say you want to add a pooling layer with pool size (2, 2),
then you just write >>>

  j = 1
  while epochs >=j:
    j+= 1
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

 
Here you can also add more than one layer at a time: 2, 3, 4, 5...

That's all, so simple. After you write that, it automatically starts adding the layers: the 1st time it adds 1 layer, the 2nd time it adds 2 layers, and so on.

Let me show you the output of my code above >>>

Using TensorFlow backend.
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
Number of Classes: 10
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
batch_normalization_1 (Batch (None, 26, 26, 32)        128       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 64)        18496     
_________________________________________________________________
batch_normalization_2 (Batch (None, 24, 24, 64)        256       
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 22, 22, 64)        36928     
_________________________________________________________________
batch_normalization_3 (Batch (None, 22, 22, 64)        256       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 11, 11, 64)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 11, 11, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 7744)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               991360    
_________________________________________________________________
batch_normalization_4 (Batch (None, 128)               512       
_________________________________________________________________
dense_2 (Dense)              (None, 128)               16512     
_________________________________________________________________
batch_normalization_5 (Batch (None, 128)               512       
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 10)                1290      
=================================================================
Total params: 1,066,570
Trainable params: 1,065,738
Non-trainable params: 832
_________________________________________________________________
None
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
60000/60000 [==============================] - 22s 364us/step - loss: 0.4532 - accuracy: 0.8436 
Epoch 2/2
60000/60000 [==============================] - 14s 229us/step - loss: 0.2815 - accuracy: 0.9009 
Test loss: 0.304155961728096
Test accuracy: 0.8895999789237976

Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_4 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
batch_normalization_6 (Batch (None, 26, 26, 32)        128       
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 24, 24, 64)        18496     
_________________________________________________________________
batch_normalization_7 (Batch (None, 24, 24, 64)        256       
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 22, 22, 64)        36928     
_________________________________________________________________
batch_normalization_8 (Batch (None, 22, 22, 64)        256       
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 20, 20, 64)        36928     
_________________________________________________________________
batch_normalization_9 (Batch (None, 20, 20, 64)        256       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 10, 10, 64)        0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 6400)              0         
_________________________________________________________________
dense_4 (Dense)              (None, 128)               819328    
_________________________________________________________________
batch_normalization_10 (Batc (None, 128)               512       
_________________________________________________________________
dense_5 (Dense)              (None, 128)               16512     
_________________________________________________________________
batch_normalization_11 (Batc (None, 128)               512       
_________________________________________________________________
dropout_4 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_6 (Dense)              (None, 10)                1290      
=================================================================
Total params: 931,722
Trainable params: 930,762
Non-trainable params: 960
_________________________________________________________________
None
Train on 60000 samples, validate on 10000 samples
Epoch 1/3
60000/60000 [==============================] - 18s 302us/step - loss: 0.4840 - accuracy: 0.8346 
Epoch 2/3
60000/60000 [==============================] - 17s 283us/step - loss: 0.2917 - accuracy: 0.8957 
Epoch 3/3
60000/60000 [==============================] - 17s 283us/step - loss: 0.2350 - accuracy: 0.9160 
Test loss: 0.25420061814785005
Test accuracy: 0.9068999886512756

Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_8 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
batch_normalization_12 (Batc (None, 26, 26, 32)        128       
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 24, 24, 64)        18496     
_________________________________________________________________
batch_normalization_13 (Batc (None, 24, 24, 64)        256       
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 22, 22, 64)        36928     
_________________________________________________________________
batch_normalization_14 (Batc (None, 22, 22, 64)        256       
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
batch_normalization_15 (Batc (None, 20, 20, 64)        256       
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 18, 18, 64)        36928     
_________________________________________________________________
batch_normalization_16 (Batc (None, 18, 18, 64)        256       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 9, 9, 64)          0         
_________________________________________________________________
dropout_5 (Dropout)          (None, 9, 9, 64)          0         
_________________________________________________________________
flatten_3 (Flatten)          (None, 5184)              0         
_________________________________________________________________
dense_7 (Dense)              (None, 128)               663680    
_________________________________________________________________
batch_normalization_17 (Batc (None, 128)               512       
_________________________________________________________________
dense_8 (Dense)              (None, 128)               16512     
_________________________________________________________________
batch_normalization_18 (Batc (None, 128)               512       
_________________________________________________________________
dropout_6 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_9 (Dense)              (None, 10)                1290      
=================================================================
Total params: 813,258
Trainable params: 812,170
Non-trainable params: 1,088
_________________________________________________________________
None
Train on 60000 samples, validate on 10000 samples
Epoch 1/4
60000/60000 [==============================] - 21s 356us/step - loss: 0.4951 - accuracy: 0.8265 
Epoch 2/4
60000/60000 [==============================] - 20s 333us/step - loss: 0.2987 - accuracy: 0.8938 
Epoch 3/4
60000/60000 [==============================] - 20s 333us/step - loss: 0.2439 - accuracy: 0.9130 
Epoch 4/4
60000/60000 [==============================] - 20s 332us/step - loss: 0.2110 - accuracy: 0.9254 
Test loss: 0.23349667409956454
Test accuracy: 0.9146999716758728

Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_13 (Conv2D)           (None, 26, 26, 32)        320       
_________________________________________________________________
batch_normalization_19 (Batc (None, 26, 26, 32)        128       
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 24, 24, 64)        18496     
_________________________________________________________________
batch_normalization_20 (Batc (None, 24, 24, 64)        256       
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 22, 22, 64)        36928     
_________________________________________________________________
batch_normalization_21 (Batc (None, 22, 22, 64)        256       
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
batch_normalization_22 (Batc (None, 20, 20, 64)        256       
_________________________________________________________________
conv2d_17 (Conv2D)           (None, 18, 18, 64)        36928     
_________________________________________________________________
batch_normalization_23 (Batc (None, 18, 18, 64)        256       
_________________________________________________________________
conv2d_18 (Conv2D)           (None, 16, 16, 64)        36928     
_________________________________________________________________
batch_normalization_24 (Batc (None, 16, 16, 64)        256       
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 8, 8, 64)          0         
_________________________________________________________________
dropout_7 (Dropout)          (None, 8, 8, 64)          0         
_________________________________________________________________
flatten_4 (Flatten)          (None, 4096)              0         
_________________________________________________________________
dense_10 (Dense)             (None, 128)               524416    
_________________________________________________________________
batch_normalization_25 (Batc (None, 128)               512       
_________________________________________________________________
dense_11 (Dense)             (None, 128)               16512     
_________________________________________________________________
batch_normalization_26 (Batc (None, 128)               512       
_________________________________________________________________
dropout_8 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_12 (Dense)             (None, 10)                1290      
=================================================================
Total params: 711,178
Trainable params: 709,962
Non-trainable params: 1,216
_________________________________________________________________
None
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 24s 397us/step - loss: 0.5309 - accuracy: 0.8140 
Epoch 2/5
60000/60000 [==============================] - 22s 366us/step - loss: 0.3210 - accuracy: 0.8840 
Epoch 3/5
60000/60000 [==============================] - 22s 369us/step - loss: 0.2628 - accuracy: 0.9061 
Epoch 4/5
60000/60000 [==============================] - 22s 367us/step - loss: 0.2315 - accuracy: 0.9168 
Epoch 5/5
60000/60000 [==============================] - 22s 367us/step - loss: 0.2044 - accuracy: 0.9269 
Test loss: 0.2832299417465925
Test accuracy: 0.9032999873161316

Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_19 (Conv2D)           (None, 26, 26, 32)        320       
_________________________________________________________________
batch_normalization_27 (Batc (None, 26, 26, 32)        128       
_________________________________________________________________
conv2d_20 (Conv2D)           (None, 24, 24, 64)        18496     
_________________________________________________________________
batch_normalization_28 (Batc (None, 24, 24, 64)        256       
_________________________________________________________________
conv2d_21 (Conv2D)           (None, 22, 22, 64)        36928     
_________________________________________________________________
batch_normalization_29 (Batc (None, 22, 22, 64)        256       
_________________________________________________________________
conv2d_22 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
batch_normalization_30 (Batc (None, 20, 20, 64)        256       
_________________________________________________________________
conv2d_23 (Conv2D)           (None, 18, 18, 64)        36928     
_________________________________________________________________
batch_normalization_31 (Batc (None, 18, 18, 64)        256       
_________________________________________________________________
conv2d_24 (Conv2D)           (None, 16, 16, 64)        36928     
_________________________________________________________________
batch_normalization_32 (Batc (None, 16, 16, 64)        256       
_________________________________________________________________
conv2d_25 (Conv2D)           (None, 14, 14, 64)        36928     
_________________________________________________________________
batch_normalization_33 (Batc (None, 14, 14, 64)        256       
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
dropout_9 (Dropout)          (None, 7, 7, 64)          0         
_________________________________________________________________
flatten_5 (Flatten)          (None, 3136)              0         
_________________________________________________________________
dense_13 (Dense)             (None, 128)               401536    
_________________________________________________________________
batch_normalization_34 (Batc (None, 128)               512       
_________________________________________________________________
dense_14 (Dense)             (None, 128)               16512     
_________________________________________________________________
batch_normalization_35 (Batc (None, 128)               512       
_________________________________________________________________
dropout_10 (Dropout)         (None, 128)               0         
_________________________________________________________________
dense_15 (Dense)             (None, 10)                1290      
=================================================================
Total params: 625,482
Trainable params: 624,138
Non-trainable params: 1,344
_________________________________________________________________
None
Train on 60000 samples, validate on 10000 samples
Epoch 1/6
60000/60000 [==============================] - 26s 426us/step - loss: 0.5690 - accuracy: 0.8003 
Epoch 2/6
60000/60000 [==============================] - 24s 396us/step - loss: 0.3357 - accuracy: 0.8806 
Epoch 3/6
60000/60000 [==============================] - 24s 397us/step - loss: 0.2792 - accuracy: 0.9015 
Epoch 4/6
60000/60000 [==============================] - 24s 399us/step - loss: 0.2448 - accuracy: 0.9125 
Epoch 5/6
60000/60000 [==============================] - 24s 398us/step - loss: 0.2178 - accuracy: 0.9214 
Epoch 6/6
60000/60000 [==============================] - 24s 398us/step - loss: 0.1957 - accuracy: 0.9304 
Test loss: 0.26208001482188703
Test accuracy: 0.9110000133514404

Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_26 (Conv2D)           (None, 26, 26, 32)        320       
_________________________________________________________________
batch_normalization_36 (Batc (None, 26, 26, 32)        128       
_________________________________________________________________
conv2d_27 (Conv2D)           (None, 24, 24, 64)        18496     
_________________________________________________________________
batch_normalization_37 (Batc (None, 24, 24, 64)        256       
_________________________________________________________________
conv2d_28 (Conv2D)           (None, 22, 22, 64)        36928     
_________________________________________________________________
batch_normalization_38 (Batc (None, 22, 22, 64)        256       
_________________________________________________________________
conv2d_29 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
batch_normalization_39 (Batc (None, 20, 20, 64)        256       
_________________________________________________________________
conv2d_30 (Conv2D)           (None, 18, 18, 64)        36928     
_________________________________________________________________
batch_normalization_40 (Batc (None, 18, 18, 64)        256       
_________________________________________________________________
conv2d_31 (Conv2D)           (None, 16, 16, 64)        36928     
_________________________________________________________________
batch_normalization_41 (Batc (None, 16, 16, 64)        256       
_________________________________________________________________
conv2d_32 (Conv2D)           (None, 14, 14, 64)        36928     
_________________________________________________________________
batch_normalization_42 (Batc (None, 14, 14, 64)        256       
_________________________________________________________________
conv2d_33 (Conv2D)           (None, 12, 12, 64)        36928     
_________________________________________________________________
batch_normalization_43 (Batc (None, 12, 12, 64)        256       
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 6, 6, 64)          0         
_________________________________________________________________
dropout_11 (Dropout)         (None, 6, 6, 64)          0         
_________________________________________________________________
flatten_6 (Flatten)          (None, 2304)              0         
_________________________________________________________________
dense_16 (Dense)             (None, 128)               295040    
_________________________________________________________________
batch_normalization_44 (Batc (None, 128)               512       
_________________________________________________________________
dense_17 (Dense)             (None, 128)               16512     
_________________________________________________________________
batch_normalization_45 (Batc (None, 128)               512       
_________________________________________________________________
dropout_12 (Dropout)         (None, 128)               0         
_________________________________________________________________
dense_18 (Dense)             (None, 10)                1290      
=================================================================
Total params: 556,170
Trainable params: 554,698
Non-trainable params: 1,472
_________________________________________________________________
None
Train on 60000 samples, validate on 10000 samples
Epoch 1/7
60000/60000 [==============================] - 28s 459us/step - loss: 0.6092 - accuracy: 0.7853 
Epoch 2/7
60000/60000 [==============================] - 26s 428us/step - loss: 0.3550 - accuracy: 0.8737 
Epoch 3/7
60000/60000 [==============================] - 26s 431us/step - loss: 0.2882 - accuracy: 0.8967 
Epoch 4/7
60000/60000 [==============================] - 26s 429us/step - loss: 0.2525 - accuracy: 0.9092 
Epoch 5/7
60000/60000 [==============================] - 26s 430us/step - loss: 0.2246 - accuracy: 0.9201 
Epoch 6/7
60000/60000 [==============================] - 26s 429us/step - loss: 0.2066 - accuracy: 0.9268 
Epoch 7/7
60000/60000 [==============================] - 26s 428us/step - loss: 0.1891 - accuracy: 0.9320 
Test loss: 0.23520663108825685
Test accuracy: 0.9193000197410583

>>> Here you can see that as long as our model does not give the desired accuracy, it creates a new model again, each one having extra layers compared to the previous one.

>>> In this model I also create a log file named result.txt, so I can maintain logs of the model runs: the accuracy they reached and the number of epochs they used.

Let me show you my log file >>>>
...............................................................................
No of epochs used:2
 Accuracy of model : 0.8996999859809875


Again train the model with add some more extra layers
 No of epochs used:3
 Accuracy of model : 0.9120000004768372


Again train the model with add some more extra layers
 No of epochs used:4
 Accuracy of model : 0.909500002861023


Again train the model with add some more extra layers
 No of epochs used:5
 Accuracy of model : 0.9101999998092651


Again train the model with add some more extra layers
 No of epochs used:6
 Accuracy of model : 0.914900004863739


Again train the model with add some more extra layers
 No of epochs used:7
 Accuracy of model : 0.9157999753952026
.............................................................................................................

>>> Here I also create one more Jenkins job, so it constantly monitors my container/environment. If it finds the container not running, it starts it again.

# Jenkins is not really meant for monitoring, and it is not good at it. For monitoring, we have other tools available like Prometheus and Grafana. But today I am showing everything with Jenkins only!!!

For monitoring here, I use the Poll SCM build trigger.
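
The Poll SCM schedule uses cron syntax. A sketch of what I mean (this one checks every minute; tune the schedule to your needs):

code:
# Jenkins >> job3 >> Configure >> Build Triggers >> Poll SCM >> Schedule
* * * * *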




Here is the code:

cd /root/mlops/
# If the trained model file already exists, training finished successfully
if sudo ls | grep mymodel.h5
then
    echo "Model trained successfully"
# If the container is still running, training is in progress
elif sudo docker ps | grep keras_env
then
    echo "container running"
else
    # Container stopped or missing: remove any stale one and restart training
    if sudo docker ps -a | grep keras_env
    then
        sudo docker rm -f keras_env
    fi
    sudo docker run -t -v /root/mlops:/usr/src/myapp -w /usr/src/myapp --name keras_env project:v1 python mymodel.py
fi

That's all!

Here, the only important thing you should consider is the tweak: how I tweak, or change, the hyperparameters automatically.

I will come up with a new task soon, so stay tuned...

Thank you for reading...
