8/31/2023

Mongodb generate test data

Manually generating fake data takes time and slows down the testing process, since it is hard to come up with large amounts of new data by hand. Fake data is also valuable because it keeps real identities out of test databases, especially for fields such as identification numbers, full names, and dates of birth. During system development and testing, the employment of fake data can therefore be very useful.

This tutorial focuses on preparing tf.data.Datasets by reading data from MongoDB collections and using them to train a tf.keras model.

Note: A basic understanding of MongoDB storage will help you in following the tutorial with ease. This tutorial uses pymongo as a helper package to create a new MongoDB database and collection in which to store the data.

Install the required tensorflow-io and mongodb (helper) packages:

```shell
pip install -q tensorflow-io
pip install -q pymongo
```

Import packages:

```python
import os
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import tensorflow_io as tfio
```

Validate the tf and tfio imports:

```python
print("tensorflow-io version: {}".format(tfio.__version__))
```

After storing the data with pymongo, a successful connection is reported as:

Connection successful: mongodb://localhost:27017

Prepare the tf.data.Datasets by reading from the collections (the training dataset is created the same way from the training collection), then separate the features from the "target" label:

```python
test_ds = tfio.experimental.mongodb.MongoDBIODataset(
    uri=URI, database=DATABASE, collection=TEST_COLLECTION
)

train_ds = train_ds.map(lambda v: (v, v.pop("target")))
test_ds = test_ds.map(lambda v: (v, v.pop("target")))
```

As per the structured data tutorial, it is recommended to use the Keras Preprocessing Layers, as they are more intuitive and can be easily integrated with the models. However, the standard feature_columns can also be used. For a better understanding of the preprocessing layers in classifying structured data, please refer to the structured data tutorial.

```python
def get_normalization_layer(name, dataset):
    # Create a Normalization layer for our feature.
    normalizer = preprocessing.Normalization(axis=None)
    # Prepare a Dataset that only yields our feature.
    feature_ds = dataset.map(lambda x, y: x[name])
    # Learn the statistics of the data.
    normalizer.adapt(feature_ds)
    return normalizer
```

Encode the numeric features:

```python
all_inputs = []
encoded_features = []
# HEADER_COLS holds the names of the numeric fields in the collection.
for header in HEADER_COLS:
    numeric_col = tf.keras.Input(shape=(1,), name=header)
    normalization_layer = get_normalization_layer(header, train_ds)
    encoded_numeric_col = normalization_layer(numeric_col)
    all_inputs.append(numeric_col)
    encoded_features.append(encoded_numeric_col)
```

Build, compile and train the model:

```python
# Convert the feature columns into a tf.keras layer
all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)

# Set the parameters, then compile and train
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds)
```

Infer on the test data:

```python
res = model.evaluate(test_ds)
```

109/109 - 0s 2ms/step - loss: 0.5696 - accuracy: 0.7383

Note: Since the goal of this tutorial is to demonstrate tensorflow-io's capability to prepare tf.data.Datasets from MongoDB and train tf.keras models directly, improving the accuracy of the models is out of the current scope.
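To seed the test collection in the first place, fake records can be generated programmatically, as the introduction suggests. Below is a minimal sketch; the field names (id_number, full_name, date_of_birth, target) and the name pools are illustrative assumptions, not the tutorial's actual schema.

```python
import random
import string
from datetime import date, timedelta


def make_fake_records(n, seed=42):
    """Generate n fake 'person' documents for seeding a test collection.

    NOTE: the schema here is a hypothetical example, not the one used
    in the tutorial above.
    """
    rng = random.Random(seed)  # fixed seed keeps test data reproducible
    first_names = ["Alice", "Bob", "Carol", "Dan", "Eve", "Frank"]
    last_names = ["Smith", "Jones", "Brown", "Davis", "Miller"]
    records = []
    for _ in range(n):
        # Random date of birth between 1960-01-01 and ~2000
        dob = date(1960, 1, 1) + timedelta(days=rng.randrange(0, 40 * 365))
        records.append({
            "id_number": "".join(rng.choices(string.digits, k=9)),
            "full_name": f"{rng.choice(first_names)} {rng.choice(last_names)}",
            "date_of_birth": dob.isoformat(),
            "target": rng.randint(0, 1),  # binary label for the model
        })
    return records


records = make_fake_records(1000)
```

With pymongo, something like `client["example_db"]["test_collection"].insert_many(records)` would then store these documents (the database and collection names are placeholders) so that the dataset pipeline above can read them back.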
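The `dataset.map(lambda v: (v, v.pop("target")))` step used on both datasets in the tutorial can be puzzling at first. A plain-Python sketch of the same idea (the feature names "age" and "fare" are made up for illustration): `pop` removes the label key from the feature dict and returns its value, yielding a (features, label) pair.

```python
# A record as a plain dict; field names are hypothetical examples.
record = {"age": 42.0, "fare": 7.25, "target": 1}


def split_label(v):
    # pop() removes "target" from v and returns its value,
    # so v keeps only the feature fields afterwards.
    label = v.pop("target")
    return v, label


features, label = split_label(dict(record))
# features == {"age": 42.0, "fare": 7.25}, label == 1
```

In the tutorial the same transformation runs inside tf.data's `map`, producing the (features, label) structure that `model.fit` and `model.evaluate` expect.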