
TensorFlow Notes (2)

毛显新 · Published 2 hours ago

When training with methods such as tf.GradientTape(), use tf.summary to log the required information.
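The core pattern behind all of the logging below is small: create a file writer, open its scope with as_default(), and call a tf.summary op with an explicit step. A minimal sketch, assuming a placeholder log path and tag name (neither is from the original post):

writer = tf.summary.create_file_writer('logs/example')
with writer.as_default():
    # Every tf.summary op called inside this scope is routed to `writer`.
    tf.summary.scalar('my_metric', 0.5, step=1)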


Use the same dataset as before, but convert it to a tf.data.Dataset to take advantage of its batching capabilities:


import tensorflow as tf
import datetime

print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))

physical_devices = tf.config.list_physical_devices('GPU')
try:
  # Disable first GPU
  tf.config.set_visible_devices(physical_devices[1:], 'GPU')
  logical_devices = tf.config.list_logical_devices('GPU')
  # Logical device was not created for first GPU
  assert len(logical_devices) == len(physical_devices) - 1
except:
  # Invalid device or cannot modify virtual devices once initialized.
  pass

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

AUTOTUNE = tf.data.AUTOTUNE
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))

train_dataset = train_dataset.shuffle(60000).batch(64)
test_dataset = test_dataset.batch(64)
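The training loop further down calls create_model(), which this post does not define (it comes from the earlier part of the series). A minimal sketch that follows the quickstart's MNIST classifier, with layer sizes that are an assumption rather than taken from the original:

def create_model():
  # Simple MNIST classifier: flatten 28x28 images, one hidden layer, softmax output.
  return tf.keras.models.Sequential([
      tf.keras.layers.Flatten(input_shape=(28, 28)),
      tf.keras.layers.Dense(512, activation='relu'),
      tf.keras.layers.Dropout(0.2),
      tf.keras.layers.Dense(10, activation='softmax')
  ])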


The training code follows the advanced quickstart tutorial, but shows how to log metrics to TensorBoard. Choose a loss function and an optimizer:


loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()


Create stateful metrics that accumulate values during training and can be logged at any point:


# Define our metrics
train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy')
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy')
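To see what "stateful" means in practice, note that a Mean metric keeps a running accumulator across calls until it is explicitly reset. A toy demonstration (this snippet is an addition, not part of the original post):

m = tf.keras.metrics.Mean('demo', dtype=tf.float32)
m(2.0)
m(4.0)
print(m.result().numpy())  # 3.0 -- mean of every value seen so far
m.reset_states()
print(m.result().numpy())  # 0.0 -- accumulator cleared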


Define the training and test functions:


def train_step(model, optimizer, x_train, y_train):
  with tf.GradientTape() as tape:
    predictions = model(x_train, training=True)
    loss = loss_object(y_train, predictions)
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))

  train_loss(loss)
  train_accuracy(y_train, predictions)

def test_step(model, x_test, y_test):
  predictions = model(x_test)
  loss = loss_object(y_test, predictions)

  test_loss(loss)
  test_accuracy(y_test, predictions)
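Because these step functions are plain Python, each batch executes eagerly. An optional variation (not in the original post) is to compile them into graphs with tf.function for speed; the wrapped names below are hypothetical:

# Graph-compiled variants of the same step functions.
train_step_graph = tf.function(train_step)
test_step_graph = tf.function(test_step)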


Set up summary writers to write the summaries to disk, each in a different logs directory:


current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_log_dir = 'logs/gradient_tape/' + current_time + '/train'
test_log_dir = 'logs/gradient_tape/' + current_time + '/test'
train_summary_writer = tf.summary.create_file_writer(train_log_dir)
test_summary_writer = tf.summary.create_file_writer(test_log_dir)


Start training. Use tf.summary.scalar() to log metrics (loss and accuracy) during training/testing within the scope of the summary writers to write the summaries to disk. You have control over which metrics to log and how often to do it. Other tf.summary functions enable logging other types of data.


model = create_model()  # reset our model

EPOCHS = 5

for epoch in range(EPOCHS):
  for (x_train, y_train) in train_dataset:
    train_step(model, optimizer, x_train, y_train)
  with train_summary_writer.as_default():
    tf.summary.scalar('loss', train_loss.result(), step=epoch)
    tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch)

  for (x_test, y_test) in test_dataset:
    test_step(model, x_test, y_test)
  with test_summary_writer.as_default():
    tf.summary.scalar('loss', test_loss.result(), step=epoch)
    tf.summary.scalar('accuracy', test_accuracy.result(), step=epoch)

  template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
  print(template.format(epoch + 1,
                        train_loss.result(),
                        train_accuracy.result() * 100,
                        test_loss.result(),
                        test_accuracy.result() * 100))

  # Reset metrics every epoch
  train_loss.reset_states()
  test_loss.reset_states()
  train_accuracy.reset_states()
  test_accuracy.reset_states()
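Once a few epochs have been logged, you can point TensorBoard at the parent log directory so the train and test writers appear as separate runs. In a notebook this would look like the following (assuming the TensorBoard notebook extension is available):

%load_ext tensorboard
%tensorboard --logdir logs/gradient_tape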


TensorBoard.dev is a free public service that lets you upload your TensorBoard logs and get a permalink that can be shared with anyone in academic papers, blog posts, social media, etc. This enables better reproducibility and collaboration.


To use TensorBoard.dev, run the following command (adjust --logdir to point at the directory you actually logged to, e.g. logs/gradient_tape):


!tensorboard dev upload \
  --logdir logs/fit \
  --name "(optional) My latest experiment" \
  --description "(optional) Simple comparison of several hyperparameters" \
  --one_shot
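The dev subcommand also lets you manage earlier uploads, for example listing your experiments and deleting one by ID (EXPERIMENT_ID below is a placeholder to fill in):

!tensorboard dev list
!tensorboard dev delete --experiment_id EXPERIMENT_ID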

