If you want a Huber loss with a configurable threshold, you can write a function that creates and returns a properly configured loss function:

def create_huber(threshold=1.0):
    def huber_fn(y_true, y_pred):
        error = y_true - y_pred
        is_small_error = tf.abs(error) < threshold
        squared_loss = tf.square(error) / 2
        linear_loss = threshold * tf.abs(error) - threshold**2 / 2
        return tf.where(is_small_error, squared_loss, linear_loss)
    return huber_fn

model.compile(loss=create_huber(2.0), optimizer="nadam")
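To sanity-check the factory's behavior in the two regimes (quadratic for small errors, linear for large ones), here is a minimal NumPy sketch of the same formula; it is an illustrative re-implementation, not the TensorFlow code itself, so it runs without a TensorFlow install:

```python
import numpy as np

def create_huber(threshold=1.0):
    """NumPy sketch of the Huber loss factory (same math as the tf version)."""
    def huber_fn(y_true, y_pred):
        error = y_true - y_pred
        is_small_error = np.abs(error) < threshold
        squared_loss = np.square(error) / 2                          # quadratic branch
        linear_loss = threshold * np.abs(error) - threshold**2 / 2   # linear branch
        return np.where(is_small_error, squared_loss, linear_loss)
    return huber_fn

huber = create_huber(2.0)
print(huber(np.array([0.0]), np.array([1.0])))  # error 1 < 2: quadratic, 1**2/2 = 0.5
print(huber(np.array([0.0]), np.array([5.0])))  # error 5 >= 2: linear, 2*5 - 2**2/2 = 8.0
```

Note that the two branches meet smoothly at |error| = threshold, which is why the linear part subtracts threshold**2 / 2.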