tensorflow: Errors in TensorFlow's introductory examples
I am following along from this point: https://www.tensorflow.org/get_started/get_started#basic_usage It is a linear regression example.
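For context, the tf.contrib.learn portion of that tutorial looks roughly like the following. This is a reconstruction from the TF 1.0 getting-started guide, so treat the exact values as approximate rather than authoritative:

```python
import numpy as np
import tensorflow as tf

# One real-valued input feature named "x".
features = [tf.contrib.layers.real_valued_column("x", dimension=1)]

# LinearRegressor wraps the training loop, checkpointing, and evaluation.
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)

# Four training examples lying on the line y = -x + 1.
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x}, y, batch_size=4,
                                              num_epochs=1000)

# Run 1000 training steps, then report the loss on the same data.
estimator.fit(input_fn=input_fn, steps=1000)
print(estimator.evaluate(input_fn=input_fn))
```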
I have approached this execution from two angles:
Angle 1: Running python and copying one line at a time.
When I run this line:

estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)
I get this issue:

WARNING:tensorflow:Using temporary folder as model directory: C:\Users\elavi\AppData\Local\Temp\tmprirdvmnn
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_tf_config': gpu_options { per_process_gpu_memory_fraction: 1 } , '_task_type': None, '_save_checkpoints_steps': None, '_master': '', '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x0000021D2049C390>, '_task_id': 0, '_save_checkpoints_secs': 600, '_save_summary_steps': 100, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_evaluation_master': '', '_tf_random_seed': None, '_environment': 'local', '_num_ps_replicas': 0}
Angle 2: It may be more helpful to see what happens when I run it as a whole script. The output is:
WARNING:tensorflow:Using temporary folder as model directory: C:\Users\elavi\AppData\Local\Temp\tmp41huenz5
WARNING:tensorflow:Rank of input Tensor (1) should be the same as output_rank (2) for column. Will attempt to expand dims. It is highly recommended that you resize your input, as this behavior may change.
WARNING:tensorflow:From C:\Users\elavi\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\head.py:521: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30. Instructions for updating: Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.
2017-02-24 00:02:36.386704: W c:\tf_jenkins\home\workspace\nightly-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2017-02-24 00:02:36.388071: W c:\tf_jenkins\home\workspace\nightly-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
2017-02-24 00:02:36.389235: W c:\tf_jenkins\home\workspace\nightly-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
2017-02-24 00:02:36.389290: W c:\tf_jenkins\home\workspace\nightly-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-02-24 00:02:36.390010: W c:\tf_jenkins\home\workspace\nightly-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-02-24 00:02:36.390894: W c:\tf_jenkins\home\workspace\nightly-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-02-24 00:02:36.391659: W c:\tf_jenkins\home\workspace\nightly-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-02-24 00:02:36.392248: W c:\tf_jenkins\home\workspace\nightly-win\device\cpu\os\windows\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
WARNING:tensorflow:Rank of input Tensor (1) should be the same as output_rank (2) for column. Will attempt to expand dims. It is highly recommended that you resize your input, as this behavior may change.
WARNING:tensorflow:From C:\Users\elavi\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\head.py:521: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30. Instructions for updating: Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.
WARNING:tensorflow:Skipping summary for global_step, must be a float or np.float32.
I am interested in what is going on but I really need things simplified. A list of steps to fix this would be great. Also, if there are any better ways I could pose my problems in the future, please let me know.
What related GitHub issues or StackOverflow threads have you found by searching the web for your problem?
Many. The issue is that I am very new and not well versed in the required terminology, so my ability to understand the solutions is limited. I have seen this problem posed and closed, but I just can’t follow along.
Environment info
Operating System: Windows 10
Installed version of CUDA and cuDNN: I have the latest version but I probably shouldn’t because I do not have a dedicated graphics card. (I am doing initial development on Microsoft Surface Pro 4)
TF Version 1.0.0-rc2
About this issue
- State: closed
- Created 7 years ago
- Comments: 16 (5 by maintainers)
I really don’t like the perspective that warnings are OK, especially when they are triggered by official example code from the website. Where is the sense of quality and soundness?
You have to set the model_dir, e.g. if you want to store the outputs in the ./output folder:

estimator = tf.contrib.learn.LinearRegressor(feature_columns=features, model_dir='./output')

Same thing if you follow the next example (Custom model):

estimator = tf.contrib.learn.Estimator(model_fn=model, model_dir='./output')

Then you can use TensorBoard to show the result:
> tensorboard --logdir=output

All I see in the output you posted are warnings, not errors. Those warnings point out issues you ought to be aware of, but should not prevent a correct evaluation. If you ignore those warnings, does the example not work?
Tutorial code should always work and give exactly the same result. Not to mention that a “warning” is, by definition, something I should not ignore, and warnings should not appear in tutorial code. If someone thinks the opposite way, sorry, but it makes a bad image for the project; some people are not capable of human communication even if they are good at other things. For me it sends the message “I will have more problems in the future,” so before investing money and time in this, I will try another solution. Sorry, but it is always hard at the start, and when the creators do not care about clean, working code, it gets much harder.
It might be helpful to take a look at an overview of TensorFlow logging: https://www.tensorflow.org/get_started/monitors

There are multiple levels of logging available, and their labels are somewhat arbitrary. ERROR is supposed to be used for things which are really errors, i.e. a value has been encountered somewhere which is outside of the legal range and correct computation cannot continue, but maybe we can unwind up the stack and retry something. FATAL is even more severe; it means the program must terminate now. There are multiple gentler logging levels for informational or debugging messages. WARNING is intermediate, for something that isn’t definitely an error but you ought to know about. (WARNING also has the sometimes useful property that it flushes to the log file immediately, so it should be there if a hard program failure occurs soon afterward.)

According to the documentation I cited, the default logging level is WARN, which means that logging messages of lesser severity will likely not display to novices, and if one of the TF developers wants to make sure you see a message, it needs to be a WARNING. So, some of these WARNINGs are just messages that someone hopes will help users self-diagnose a non-optimal situation without needing to post issues here.
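If the goal is simply a quieter console while working through the tutorial, here is a minimal sketch for the TF 1.x line. It relies on the TF_CPP_MIN_LOG_LEVEL environment variable (which filters the C++-side startup messages) and tf.logging.set_verbosity (which filters the Python-side log); both should exist in this release, but verify against your installed version:

```python
import os

# Filter C++-side startup messages such as the SSE/AVX notices.
# '1' hides INFO, '2' also hides WARNING. Must be set before importing
# tensorflow, because the native library reads it at load time.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf

# Raise the Python-side threshold so only ERROR and above are printed.
tf.logging.set_verbosity(tf.logging.ERROR)
```

Note that this hides the deprecation warnings too, so it is best done only after you have read them once.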
Your last three lines of output show the loss reaching about 1e-8.
So it did work. The loss became effectively zero (1e-8 is about zero for the purposes of this discussion). The loss differs between runs because the examples are ordered randomly and the weights are initialized randomly, which means the exact numeric values are not deterministic (this is inherent in stochastic gradient descent learning approaches).
TensorFlow is verbose on startup so it is hard to see the result, but it is there.
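If the run-to-run variation itself is bothersome, one way to pin it down is to fix the seed and disable shuffling. A sketch, assuming tf.contrib.learn.RunConfig accepts a tf_random_seed argument (the '_tf_random_seed' field is visible in the config dump at the top of this issue) and that numpy_input_fn exposes a shuffle flag in this release:

```python
import numpy as np
import tensorflow as tf

features = [tf.contrib.layers.real_valued_column("x", dimension=1)]
x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])

# Fix the graph-level random seed so weight initialization is repeatable.
config = tf.contrib.learn.RunConfig(tf_random_seed=42)
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features,
                                             config=config)

# shuffle=False removes the random example ordering mentioned above.
input_fn = tf.contrib.learn.io.numpy_input_fn(
    {"x": x}, y, batch_size=4, num_epochs=1000, shuffle=False)

estimator.fit(input_fn=input_fn, steps=1000)
print(estimator.evaluate(input_fn=input_fn))
```

With a fixed seed and no shuffling, repeated runs should produce the same loss to within floating-point noise.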