"Neural networks take in binary digits and output fits (fuzzy bits) which is a number between 0 and 1 but never absolute (e.g. 0.4323, 0.9, 0.1). For this reason, to make use of the output, we have to round off the fits to form bits (binary units)."
Most neural network architectures accept as inputs, and produce as outputs, real-valued numbers (and other data which can be represented using reals, such as dummy variables). I'm not sure what you're referring to as "fuzzy bits", since neural network outputs can take a variety of ranges, not just between 0.0 and 1.0.
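To see why, note that the output range is determined by the final activation function, not by anything intrinsic to neural networks. A quick sketch (the pre-activation values here are made up for illustration):

```python
import numpy as np

# Hypothetical raw pre-activation values from a network's last layer.
z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])

# The output range depends on the chosen output activation:
sigmoid_out = 1.0 / (1.0 + np.exp(-z))  # squashed into (0, 1)
tanh_out = np.tanh(z)                   # squashed into (-1, 1)
linear_out = z                          # unbounded, common in regression

print(sigmoid_out)  # all strictly between 0 and 1
print(tanh_out)     # all strictly between -1 and 1
print(linear_out)   # same range as the inputs
```

Only the sigmoid case even resembles the (0, 1) behavior described in the question; a linear output unit is perfectly ordinary and bounded by nothing.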
"Because neural networks utilize fuzzy logic, the standard system architecture is slightly different."
Most neural networks do not use conventional fuzzy calculus elements (fuzzy sets, fuzzy membership functions, fuzzy inference, hedges, etc.).
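The distinction is easy to see side by side. Below is an illustrative contrast (not from the question): a fuzzy triangular membership function, which assigns a degree of set membership, versus a sigmoid neuron activation, which merely squashes a weighted sum. Both map into [0, 1], but they play entirely different roles.

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Fuzzy-set membership grade: ramps from 0 at a, peaks at 1 at b,
    returns to 0 at c. Interpreted as 'degree of membership'."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def sigmoid(x):
    """Typical neuron activation: a squashing function applied to a
    weighted sum, with no fuzzy-set interpretation attached."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-3.0, 3.0, 7)
print(triangular_membership(x, -1.0, 0.0, 1.0))
print(sigmoid(x))
```

A network's sigmoid output is just a number; nothing in ordinary backpropagation training treats it as a membership grade or feeds it through fuzzy inference rules. (Hybrid neuro-fuzzy systems exist, but they are the exception, not the standard architecture.)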
"A reasonable threshold would be anything greater than 0.8 should be 1. Anything lower than 0.2 should be 0. Anything in the middle means the network is not smart enough and requires more training."
Many useful neural networks produce a majority of output values between 0.2 and 0.8. That may be all the differentiation the data permits, and it is still very useful in many applications. Further, examining the distribution of the network's outputs is a poor way to judge how well trained it is; that is much better assessed by measuring the network's performance on a validation set.
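Here is a small synthetic sketch of the point (all the numbers and the simulated outputs are invented for illustration): a classifier whose outputs almost never clear the 0.8/0.2 "confidence" thresholds can still score near-perfect validation accuracy at the usual 0.5 cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation labels and simulated network outputs.
y_val = rng.integers(0, 2, size=200)
# Outputs cluster around 0.3 and 0.7 -- mostly inside the (0.2, 0.8) band --
# yet the two classes are well separated.
p_val = np.clip(0.5 + 0.2 * (2 * y_val - 1)
                + rng.normal(0.0, 0.05, size=200), 0.0, 1.0)

# Naive check from the question: how often does the output clear 0.8 or 0.2?
frac_confident = np.mean((p_val > 0.8) | (p_val < 0.2))

# Proper check: accuracy on the validation set at a 0.5 decision threshold.
accuracy = np.mean((p_val >= 0.5) == (y_val == 1))

print(f"fraction clearing 0.8/0.2 thresholds: {frac_confident:.2f}")
print(f"validation accuracy at 0.5 cutoff:    {accuracy:.2f}")
```

By the question's rule this network would be dismissed as "not smart enough", yet the validation set shows it separates the classes almost perfectly.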
I suggest the following as a starting place for artificial neural networks:
faqs.org/faqs/ai-faq/neural-nets/pa
...