On the implications of this information bottleneck:
a. If the information can be squeezed, compressed, or represented more tersely, then the amount of "storage" needed to keep it can be minimized.
e. The correlation between entropy and learning rate could imply that higher-entropy data is also correlated with faster learning.
f. Point (e) goes against the conventional wisdom that random data, being totally random, carries much less structured information, and is thus more difficult to extract patterns from, or to LEARN anything from.
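The link between entropy and compressibility in point (a) can be demonstrated directly: a general-purpose compressor like zlib finds little redundancy in high-entropy data but squeezes low-entropy data dramatically. A minimal sketch (the data samples and thresholds are illustrative choices, not from the original post):

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (max 8)."""
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; smaller means more compressible."""
    return len(zlib.compress(data)) / len(data)

random.seed(0)
# Near-maximum-entropy data: uniformly random bytes.
high_entropy = bytes(random.randrange(256) for _ in range(10_000))
# Low-entropy data: a short pattern repeated many times.
low_entropy = b"abab" * 2_500

print(f"random:   H = {shannon_entropy(high_entropy):.2f} bits/byte, "
      f"ratio = {compression_ratio(high_entropy):.2f}")
print(f"repeated: H = {shannon_entropy(low_entropy):.2f} bits/byte, "
      f"ratio = {compression_ratio(low_entropy):.2f}")
```

The random stream stays close to 8 bits/byte and barely compresses, while the repetitive stream compresses to a small fraction of its size, which is exactly the "storage can be minimized" intuition above.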