So-called machine learning is producing all kinds of results these days, and people find it very useful. Given what it actually is, though, purely statistical analysis of uninterpreted data, calling it learning by the machine is quite misleading. Merely executing numerical computations and yielding results, no matter how advanced the computations or how useful the results, is not normally considered learning, and does not establish intelligence.
There could be a more interesting argument over whether software implementations of formal reasoning represent some form of artificial intelligence. The problem is that purely formal reasoning cannot yield real-world conclusions, so the appearance of intelligence is rather limited.
AI researchers recognized this early on, and came up with the idea of complementing computational reasoning with some sort of “knowledge representation”. Then, in a fascinating turn within that specialized research domain, the pendulum swung back, and a broad consensus emerged that “knowledge representation” was best approached as intimately interdependent with reasoning.