Machine learning models built to catch malware on Windows systems are typically evaluated on data that closely resembles their training set. In practice, the malware arriving on enterprise endpoints looks different, comes from different sources, and in many cases has been deliberately obfuscated to evade detection. A study from researchers at the Polytechnic of Porto tests what happens when that gap is made explicit, and the results have direct implications for organizations relying on static …

Source: "Malware detectors trained on one dataset often stumble on another," Help Net Security.
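The gap the study probes can be illustrated with a cross-dataset evaluation: train a classifier on one feature distribution, then score it both on held-out data from the same distribution and on data whose malware class has drifted. This is a minimal synthetic sketch, not the study's actual datasets or features; the feature vectors, the drift amount, and the model choice (a scikit-learn random forest) are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_dataset(n, shift=0.0):
    # Synthetic stand-in for static PE features: benign samples cluster
    # near 0, malware near 2. `shift` moves the malware class toward the
    # benign cluster, mimicking obfuscation / distribution drift.
    benign = rng.normal(0.0, 1.0, size=(n, 8))
    malware = rng.normal(2.0 + shift, 1.0, size=(n, 8))
    X = np.vstack([benign, malware])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_dataset(500)          # "dataset A" (training)
X_in, y_in = make_dataset(500)                # held-out, A-like data
X_out, y_out = make_dataset(500, shift=-1.5)  # "dataset B": drifted malware

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

acc_in = accuracy_score(y_in, clf.predict(X_in))
acc_out = accuracy_score(y_out, clf.predict(X_out))
print(f"in-distribution accuracy: {acc_in:.2f}")
print(f"cross-dataset accuracy:   {acc_out:.2f}")
```

The point of the sketch is the evaluation protocol, not the model: in-distribution accuracy looks excellent, while the same model scores much worse on the drifted set, which is the kind of optimistic gap a single-dataset benchmark hides.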