<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Make Data Speak]]></title><description><![CDATA[Make Data Speak]]></description><link>https://amneumarkt.leima.is</link><generator>RSS for Node</generator><lastBuildDate>Fri, 01 May 2026 10:15:18 GMT</lastBuildDate><atom:link href="https://amneumarkt.leima.is/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Training GANs can be baffling]]></title><description><![CDATA[Training GANs can be baffling.
For example, the generator and the discriminator sometimes simply don't "learn" at the same pace. Should you try to balance the generator loss and the discriminator loss by hand?
Soumith Chintala (@FAIR) put together this...]]></description><link>https://amneumarkt.leima.is/training-gan-can-be-baffling</link><guid isPermaLink="true">https://amneumarkt.leima.is/training-gan-can-be-baffling</guid><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[L Ma]]></dc:creator><pubDate>Thu, 19 Aug 2021 08:30:55 GMT</pubDate><content:encoded><![CDATA[<p>Training GANs can be baffling.
For example, the generator and the discriminator sometimes simply don't "learn" at the same pace. Should you try to balance the generator loss and the discriminator loss by hand?
Soumith Chintala (@FAIR) put together this list of tips for training GANs. "Don't balance loss via statistics" is one of Chintala's 17 tips. The list is quite inspiring.</p>
<p><a href="https://github.com/soumith/ganhacks">https://github.com/soumith/ganhacks</a></p>
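<p>To see where the two loss scales come from, here is a minimal sketch of the standard alternating GAN updates on a toy 1-D Gaussian task. This is my own illustration, not code from the post or from ganhacks: the linear generator, logistic discriminator, and hand-derived gradients are chosen only to keep the example self-contained. Note that neither update reweights its loss against the other's.</p>

```python
# Minimal 1-D GAN sketch (assumption: NumPy-only toy setup, not from the post).
# Generator g(z) = a*z + c tries to match N(2, 0.5);
# discriminator D(x) = sigmoid(w*x + b) tries to tell real from fake.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

a, c = 1.0, 0.0   # generator parameters (scale, shift)
w, b = 0.1, 0.0   # discriminator parameters
lr, n = 0.05, 64

for step in range(2000):
    real = rng.normal(2.0, 0.5, size=n)
    fake = a * rng.normal(size=n) + c

    # --- Discriminator step: push D(real) -> 1, D(fake) -> 0.
    # dL/d(logit) is sigmoid(s)-1 on real samples and sigmoid(s) on fakes.
    dr = sigmoid(w * real + b) - 1.0
    df = sigmoid(w * fake + b)
    w -= lr * (dr @ real + df @ fake) / n
    b -= lr * (dr.sum() + df.sum()) / n

    # --- Generator step (non-saturating loss): push D(fake) -> 1.
    z = rng.normal(size=n)
    fake = a * z + c
    dg = (sigmoid(w * fake + b) - 1.0) * w   # dL/d(fake sample)
    a -= lr * (dg @ z) / n
    c -= lr * dg.mean()

# The generated mean c should drift toward the real mean of 2.0,
# while the two losses evolve at their own, typically different, scales.
print(c)
```

<p>Each side simply descends its own loss; no ad-hoc weighting ties the two together, which is the spirit of the "don't balance loss via statistics" tip.</p>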
]]></content:encoded></item></channel></rss>