SerializationException when serializing lots of objects in .NET

I tried reproducing the problem, but the code just takes forever to run, even when each of the 13+ million objects is only 2 bytes. So I suspect you could not only fix the problem but also significantly improve performance if you pack your data a little better in your custom ISerializable implementations. Don't let the serializer see that deep into your structure; cut it off at the point where your object graph blows up into hundreds of thousands of array elements or more (because, presumably, if you have that many objects they're pretty small, or you wouldn't be able to hold them in memory anyway). Take this example, which lets the serializer see classes B and C but manually manages the collection of class A:

using System;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        C c = new C(8, 2000000);
        System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bf = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
        System.IO.MemoryStream ms = new System.IO.MemoryStream();
        bf.Serialize(ms, c);
        ms.Seek(0, System.IO.SeekOrigin.Begin);
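        // Print a few sample values before and after the round trip to check that the data survives.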
        for (int i = 0; i < 3; i++)
            for (int j = i; j < i + 3; j++)
                Console.WriteLine("{0}, {1}", c.all[i][j].b1, c.all[i][j].b2);
        Console.WriteLine("=====");
        c = null;
        c = (C)(bf.Deserialize(ms));
        for (int i = 0; i < 3; i++)
            for (int j = i; j < i + 3; j++)
                Console.WriteLine("{0}, {1}", c.all[i][j].b1, c.all[i][j].b2);
        Console.WriteLine("=====");
    }
}

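// A is deliberately not marked [Serializable]; B packs each instance into a UInt16 instead.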
class A
{
    byte dataByte1;
    byte dataByte2;
    public A(byte b1, byte b2)
    {
        dataByte1 = b1;
        dataByte2 = b2;
    }

    public UInt16 GetAllData()
    {
        return (UInt16)((dataByte1 << 8) | dataByte2);
    }

    public A(UInt16 allData)
    {
        dataByte1 = (byte)(allData >> 8);
        dataByte2 = (byte)(allData & 0xff);
    }

    public byte b1
    {
        get
        {
            return dataByte1;
        }
    }

    public byte b2
    {
        get
        {
            return dataByte2;
        }
    }
}

[Serializable()]
class B : System.Runtime.Serialization.ISerializable
{
    string name;
    List<A> myList;

    public B(int size)
    {
        myList = new List<A>(size);

        for (int i = 0; i < size; i++)
        {
            myList.Add(new A((byte)(i % 255), (byte)((i + 1) % 255)));
        }
        name = "List of " + size.ToString();
    }

    public A this[int index]
    {
        get
        {
            return myList[index];
        }
    }

    #region ISerializable Members

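    // Pack each 2-byte A into a single UInt16 so the formatter serializes one UInt16[]
    // instead of tracking millions of individual A objects.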
    public void GetObjectData(System.Runtime.Serialization.SerializationInfo info, System.Runtime.Serialization.StreamingContext context)
    {
        UInt16[] packed = new UInt16[myList.Count];
        info.AddValue("name", name);
        for (int i = 0; i < myList.Count; i++)
        {
            packed[i] = myList[i].GetAllData();
        }
        info.AddValue("packedData", packed);
    }

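    // Rebuild the A instances from the packed array during deserialization.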
    protected B(System.Runtime.Serialization.SerializationInfo info, System.Runtime.Serialization.StreamingContext context)
    {
        name = info.GetString("name");
        UInt16[] packed = (UInt16[])(info.GetValue("packedData", typeof(UInt16[])));
        myList = new List<A>(packed.Length);
        for (int i = 0; i < packed.Length; i++)
            myList.Add(new A(packed[i]));
    }

    #endregion
}

[Serializable()]
class C
{
    public List<B> all;
    public C(int count, int size)
    {
        all = new List<B>(count);
        for (int i = 0; i < count; i++)
        {
            all.Add(new B(size));
        }
    }
}

Have you considered that Int32.MaxValue is 2,147,483,647, i.e. over 2 billion?

You'd need 16 GB of memory just to store the pointers (assuming a 64-bit machine), let alone the objects themselves. Half that on a 32-bit machine, though squeezing 8 GB of pointer data into the roughly 3 GB of usable address space would be quite a trick.
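
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch (not code from the question):

long references = int.MaxValue;               // 2,147,483,647 object references
double gb64 = references * 8.0 / (1L << 30);  // 8-byte references in a 64-bit process
double gb32 = references * 4.0 / (1L << 30);  // 4-byte references in a 32-bit process
Console.WriteLine("~{0:F0} GB of pointers on x64, ~{1:F0} GB on x86", gb64, gb32);
// Prints: ~16 GB of pointers on x64, ~8 GB on x86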

I strongly suspect that your problem is not the number of objects, but that the serialization framework is going into some kind of infinite loop because you have referential loops in your data structure.

Consider this simple class:

public class Node
{
    public string Name { get; set; }
    public IList<Node> Children { get; }
    public Node Parent { get; set; }
    ...
}

This simple class can't be serialised, because the presence of the Parent property means that serialisation will go into an infinite loop.

Since you're already implementing ISerializable, you are 75% of the way to solving this: you just need to make sure the object graph you store contains no cycles, so that you are storing an object tree instead.

One technique that is often used is to store the name (or id) of a referenced object instead of the actual reference, resolving the name back to the object on load.
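
To sketch what that can look like with the Node class above (the field and helper names here are illustrative, and it assumes node names are unique): serialize the children normally, write only the parent's name, and resolve the names back to references in a single pass after loading.

using System;
using System.Collections.Generic;
using System.Runtime.Serialization;

[Serializable]
public class Node : ISerializable
{
    public string Name;
    public List<Node> Children = new List<Node>();
    public Node Parent;              // never handed to the serializer, so no cycle

    private string parentName;       // transient helper, only used while loading

    public Node(string name) { Name = name; }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("name", Name);
        info.AddValue("children", Children);   // the children alone form a tree
        info.AddValue("parentName", Parent == null ? null : Parent.Name);
    }

    protected Node(SerializationInfo info, StreamingContext context)
    {
        Name = info.GetString("name");
        Children = (List<Node>)info.GetValue("children", typeof(List<Node>));
        parentName = info.GetString("parentName");
    }

    // Call on the root after deserialization: build a name -> node map,
    // then turn each stored parentName back into an object reference.
    public static void ResolveParents(Node root)
    {
        Dictionary<string, Node> byName = new Dictionary<string, Node>();
        Index(root, byName);
        Link(root, byName);
    }

    private static void Index(Node n, Dictionary<string, Node> byName)
    {
        byName[n.Name] = n;
        foreach (Node c in n.Children) Index(c, byName);
    }

    private static void Link(Node n, Dictionary<string, Node> byName)
    {
        Node p;
        if (n.parentName != null && byName.TryGetValue(n.parentName, out p))
            n.Parent = p;
        foreach (Node c in n.Children) Link(c, byName);
    }
}

After Deserialize returns, call Node.ResolveParents(root) once to restore the Parent links.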


Depending on the structure of the data, maybe you can serialize/deserialize subgraphs of your large object graph? If the data can be partitioned somehow, you could get away with this while creating only a small amount of duplicated serialized data.
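
As a rough sketch of that idea, reusing the B and C classes from the example further up (the method and file names are purely illustrative):

static void SavePartitioned(C c, string prefix)
{
    var bf = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
    for (int i = 0; i < c.all.Count; i++)
    {
        // One subgraph (one B) per file instead of serializing C in a single piece.
        using (var fs = System.IO.File.Create(prefix + i + ".bin"))
            bf.Serialize(fs, c.all[i]);
    }
}

static C LoadPartitioned(string prefix, int count)
{
    var bf = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
    C c = new C(0, 0);               // empty container; partitions are added back one by one
    for (int i = 0; i < count; i++)
    {
        using (var fs = System.IO.File.OpenRead(prefix + i + ".bin"))
            c.all.Add((B)bf.Deserialize(fs));
    }
    return c;
}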


The issue has been fixed in .NET Core 2.1. I have requested that the fix be backported to .NET Framework 4.8:

https://github.com/Microsoft/dotnet-framework-early-access/issues/46.

If you feel the issue should be fixed, you can leave a comment saying that it is also important to you. The fix in .NET Core was to reuse the prime-number generator already present in Dictionary for BinaryFormatter as well.

If you have that many objects serialized and you do not want to wait 40 minutes to read them back, make sure you add this to your App.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Use this switch to make BinaryFormatter fast with large object graphs starting with .NET 4.7.2 -->
    <AppContextSwitchOverrides value="Switch.System.Runtime.Serialization.UseNewMaxArraySize=true" />
  </runtime>
</configuration>

to enable the BinaryFormatter deserialization fix, which finally arrived with .NET 4.7.2. More information about both issues can be found here:

https://aloiskraus.wordpress.com/2017/04/23/the-definitive-serialization-performance-guide/
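
If editing App.config is not an option (for example in a plug-in or test host), the same switch should also be settable in code via AppContext.SetSwitch, as long as that runs before the first BinaryFormatter call; a minimal sketch (the file name is illustrative):

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

static class Program
{
    static void Main()
    {
        // Set the switch before any BinaryFormatter work; the value is typically
        // read once by the framework and cached afterwards.
        AppContext.SetSwitch("Switch.System.Runtime.Serialization.UseNewMaxArraySize", true);

        var bf = new BinaryFormatter();
        using (var fs = File.OpenRead("bigGraph.bin"))   // illustrative file name
        {
            object graph = bf.Deserialize(fs);
            Console.WriteLine(graph.GetType());
        }
    }
}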