Neural Network Resources

I’d try 2 hidden layers of 64 nodes each first. That should fit with just a base64 encoding, so you can concentrate on the training + forward pass.

If you find the network is too small, you can try larger.
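As a rough sizing check for the suggestion above, here is a sketch of the character budget; the 7-input / 2-output shape is an assumption borrowed from later posts in this thread, and the cost model is the standard base64 rate of 4 characters per 3 bytes with 4-byte floats.

```cpp
#include <cassert>

// Parameter count for a net with two hidden layers of `h` nodes
// (weights + biases per layer). Input/output sizes are assumptions.
int paramCount(int in, int h, int out) {
    return (in * h + h) + (h * h + h) + (h * out + out);
}

// base64 emits 4 characters per 3 bytes; each float weight is 4 bytes.
int base64Chars(int params) {
    int bytes = params * 4;
    return (bytes + 2) / 3 * 4;  // ceil(bytes / 3) * 4
}
```

With 7 inputs and 2 outputs that is 4802 parameters, around 25k base64 characters, comfortably inside CodinGame's roughly 100k character limit.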

To answer your question: you can also try base85, or for further gains try (ab)using unicode. CodinGame counts characters, not bytes.


Assuming C++ code locally and on CodinGame: unicode characters for codingame · GitHub
This approach should be easily translatable to Python, I believe.
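A minimal sketch of the packing idea, using a scheme made up here for illustration (the linked gist's actual encoding may differ): quantize each weight to 16 bits and spend one unicode code point per weight, since CodinGame counts characters. A real encoder also has to dodge the surrogate range 0xD800-0xDFFF and any characters the editor mangles, which this sketch ignores.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Illustrative only: map a weight in [-1, 1] onto a 16-bit integer and
// shift it into a code-point range past the ASCII control characters.
uint32_t encodeWeight(double w) {
    uint32_t q = (uint32_t)std::lround((w + 1.0) * 32767.5); // 0..65535
    return q + 0x100;  // offset past control characters (assumption)
}

double decodeWeight(uint32_t cp) {
    return (cp - 0x100) / 32767.5 - 1.0;
}
```

The round trip loses at most one quantization step (about 3e-5), which is usually negligible for NN weights.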

If I’m counting right, your current net takes 124405 characters. With one less hidden layer that would be 84205, which would leave ~16000 characters for other code.


The unicode method is genius!
I will try to train a smaller network first and try the unicode idea later.
Thanks to both of you for your help!

After a long time I am back on CodinGame.
I found this:

(D-600)/20000 : (SQRT((nextCPx-x)²+(nextCPy-y)²)-600)/20000

angleScalD : ?

angleCrossD : ?

vScalD/1000 : ((nextCPx-x)*vx+(nextCPy-y)*vy)/SQRT((nextCPx-x)²+(nextCPy-y)²)/1000

vCrossD/1000 : SIN(ACOS(((nextCPx-x)*vx+(nextCPy-y)*vy)/(SQRT((nextCPx-x)²+(nextCPy-y)²)*SQRT(vx²+vy²))))*SQRT(vx²+vy²)/1000

nextDirScalD : ((nextCPx-x)*(nextnextCPx-nextCPx)+(nextCPy-y)*(nextnextCPy-nextCPy))/(SQRT((nextCPx-x)²+(nextCPy-y)²)*SQRT((nextnextCPx-nextCPx)²+(nextnextCPy-nextCPy)²))

nextDirCrossD : SIN(ACOS(((nextCPx-x)*(nextnextCPx-nextCPx)+(nextCPy-y)*(nextnextCPy-nextCPy))/(SQRT((nextCPx-x)²+(nextCPy-y)²)*SQRT((nextnextCPx-nextCPx)²+(nextnextCPy-nextCPy)²))))

(distanceCP1CP2-1200)/10000 : (SQRT((nextnextCPx-nextCPx)²+(nextnextCPy-nextCPy)²)-1200)/10000
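The formulas above can be sketched in C++ like this (variable names follow the list: nextCPx/nextCPy is the next checkpoint, vx/vy the velocity). The dot/cross form replaces the SIN(ACOS(...)) chains; note that ACOS already discards the sign of the cross term, which the fabs reproduces here.

```cpp
#include <cassert>
#include <cmath>

struct Inputs { double dD, vScalD, vCrossD, nextDirScalD, nextDirCrossD, dCP; };

Inputs computeInputs(double x, double y, double vx, double vy,
                     double nextCPx, double nextCPy,
                     double nextnextCPx, double nextnextCPy) {
    double dx = nextCPx - x, dy = nextCPy - y;
    double d = std::sqrt(dx * dx + dy * dy);          // distance to checkpoint
    double cx = nextnextCPx - nextCPx, cy = nextnextCPy - nextCPy;
    double c = std::sqrt(cx * cx + cy * cy);          // checkpoint-to-checkpoint
    Inputs in;
    in.dD = (d - 600.0) / 20000.0;
    // Projection of velocity on the checkpoint direction, and its
    // perpendicular component (|cross|/d equals sin(acos(...))*|v|).
    in.vScalD  = (dx * vx + dy * vy) / d / 1000.0;
    in.vCrossD = std::fabs(dx * vy - dy * vx) / d / 1000.0;
    in.nextDirScalD  = (dx * cx + dy * cy) / (d * c);
    in.nextDirCrossD = std::fabs(dx * cy - dy * cx) / (d * c);
    in.dCP = (c - 1200.0) / 10000.0;
    return in;
}
```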

Does anyone know how to calculate angleScalD and angleCrossD?

double angleScalD = dot(vec2ByAngle(angle), vec2(1.0,0.0));
double angleCrossD = cross(vec2ByAngle(angle), vec2(1.0,0.0));

How do I get angleNextDir?
double angleScalD = dot(vec2ByAngle(angle), vec2(1.0,0.0));
double angleCrossD = cross(vec2ByAngle(angle), vec2(1.0,0.0));
int vScalD = (int)(normV * dot(vec2ByAngle(angleV), vec2(1.0,0.0)));
int vCrossD = (int)(normV * cross(vec2ByAngle(angleV), vec2(1.0,0.0)));
double nextDirScalD = dot(vec2ByAngle(angleNextDir), vec2(1.0, 0.0));
double nextDirCrossD = cross(vec2ByAngle(angleNextDir), vec2(1.0, 0.0));
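The snippets above lean on a few helpers that aren't shown; here is a plausible sketch of them (assuming vec2ByAngle takes an angle in radians):

```cpp
#include <cassert>
#include <cmath>

struct vec2 {
    double x, y;
    vec2(double x_, double y_) : x(x_), y(y_) {}
};

// Unit vector pointing in the given direction (radians assumed).
vec2 vec2ByAngle(double angle) { return vec2(std::cos(angle), std::sin(angle)); }

double dot(const vec2& a, const vec2& b)   { return a.x * b.x + a.y * b.y; }
double cross(const vec2& a, const vec2& b) { return a.x * b.y - a.y * b.x; }
```

With these, angleScalD is just cos(angle) and angleCrossD is -sin(angle): dotting with (1,0) picks the x component, and cross((cos a, sin a), (1,0)) = -sin a.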

I now have a working NN and know how to translate the inputs into it, but I couldn't find how to translate the NN's output back into the game. The second term is the thrust, but I don't know how to convert the first term into the pod's destination.
Any ideas?

In expert mode you can provide an angle (-18, 18) and a thrust. When you provide a destination, the referee just translates it into an angle internally.
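A hedged sketch of that mapping, assuming the two network outputs land in [0, 1] (e.g. from a sigmoid); the scaling targets here are assumptions about the net, not the referee's spec:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Command { int angle; int thrust; };

// Map raw outputs in [0, 1] onto the expert-mode command:
// out0 -> rotation in [-18, 18] degrees, out1 -> thrust in [0, 100].
Command toCommand(double out0, double out1) {
    Command c;
    c.angle  = (int)std::lround(out0 * 36.0 - 18.0);
    c.thrust = std::max(0, std::min(100, (int)std::lround(out1 * 100.0)));
    return c;
}
```

If the outputs are tanh (in [-1, 1]) instead, the same idea applies with out0 * 18 and (out1 + 1) * 50.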

I think Pedro1994 is talking about Mad Pod Racing.

int x = checkpoint[_p1.check_point].x - 3*_p1.speed.x;
int y = checkpoint[_p1.check_point].y - 3*_p1.speed.y;
or 
float anglef = _p1.angletot;
float angleRad = anglef * PI / 180.f;
Vector2 dird = {cos(angleRad)*10000 , sin(angleRad)*10000};
cout << dird.x << " " << dird.y << " " << thrust << endl;

My NN gives the same result with this type of data: always the same output for the angle and for the thrust. Can somebody help me, please?
vector<int> dimension = { 7, 8, 8, 8, 8, 8, 8, 8, 2 };
nn = NeuralNetwork(dimension, 0.4);
nn.InitByDumpNN({/* weight layer 0*/
{{-0.123,…,-0.486,0.300}},
/* weight layer 1*/
{{-0.218,…,-0.399}},
/* weight layer 2*/
{{0.063, …,0.427}},
/* weight layer 3*/
{{-0.366,…,0.392}},
/* weight layer 4*/
{{0.136,…,0.220}},
/* weight layer 5*/
{{0.417,…,0.134}},
/* weight layer 6*/
{{0.906,…,-0.232}},
/* weight layer 7*/
{{6.230,3.534},{0.112,-0.091},{0.321,-0.349},{0.327,-0.012},{0.432,-0.028},{6.285,3.650},{4.884,3.100},{0.426,0.452}} },
{/*layer bias : */
{0.017,0.139,0.178,0.516,-0.258,0.453,0.463,0.380},
/*layer bias : */
{-0.454,0.043,-0.361,-0.422,-0.168,0.267,0.549,0.134},
/*layer bias : */
{-0.016,0.085,0.097,0.051,-0.087,0.055,0.227,0.335},
/*layer bias : */
{0.208,0.159,-0.080,-0.144,0.140,0.712,-0.421,-0.086},
/*layer bias : */
{-0.257,-0.032,-0.432,-0.418,-0.142,0.113,0.691,-0.025},
/*layer bias : */
{0.054,0.906,0.012,-0.053,-0.546,0.924,-0.212,-0.495},
/*layer bias : */
{0.891,-0.270,-0.278,-0.075,-0.446,-2.431,-0.400,0.185},
/*layer bias : */
{7.131,3.891} });
And for the input:

        int nextcheck = (_p1.check_point + 1) % checkpoint.size();
        double x1 = _p1.x;
        double y1 = _p1.y;
        double x2 = checkpoint[_p1.check_point].x;
        double y2 = checkpoint[_p1.check_point].y;
        double x3 = checkpoint[nextcheck].x;
        double y3 = checkpoint[nextcheck].y;
        double angle = atan2(y1 - y2, x1 - x2) - atan2(y3 - y2, x3 - x2);
        angle = angle * 180.0 / PI;
        angle = fmod((angle + 180.0), 360.0);
        if (angle < 0.0)
            angle += 360.0;
        angle -= 180.0;
        double anglech = atan2(y2 - y1, x2 - x1);
        anglech = anglech * 180.0 / PI;
        // Adjust relative to the total angle (p.angletot)
        anglech = fmod(anglech - _p1.angletot + 540, 360) - 180;
        double col = double((_p1.speed.x * (checkpoint[_p1.check_point].x - _p1.x) + _p1.speed.y * (checkpoint[_p1.check_point].y - _p1.y))) /
            double(sqrt(_p1.speed.x * _p1.speed.x + _p1.speed.y * _p1.speed.y) *
                sqrt((checkpoint[_p1.check_point].x - _p1.x) * (checkpoint[_p1.check_point].x - _p1.x) + (checkpoint[_p1.check_point].y - _p1.y) * (checkpoint[_p1.check_point].y - _p1.y)) + 0.000001);

        double distcheck = distance(checkpoint[_p1.check_point], { (float)_p1.x, (float)_p1.y });

        double speed = norme1(_p1.speed);

        double a1 = angle / 180.0;
        double a2 = anglech / 180.0;
        double a3 = col;
        double a4 = distcheck / 30000;
        double a5 = speed / 2000.0;
        double a6 = _p1.angletot / 360.0;
        double a7 = (float)_p1.thrust / 200.0;