
Common serialization format shared between all zk-SNARK frameworks #1206

tdelabro opened this issue Jul 17, 2024 · 8 comments

@tdelabro

I am sharing here a discussion I created on Arkworks about interoperability between the different zk-snark libs out there.

https://github.com/orgs/arkworks-rs/discussions/8

Any insight from your team would be super welcome!

@ivokub
Collaborator

ivokub commented Jul 17, 2024

Yeah, it is inconvenient that every circuit library has its own serialization format. We have looked into using some general IR, but it seems difficult to have a generic implementation due to the particular optimizations etc. that every frontend does.

There was an effort, Vamp-IR, but it seems that its development has stopped.

@readygo67
Contributor

I have implemented a gnark PLONK verifier written in Rust with arkworks, and I'd like to share my experience.
In general, three points deserve attention:

  1. Different masks for the G1/G2 compressed x format:
//reference github.com/consensys/gnark-crypto/ecc/bn254/marshal.go
const GNARK_MASK: u8 = 0b11 << 6;
const GNARK_UNCOMPRESSED: u8 = 0b00 << 6;
const GNARK_COMPRESSED_POSTIVE: u8 = 0b10 << 6;
const GNARK_COMPRESSED_NEGATIVE: u8 = 0b11 << 6;
const GNARK_COMPRESSED_INFINITY: u8 = 0b01 << 6;

const ARK_MASK: u8 = 0b11 << 6;
const ARK_COMPRESSED_POSTIVE: u8 = 0b00 << 6;
const ARK_COMPRESSED_NEGATIVE: u8 = 0b10 << 6;
const ARK_COMPRESSED_INFINITY: u8 = 0b01 << 6;
  2. In the uncompressed G1/G2 format, arkworks uses a negative flag if y is negative, while this flag does not exist in gnark; refer to the "ark_g1_to_gnark_unompressed_bytes" function below.

  3. Little-endian encoding in arkworks vs. big-endian in gnark.

The following is the code snippet:

use ark_bn254::{Config, Fq, G1Affine, G2Affine};
use ark_ec::{
    bn::{self, Bn, BnConfig, TwistType},
    short_weierstrass::SWFlags,
    AffineRepr,
};
use ark_ff::{BigInteger, Field, PrimeField};
use ark_serialize::{CanonicalDeserialize, CanonicalSerialize, SerializationError};
use std::cmp::{Ord, Ordering, PartialOrd};
use std::error::Error;
use std::ops::{Add, Div, Mul, Neg, Sub};
use std::{any::Any, str::FromStr};

//reference github.com/consensys/gnark-crypto/ecc/bn254/marshal.go
const GNARK_MASK: u8 = 0b11 << 6;
const GNARK_UNCOMPRESSED: u8 = 0b00 << 6;
const GNARK_COMPRESSED_POSTIVE: u8 = 0b10 << 6;
const GNARK_COMPRESSED_NEGATIVE: u8 = 0b11 << 6;
const GNARK_COMPRESSED_INFINITY: u8 = 0b01 << 6;

const ARK_MASK: u8 = 0b11 << 6;
const ARK_COMPRESSED_POSTIVE: u8 = 0b00 << 6;
const ARK_COMPRESSED_NEGATIVE: u8 = 0b10 << 6;
const ARK_COMPRESSED_INFINITY: u8 = 0b01 << 6;

/*
    In gnark, G1Affine:
    compressed bytes are big-endian,
    MSB byte:
    bit7 = 1 : compressed format
    bit6 = 1 : y > -y
    bit6 = 0 : y < -y
    uncompressed bytes = x big-endian bytes | y big-endian bytes

    In arkworks, G1Affine:
    compressed bytes are little-endian,
    MSB byte:
    bit7 = 0 : y < -y
    bit7 = 1 : y > -y
    uncompressed bytes = x le bytes | y le bytes, with a negative flag in y's MSB byte
*/
fn gnark_flag_to_ark_flag(msb: u8) -> u8 {
    let gnark_flag = msb & GNARK_MASK;

    let ark_flag = match gnark_flag {
        GNARK_COMPRESSED_POSTIVE => ARK_COMPRESSED_POSTIVE,
        GNARK_COMPRESSED_NEGATIVE => ARK_COMPRESSED_NEGATIVE,
        GNARK_COMPRESSED_INFINITY => ARK_COMPRESSED_INFINITY,
        _ => panic!("Unexpected gnark_flag value: {}", gnark_flag),
    };

    msb & !ARK_MASK | ark_flag
}

// Convert big-endian gnark compressed x bytes to little-endian ark compressed x for a G1 or G2 point.
pub fn ganrk_commpressed_x_to_ark_commpressed_x(x: &Vec<u8>) -> Vec<u8> {
    if x.len() != 32 && x.len() != 64 {
        panic!("Invalid x length: {}", x.len());
    }
    let mut x_copy = x.clone();

    let msb = gnark_flag_to_ark_flag(x_copy[0]);
    x_copy[0] = msb;

    x_copy.reverse();
    x_copy
}

// Convert a G1Affine point to gnark uncompressed bytes.
// ark uncompressed   | x bytes in le | y bytes in le | (d88ec7e93cdf5ddabe594fc8b62c1913c1ee19a029bc4a6b2a56ecae808a7c09 06a2261cf69efc2413ce2db397a8c0fccf0849f81979b2c2fc9457cdf2bd5300)
// gnark uncompressed | x bytes in be | y bytes in be | (097c8a80aeec562a6b4abc29a019eec113192cb6c84f59beda5ddf3ce9c78ed8 0053bdf2cd5794fcc2b27919f84908cffcc0a897b32dce1324fc9ef61c26a206)
// Note: in arkworks, a negative flag is set if y is negative; this flag does not exist in gnark.
// In production, use only one of the two methods below.
pub fn ark_g1_to_gnark_unompressed_bytes(point: &G1Affine) -> Result<Vec<u8>, Box<dyn Error>> {
    let mut bytes1 = vec![];
    
    // Method 1: serialize x and y separately and concatenate them.
    let x_bytes = point.x().unwrap().into_bigint().to_bytes_be();
    let y_bytes = point.y().unwrap().into_bigint().to_bytes_be();

    bytes1.extend_from_slice(&x_bytes);
    bytes1.extend_from_slice(&y_bytes);


    // Method 2: use the arkworks serialize_uncompressed function, with special handling of the negative flag.
    let mut bytes2 = vec![];
    point.serialize_uncompressed(&mut bytes2)?;
    let (x_bytes, y_bytes) = bytes2.split_at_mut(32);
    x_bytes.reverse();
    y_bytes.reverse();

    //remove the negative flag
    let flag = point.to_flags();
    if flag == SWFlags::YIsNegative {
        y_bytes[0] &= !(0b1 << 7);
    }

    let mut output_bytes = x_bytes.to_vec();
    output_bytes.extend_from_slice(y_bytes);
    
    if output_bytes != bytes1 {
        return Err("ark_g1_to_gnark_unompressed_bytes failed".into());
    }
    Ok(output_bytes)
}

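// Parse a gnark big-endian compressed x (32 bytes) into an arkworks G1Affine point,
// recovering y from x and selecting the sign according to the gnark flag bits.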
pub fn gnark_compressed_x_to_g1_point(buf: &[u8]) -> Result<G1Affine, Box<dyn Error>> {
    if buf.len() != 32 {
        return Err(SerializationError::InvalidData.into())
    };

    let m_data = buf[0] & GNARK_MASK;
    if m_data == GNARK_COMPRESSED_INFINITY {
        if !is_zeroed(buf[0] & !GNARK_MASK, &buf[1..32]) {
            return Err(SerializationError::InvalidData.into())
        }
        Ok(G1Affine::identity())
    } else {
        let mut x_bytes: [u8; 32] = [0u8; 32];
        x_bytes.copy_from_slice(buf);
        x_bytes[0] &= !GNARK_MASK;

        let x = Fq::from_be_bytes_mod_order(&x_bytes.to_vec());
        let (y, neg_y) =
            G1Affine::get_ys_from_x_unchecked(x).ok_or(SerializationError::InvalidData)?;

        let mut final_y = y;
        if y.cmp(&neg_y) == Ordering::Greater {
            if m_data == GNARK_COMPRESSED_POSTIVE {
                final_y = y.neg();
            }
        } else {
            if m_data == GNARK_COMPRESSED_NEGATIVE {
                final_y = y.neg();
            }
        }

        let p = G1Affine::new_unchecked(x, final_y);
        if !p.is_on_curve() {
            return Err(SerializationError::InvalidData.into())
        }
        Ok(p)
    }
}

fn is_zeroed(first_byte: u8, buf: &[u8]) -> bool {
    if first_byte != 0 {
        return false;
    }
    for &b in buf {
        if b != 0 {
            return false;
        }
    }
    true
}

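// Parse gnark uncompressed G1 bytes (x be || y be, 64 bytes) into an arkworks G1Affine point.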
pub fn gnark_uncompressed_bytes_to_g1_point(buf: &[u8]) -> Result<G1Affine, Box<dyn Error>>{
    if buf.len() != 64 {
        return Err(SerializationError::InvalidData.into());
    };

    let (x_bytes, y_bytes) = buf.split_at(32);

    let x = Fq::from_be_bytes_mod_order(&x_bytes.to_vec());
    let y = Fq::from_be_bytes_mod_order(&y_bytes.to_vec());
    let p = G1Affine::new_unchecked(x, y);
    if !p.is_on_curve() {
        return Err(SerializationError::InvalidData.into());
    }
    Ok(p)
}

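// Parse a gnark big-endian compressed G2 x (64 bytes) into an arkworks G2Affine point
// by converting the flags and byte order to arkworks' compressed layout.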
pub fn gnark_compressed_x_to_g2_point(buf: &[u8]) -> Result<G2Affine, Box<dyn Error>> {
    if buf.len() != 64 {
        return Err(SerializationError::InvalidData.into());
    };

    let bytes = ganrk_commpressed_x_to_ark_commpressed_x(&buf.to_vec());
    let p = G2Affine::deserialize_compressed::<&[u8]>(&bytes)?;
    Ok(p)
}


#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn test_g1_generator() {
        let generator = G1Affine::generator();

        let x = Fq::from(1u8);
        let y = Fq::from(2u8);

        let p1 = G1Affine::new(x, y);
        assert_eq!(p1, generator);

        let p2 = G1Affine::new(Fq::from_str("1").unwrap(), Fq::from_str("2").unwrap());
        assert_eq!(p2, generator);
    }


    #[test]
    fn test_g1point_from_string() {
        // Note:
        // 1. Fq::from_str only works for strings without leading zeros.
        // 2. An arkworks G1 point is marshalled in little-endian.
        
        let x = Fq::from_str("1").unwrap();
        let y = Fq::from_str("2").unwrap();
        let p1 = G1Affine::new(x, y);
        println!("{:?}", p1);
        assert_eq!(p1.is_on_curve(), true);

        let mut bytes_vec = vec![];
        p1.serialize_compressed(&mut bytes_vec).unwrap();
        // println!("bytes_vec: {:?}", bytes_vec); //little-endian

        let s = String::from("0000000000000000000000000000000000000000000000000000000000000001");
        let mut bytes_vec = hex::decode(&s.clone()).unwrap();

        bytes_vec.reverse();
        let p2 = G1Affine::deserialize_compressed::<&[u8]>(bytes_vec.as_ref()).unwrap();

        assert_eq!(p1, p2);
    }

    #[test]
    fn test_g1point_serde() {
        //step1, rebuild G1Affine from x and y coordinates
        let xs = String::from("097c8a80aeec562a6b4abc29a019eec113192cb6c84f59beda5ddf3ce9c78ed8");
        let mut bytes_vec = hex::decode(&xs.clone()).unwrap();
        bytes_vec.reverse();
        let x = Fq::deserialize_compressed::<&[u8]>(bytes_vec.as_ref()).unwrap();

        let ys = String::from("0053bdf2cd5794fcc2b27919f84908cffcc0a897b32dce1324fc9ef61c26a206");
        let mut bytes_vec = hex::decode(&ys.clone()).unwrap();
        bytes_vec.reverse();
        let y = Fq::deserialize_compressed::<&[u8]>(bytes_vec.as_ref()).unwrap();
        let p1 = G1Affine::new_unchecked(x, y);
        assert_eq!(p1.is_on_curve(), true);

        //step2. get G1Affine compressed bytes, and rebuild
        let mut compressed_bytes: Vec<u8> = vec![];
        p1.serialize_compressed(&mut compressed_bytes).unwrap();
        println!("p1 compressed: {:?}", hex::encode(compressed_bytes.clone()));
        let p2 = G1Affine::deserialize_compressed::<&[u8]>(compressed_bytes.as_ref()).unwrap();
        assert_eq!(p2.is_on_curve(), true);
        assert_eq!(p1, p2);

        //step3. get G1Affine uncompressed bytes, and rebuild
        //gnark 097c8a80aeec562a6b4abc29a019eec113192cb6c84f59beda5ddf3ce9c78ed80053bdf2cd5794fcc2b27919f84908cffcc0a897b32dce1324fc9ef61c26a206
        let mut uncompressed_bytes = vec![];
        p1.serialize_uncompressed(&mut uncompressed_bytes).unwrap();
        println!(
            "p1 uncompressed: {:?}",
            hex::encode(uncompressed_bytes.clone())
        );
        println!("p1 uncompressed: {:?}", uncompressed_bytes.clone());
        let p3 = G1Affine::deserialize_uncompressed::<&[u8]>(uncompressed_bytes.as_ref()).unwrap();
        assert_eq!(p3.is_on_curve(), true);
        assert_eq!(p1, p3);
    }

    #[test]
    fn test_gnark_flag_to_ark_flag() {
        let b: u8 = 255;

        let gnark_positive = b & !GNARK_MASK | GNARK_COMPRESSED_POSTIVE;
        let ark_positive = gnark_flag_to_ark_flag(gnark_positive);
        assert_eq!(ark_positive & ARK_MASK, ARK_COMPRESSED_POSTIVE);
        assert_eq!(ark_positive & !ARK_MASK, 63);
        // println!("gnark_positive {:?}, ark_positive: {:?}", gnark_positive, ark_positive);

        let gnark_negative = b & !GNARK_MASK | GNARK_COMPRESSED_NEGATIVE;
        let ark_negative = gnark_flag_to_ark_flag(gnark_negative);
        assert_eq!(ark_negative & ARK_MASK, ARK_COMPRESSED_NEGATIVE);
        assert_eq!(ark_negative & !ARK_MASK, 63);
        // println!("gnark_negative {:?},ark_negative: {:?}", gnark_negative, ark_negative);

        let gnark_infinity = b & !GNARK_MASK | GNARK_COMPRESSED_INFINITY;
        let ark_infinity = gnark_flag_to_ark_flag(gnark_infinity);
        assert_eq!(ark_infinity & ARK_MASK, ARK_COMPRESSED_INFINITY);
        assert_eq!(ark_infinity & !ARK_MASK, 63);
        // println!("gnark_infinity {:?},ark_infinity: {:?}", gnark_infinity, ark_infinity);
    }

    #[test]
    #[should_panic(expected = "Unexpected gnark_flag value")]
    fn test_gnark_flag_to_ark_flag_panic() {
        let b: u8 = 255;

        let gnark_invalid = b & !GNARK_MASK;
        gnark_flag_to_ark_flag(gnark_invalid);
    }

    #[test]
    fn test_g1point_gnark_compressed_x_to_ark_compressed_x() {
        {
            let xs = String::from("897c8a80aeec562a6b4abc29a019eec113192cb6c84f59beda5ddf3ce9c78ed8");
            let ganrk_x_bytes_vec = hex::decode(&xs).unwrap();

            let ark_x_bytes_vec = ganrk_commpressed_x_to_ark_commpressed_x(&ganrk_x_bytes_vec);
            assert_eq!(
                "d88ec7e93cdf5ddabe594fc8b62c1913c1ee19a029bc4a6b2a56ecae808a7c09",
                hex::encode(ark_x_bytes_vec.clone())
            );

            let p1 = G1Affine::deserialize_compressed::<&[u8]>(ark_x_bytes_vec.as_ref()).unwrap();
            assert_eq!(p1.is_on_curve(), true);

            let mut compressed = vec![];
            p1.serialize_compressed(&mut compressed).unwrap();
            println!("compressed: {:?}", hex::encode(compressed.clone()));
            assert_eq!("d88ec7e93cdf5ddabe594fc8b62c1913c1ee19a029bc4a6b2a56ecae808a7c09", hex::encode(compressed));
        }


        {
            let xs = String::from("d934a10bcf7f1b4a365e8be1c1063fe8f919f03021c2ffe4f80b29267ec93e5b");
            let ganrk_x_bytes_vec = hex::decode(&xs).unwrap();

            let ark_x_bytes_vec = ganrk_commpressed_x_to_ark_commpressed_x(&ganrk_x_bytes_vec);
            assert_eq!(
                "5b3ec97e26290bf8e4ffc22130f019f9e83f06c1e18b5e364a1b7fcf0ba13499",
                hex::encode(ark_x_bytes_vec.clone())
            );

            let p1 = G1Affine::deserialize_compressed::<&[u8]>(ark_x_bytes_vec.as_ref()).unwrap();
            assert_eq!(p1.is_on_curve(), true);

            let mut compressed = vec![];
            p1.serialize_compressed(&mut compressed).unwrap();
            println!("compressed: {:?}", hex::encode(compressed.clone()));
            assert_eq!("5b3ec97e26290bf8e4ffc22130f019f9e83f06c1e18b5e364a1b7fcf0ba13499", hex::encode(compressed));
        }

    }


    #[test]
    fn test_g2point_gnark_compressed_x_to_ark_compressed_x() {
        //bn254 g2 generator
        //998e9393920d483a7260bfb731fb5d25f1aa493335a9e71297e485b7aef312c21800deef121f1e76426a00665e5c4479674322d4f75edadd46debd5cd992f6ed, x:10857046999023057135944570762232829481370756359578518086990519993285655852781+11559732032986387107991004021392285783925812861821192530917403151452391805634*u, y:8495653923123431417604973247489272438418190587263600148770280649306958101930+4082367875863433681332203403145435568316851327593401208105741076214120093531*u
        let xs = String::from("998e9393920d483a7260bfb731fb5d25f1aa493335a9e71297e485b7aef312c21800deef121f1e76426a00665e5c4479674322d4f75edadd46debd5cd992f6ed");
        let ganrk_x_bytes_vec = hex::decode(&xs).unwrap();

        let ark_x_bytes_vec = ganrk_commpressed_x_to_ark_commpressed_x(&ganrk_x_bytes_vec);
        assert_eq!("edf692d95cbdde46ddda5ef7d422436779445c5e66006a42761e1f12efde0018c212f3aeb785e49712e7a9353349aaf1255dfb31b7bf60723a480d9293938e19", hex::encode(ark_x_bytes_vec.clone()));

        let p1 = G2Affine::deserialize_compressed::<&[u8]>(ark_x_bytes_vec.as_ref()).unwrap();
        assert_eq!(p1.is_on_curve(), true);
        assert_eq!(p1, G2Affine::generator());

        let mut compressed = vec![];
        p1.serialize_compressed(&mut compressed).unwrap();
        // println!("compressed: {:?}", hex::encode(compressed.clone()));

        let mut x_compressed = vec![];
        p1.x()
            .unwrap()
            .serialize_compressed(&mut x_compressed)
            .unwrap();
        // println!("x compressed: {:?}", hex::encode(x_compressed.clone()));
        assert_eq!(x_compressed, compressed);

        assert_eq!(
            "10857046999023057135944570762232829481370756359578518086990519993285655852781",
            p1.x().unwrap().c0.into_bigint().to_string()
        );
        assert_eq!(
            "11559732032986387107991004021392285783925812861821192530917403151452391805634",
            p1.x().unwrap().c1.into_bigint().to_string()
        );
        assert_eq!(
            "8495653923123431417604973247489272438418190587263600148770280649306958101930",
            p1.y().unwrap().c0.into_bigint().to_string()
        );
        assert_eq!(
            "4082367875863433681332203403145435568316851327593401208105741076214120093531",
            p1.y().unwrap().c1.into_bigint().to_string()
        );
    }
}

@tdelabro
Author

@readygo67 the whole point of having a common shared representation is that no lib has to be aware of the internal representation of the others.
You don't have to know anything about gnark or snark-js

@readygo67
Contributor

readygo67 commented Jul 24, 2024

@tdelabro
I think there are two points:

  1. Even though lib A should not know the internal representation of lib B, they both must agree on the meaning of a value (say, 123456); in this respect, I think BigUint is the pillar for all.
  2. Then every lib has to adapt the value to its internal representation, for example big-endian vs. little-endian, special flags, etc. (see the sketch after this list).
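A minimal sketch of point 1, assuming the ark-bn254, ark-ff, and num-bigint crates; the round trip through BigUint is only an illustration of "agreeing on the integer", not a proposed format:

use ark_bn254::Fr;
use ark_ff::{BigInteger, PrimeField};
use num_bigint::BigUint;

fn main() {
    // The "meaning" both sides agree on: the integer 123456 in the scalar field.
    let shared = BigUint::from(123456u64);

    // arkworks side: map the canonical big-endian bytes into its internal Fr.
    let ark_value = Fr::from_be_bytes_mod_order(&shared.to_bytes_be());

    // Going back out: recover the canonical integer again, regardless of the
    // little-endian limbs arkworks uses internally.
    let round_trip = BigUint::from_bytes_be(&ark_value.into_bigint().to_bytes_be());
    assert_eq!(shared, round_trip);

    // A gnark verifier would do the mirror image with fr.Element.SetBytes /
    // Bytes, which are also big-endian, so both sides meet at the same integer.
}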

@ivokub
Collaborator

ivokub commented Jul 24, 2024

> @readygo67 the whole point of having a common shared representation is that no lib has to be aware of the internal representation of the others. You don't have to know anything about gnark or snark-js

At least in gnark we use a serialization format that is close to the internal representation, to make serialization and deserialization more efficient. Other libraries may have made different choices (e.g. point format, field element format, etc.), so there would have to be a unified approach - but even then there is the question of how to represent field elements (big-endian vs. little-endian).

I think right now different libraries are developing at quite a fast pace, so trying to adhere to a common format would restrict the choices developers can make. I guess at some point, when the most efficient internal representation formats have settled, it will make sense to try to find a unified format, but right now there isn't a consensus.
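To make the endianness point concrete, here is a small sketch assuming the same ark-bn254 / ark-ff / ark-serialize crates as in the snippet above: arkworks' canonical encoding of a base-field element is exactly the byte-reverse of the big-endian bytes gnark marshals.

use ark_bn254::Fq;
use ark_ff::{BigInteger, PrimeField};
use ark_serialize::CanonicalSerialize;

fn main() {
    let x = Fq::from(123456u64);

    // arkworks: canonical serialization of a base-field element is little-endian.
    let mut ark_le = Vec::new();
    x.serialize_compressed(&mut ark_le).unwrap();

    // gnark-style: the big-endian bytes of the same integer.
    let gnark_be = x.into_bigint().to_bytes_be();

    // The two encodings are simply byte-reversed relative to each other.
    let mut reversed = gnark_be.clone();
    reversed.reverse();
    assert_eq!(ark_le, reversed);
}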

@tdelabro
Author

tdelabro commented Jul 24, 2024

> but even then there is a question of how to represent field elements (in big-endian vs little-endian)

A hex string ("0xbabe123cafe") is something everyone understands (often native to the language, or at least part of the standard library), is unambiguous (endianness is not a question), is human readable (which is important for the communication/config formats like JSON or TOML that we want to target here), is obviously different from any internal representation the libs can choose, and will fit virtually any field of any size.

Its only drawback is lower performance during serialization and deserialization, which is not at all an issue here.
I think it is the way to go.
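A minimal sketch of the hex-string idea, assuming the ark-bn254, ark-ff, num-bigint, and hex crates; the fr_to_hex / hex_to_fr helpers and the big-endian, 0x-prefixed convention are only an illustration, not a proposed standard:

use ark_bn254::Fr;
use ark_ff::{BigInteger, PrimeField};
use num_bigint::BigUint;

// Field element -> "0x..." string, independent of arkworks' internal limb order.
fn fr_to_hex(x: &Fr) -> String {
    format!("0x{}", hex::encode(x.into_bigint().to_bytes_be()))
}

// "0x..." string -> field element; any library can implement the same parse.
fn hex_to_fr(s: &str) -> Option<Fr> {
    let digits = s.strip_prefix("0x").unwrap_or(s);
    // Going through a big integer tolerates odd-length or short strings.
    let n = BigUint::parse_bytes(digits.as_bytes(), 16)?;
    Some(Fr::from_be_bytes_mod_order(&n.to_bytes_be()))
}

fn main() {
    let x = Fr::from(123456u64);
    let s = fr_to_hex(&x);
    assert_eq!(Some(x), hex_to_fr(&s));
}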

@puma314

puma314 commented Aug 6, 2024

@readygo67 do you have a repo with the entirety of the Rust PLONK verifier?

@readygo67
Contributor

I have a repo which implements a gnark PLONK verifier in Rust, and I will make it public in a few weeks.
