
Using Bertini2's core

The core library for Bertini2 is written in C++. It makes use of C++14 features (and eventually C++17, so use a recent compiler), templates, polymorphism, etc. To use it, #include the files you need, and link against the compiled library.

Finding and linking to Bertini2

CMake

It is easy to link to Bertini2 (subject to the programmer's perception of easy). We recommend using CMake to generate a Makefile for you.
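
As a rough sketch of what that might look like -- the header path, library name, and project names below are assumptions, not something this page specifies, so adjust them to match what your Bertini2 installation actually provides:

cmake_minimum_required(VERSION 3.10)
project(my_b2_app CXX)

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# hypothetical names: adjust to your install of Bertini2 and its dependencies
find_path(BERTINI2_INCLUDE_DIR bertini2/bertini2.hpp)
find_library(BERTINI2_LIBRARY bertini2)

add_executable(my_b2_app main.cpp)
target_include_directories(my_b2_app PRIVATE ${BERTINI2_INCLUDE_DIR})
target_link_libraries(my_b2_app PRIVATE ${BERTINI2_LIBRARY})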

Precision

On precision through arithmetic

  • One should strive to keep all numbers at the same precision for arithmetic to be most reliable.
  • A number at a given precision will stay at that precision until it is told to change.
    This includes assigning to an existing number from one at a different precision. That is, at_50 = at_70 will not change the precision of the lhs to 70 -- it remains 50.
  • New multiprecision numbers are made at the current DefaultPrecision.
  • To change the precision of an mpfr_float or bertini::complex aka bertini::mpfr, be all like z.precision(70), or Precision(z, 70).
  • Trying to change the precision of doubles throws.

The test precision_through_arithemetic in fundamentals_test.cpp in b2_class_test verifies that these statements are true.
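
Here is a minimal sketch of those rules in code. DefaultPrecision, Precision, and the precision member are the ones named above; the header and namespace are assumptions, so adjust them to your install.

// header and namespace are assumptions; adjust to your Bertini2 install
#include <bertini2/bertini2.hpp>

using namespace bertini;

int main()
{
	DefaultPrecision(50);       // new multiprecision numbers are made at 50 digits

	mpfr_float a("0.1");        // precision 50
	DefaultPrecision(70);
	mpfr_float b("0.1");        // precision 70

	a = b;                      // a keeps its precision of 50; assignment does not bump it to 70

	a.precision(70);            // explicitly change the precision of a
	Precision(b, 90);           // free-function form of the same thing

	double d = 0.1;
	// Precision(d, 70) would throw: the precision of doubles cannot be changed
	return 0;
}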

Cautions on mixing precisions - double & multiple

Consider the following test case, which fails:

BOOST_AUTO_TEST_CASE(multiple_mpfr_by_double)
{
	DefaultPrecision(50);
	mpfr_float a("0.1");
	double factor = 0.1;

	mpfr_float result = a*factor;
	mpfr_float expected("0.01");

	BOOST_CHECK_CLOSE(expected, result, 1e-50);
}

Here's the result:

error: in "super_fundamentals/multiple_mpfr_by_double": difference{5.55112e-17} 
between expected{0.01} and result{0.01} exceeds 1e-50%
*** 1 failure is detected in the test module "Bertini 2 Class Testing"

That is, multiplication by a double pollutes the result with noise at about the level of epsilon for a double: 1e-16.

Lesson: don't mix arithmetic types.
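
If the factor is brought up to working precision before the multiplication -- for instance by constructing it from a string -- the noise goes away. A sketch (the test name and tolerance here are illustrative, not taken from the actual test suite):

BOOST_AUTO_TEST_CASE(multiple_mpfr_by_mpfr_sketch)
{
	DefaultPrecision(50);
	mpfr_float a("0.1");
	mpfr_float factor("0.1");   // construct from a string, not from a double

	mpfr_float result = a*factor;
	mpfr_float expected("0.01");

	// all operands are at 50 digits, so the product agrees with the expected
	// value to roughly working precision, far beyond double's ~1e-16
	BOOST_CHECK_CLOSE(expected, result, 1e-40);
}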

Philosophical or arbitrary choices

Constructing polynomial systems

We wanted to be able to produce polynomials (and other evaluable objects) in C++, without the need to write strings or do other deserialization. So, we want to be able to make variables, coefficients, and other objects, and do arithmetic with them. Hence, we chose to implement polynomials using a classical tree. Nodes are polymorphic, derived from a common Node type. There are operators and symbols.
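
For a flavor of what this looks like in practice -- the factory and member names below (MakeVariable, System, AddVariableGroup, AddFunction, VariableGroup) are recalled from the core headers rather than stated on this page, so treat this as a sketch and check the headers:

using namespace bertini;

auto x = MakeVariable("x");            // a std::shared_ptr to a Variable node
auto y = MakeVariable("y");

// arithmetic on the shared pointers builds the expression tree
auto f = pow(x, 2) + y - 1;

System sys;
sys.AddVariableGroup(VariableGroup{x, y});
sys.AddFunction(f);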

We have explicitly disallowed the use of double-precision numbers, as best we can, in the construction of systems. Doubles converted to high-precision numbers yield noise in the trailing digits of the result. Hence, results are less predictable than construction from strings, or the direct representation of numbers using operators or rationals.

That is, seek to use mpq_rational(29,3) rather than 29/3 (which truncates to 9 due to integer arithmetic), or 9.66666666666666666, which matches 29/3 only to about 16 digits. And consider this: if you are tracking in higher precision, that double is an incorrect representation of the number, affecting your ability to solve the system you think you are solving.

Use integers and rationals when you can, and floats if you must. But avoid the use of doubles, particularly if you will be tracking in higher precision. This is enforced by making the user work a little harder to construct systems from double-precision coefficients.
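
The point is easy to see with plain Boost.Multiprecision, independently of Bertini2. A sketch; depending on your Boost version you may need to convert via numerator and denominator rather than constructing the mpfr_float directly from the rational:

#include <iomanip>
#include <iostream>
#include <boost/multiprecision/gmp.hpp>
#include <boost/multiprecision/mpfr.hpp>

using boost::multiprecision::mpfr_float;
using boost::multiprecision::mpq_rational;

int main()
{
	mpfr_float::default_precision(50);

	mpq_rational q(29, 3);                        // exactly 29/3
	mpfr_float from_rational(q);                  // rounded once, at 50 digits

	mpfr_float from_double = 9.666666666666667;   // only ~16 correct digits survive
	mpfr_float from_int_division = 29/3;          // integer division: exactly 9

	std::cout << std::setprecision(50)
	          << from_rational << '\n'
	          << from_double << '\n'
	          << from_int_division << '\n';
	return 0;
}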

The use of shared pointers as nodes

Why did we use operator overloading on std::shared_ptr<Node>?

  1. It was implementable
  2. It provides an easy-to-write syntax for generating the internal representation of your polynomial system
  3. It prevents one from having to do lots of initialization and clearing of data. The trees are leak-free by design.
  4. It's in the standard.

If the pointer chasing that happens during evaluation of systems becomes a problem, we will port the Straight Line Program code from Bertini1 into version 2, and write a little compiler to generate the SLP from the tree-based design.

The use of templates throughout the code

One of the major problems of Bertini1 is that it is in C, which causes the following issues (non-exhaustive list):

  • Code is duplicated for different numeric types
  • Operator overloading is a no-go
  • No classes, so lots of initialization and clearing

That is, Bertini1 is great as a blackbox command line executable, but savage as a library.

Templates help solve the first problem. About half of Bertini1 is duplicated code for double and multiple precision. By factoring the differences between these types into traits, etc., we can have a single template which can be used with any numeric type. This also lets us potentially drop in intermediate hardware types, or perhaps intervals.
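
A toy illustration of the pattern (none of these names are Bertini2's actual traits; it only shows type differences pushed into a trait while the algorithm is written once):

#include <limits>
#include <boost/multiprecision/mpfr.hpp>

using boost::multiprecision::mpfr_float;

// hypothetical trait: the one place that knows how a numeric type reports its precision
template <typename T>
struct PrecisionOf
{
	static unsigned Get(T const&) { return std::numeric_limits<T>::digits10; }
};

template <>
struct PrecisionOf<mpfr_float>
{
	static unsigned Get(mpfr_float const& x) { return x.precision(); }
};

// the algorithm itself is written exactly once and instantiated per numeric type
template <typename T>
T NewtonStepSqrt2(T const& x)
{
	return x - (x*x - T(2)) / (T(2)*x);   // one Newton step for x^2 - 2 = 0
}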

Perhaps there are too many templates in the code base. If you feel that way, consider submitting PRs removing some. But Dani generally feels that the code is fairly readable, and compilation time is not unreasonable.