This repository has been archived by the owner on Aug 2, 2022. It is now read-only.

Support state history log splitting #9277

Merged
merged 6 commits into from
Jul 23, 2020

Conversation

huangminghuang
Contributor

@huangminghuang commented Jul 2, 2020

Change Description

This PR supports splitting the trace and chain state logs in the state history plugin and fixes the context free data pruning problem for split block logs.

Change Type

Select ONE

  • Documentation
  • Stability bug fix
  • Other
  • Other - special case

Consensus Changes

  • Consensus Changes

API Changes

  • API Changes

Documentation Additions

  • Documentation Additions

New option for chain plugin is added

  • blocks-retained-dir: the location of the blocks retained directory (absolute path or relative to blocks dir). If the value is empty, it is set to the value of blocks dir.

Some new options for state history plugin are added:

  • state-history-stride: split the state history log files when the block number is a multiple of the stride. When the stride is reached, the current history log and index will be renamed '*-history-<start>-<end>.log/index' and a new current history log and index will be created with the most recent blocks. All files following this format will be used to construct an extended history log.
  • max-retained-history-files: the maximum number of state history log file groups to retain so that the blocks in those files can be queried. When the number is reached, the oldest log files will be moved to the archive dir, or deleted if no archive dir is configured. The retained log files should not be manipulated by users.
  • state-history-retained-dir: the location of the state history retained directory (absolute path or relative to state-history dir). If the value is empty, it is set to the value of state-history directory.
  • state-history-archive-dir: the location of the state history archive directory (absolute path or relative to state-history dir). If the value is empty, log files beyond the retained limit will be deleted. All files in the archive directory are completely under the user's control, i.e. they won't be accessed by nodeos anymore.
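Assuming the options behave as described above, an illustrative config.ini fragment might look like the following (the stride and file count are example values, not recommendations):

```ini
# Split the trace/chain-state logs every 500000 blocks (illustrative value)
state-history-stride = 500000
# Keep at most 5 retained log file groups queryable by nodeos
max-retained-history-files = 5
# Paths are relative to the state-history dir unless absolute
state-history-retained-dir = retained
# If left empty, logs beyond the retained limit are deleted instead
state-history-archive-dir = archive
```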

@matthewdarwin

As described there seems to be no way to share state history files between different nodeos instances. Obviously each nodeos needs to keep very recent stuff on its own, but once you want something older than, say, a few days, it would be nice to store the files on a shared drive (NFS, s3fuse, etc.) and have multiple nodeos processes pull from them.

This way you get scalability on nodeos by having multiple, but without using multiple times the storage.

// Skip if not a file
if (!bfs::is_regular_file(p->status()))
continue;
// skip if it's not match blocks-*-*.log
Contributor

is this comment true for this generic method?

auto existing_itr = collection.find(log.first_block_num());
if (existing_itr != collection.end()) {
if (log.last_block_num() <= existing_itr->second.last_block_num) {
wlog("${log_path} contains the overlapping range with ${existing_path}.log, droping ${log_path} "
Contributor

dropping

if (existing_itr != collection.end()) {
if (log.last_block_num() <= existing_itr->second.last_block_num) {
wlog("${log_path} contains the overlapping range with ${existing_path}.log, droping ${log_path} "
"from catelog",
Contributor

catalog

}

bool index_matches_data(const bfs::path& index_path, const LogData& log) const {
if (bfs::exists(index_path) && bfs::file_size(index_path) / sizeof(uint64_t) != log.num_blocks()) {
Contributor

I'm confused, why is this ~=?

return false;
}

std::string filebase_for_block(uint32_t block_num) {
Contributor

I did a search for this, is it used?

Contributor Author

removed

return "";
}

bool set_active_item(uint32_t block_num, mapmode mode = mapmode::readonly) {
Contributor

What do you think about this instead being a method like:
optional<uint64_t> get_position(uint32_t block_num, mapmode mode = ...)
since the active item is always set before retrieving the block position?

}
else {
bfs::remove(old_name);
wlog("${new_name} already exists, just remove ${old_name}", ("old_name", old_name.string())("new_name", new_name.string()));
Contributor

remove -> removing

}

static void rename_bundle(bfs::path orig_path, bfs::path new_path) {
rename_if_not_exists(orig_path.replace_extension(".log"), new_path.replace_extension(".log"));
Contributor

does it matter to the code after calling this, if the rename fails or succeeds?

Contributor Author

It shouldn't matter. The design was to cover the case where the archive or retained dir is a network-shared directory (like an NFS mount) to which multiple nodeos instances can write. If the file already exists because another nodeos instance has written it, we should be able to ignore it.

rename_bundle(dir / name, new_path);

if (this->collection.size() >= max_retained_files) {
auto items_to_erase =
Contributor

make const

file.close();
file.open(path.generic_string());
EOS_ASSERT(file.size() % sizeof(uint64_t) == 0, Exception,
"The size of ${file} is not the multiple of sizeof(uint64_t)", ("file", path.generic_string()));
Contributor

"not the multiple" => "not a multiple"

if (!result && chain_state_log)
result = chain_state_log->get_block_id(block_num);

if (!result) {
Contributor

Not a must, but would prefer this being:
if (result) {
return result
}

try ...

@huangminghuang merged commit 4a55b3e into develop Jul 23, 2020
@huangminghuang deleted the huangming/state-history-split-log branch July 24, 2020 20:34
@igorls
Contributor

igorls commented Aug 26, 2020

As described there seems to be no way to share state history files between different nodeos instances. Obviously each nodeos needs to keep very recent stuff on its own, but once you want something older than, say, a few days, it would be nice to store the files on a shared drive (NFS, s3fuse, etc.) and have multiple nodeos processes pull from them.

This way you get scalability on nodeos by having multiple, but without using multiple times the storage.

Is this being considered? Maybe make the files under "state-history-archive-dir" readable by nodeos in read-only mode? It would make a lot of sense to keep reading those old files, just from a slower storage backend.

4 participants