parse_dom_table function performance #1626

Open

ThomasChan opened this issue Sep 17, 2019 · 3 comments

@ThomasChan

https://github.com/SheetJS/js-xlsx/blob/master/xlsx.js#L19061

The merges for loop becomes extremely slow when handling a large table. I tested this with a 204 cols * 250 rows table: without the optimization, ws['!merges'] ends up as a huge array, and almost every item is a merge of a single cell onto itself, which is useless.

Before the optimization, the export function in my test case took 18.6s; after, it took only 4.67s.

On my customer's machine, exporting a 193 cols * 1277 rows table took the export function 6 minutes; after the optimization, only 15s.
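To illustrate why this blows up (this is not the actual xlsx.js code, just a sketch of the pattern described above, and find_free_column is a made-up name): every cell scans the whole merge list to find its real column, so the total work grows as roughly O(rows * cols * merges.length), and single-cell merges inflate merges.length without changing the result.

  // Illustration only -- not parse_dom_table itself.
  // For each candidate position (R, C), scan every recorded merge and skip
  // past any merge that covers it; with one merge per cell this scan is
  // repeated rows * cols times over a huge array.
  function find_free_column(R, C, merges) {
    for (var i = 0; i < merges.length; ++i) {
      var m = merges[i]; // merges use the usual {s:{r,c}, e:{r,c}} shape
      if (R >= m.s.r && R <= m.e.r && C >= m.s.c && C <= m.e.c) C = m.e.c + 1;
    }
    return C;
  }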

[screenshots: the test table, profile before the change, profile after the change]

Code change: henglabs@febac23

@ThomasChan
Author

Well, my code change above was wrong. After working out the logic of the merges for loop, I made another change that decreases the time complexity from O(merges.length * n) to O(merges.length).

[screenshot: 2019-09-18 4:37 PM]

When exporting a 200 cols * 10000 rows table, the parse_dom_table function no longer runs out of memory, but jszip's utf8ToBytes function now does.

ThomasChan changed the title from "parse_dom_table function performance optimize suggest" to "parse_dom_table function performance" on Sep 18, 2019
@SheetJSDev
Contributor

@ThomasChan thanks for looking into this, and feel free to submit a PR.

There's definitely room for improvement. The weird loop is done that way to address a case like:

A1:C2 D1:E2 F1
F2

The first cell in the second row should be located at F2, but to determine that you need to look at the A1:C2 merge first then the D1:E2 merge.

The implementation was designed expecting only a small number of merges. If you have many, the approach is extremely slow.

Given the parse order, it will always be sorted by starting row then by starting column. To reduce it to a single walk through the merge array, you might be able to sort by ending row then by starting column (sorting the array with a custom sort function). Then you'd keep track of a starting index into the array (elements before that point could never affect the result, so you can skip them).
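A rough sketch of that idea (illustrative only; skip_merges is a hypothetical helper and this is not the change that was eventually merged): sort the merges by ending row, then by starting column, and keep a moving start index so merges that ended on earlier rows are never examined again.

  // Sketch of the suggested approach; merges use the usual {s:{r,c}, e:{r,c}} shape.
  merges.sort(function(a, b) { return (a.e.r - b.e.r) || (a.s.c - b.s.c); });

  var start = 0; // merges[0 .. start-1] ended above the current row

  function skip_merges(R, C) {
    // drop merges that ended on earlier rows: they can never affect row R
    while (start < merges.length && merges[start].e.r < R) ++start;
    for (var i = start; i < merges.length; ++i) {
      var m = merges[i];
      if (R >= m.s.r && R <= m.e.r && C >= m.s.c && C <= m.e.c) {
        C = m.e.c + 1;  // jump past the merge covering (R, C)
        i = start - 1;  // re-check the remaining active merges against the new column
      }
    }
    return C;
  }

The restart on a jump keeps the sketch correct for cases like the A1:C2 / D1:E2 example above; the saving comes from the start index shrinking the set of merges that is ever looked at.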

So that we have a sense for the performance, can you share a sample table that you are trying to convert?

@ThomasChan
Author

Thanks for the reply, I have submitted a PR. The sample table is nothing special, just an ordinary HTML table; you can create a BIG table like I said above (200 cols * 10000 rows) and then export it with xlsx like this:

  var title = 'Big Table';
  var writeOptions = {
    Props: {
      Title: title,
      CreatedDate: new Date().toLocaleString(),
    },
    type: 'binary', // without binary it runs out of memory
  };
  var wb = XLSX.utils.book_new();
  // t is the HTML <table> element to export
  var ws = XLSX.utils.table_to_sheet(t, {
    dense: true, // without dense it also runs out of memory
    raw: true,   // without raw the export slows down
  });
  XLSX.utils.book_append_sheet(wb, ws, title);
  XLSX.writeFile(
    wb,
    `${title}.xlsx`,
    writeOptions,
  );

reviewher added the HTML label on Mar 8, 2022