Description:
Ingesting OpenGraph edges takes very long. I tried to ingest an OpenGraph JSON file with approx. 240,000 edges, which took almost 11 hours to process.
We are now trying again with the neo4j enterprise backend, which should support running on more than one CPU core, but when checking on the neo4j server, it seems only one core is used.
Additionally, BloodHound itself does not use all available resources.
Are you intending to fix this bug?
no
Component(s) Affected:
Steps to Reproduce:
- Try to ingest a larger OpenGraph dataset (~200,000 edges)
- Observe CPU and memory utilization on the neo4j and BloodHound instances
Expected Behavior:
Memory or CPU should be fully utilized during ingest and analysis.
Actual Behavior:
neo4j: Only one core is used, even with neo4j enterprise
BloodHound: Only 3 out of 8 cores are used
Screenshots/Code Snippets/Sample Files:
Environment Information:
BloodHound: 8.6.1
Collector: -
Database (if persistence related): neo4j 4.4.48 enterprise
Docker (if using Docker): AWS ECS
Additional Information:
Potential Solution (optional):
Split ingest tasks into smaller chunks and run them in parallel.
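A minimal sketch of that idea, assuming the edge list can be partitioned into independent batches; `ingest_chunk` is a hypothetical stand-in for whatever actually writes a batch to neo4j (e.g. one transactional `UNWIND ... MERGE` per chunk), not BloodHound's real ingest code:

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(edges, size):
    """Yield successive fixed-size chunks of the edge list."""
    for i in range(0, len(edges), size):
        yield edges[i:i + size]

def ingest_chunk(chunk):
    # Hypothetical placeholder: in the real system this would be one
    # transactional batch write to neo4j.
    return len(chunk)

def parallel_ingest(edges, chunk_size=1000, workers=8):
    # Each chunk becomes an independent unit of work, so the database
    # can service several concurrent transactions on different cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(ingest_chunk, chunked(edges, chunk_size))
    return sum(results)
```

The ordering constraint to watch out for is that edges referencing the same node from different chunks can deadlock or conflict if written concurrently, so chunks may need to be partitioned or retried accordingly.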
Related Issues:
Contributor Checklist: