I have a little server (CentOS 7 running on Linode) that is normally very quiet, but while I was asleep last night it went nuts for about 6.5 hours running something. By the time I got a chance to look into it, it had gone back to normal. Here is what Linode's analytics graphs show for the last 24 hours:

The CPU is what tripped the alarm and alerted me, but see also the Network IPv4 graph: Public In shows a visible increase for the whole period, and then a dramatic amount of Public Out starting about 1 hr 45 min into the episode.
As for the I/O, it's not surprising that it was higher during the same time period, but notice the difference between the I/O before 17:00 and now: what was it doing before that it's not doing now? The pre-incident level (hovering around 15-35 blocks/s) mostly continues into the past, except for an anomaly in January when it dropped to negligible and then slowly rose back up over a few days. But now it's really quiet, even though the basic services I use all seem to be working.
If I had run top while it was happening, I might have figured out the source, but what can I do after the fact? If there are logs I can look at, please make suggestions. I'm concerned that my server has been hacked and is being used for something, but I'm a fairly novice server admin - I'm more of a developer. If I can figure out what used all that CPU and network, perhaps I can also figure out whether there was a breach and how to fix it before the same person decides to use it again. I do have a couple of web applications with sloppy old code, but I think the worst that could happen through those would be messing with a database. I thought I had the server itself locked down pretty tight - e.g. only private-key SSH login is allowed.
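For what it's worth, these are the after-the-fact checks I was planning to try based on some searching. I'm not sure any of them reach back far enough, and the sar line only works if sysstat was already installed and collecting (I haven't verified that it was), so treat this as a sketch of what I'm hoping exists rather than things I've confirmed:

    # Any successful or failed SSH logins around the time of the spike?
    # (assumes the default CentOS 7 location for sshd logs)
    sudo grep -E 'Accepted|Failed' /var/log/secure*

    # Login history with full timestamps
    last -F | head -50

    # Historical CPU stats - only if sysstat was installed and running
    # (the filename is the day of the month)
    sar -u -f /var/log/sa/sa$(date +%d -d yesterday)

    # Whatever the journal captured during the window
    sudo journalctl --since "yesterday"
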
Here are the last 30 days rather than just 24 hours. There is a weekly spike in network usage - perhaps it's just related to backups (I'm subscribed to Linode's backup service, although the backup dates don't line up with the spikes), but it could be something more nefarious... My plan for ruling that out is sketched below the graph.

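To rule out something scheduled on the box itself, I was going to look at the usual cron locations (I haven't knowingly set up any weekly jobs, so this is just me checking the defaults):

    # Anything scheduled that could explain a weekly network spike?
    sudo ls -l /etc/cron.weekly/ /etc/cron.d/
    sudo cat /etc/crontab

    # Per-user crontabs (root plus my own account)
    sudo crontab -l
    crontab -l
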
Finally, here is the January I/O graph with the anomaly I mentioned earlier, if it's relevant (the rest of the January graphs looked pretty normal):
