Large tables generate heavy database I/O workloads when they are not accessed through an index. Large indexes are also inefficient: creating indexes with too many columns is a very common mistake, and dropping such complex indexes in favor of two or three smaller ones can improve query times in many cases.
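To spot candidate indexes, a query along these lines can list indexes with many columns using Oracle's standard USER_IND_COLUMNS data dictionary view. The three-column threshold is just an arbitrary starting point, not a hard rule.

```sql
-- List indexes with more than three columns: possible candidates
-- for splitting into smaller indexes. Threshold is illustrative.
SELECT index_name, table_name, COUNT(*) AS num_columns
FROM user_ind_columns
GROUP BY index_name, table_name
HAVING COUNT(*) > 3
ORDER BY num_columns DESC;
```

Review each candidate against the actual query workload before dropping anything: a wide index may still be the right choice for a frequently executed covering query.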
Here is a useful SQL query (for Oracle) to list the largest database tables and indexes together with their size in MBytes.
SELECT segment_name, segment_type, tablespace_name,
       SUM(bytes)/1048576 megs
FROM user_extents
GROUP BY segment_name, segment_type, tablespace_name
ORDER BY megs DESC;
Sometimes tables are too big, and growing too fast, for data access to be optimized. In such cases a good solution is to archive old data from these large tables in order to maintain acceptable system performance.
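As an illustration only, a basic archive-and-purge step might look like the sketch below. The archive table name (workorder_archive) and the selection criteria are hypothetical, and any real archiving in Maximo must respect referential integrity across related tables, so do not run something like this against a production system as-is.

```sql
-- Hypothetical sketch: move closed work orders older than two years
-- into an archive table, then remove them from the live table.
-- Names and criteria are illustrative, not actual Maximo schema advice.
INSERT INTO workorder_archive
SELECT * FROM workorder
WHERE status = 'CLOSE'
  AND statusdate < ADD_MONTHS(SYSDATE, -24);

DELETE FROM workorder
WHERE status = 'CLOSE'
  AND statusdate < ADD_MONTHS(SYSDATE, -24);

COMMIT;
```

A dedicated archiving product handles the child records, audit requirements, and restore path that a hand-written script like this one ignores.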
IBM has a dedicated solution called Maximo Archiving with Optim Data Growth Solution that may be worth evaluating.