Yes, this is understood; however, it's still much slower on v4 compared to 3.4, even after filtering the search.
Maybe playlist search has had a negative effect on the overall coding for search? Or some other features are putting more strain on the hardware.
Who knows. My library is average size so I’ve not noticed any difference.
Does the search get quicker after one or two queries/searches?
That's my workaround for another long-standing slow search on my linked 5/6000m
30k tracks on my SSD
Should this technical limitation not apply to desktop Engine DJ as well?
I mean the db query you mentioned.
Search on laptop is near instantaneous.
It's a fair point… Could that point to the hardware being strained, then?
Perhaps some monitoring of performance on the laptop whilst searching might reveal an activity spike that the laptop is able to soak up but the hardware isn’t?
Slow SQL queries can be caused by a variety of factors, often related to how the database is structured, how the queries are written, or how the server is configured. Here are some common causes:
1. Lack of Indexes
- Explanation: Indexes help databases quickly locate the data without scanning every row in a table. If your query involves columns that aren’t indexed, the database may perform a full table scan, which is slow, especially on large tables.
- Solution: Ensure that columns used in WHERE, JOIN, GROUP BY, and ORDER BY clauses are indexed.
2. Poorly Designed Queries
- Explanation: Queries that are not optimized can cause slowdowns. For example, using SELECT * retrieves all columns when only a few are needed, or using inefficient joins.
- Solution: Write efficient SQL queries by selecting only the necessary columns and optimizing joins. Consider rewriting complex queries to be more efficient.
3. Large Data Volume
- Explanation: Querying or joining very large tables can be inherently slow if not managed properly.
- Solution: Use pagination, limit the number of rows returned, or break down large queries into smaller, more manageable ones.
4. Suboptimal Joins
- Explanation: Joining multiple tables without proper indexing or using joins that cause a large number of combinations (e.g., Cartesian products) can significantly slow down queries.
- Solution: Ensure that join conditions are using indexed columns and avoid unnecessary joins.
5. Locking and Blocking
- Explanation: When a query locks a table or rows, other queries might have to wait until the lock is released, causing delays.
- Solution: Optimize transactions to be as short as possible, and consider using appropriate isolation levels to minimize locking.
6. Inefficient Use of SQL Functions
- Explanation: Functions like LIKE, especially with leading wildcards, or complex operations like subqueries in WHERE clauses can be slow.
- Solution: Use full-text search indexes for LIKE queries or rewrite complex conditions to be more efficient.
7. Network Latency
- Explanation: If the database server is on a remote network, network latency can add significant time to query execution.
- Solution: Optimize the network setup, reduce the amount of data transferred, or consider moving the database closer to the application.
8. Insufficient Server Resources
- Explanation: Lack of CPU, memory, or disk I/O capacity can cause slow query performance, especially under high load.
- Solution: Upgrade the server hardware, optimize server settings, or distribute the load across multiple servers.
9. Inefficient Schema Design
- Explanation: Poorly normalized tables, or an excessive number of joins due to over-normalization, can slow down queries.
- Solution: Review and optimize the database schema for the types of queries being executed.
10. Outdated Statistics
- Explanation: Database systems rely on statistics to generate query execution plans. Outdated statistics can lead to inefficient plans.
- Solution: Regularly update statistics to ensure the query optimizer has accurate information.
11. Fragmentation
- Explanation: Over time, as data is inserted, updated, and deleted, tables and indexes can become fragmented, leading to slower queries.
- Solution: Regularly perform maintenance tasks like defragmentation or rebuilding indexes.
12. Improper Use of Caching
- Explanation: If the database or application is not caching results efficiently, it may be re-running expensive queries unnecessarily.
- Solution: Implement caching strategies for frequently accessed data.
13. Parameter Sniffing
- Explanation: Sometimes, the query optimizer generates a plan based on the parameters passed during the first execution, which may not be optimal for subsequent executions with different parameters.
- Solution: Use query hints or recompile options to avoid suboptimal plans due to parameter sniffing.
14. Suboptimal Execution Plan
- Explanation: The query optimizer may choose a suboptimal execution plan due to complex query structure or outdated statistics.
- Solution: Analyze the execution plan using tools like EXPLAIN and adjust the query or indexes as necessary.
Addressing these issues involves a combination of query optimization, database tuning, and proper resource management. Regular monitoring and maintenance are also crucial in ensuring consistent performance.
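A few of the points above (indexes, leading-wildcard LIKE, EXPLAIN) are easy to see first-hand, since Engine's m.db is just SQLite. Here's a minimal Python sketch against a toy in-memory database; the table and index names mimic the real schema but are illustrative stand-ins, not a real m.db:

```python
import sqlite3

# Toy SQLite database loosely modelled on the Track table discussed in this thread.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA case_sensitive_like = ON")  # lets a plain index serve LIKE 'x%'
con.execute("CREATE TABLE Track (id INTEGER PRIMARY KEY, title TEXT, artist TEXT)")
con.execute("CREATE INDEX index_Track_title ON Track(title)")
con.executemany(
    "INSERT INTO Track (title, artist) VALUES (?, ?)",
    [("Track %04d" % i, "Artist %d" % (i % 50)) for i in range(1000)],
)

def plan(sql):
    """Return SQLite's query plan for `sql` as one string."""
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " | ".join(str(r[-1]) for r in rows)

# A prefix match on an indexed column can be turned into an index range scan...
print(plan("SELECT id FROM Track WHERE title LIKE 'Track 00%'"))
# ...but a leading wildcard defeats the index and forces a full table scan.
print(plan("SELECT id FROM Track WHERE title LIKE '%0042%'"))
```

Running the same EXPLAIN QUERY PLAN against a (read-only copy of a) real m.db for the searches that feel slow would show whether the hardware is scanning or using indexes.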
ChatGPT
I'm glad to know ChatGPT agrees with me
Now, I’m off to meet my old mate C3PO for a drink.
RAM?
CPU?
Poorly written DB?
As most use SSDs, it shouldn't be a bottleneck (I hope)
Agreed it shouldn’t, but I guess something somewhere has to be running the query?
Macros in Excel, for example, can kill your PC.
I can never tell with SQL Developer, as we're not allowed to install it on our machines; we run it through Citrix, so all the strain will be wherever it's hosted from, I think.
I know if people do large data exports it can bring our whole system down, mind: just freezing, lagging on telnet sessions, etc.
Not my area of expertise though. I’m capable of wrecking stuff, not knowing why lol.
Oooooooooh hang on, just rewatched the video.
I think we've spoken about this before @mufasa … there is a sort enabled with the order set to descending.
OP, can you remove your sorting preference and try again, so it's just set to whatever the default is, ascending? Then see if the searches are faster.
The sort workaround was to speed up the playlist presentation
If you leave everything as play order
But who knows, perhaps it might speed up search as well.
It's almost 10 years since Engine launched; the db should be sorted by now. Must be a difficult task.
Perhaps another rewrite is needed.
Anyone know what the ‘other side’ is like when searching huge libraries?
The last gear of theirs I used was a 2000 Nexus, and as far as I'm aware they didn't want you using USB sticks over 32GB?
Be interesting to see if it’s a global DJ hardware issue… comparing to laptop software is a bit pointless imo, apples and oranges.
The search in Engine DJ (computer) is almost instant. I will try on the P4+ with 2 or more search queries and see if it speeds up. However, I'm on v3.4 and the problem is with v4.
The sort field is different by default for each category, so in my case, for playlists, it was BPM by default; when I selected play order and searched for tracks outside that playlist folder, the search became slower.
If I leave it how it boots up, it works like it should.
Video Below:
Has anyone taken a look at precisely what is indexed in m.db? You could easily add a few indexes (if necessary) after syncing your USB stick, without breaking compatibility…
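For anyone who wants to check, the explicit indexes in any SQLite file (m.db included) can be listed straight from sqlite_master. A quick sketch; the m.db path in the comment is an assumption and varies by Engine version, so the demo below runs against a small in-memory stand-in:

```python
import sqlite3

def list_indexes(con):
    """Return (index name, table, CREATE statement) for every
    explicitly created index in the attached SQLite database."""
    return con.execute(
        "SELECT name, tbl_name, sql FROM sqlite_master "
        "WHERE type = 'index' AND sql IS NOT NULL "
        "ORDER BY tbl_name, name").fetchall()

# To inspect a real database, point this at your drive's m.db, e.g.:
#   con = sqlite3.connect("<your drive>/Engine Library/Database2/m.db")
# (hypothetical path -- the exact location depends on your Engine version).
# Demo on a small in-memory stand-in:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Track (id INTEGER PRIMARY KEY, title TEXT, artist TEXT)")
con.execute("CREATE INDEX index_Track_title ON Track(title)")
con.execute("CREATE INDEX index_Track_artist ON Track(artist)")
for name, tbl, sql in list_indexes(con):
    print(f"{tbl}: {name}")
```

Adding extra indexes after a sync would indeed be possible without breaking compatibility, since SQLite readers ignore indexes they didn't create, though a library re-sync might not preserve them.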
knows more than I do re: SQL
Thank you for the tag @mufasa! Hope you are well bro! =)
The Track table contains multiple indices. From what I can tell, all the key areas of search are indexed.
The schema is extremely well thought out tbh. Here’s the Track Table schema.
CREATE TABLE Track (
id INTEGER PRIMARY KEY AUTOINCREMENT,
playOrder INTEGER,
length INTEGER,
bpm INTEGER,
year INTEGER,
path TEXT,
filename TEXT,
bitrate INTEGER,
bpmAnalyzed REAL,
albumArtId INTEGER,
fileBytes INTEGER,
title TEXT,
artist TEXT,
album TEXT,
genre TEXT,
comment TEXT,
label TEXT,
composer TEXT,
remixer TEXT,
key INTEGER,
rating INTEGER,
albumArt TEXT,
timeLastPlayed DATETIME,
isPlayed BOOLEAN,
fileType TEXT,
isAnalyzed BOOLEAN,
dateCreated DATETIME,
dateAdded DATETIME,
isAvailable BOOLEAN,
isMetadataOfPackedTrackChanged BOOLEAN,
isPerfomanceDataOfPackedTrackChanged BOOLEAN,
playedIndicator INTEGER,
isMetadataImported BOOLEAN,
pdbImportKey INTEGER,
streamingSource TEXT,
uri TEXT,
isBeatGridLocked BOOLEAN,
originDatabaseUuid TEXT,
originTrackId INTEGER,
trackData BLOB,
overviewWaveFormData BLOB,
beatData BLOB,
quickCues BLOB,
loops BLOB,
thirdPartySourceId INTEGER,
streamingFlags INTEGER,
explicitLyrics BOOLEAN,
activeOnLoadLoops INTEGER,
lastEditTime DATETIME,
CONSTRAINT C_originDatabaseUuid_originTrackId UNIQUE (originDatabaseUuid, originTrackId),
CONSTRAINT C_path UNIQUE (path),
FOREIGN KEY (albumArtId) REFERENCES AlbumArt (id) ON DELETE RESTRICT
);
CREATE INDEX index_Track_filename ON Track (filename);
CREATE INDEX index_Track_albumArtId ON Track (albumArtId);
CREATE INDEX index_Track_uri ON Track (uri);
CREATE INDEX index_Track_title ON Track(title);
CREATE INDEX index_Track_length ON Track(length);
CREATE INDEX index_Track_rating ON Track(rating);
CREATE INDEX index_Track_year ON Track(year);
CREATE INDEX index_Track_dateAdded ON Track(dateAdded);
CREATE INDEX index_Track_genre ON Track(genre);
CREATE INDEX index_Track_artist ON Track(artist);
CREATE INDEX index_Track_album ON Track(album);
CREATE INDEX index_Track_key ON Track(key);
CREATE INDEX index_Track_bpmAnalyzed ON Track(CAST(bpmAnalyzed + 0.5 AS int));
CREATE TRIGGER trigger_after_insert_Track_check_id
AFTER INSERT ON Track
WHEN NEW.id <= (SELECT seq FROM sqlite_sequence WHERE name = 'Track')
BEGIN
SELECT RAISE(ABORT, 'Recycling deleted track id''s are not allowed');
END;
CREATE TRIGGER trigger_after_update_Track_check_Id
BEFORE UPDATE ON Track
WHEN NEW.id <> OLD.id
BEGIN
SELECT RAISE(ABORT, 'Changing track id''s are not allowed');
END;
CREATE TRIGGER trigger_after_insert_Track_fix_origin
AFTER INSERT ON Track
WHEN IFNULL(NEW.originTrackId, 0) = 0
OR IFNULL(NEW.originDatabaseUuid, '') = ''
BEGIN
UPDATE Track SET
originTrackId = NEW.id,
originDatabaseUuid = (SELECT uuid FROM Information)
WHERE track.id = NEW.id;
END;
CREATE TRIGGER trigger_after_update_Track_fix_origin
AFTER UPDATE ON Track
WHEN IFNULL(NEW.originTrackId, 0) = 0
OR IFNULL(NEW.originDatabaseUuid, '') = ''
BEGIN
UPDATE Track SET
originTrackId = NEW.id,
originDatabaseUuid = (SELECT uuid FROM Information)
WHERE track.id = NEW.id;
END;
CREATE TRIGGER trigger_after_update_Track_timestamp
AFTER UPDATE OF length, bpm, year, filename, bitrate, bpmAnalyzed, albumArtId,
title, artist, album, genre, comment, label, composer, remixer, key, rating, albumArt,
fileType, isAnalyzed, isBeatgridLocked, trackData, overviewWaveformData, beatData, quickCues,
loops, explicitLyrics, activeOnLoadLoops
ON Track
FOR EACH ROW
BEGIN
UPDATE Track SET lastEditTime = strftime('%s') WHERE ROWID=NEW.ROWID;
END;
This is pretty critical when the databases get larger and more writes & deletes happen. When doing a "Cleanup", it seems that VACUUM is executed on the database files of all attached storage devices, thereby optimizing them. This helps queries quite a bit and also reduces file sizes.
The schema doesn't use auto_vacuum=FULL, which is why a Cleanup from the UI is required for optimal performance.
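The effect of VACUUM is easy to reproduce: deleted rows leave free pages behind in the file until the database is rebuilt. A small sketch (file name, row count, and blob sizes are illustrative):

```python
import os
import sqlite3
import tempfile

# Demo: delete half the rows, then rebuild the file with VACUUM,
# which is roughly what a "Cleanup" appears to do for m.db.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE Track (id INTEGER PRIMARY KEY, data BLOB)")
con.executemany("INSERT INTO Track (data) VALUES (?)",
                [(b"x" * 4096,) for _ in range(500)])
con.execute("DELETE FROM Track WHERE id % 2 = 0")  # drop half the rows
con.commit()

before = os.path.getsize(path)  # freed pages still count toward file size
con.execute("VACUUM")           # must run outside an open transaction
after = os.path.getsize(path)   # file is rebuilt compactly and shrinks
print(f"{before} bytes -> {after} bytes")
```

The shrink also means fewer pages to read on the next search, which is where the query-speed benefit comes from.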
Also, I forgot to mention: after using the P4+ on v3.4 for long periods of time, the search hangs again, taking a long time to find items.
A restart fixes this problem…
Video Below: