GROUP BY UNIX_TIMESTAMP(timestamp) DIV 30. Or say, for some reason, you wanted to group them into 20-second intervals; that would be DIV 20, and so on. To shift the boundaries between the GROUP BY buckets you can use GROUP BY (UNIX_TIMESTAMP(timestamp) + r) DIV 30. Oracle: Group by Month. I have the following problem: I am working with an Oracle database. In one of my tables there is a column in timestamp format, like 06.01.14 08:54:35, and I must select some data by grouping this column by month only.
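A minimal sketch of both techniques, assuming a table t with a timestamp column ts (the table and column names are my own, not from the question):

```sql
-- MySQL/MariaDB: bucket rows into 30-second intervals
SELECT UNIX_TIMESTAMP(ts) DIV 30 AS bucket, COUNT(*) AS cnt
FROM t
GROUP BY UNIX_TIMESTAMP(ts) DIV 30;

-- Oracle: group a timestamp column by calendar month
SELECT TRUNC(ts, 'MM') AS month_start, COUNT(*) AS cnt
FROM t
GROUP BY TRUNC(ts, 'MM');
```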
I have two tables like this. The 'purchase' table has 21886 rows.
My question is this:
As detailed above, the two queries return exactly the same result, but the DISTINCT one is too slow (EXPLAIN reports too many rows). What's the difference?
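The queries themselves are not included above. A hypothetical reconstruction, consistent with the answer below (which mentions tables A and B, indexes idxregdate and idxorderid, and LIMIT 100, 30), might look like this; all column names here are my assumptions:

```sql
-- DISTINCT variant
SELECT DISTINCT a.order_id, a.regdate
FROM A AS a
JOIN B AS b ON b.order_id = a.order_id
ORDER BY a.regdate DESC
LIMIT 100, 30;

-- GROUP BY variant
SELECT a.order_id, a.regdate
FROM A AS a
JOIN B AS b ON b.order_id = a.order_id
GROUP BY a.order_id, a.regdate
ORDER BY a.regdate DESC
LIMIT 100, 30;
```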
chris
2 Solutions
I think your SELECT DISTINCT is slow because you broke the use of the index by matching on another table. In most cases, SELECT DISTINCT will be quicker. But in this case, since you are matching on keys of another table, the index cannot be used and the query is significantly slower.
user5509289
It is usually suggested to use DISTINCT rather than GROUP BY, since that is what you really want, and to let the optimizer choose the 'best' execution plan. However, no optimizer is perfect. With DISTINCT the optimizer can have more options for an execution plan. But that also means it has more options to pick a bad plan.

You write that the DISTINCT query is 'slow', but you don't give any numbers. In my test (with 10 times as many rows, on MariaDB 10.0.19 and 10.3.13) the DISTINCT query is (only) 25% slower (562 ms / 453 ms). The EXPLAIN output is no help at all; it's even 'lying'. With LIMIT 100, 30 the server would need to read at least 130 rows (that's what my EXPLAIN actually shows for GROUP BY), but it shows you 65. I can't explain the 25% difference in execution time, but it seems that the engine is doing a full table/index scan in any case, and sorts the result before it can skip 100 rows and pick 30.
The best plan would probably be:

- Read rows from the idxregdate index (table A) one by one, in descending order
- Look for a matching row in the idxorderid index (table B)
- Skip 100 matching rows
- Send 30 matching rows
- Exit
If something like 10% of the rows in A have no match in B, this plan would read something like 143 rows from A. The best I could do to somehow force this plan is:
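The forced query itself is missing above. One way to push MariaDB toward that plan, sketched under the assumed schema (index and column names are guesses, not taken from the original fiddle), is an index hint plus STRAIGHT_JOIN to keep A as the driving table:

```sql
SELECT a.order_id, a.regdate
FROM A AS a FORCE INDEX (idxregdate)            -- walk the regdate index descending
STRAIGHT_JOIN B AS b ON b.order_id = a.order_id -- A must stay the driving table
ORDER BY a.regdate DESC
LIMIT 100, 30;
```

Note that if B can contain several matches per order_id, a DISTINCT or GROUP BY would still be needed on top of this.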
This query returns the exact same result in 156 ms (3 times faster than GROUP BY). But that is still too slow, and it's probably still reading all rows in table A.

We can prove that a much better plan can exist with a 'little' subquery trick:
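That query is also not shown above. A 'little' subquery trick of this kind usually means pre-limiting the driving table before the join, something like the following sketch. The inner LIMIT of 1000 is an arbitrary guess on my part, and it is also why the trick is not 100% reliable: if too many of those newest rows have no match in B, the outer LIMIT cannot be satisfied.

```sql
SELECT t.order_id, t.regdate
FROM (
    -- consider only the newest rows of A; cheap, because
    -- idxregdate already delivers them in sorted order
    SELECT order_id, regdate
    FROM A
    ORDER BY regdate DESC
    LIMIT 1000
) AS t
JOIN B AS b ON b.order_id = t.order_id
ORDER BY t.regdate DESC
LIMIT 100, 30;
```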
This query executes in 'no time' (~0 ms) and returns the same result on my test data. And though it's not 100% reliable, it shows that the optimizer is not doing a great job.
So what are my conclusions:

- The optimizer does not always do the best job and sometimes needs help
- Even when we know 'the best plan', we cannot always force it
- DISTINCT is not always faster than GROUP BY
- When no index can be used for all clauses, things get quite tricky
Test schema and dummy data:
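The schema itself is not reproduced above. A hypothetical MariaDB schema consistent with the answer (every name except idxregdate and idxorderid is my assumption) could be:

```sql
CREATE TABLE A (
  order_id INT PRIMARY KEY,
  regdate  DATETIME NOT NULL,
  INDEX idxregdate (regdate)
);

CREATE TABLE B (
  id       INT AUTO_INCREMENT PRIMARY KEY,
  order_id INT NOT NULL,
  INDEX idxorderid (order_id)
);
```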
Queries:
John Spiegel