Comparing SQL and Slick

来源:https://scala-slick.org/doc/3.2.3/sql-to-slick.html

This section gives an overview of the most important types of SQL queries and their corresponding type-safe Slick queries.

SELECT *

SQL

sql"select * from PERSON".as[Person]

Slick

The Slick equivalent of SELECT * is the result of the plain TableQuery:

people.result

SELECT

SQL

sql"""
  select AGE, concat(concat(concat(NAME,' ('),ID),')')
  from PERSON
""".as[(Int,String)]

Slick

Scala’s equivalent for SELECT is map. Columns can be referenced similarly, and functions operating on columns can be accessed using their Scala equivalents (note that String concatenation uses ++, not +).

people.map(p => (p.age, p.name ++ " (" ++ p.id.asColumnOf[String] ++ ")")).result


WHERE

SQL

sql"select * from PERSON where AGE >= 18 AND NAME = 'C. Vogt'".as[Person]

Slick

Scala’s equivalent for WHERE is filter. Make sure to use === instead of == for comparison.

people.filter(p => p.age >= 18 && p.name === "C. Vogt").result

ORDER BY

SQL

sql"select * from PERSON order by AGE asc, NAME".as[Person]

Slick

Scala’s equivalent for ORDER BY is sortBy. Provide a tuple to sort by multiple columns. Slick’s .asc and .desc methods affect the ordering. Be aware that a single ORDER BY with multiple columns is not equivalent to multiple .sortBy calls but to a single .sortBy call passing a tuple.

people.sortBy(p => (p.age.asc, p.name)).result
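The multi-column ordering can be pictured with plain Scala collections (the sample data below is made up for illustration and not part of the original): sorting by a tuple key orders by the first component and breaks ties with the second, exactly like a single multi-column ORDER BY.

```scala
// Plain-Scala collection analogy; sample data is hypothetical.
case class P(age: Int, name: String)

val ps = List(P(30, "b"), P(20, "z"), P(30, "a"))

// One sortBy with a tuple key: primary key age, secondary key name.
val sorted = ps.sortBy(p => (p.age, p.name))
// sorted == List(P(20, "z"), P(30, "a"), P(30, "b"))
```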

Aggregations (max, etc.)

SQL

sql"select max(AGE) from PERSON".as[Option[Int]].head

Slick

Aggregations are collection methods in Scala. In SQL they are called on a column, but in Slick they are called on a collection-like value, e.g. a complete query, something people coming from SQL easily trip over. They return a scalar value, which can be run as a query of its own. Aggregation methods such as max that can return NULL return Options in Slick.

people.map(_.age).max.result
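The collection-method view can be illustrated with plain Scala (hypothetical sample data, not from the original): the standard library's maxOption behaves like Slick's .max in that the result is optional, covering the empty case the way SQL uses NULL.

```scala
// Plain-Scala analogy; sample data is hypothetical.
val ages = List(20, 30, 40)

// Aggregation is a method on the collection, not on a column.
val oldest = ages.maxOption            // Some(40)

// An empty input yields None, mirroring SQL's NULL result.
val noAges = List.empty[Int].maxOption // None
```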

GROUP BY

People coming from SQL often seem to have trouble understanding Scala’s and Slick’s groupBy, because of the different signatures involved. SQL’s GROUP BY can be seen as an operation that turns all columns that weren’t part of the grouping key into collections of all the elements in a group. SQL requires the use of its aggregation operations like avg to compute single values out of these collections.

SQL

sql"""
  select ADDRESS_ID, AVG(AGE)
  from PERSON
  group by ADDRESS_ID
""".as[(Int,Option[Int])]

Slick

Scala’s groupBy returns a Map of grouping keys to Lists of the rows for each group. There is no automatic conversion of individual columns into collections. This has to be done explicitly in Scala, by mapping from the group to the desired column, which then allows SQL-like aggregation.

people.groupBy(p => p.addressId)
       .map{ case (addressId, group) => (addressId, group.map(_.age).avg) }
       .result

SQL requires aggregation of grouped values. We require the same in Slick for now. This means a groupBy call must be followed by a map call, or it will fail with an Exception. This makes Slick’s grouping syntax a bit more complicated than SQL’s.
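The Map-of-collections behaviour can be seen directly in plain Scala collections (the (addressId, age) pairs below are hypothetical sample data): groupBy yields groups as collections, and a single value per group has to be computed explicitly.

```scala
// Plain-Scala collection analogy; (addressId, age) pairs are hypothetical.
val rows = List((1, 20), (1, 40), (2, 30))

// groupBy yields Map(addressId -> List of the rows in that group).
val grouped = rows.groupBy { case (addressId, _) => addressId }

// Aggregation must be done explicitly, like .avg in the Slick query.
val avgByAddress = grouped.map { case (addressId, group) =>
  val ages = group.map(_._2)
  (addressId, ages.sum / ages.size)
}
// avgByAddress == Map(1 -> 30, 2 -> 30)
```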

HAVING

SQL

sql"""
  select ADDRESS_ID
  from PERSON
  group by ADDRESS_ID
  having avg(AGE) > 50
""".as[Int]

Slick

Slick does not have separate methods for WHERE and HAVING. To achieve semantics equivalent to HAVING, apply filter after groupBy and the subsequent map.

people.groupBy(p => p.addressId)
       .map{ case (addressId, group) => (addressId, group.map(_.age).avg) }
       .filter{ case (addressId, avgAge) => avgAge > 50 }
       .map(_._1)
       .result

Implicit inner joins

SQL

sql"""
  select P.NAME, A.CITY
  from PERSON P, ADDRESS A
  where P.ADDRESS_ID = A.ID
""".as[(String,String)]

Slick

Slick generates SQL using implicit joins for flatMap and map or the corresponding for-expression syntax.

people.flatMap(p =>
  addresses.filter(a => p.addressId === a.id)
           .map(a => (p.name, a.city))
).result

// or equivalent for-expression:
(for(p <- people;
     a <- addresses if p.addressId === a.id
 ) yield (p.name, a.city)
).result
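The same query shape works on plain Scala collections (the sample data below is hypothetical, not from the original), which is why the for-expression reads so naturally:

```scala
// Plain-Scala collection analogy; sample data is hypothetical.
val people    = List((1, "C. Vogt", 10), (2, "S. Zeiger", 20)) // (id, name, addressId)
val addresses = List((10, "Lausanne"), (20, "Duesseldorf"))    // (id, city)

// The same flatMap/filter/map shape as the implicit-join query.
val joined = for {
  (_, name, addressId) <- people
  (id, city)           <- addresses if addressId == id
} yield (name, city)
// joined == List(("C. Vogt", "Lausanne"), ("S. Zeiger", "Duesseldorf"))
```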

Explicit inner joins

SQL

sql"""
  select P.NAME, A.CITY
  from PERSON P
  join ADDRESS A on P.ADDRESS_ID = A.ID
""".as[(String,String)]

Slick

Slick offers a small DSL for explicit joins.

(people join addresses on (_.addressId === _.id))
  .map{ case (p, a) => (p.name, a.city) }.result

Outer joins (left/right/full)

SQL

sql"""
  select P.NAME,A.CITY
  from ADDRESS A
  left join PERSON P on P.ADDRESS_ID = A.ID
""".as[(Option[String],String)]

Slick

Outer joins are done using Slick’s explicit join DSL. Be aware that in the case of an outer join, SQL changes the type of the outer-joined, non-nullable columns into nullable columns. To represent this cleanly even in the presence of mapped types, Slick lifts the whole outer side of the join into an Option. This goes a bit further than the SQL semantics, because it allows you to distinguish a row that was not matched in the join from a row that was matched but already contained nothing but NULL values.

(addresses joinLeft people on (_.id === _.addressId))
  .map{ case (a, p) => (p.map(_.name), a.city) }.result

Subquery

SQL

sql"""
  select *
  from PERSON P
  where P.ID in (select ID
                 from ADDRESS
                 where CITY = 'New York City')
""".as[Person]

Slick

Slick queries are composable. Subqueries can simply be composed, wherever the types work out, just like any other Scala code.

val address_ids = addresses.filter(_.city === "New York City").map(_.id)
people.filter(_.id in address_ids).result // <- run as one query

The .in method expects a subquery. For an in-memory Scala collection, the .inSet method can be used instead.
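The composability can be pictured with plain Scala collections (hypothetical sample data, not from the original): the "subquery" is just another value that the outer expression refers to.

```scala
// Plain-Scala collection analogy; sample data is hypothetical.
val addresses = List((10, "New York City"), (20, "Lausanne")) // (id, city)
val people    = List((10, "A. Smith"), (20, "B. Jones"))      // (id, name)

// The "subquery" is an ordinary value, composed into the outer query.
val addressIds = addresses.filter(_._2 == "New York City").map(_._1)
val nyPeople   = people.filter(p => addressIds.contains(p._1))
// nyPeople == List((10, "A. Smith"))
```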

Scalar value subquery / custom function

SQL

sql"""
  select * from PERSON P,
                     (select rand() * MAX(ID) as ID from PERSON) RAND_ID
  where P.ID >= RAND_ID.ID
  order by P.ID asc
  limit 1
""".as[Person].head

Slick

This code shows a subquery computing a single value in combination with a user-defined database function.

val rand = SimpleFunction.nullary[Double]("RAND")

val rndId = (people.map(_.id).max.asColumnOf[Double] * rand).asColumnOf[Int]

people.filter(_.id >= rndId)
       .sortBy(_.id)
       .result.head

INSERT

SQL

sqlu"""
  insert into PERSON (NAME, AGE, ADDRESS_ID) values ('M Odersky', 12345, 1)
"""

Slick

Inserts can be a bit surprising at first when coming from SQL, because unlike SQL, Slick re-uses the syntax used for querying to select which columns should be inserted into. So basically, you first write a query and, instead of creating an Action that gets the query’s result, you call += with the value to be inserted, which gives you an Action that performs the insert. ++= allows inserting a Seq of rows at once. Auto-incremented columns are automatically ignored, so inserting into them has no effect. Use forceInsert to actually insert into auto-incremented columns.

people.map(p => (p.name, p.age, p.addressId)) += ("M Odersky",12345,1)

UPDATE

SQL

sqlu"""
  update PERSON set NAME='M. Odersky', AGE=54321 where NAME='M Odersky'
"""

Slick

Just like inserts, updates are based on queries that select and filter what should be updated. Instead of running the query and fetching the data, .update is used to replace it.

people.filter(_.name === "M Odersky")
       .map(p => (p.name,p.age))
       .update(("M. Odersky",54321))

DELETE

SQL

sqlu"""
  delete from PERSON where NAME='M. Odersky'
"""

Slick

Just like inserts, deletes are based on queries that filter what should be deleted. Instead of fetching the query’s result, .delete is used to obtain an Action that deletes the selected rows.

people.filter(p => p.name === "M. Odersky")
       .delete

CASE

SQL

sql"""
  select
    case
      when ADDRESS_ID = 1 then 'A'
      when ADDRESS_ID = 2 then 'B'
    end
  from PERSON P
""".as[Option[String]]

Slick

Slick provides a small DSL for CASE-like case distinctions.

people.map(p =>
  Case
    If(p.addressId === 1) Then "A"
    If(p.addressId === 2) Then "B"
).result
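The Case DSL corresponds to a plain-Scala pattern match that yields an Option (this analogy is illustrative, not from the original), with None playing the role of SQL's NULL when no branch matches.

```scala
// Plain-Scala analogy: a partial case distinction yields an Option,
// None standing in for SQL's NULL when no branch matches.
def label(addressId: Int): Option[String] = addressId match {
  case 1 => Some("A")
  case 2 => Some("B")
  case _ => None
}
// label(1) == Some("A"); label(3) == None
```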
Reposted from: https://www.cnblogs.com/144823836yj/p/14708258.html