Hive Built-in Functions

  • Function categories
    • UDF (User Defined Function): one row in, one row out
    • UDAF (User Defined Aggregation Function): many rows in, one row out
    • UDTF (User Defined Table-Generating Function): one row in, many rows out
  • group by / sort by
    • Referencing the alias of a function result in GROUP BY raises an error, e.g.: select f(col) as fc, count(*) as cnt from table_name group by fc;
      • Workaround 1, wrap the function in a subquery: select fc, count(*) as cnt from (select f(col) as fc from table_name) t group by fc;
      • Workaround 2, call the function again in GROUP BY: select f(col) as fc, count(*) as cnt from table_name group by f(col); (both workarounds are sketched below)
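A minimal sketch of both workarounds, using to_date as the function and a hypothetical user_log(dt STRING) table (the table and column names are illustrative only):

-- Workaround 1: compute the expression in a subquery, then group by its alias.
select fc, count(*) as cnt
from (select to_date(dt) as fc from user_log) t
group by fc;

-- Workaround 2: repeat the function call in the GROUP BY clause.
select to_date(dt) as fc, count(*) as cnt
from user_log
group by to_date(dt);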

Mathematical Functions

  • Most of these functions return NULL when an argument is NULL.
  • Rounding modes (these are the java.math.BigDecimal modes; see references 【1】【2】)
    • ROUND_CEILING: round towards positive infinity.
    • ROUND_DOWN: round towards zero.
    • ROUND_FLOOR: round towards negative infinity.
    • ROUND_HALF_DOWN: round towards the "nearest neighbor" unless both neighbors are equidistant, in which case round down.
    • ROUND_HALF_EVEN: round towards the "nearest neighbor" unless both neighbors are equidistant, in which case round towards the even neighbor.
    • ROUND_HALF_UP: round towards the "nearest neighbor" unless both neighbors are equidistant, in which case round up.
    • ROUND_UNNECESSARY: assert that the requested operation has an exact result, hence no rounding is necessary.
    • ROUND_UP: round away from zero.
  • DOUBLE round(DOUBLE a) Returns a rounded to the nearest integer (no decimal places kept).
  • DOUBLE round(DOUBLE a, INT d) Returns a rounded to d decimal places.
hive (badou)> select round(2.4), round(2.5), round(2.54);
OK
_c0     _c1     _c2
2.0     3.0     3.0
Time taken: 0.169 seconds, Fetched: 1 row(s)
hive (badou)> select round(2.4, 1), round(2.5, 2), round(2.545, 2);
OK
_c0     _c1     _c2
2.4     2.5     2.55
Time taken: 0.231 seconds, Fetched: 1 row(s)
  • DOUBLE bround(DOUBLE a) Rounds a using HALF_EVEN mode (banker's rounding, also called Gaussian rounding), keeping no decimal places; supported in Hive 1.3.0 and later.
  • DOUBLE bround(DOUBLE a, INT d) Rounds a to d decimal places using HALF_EVEN mode (see the example below).
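A quick sketch contrasting round (HALF_UP) with bround (HALF_EVEN); the expected values follow directly from the rounding-mode definitions above and assume Hive 1.3.0+ for bround:

-- HALF_UP always rounds a .5 tie away from zero; HALF_EVEN rounds it to the nearest even digit.
select round(2.5), bround(2.5);   -- expected: 3.0 and 2.0
select round(3.5), bround(3.5);   -- expected: 4.0 and 4.0
select round(0.5), bround(0.5);   -- expected: 1.0 and 0.0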
  • BIGINT floor(DOUBLE a) Returns the largest integer that is not greater than a (rounds down).
  • BIGINT ceil(DOUBLE a), ceiling(DOUBLE a) Returns the smallest integer that is not less than a (rounds up).
hive (badou)> select floor(2.4), floor(2.9);
OK
_c0     _c1
2       2

hive (badou)> select ceil(2.1), ceil(2.9);
OK
_c0     _c1
3       3
  • DOUBLE rand(), rand(INT seed) Returns a random number between 0 and 1; an optional seed makes the sequence deterministic.
  • DOUBLE exp(DOUBLE a), exp(DECIMAL a) Returns e raised to the power a.
hive (default)> select exp(1), exp(2), exp(2.5);
OK
_c0     _c1     _c2
2.718281828459045       7.38905609893065        12.182493960703473
  • DOUBLE ln(DOUBLE a), ln(DECIMAL a) Returns the natural logarithm of a (the inverse of exp).
  • DOUBLE log10(DOUBLE a), log10(DECIMAL a) Returns the base-10 logarithm of a.
  • DOUBLE log2(DOUBLE a), log2(DECIMAL a) Returns the base-2 logarithm of a.
  • DOUBLE log(DOUBLE base, DOUBLE a), log(DECIMAL base, DECIMAL a) Returns the base-base logarithm of a.
  • DOUBLE pow(DOUBLE a, DOUBLE p), power(DOUBLE a, DOUBLE p) Returns a raised to the power p.
  • DOUBLE sqrt(DOUBLE a), sqrt(DECIMAL a) Returns the square root of a.
  • STRING bin(BIGINT a) Returns the binary representation of the integer a as a string.
  • STRING hex(BIGINT a) hex(STRING a) hex(BINARY a) Returns a as a hexadecimal string; for a string argument, each character is converted to its hex representation and the results are concatenated.
  • BINARY unhex(STRING a) The inverse operation of hex.
  • STRING conv(BIGINT num, INT from_base, INT to_base), conv(STRING num, INT from_base, INT to_base) Converts num from base from_base to base to_base (see the example below).
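A small sketch of the base-conversion helpers above; the expected values are plain base conversions:

select bin(12);               -- expected: '1100'
select hex(255), hex('AB');   -- expected: 'FF' and '4142' (each character encoded as hex, then concatenated)
select unhex('4142');         -- expected: the binary value whose bytes spell 'AB'
select conv('FF', 16, 10);    -- expected: '255'
select conv(7, 10, 2);        -- expected: '111'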
  • DOUBLE abs(DOUBLE a) Returns the absolute value of a.
  • INT or DOUBLE pmod(INT a, INT b), pmod(DOUBLE a, DOUBLE b) Returns the positive remainder of a modulo b.
  • DOUBLE sin(DOUBLE a), sin(DECIMAL a) Returns the sine of a (a is in radians).
  • DOUBLE asin(DOUBLE a), asin(DECIMAL a) Returns the arc sine of a if -1 <= a <= 1, otherwise NULL.
  • DOUBLE cos(DOUBLE a), cos(DECIMAL a) Returns the cosine of a (a is in radians).
  • DOUBLE acos(DOUBLE a), acos(DECIMAL a) Returns the arc cosine of a, with the same domain rule as asin (NULL outside [-1, 1]).
  • DOUBLE tan(DOUBLE a), tan(DECIMAL a) Returns the tangent of a (a is in radians).
  • DOUBLE atan(DOUBLE a), atan(DECIMAL a) Returns the arc tangent of a.
  • DOUBLE degrees(DOUBLE a), degrees(DECIMAL a) Converts radians to degrees.
  • DOUBLE radians(DOUBLE a) Converts degrees to radians.
  • INT or DOUBLE positive(INT a), positive(DOUBLE a) Returns a.
  • INT or DOUBLE negative(INT a), negative(DOUBLE a) Returns -a.
  • DOUBLE or INT sign(DOUBLE a), sign(DECIMAL a) Returns the sign of a as '1.0' (if a is positive) or '-1.0' (if a is negative), '0.0' otherwise. The decimal version returns INT instead of DOUBLE.
  • DOUBLE e() Returns the value of e.
  • DOUBLE pi() Returns the value of pi.
  • BIGINT factorial(INT a) Returns the factorial of a; valid for 0 <= a <= 20.
  • DOUBLE cbrt(DOUBLE a) Returns the cube root of a.
  • INT or BIGINT shiftleft(TINYINT|SMALLINT|INT a, INT b), shiftleft(BIGINT a, INT b) Bitwise left shift: shifts a b positions to the left.
  • INT or BIGINT shiftright(TINYINT|SMALLINT|INT a, INT b), shiftright(BIGINT a, INT b) Bitwise signed right shift: shifts a b positions to the right.
  • INT or BIGINT shiftrightunsigned(TINYINT|SMALLINT|INT a, INT b), shiftrightunsigned(BIGINT a, INT b) Bitwise unsigned right shift (as of Hive 1.2.0). Shifts a b positions to the right.
  • T greatest(T v1, T v2, ...) Returns the greatest value of the list of values (as of Hive 1.1.0). Fixed to return NULL when one or more arguments are NULL, and strict type restriction relaxed, consistent with ">" operator (as of Hive 2.0.0).
  • T least(T v1, T v2, ...) Returns the least value of the list of values (as of Hive 1.1.0). Fixed to return NULL when one or more arguments are NULL, and strict type restriction relaxed, consistent with "<" operator (as of Hive 2.0.0).
  • INT width_bucket(NUMERIC expr, NUMERIC min_value, NUMERIC max_value, INT num_buckets) Returns an integer between 0 and num_buckets+1 by mapping expr into the ith equally sized bucket. Buckets are made by dividing [min_value, max_value] into equally sized regions. If expr < min_value, return 1, if expr > max_value return num_buckets+1. See https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions214.htm (as of Hive 3.0.0)
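To make the NULL handling of greatest/least concrete, a minimal sketch (the NULL behavior shown is the Hive 2.0.0+ semantics described above):

select greatest(1, 5, 3), least(1, 5, 3);   -- expected: 5 and 1
select greatest(1, null, 3);                -- expected: NULL, because one argument is NULL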

Collection Functions

  • int size(Map<K.V>) Returns the number of elements in the map.
  • int size(Array<T>) Returns the number of elements in the array.
  • array<K> map_keys(Map<K.V>) Returns an unordered array containing the keys of the input map.
  • array<V> map_values(Map<K.V>) Returns an unordered array containing the values of the input map.
  • boolean array_contains(Array<T>, value) Returns true if the array contains value.
  • array<T> sort_array(Array<T>) Sorts the array in ascending order according to the natural ordering of its elements.
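A combined sketch of the collection helpers above (map_keys/map_values ordering is not guaranteed):

select size(map('a', 1, 'b', 2));           -- expected: 2
select size(array(10, 20, 30));             -- expected: 3
select map_keys(map('a', 1, 'b', 2));       -- expected: ["a","b"] (unordered)
select map_values(map('a', 1, 'b', 2));     -- expected: [1,2] (unordered)
select array_contains(array(1, 2, 3), 2);   -- expected: true
select sort_array(array(3, 1, 2));          -- expected: [1,2,3]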

Type Conversion Functions

  • binary binary(string|binary) Casts the parameter into a binary.
  • <type> cast(expr as <type>) Converts expr to type <type>; if the conversion fails, returns NULL.
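A minimal cast sketch showing a successful conversion and the NULL-on-failure behavior:

select cast('100' as int), cast('3.14' as double);   -- expected: 100 and 3.14
select cast('abc' as int);                           -- expected: NULL, the string is not a valid integer
select cast(1 as boolean);                           -- expected: true (non-zero numeric values cast to true)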

Date Functions

  • string from_unixtime(bigint unixtime[, string format]) Converts a Unix timestamp to a human-readable date/time string.
  • bigint unix_timestamp() Gets current Unix timestamp in seconds. This function is not deterministic and its value is not fixed for the scope of a query execution, therefore prevents proper optimization of queries - this has been deprecated since 2.0 in favour of CURRENT_TIMESTAMP constant.
  • bigint unix_timestamp(string date) Converts time string in format yyyy-MM-dd HH:mm:ss to Unix timestamp (in seconds), using the default timezone and the default locale, return 0 if fail: unix_timestamp('2009-03-20 11:30:01') = 1237573801
  • bigint unix_timestamp(string date, string pattern) Convert time string with given pattern (see [http://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html]) to Unix time stamp (in seconds), return 0 if fail: unix_timestamp('2009-03-20', 'yyyy-MM-dd') = 1237532400.
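A round-trip sketch for the conversions above; note that unix_timestamp on a string depends on the session/server timezone, so the exact number may differ from the wiki examples:

select unix_timestamp('2009-03-20 11:30:01');              -- seconds since the epoch, timezone dependent
select from_unixtime(1237573801, 'yyyy-MM-dd HH:mm:ss');   -- formats the seconds back into a readable string
select unix_timestamp('20090320', 'yyyyMMdd');             -- parses with a custom SimpleDateFormat pattern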
  • pre 2.1.0: string, 2.1.0 on: date to_date(string timestamp) Returns the date part of a timestamp string (pre-Hive 2.1.0): to_date("1970-01-01 00:00:00") = "1970-01-01". As of Hive 2.1.0, returns a date object.
  • int year(string date) Returns the year part of a date or a timestamp string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970.
  • int quarter(date/timestamp/string) Returns the quarter of the year for a date, timestamp, or string in the range 1 to 4 (as of Hive 1.3.0). Example: quarter('2015-04-08') = 2.
  • int month(string date) Returns the month part of a date or a timestamp string: month("1970-11-01 00:00:00") = 11, month("1970-11-01") = 11.
  • int day(string date), dayofmonth(date) Returns the day part of a date or a timestamp string: day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1.
  • int hour(string date) Returns the hour of the timestamp: hour('2009-07-30 12:58:59') = 12, hour('12:58:59') = 12.
  • int minute(string date) Returns the minute of the timestamp.
  • int second(string date) Returns the second of the timestamp.
  • int weekofyear(string date) Returns the week number of a timestamp string: weekofyear("1970-11-01 00:00:00") = 44, weekofyear("1970-11-01") = 44.
  • int extract(field FROM source) Retrieve fields such as days or hours from source (as of Hive 2.2.0). Source must be a date, timestamp, interval or a string that can be converted into either a date or timestamp.
    • Supported fields include: day, dayofweek, hour, minute, month, quarter, second, week and year.
    • Examples:
      • select extract(month from "2016-10-20") results in 10.
      • select extract(hour from "2016-10-20 05:06:07") results in 5.
      • select extract(dayofweek from "2016-10-20 05:06:07") results in 5.
      • select extract(month from interval '1-3' year to month) results in 3.
      • select extract(minute from interval '3 12:20:30' day to second) results in 20.
  • int datediff(string enddate, string startdate) Returns the number of days from startdate to enddate: datediff('2009-03-01', '2009-02-27') = 2.
  • pre 2.1.0: string,2.1.0 on: date date_add(date/timestamp/string startdate, tinyint/smallint/int days)
    • Adds a number of days to startdate: date_add('2008-12-31', 1) = '2009-01-01'.
    • Prior to Hive 2.1.0 (HIVE-13248) the return type was a String because no Date type existed when the method was created.
  • pre 2.1.0: string,2.1.0 on: date date_sub(date/timestamp/string startdate, tinyint/smallint/int days)
    • Subtracts a number of days from startdate: date_sub('2008-12-31', 1) = '2008-12-30'.
    • Prior to Hive 2.1.0 (HIVE-13248) the return type was a String because no Date type existed when the method was created.
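A short sketch of the day-arithmetic functions above; the expected values match the examples already given:

select datediff('2009-03-01', '2009-02-27');   -- expected: 2
select date_add('2008-12-31', 1);              -- expected: 2009-01-01
select date_sub('2008-12-31', 1);              -- expected: 2008-12-30
select date_add('2016-02-28', 2);              -- expected: 2016-03-01 (rolls over the leap day and the month boundary)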
  • timestamp from_utc_timestamp({any primitive type} ts, string timezone)
    • Converts a timestamp* in UTC to a given timezone (as of Hive 0.8.0).
    • timestamp is a primitive type, including timestamp/date, tinyint/smallint/int/bigint, float/double and decimal.
    • Fractional values are considered as seconds. Integer values are considered as milliseconds. For example, from_utc_timestamp(2592000.0,'PST'), from_utc_timestamp(2592000000,'PST') and from_utc_timestamp(timestamp '1970-01-30 16:00:00','PST') all return the timestamp 1970-01-30 08:00:00.
  • timestamp to_utc_timestamp({any primitive type} ts, string timezone)
    • The inverse of from_utc_timestamp: converts a timestamp in the given timezone to a UTC timestamp.
  • date current_date
    • Built-in constant: the current date in yyyy-MM-dd format.
    • Returns the current date at the start of query evaluation (as of Hive 1.2.0). All calls of current_date within the same query return the same value.
  • timestamp current_timestamp
    • Built-in constant: the current timestamp.
    • Returns the current timestamp at the start of query evaluation (as of Hive 1.2.0). All calls of current_timestamp within the same query return the same value.
  • string add_months(string start_date, int num_months, output_date_format)
    • Returns the date that is num_months after start_date (as of Hive 1.1.0). start_date is a string, date or timestamp. num_months is an integer. If start_date is the last day of the month or if the resulting month has fewer days than the day component of start_date, then the result is the last day of the resulting month. Otherwise, the result has the same day component as start_date. The default output format is 'yyyy-MM-dd'.
    • Before Hive 4.0.0, the time part of the date is ignored.
    • As of Hive 4.0.0, add_months supports an optional argument output_date_format, which accepts a String that represents a valid date format for the output. This allows to retain the time format in the output.
    • For example :
    • add_months('2009-08-31', 1) returns '2009-09-30'.
    • add_months('2017-12-31 14:15:16', 2, 'YYYY-MM-dd HH:mm:ss') returns '2018-02-28 14:15:16'.
  • string last_day(string date)
    • Returns the last day of the month which the date belongs to (as of Hive 1.1.0). date is a string in the format 'yyyy-MM-dd HH:mm:ss' or 'yyyy-MM-dd'. The time part of date is ignored.
  • string next_day(string start_date, string day_of_week)
    • Returns the first date which is later than start_date and named as day_of_week (as of Hive 1.2.0). start_date is a string/date/timestamp. day_of_week is 2 letters, 3 letters or full name of the day of the week (e.g. Mo, tue, FRIDAY). The time part of start_date is ignored. Example: next_day('2015-01-14', 'TU') = 2015-01-20.
  • string trunc(string date, string format)
    • Returns date truncated to the unit specified by the format (as of Hive 1.2.0). Supported formats: MONTH/MON/MM, YEAR/YYYY/YY. Example: trunc('2015-03-17', 'MM') = 2015-03-01.
  • double months_between(date1, date2)
    • Returns number of months between dates date1 and date2 (as of Hive 1.2.0). If date1 is later than date2, then the result is positive. If date1 is earlier than date2, then the result is negative. If date1 and date2 are either the same days of the month or both last days of months, then the result is always an integer. Otherwise the UDF calculates the fractional portion of the result based on a 31-day month and considers the difference in time components date1 and date2. date1 and date2 type can be date, timestamp or string in the format 'yyyy-MM-dd' or 'yyyy-MM-dd HH:mm:ss'. The result is rounded to 8 decimal places. Example: months_between('1997-02-28 10:30:00', '1996-10-30') = 3.94959677
  • string date_format(date/timestamp/string ts, string fmt)
    • Converts a date/timestamp/string to a value of string in the format specified by the date format fmt (as of Hive 1.2.0). Supported formats are Java SimpleDateFormat formats – https://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html. The second argument fmt should be constant. Example: date_format('2015-04-08', 'y') = '2015'.
    • date_format can be used to implement other UDFs, e.g.:
    • dayname(date) is date_format(date, 'EEEE')
    • dayofyear(date) is date_format(date, 'D')
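A small date_format sketch, including the dayname/dayofyear equivalents mentioned above (2015-04-08 was a Wednesday, the 98th day of 2015):

select date_format('2015-04-08', 'y');      -- expected: '2015'
select date_format('2015-04-08', 'EEEE');   -- expected: 'Wednesday' (dayname equivalent)
select date_format('2015-04-08', 'D');      -- expected: '98' (dayofyear equivalent)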

Conditional Functions

  • T if(boolean testCondition, T valueTrue, T valueFalseOrNull)
    • Returns valueTrue when testCondition is true, otherwise returns valueFalseOrNull.
  • boolean isnull( a )
    • Returns true if a is NULL and false otherwise.
  • boolean isnotnull ( a )
    • Returns true if a is not NULL and false otherwise.
  • T nvl(T value, T default_value)
    • Returns default value if value is null else returns value (as of Hive 0.11).
  • T COALESCE(T v1, T v2, ...)
    • Returns the first v that is not NULL, or NULL if all v's are NULL.
  • T CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END
    • When a = b, returns c; when a = d, returns e; else returns f.
  • T CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END
    • When a = true, returns b; when c = true, returns d; else returns e.
  • T nullif( a, b )
    • Returns NULL if a=b; otherwise returns a (as of Hive 2.3.0).
    • Shorthand for: CASE WHEN a = b then NULL else a END
  • void assert_true(boolean condition)
    • Throw an exception if 'condition' is not true, otherwise return null (as of Hive 0.8.0). For example, select assert_true (2<1).
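A combined sketch of the conditional functions above (nullif requires Hive 2.3.0+):

select if(1 < 2, 'yes', 'no');                                        -- expected: 'yes'
select isnull(null), isnotnull('x');                                  -- expected: true and true
select nvl(null, 'default'), coalesce(null, null, 'first');           -- expected: 'default' and 'first'
select case 2 when 1 then 'one' when 2 then 'two' else 'other' end;   -- expected: 'two'
select nullif('a', 'a'), nullif('a', 'b');                            -- expected: NULL and 'a'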

String Functions

  • int ascii(string str)
    • Returns the numeric value of the first character of str.
  • string base64(binary bin)
    • Converts the argument from binary to a base 64 string (as of Hive 0.12.0).
  • int character_length(string str)
    • Returns the number of UTF-8 characters contained in str (as of Hive 2.2.0). The function char_length is shorthand for this function.
  • string chr(bigint|double A)
    • Returns the ASCII character having the binary equivalent to A (as of Hive 1.3.0 and 2.1.0). If A is larger than 256 the result is equivalent to chr(A % 256). Example: select chr(88); returns "X".
  • string concat(string|binary A, string|binary B...)
    • Returns the string or bytes resulting from concatenating the strings or bytes passed in as parameters in order. For example, concat('foo', 'bar') results in 'foobar'. Note that this function can take any number of input strings.
  • array<struct<string,double>> context_ngrams(array<array<string>>, array<string>, int K, int pf)
    • Returns the top-k contextual N-grams from a set of tokenized sentences, given a string of "context". See StatisticsAndDataMining for more information.
  • string concat_ws(string SEP, string A, string B...)
    • Like concat() above, but with custom separator SEP.
  • string concat_ws(string SEP, array<string>)
    • Like concat_ws() above, but taking an array of strings. (as of Hive 0.9.0)
  • string decode(binary bin, string charset)
    • Decodes the first argument into a String using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'). If either argument is null, the result will also be null. (As of Hive 0.12.0.)
  • string elt(N int,str1 string,str2 string,str3 string,...)
    • Returns the string at index position N in the argument list; for example, elt(2, 'hello', 'world') returns 'world'. Returns NULL if N is less than 1 or greater than the number of string arguments.
  • binary encode(string src, string charset)
    • Encodes the first argument into a BINARY using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'). If either argument is null, the result will also be null. (As of Hive 0.12.0.)
  • int field(val T,val1 T,val2 T,val3 T,...)
    • Returns the index of val in the val1, val2, val3, ... list, or 0 if not found; for example, field('world', 'say', 'hello', 'world') returns 3.
  • string get_json_object(string json_string, string path)
    • Extracts json object from a json string based on json path specified, and returns json string of the extracted json object. It will return null if the input json string is invalid. NOTE: The json path can only have the characters [0-9a-z_], i.e., no upper-case or special characters. Also, the keys cannot start with numbers. This is due to restrictions on Hive column names.
    • A limited version of JSONPath is supported:
      • $ : Root object
      • . : Child operator
      • [] : Subscript operator for array
      • * : Wildcard for []
    • Syntax not supported that's worth noticing:
      • '' : Zero length string as key
      • .. : Recursive descent
      • @ : Current object/element
      • () : Script expression
      • ?() : Filter (script) expression.
      • [,] : Union operator
      • [start:end:step] : array slice operator
# example1
hive (default)> select get_json_object('{"a":1,"b":1}','$');
OK
_c0
{"a":1,"b":1}

# example2
hive (default)> select get_json_object('{"a":1,"b":1}','$.a');
OK
_c0
1

# example3
hive (default)> select get_json_object('{"a":1,"b":1, "c":{"c1":222,"c2":3333},"fruit":[{"f1":1,"f2":2},{"g1":2,"g2":3}]}','$.fruit[0]');
OK
_c0
{"f1":1,"f2":2}

# example4
hive (default)> select get_json_object('{"a":1,"b":1, "c":{"c1":222,"c2":3333},"fruit":[{"f1":1,"f2":2},{"g1":2,"g2":3}]}','$.fruit[0].f1');
OK
_c0
1

# example5
hive (default)> select get_json_object('{"a":1,"b":1, "c":{"c1":222,"c2":3333},"fruit":[{"f1":1,"f2":2},{"g1":2,"g2":3}]}','$.fruit*');
OK
_c0
[{"f1":1,"f2":2},{"g1":2,"g2":3}]
  • boolean in_file(string str, string filename)
    • Returns true if the string str appears as an entire line in filename.
  • int instr(string str, string substr)
    • Returns the position of the first occurrence of substr in str. Returns null if either of the arguments are null and returns 0 if substr could not be found in str. Be aware that this is not zero based. The first character in str has index 1.
  • int length(string A)
    • Returns the length of the string.
  • int locate(string substr, string str[, int pos])
    • Returns the position of the first occurrence of substr in str after position pos.
  • string lower(string A), lcase(string A)
    • Returns the string resulting from converting all characters of A to lower case. For example, lower('fOoBaR') results in 'foobar'.
  • string lpad(string str, int len, string pad)
    • Returns str, left-padded with pad to a length of len. If str is longer than len, the return value is shortened to len characters. In case of empty pad string, the return value is null.
  • string ltrim(string A)
    • Returns the string resulting from trimming spaces from the beginning(left hand side) of A. For example, ltrim(' foobar ') results in 'foobar '.
  • array<struct<string,double>> ngrams(array<array<string>>, int N, int K, int pf)
    • Returns the top-k N-grams from a set of tokenized sentences, such as those returned by the sentences() UDAF. See StatisticsAndDataMining for more information.
  • int octet_length(string str)
    • Returns the number of octets required to hold the string str in UTF-8 encoding (since Hive 2.2.0). Note that octet_length(str) can be larger than character_length(str).
  • string parse_url(string urlString, string partToExtract [, string keyToExtract])
    • Returns the specified part of the URL. Valid values for partToExtract include HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE and USERINFO; for example, parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'HOST') returns 'facebook.com'. The value of a specific query key can be extracted by passing QUERY as partToExtract and the key name as keyToExtract.
  • string printf(String format, Obj... args)
    • Returns the input formatted according to printf-style format strings (as of Hive 0.9.0).
  • string regexp_extract(string subject, string pattern, int index)
    • Returns the string extracted using the pattern.
    • For example, regexp_extract('foothebar', 'foo(.*?)(bar)', 2) returns 'bar.'
    • Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc.
    • The 'index' parameter is the Java regex Matcher group() method index. See docs/api/java/util/regex/Matcher.html for more information on the 'index' or Java regex group() method.
  • string regexp_replace(string INITIAL_STRING, string PATTERN, string REPLACEMENT)
    • Returns the string resulting from replacing all substrings in INITIAL_STRING that match the java regular expression syntax defined in PATTERN with instances of REPLACEMENT.
    • For example, regexp_replace("foobar", "oo|ar", "") returns 'fb'. Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc. (See the sketch below.)
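A small sketch of the two regexp helpers; the expected values match the examples above:

select regexp_extract('foothebar', 'foo(.*?)(bar)', 2);   -- expected: 'bar' (group 2)
select regexp_extract('foothebar', 'foo(.*?)(bar)', 1);   -- expected: 'the' (group 1)
select regexp_replace('foobar', 'oo|ar', '');             -- expected: 'fb'
select regexp_replace('a b  c', '\\s+', '-');             -- expected: 'a-b-c' ('\\s' is needed inside a HiveQL string literal)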
  • string repeat(string str, int n)
    • Repeats str n times.
  • string replace(string A, string OLD, string NEW)
    • Returns the string A with all non-overlapping occurrences of OLD replaced with NEW (as of Hive 1.3.0 and 2.1.0). Example: select replace("ababab", "abab", "Z"); returns "Zab".
  • string reverse(string A)
    • Returns the reversed string.
  • string rpad(string str, int len, string pad)
    • Returns str, right-padded with pad to a length of len. If str is longer than len, the return value is shortened to len characters.
    • In case of empty pad string, the return value is null.
  • string rtrim(string A)
    • Returns the string resulting from trimming spaces from the end(right hand side) of A. For example, rtrim(' foobar ') results in ' foobar'.
  • array<array<string>> sentences(string str, string lang, string locale)
    • Tokenizes a string of natural language text into words and sentences, where each sentence is broken at the appropriate sentence boundary and returned as an array of words. The 'lang' and 'locale' are optional arguments. For example, sentences('Hello there! How are you?') returns ( ("Hello", "there"), ("How", "are", "you") ).
  • string space(int n)
    • Returns a string of n spaces.
  • array<string> split(string str, string pat)
    • Splits str around pat (pat is a regular expression).
  • map<string,string> str_to_map(text[, delimiter1, delimiter2])
    • Splits text into key-value pairs using two delimiters. Delimiter1 separates text into K-V pairs, and Delimiter2 splits each K-V pair. Default delimiters are ',' for delimiter1 and ':' for delimiter2.
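A sketch of split and str_to_map on a small key-value string:

select split('a=1&b=2&c=3', '&');             -- expected: ["a=1","b=2","c=3"] (the pattern is a regex)
select str_to_map('a:1,b:2,c:3');             -- expected: {"a":"1","b":"2","c":"3"} with the default ',' and ':' delimiters
select str_to_map('a=1&b=2&c=3', '&', '=');   -- expected: {"a":"1","b":"2","c":"3"}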
  • string substr(string|binary A, int start) substring(string|binary A, int start)
    • Returns the substring or slice of the byte array of A starting from position start through the end of A; for example, substr('foobar', 4) results in 'bar'. Positions are 1-based.
  • string substr(string|binary A, int start, int len) substring(string|binary A, int start, int len)
    • Returns the substring or slice of the byte array of A starting from position start with length len; for example, substr('foobar', 4, 1) results in 'b'.
  • string substring_index(string A, string delim, int count)
    • Returns the substring from string A before count occurrences of the delimiter delim (as of Hive 1.3.0). If count is positive, everything to the left of the final delimiter (counting from the left) is returned. If count is negative, everything to the right of the final delimiter (counting from the right) is returned. Substring_index performs a case-sensitive match when searching for delim. Example: substring_index('www.apache.org', '.', 2) = 'www.apache'.
  • string translate(string|char|varchar input, string|char|varchar from, string|char|varchar to)
    • Translates the input string by replacing the characters present in the from string with the corresponding characters in the to string.
    • This is similar to the translate function in PostgreSQL.
    • If any of the parameters to this UDF are NULL, the result is NULL as well. (Available as of Hive 0.10.0, for string types)
    • Char/varchar support added as of Hive 0.14.0.
  • string trim(string A)
    • Returns the string resulting from trimming spaces from both ends of A. For example, trim(' foobar ') results in 'foobar'
  • binary unbase64(string str)
    • Converts the argument from a base 64 string to BINARY. (As of Hive 0.12.0.)
  • string upper(string A) ucase(string A)
    • Returns the string resulting from converting all characters of A to upper case. For example, upper('fOoBaR') results in 'FOOBAR'.
  • string initcap(string A)
    • Returns string, with the first letter of each word in uppercase, all other letters in lowercase. Words are delimited by whitespace. (As of Hive 1.1.0.)
  • int levenshtein(string A, string B)
    • Returns the Levenshtein distance between two strings (as of Hive 1.2.0). For example, levenshtein('kitten', 'sitting') results in 3.
  • string soundex(string A)
    • Returns soundex code of the string (as of Hive 1.2.0). For example, soundex('Miller') results in M460.

Data Masking Functions

Available in Hive 2.1.0 and later.

  • string mask(string str[, string upper[, string lower[, string number]]])
    • Returns a masked version of str (as of Hive 2.1.0). By default, upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example mask("abcd-EFGH-8765-4321") results in xxxx-XXXX-nnnn-nnnn. You can override the characters used in the mask by supplying additional arguments: the second argument controls the mask character for upper case letters, the third argument for lower case letters and the fourth argument for numbers. For example, mask("abcd-EFGH-8765-4321", "U", "l", "#") results in llll-UUUU-####-####.
  • string mask_first_n(string str[, int n])
    • Returns a masked version of str with the first n values masked (as of Hive 2.1.0). Upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example, mask_first_n("1234-5678-8765-4321", 4) results in nnnn-5678-8765-4321.
  • string mask_last_n(string str[, int n])
    • Returns a masked version of str with the last n values masked (as of Hive 2.1.0). Upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example, mask_last_n("1234-5678-8765-4321", 4) results in 1234-5678-8765-nnnn.
  • string mask_show_first_n(string str[, int n])
    • Returns a masked version of str, showing the first n characters unmasked (as of Hive 2.1.0). Upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example, mask_show_first_n("1234-5678-8765-4321", 4) results in 1234-nnnn-nnnn-nnnn.
  • string mask_show_last_n(string str[, int n])
    • Returns a masked version of str, showing the last n characters unmasked (as of Hive 2.1.0). Upper case letters are converted to "X", lower case letters are converted to "x" and numbers are converted to "n". For example, mask_show_last_n("1234-5678-8765-4321", 4) results in nnnn-nnnn-nnnn-4321.
  • string mask_hash(string|char|varchar str)
    • Returns a hashed value based on str (as of Hive 2.1.0). The hash is consistent and can be used to join masked values together across tables. This function returns null for non-string types.
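A sketch of the masking functions (Hive 2.1.0+); the expected outputs follow the rules described above:

select mask('abcd-EFGH-8765-4321');                  -- expected: xxxx-XXXX-nnnn-nnnn
select mask('abcd-EFGH-8765-4321', 'U', 'l', '#');   -- expected: llll-UUUU-####-####
select mask_first_n('1234-5678-8765-4321', 4);       -- expected: nnnn-5678-8765-4321
select mask_show_last_n('1234-5678-8765-4321', 4);   -- expected: nnnn-nnnn-nnnn-4321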

Miscellaneous Functions

  • varies java_method(class, method[, arg1[, arg2..]])
    • Synonym for reflect. (As of Hive 0.9.0.)
  • varies reflect(class, method[, arg1[, arg2..]])
    • Calls a Java method by matching the argument signature, using reflection. (As of Hive 0.7.0.) See Reflect (Generic) UDF for examples.
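A small reflect/java_method sketch calling standard JDK methods by reflection:

select reflect('java.lang.Math', 'max', 2, 3);        -- expected: 3
select reflect('java.lang.String', 'valueOf', 1);     -- expected: '1'
select java_method('java.lang.Math', 'floor', 1.9);   -- expected: 1.0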
  • int hash(a1[, a2...])
    • Returns a hash value of the arguments. (As of Hive 0.4.)
  • string current_user()
    • Returns current user name from the configured authenticator manager (as of Hive 1.2.0). Could be the same as the user provided when connecting, but with some authentication managers (for example HadoopDefaultAuthenticator) it could be different.
  • string logged_in_user()
    • Returns current user name from the session state (as of Hive 2.2.0). This is the username provided when connecting to Hive.
  • string current_database()
    • Returns current database name (as of Hive 0.13.0).
  • string md5(string/binary)
    • Calculates an MD5 128-bit checksum for the string or binary (as of Hive 1.3.0). The value is returned as a string of 32 hex digits, or NULL if the argument was NULL. Example: md5('ABC') = '902fbdd2b1df0c4f70b4a5d23525e932'.
  • string sha1(string/binary), sha(string/binary)
    • Calculates the SHA-1 digest for string or binary and returns the value as a hex string (as of Hive 1.3.0). Example: sha1('ABC') = '3c01bdbb26f358bab27f267924aa2c9a03fcfdb8'.
  • bigint crc32(string/binary)
    • Computes a cyclic redundancy check value for string or binary argument and returns bigint value (as of Hive 1.3.0). Example: crc32('ABC') = 2743272264.
  • string sha2(string/binary, int)
    • Calculates the SHA-2 family of hash functions (SHA-224, SHA-256, SHA-384, and SHA-512) (as of Hive 1.3.0). The first argument is the string or binary to be hashed. The second argument indicates the desired bit length of the result, which must have a value of 224, 256, 384, 512, or 0 (which is equivalent to 256). SHA-224 is supported starting from Java 8. If either argument is NULL or the hash length is not one of the permitted values, the return value is NULL. Example: sha2('ABC', 256) = 'b5d4045c3f466fa91fe2cc6abe79232a1a57cdf104f7a26e716e0a1e2789df78'.
  • binary aes_encrypt(input string/binary, key string/binary)
    • Encrypt input using AES (as of Hive 1.3.0). Key lengths of 128, 192 or 256 bits can be used. 192 and 256 bits keys can be used if Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files are installed. If either argument is NULL or the key length is not one of the permitted values, the return value is NULL. Example: base64(aes_encrypt('ABC', '1234567890123456')) = 'y6Ss+zCYObpCbgfWfyNWTw=='.
  • binary aes_decrypt(input binary, key string/binary)
    • Decrypt input using AES (as of Hive 1.3.0). Key lengths of 128, 192 or 256 bits can be used. 192 and 256 bits keys can be used if Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files are installed. If either argument is NULL or the key length is not one of the permitted values, the return value is NULL. Example: aes_decrypt(unbase64('y6Ss+zCYObpCbgfWfyNWTw=='), '1234567890123456') = 'ABC'.
  • string version()
    • Returns the Hive version (as of Hive 2.1.0). The string contains 2 fields, the first being a build number and the second being a build hash. Example: "select version();" might return "2.1.0.2.5.0.0-1245 r027527b9c5ce1a3d7d0b6d2e6de2378fb0c39232". Actual results will depend on your build.

Aggregate Functions (UDAF: user defined aggregate function)

  • BIGINT count(*), count(expr), count(DISTINCT expr[, expr...])
    • count(*) - Returns the total number of retrieved rows, including rows containing NULL values.
    • count(expr) - Returns the number of rows for which the supplied expression is non-NULL.
    • count(DISTINCT expr[, expr]) - Returns the number of rows for which the supplied expression(s) are unique and non-NULL. Execution of this can be optimized with hive.optimize.distinct.rewrite.
  • DOUBLE sum(col), sum(DISTINCT col)
    • Returns the sum of the elements in the group or the sum of the distinct values of the column in the group.
  • DOUBLE avg(col), avg(DISTINCT col)
    • Returns the average of the elements in the group or the average of the distinct values of the column in the group.
  • DOUBLE min(col)
    • Returns the minimum of the column in the group.
  • DOUBLE max(col)
    • Returns the maximum value of the column in the group.
  • DOUBLE variance(col), var_pop(col)
    • Returns the variance of a numeric column in the group.
  • DOUBLE var_samp(col)
    • Returns the unbiased sample variance of a numeric column in the group.
  • DOUBLE stddev_pop(col)
    • Returns the standard deviation of a numeric column in the group.
  • DOUBLE stddev_samp(col)
    • Returns the unbiased sample standard deviation of a numeric column in the group.
  • DOUBLE covar_pop(col1, col2)
    • Returns the population covariance of a pair of numeric columns in the group.
  • DOUBLE covar_samp(col1, col2)
    • Returns the sample covariance of a pair of a numeric columns in the group.
  • DOUBLE corr(col1, col2)
    • Returns the Pearson coefficient of correlation of a pair of a numeric columns in the group.
  • DOUBLE percentile(BIGINT col, p)
    • Returns the exact pth percentile of a column in the group (does not work with floating point types). p must be between 0 and 1. NOTE: A true percentile can only be computed for integer values. Use PERCENTILE_APPROX if your input is non-integral.
  • array percentile(BIGINT col, array(p1 [, p2]...))
    • Returns the exact percentiles p1, p2, ... of a column in the group (does not work with floating point types). pi must be between 0 and 1. NOTE: A true percentile can only be computed for integer values. Use PERCENTILE_APPROX if your input is non-integral.
  • DOUBLE percentile_approx(DOUBLE col, p [, B])
    • Returns an approximate pth percentile of a numeric column (including floating point types) in the group. The B parameter controls approximation accuracy at the cost of memory. Higher values yield better approximations, and the default is 10,000. When the number of distinct values in col is smaller than B, this gives an exact percentile value.
  • array percentile_approx(DOUBLE col, array(p1 [, p2]...) [, B])
    • Same as above, but accepts and returns an array of percentile values instead of a single one.
  • double regr_avgx(independent, dependent)
    • Equivalent to avg(dependent). As of Hive 2.2.0.
  • double regr_avgy(independent, dependent)
    • Equivalent to avg(independent). As of Hive 2.2.0.
  • double regr_count(independent, dependent)
    • Returns the number of non-null pairs used to fit the linear regression line. As of Hive 2.2.0.
  • double regr_intercept(independent, dependent)
    • Returns the y-intercept of the linear regression line, i.e. the value of b in the equation dependent = a * independent + b. As of Hive 2.2.0.
  • double regr_r2(independent, dependent)
    • Returns the coefficient of determination for the regression. As of Hive 2.2.0.
  • double regr_slope(independent, dependent)
    • Returns the slope of the linear regression line, i.e. the value of a in the equation dependent = a * independent + b. As of Hive 2.2.0.
  • double regr_sxx(independent, dependent)
    • Equivalent to regr_count(independent, dependent) * var_pop(dependent). As of Hive 2.2.0.
  • double regr_sxy(independent, dependent)
    • Equivalent to regr_count(independent, dependent) * covar_pop(independent, dependent). As of Hive 2.2.0.
  • double regr_syy(independent, dependent)
    • Equivalent to regr_count(independent, dependent) * var_pop(independent). As of Hive 2.2.0.
  • array<struct {'x','y'}> histogram_numeric(col, b)
    • Computes a histogram of a numeric column in the group using b non-uniformly spaced bins. The output is an array of size b of double-valued (x,y) coordinates that represent the bin centers and heights
  • array collect_set(col)
    • Returns a set of objects with duplicate elements eliminated.
  • array collect_list(col)
    • Returns a list of objects with duplicates. (As of Hive 0.13.0.)
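A sketch contrasting collect_set and collect_list, assuming a hypothetical orders(user_id, item) table (names are illustrative only):

-- collect_set drops duplicate items per group, collect_list keeps them
select user_id,
       collect_set(item)  as distinct_items,
       collect_list(item) as all_items
from orders
group by user_id;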
  • INTEGER ntile(INTEGER x)
    • Divides an ordered partition into x groups called buckets and assigns a bucket number to each row in the partition. This allows easy calculation of tertiles, quartiles, deciles, percentiles and other common summary statistics. (As of Hive 0.11.0.)

Table-Generating Functions (UDTF: table-generating function)

1 Overview: an ordinary user-defined function takes one row as input and produces one row as output, whereas a UDTF takes one row as input and produces multiple rows as output.

2 Notes

  • Using the syntax "SELECT udtf(col) AS colAlias..." has a few limitations:
    • No other expressions are allowed in SELECT
      • SELECT pageid, explode(adid_list) AS myCol... is not supported
    • UDTF's can't be nested
      • SELECT explode(explode(adid_list)) AS myCol... is not supported
    • GROUP BY / CLUSTER BY / DISTRIBUTE BY / SORT BY is not supported
    • SELECT explode(adid_list) AS myCol ... GROUP BY myCol is not supported
  • These restrictions do not apply when the UDTF is used together with LATERAL VIEW (see the sketch below).
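A sketch of the LATERAL VIEW form, using the pageAds(pageid, adid_list ARRAY<INT>) example table from the Hive wiki; other select expressions and GROUP BY become possible once the UDTF is moved into a LATERAL VIEW:

-- one output row per (pageid, adid) pair
select pageid, adid
from pageAds lateral view explode(adid_list) adTable as adid;

-- aggregation over the exploded column
select adid, count(1)
from pageAds lateral view explode(adid_list) adTable as adid
group by adid;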

Function list

  • T explode(ARRAY<T> a)
    • Explodes an array to multiple rows. Returns a row-set with a single column (col), one row for each element from the array.
# Expand an array into multiple rows
hive (default)> select explode(array('a', 'b', 'c')) as col1;
OK
col1
a
b
c

# Used together with LATERAL VIEW
hive (default)> select tf.* from (select 0) t lateral view explode(array('A','B','C')) tf as col;
OK
tf.col
A
B
C
  • Tkey,Tvalue explode(MAP<Tkey,Tvalue> m)
    • Explodes a map to multiple rows. Returns a row-set with a two columns (key,value) , one row for each key-value pair from the input map. (As of Hive 0.8.0.).
# Expand a map into multiple rows
hive (default)> select explode(map('a', 10, 'b', 20, 'c', 30)) as (k, v);
OK
k       v
a       10
b       20
c       30

# Used together with LATERAL VIEW
hive (default)> select tf.* from (select 0)t lateral view explode(map('a', 10, 'b', 20, 'c', 30))tf as key, value;
OK
tf.key  tf.value
a       10
b       20
c       30
  • int,T posexplode(ARRAY<T> a)
    • Explodes an array to multiple rows with additional positional column of int type (position of items in the original array, starting with 0). Returns a row-set with two columns (pos,val), one row for each element from the array.
# Expand an array into multiple rows, with a position index
hive (default)> select posexplode(array('a', 'b', 'c')) as (key, value);
OK
key     value
0       a
1       b
2       c

# Used together with LATERAL VIEW
hive (default)> select tf.* from (select 0)t lateral view posexplode(array('a','b','c')) tf as key, value;
OK
tf.key  tf.value
0       a
1       b
2       c
  • T1,...,Tn inline(ARRAY<STRUCT<f1:T1,...,fn:Tn>> a)
    • Explodes an array of structs to multiple rows. Returns a row-set with N columns (N = number of top level elements in the struct), one row per struct from the array. (As of Hive 0.10.)
# Expand an array of structs into multiple rows
hive (default)> select inline(array(struct('a', 10, date '2015-01-01'),struct('b', 20, date '2015-02-02'))) as (col1, col2, col3); 
OK
col1    col2    col3
a       10      2015-01-01
b       20      2015-02-02

# Used together with LATERAL VIEW
hive (default)> select tf.* from (select 0) t lateral view inline(array(struct('A',10,date '2015-01-01'),struct('B',20,date '2016-02-02'))) tf as col1,col2,col3;
OK
tf.col1 tf.col2 tf.col3
A       10      2015-01-01
B       20      2016-02-02
  • T1,...,Tn/r stack(int r,T1 V1,...,Tn/r Vn)
    • Breaks up n values V1,...,Vn into r rows. Each row will have n/r columns. r must be constant.
    • Number of columns per row is n/r.
# The argument list is split into groups of two, one row per group
hive (default)> select stack(3, 'a', 10, 'b', 20, 'c', 30) as (col1, col2);
OK
col1    col2
a       10
b       20
c       30

# Used together with LATERAL VIEW
hive (default)> select tf.* from (select 0) t lateral view stack(2,'A',10,date '2015-01-01','B',20,date '2016-01-01') tf as col0,col1,col2;
OK
tf.col0 tf.col1 tf.col2
A       10      2015-01-01
B       20      2016-01-01
  • string1,...,stringn json_tuple(string jsonStr,string k1,...,string kn)
    • Takes JSON string and a set of n keys, and returns a tuple of n values. This is a more efficient version of the get_json_object UDF because it can get multiple keys with just one call.
    • More efficient than get_json_object because it can retrieve multiple values in a single call.
    • The JSON string is parsed only once per row.
# Extract the values of specific keys from a JSON string
hive (default)> select json_tuple('{"a":1, "b":"a"}', "a", "b") as (f1, f2);
OK
f1      f2
1       a
  • string1,...,stringn parse_url_tuple(string urlStr,string p1,...,string pn)
    • Takes URL string and a set of n URL parts, and returns a tuple of n values. This is similar to the parse_url() UDF but can extract multiple parts at once out of a URL. Valid part names are: HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO, QUERY:<KEY>.
hive (default)> SELECT b.* FROM (select 0)t LATERAL VIEW parse_url_tuple('http://www.facebook.com/abc/t.php?id=1&b=2', 'HOST', 'PATH', 'QUERY', 'QUERY:id') b as host, path, query, query_id;
OK
b.host  b.path  b.query b.query_id
www.facebook.com        /abc/t.php      id=1&b=2        1

References

【0】Hive wiki - LanguageManual UDF
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF

【1】BigDecimal: keeping/rounding decimal places
https://www.cnblogs.com/liqforstudy/p/5652517.html

【2】what-is-half-even-rounding-for
https://stackoverflow.com/questions/28134938/what-is-half-even-rounding-for

Original post: https://www.cnblogs.com/wadeyu/p/9784572.html