| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
public RelBuilder rename(List<? extends @Nullable String> fieldNames) {
final List<String> oldFieldNames = peek().getRowType().getFieldNames();
Preconditions.checkArgument(
fieldNames.size() <= oldFieldNames.size(), "More names than fields");
final List<String> newFieldNames = ne... | Ensures that the field names match those given.
<p>If all fields have the same name, adds nothing; if any fields do not have the same name,
adds a {@link Project}.
<p>Note that the names can be short-lived. Other {@code RelBuilder} operations make no
guarantees about the field names of the rows they produce.
@param ... | rename | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
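The contract above (add a Project only when some name actually differs) can be sketched independently of Calcite. `needsProject` is a hypothetical helper name, and for brevity this ignores the case where a null entry means "keep the old name":

```java
import java.util.List;

// Sketch of rename's no-op check: a Project is only needed when some
// requested name differs from the current one. Names beyond the requested
// list keep their old name, so only the prefix is compared.
public class RenameCheckDemo {
    public static boolean needsProject(List<String> oldNames, List<String> newNames) {
        if (newNames.size() > oldNames.size()) {
            throw new IllegalArgumentException("More names than fields");
        }
        return !oldNames.subList(0, newNames.size()).equals(newNames);
    }
}
```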
private @Nullable String inferAlias(List<RexNode> exprList, RexNode expr, int i) {
switch (expr.getKind()) {
case INPUT_REF:
final RexInputRef ref = (RexInputRef) expr;
return requireNonNull(stack.peek(), "empty frame stack")
.fields
... | Infers the alias of an expression.
<p>If the expression was created by {@link #alias}, replaces the expression in the project
list. | inferAlias | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder distinct() {
return aggregate(groupKey(fields()));
} | Creates an {@link Aggregate} that makes the relational expression distinct on all fields. | distinct | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
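Grouping on every output field with no aggregate calls, as `distinct()` does, amounts to whole-row deduplication. A plain-Java sketch of that semantics (rows modeled as lists of column values; first-occurrence order is kept here purely for a stable illustration):

```java
import java.util.LinkedHashSet;
import java.util.List;

// distinct-on-all-fields = deduplicate whole rows; the row itself is the
// group key and there are no aggregate calls to compute.
public class DistinctDemo {
    public static List<List<Object>> distinct(List<List<Object>> rows) {
        return List.copyOf(new LinkedHashSet<>(rows));
    }
}
```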
public RelBuilder aggregate(GroupKey groupKey, AggCall... aggCalls) {
return aggregate(groupKey, ImmutableList.copyOf(aggCalls));
} | Creates an {@link Aggregate} with an array of calls. | aggregate | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder aggregate(GroupKey groupKey, List<AggregateCall> aggregateCalls) {
return aggregate(
groupKey,
aggregateCalls.stream()
.map(
aggregateCall ->
new AggCallImpl2(
... | Creates an {@link Aggregate} with an array of {@link AggregateCall}s. | aggregate | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder aggregate(GroupKey groupKey, Iterable<AggCall> aggCalls) {
final Registrar registrar = new Registrar(fields(), peek().getRowType().getFieldNames());
final GroupKeyImpl groupKey_ = (GroupKeyImpl) groupKey;
ImmutableBitSet groupSet =
ImmutableBitSet.of(registrar.r... | Creates an {@link Aggregate} with multiple calls. | aggregate | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
private RelBuilder aggregate_(
ImmutableBitSet groupSet,
ImmutableList<ImmutableBitSet> groupSets,
RelNode input,
List<AggregateCall> aggregateCalls,
List<RexNode> extraNodes,
List<Field> inFields) {
final RelNode aggregate =
... | Finishes the implementation of {@link #aggregate} by creating an {@link Aggregate} and
pushing it onto the stack. | aggregate_ | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder union(boolean all) {
return union(all, 2);
} | Creates a {@link Union} of the two most recent relational expressions on the stack.
@param all Whether to create UNION ALL | union | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder union(boolean all, int n) {
return setOp(all, UNION, n);
} | Creates a {@link Union} of the {@code n} most recent relational expressions on the stack.
@param all Whether to create UNION ALL
@param n Number of inputs to the UNION operator | union | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder intersect(boolean all) {
return intersect(all, 2);
} | Creates an {@link Intersect} of the two most recent relational expressions on the stack.
@param all Whether to create INTERSECT ALL | intersect | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder intersect(boolean all, int n) {
return setOp(all, SqlKind.INTERSECT, n);
} | Creates an {@link Intersect} of the {@code n} most recent relational expressions on the
stack.
@param all Whether to create INTERSECT ALL
@param n Number of inputs to the INTERSECT operator | intersect | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder minus(boolean all) {
return minus(all, 2);
} | Creates a {@link Minus} of the two most recent relational expressions on the stack.
@param all Whether to create EXCEPT ALL | minus | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder minus(boolean all, int n) {
return setOp(all, SqlKind.EXCEPT, n);
} | Creates a {@link Minus} of the {@code n} most recent relational expressions on the stack.
@param all Whether to create EXCEPT ALL
@param n Number of inputs to the EXCEPT operator | minus | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
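The `all` flag in the union/intersect/minus family above selects multiset semantics. A plain-Java sketch over string keys, independent of Calcite: ALL keeps duplicates (INTERSECT ALL keeps the minimum multiplicity, EXCEPT ALL cancels one left row per matching right row), while the plain forms work on distinct rows:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Multiset semantics of UNION / INTERSECT / EXCEPT, with and without ALL.
public class SetOpDemo {
    public static List<String> union(List<String> l, List<String> r, boolean all) {
        List<String> out = new ArrayList<>(l);
        out.addAll(r);
        return all ? out : new ArrayList<>(new LinkedHashSet<>(out));
    }

    public static List<String> intersect(List<String> l, List<String> r, boolean all) {
        if (!all) {
            List<String> out = new ArrayList<>(new LinkedHashSet<>(l));
            out.retainAll(r);
            return out;
        }
        List<String> right = new ArrayList<>(r);
        List<String> out = new ArrayList<>();
        for (String v : l) {
            if (right.remove(v)) { // consume one right occurrence per match
                out.add(v);
            }
        }
        return out;
    }

    public static List<String> minus(List<String> l, List<String> r, boolean all) {
        if (!all) {
            List<String> out = new ArrayList<>(new LinkedHashSet<>(l));
            out.removeAll(r);
            return out;
        }
        List<String> right = new ArrayList<>(r);
        List<String> out = new ArrayList<>();
        for (String v : l) {
            if (!right.remove(v)) { // each right row cancels at most one left row
                out.add(v);
            }
        }
        return out;
    }
}
```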
@Experimental
public RelBuilder transientScan(String tableName) {
return this.transientScan(tableName, this.peek().getRowType());
} | Creates a {@link TableScan} on a {@link TransientTable} with the given name, using the row
type of the relational expression at the top of the stack.
@param tableName table name | transientScan | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Experimental
public RelBuilder transientScan(String tableName, RelDataType rowType) {
TransientTable transientTable = new ListTransientTable(tableName, rowType);
requireNonNull(relOptSchema, "relOptSchema");
RelOptTable relOptTable =
RelOptTableImpl.create(
... | Creates a {@link TableScan} on a {@link TransientTable} with the given name and type.
@param tableName table name
@param rowType row type of the table | transientScan | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
private RelBuilder tableSpool(Spool.Type readType, Spool.Type writeType, RelOptTable table) {
RelNode spool = struct.spoolFactory.createTableSpool(peek(), readType, writeType, table);
replaceTop(spool);
return this;
} | Creates a {@link TableSpool} for the most recent relational expression.
@param readType Spool's read type (as described in {@link Spool.Type})
@param writeType Spool's write type (as described in {@link Spool.Type})
@param table Table to write into | tableSpool | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Experimental
public RelBuilder repeatUnion(String tableName, boolean all) {
return repeatUnion(tableName, all, -1);
} | Creates a {@link RepeatUnion} associated with a {@link TransientTable}, without a maximum
number of iterations, i.e. {@code repeatUnion(tableName, all, -1)}.
@param tableName name of the {@link TransientTable} associated with the {@link RepeatUnion}
@param all whether duplicates are allowed, as in UNION ALL | repeatUnion | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Override
public RelNode visit(TableScan scan) {
final RelOptTable scanTable = scan.getTable();
final List<String> qualifiedName = scanTable.getQualifiedName();
if (qualifiedName.get(qualifiedName.size() - 1).equals(tableName)) {
relOptTable = scanTable;
... | Auxiliary class to find a certain RelOptTable based on its name. | visit | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder join(JoinRelType joinType, RexNode condition0, RexNode... conditions) {
return join(joinType, Lists.asList(condition0, conditions));
} | Creates a {@link Join} with an array of conditions. | join | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder join(JoinRelType joinType, Iterable<? extends RexNode> conditions) {
return join(joinType, and(conditions), ImmutableSet.of());
} | Creates a {@link Join} with multiple conditions. | join | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder join(JoinRelType joinType, RexNode condition) {
return join(joinType, condition, ImmutableSet.of());
} | Creates a {@link Join} with one condition. | join | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder join(
JoinRelType joinType, RexNode condition, Set<CorrelationId> variablesSet) {
Frame right = stack.pop();
final Frame left = stack.pop();
final RelNode join;
// FLINK BEGIN MODIFICATION
// keep behavior of Calcite 1.27.0
final boolean corr... | Creates a {@link Join} with correlating variables. | join | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder correlate(
JoinRelType joinType, CorrelationId correlationId, RexNode... requiredFields) {
return correlate(joinType, correlationId, ImmutableList.copyOf(requiredFields));
} | Creates a {@link Correlate} with a {@link CorrelationId} and an array of fields that are used
by correlation. | correlate | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder correlate(
JoinRelType joinType,
CorrelationId correlationId,
Iterable<? extends RexNode> requiredFields) {
Frame right = stack.pop();
final Registrar registrar = new Registrar(fields(), peek().getRowType().getFieldNames());
List<Integer> r... | Creates a {@link Correlate} with a {@link CorrelationId} and a list of fields that are used
by correlation. | correlate | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder join(JoinRelType joinType, String... fieldNames) {
final List<RexNode> conditions = new ArrayList<>();
for (String fieldName : fieldNames) {
conditions.add(equals(field(2, 0, fieldName), field(2, 1, fieldName)));
}
return join(joinType, conditions);
} | Creates a {@link Join} using USING syntax.
<p>For each of the field names, both left and right inputs must have a field of that name.
Constructs a join condition that the left and right fields are equal.
@param joinType Join type
@param fieldNames Field names | join | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
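The USING construction above (one equality condition per shared field name, ANDed together) can be modeled in plain Java for the inner-join case. Representing rows as column-name-to-value maps is an assumption of this sketch, not RelBuilder's representation:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// For each USING column, the left and right rows must hold equal values;
// matching rows are merged (shared column names coincide by construction).
public class UsingJoinDemo {
    public static List<Map<String, Object>> innerJoinUsing(
            List<Map<String, Object>> left,
            List<Map<String, Object>> right,
            String... fieldNames) {
        List<Map<String, Object>> out = new ArrayList<>();
        for (Map<String, Object> l : left) {
            for (Map<String, Object> r : right) {
                boolean matches = true;
                for (String f : fieldNames) {
                    if (!l.get(f).equals(r.get(f))) { // one equality per USING column
                        matches = false;
                        break;
                    }
                }
                if (matches) {
                    Map<String, Object> joined = new LinkedHashMap<>(l);
                    joined.putAll(r);
                    out.add(joined);
                }
            }
        }
        return out;
    }
}
```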
public RelBuilder semiJoin(Iterable<? extends RexNode> conditions) {
final Frame right = stack.pop();
final RelNode semiJoin =
struct.joinFactory.createJoin(
peek(),
right.rel,
ImmutableList.of(),
... | Creates a {@link Join} with {@link JoinRelType#SEMI}.
<p>A semi-join is a form of join that combines two relational expressions according to some
condition, and outputs only rows from the left input for which at least one row from the
right input matches. It only outputs columns from the left input, and ignores duplic... | semiJoin | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder semiJoin(RexNode... conditions) {
return semiJoin(ImmutableList.copyOf(conditions));
} | Creates a {@link Join} with {@link JoinRelType#SEMI}.
@see #semiJoin(Iterable) | semiJoin | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder antiJoin(Iterable<? extends RexNode> conditions) {
final Frame right = stack.pop();
final RelNode antiJoin =
struct.joinFactory.createJoin(
peek(),
right.rel,
ImmutableList.of(),
... | Creates an anti-join.
<p>An anti-join is a form of join that combines two relational expressions according to some
condition, but outputs only rows from the left input for which no rows from the right input
match.
<p>For example, {@code EMP anti-join DEPT} finds all {@code EMP} records that do not have a
correspondin... | antiJoin | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder antiJoin(RexNode... conditions) {
return antiJoin(ImmutableList.copyOf(conditions));
} | Creates an anti-join.
@see #antiJoin(Iterable) | antiJoin | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
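Semi-join and anti-join, as described above, are existence tests: semi-join keeps each left row with at least one match on the right (SQL EXISTS), anti-join keeps each left row with no match (NOT EXISTS), and both emit only left columns with no left row duplicated by multiple right matches. A plain-Java sketch where a key stands in for a whole row plus the join condition:

```java
import java.util.ArrayList;
import java.util.List;

// EXISTS / NOT EXISTS over keys: duplicates on the right never duplicate
// a left row, because each left row is emitted at most once.
public class SemiAntiJoinDemo {
    public static <T> List<T> semiJoin(List<T> left, List<T> right) {
        List<T> out = new ArrayList<>();
        for (T row : left) {
            if (right.contains(row)) {   // at least one match: keep once
                out.add(row);
            }
        }
        return out;
    }

    public static <T> List<T> antiJoin(List<T> left, List<T> right) {
        List<T> out = new ArrayList<>();
        for (T row : left) {
            if (!right.contains(row)) {  // no match at all: keep the row
                out.add(row);
            }
        }
        return out;
    }
}
```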
public RelBuilder as(final String alias) {
final Frame pair = stack.pop();
List<Field> newFields = Util.transform(pair.fields, field -> field.addAlias(alias));
stack.push(new Frame(pair.rel, ImmutableList.copyOf(newFields)));
return this;
} | Assigns a table alias to the top entry on the stack. | as | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
private static boolean allNull(@Nullable Object[] values, int column, int columnCount) {
for (int i = column; i < values.length; i += columnCount) {
if (values[i] != null) {
return false;
}
}
return true;
} | Returns whether all values for a given column are null. | allNull | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
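The `i += columnCount` stride above works because the values array is laid out row-major: `values[i]` belongs to column `i % columnCount`. A self-contained copy of the method with a worked example of that layout:

```java
// Stepping through a row-major flattened array by columnCount visits
// exactly one column's values.
public class AllNullDemo {
    public static boolean allNull(Object[] values, int column, int columnCount) {
        for (int i = column; i < values.length; i += columnCount) {
            if (values[i] != null) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // two rows of columns (a, b): (1, null) and (2, null)
        Object[] values = {1, null, 2, null};
        System.out.println(allNull(values, 0, 2)); // false
        System.out.println(allNull(values, 1, 2)); // true
    }
}
```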
public RelBuilder empty() {
final Frame frame = stack.pop();
final RelNode values =
struct.valuesFactory.createValues(
cluster, frame.rel.getRowType(), ImmutableList.of());
stack.push(new Frame(values, frame.fields));
return this;
} | Creates a relational expression that reads from an input and throws all of the rows away.
<p>Note that this method always pops one relational expression from the stack. {@code
values}, in contrast, does not pop any relational expressions, and always produces a leaf.
<p>The default implementation creates a {@link Valu... | empty | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder values(RelDataType rowType, Object... columnValues) {
final ImmutableList<ImmutableList<RexLiteral>> tupleList =
tupleList(rowType.getFieldCount(), columnValues);
RelNode values =
struct.valuesFactory.createValues(
cluster, rowTyp... | Creates a {@link Values} with a specified row type.
<p>This method can handle cases that {@link #values(String[], Object...)} cannot, such as all
values of a column being null, or there being zero rows.
@param rowType Row type
@param columnValues Values | values | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder values(Iterable<? extends List<RexLiteral>> tupleList, RelDataType rowType) {
RelNode values = struct.valuesFactory.createValues(cluster, rowType, copy(tupleList));
push(values);
return this;
} | Creates a {@link Values} with a specified row type.
<p>This method can handle cases that {@link #values(String[], Object...)} cannot, such as all
values of a column being null, or there being zero rows.
@param tupleList Tuple list
@param rowType Row type | values | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder values(RelDataType rowType) {
return values(ImmutableList.<ImmutableList<RexLiteral>>of(), rowType);
} | Creates a {@link Values} with a specified row type and zero rows.
@param rowType Row type | values | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
private static <E> ImmutableList<ImmutableList<E>> copy(Iterable<? extends List<E>> tupleList) {
final ImmutableList.Builder<ImmutableList<E>> builder = ImmutableList.builder();
int changeCount = 0;
for (List<E> literals : tupleList) {
final ImmutableList<E> literals2 = ImmutableList... | Converts an iterable of lists into an immutable list of immutable lists with the same
contents. Returns the same object if possible. | copy | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
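The "returns the same object if possible" behavior of `copy` is a general copy-on-change idiom: build the converted result, count actual changes, and hand back the original when nothing changed so callers can rely on cheap identity checks. A minimal sketch with a hypothetical `trimAll` transformation:

```java
import java.util.ArrayList;
import java.util.List;

// Copy-on-change: return the input object itself when no element changed.
public class CopyOnChangeDemo {
    public static List<String> trimAll(List<String> strings) {
        List<String> out = new ArrayList<>(strings.size());
        int changeCount = 0;
        for (String s : strings) {
            String t = s.trim();
            if (!t.equals(s)) {
                changeCount++;
            }
            out.add(t);
        }
        return changeCount == 0 ? strings : out;
    }
}
```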
public RelBuilder sortExchange(RelDistribution distribution, RelCollation collation) {
RelNode exchange =
struct.sortExchangeFactory.createSortExchange(peek(), distribution, collation);
replaceTop(exchange);
return this;
} | Creates a SortExchange by distribution and collation. | sortExchange | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder sort(int... fields) {
final ImmutableList.Builder<RexNode> builder = ImmutableList.builder();
for (int field : fields) {
builder.add(field < 0 ? desc(field(-field - 1)) : field(field));
}
return sortLimit(-1, -1, builder.build());
} | Creates a {@link Sort} by field ordinals.
<p>Negative fields mean descending: -1 means field(0) descending, -2 means field(1)
descending, etc. | sort | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
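The negative-ordinal convention above can be isolated into a tiny decoder, mirroring the `field < 0 ? desc(field(-field - 1)) : field(field)` expression in the method body (`decode` and its pair result are illustrative, not part of RelBuilder):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// A non-negative value n means field n ascending; a negative value encodes
// field (-n - 1) descending, so -1 is field 0 DESC and -2 is field 1 DESC.
public class SortOrdinalDemo {
    /** Returns (field ordinal, descending?). */
    public static Map.Entry<Integer, Boolean> decode(int field) {
        return field < 0
                ? new SimpleEntry<>(-field - 1, true)
                : new SimpleEntry<>(field, false);
    }
}
```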
public RelBuilder sort(RelCollation collation) {
final RelNode sort = struct.sortFactory.createSort(peek(), collation, null, null);
replaceTop(sort);
return this;
} | Creates a {@link Sort} by specifying collations. | sort | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
public RelBuilder pivot(
GroupKey groupKey,
Iterable<? extends AggCall> aggCalls,
Iterable<? extends RexNode> axes,
Iterable<? extends Map.Entry<String, ? extends Iterable<? extends RexNode>>> values) {
final List<RexNode> axisList = ImmutableList.copyOf(axes);
... | Creates a Pivot.
<p>To achieve the same effect as the SQL
<blockquote>
<pre>{@code
SELECT *
FROM (SELECT mgr, deptno, job, sal FROM emp)
PIVOT (SUM(sal) AS ss, COUNT(*) AS c
FOR (job, deptno)
IN (('CLERK', 10) AS c10, ('MANAGER', 20) AS m20))
}</pre>
</blockquote>
<p>use the builder as follows:
<blockquot... | pivot | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
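The pivot in the SQL above desugars to a GROUP BY on the remaining columns plus one filtered aggregate per IN-list entry: the `c10` alias becomes a column computed as `SUM(sal) FILTER (WHERE job = 'CLERK' AND deptno = 10)`. A plain-Java sketch of that rewrite, modeling rows as maps and showing only the `SUM(sal)` measure (the helper and its shapes are assumptions of this sketch):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Pivot = GROUP BY remaining columns + one filtered SUM per IN-list entry.
public class PivotDemo {
    /** Returns mgr -> (alias -> summed sal), for combos alias -> (job, deptno). */
    public static Map<Object, Map<String, Integer>> pivotSumSal(
            List<Map<String, Object>> rows, Map<String, List<Object>> combos) {
        Map<Object, Map<String, Integer>> out = new LinkedHashMap<>();
        for (Map<String, Object> row : rows) {
            Map<String, Integer> cols =
                    out.computeIfAbsent(row.get("mgr"), k -> new LinkedHashMap<>());
            combos.forEach((alias, axis) -> {
                // the FILTER condition: axis values must match (job, deptno)
                if (axis.get(0).equals(row.get("job"))
                        && axis.get(1).equals(row.get("deptno"))) {
                    cols.merge(alias, (Integer) row.get("sal"), Integer::sum);
                }
            });
        }
        return out;
    }
}
```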
public RelBuilder unpivot(
boolean includeNulls,
Iterable<String> measureNames,
Iterable<String> axisNames,
Iterable<
? extends
Map.Entry<
? extends List<? extends ... | Creates an Unpivot.
<p>To achieve the same effect as the SQL
<blockquote>
<pre>{@code
SELECT *
FROM (SELECT deptno, job, sal, comm FROM emp)
UNPIVOT INCLUDE NULLS (remuneration
FOR remuneration_type IN (comm AS 'commission',
sal AS 'salary'))
}</pre>
</blockquote>
<p>use the bui... | unpivot | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default AggCall sort(RexNode... orderKeys) {
return sort(ImmutableList.copyOf(orderKeys));
} | Returns a copy of this AggCall that sorts its input values by {@code orderKeys} before
aggregating, as in SQL's {@code WITHIN GROUP} clause. | sort | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default AggCall unique(RexNode... distinctKeys) {
return unique(ImmutableList.copyOf(distinctKeys));
} | Returns a copy of this AggCall that makes its input values unique by {@code distinctKeys}
before aggregating, as in SQL's {@code WITHIN DISTINCT} clause. | unique | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default AggCall distinct() {
return distinct(true);
} | Returns a copy of this AggCall that is distinct. | distinct | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
private static boolean checkIfCorrelated(
Set<CorrelationId> variablesSet,
JoinRelType joinType,
RelNode leftNode,
RelNode rightRel) {
if (variablesSet.size() != 1) {
return false;
}
CorrelationId id = Iterables.getOnlyElement(variables... | Checks for a {@link CorrelationId}, then validates that the id is not used on the left, and
finally checks whether the id is actually used on the right.
@return true if a correlation id is present and used
@throws IllegalArgumentException if the {@link CorrelationId} is used by the left side, or if
a {@link CorrelationId} is present and the... | checkIfCorrelated | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default <R> R let(Function<OverCall, R> consumer) {
return consumer.apply(this);
} | Performs an action on this OverCall. | let | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default OverCall rowsUnbounded() {
return rowsBetween(
RexWindowBounds.UNBOUNDED_PRECEDING, RexWindowBounds.UNBOUNDED_FOLLOWING);
} | Sets an unbounded ROWS window, equivalent to SQL {@code ROWS BETWEEN UNBOUNDED PRECEDING
AND UNBOUNDED FOLLOWING}. | rowsUnbounded | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default OverCall rowsFrom(RexWindowBound lower) {
return rowsBetween(lower, RexWindowBounds.CURRENT_ROW);
} | Sets a ROWS window with a lower bound, equivalent to SQL {@code ROWS BETWEEN lower AND
CURRENT ROW}. | rowsFrom | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default OverCall rowsTo(RexWindowBound upper) {
return rowsBetween(RexWindowBounds.CURRENT_ROW, upper);
} | Sets a ROWS window with an upper bound, equivalent to SQL {@code ROWS BETWEEN CURRENT ROW
AND upper}. | rowsTo | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default OverCall rangeUnbounded() {
return rangeBetween(
RexWindowBounds.UNBOUNDED_PRECEDING, RexWindowBounds.UNBOUNDED_FOLLOWING);
} | Sets an unbounded RANGE window, equivalent to SQL {@code RANGE BETWEEN UNBOUNDED
PRECEDING AND UNBOUNDED FOLLOWING}. | rangeUnbounded | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default OverCall rangeFrom(RexWindowBound lower) {
return rangeBetween(lower, RexWindowBounds.CURRENT_ROW);
} | Sets a RANGE window with a lower bound, equivalent to SQL {@code RANGE BETWEEN lower AND
CURRENT ROW}. | rangeFrom | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
default OverCall rangeTo(RexWindowBound upper) {
return rangeBetween(RexWindowBounds.CURRENT_ROW, upper);
} | Sets a RANGE window with an upper bound, equivalent to SQL {@code RANGE BETWEEN CURRENT
ROW AND upper}. | rangeTo | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
int addedFieldCount() {
return extraNodes.size() - originalExtraNodes.size();
} | Returns the number of fields added. | addedFieldCount | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
Field addAlias(String alias) {
if (left.contains(alias)) {
return this;
}
final ImmutableSet<String> aliasList =
ImmutableSet.<String>builder().addAll(left).add(alias).build();
return new Field(aliasList, right);
} | A field that belongs to a stack {@link Frame}. | addAlias | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Override
public RexNode visitInputRef(RexInputRef inputRef) {
final RelDataType leftRowType = left.getRowType();
final RexBuilder rexBuilder = getRexBuilder();
final int leftCount = leftRowType.getFieldCount();
if (inputRef.getIndex() < leftCount) {
... | Shuttle that shifts a predicate's inputs to the left, replacing early ones with references to
a {@link RexCorrelVariable}. | visitInputRef | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Value.Default
default boolean dedupAggregateCalls() {
return true;
} | Whether {@link RelBuilder#aggregate} should eliminate duplicate aggregate calls; default
true. | dedupAggregateCalls | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Value.Default
default boolean pruneInputOfAggregate() {
return true;
} | Whether {@link RelBuilder#aggregate} should prune unused input columns; default true. | pruneInputOfAggregate | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Value.Default
default boolean pushJoinCondition() {
return false;
} | Whether to push down join conditions; default false (but {@link
SqlToRelConverter#config()} by default sets this to true). | pushJoinCondition | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Value.Default
default boolean simplify() {
return true;
} | Whether to simplify expressions; default true. | simplify | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Value.Default
default boolean simplifyValues() {
return true;
} | Whether to simplify {@code Union(Values, Values)} or {@code Union(Project(Values))} to
{@code Values}; default true. | simplifyValues | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
@Value.Default
default boolean aggregateUnique() {
return false;
} | Whether to create an Aggregate even if we know that the input is already unique; default
false. | aggregateUnique | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/calcite/tools/RelBuilder.java | Apache-2.0 |
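The run of `@Value.Default` rows above all follow the same Immutables-style configuration pattern: defaults live in interface default methods, and a variant overrides only the flags it changes. A hedged sketch using plain interface defaults (`BuilderConfig`/`ConverterConfig` are illustrative names, not Calcite's `RelBuilder.Config`):

```java
// Defaults as interface default methods; implementations override selectively.
interface BuilderConfig {
    default boolean dedupAggregateCalls() { return true; }
    default boolean pruneInputOfAggregate() { return true; }
    default boolean pushJoinCondition() { return false; }
    default boolean simplify() { return true; }
    default boolean aggregateUnique() { return false; }
}

// Mirrors how SqlToRelConverter.config() flips pushJoinCondition to true
// while inheriting every other default unchanged.
class ConverterConfig implements BuilderConfig {
    @Override
    public boolean pushJoinCondition() { return true; }
}
```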
private static boolean declaredDescriptorColumn(SelectScope scope, Column column) {
if (!(scope.getNode() instanceof ExplicitTableSqlSelect)) {
return false;
}
final ExplicitTableSqlSelect select = (ExplicitTableSqlSelect) scope.getNode();
return select.descriptors.stream()
... | Returns whether the given column has been declared in a {@link SqlKind#DESCRIPTOR} next to a
{@link SqlKind#EXPLICIT_TABLE} within TVF operands. | declaredDescriptorColumn | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/FlinkCalciteSqlValidator.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/FlinkCalciteSqlValidator.java | Apache-2.0 |
@Override
public RelBuilder aggregate(
RelBuilder.GroupKey groupKey, Iterable<RelBuilder.AggCall> aggCalls) {
// build a relNode, the build() may also return a project
RelNode relNode = super.aggregate(groupKey, aggCalls).build();
if (relNode instanceof LogicalAggregate) {
... | Build non-window aggregate for either aggregate or table aggregate. | aggregate | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/FlinkRelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/FlinkRelBuilder.java | Apache-2.0 |
public RelBuilder windowAggregate(
LogicalWindow window,
GroupKey groupKey,
List<NamedWindowProperty> namedProperties,
Iterable<AggCall> aggCalls) {
// build logical aggregate
// Because of:
// [CALCITE-3763] RelBuilder.aggregate should prune unus... | Build window aggregate for either aggregate or table aggregate. | windowAggregate | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/FlinkRelBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/FlinkRelBuilder.java | Apache-2.0 |
@Override
public RexLiteral makeZeroLiteral(RelDataType type) {
switch (type.getSqlTypeName()) {
case TIMESTAMP_WITH_LOCAL_TIME_ZONE:
return makeLiteral(new TimestampString(1970, 1, 1, 0, 0, 0), type);
default:
return super.makeZeroLiteral(type);
... | Creates a literal of the default value for the given type.
<p>This value is:
<ul>
<li>0 for numeric types;
<li>FALSE for BOOLEAN;
<li>The epoch for TIMESTAMP and DATE;
<li>Midnight for TIME;
<li>The empty string for string types (CHAR, BINARY, VARCHAR, VARBINARY).
</ul>
<p>Uses '1970-01-01 00:00:00'(epoch ... | makeZeroLiteral | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/FlinkRexBuilder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/FlinkRexBuilder.java | Apache-2.0 |
public String expand(String expr) {
final CalciteParser parser = planner.parser();
final SqlNode node = parser.parseExpression(expr);
final SqlNode validated = planner.validateExpression(node, inputRowType, outputType);
return validated.toSqlString(sqlDialect).getSql();
} | Converts the given SQL expression string to an expanded string with fully qualified function
calls and escaped identifiers.
<p>E.g. {@code my_udf(f0) + 1} to {@code `my_catalog`.`my_database`.`my_udf`(`f0`) + 1} | expand | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/SqlToRexConverter.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/SqlToRexConverter.java | Apache-2.0 |
public RexNode convertToRexNode(String expr) {
final CalciteParser parser = planner.parser();
return planner.rex(parser.parseExpression(expr), inputRowType, outputType);
} | Converts a SQL expression to a {@link RexNode} expression.
@param expr SQL expression e.g. {@code `my_catalog`.`my_database`.`my_udf`(`f0`) + 1} | convertToRexNode | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/SqlToRexConverter.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/SqlToRexConverter.java | Apache-2.0 |
public RexNode convertToRexNode(SqlNode sqlNode) {
return planner.rex(sqlNode, inputRowType, outputType);
} | Converts a {@link SqlNode} to a {@link RexNode} expression. | convertToRexNode | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/SqlToRexConverter.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/SqlToRexConverter.java | Apache-2.0 |
public RexNode[] convertToRexNodes(String[] exprs) {
final CalciteParser parser = planner.parser();
return Stream.of(exprs)
.map(parser::parseExpression)
.map(node -> planner.rex(node, inputRowType, null))
.toArray(RexNode[]::new);
} | Converts an array of SQL expressions to an array of {@link RexNode} expressions.
@param exprs SQL expressions, e.g. {@code `my_catalog`.`my_database`.`my_udf`(`f0`) + 1} | convertToRexNodes | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/SqlToRexConverter.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/SqlToRexConverter.java | Apache-2.0 |
public static RelNode convertCollectToRel(
FlinkRelBuilder relBuilder,
RelNode input,
CollectModifyOperation collectModifyOperation,
ReadableConfig configuration,
ClassLoader classLoader) {
final DataTypeFactory dataTypeFactory =
unwrap... | Converts an {@link TableResult#collect()} sink to a {@link RelNode}. | convertCollectToRel | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
public static RelNode convertExternalToRel(
FlinkRelBuilder relBuilder,
RelNode input,
ExternalModifyOperation externalModifyOperation) {
final DynamicTableSink tableSink =
new ExternalDynamicSink(
externalModifyOperation.getChangelogMo... | Converts an external sink (i.e. further {@link DataStream} transformations) to a {@link
RelNode}. | convertExternalToRel | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
public static RelNode validateSchemaAndApplyImplicitCast(
RelNode query,
ResolvedSchema sinkSchema,
String tableDebugName,
DataTypeFactory dataTypeFactory,
FlinkTypeFactory typeFactory) {
final RowType sinkType =
(RowType)
... | Checks if the given query can be written into the given sink's table schema. | validateSchemaAndApplyImplicitCast | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
public static RelNode validateSchemaAndApplyImplicitCast(
RelNode query,
List<DataType> targetTypes,
String tableDebugName,
DataTypeFactory dataTypeFactory,
FlinkTypeFactory typeFactory) {
final RowType sinkType =
(RowType)
... | Checks if the given query can be written into the given target types. | validateSchemaAndApplyImplicitCast | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
private static Tuple2<RelNode, int[]> convertToRowLevelDelete(
LogicalTableModify tableModify,
ContextResolvedTable contextResolvedTable,
SupportsRowLevelDelete.RowLevelDeleteInfo rowLevelDeleteInfo,
String tableDebugName,
DataTypeFactory dataTypeFactory,
... | Convert tableModify node to a RelNode representing for row-level delete.
@return a tuple containing the RelNode and the indices of the required physical columns for
row-level delete. | convertToRowLevelDelete | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
private static int[] getPhysicalColumnIndices(List<Integer> colIndexes, ResolvedSchema schema) {
return colIndexes.stream()
.filter(i -> schema.getColumns().get(i).isPhysical())
.mapToInt(i -> i)
.toArray();
} | Returns the indices from {@code colIndexes} that belong to physical columns. | getPhysicalColumnIndices | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
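The filter shown in `getPhysicalColumnIndices` can be sketched with a stand-in column type (`Column` here is an illustrative record, not Flink's `org.apache.flink.table.catalog.Column`):

```java
import java.util.List;

// Keep only the indices whose schema column is physical
// (i.e. not a computed or metadata column).
final class PhysicalIndices {
    record Column(String name, boolean isPhysical) {}

    static int[] physicalOnly(List<Integer> colIndexes, List<Column> columns) {
        return colIndexes.stream()
                .filter(i -> columns.get(i).isPhysical())
                .mapToInt(Integer::intValue)
                .toArray();
    }
}
```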
private static void convertPredicateToNegative(LogicalTableModify tableModify) {
RexBuilder rexBuilder = tableModify.getCluster().getRexBuilder();
RelNode input = tableModify.getInput();
LogicalFilter newFilter;
// if the input is a table scan, there's no predicate which means it's alway... | Convert the predicate in WHERE clause to the negative predicate. | convertPredicateToNegative | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
private static DataType fixCollectDataType(
DataTypeFactory dataTypeFactory, ResolvedSchema schema) {
final DataType fixedDataType =
DataTypeUtils.transform(
dataTypeFactory,
schema.toSourceRowDataType(),
TypeTra... | Temporary solution until we drop legacy types. | fixCollectDataType | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
private static void pushMetadataProjection(
FlinkRelBuilder relBuilder,
FlinkTypeFactory typeFactory,
ResolvedSchema schema,
DynamicTableSink sink) {
final RexBuilder rexBuilder = relBuilder.getRexBuilder();
final List<Column> columns = schema.getColumns()... | Creates a projection that reorders physical and metadata columns according to the consumed
data type of the sink. It casts metadata columns into the expected data type.
@see SupportsWritingMetadata | pushMetadataProjection | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
private static void prepareDynamicSink(
String tableDebugName,
Map<String, String> staticPartitions,
boolean isOverwrite,
DynamicTableSink sink,
ResolvedCatalogTable table,
List<SinkAbilitySpec> sinkAbilitySpecs,
int[][] targetColumns) ... | Prepares the given {@link DynamicTableSink}. It checks whether the sink is compatible with
the INSERT INTO clause and applies initial parameters. | prepareDynamicSink | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
private static RowType createConsumedType(ResolvedSchema schema, DynamicTableSink sink) {
final Map<String, DataType> metadataMap = extractMetadataMap(sink);
final Stream<RowField> physicalFields =
schema.getColumns().stream()
.filter(Column::isPhysical)
... | Returns the {@link DataType} that a sink should consume as the output from the runtime.
<p>The format looks as follows: {@code PHYSICAL COLUMNS + PERSISTED METADATA COLUMNS} | createConsumedType | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSinkUtils.java | Apache-2.0 |
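The consumed-type layout described above ({@code PHYSICAL COLUMNS + PERSISTED METADATA COLUMNS}) can be sketched with a stand-in field type (`Field` here simplifies `RowType.RowField`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Physical columns first, in schema order, then persisted metadata columns.
final class ConsumedType {
    record Field(String name, String type) {}

    static List<Field> layout(List<Field> physical, Map<String, String> persistedMetadata) {
        List<Field> out = new ArrayList<>(physical);
        persistedMetadata.forEach((key, type) -> out.add(new Field(key, type)));
        return out;
    }
}
```

Passing an order-preserving map (e.g. `LinkedHashMap`) keeps the metadata columns in a deterministic position behind the physical ones.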
public static RelNode convertDataStreamToRel(
boolean isBatchMode,
ReadableConfig config,
FlinkRelBuilder relBuilder,
ContextResolvedTable contextResolvedTable,
DataStream<?> dataStream,
DataType physicalDataType,
boolean isTopLevelReco... | Converts a given {@link DataStream} to a {@link RelNode}. It adds helper projections if
necessary. | convertDataStreamToRel | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
public static RelNode convertSourceToRel(
boolean isBatchMode,
ReadableConfig config,
FlinkRelBuilder relBuilder,
ContextResolvedTable contextResolvedTable,
FlinkStatistic statistic,
List<RelHint> hints,
DynamicTableSource tableSource) ... | Converts a given {@link DynamicTableSource} to a {@link RelNode}. It adds helper projections
if necessary. | convertSourceToRel | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
public static void prepareDynamicSource(
String tableDebugName,
ResolvedCatalogTable table,
DynamicTableSource source,
boolean isBatchMode,
ReadableConfig config,
List<SourceAbilitySpec> sourceAbilities) {
final ResolvedSchema schema = tabl... | Prepares the given {@link DynamicTableSource}. It checks whether the source is compatible with
the given schema and applies initial parameters. | prepareDynamicSource | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
public static List<MetadataColumn> createRequiredMetadataColumns(
ResolvedSchema schema, DynamicTableSource source) {
final Map<String, MetadataColumn> metadataKeysToMetadataColumns =
createMetadataKeysToMetadataColumnsMap(schema);
final Map<String, DataType> metadataMap = e... | Returns a list of required metadata columns. Ordered by the iteration order of {@link
SupportsReadingMetadata#listReadableMetadata()}.
<p>This method assumes that source and schema have been validated via {@link
#prepareDynamicSource(String, ResolvedCatalogTable, DynamicTableSource, boolean,
ReadableConfig, List)}. | createRequiredMetadataColumns | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
public static Map<String, MetadataColumn> createMetadataKeysToMetadataColumnsMap(
ResolvedSchema schema) {
final List<MetadataColumn> metadataColumns = extractMetadataColumns(schema);
Map<String, MetadataColumn> metadataKeysToMetadataColumns = new HashMap<>();
for (MetadataColumn c... | Returns a map recording the mapping from metadata keys to metadata columns in the input
schema. | createMetadataKeysToMetadataColumnsMap | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
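The key-to-column mapping built by `createMetadataKeysToMetadataColumnsMap` follows a simple rule: a metadata column is addressed by its explicit metadata key when one is declared (`m METADATA FROM 'k'`), otherwise by its own column name. A hedged sketch (`MetadataColumn` is a stand-in record):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Index metadata columns by their effective metadata key.
final class MetadataIndex {
    record MetadataColumn(String name, String metadataKey) {} // key may be null

    static Map<String, MetadataColumn> byKey(List<MetadataColumn> columns) {
        Map<String, MetadataColumn> map = new HashMap<>();
        for (MetadataColumn c : columns) {
            map.put(c.metadataKey() != null ? c.metadataKey() : c.name(), c);
        }
        return map;
    }
}
```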
public static RowType createProducedType(ResolvedSchema schema, DynamicTableSource source) {
final Map<String, DataType> metadataMap = extractMetadataMap(source);
final Stream<RowField> physicalFields =
((RowType) schema.toPhysicalRowDataType().getLogicalType()).getFields().stream();
... | Returns the {@link DataType} that a source should produce as the input into the runtime.
<p>The format looks as follows: {@code PHYSICAL COLUMNS + METADATA COLUMNS}
<p>Physical columns use the table schema's name. Metadata columns use the metadata key as
their name. | createProducedType | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
public static boolean isUpsertSource(
ResolvedSchema resolvedSchema, DynamicTableSource tableSource) {
if (!(tableSource instanceof ScanTableSource)) {
return false;
}
ChangelogMode mode = ((ScanTableSource) tableSource).getChangelogMode();
boolean isUpsertMode =
... | Returns true if the table is an upsert source. | isUpsertSource | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
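The `isUpsertSource` check combines two conditions visible in the snippet: the changelog must contain UPDATE_AFTER without UPDATE_BEFORE, and the schema must declare a primary key. A hedged sketch (`RowKind` and the flat boolean shape simplify Flink's `ChangelogMode`/`ResolvedSchema` APIs):

```java
import java.util.List;
import java.util.Set;

// Upsert = partial-update changelog (no UPDATE_BEFORE) keyed by a primary key.
final class UpsertCheck {
    enum RowKind { INSERT, UPDATE_BEFORE, UPDATE_AFTER, DELETE }

    static boolean isUpsertSource(Set<RowKind> changelog, List<String> primaryKey) {
        boolean upsertMode =
                changelog.contains(RowKind.UPDATE_AFTER)
                        && !changelog.contains(RowKind.UPDATE_BEFORE);
        return upsertMode && !primaryKey.isEmpty();
    }
}
```

A retract stream (UPDATE_BEFORE plus UPDATE_AFTER) fails the first condition even when a primary key exists, which is why the two checks are combined.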
public static boolean isSourceChangeEventsDuplicate(
ResolvedSchema resolvedSchema,
DynamicTableSource tableSource,
TableConfig tableConfig) {
if (!(tableSource instanceof ScanTableSource)) {
return false;
}
ChangelogMode mode = ((ScanTableSource) ... | Returns true if the table source produces duplicate change events. | isSourceChangeEventsDuplicate | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
public static boolean changelogNormalizeEnabled(
boolean eventTimeSnapshotRequired,
ResolvedSchema resolvedSchema,
DynamicTableSource tableSource,
TableConfig tableConfig) {
return !eventTimeSnapshotRequired
&& (isUpsertSource(resolvedSchema, table... | Returns true if the changelogNormalize should be enabled. | changelogNormalizeEnabled | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
private static void pushWatermarkAssigner(FlinkRelBuilder relBuilder, ResolvedSchema schema) {
final ExpressionConverter converter = new ExpressionConverter(relBuilder);
final RelDataType inputRelDataType = relBuilder.peek().getRowType();
// schema resolver has checked before that only one spec... | Creates a specialized node for assigning watermarks. | pushWatermarkAssigner | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
private static void pushGeneratedProjection(FlinkRelBuilder relBuilder, ResolvedSchema schema) {
final ExpressionConverter converter = new ExpressionConverter(relBuilder);
final List<RexNode> projection =
schema.getColumns().stream()
.map(
... | Creates a projection that adds computed columns and finalizes the table schema. | pushGeneratedProjection | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/connectors/DynamicSourceUtils.java | Apache-2.0 |
public static SqlParserImplFactory create(SqlConformance conformance) {
if (conformance == FlinkSqlConformance.DEFAULT) {
return FlinkSqlParserImpl.FACTORY;
} else {
throw new TableException("Unsupported SqlConformance: " + conformance);
}
} | A util method to create SqlParserImplFactory according to SqlConformance. | create | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/delegation/FlinkSqlParserFactories.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/delegation/FlinkSqlParserFactories.java | Apache-2.0 |
private SqlOperatorTable getSqlOperatorTable(CalciteConfig calciteConfig) {
return JavaScalaConversionUtil.<SqlOperatorTable>toJava(calciteConfig.getSqlOperatorTable())
.map(
operatorTable -> {
if (calciteConfig.replacesSqlOperatorTable()) {
... | Returns the operator table for this environment including a custom Calcite configuration. | getSqlOperatorTable | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/delegation/PlannerContext.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/delegation/PlannerContext.java | Apache-2.0 |
private SqlOperatorTable getBuiltinSqlOperatorTable() {
return SqlOperatorTables.chain(
new FunctionCatalogOperatorTable(
context.getFunctionCatalog(),
context.getCatalogManager().getDataTypeFactory(),
typeFactory,
... | Returns the builtin operator table and the external operator table for this environment. | getBuiltinSqlOperatorTable | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/delegation/PlannerContext.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/delegation/PlannerContext.java | Apache-2.0 |
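The `SqlOperatorTables.chain(...)` call above composes several operator tables so that a lookup consults each table in order and the first match wins. A stdlib-only sketch of that chaining idea (`OperatorTable` is an illustrative stand-in, not Calcite's `SqlOperatorTable`):

```java
import java.util.List;
import java.util.Optional;

// First-match-wins composition of lookup tables.
interface OperatorTable {
    Optional<String> lookup(String name);

    static OperatorTable chain(List<OperatorTable> tables) {
        return name -> tables.stream()
                .map(t -> t.lookup(name))
                .filter(Optional::isPresent)
                .map(Optional::get)
                .findFirst();
    }
}
```

Placing the catalog's function table before the builtin table lets user-defined functions shadow builtins of the same name, which matches the chaining order shown in the snippet.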
public ResolvedExpression resolve(Expression expression) {
List<ResolvedExpression> resolved = resolver.resolve(Collections.singletonList(expression));
Preconditions.checkArgument(resolved.size() == 1);
return resolved.get(0);
} | Planner expression resolver for {@link UnresolvedCallExpression}. | resolve | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/CallExpressionResolver.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/CallExpressionResolver.java | Apache-2.0 |
public static Set<String> findReferencedColumn(String columnName, ResolvedSchema schema) {
Column column =
schema.getColumn(columnName)
.orElseThrow(
() ->
new ValidationException(
... | Finds the referenced column names from which the computed column is derived.
@param columnName the name of the column
@param schema the schema containing the computed column definition
@return the referenced column names | findReferencedColumn | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/ColumnReferenceFinder.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/ColumnReferenceFinder.java | Apache-2.0 |
@Override
protected ResolvedExpression defaultMethod(Expression expression) {
if (expression instanceof UnresolvedReferenceExpression) {
UnresolvedReferenceExpression expr = (UnresolvedReferenceExpression) expression;
String name = expr.getName();
int localIndex = ArrayUt... | Abstract class to resolve the expressions in {@link DeclarativeAggregateFunction}. | defaultMethod | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/DeclarativeExpressionResolver.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/DeclarativeExpressionResolver.java | Apache-2.0 |
default RexFactory getRexFactory() {
return unwrapContext(getRelBuilder()).getRexFactory();
} | Convert expression to RexNode, used by children conversion. | getRexFactory | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/converter/CallExpressionConvertRule.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/converter/CallExpressionConvertRule.java | Apache-2.0 |
@Override
public Optional<RexNode> convert(CallExpression call, ConvertContext context) {
return customizedConverters
.getConverter(call.getFunctionDefinition())
.map(converter -> converter.convert(call, context));
} | Customized {@link CallExpressionConvertRule}, Functions conversion here all require special
logic, and there may be some special rules, such as needing get the literal values of inputs,
such as converting to combinations of functions, to convert to RexNode of calcite. | convert | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/converter/CustomizedConvertRule.java | Apache-2.0 |
@SuppressWarnings("unchecked")
public static <T> T extractValue(ValueLiteralExpression literal, Class<T> clazz) {
final Optional<Object> possibleObject = literal.getValueAs(Object.class);
if (!possibleObject.isPresent()) {
throw new TableException("Invalid literal.");
}
f... | Extracts a value from a literal. Including planner-specific instances such as {@link
DecimalData}. | extractValue | java | apache/flink | flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java | https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java | Apache-2.0 |