Dataset Preview
The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      JSON parse error: Invalid value. in row 14
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 145, in _generate_tables
                  dataset = json.load(f)
                File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
                  return loads(fp.read(),
                File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
                  return _default_decoder.decode(s)
                File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
                  raise JSONDecodeError("Extra data", s, end)
              json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 4501)
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
                  for _, table in generator:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 148, in _generate_tables
                  raise e
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
                  pa_table = paj.read_json(
                File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 14
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
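In this traceback the json.load fallback fails with "Extra data: line 2 column 1", which indicates the file contains more than one top-level JSON value (it is effectively newline-delimited JSON), and the original ArrowInvalid ("JSON parse error: Invalid value. in row 14") points at a line pyarrow could not parse. Below is a minimal sketch for locating such a line locally; it assumes the data file is newline-delimited JSON and uses a hypothetical file name.

import json

# Hypothetical path to one of the dataset's data files; adjust to the real file name.
DATA_FILE = "data/train.jsonl"

# Newline-delimited JSON requires exactly one JSON value per line,
# so parse each non-empty line independently and report failures.
bad_lines = []
with open(DATA_FILE, encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        stripped = line.strip()
        if not stripped:
            continue
        try:
            json.loads(stripped)
        except json.JSONDecodeError as err:
            bad_lines.append((lineno, str(err)))

if bad_lines:
    for lineno, msg in bad_lines:
        print(f"line {lineno}: {msg}")
else:
    print("every line parsed as a standalone JSON value")

Fixing or dropping the reported line and re-uploading the file should allow the Parquet conversion, and with it the viewer, to succeed.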

Columns in the preview:
text (string)
meta (dict)
<html> <head> <meta name="viewport" content="width=device-width, initial-scale=1" charset="UTF-8"> <title>ordinal</title> <link href="../../../../images/logo-icon.svg" rel="icon" type="image/svg"> <script>var pathToRoot = "../../../../";</script> <script type="text/javascript" src="../../../../scripts/sourceset_dependencies.js" async="async"></script> <link href="../../../../styles/style.css" rel="Stylesheet"> <link href="../../../../styles/logo-styles.css" rel="Stylesheet"> <link href="../../../../styles/jetbrains-mono.css" rel="Stylesheet"> <link href="../../../../styles/main.css" rel="Stylesheet"> <script type="text/javascript" src="../../../../scripts/clipboard.js" async="async"></script> <script type="text/javascript" src="../../../../scripts/navigation-loader.js" async="async"></script> <script type="text/javascript" src="../../../../scripts/platform-content-handler.js" async="async"></script> <script type="text/javascript" src="../../../../scripts/main.js" async="async"></script> </head> <body> <div id="container"> <div id="leftColumn"> <div id="logo"></div> <div id="paneSearch"></div> <div id="sideMenu"></div> </div> <div id="main"> <div id="leftToggler"><span class="icon-toggler"></span></div> <script type="text/javascript" src="../../../../scripts/pages.js"></script> <script type="text/javascript" src="../../../../scripts/main.js"></script> <div class="main-content" id="content" pageIds="org.hexworks.zircon.internal.resource/ColorThemeResource.POLA/ordinal/#/PointingToDeclaration//-828656838"> <div class="navigation-wrapper" id="navigation-wrapper"> <div class="breadcrumbs"><a href="../../../index.html">zircon.core</a>/<a href="../../index.html">org.hexworks.zircon.internal.resource</a>/<a href="../index.html">ColorThemeResource</a>/<a href="index.html">POLA</a>/<a href="ordinal.html">ordinal</a></div> <div class="pull-right d-flex"> <div class="filter-section" id="filter-section"><button class="platform-tag platform-selector common-like" data-active="" data-filter=":zircon.core:dokkaHtml/commonMain">common</button></div> <div id="searchBar"></div> </div> </div> <div class="cover "> <h1 class="cover"><span>ordinal</span></h1> </div> <div class="divergent-group" data-filterable-current=":zircon.core:dokkaHtml/commonMain" data-filterable-set=":zircon.core:dokkaHtml/commonMain"><div class="with-platform-tags"><span class="pull-right"></span></div> <div> <div class="platform-hinted " data-platform-hinted="data-platform-hinted"><div class="content sourceset-depenent-content" data-active="" data-togglable=":zircon.core:dokkaHtml/commonMain"><div class="symbol monospace">val <a href="ordinal.html">ordinal</a>: <a href="https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-int/index.html">Int</a><span class="top-right-position"><span class="copy-icon"></span><div class="copy-popup-wrapper popup-to-left"><span class="copy-popup-icon"></span><span>Content copied to clipboard</span></div></span></div></div></div> </div> </div> </div> <div class="footer"><span class="go-to-top-icon"><a href="#content"></a></span><span>© 2020 Copyright</span><span class="pull-right"><span>Sponsored and developed by dokka</span><a href="https://github.com/Kotlin/dokka"><span class="padded-icon"></span></a></span></div> </div> </div> </body> </html>
{ "content_hash": "358a96552e3a39dfdbb2a8d3a34b2963", "timestamp": "", "source": "github", "line_count": 51, "max_line_length": 565, "avg_line_length": 70.13725490196079, "alnum_prop": 0.6287391668996366, "repo_name": "Hexworks/zircon", "id": "1a2d296986f304ed82c85cb4cbdb4a947cd31f58", "size": "3578", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "docs/2020.2.0-RELEASE-KOTLIN/zircon.core/zircon.core/org.hexworks.zircon.internal.resource/-color-theme-resource/-p-o-l-a/ordinal.html", "mode": "33188", "license": "apache-2.0", "language": [ { "name": "Java", "bytes": "121457" }, { "name": "Kotlin", "bytes": "1792092" }, { "name": "Shell", "bytes": "152" } ] }
// ----------------------------------------------------------------------------------------- // <copyright file="BlobServerEncryptionTests.cs" company="Microsoft"> // Copyright 2016 Microsoft Corporation // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. // </copyright> // ----------------------------------------------------------------------------------------- namespace Microsoft.Azure.Storage.Blob { using Microsoft.VisualStudio.TestTools.UnitTesting; using System.Threading.Tasks; [TestClass] public class BlobServerEncryptionTests : BlobTestBase { // // Use TestInitialize to run code before running each test [TestInitialize()] public void MyTestInitialize() { if (TestBase.BlobBufferManager != null) { TestBase.BlobBufferManager.OutstandingBufferCount = 0; } } // // Use TestCleanup to run code after each test has run [TestCleanup()] public void MyTestCleanup() { if (TestBase.BlobBufferManager != null) { Assert.AreEqual(0, TestBase.BlobBufferManager.OutstandingBufferCount); } } [TestMethod] [Description("Download encrypted blob attributes.")] [TestCategory(ComponentCategory.Blob)] [TestCategory(TestTypeCategory.UnitTest)] [TestCategory(SmokeTestCategory.NonSmoke)] [TestCategory(TenantTypeCategory.DevStore)] [TestCategory(TenantTypeCategory.DevFabric), TestCategory(TenantTypeCategory.Cloud)] public async Task TestBlobAttributesEncryptionAsync() { CloudBlobContainer container = GetRandomContainerReference(); try { await container.CreateIfNotExistsAsync(); CloudBlockBlob blob = container.GetBlockBlobReference(BlobTestBase.GetRandomContainerName()); await blob.UploadTextAsync("test"); await blob.FetchAttributesAsync(); Assert.IsTrue(blob.Properties.IsServerEncrypted); CloudBlockBlob testBlob = container.GetBlockBlobReference(blob.Name); await testBlob.DownloadTextAsync(); Assert.IsTrue(testBlob.Properties.IsServerEncrypted); } finally { await container.DeleteAsync(); } } [TestMethod] [Description("List encrypted blob(s).")] [TestCategory(ComponentCategory.Blob)] [TestCategory(TestTypeCategory.UnitTest)] [TestCategory(SmokeTestCategory.NonSmoke)] [TestCategory(TenantTypeCategory.DevStore)] [TestCategory(TenantTypeCategory.DevFabric), TestCategory(TenantTypeCategory.Cloud)] public async Task TestListBlobsEncryptionAsync() { bool blobFound = false; CloudBlobContainer container = GetRandomContainerReference(); try { await container.CreateIfNotExistsAsync(); CloudBlockBlob blob = container.GetBlockBlobReference(BlobTestBase.GetRandomContainerName()); await blob.UploadTextAsync("test"); BlobResultSegment results = await container.ListBlobsSegmentedAsync(null); foreach (IListBlobItem b in results.Results) { CloudBlob cloudBlob = (CloudBlob)b; Assert.IsTrue(cloudBlob.Properties.IsServerEncrypted); blobFound = true; } Assert.IsTrue(blobFound); } finally { await container.DeleteAsync(); } } #if !FACADE_NETCORE [TestMethod] [Description("Upload encrypted blob.")] [TestCategory(ComponentCategory.Blob)] [TestCategory(TestTypeCategory.UnitTest)] [TestCategory(SmokeTestCategory.NonSmoke)] 
[TestCategory(TenantTypeCategory.DevStore)] [TestCategory(TenantTypeCategory.DevFabric), TestCategory(TenantTypeCategory.Cloud)] public async Task TestBlobEncryptionAsync() { bool requestFound = false; OperationContext ctxt = new OperationContext(); CloudBlobContainer container = GetRandomContainerReference(); try { await container.CreateIfNotExistsAsync(); CloudBlockBlob blob = container.GetBlockBlobReference(BlobTestBase.GetRandomContainerName()); await blob.UploadTextAsync("test"); ctxt.RequestCompleted += (sender, args) => { Assert.IsTrue(args.RequestInformation.IsRequestServerEncrypted); requestFound = true; }; await blob.UploadTextAsync("test", null, null, null, ctxt); Assert.IsTrue(requestFound); } finally { await container.DeleteAsync(); } } #endif } }
{ "content_hash": "3661dade802b653385905a8992f356f4", "timestamp": "", "source": "github", "line_count": 152, "max_line_length": 113, "avg_line_length": 37.25, "alnum_prop": 0.5865418580007065, "repo_name": "Azure/azure-storage-net", "id": "ba8600b345643d83a53aba3634065d4576427c24", "size": "5664", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "Test/WindowsRuntime/Blob/BlobServerEncryptionTests.cs", "mode": "33188", "license": "apache-2.0", "language": [ { "name": "C#", "bytes": "14366754" } ] }
package org.apache.hadoop.hive.ql.optimizer; import java.io.Serializable; import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.LinkedHashSet; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Properties; import java.util.Set; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hive.conf.HiveConf; import org.apache.hadoop.hive.conf.HiveConf.ConfVars; import org.apache.hadoop.hive.metastore.Warehouse; import org.apache.hadoop.hive.metastore.api.MetaException; import org.apache.hadoop.hive.ql.Context; import org.apache.hadoop.hive.ql.ErrorMsg; import org.apache.hadoop.hive.ql.exec.ColumnInfo; import org.apache.hadoop.hive.ql.exec.ConditionalTask; import org.apache.hadoop.hive.ql.exec.DemuxOperator; import org.apache.hadoop.hive.ql.exec.DependencyCollectionTask; import org.apache.hadoop.hive.ql.exec.FileSinkOperator; import org.apache.hadoop.hive.ql.exec.JoinOperator; import org.apache.hadoop.hive.ql.exec.MapJoinOperator; import org.apache.hadoop.hive.ql.exec.MoveTask; import org.apache.hadoop.hive.ql.exec.NodeUtils; import org.apache.hadoop.hive.ql.exec.Operator; import org.apache.hadoop.hive.ql.exec.OperatorFactory; import org.apache.hadoop.hive.ql.exec.OperatorUtils; import org.apache.hadoop.hive.ql.exec.ReduceSinkOperator; import org.apache.hadoop.hive.ql.exec.RowSchema; import org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator; import org.apache.hadoop.hive.ql.exec.TableScanOperator; import org.apache.hadoop.hive.ql.exec.Task; import org.apache.hadoop.hive.ql.exec.TaskFactory; import org.apache.hadoop.hive.ql.exec.UnionOperator; import org.apache.hadoop.hive.ql.exec.Utilities; import org.apache.hadoop.hive.ql.exec.mr.ExecDriver; import org.apache.hadoop.hive.ql.exec.mr.MapRedTask; import org.apache.hadoop.hive.ql.exec.spark.SparkTask; import org.apache.hadoop.hive.ql.hooks.ReadEntity; import org.apache.hadoop.hive.ql.io.RCFileInputFormat; import org.apache.hadoop.hive.ql.io.merge.MergeFileWork; import org.apache.hadoop.hive.ql.io.orc.OrcFileStripeMergeInputFormat; import org.apache.hadoop.hive.ql.io.orc.OrcInputFormat; import org.apache.hadoop.hive.ql.io.rcfile.merge.RCFileBlockMergeInputFormat; import org.apache.hadoop.hive.ql.metadata.HiveException; import org.apache.hadoop.hive.ql.metadata.Partition; import org.apache.hadoop.hive.ql.optimizer.GenMRProcContext.GenMRUnionCtx; import org.apache.hadoop.hive.ql.optimizer.GenMRProcContext.GenMapRedCtx; import org.apache.hadoop.hive.ql.optimizer.listbucketingpruner.ListBucketingPruner; import org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner; import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.tableSpec; import org.apache.hadoop.hive.ql.parse.ParseContext; import org.apache.hadoop.hive.ql.parse.PrunedPartitionList; import org.apache.hadoop.hive.ql.parse.QBParseInfo; import org.apache.hadoop.hive.ql.parse.SemanticException; import org.apache.hadoop.hive.ql.plan.BaseWork; import org.apache.hadoop.hive.ql.plan.ConditionalResolverMergeFiles; import org.apache.hadoop.hive.ql.plan.ConditionalResolverMergeFiles.ConditionalResolverMergeFilesCtx; import org.apache.hadoop.hive.ql.plan.ConditionalWork; import org.apache.hadoop.hive.ql.plan.DynamicPartitionCtx; import org.apache.hadoop.hive.ql.plan.ExprNodeDesc; import org.apache.hadoop.hive.ql.plan.FetchWork; import 
org.apache.hadoop.hive.ql.plan.FileMergeDesc; import org.apache.hadoop.hive.ql.plan.FileSinkDesc; import org.apache.hadoop.hive.ql.plan.FilterDesc.SampleDesc; import org.apache.hadoop.hive.ql.plan.LoadFileDesc; import org.apache.hadoop.hive.ql.plan.MapWork; import org.apache.hadoop.hive.ql.plan.MapredLocalWork; import org.apache.hadoop.hive.ql.plan.MapredWork; import org.apache.hadoop.hive.ql.plan.MoveWork; import org.apache.hadoop.hive.ql.plan.OperatorDesc; import org.apache.hadoop.hive.ql.plan.OrcFileMergeDesc; import org.apache.hadoop.hive.ql.plan.PartitionDesc; import org.apache.hadoop.hive.ql.plan.PlanUtils; import org.apache.hadoop.hive.ql.plan.RCFileMergeDesc; import org.apache.hadoop.hive.ql.plan.ReduceSinkDesc; import org.apache.hadoop.hive.ql.plan.ReduceWork; import org.apache.hadoop.hive.ql.plan.SparkWork; import org.apache.hadoop.hive.ql.plan.StatsWork; import org.apache.hadoop.hive.ql.plan.TableDesc; import org.apache.hadoop.hive.ql.plan.TableScanDesc; import org.apache.hadoop.hive.ql.plan.TezWork; import org.apache.hadoop.hive.ql.stats.StatsFactory; import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory; import org.apache.hadoop.mapred.InputFormat; import com.google.common.collect.Interner; /** * General utility common functions for the Processor to convert operator into * map-reduce tasks. */ public final class GenMapRedUtils { private static Log LOG; static { LOG = LogFactory.getLog("org.apache.hadoop.hive.ql.optimizer.GenMapRedUtils"); } public static boolean needsTagging(ReduceWork rWork) { return rWork != null && (rWork.getReducer().getClass() == JoinOperator.class || rWork.getReducer().getClass() == DemuxOperator.class); } /** * Initialize the current plan by adding it to root tasks. * * @param op * the reduce sink operator encountered * @param opProcCtx * processing context */ public static void initPlan(ReduceSinkOperator op, GenMRProcContext opProcCtx) throws SemanticException { Operator<? extends OperatorDesc> reducer = op.getChildOperators().get(0); Map<Operator<? extends OperatorDesc>, GenMapRedCtx> mapCurrCtx = opProcCtx.getMapCurrCtx(); GenMapRedCtx mapredCtx = mapCurrCtx.get(op.getParentOperators().get(0)); Task<? extends Serializable> currTask = mapredCtx.getCurrTask(); MapredWork plan = (MapredWork) currTask.getWork(); HashMap<Operator<? extends OperatorDesc>, Task<? extends Serializable>> opTaskMap = opProcCtx.getOpTaskMap(); Operator<? extends OperatorDesc> currTopOp = opProcCtx.getCurrTopOp(); opTaskMap.put(reducer, currTask); plan.setReduceWork(new ReduceWork()); plan.getReduceWork().setReducer(reducer); ReduceSinkDesc desc = op.getConf(); plan.getReduceWork().setNumReduceTasks(desc.getNumReducers()); if (needsTagging(plan.getReduceWork())) { plan.getReduceWork().setNeedsTagging(true); } assert currTopOp != null; String currAliasId = opProcCtx.getCurrAliasId(); if (!opProcCtx.isSeenOp(currTask, currTopOp)) { setTaskPlan(currAliasId, currTopOp, currTask, false, opProcCtx); } currTopOp = null; currAliasId = null; opProcCtx.setCurrTask(currTask); opProcCtx.setCurrTopOp(currTopOp); opProcCtx.setCurrAliasId(currAliasId); } /** * Initialize the current union plan. * * @param op * the reduce sink operator encountered * @param opProcCtx * processing context */ public static void initUnionPlan(ReduceSinkOperator op, UnionOperator currUnionOp, GenMRProcContext opProcCtx, Task<? extends Serializable> unionTask) throws SemanticException { Operator<? 
extends OperatorDesc> reducer = op.getChildOperators().get(0); MapredWork plan = (MapredWork) unionTask.getWork(); HashMap<Operator<? extends OperatorDesc>, Task<? extends Serializable>> opTaskMap = opProcCtx.getOpTaskMap(); opTaskMap.put(reducer, unionTask); plan.setReduceWork(new ReduceWork()); plan.getReduceWork().setReducer(reducer); plan.getReduceWork().setReducer(reducer); ReduceSinkDesc desc = op.getConf(); plan.getReduceWork().setNumReduceTasks(desc.getNumReducers()); if (needsTagging(plan.getReduceWork())) { plan.getReduceWork().setNeedsTagging(true); } initUnionPlan(opProcCtx, currUnionOp, unionTask, false); } private static void setUnionPlan(GenMRProcContext opProcCtx, boolean local, Task<? extends Serializable> currTask, GenMRUnionCtx uCtx, boolean mergeTask) throws SemanticException { Operator<? extends OperatorDesc> currTopOp = opProcCtx.getCurrTopOp(); if (currTopOp != null) { String currAliasId = opProcCtx.getCurrAliasId(); if (mergeTask || !opProcCtx.isSeenOp(currTask, currTopOp)) { setTaskPlan(currAliasId, currTopOp, currTask, local, opProcCtx); } currTopOp = null; opProcCtx.setCurrTopOp(currTopOp); } else { List<String> taskTmpDirLst = uCtx.getTaskTmpDir(); if ((taskTmpDirLst != null) && !(taskTmpDirLst.isEmpty())) { List<TableDesc> tt_descLst = uCtx.getTTDesc(); assert !taskTmpDirLst.isEmpty() && !tt_descLst.isEmpty(); assert taskTmpDirLst.size() == tt_descLst.size(); int size = taskTmpDirLst.size(); assert local == false; List<Operator<? extends OperatorDesc>> topOperators = uCtx.getListTopOperators(); MapredWork plan = (MapredWork) currTask.getWork(); for (int pos = 0; pos < size; pos++) { String taskTmpDir = taskTmpDirLst.get(pos); TableDesc tt_desc = tt_descLst.get(pos); MapWork mWork = plan.getMapWork(); if (mWork.getPathToAliases().get(taskTmpDir) == null) { mWork.getPathToAliases().put(taskTmpDir, new ArrayList<String>()); mWork.getPathToAliases().get(taskTmpDir).add(taskTmpDir); mWork.getPathToPartitionInfo().put(taskTmpDir, new PartitionDesc(tt_desc, null)); mWork.getAliasToWork().put(taskTmpDir, topOperators.get(pos)); } } } } } /* * It is a idempotent function to add various intermediate files as the source * for the union. The plan has already been created. */ public static void initUnionPlan(GenMRProcContext opProcCtx, UnionOperator currUnionOp, Task<? extends Serializable> currTask, boolean local) throws SemanticException { // In case of lateral views followed by a join, the same tree // can be traversed more than one if (currUnionOp != null) { GenMRUnionCtx uCtx = opProcCtx.getUnionTask(currUnionOp); assert uCtx != null; setUnionPlan(opProcCtx, local, currTask, uCtx, false); } } /* * join current union task to old task */ public static void joinUnionPlan(GenMRProcContext opProcCtx, UnionOperator currUnionOp, Task<? extends Serializable> currentUnionTask, Task<? extends Serializable> existingTask, boolean local) throws SemanticException { assert currUnionOp != null; GenMRUnionCtx uCtx = opProcCtx.getUnionTask(currUnionOp); assert uCtx != null; setUnionPlan(opProcCtx, local, existingTask, uCtx, true); List<Task<? 
extends Serializable>> parTasks = null; if (opProcCtx.getRootTasks().contains(currentUnionTask)) { opProcCtx.getRootTasks().remove(currentUnionTask); if (!opProcCtx.getRootTasks().contains(existingTask) && (existingTask.getParentTasks() == null || existingTask.getParentTasks().isEmpty())) { opProcCtx.getRootTasks().add(existingTask); } } if ((currentUnionTask != null) && (currentUnionTask.getParentTasks() != null) && !currentUnionTask.getParentTasks().isEmpty()) { parTasks = new ArrayList<Task<? extends Serializable>>(); parTasks.addAll(currentUnionTask.getParentTasks()); Object[] parTaskArr = parTasks.toArray(); for (Object parTask : parTaskArr) { ((Task<? extends Serializable>) parTask) .removeDependentTask(currentUnionTask); } } if ((currentUnionTask != null) && (parTasks != null)) { for (Task<? extends Serializable> parTask : parTasks) { parTask.addDependentTask(existingTask); if (opProcCtx.getRootTasks().contains(existingTask)) { opProcCtx.getRootTasks().remove(existingTask); } } } opProcCtx.setCurrTask(existingTask); } /** * Merge the current task into the old task for the reducer * * @param currTask * the current task for the current reducer * @param oldTask * the old task for the current reducer * @param opProcCtx * processing context */ public static void joinPlan(Task<? extends Serializable> currTask, Task<? extends Serializable> oldTask, GenMRProcContext opProcCtx) throws SemanticException { assert currTask != null && oldTask != null; Operator<? extends OperatorDesc> currTopOp = opProcCtx.getCurrTopOp(); List<Task<? extends Serializable>> parTasks = null; // terminate the old task and make current task dependent on it if (currTask.getParentTasks() != null && !currTask.getParentTasks().isEmpty()) { parTasks = new ArrayList<Task<? extends Serializable>>(); parTasks.addAll(currTask.getParentTasks()); Object[] parTaskArr = parTasks.toArray(); for (Object element : parTaskArr) { ((Task<? extends Serializable>) element).removeDependentTask(currTask); } } if (currTopOp != null) { mergeInput(currTopOp, opProcCtx, oldTask, false); } if (parTasks != null) { for (Task<? extends Serializable> parTask : parTasks) { parTask.addDependentTask(oldTask); } } if (oldTask instanceof MapRedTask && currTask instanceof MapRedTask) { ((MapRedTask)currTask).getWork().getMapWork() .mergingInto(((MapRedTask) oldTask).getWork().getMapWork()); } opProcCtx.setCurrTopOp(null); opProcCtx.setCurrTask(oldTask); } /** * If currTopOp is not set for input of the task, add input for to the task */ static boolean mergeInput(Operator<? extends OperatorDesc> currTopOp, GenMRProcContext opProcCtx, Task<? extends Serializable> task, boolean local) throws SemanticException { if (!opProcCtx.isSeenOp(task, currTopOp)) { String currAliasId = opProcCtx.getCurrAliasId(); setTaskPlan(currAliasId, currTopOp, task, local, opProcCtx); return true; } return false; } /** * Met cRS in pRS(parentTask)-cRS-OP(childTask) case * Split and link two tasks by temporary file : pRS-FS / TS-cRS-OP */ static void splitPlan(ReduceSinkOperator cRS, Task<? extends Serializable> parentTask, Task<? 
extends Serializable> childTask, GenMRProcContext opProcCtx) throws SemanticException { assert parentTask != null && childTask != null; splitTasks(cRS, parentTask, childTask, opProcCtx); } /** * Met cRS in pOP(parentTask with RS)-cRS-cOP(noTask) case * Create new child task for cRS-cOP and link two tasks by temporary file : pOP-FS / TS-cRS-cOP * * @param cRS * the reduce sink operator encountered * @param opProcCtx * processing context */ static void splitPlan(ReduceSinkOperator cRS, GenMRProcContext opProcCtx) throws SemanticException { // Generate a new task ParseContext parseCtx = opProcCtx.getParseCtx(); Task<? extends Serializable> parentTask = opProcCtx.getCurrTask(); MapredWork childPlan = getMapRedWork(parseCtx); Task<? extends Serializable> childTask = TaskFactory.get(childPlan, parseCtx .getConf()); Operator<? extends OperatorDesc> reducer = cRS.getChildOperators().get(0); // Add the reducer ReduceWork rWork = new ReduceWork(); childPlan.setReduceWork(rWork); rWork.setReducer(reducer); ReduceSinkDesc desc = cRS.getConf(); childPlan.getReduceWork().setNumReduceTasks(new Integer(desc.getNumReducers())); opProcCtx.getOpTaskMap().put(reducer, childTask); splitTasks(cRS, parentTask, childTask, opProcCtx); } /** * set the current task in the mapredWork. * * @param alias_id * current alias * @param topOp * the top operator of the stack * @param plan * current plan * @param local * whether you need to add to map-reduce or local work * @param opProcCtx * processing context */ public static void setTaskPlan(String alias_id, Operator<? extends OperatorDesc> topOp, Task<?> task, boolean local, GenMRProcContext opProcCtx) throws SemanticException { setTaskPlan(alias_id, topOp, task, local, opProcCtx, null); } /** * set the current task in the mapredWork. * * @param alias_id * current alias * @param topOp * the top operator of the stack * @param plan * current plan * @param local * whether you need to add to map-reduce or local work * @param opProcCtx * processing context * @param pList * pruned partition list. If it is null it will be computed on-the-fly. */ public static void setTaskPlan(String alias_id, Operator<? extends OperatorDesc> topOp, Task<?> task, boolean local, GenMRProcContext opProcCtx, PrunedPartitionList pList) throws SemanticException { setMapWork(((MapredWork) task.getWork()).getMapWork(), opProcCtx.getParseCtx(), opProcCtx.getInputs(), pList, topOp, alias_id, opProcCtx.getConf(), local); opProcCtx.addSeenOp(task, topOp); } /** * initialize MapWork * * @param alias_id * current alias * @param topOp * the top operator of the stack * @param plan * map work to initialize * @param local * whether you need to add to map-reduce or local work * @param pList * pruned partition list. If it is null it will be computed on-the-fly. * @param inputs * read entities for the map work * @param conf * current instance of hive conf */ public static void setMapWork(MapWork plan, ParseContext parseCtx, Set<ReadEntity> inputs, PrunedPartitionList partsList, Operator<? 
extends OperatorDesc> topOp, String alias_id, HiveConf conf, boolean local) throws SemanticException { ArrayList<Path> partDir = new ArrayList<Path>(); ArrayList<PartitionDesc> partDesc = new ArrayList<PartitionDesc>(); Path tblDir = null; TableDesc tblDesc = null; plan.setNameToSplitSample(parseCtx.getNameToSplitSample()); if (partsList == null) { try { TableScanOperator tsOp = (TableScanOperator) topOp; partsList = PartitionPruner.prune(tsOp, parseCtx, alias_id); } catch (SemanticException e) { throw e; } catch (HiveException e) { LOG.error(org.apache.hadoop.util.StringUtils.stringifyException(e)); throw new SemanticException(e.getMessage(), e); } } // Generate the map work for this alias_id // pass both confirmed and unknown partitions through the map-reduce // framework Set<Partition> parts = partsList.getPartitions(); PartitionDesc aliasPartnDesc = null; try { if (!parts.isEmpty()) { aliasPartnDesc = Utilities.getPartitionDesc(parts.iterator().next()); } } catch (HiveException e) { LOG.error(org.apache.hadoop.util.StringUtils.stringifyException(e)); throw new SemanticException(e.getMessage(), e); } // The table does not have any partitions if (aliasPartnDesc == null) { aliasPartnDesc = new PartitionDesc(Utilities.getTableDesc(((TableScanOperator) topOp) .getConf().getTableMetadata()), null); } Map<String, String> props = topOp.getConf().getOpProps(); if (props != null) { Properties target = aliasPartnDesc.getProperties(); if (target == null) { aliasPartnDesc.setProperties(target = new Properties()); } target.putAll(props); } plan.getAliasToPartnInfo().put(alias_id, aliasPartnDesc); long sizeNeeded = Integer.MAX_VALUE; int fileLimit = -1; if (parseCtx.getGlobalLimitCtx().isEnable()) { long sizePerRow = HiveConf.getLongVar(parseCtx.getConf(), HiveConf.ConfVars.HIVELIMITMAXROWSIZE); sizeNeeded = parseCtx.getGlobalLimitCtx().getGlobalLimit() * sizePerRow; // for the optimization that reduce number of input file, we limit number // of files allowed. If more than specific number of files have to be // selected, we skip this optimization. Since having too many files as // inputs can cause unpredictable latency. It's not necessarily to be // cheaper. fileLimit = HiveConf.getIntVar(parseCtx.getConf(), HiveConf.ConfVars.HIVELIMITOPTLIMITFILE); if (sizePerRow <= 0 || fileLimit <= 0) { LOG.info("Skip optimization to reduce input size of 'limit'"); parseCtx.getGlobalLimitCtx().disableOpt(); } else if (parts.isEmpty()) { LOG.info("Empty input: skip limit optimiztion"); } else { LOG.info("Try to reduce input size for 'limit' " + "sizeNeeded: " + sizeNeeded + " file limit : " + fileLimit); } } boolean isFirstPart = true; boolean emptyInput = true; boolean singlePartition = (parts.size() == 1); // Track the dependencies for the view. Consider a query like: select * from V; // where V is a view of the form: select * from T // The dependencies should include V at depth 0, and T at depth 1 (inferred). Map<String, ReadEntity> viewToInput = parseCtx.getViewAliasToInput(); ReadEntity parentViewInfo = PlanUtils.getParentViewInfo(alias_id, viewToInput); // The table should also be considered a part of inputs, even if the table is a // partitioned table and whether any partition is selected or not //This read entity is a direct read entity and not an indirect read (that is when // this is being read because it is a dependency of a view). 
boolean isDirectRead = (parentViewInfo == null); for (Partition part : parts) { if (part.getTable().isPartitioned()) { PlanUtils.addInput(inputs, new ReadEntity(part, parentViewInfo, isDirectRead)); } else { PlanUtils.addInput(inputs, new ReadEntity(part.getTable(), parentViewInfo, isDirectRead)); } // Later the properties have to come from the partition as opposed // to from the table in order to support versioning. Path[] paths = null; SampleDesc sampleDescr = parseCtx.getOpToSamplePruner().get(topOp); // Lookup list bucketing pruner Map<String, ExprNodeDesc> partToPruner = parseCtx.getOpToPartToSkewedPruner().get(topOp); ExprNodeDesc listBucketingPruner = (partToPruner != null) ? partToPruner.get(part.getName()) : null; if (sampleDescr != null) { assert (listBucketingPruner == null) : "Sampling and list bucketing can't coexit."; paths = SamplePruner.prune(part, sampleDescr); parseCtx.getGlobalLimitCtx().disableOpt(); } else if (listBucketingPruner != null) { assert (sampleDescr == null) : "Sampling and list bucketing can't coexist."; /* Use list bucketing prunner's path. */ paths = ListBucketingPruner.prune(parseCtx, part, listBucketingPruner); } else { // Now we only try the first partition, if the first partition doesn't // contain enough size, we change to normal mode. if (parseCtx.getGlobalLimitCtx().isEnable()) { if (isFirstPart) { long sizeLeft = sizeNeeded; ArrayList<Path> retPathList = new ArrayList<Path>(); SamplePruner.LimitPruneRetStatus status = SamplePruner.limitPrune(part, sizeLeft, fileLimit, retPathList); if (status.equals(SamplePruner.LimitPruneRetStatus.NoFile)) { continue; } else if (status.equals(SamplePruner.LimitPruneRetStatus.NotQualify)) { LOG.info("Use full input -- first " + fileLimit + " files are more than " + sizeNeeded + " bytes"); parseCtx.getGlobalLimitCtx().disableOpt(); } else { emptyInput = false; paths = new Path[retPathList.size()]; int index = 0; for (Path path : retPathList) { paths[index++] = path; } if (status.equals(SamplePruner.LimitPruneRetStatus.NeedAllFiles) && singlePartition) { // if all files are needed to meet the size limit, we disable // optimization. It usually happens for empty table/partition or // table/partition with only one file. By disabling this // optimization, we can avoid retrying the query if there is // not sufficient rows. parseCtx.getGlobalLimitCtx().disableOpt(); } } isFirstPart = false; } else { paths = new Path[0]; } } if (!parseCtx.getGlobalLimitCtx().isEnable()) { paths = part.getPath(); } } // is it a partitioned table ? 
if (!part.getTable().isPartitioned()) { assert ((tblDir == null) && (tblDesc == null)); tblDir = paths[0]; tblDesc = Utilities.getTableDesc(part.getTable()); } else if (tblDesc == null) { tblDesc = Utilities.getTableDesc(part.getTable()); } if (props != null) { Properties target = tblDesc.getProperties(); if (target == null) { tblDesc.setProperties(target = new Properties()); } target.putAll(props); } for (Path p : paths) { if (p == null) { continue; } String path = p.toString(); if (LOG.isDebugEnabled()) { LOG.debug("Adding " + path + " of table" + alias_id); } partDir.add(p); try { if (part.getTable().isPartitioned()) { partDesc.add(Utilities.getPartitionDesc(part)); } else { partDesc.add(Utilities.getPartitionDescFromTableDesc(tblDesc, part)); } } catch (HiveException e) { LOG.error(org.apache.hadoop.util.StringUtils.stringifyException(e)); throw new SemanticException(e.getMessage(), e); } } } if (emptyInput) { parseCtx.getGlobalLimitCtx().disableOpt(); } Iterator<Path> iterPath = partDir.iterator(); Iterator<PartitionDesc> iterPartnDesc = partDesc.iterator(); if (!local) { while (iterPath.hasNext()) { assert iterPartnDesc.hasNext(); String path = iterPath.next().toString(); PartitionDesc prtDesc = iterPartnDesc.next(); // Add the path to alias mapping if (plan.getPathToAliases().get(path) == null) { plan.getPathToAliases().put(path, new ArrayList<String>()); } plan.getPathToAliases().get(path).add(alias_id); plan.getPathToPartitionInfo().put(path, prtDesc); if (LOG.isDebugEnabled()) { LOG.debug("Information added for path " + path); } } assert plan.getAliasToWork().get(alias_id) == null; plan.getAliasToWork().put(alias_id, topOp); } else { // populate local work if needed MapredLocalWork localPlan = plan.getMapRedLocalWork(); if (localPlan == null) { localPlan = new MapredLocalWork( new LinkedHashMap<String, Operator<? extends OperatorDesc>>(), new LinkedHashMap<String, FetchWork>()); } assert localPlan.getAliasToWork().get(alias_id) == null; assert localPlan.getAliasToFetchWork().get(alias_id) == null; localPlan.getAliasToWork().put(alias_id, topOp); if (tblDir == null) { tblDesc = Utilities.getTableDesc(partsList.getSourceTable()); localPlan.getAliasToFetchWork().put( alias_id, new FetchWork(partDir, partDesc, tblDesc)); } else { localPlan.getAliasToFetchWork().put(alias_id, new FetchWork(tblDir, tblDesc)); } plan.setMapRedLocalWork(localPlan); } } /** * set the current task in the mapredWork. * * @param alias * current alias * @param topOp * the top operator of the stack * @param plan * current plan * @param local * whether you need to add to map-reduce or local work * @param tt_desc * table descriptor */ public static void setTaskPlan(String path, String alias, Operator<? extends OperatorDesc> topOp, MapWork plan, boolean local, TableDesc tt_desc) throws SemanticException { if (path == null || alias == null) { return; } if (!local) { if (plan.getPathToAliases().get(path) == null) { plan.getPathToAliases().put(path, new ArrayList<String>()); } plan.getPathToAliases().get(path).add(alias); plan.getPathToPartitionInfo().put(path, new PartitionDesc(tt_desc, null)); plan.getAliasToWork().put(alias, topOp); } else { // populate local work if needed MapredLocalWork localPlan = plan.getMapRedLocalWork(); if (localPlan == null) { localPlan = new MapredLocalWork( new LinkedHashMap<String, Operator<? 
extends OperatorDesc>>(), new LinkedHashMap<String, FetchWork>()); } assert localPlan.getAliasToWork().get(alias) == null; assert localPlan.getAliasToFetchWork().get(alias) == null; localPlan.getAliasToWork().put(alias, topOp); localPlan.getAliasToFetchWork().put(alias, new FetchWork(new Path(alias), tt_desc)); plan.setMapRedLocalWork(localPlan); } } /** * Set key and value descriptor * @param work RedueWork * @param rs ReduceSinkOperator */ public static void setKeyAndValueDesc(ReduceWork work, ReduceSinkOperator rs) { work.setKeyDesc(rs.getConf().getKeySerializeInfo()); int tag = Math.max(0, rs.getConf().getTag()); List<TableDesc> tagToSchema = work.getTagToValueDesc(); while (tag + 1 > tagToSchema.size()) { tagToSchema.add(null); } tagToSchema.set(tag, rs.getConf().getValueSerializeInfo()); } /** * set key and value descriptor. * * @param plan * current plan * @param topOp * current top operator in the path */ public static void setKeyAndValueDesc(ReduceWork plan, Operator<? extends OperatorDesc> topOp) { if (topOp == null) { return; } if (topOp instanceof ReduceSinkOperator) { ReduceSinkOperator rs = (ReduceSinkOperator) topOp; setKeyAndValueDesc(plan, rs); } else { List<Operator<? extends OperatorDesc>> children = topOp.getChildOperators(); if (children != null) { for (Operator<? extends OperatorDesc> op : children) { setKeyAndValueDesc(plan, op); } } } } /** * Set the key and value description for all the tasks rooted at the given * task. Loops over all the tasks recursively. * * @param task */ public static void setKeyAndValueDescForTaskTree(Task<? extends Serializable> task) { if (task instanceof ConditionalTask) { List<Task<? extends Serializable>> listTasks = ((ConditionalTask) task) .getListTasks(); for (Task<? extends Serializable> tsk : listTasks) { setKeyAndValueDescForTaskTree(tsk); } } else if (task instanceof ExecDriver) { MapredWork work = (MapredWork) task.getWork(); work.getMapWork().deriveExplainAttributes(); HashMap<String, Operator<? extends OperatorDesc>> opMap = work .getMapWork().getAliasToWork(); if (opMap != null && !opMap.isEmpty()) { for (Operator<? extends OperatorDesc> op : opMap.values()) { setKeyAndValueDesc(work.getReduceWork(), op); } } } else if (task != null && (task.getWork() instanceof TezWork)) { TezWork work = (TezWork)task.getWork(); for (BaseWork w : work.getAllWorkUnsorted()) { if (w instanceof MapWork) { ((MapWork)w).deriveExplainAttributes(); } } } else if (task instanceof SparkTask) { SparkWork work = (SparkWork) task.getWork(); for (BaseWork w : work.getAllWorkUnsorted()) { if (w instanceof MapWork) { ((MapWork) w).deriveExplainAttributes(); } } } if (task.getChildTasks() == null) { return; } for (Task<? extends Serializable> childTask : task.getChildTasks()) { setKeyAndValueDescForTaskTree(childTask); } } public static void internTableDesc(Task<?> task, Interner<TableDesc> interner) { if (task instanceof ConditionalTask) { for (Task tsk : ((ConditionalTask) task).getListTasks()) { internTableDesc(tsk, interner); } } else if (task instanceof ExecDriver) { MapredWork work = (MapredWork) task.getWork(); work.getMapWork().internTable(interner); } else if (task != null && (task.getWork() instanceof TezWork)) { TezWork work = (TezWork)task.getWork(); for (BaseWork w : work.getAllWorkUnsorted()) { if (w instanceof MapWork) { ((MapWork)w).internTable(interner); } } } if (task.getNumChild() > 0) { for (Task childTask : task.getChildTasks()) { internTableDesc(childTask, interner); } } } /** * create a new plan and return. 
* * @return the new plan */ public static MapredWork getMapRedWork(ParseContext parseCtx) { MapredWork work = getMapRedWorkFromConf(parseCtx.getConf()); work.getMapWork().setNameToSplitSample(parseCtx.getNameToSplitSample()); return work; } /** * create a new plan and return. The pan won't contain the name to split * sample information in parse context. * * @return the new plan */ public static MapredWork getMapRedWorkFromConf(HiveConf conf) { MapredWork mrWork = new MapredWork(); MapWork work = mrWork.getMapWork(); boolean mapperCannotSpanPartns = conf.getBoolVar( HiveConf.ConfVars.HIVE_MAPPER_CANNOT_SPAN_MULTIPLE_PARTITIONS); work.setMapperCannotSpanPartns(mapperCannotSpanPartns); work.setPathToAliases(new LinkedHashMap<String, ArrayList<String>>()); work.setPathToPartitionInfo(new LinkedHashMap<String, PartitionDesc>()); work.setAliasToWork(new LinkedHashMap<String, Operator<? extends OperatorDesc>>()); work.setHadoopSupportsSplittable( conf.getBoolVar(HiveConf.ConfVars.HIVE_COMBINE_INPUT_FORMAT_SUPPORTS_SPLITTABLE)); return mrWork; } public static TableScanOperator createTemporaryTableScanOperator(RowSchema rowSchema) { TableScanOperator tableScanOp = (TableScanOperator) OperatorFactory.get(new TableScanDesc(null), rowSchema); // Set needed columns for this dummy TableScanOperator List<Integer> neededColumnIds = new ArrayList<Integer>(); List<String> neededColumnNames = new ArrayList<String>(); List<ColumnInfo> parentColumnInfos = rowSchema.getSignature(); for (int i = 0 ; i < parentColumnInfos.size(); i++) { neededColumnIds.add(i); neededColumnNames.add(parentColumnInfos.get(i).getInternalName()); } tableScanOp.setNeededColumnIDs(neededColumnIds); tableScanOp.setNeededColumns(neededColumnNames); tableScanOp.setReferencedColumns(neededColumnNames); return tableScanOp; } /** * Break the pipeline between parent and child, and then * output data generated by parent to a temporary file stored in taskTmpDir. * A FileSinkOperator is added after parent to output the data. * Before child, we add a TableScanOperator to load data stored in the temporary * file back. * @param parent * @param child * @param taskTmpDir * @param tt_desc * @param parseCtx * @return The TableScanOperator inserted before child. */ public static TableScanOperator createTemporaryFile( Operator<? extends OperatorDesc> parent, Operator<? extends OperatorDesc> child, Path taskTmpDir, TableDesc tt_desc, ParseContext parseCtx) { // Create a FileSinkOperator for the file name of taskTmpDir boolean compressIntermediate = parseCtx.getConf().getBoolVar(HiveConf.ConfVars.COMPRESSINTERMEDIATE); FileSinkDesc desc = new FileSinkDesc(taskTmpDir, tt_desc, compressIntermediate); if (compressIntermediate) { desc.setCompressCodec(parseCtx.getConf().getVar( HiveConf.ConfVars.COMPRESSINTERMEDIATECODEC)); desc.setCompressType(parseCtx.getConf().getVar( HiveConf.ConfVars.COMPRESSINTERMEDIATETYPE)); } Operator<? extends OperatorDesc> fileSinkOp = OperatorFactory.get( desc, parent.getSchema()); // Connect parent to fileSinkOp parent.replaceChild(child, fileSinkOp); fileSinkOp.setParentOperators(Utilities.makeList(parent)); // Create a dummy TableScanOperator for the file generated through fileSinkOp TableScanOperator tableScanOp = (TableScanOperator) createTemporaryTableScanOperator( parent.getSchema()); // Connect this TableScanOperator to child. 
tableScanOp.setChildOperators(Utilities.makeList(child)); child.replaceParent(parent, tableScanOp); return tableScanOp; } @SuppressWarnings("nls") /** * Split two tasks by creating a temporary file between them. * * @param op reduce sink operator being processed * @param parentTask the parent task * @param childTask the child task * @param opProcCtx context **/ private static void splitTasks(ReduceSinkOperator op, Task<? extends Serializable> parentTask, Task<? extends Serializable> childTask, GenMRProcContext opProcCtx) throws SemanticException { if (op.getNumParent() != 1) { throw new IllegalStateException("Expecting operator " + op + " to have one parent. " + "But found multiple parents : " + op.getParentOperators()); } ParseContext parseCtx = opProcCtx.getParseCtx(); parentTask.addDependentTask(childTask); // Root Task cannot depend on any other task, therefore childTask cannot be // a root Task List<Task<? extends Serializable>> rootTasks = opProcCtx.getRootTasks(); if (rootTasks.contains(childTask)) { rootTasks.remove(childTask); } // Generate the temporary file name Context baseCtx = parseCtx.getContext(); Path taskTmpDir = baseCtx.getMRTmpPath(); Operator<? extends OperatorDesc> parent = op.getParentOperators().get(0); TableDesc tt_desc = PlanUtils.getIntermediateFileTableDesc(PlanUtils .getFieldSchemasFromRowSchema(parent.getSchema(), "temporarycol")); // Create the temporary file, its corresponding FileSinkOperaotr, and // its corresponding TableScanOperator. TableScanOperator tableScanOp = createTemporaryFile(parent, op, taskTmpDir, tt_desc, parseCtx); Map<Operator<? extends OperatorDesc>, GenMapRedCtx> mapCurrCtx = opProcCtx.getMapCurrCtx(); mapCurrCtx.put(tableScanOp, new GenMapRedCtx(childTask, null)); String streamDesc = taskTmpDir.toUri().toString(); MapredWork cplan = (MapredWork) childTask.getWork(); if (needsTagging(cplan.getReduceWork())) { Operator<? extends OperatorDesc> reducerOp = cplan.getReduceWork().getReducer(); String id = null; if (reducerOp instanceof JoinOperator) { if (parseCtx.getJoinOps().contains(reducerOp)) { id = ((JoinOperator)reducerOp).getConf().getId(); } } else if (reducerOp instanceof MapJoinOperator) { if (parseCtx.getMapJoinOps().contains(reducerOp)) { id = ((MapJoinOperator)reducerOp).getConf().getId(); } } else if (reducerOp instanceof SMBMapJoinOperator) { if (parseCtx.getSmbMapJoinOps().contains(reducerOp)) { id = ((SMBMapJoinOperator)reducerOp).getConf().getId(); } } if (id != null) { streamDesc = id + ":$INTNAME"; } else { streamDesc = "$INTNAME"; } String origStreamDesc = streamDesc; int pos = 0; while (cplan.getMapWork().getAliasToWork().get(streamDesc) != null) { streamDesc = origStreamDesc.concat(String.valueOf(++pos)); } // TODO: Allocate work to remove the temporary files and make that // dependent on the redTask cplan.getReduceWork().setNeedsTagging(true); } // Add the path to alias mapping setTaskPlan(taskTmpDir.toUri().toString(), streamDesc, tableScanOp, cplan.getMapWork(), false, tt_desc); opProcCtx.setCurrTopOp(null); opProcCtx.setCurrAliasId(null); opProcCtx.setCurrTask(childTask); opProcCtx.addRootIfPossible(parentTask); } static boolean hasBranchFinished(Object... children) { for (Object child : children) { if (child == null) { return false; } } return true; } /** * Replace the Map-side operator tree associated with targetAlias in * target with the Map-side operator tree associated with sourceAlias in source. 
* @param sourceAlias * @param targetAlias * @param source * @param target */ public static void replaceMapWork(String sourceAlias, String targetAlias, MapWork source, MapWork target) { Map<String, ArrayList<String>> sourcePathToAliases = source.getPathToAliases(); Map<String, PartitionDesc> sourcePathToPartitionInfo = source.getPathToPartitionInfo(); Map<String, Operator<? extends OperatorDesc>> sourceAliasToWork = source.getAliasToWork(); Map<String, PartitionDesc> sourceAliasToPartnInfo = source.getAliasToPartnInfo(); Map<String, ArrayList<String>> targetPathToAliases = target.getPathToAliases(); Map<String, PartitionDesc> targetPathToPartitionInfo = target.getPathToPartitionInfo(); Map<String, Operator<? extends OperatorDesc>> targetAliasToWork = target.getAliasToWork(); Map<String, PartitionDesc> targetAliasToPartnInfo = target.getAliasToPartnInfo(); if (!sourceAliasToWork.containsKey(sourceAlias) || !targetAliasToWork.containsKey(targetAlias)) { // Nothing to do if there is no operator tree associated with // sourceAlias in source or there is not operator tree associated // with targetAlias in target. return; } if (sourceAliasToWork.size() > 1) { // If there are multiple aliases in source, we do not know // how to merge. return; } // Remove unnecessary information from target targetAliasToWork.remove(targetAlias); targetAliasToPartnInfo.remove(targetAlias); List<String> pathsToRemove = new ArrayList<String>(); for (Entry<String, ArrayList<String>> entry: targetPathToAliases.entrySet()) { ArrayList<String> aliases = entry.getValue(); aliases.remove(targetAlias); if (aliases.isEmpty()) { pathsToRemove.add(entry.getKey()); } } for (String pathToRemove: pathsToRemove) { targetPathToAliases.remove(pathToRemove); targetPathToPartitionInfo.remove(pathToRemove); } // Add new information from source to target targetAliasToWork.put(sourceAlias, sourceAliasToWork.get(sourceAlias)); targetAliasToPartnInfo.putAll(sourceAliasToPartnInfo); targetPathToPartitionInfo.putAll(sourcePathToPartitionInfo); List<String> pathsToAdd = new ArrayList<String>(); for (Entry<String, ArrayList<String>> entry: sourcePathToAliases.entrySet()) { ArrayList<String> aliases = entry.getValue(); if (aliases.contains(sourceAlias)) { pathsToAdd.add(entry.getKey()); } } for (String pathToAdd: pathsToAdd) { if (!targetPathToAliases.containsKey(pathToAdd)) { targetPathToAliases.put(pathToAdd, new ArrayList<String>()); } targetPathToAliases.get(pathToAdd).add(sourceAlias); } } /** * @param fsInput The FileSink operator. * @param ctx The MR processing context. * @param finalName the final destination path the merge job should output. * @param dependencyTask * @param mvTasks * @param conf * @param currTask * @throws SemanticException * create a Map-only merge job using CombineHiveInputFormat for all partitions with * following operators: * MR job J0: * ... * | * v * FileSinkOperator_1 (fsInput) * | * v * Merge job J1: * | * v * TableScan (using CombineHiveInputFormat) (tsMerge) * | * v * FileSinkOperator (fsMerge) * * Here the pathToPartitionInfo & pathToAlias will remain the same, which means the paths * do * not contain the dynamic partitions (their parent). So after the dynamic partitions are * created (after the first job finished before the moveTask or ConditionalTask start), * we need to change the pathToPartitionInfo & pathToAlias to include the dynamic * partition * directories. 
* */ public static void createMRWorkForMergingFiles (FileSinkOperator fsInput, Path finalName, DependencyCollectionTask dependencyTask, List<Task<MoveWork>> mvTasks, HiveConf conf, Task<? extends Serializable> currTask) throws SemanticException { // // 1. create the operator tree // FileSinkDesc fsInputDesc = fsInput.getConf(); // Create a TableScan operator RowSchema inputRS = fsInput.getSchema(); Operator<? extends OperatorDesc> tsMerge = GenMapRedUtils.createTemporaryTableScanOperator(inputRS); // Create a FileSink operator TableDesc ts = (TableDesc) fsInputDesc.getTableInfo().clone(); FileSinkDesc fsOutputDesc = new FileSinkDesc(finalName, ts, conf.getBoolVar(ConfVars.COMPRESSRESULT)); FileSinkOperator fsOutput = (FileSinkOperator) OperatorFactory.getAndMakeChild( fsOutputDesc, inputRS, tsMerge); // If the input FileSinkOperator is a dynamic partition enabled, the tsMerge input schema // needs to include the partition column, and the fsOutput should have // a DynamicPartitionCtx to indicate that it needs to dynamically partitioned. DynamicPartitionCtx dpCtx = fsInputDesc.getDynPartCtx(); if (dpCtx != null && dpCtx.getNumDPCols() > 0) { // adding DP ColumnInfo to the RowSchema signature ArrayList<ColumnInfo> signature = inputRS.getSignature(); String tblAlias = fsInputDesc.getTableInfo().getTableName(); LinkedHashMap<String, String> colMap = new LinkedHashMap<String, String>(); StringBuilder partCols = new StringBuilder(); for (String dpCol : dpCtx.getDPColNames()) { ColumnInfo colInfo = new ColumnInfo(dpCol, TypeInfoFactory.stringTypeInfo, // all partition column type should be string tblAlias, true); // partition column is virtual column signature.add(colInfo); colMap.put(dpCol, dpCol); // input and output have the same column name partCols.append(dpCol).append('/'); } partCols.setLength(partCols.length() - 1); // remove the last '/' inputRS.setSignature(signature); // create another DynamicPartitionCtx, which has a different input-to-DP column mapping DynamicPartitionCtx dpCtx2 = new DynamicPartitionCtx(dpCtx); dpCtx2.setInputToDPCols(colMap); fsOutputDesc.setDynPartCtx(dpCtx2); // update the FileSinkOperator to include partition columns fsInputDesc.getTableInfo().getProperties().setProperty( org.apache.hadoop.hive.metastore.api.hive_metastoreConstants.META_TABLE_PARTITION_COLUMNS, partCols.toString()); // list of dynamic partition column names } else { // non-partitioned table fsInputDesc.getTableInfo().getProperties().remove( org.apache.hadoop.hive.metastore.api.hive_metastoreConstants.META_TABLE_PARTITION_COLUMNS); } // // 2. 
Constructing a conditional task consisting of a move task and a map reduce task // MoveWork dummyMv = new MoveWork(null, null, null, new LoadFileDesc(fsInputDesc.getFinalDirName(), finalName, true, null, null), false); MapWork cplan; Serializable work; if ((conf.getBoolVar(ConfVars.HIVEMERGERCFILEBLOCKLEVEL) && fsInputDesc.getTableInfo().getInputFileFormatClass().equals(RCFileInputFormat.class)) || (conf.getBoolVar(ConfVars.HIVEMERGEORCFILESTRIPELEVEL) && fsInputDesc.getTableInfo().getInputFileFormatClass().equals(OrcInputFormat.class))) { cplan = GenMapRedUtils.createMergeTask(fsInputDesc, finalName, dpCtx != null && dpCtx.getNumDPCols() > 0); if (conf.getVar(ConfVars.HIVE_EXECUTION_ENGINE).equals("tez")) { work = new TezWork(conf.getVar(HiveConf.ConfVars.HIVEQUERYID)); cplan.setName("File Merge"); ((TezWork) work).add(cplan); } else if (conf.getVar(ConfVars.HIVE_EXECUTION_ENGINE).equals("spark")) { work = new SparkWork(conf.getVar(HiveConf.ConfVars.HIVEQUERYID)); cplan.setName("Spark Merge File Work"); ((SparkWork) work).add(cplan); } else { work = cplan; } } else { cplan = createMRWorkForMergingFiles(conf, tsMerge, fsInputDesc); if (conf.getVar(ConfVars.HIVE_EXECUTION_ENGINE).equals("tez")) { work = new TezWork(conf.getVar(HiveConf.ConfVars.HIVEQUERYID)); cplan.setName("File Merge"); ((TezWork)work).add(cplan); } else if (conf.getVar(ConfVars.HIVE_EXECUTION_ENGINE).equals("spark")) { work = new SparkWork(conf.getVar(HiveConf.ConfVars.HIVEQUERYID)); cplan.setName("Spark Merge File Work"); ((SparkWork) work).add(cplan); } else { work = new MapredWork(); ((MapredWork)work).setMapWork(cplan); } } // use CombineHiveInputFormat for map-only merging cplan.setInputformat("org.apache.hadoop.hive.ql.io.CombineHiveInputFormat"); // NOTE: we should gather stats in MR1 rather than MR2 at merge job since we don't // know if merge MR2 will be triggered at execution time ConditionalTask cndTsk = GenMapRedUtils.createCondTask(conf, currTask, dummyMv, work, fsInputDesc.getFinalDirName().toString()); // keep the dynamic partition context in conditional task resolver context ConditionalResolverMergeFilesCtx mrCtx = (ConditionalResolverMergeFilesCtx) cndTsk.getResolverCtx(); mrCtx.setDPCtx(fsInputDesc.getDynPartCtx()); mrCtx.setLbCtx(fsInputDesc.getLbCtx()); // // 3. add the moveTask as the children of the conditional task // linkMoveTask(fsOutput, cndTsk, mvTasks, conf, dependencyTask); } /** * Make the move task in the GenMRProcContext following the FileSinkOperator a dependent of all * possible subtrees branching from the ConditionalTask. * * @param newOutput * @param cndTsk * @param mvTasks * @param hconf * @param dependencyTask */ public static void linkMoveTask(FileSinkOperator newOutput, ConditionalTask cndTsk, List<Task<MoveWork>> mvTasks, HiveConf hconf, DependencyCollectionTask dependencyTask) { Task<MoveWork> mvTask = GenMapRedUtils.findMoveTask(mvTasks, newOutput); for (Task<? extends Serializable> tsk : cndTsk.getListTasks()) { linkMoveTask(mvTask, tsk, hconf, dependencyTask); } } /** * Follows the task tree down from task and makes all leaves parents of mvTask * * @param mvTask * @param task * @param hconf * @param dependencyTask */ public static void linkMoveTask(Task<MoveWork> mvTask, Task<? 
extends Serializable> task, HiveConf hconf, DependencyCollectionTask dependencyTask) { if (task.getDependentTasks() == null || task.getDependentTasks().isEmpty()) { // If it's a leaf, add the move task as a child addDependentMoveTasks(mvTask, hconf, task, dependencyTask); } else { // Otherwise, for each child run this method recursively for (Task<? extends Serializable> childTask : task.getDependentTasks()) { linkMoveTask(mvTask, childTask, hconf, dependencyTask); } } } /** * Adds the dependencyTaskForMultiInsert in ctx as a dependent of parentTask. If mvTask is a * load table, and HIVE_MULTI_INSERT_ATOMIC_OUTPUTS is set, adds mvTask as a dependent of * dependencyTaskForMultiInsert in ctx, otherwise adds mvTask as a dependent of parentTask as * well. * * @param mvTask * @param hconf * @param parentTask * @param dependencyTask */ public static void addDependentMoveTasks(Task<MoveWork> mvTask, HiveConf hconf, Task<? extends Serializable> parentTask, DependencyCollectionTask dependencyTask) { if (mvTask != null) { if (dependencyTask != null) { parentTask.addDependentTask(dependencyTask); if (mvTask.getWork().getLoadTableWork() != null) { // Moving tables/partitions depend on the dependencyTask dependencyTask.addDependentTask(mvTask); } else { // Moving files depends on the parentTask (we still want the dependencyTask to depend // on the parentTask) parentTask.addDependentTask(mvTask); } } else { parentTask.addDependentTask(mvTask); } } } /** * Add the StatsTask as a dependent task of the MoveTask * because StatsTask will change the Table/Partition metadata. For atomicity, we * should not change it before the data is actually there done by MoveTask. * * @param nd * the FileSinkOperator whose results are taken care of by the MoveTask. * @param mvTask * The MoveTask that moves the FileSinkOperator's results. * @param currTask * The MapRedTask that the FileSinkOperator belongs to. * @param hconf * HiveConf */ public static void addStatsTask(FileSinkOperator nd, MoveTask mvTask, Task<? extends Serializable> currTask, HiveConf hconf) { MoveWork mvWork = mvTask.getWork(); StatsWork statsWork = null; if (mvWork.getLoadTableWork() != null) { statsWork = new StatsWork(mvWork.getLoadTableWork()); } else if (mvWork.getLoadFileWork() != null) { statsWork = new StatsWork(mvWork.getLoadFileWork()); } assert statsWork != null : "Error when genereting StatsTask"; statsWork.setSourceTask(currTask); statsWork.setStatsReliable(hconf.getBoolVar(ConfVars.HIVE_STATS_RELIABLE)); if (currTask.getWork() instanceof MapredWork) { MapredWork mrWork = (MapredWork) currTask.getWork(); mrWork.getMapWork().setGatheringStats(true); if (mrWork.getReduceWork() != null) { mrWork.getReduceWork().setGatheringStats(true); } } else if (currTask.getWork() instanceof SparkWork) { SparkWork work = (SparkWork) currTask.getWork(); for (BaseWork w: work.getAllWork()) { w.setGatheringStats(true); } } else { // must be TezWork TezWork work = (TezWork) currTask.getWork(); for (BaseWork w: work.getAllWork()) { w.setGatheringStats(true); } } // AggKey in StatsWork is used for stats aggregation while StatsAggPrefix // in FileSinkDesc is used for stats publishing. They should be consistent. statsWork.setAggKey(nd.getConf().getStatsAggPrefix()); Task<? 
extends Serializable> statsTask = TaskFactory.get(statsWork, hconf); // mark the MapredWork and FileSinkOperator for gathering stats nd.getConf().setGatherStats(true); nd.getConf().setStatsReliable(hconf.getBoolVar(ConfVars.HIVE_STATS_RELIABLE)); nd.getConf().setMaxStatsKeyPrefixLength(StatsFactory.getMaxPrefixLength(hconf)); // mrWork.addDestinationTable(nd.getConf().getTableInfo().getTableName()); // subscribe feeds from the MoveTask so that MoveTask can forward the list // of dynamic partition list to the StatsTask mvTask.addDependentTask(statsTask); statsTask.subscribeFeed(mvTask); } /** * Returns true iff current query is an insert into for the given file sink * * @param parseCtx * @param fsOp * @return */ public static boolean isInsertInto(ParseContext parseCtx, FileSinkOperator fsOp) { return fsOp.getConf().getTableInfo().getTableName() != null && parseCtx.getQB().getParseInfo().isInsertToTable(); } /** * Create a MapredWork based on input path, the top operator and the input * table descriptor. * * @param conf * @param topOp * the table scan operator that is the root of the MapReduce task. * @param fsDesc * the file sink descriptor that serves as the input to this merge task. * @param parentMR * the parent MapReduce work * @param parentFS * the last FileSinkOperator in the parent MapReduce work * @return the MapredWork */ private static MapWork createMRWorkForMergingFiles (HiveConf conf, Operator<? extends OperatorDesc> topOp, FileSinkDesc fsDesc) { ArrayList<String> aliases = new ArrayList<String>(); String inputDir = fsDesc.getFinalDirName().toString(); TableDesc tblDesc = fsDesc.getTableInfo(); aliases.add(inputDir); // dummy alias: just use the input path // constructing the default MapredWork MapredWork cMrPlan = GenMapRedUtils.getMapRedWorkFromConf(conf); MapWork cplan = cMrPlan.getMapWork(); cplan.getPathToAliases().put(inputDir, aliases); cplan.getPathToPartitionInfo().put(inputDir, new PartitionDesc(tblDesc, null)); cplan.getAliasToWork().put(inputDir, topOp); cplan.setMapperCannotSpanPartns(true); return cplan; } /** * Create a block level merge task for RCFiles or stripe level merge task for * ORCFiles * * @param fsInputDesc * @param finalName * @param inputFormatClass * @return MergeWork if table is stored as RCFile or ORCFile, * null otherwise */ public static MapWork createMergeTask(FileSinkDesc fsInputDesc, Path finalName, boolean hasDynamicPartitions) throws SemanticException { Path inputDir = fsInputDesc.getFinalDirName(); TableDesc tblDesc = fsInputDesc.getTableInfo(); List<Path> inputDirs = new ArrayList<Path>(1); ArrayList<String> inputDirstr = new ArrayList<String>(1); // this will be populated by MergeFileWork.resolveDynamicPartitionStoredAsSubDirsMerge // in case of dynamic partitioning and list bucketing if (!hasDynamicPartitions && !GenMapRedUtils.isSkewedStoredAsDirs(fsInputDesc)) { inputDirs.add(inputDir); } inputDirstr.add(inputDir.toString()); // internal input format class for CombineHiveInputFormat final Class<? 
extends InputFormat> internalIFClass; if (tblDesc.getInputFileFormatClass().equals(RCFileInputFormat.class)) { internalIFClass = RCFileBlockMergeInputFormat.class; } else if (tblDesc.getInputFileFormatClass().equals(OrcInputFormat.class)) { internalIFClass = OrcFileStripeMergeInputFormat.class; } else { throw new SemanticException("createMergeTask called on a table with file" + " format other than RCFile or ORCFile"); } // create the merge file work MergeFileWork work = new MergeFileWork(inputDirs, finalName, hasDynamicPartitions, tblDesc.getInputFileFormatClass().getName()); LinkedHashMap<String, ArrayList<String>> pathToAliases = new LinkedHashMap<String, ArrayList<String>>(); pathToAliases.put(inputDir.toString(), inputDirstr); work.setMapperCannotSpanPartns(true); work.setPathToAliases(pathToAliases); PartitionDesc pDesc = new PartitionDesc(tblDesc, null); pDesc.setInputFileFormatClass(internalIFClass); work.getPathToPartitionInfo().put(inputDir.toString(), pDesc); work.setListBucketingCtx(fsInputDesc.getLbCtx()); // create alias to work which contains the merge operator LinkedHashMap<String, Operator<? extends OperatorDesc>> aliasToWork = new LinkedHashMap<String, Operator<? extends OperatorDesc>>(); Operator<? extends OperatorDesc> mergeOp = null; final FileMergeDesc fmd; if (tblDesc.getInputFileFormatClass().equals(RCFileInputFormat.class)) { fmd = new RCFileMergeDesc(); } else { fmd = new OrcFileMergeDesc(); } fmd.setDpCtx(fsInputDesc.getDynPartCtx()); fmd.setOutputPath(finalName); fmd.setHasDynamicPartitions(work.hasDynamicPartitions()); fmd.setListBucketingAlterTableConcatenate(work.isListBucketingAlterTableConcatenate()); int lbLevel = work.getListBucketingCtx() == null ? 0 : work.getListBucketingCtx().calculateListBucketingLevel(); fmd.setListBucketingDepth(lbLevel); mergeOp = OperatorFactory.get(fmd); aliasToWork.put(inputDir.toString(), mergeOp); work.setAliasToWork(aliasToWork); return work; } /** * Construct a conditional task given the current leaf task, the MoveWork and the MapredWork. * * @param conf * HiveConf * @param currTask * current leaf task * @param mvWork * MoveWork for the move task * @param mergeWork * MapredWork for the merge task. * @param inputPath * the input directory of the merge/move task * @return The conditional task */ @SuppressWarnings("unchecked") public static ConditionalTask createCondTask(HiveConf conf, Task<? extends Serializable> currTask, MoveWork mvWork, Serializable mergeWork, String inputPath) { // There are 3 options for this ConditionalTask: // 1) Merge the partitions // 2) Move the partitions (i.e. don't merge the partitions) // 3) Merge some partitions and move other partitions (i.e. merge some partitions and don't // merge others) in this case the merge is done first followed by the move to prevent // conflicts. Task<? extends Serializable> mergeOnlyMergeTask = TaskFactory.get(mergeWork, conf); Task<? extends Serializable> moveOnlyMoveTask = TaskFactory.get(mvWork, conf); Task<? extends Serializable> mergeAndMoveMergeTask = TaskFactory.get(mergeWork, conf); Task<? extends Serializable> mergeAndMoveMoveTask = TaskFactory.get(mvWork, conf); // NOTE! 
It is necessary merge task is the parent of the move task, and not // the other way around, for the proper execution of the execute method of // ConditionalTask mergeAndMoveMergeTask.addDependentTask(mergeAndMoveMoveTask); List<Serializable> listWorks = new ArrayList<Serializable>(); listWorks.add(mvWork); listWorks.add(mergeWork); ConditionalWork cndWork = new ConditionalWork(listWorks); List<Task<? extends Serializable>> listTasks = new ArrayList<Task<? extends Serializable>>(); listTasks.add(moveOnlyMoveTask); listTasks.add(mergeOnlyMergeTask); listTasks.add(mergeAndMoveMergeTask); ConditionalTask cndTsk = (ConditionalTask) TaskFactory.get(cndWork, conf); cndTsk.setListTasks(listTasks); // create resolver cndTsk.setResolver(new ConditionalResolverMergeFiles()); ConditionalResolverMergeFilesCtx mrCtx = new ConditionalResolverMergeFilesCtx(listTasks, inputPath); cndTsk.setResolverCtx(mrCtx); // make the conditional task as the child of the current leaf task currTask.addDependentTask(cndTsk); return cndTsk; } /** * check if it is skewed table and stored as dirs. * * @param fsInputDesc * @return */ public static boolean isSkewedStoredAsDirs(FileSinkDesc fsInputDesc) { return (fsInputDesc.getLbCtx() == null) ? false : fsInputDesc.getLbCtx() .isSkewedStoredAsDir(); } public static Task<MoveWork> findMoveTask( List<Task<MoveWork>> mvTasks, FileSinkOperator fsOp) { // find the move task for (Task<MoveWork> mvTsk : mvTasks) { MoveWork mvWork = mvTsk.getWork(); Path srcDir = null; if (mvWork.getLoadFileWork() != null) { srcDir = mvWork.getLoadFileWork().getSourcePath(); } else if (mvWork.getLoadTableWork() != null) { srcDir = mvWork.getLoadTableWork().getSourcePath(); } if ((srcDir != null) && (srcDir.equals(fsOp.getConf().getFinalDirName()))) { return mvTsk; } } return null; } /** * Returns true iff the fsOp requires a merge * @param mvTasks * @param hconf * @param fsOp * @param currTask * @param isInsertTable * @return */ public static boolean isMergeRequired(List<Task<MoveWork>> mvTasks, HiveConf hconf, FileSinkOperator fsOp, Task<? extends Serializable> currTask, boolean isInsertTable) { // Has the user enabled merging of files for map-only jobs or for all jobs if ((mvTasks != null) && (!mvTasks.isEmpty())) { // no need of merging if the move is to a local file system MoveTask mvTask = (MoveTask) GenMapRedUtils.findMoveTask(mvTasks, fsOp); if (mvTask != null && isInsertTable && hconf.getBoolVar(ConfVars.HIVESTATSAUTOGATHER)) { GenMapRedUtils.addStatsTask(fsOp, mvTask, currTask, hconf); } if ((mvTask != null) && !mvTask.isLocal() && fsOp.getConf().canBeMerged()) { if (currTask.getWork() instanceof TezWork) { // tez blurs the boundary between map and reduce, thus it has it's own // config return hconf.getBoolVar(ConfVars.HIVEMERGETEZFILES); } else if (currTask.getWork() instanceof SparkWork) { // spark has its own config for merging return hconf.getBoolVar(ConfVars.HIVEMERGESPARKFILES); } if (fsOp.getConf().isLinkedFileSink()) { // If the user has HIVEMERGEMAPREDFILES set to false, the idea was the // number of reducers are few, so the number of files anyway are small. // However, with this optimization, we are increasing the number of files // possibly by a big margin. So, merge aggresively. 
if (hconf.getBoolVar(ConfVars.HIVEMERGEMAPFILES) || hconf.getBoolVar(ConfVars.HIVEMERGEMAPREDFILES)) { return true; } } else { // There are separate configuration parameters to control whether to // merge for a map-only job // or for a map-reduce job if (currTask.getWork() instanceof MapredWork) { ReduceWork reduceWork = ((MapredWork) currTask.getWork()).getReduceWork(); boolean mergeMapOnly = hconf.getBoolVar(ConfVars.HIVEMERGEMAPFILES) && reduceWork == null; boolean mergeMapRed = hconf.getBoolVar(ConfVars.HIVEMERGEMAPREDFILES) && reduceWork != null; if (mergeMapOnly || mergeMapRed) { return true; } } else { return false; } } } } return false; } /** * Create and add any dependent move tasks * * @param currTask * @param chDir * @param fsOp * @param parseCtx * @param mvTasks * @param hconf * @param dependencyTask * @return */ public static Path createMoveTask(Task<? extends Serializable> currTask, boolean chDir, FileSinkOperator fsOp, ParseContext parseCtx, List<Task<MoveWork>> mvTasks, HiveConf hconf, DependencyCollectionTask dependencyTask) { Path dest = null; if (chDir) { dest = fsOp.getConf().getFinalDirName(); // generate the temporary file // it must be on the same file system as the current destination Context baseCtx = parseCtx.getContext(); Path tmpDir = baseCtx.getExternalTmpPath(dest); FileSinkDesc fileSinkDesc = fsOp.getConf(); // Change all the linked file sink descriptors if (fileSinkDesc.isLinkedFileSink()) { for (FileSinkDesc fsConf:fileSinkDesc.getLinkedFileSinkDesc()) { fsConf.setParentDir(tmpDir); fsConf.setDirName(new Path(tmpDir, fsConf.getDirName().getName())); } } else { fileSinkDesc.setDirName(tmpDir); } } Task<MoveWork> mvTask = null; if (!chDir) { mvTask = GenMapRedUtils.findMoveTask(mvTasks, fsOp); } // Set the move task to be dependent on the current task if (mvTask != null) { GenMapRedUtils.addDependentMoveTasks(mvTask, hconf, currTask, dependencyTask); } return dest; } public static Set<Partition> getConfirmedPartitionsForScan(QBParseInfo parseInfo) { Set<Partition> confirmedPartns = new HashSet<Partition>(); tableSpec tblSpec = parseInfo.getTableSpec(); if (tblSpec.specType == tableSpec.SpecType.STATIC_PARTITION) { // static partition if (tblSpec.partHandle != null) { confirmedPartns.add(tblSpec.partHandle); } else { // partial partition spec has null partHandle assert parseInfo.isNoScanAnalyzeCommand(); confirmedPartns.addAll(tblSpec.partitions); } } else if (tblSpec.specType == tableSpec.SpecType.DYNAMIC_PARTITION) { // dynamic partition confirmedPartns.addAll(tblSpec.partitions); } return confirmedPartns; } public static List<String> getPartitionColumns(QBParseInfo parseInfo) { tableSpec tblSpec = parseInfo.getTableSpec(); if (tblSpec.tableHandle.isPartitioned()) { return new ArrayList<String>(tblSpec.getPartSpec().keySet()); } return Collections.emptyList(); } public static List<Path> getInputPathsForPartialScan(QBParseInfo parseInfo, StringBuffer aggregationKey) throws SemanticException { List<Path> inputPaths = new ArrayList<Path>(); switch (parseInfo.getTableSpec().specType) { case TABLE_ONLY: inputPaths.add(parseInfo.getTableSpec().tableHandle.getPath()); break; case STATIC_PARTITION: Partition part = parseInfo.getTableSpec().partHandle; try { aggregationKey.append(Warehouse.makePartPath(part.getSpec())); } catch (MetaException e) { throw new SemanticException(ErrorMsg.ANALYZE_TABLE_PARTIALSCAN_AGGKEY.getMsg( part.getDataLocation().toString() + e.getMessage())); } inputPaths.add(part.getDataLocation()); break; default: assert false; } return 
inputPaths; } public static Set<String> findAliases(final MapWork work, Operator<?> startOp) { Set<String> aliases = new LinkedHashSet<String>(); for (Operator<?> topOp : findTopOps(startOp, null)) { String alias = findAlias(work, topOp); if (alias != null) { aliases.add(alias); } } return aliases; } public static Set<Operator<?>> findTopOps(Operator<?> startOp, final Class<?> clazz) { final Set<Operator<?>> operators = new LinkedHashSet<Operator<?>>(); OperatorUtils.iterateParents(startOp, new NodeUtils.Function<Operator<?>>() { @Override public void apply(Operator<?> argument) { if (argument.getNumParent() == 0 && (clazz == null || clazz.isInstance(argument))) { operators.add(argument); } } }); return operators; } public static String findAlias(MapWork work, Operator<?> operator) { for (Entry<String, Operator<?>> entry : work.getAliasToWork().entrySet()) { if (entry.getValue() == operator) { return entry.getKey(); } } return null; } private GenMapRedUtils() { // prevent instantiation } }
{ "content_hash": "6fa27b43518dfdd70289b20532f7aed3", "timestamp": "", "source": "github", "line_count": 1861, "max_line_length": 108, "avg_line_length": 38.26168726491134, "alnum_prop": 0.6783793272944316, "repo_name": "WANdisco/amplab-hive", "id": "1aec3074bbacc164021105f34b50d5a807ae70e4", "size": "72011", "binary": false, "copies": "2", "ref": "refs/heads/trunk", "path": "ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java", "mode": "33188", "license": "apache-2.0", "language": [ { "name": "Batchfile", "bytes": "46615" }, { "name": "C", "bytes": "120921" }, { "name": "C++", "bytes": "163978" }, { "name": "CSS", "bytes": "1372" }, { "name": "GAP", "bytes": "119185" }, { "name": "Groff", "bytes": "5379" }, { "name": "HTML", "bytes": "22057" }, { "name": "Java", "bytes": "24417582" }, { "name": "M", "bytes": "2173" }, { "name": "Makefile", "bytes": "6963" }, { "name": "PHP", "bytes": "1715861" }, { "name": "PLpgSQL", "bytes": "85750" }, { "name": "Perl", "bytes": "316401" }, { "name": "PigLatin", "bytes": "12333" }, { "name": "Protocol Buffer", "bytes": "6541" }, { "name": "Python", "bytes": "268845" }, { "name": "SQLPL", "bytes": "1414" }, { "name": "Shell", "bytes": "149965" }, { "name": "Thrift", "bytes": "93618" }, { "name": "XSLT", "bytes": "7619" } ] }
 using System; using System.Collections.Generic; using XenAdmin.Core; using XenAdmin.Network; using XenAPI; namespace XenAdmin.Actions { public abstract class PoolAbstractAction : AsyncAction { protected Func<Host, AdUserAndPassword> GetAdCredentials; protected Func<HostAbstractAction, Pool, long, long, bool> AcceptNTolChanges; protected Action<List<LicenseFailure>, string> DoOnLicensingFailure; protected PoolAbstractAction(IXenConnection connection, string title, Func<Host, AdUserAndPassword> getAdCredentials, Func<HostAbstractAction, Pool, long, long, bool> acceptNTolChanges, Action<List<LicenseFailure>, string> doOnLicensingFailure) : base(connection, title) { this.GetAdCredentials = getAdCredentials; this.AcceptNTolChanges = acceptNTolChanges; this.DoOnLicensingFailure = doOnLicensingFailure; } protected void ClearAllDelegates() { GetAdCredentials = null; AcceptNTolChanges = null; DoOnLicensingFailure = null; } protected static void FixLicensing(Pool pool, List<Host> hostsToRelicense, Action<List<LicenseFailure>, string> doOnLicensingFailure) { if (hostsToRelicense.Count == 0) return; Host poolMaster = Helpers.GetMaster(pool); AsyncAction action = new ApplyLicenseEditionAction(hostsToRelicense.ConvertAll(h=>h as IXenObject), Host.GetEdition(poolMaster.edition), poolMaster.license_server["address"], poolMaster.license_server["port"], doOnLicensingFailure); action.RunExternal(null); } /// <summary> /// Mask the CPUs of any slaves that need masking to join the pool /// </summary> /// <returns>Whether any CPUs were masked</returns> protected static bool FixCpus(Pool pool, List<Host> hostsToCpuMask, Func<HostAbstractAction, Pool, long, long, bool> acceptNTolChanges) { if (hostsToCpuMask.Count == 0) return false; Host poolMaster = Helpers.GetMaster(pool); List<RebootHostAction> rebootActions = new List<RebootHostAction>(); // Mask the CPUs, and reboot the hosts (simultaneously, as they must all be on separate connections) foreach (Host host in hostsToCpuMask) { Host.set_cpu_features(host.Connection.Session, host.opaque_ref, poolMaster.cpu_info["features"]); RebootHostAction action = new RebootHostAction(host, acceptNTolChanges); rebootActions.Add(action); action.RunAsync(); } // Wait for all the actions to finish, checking every ten seconds while (true) { bool done = true; foreach (RebootHostAction action in rebootActions) { if (!action.IsCompleted) done = false; } if (done) break; System.Threading.Thread.Sleep(10000); } return true; } /// <summary> /// If we're joining a pool that has a non-shared default/crash/suspend SR, then clear that /// pool's default SRs, since a pool with default SRs set to local storage is a confusing /// configuration that we do not allow to be set through the GUI (only shared SRs can be set /// as the default in pools, even though xapi allows otherwise). 
/// </summary> /// <param name="pool"></param> protected void ClearNonSharedSrs(Pool pool) { SR defSR = pool.Connection.Resolve<SR>(pool.default_SR); if (defSR != null && !defSR.shared) { XenAPI.Pool poolCopy = (Pool)pool.Clone(); poolCopy.default_SR = new XenRef<SR>(Helper.NullOpaqueRef); poolCopy.crash_dump_SR = new XenRef<SR>(Helper.NullOpaqueRef); poolCopy.suspend_image_SR = new XenRef<SR>(Helper.NullOpaqueRef); pool.Locked = true; try { poolCopy.SaveChanges(Session); } finally { pool.Locked = false; } } } protected static void FixAd(Pool pool, List<Host> hostsToAdConfigure, Func<Host, AdUserAndPassword> getAdCredentials) { if (hostsToAdConfigure.Count == 0) return; Host poolMaster = Helpers.GetMaster(pool); AsyncAction action; bool success = true; do { success = true; AdUserAndPassword adUserAndPassword = getAdCredentials(poolMaster); try { foreach (Host h in hostsToAdConfigure) { action = new EnableAdAction(Helpers.GetPoolOfOne(h.Connection), poolMaster.external_auth_service_name,adUserAndPassword.Username, adUserAndPassword.Password) {Host = h}; action.RunExternal(null); } } catch (EnableAdAction.CredentialsFailure) { success = false; } } while (!success); } public class AdUserAndPassword { public AdUserAndPassword(string username,string password) { Username = username; Password = password; } public readonly string Username; public readonly string Password; } } }
{ "content_hash": "de7ed903506a0e1af0a51c729c5ec409", "timestamp": "", "source": "github", "line_count": 156, "max_line_length": 222, "avg_line_length": 37.55128205128205, "alnum_prop": 0.5648685558210994, "repo_name": "aftabahmedsajid/XenCenter-Complete-dependencies-", "id": "79394377f652bc60a98d1e61b768fe325facfbdc", "size": "7288", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "XenModel/Actions/Pool/PoolAction.cs", "mode": "33188", "license": "bsd-2-clause", "language": [ { "name": "C", "bytes": "1907" }, { "name": "C#", "bytes": "16370140" }, { "name": "C++", "bytes": "21035" }, { "name": "JavaScript", "bytes": "812" }, { "name": "PowerShell", "bytes": "40" }, { "name": "Shell", "bytes": "62958" }, { "name": "Visual Basic", "bytes": "11351" } ] }
class UserAgent < ActiveRecord::Base
  validates_presence_of :agent_id
  validates_presence_of :agent_string

  before_validation :generate_agent_id, on: :create

  # Creates a UserAgent record based on the agent string.
  def self.do_record(agent_string)
    checksum = agent_string_checksum(agent_string)
    UserAgent.create_with(agent_string: agent_string).find_or_create_by(agent_id: checksum)
  end

  def self.agent_string_checksum(agent_string)
    Zlib.crc32(agent_string).to_s
  end

  private

  def generate_agent_id
    self.agent_id = UserAgent.agent_string_checksum(self.agent_string)
  end
end
{ "content_hash": "be25503e55c794586528ed78c5dd379e", "timestamp": "", "source": "github", "line_count": 22, "max_line_length": 91, "avg_line_length": 27.727272727272727, "alnum_prop": 0.7442622950819672, "repo_name": "virgild/jibjob-rails", "id": "11b4b08a55493195e787ea0faf03a493993cda32", "size": "875", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "app/models/user_agent.rb", "mode": "33188", "license": "mit", "language": [ { "name": "CSS", "bytes": "33581" }, { "name": "HTML", "bytes": "84646" }, { "name": "JavaScript", "bytes": "34091" }, { "name": "Puppet", "bytes": "6285" }, { "name": "Ruby", "bytes": "157396" } ] }
package model.prefab;

import model.AbstractModel;
import resourceHandling.Resource;

public class UpArrowModel extends AbstractModel {

    public UpArrowModel() {
        super("Up Arrow");
        initialize();
    }

    private void initialize() {
        add(new Resource("Up Arrow", "/Resources/Sprites/Misc/Up.gif", (float) 2.5, null, true, "Default"));
        add(new Resource("Up Arrow", "/Resources/Sprites/Misc/Up.gif", (float) 2.5, null, true, "NoEntityState"));
    }
}
{ "content_hash": "888023e410e270d7a2adc5420cc9c697", "timestamp": "", "source": "github", "line_count": 17, "max_line_length": 108, "avg_line_length": 26.41176470588235, "alnum_prop": 0.7082405345211581, "repo_name": "PacketCloud/Angry-Beaver", "id": "2967564faa8c7cc5a7b5f01e28a3f2812e5e3b9e", "size": "449", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "src/model/prefab/UpArrowModel.java", "mode": "33188", "license": "mit", "language": [ { "name": "Java", "bytes": "137772" } ] }
package fredboat.definitions; import fredboat.util.Emojis; import java.util.Optional; /** * Created by napster on 15.02.18. * <p> * A user should not be able to enable/disable a locked module */ public enum Module { //@formatter:off locked // enabledByDef ADMIN ("moduleAdmin", Emojis.KEY, true, true, "administration"), INFO ("moduleInfo", Emojis.INFO, true, true, "information"), CONFIG("moduleConfig", Emojis.GEAR, true, true, "configuration"), MUSIC ("moduleMusic", Emojis.MUSIC, true, true, "music"), MOD ("moduleModeration", Emojis.HAMMER, true, false, "moderation"), UTIL ("moduleUtility", Emojis.TOOLS, true, false, "utility"), FUN ("moduleFun", Emojis.DIE, true, false, "fun"), ; //@formatter:on private final String translationKey; private final String emoji; private final boolean enabledByDefault; private final boolean lockedModule; private final String altName; Module(String translationKey, String emoji, boolean enabledByDefault, boolean lockedModule, String altName) { this.translationKey = translationKey; this.emoji = emoji; this.enabledByDefault = enabledByDefault; this.lockedModule = lockedModule; this.altName = altName; } public String getTranslationKey() { return translationKey; } public String getEmoji() { return emoji; } public boolean isEnabledByDefault() { return enabledByDefault; } public boolean isLockedModule() { return lockedModule; } public String getAltName() { return altName; } /** * This method tries to parse an input into a module that we recognize. * * @param input input to be parsed into a Module known to us (= defined in this enum) * @return the optional module identified from the input. */ public static Optional<Module> parse(String input) { for (Module module : Module.values()) { if (module.name().equalsIgnoreCase(input) || module.getAltName().equalsIgnoreCase(input)) { return Optional.of(module); } } return Optional.empty(); } }
{ "content_hash": "b7c530e76b871fbe8d592b17d656d555", "timestamp": "", "source": "github", "line_count": 79, "max_line_length": 95, "avg_line_length": 29.658227848101266, "alnum_prop": 0.6099018352539479, "repo_name": "Frederikam/FredBoat", "id": "71c797a19f8a900ce8475e708facc848574863f4", "size": "3495", "binary": false, "copies": "1", "ref": "refs/heads/dev", "path": "Shared/src/main/java/fredboat/definitions/Module.java", "mode": "33188", "license": "mit", "language": [ { "name": "Dockerfile", "bytes": "406" }, { "name": "HTML", "bytes": "14673" }, { "name": "Java", "bytes": "476205" }, { "name": "Kotlin", "bytes": "656768" }, { "name": "Shell", "bytes": "2827" } ] }
// // PDFAnnotationMarkup_SKExtensions.m // Skim // // Created by Christiaan Hofman on 4/1/08. #import "PDFAnnotationMarkup_SKExtensions.h" #import <SkimNotes/SkimNotes.h> #import "PDFAnnotation_SKExtensions.h" #import "PDFAnnotationInk_SKExtensions.h" #import "SKStringConstants.h" #import "SKFDFParser.h" #import "PDFSelection_SKExtensions.h" #import "NSUserDefaults_SKExtensions.h" #import "NSGeometry_SKExtensions.h" #import "NSData_SKExtensions.h" #import "NSCharacterSet_SKExtensions.h" #import "SKRuntime.h" #import "NSPointerArray_SKExtensions.h" #import "NSColor_SKExtensions.h" #import "PDFSelection_SKExtensions.h" #import "NSResponder_SKExtensions.h" #import "PDFPage_SKExtensions.h" NSString *SKPDFAnnotationSelectionSpecifierKey = @"selectionSpecifier"; @implementation PDFAnnotationMarkup (SKExtensions) /* http://www.cocoabuilder.com/archive/message/cocoa/2007/2/16/178891 The docs are wrong (as is Adobe's spec). The ordering at zero rotation is: -------- | 0 1 | | 2 3 | -------- */ static NSArray *createQuadPointsWithBounds(const NSRect bounds, const NSPoint origin, NSInteger rotation) { NSRect r = NSOffsetRect(bounds, -origin.x, -origin.y); NSInteger offset = rotation / 90; NSPoint p[4]; memset(&p, 0, 4 * sizeof(NSPoint)); p[offset] = SKTopLeftPoint(r); p[(++offset)%4] = SKTopRightPoint(r); p[(++offset)%4] = SKBottomRightPoint(r); p[(++offset)%4] = SKBottomLeftPoint(r); return [[NSArray alloc] initWithObjects:[NSValue valueWithPoint:p[0]], [NSValue valueWithPoint:p[1]], [NSValue valueWithPoint:p[3]], [NSValue valueWithPoint:p[2]], nil]; } static NSMapTable *lineRectsTable = nil; static void (*original_dealloc)(id, SEL) = NULL; - (void)replacement_dealloc { [lineRectsTable removeObjectForKey:self]; original_dealloc(self, _cmd); } + (void)load { original_dealloc = (void (*)(id, SEL))SKReplaceInstanceMethodImplementationFromSelector(self, @selector(dealloc), @selector(replacement_dealloc)); lineRectsTable = [[NSMapTable alloc] initWithKeyOptions:NSMapTableZeroingWeakMemory | NSMapTableObjectPointerPersonality valueOptions:NSMapTableStrongMemory | NSMapTableObjectPointerPersonality capacity:0]; } - (NSPointerArray *)lineRects:(BOOL *)created { NSPointerArray *lineRects = [lineRectsTable objectForKey:self]; if (created) *created = (lineRects == NULL); if (lineRects == NULL) { lineRects = [[NSPointerArray alloc] initForRectPointers]; [lineRectsTable setObject:lineRects forKey:self]; [lineRects release];; } return lineRects; } + (NSColor *)defaultSkimNoteColorForMarkupType:(NSInteger)markupType { switch (markupType) { case kPDFMarkupTypeUnderline: return [[NSUserDefaults standardUserDefaults] colorForKey:SKUnderlineNoteColorKey]; case kPDFMarkupTypeStrikeOut: return [[NSUserDefaults standardUserDefaults] colorForKey:SKStrikeOutNoteColorKey]; case kPDFMarkupTypeHighlight: return [[NSUserDefaults standardUserDefaults] colorForKey:SKHighlightNoteColorKey]; } return nil; } - (id)initSkimNoteWithBounds:(NSRect)bounds markupType:(NSInteger)type { self = [super initSkimNoteWithBounds:bounds]; if (self) { [self setMarkupType:type]; NSColor *color = [[self class] defaultSkimNoteColorForMarkupType:type]; if (color) [self setColor:color]; } return self; } - (id)initSkimNoteWithBounds:(NSRect)bounds { self = [self initSkimNoteWithBounds:bounds markupType:kPDFMarkupTypeHighlight]; return self; } - (id)initSkimNoteWithSelection:(PDFSelection *)selection markupType:(NSInteger)type { NSRect bounds = [selection hasCharacters] ? 
[selection boundsForPage:[selection safeFirstPage]] : NSZeroRect; if ([selection hasCharacters] == NO || NSIsEmptyRect(bounds)) { [[self initWithBounds:NSZeroRect] release]; self = nil; } else { self = [self initSkimNoteWithBounds:bounds markupType:type]; if (self) { PDFPage *page = [selection safeFirstPage]; NSInteger rotation = [page intrinsicRotation]; NSMutableArray *quadPoints = [[NSMutableArray alloc] init]; NSRect newBounds = NSZeroRect; if (selection) { NSUInteger i, iMax; NSRect lineRect = NSZeroRect; for (PDFSelection *sel in [selection selectionsByLine]) { lineRect = [sel boundsForPage:page]; if (NSIsEmptyRect(lineRect) == NO && [[sel string] rangeOfCharacterFromSet:[NSCharacterSet nonWhitespaceAndNewlineCharacterSet]].length) { [[self lineRects:NULL] addPointer:&lineRect]; newBounds = NSUnionRect(lineRect, newBounds); } } if (NSIsEmptyRect(newBounds)) { [self release]; self = nil; } else { [self setBounds:newBounds]; NSPointerArray *lines = [self lineRects:NULL]; iMax = [lines count]; for (i = 0; i < iMax; i++) { NSArray *quadLine = createQuadPointsWithBounds([lines rectAtIndex:i], [self bounds].origin, rotation); [quadPoints addObjectsFromArray:quadLine]; [quadLine release]; } } } [self setQuadrilateralPoints:quadPoints]; [quadPoints release]; } } return self; } - (NSString *)fdfString { NSMutableString *fdfString = [[[super fdfString] mutableCopy] autorelease]; NSPoint point; NSRect bounds = [self bounds]; [fdfString appendFDFName:SKFDFAnnotationQuadrilateralPointsKey]; [fdfString appendString:@"["]; for (NSValue *value in [self quadrilateralPoints]) { point = [value pointValue]; [fdfString appendFormat:@"%f %f ", point.x + NSMinX(bounds), point.y + NSMinY(bounds)]; } [fdfString appendString:@"]"]; return fdfString; } - (NSPointerArray *)lineRects { BOOL created = NO; NSPointerArray *lines = [self lineRects:&created]; if (created) { // archived annotations (or annotations we didn't create) won't have these NSArray *quadPoints = [self quadrilateralPoints]; NSAssert([quadPoints count] % 4 == 0, @"inconsistent number of quad points"); NSUInteger j = [lines count], jMax = [quadPoints count] / 4; NSPoint origin = [self bounds].origin; while ([lines count]) [lines removePointerAtIndex:0]; for (j = 0; j < jMax; j++) { NSRange range = NSMakeRange(4 * j, 4); NSValue *values[4]; [quadPoints getObjects:values range:range]; NSPoint point; NSUInteger i; CGFloat minX = CGFLOAT_MAX, maxX = -CGFLOAT_MAX, minY = CGFLOAT_MAX, maxY = -CGFLOAT_MAX; for (i = 0; i < 4; i++) { point = [values[i] pointValue]; minX = fmin(minX, point.x); maxX = fmax(maxX, point.x); minY = fmin(minY, point.y); maxY = fmax(maxY, point.y); } NSRect lineRect = NSMakeRect(origin.x + minX, origin.y + minY, maxX - minX, maxY - minY); [lines addPointer:&lineRect]; } } return lines; } - (PDFSelection *)selection { NSMutableArray *selections = [NSMutableArray array]; NSPointerArray *lines = [self lineRects]; NSUInteger i, iMax = [lines count]; for (i = 0; i < iMax; i++) { // slightly outset the rect to avoid rounding errors, as selectionForRect is pretty strict in some OS versions, but unfortunately not in others PDFSelection *selection = [[self page] selectionForRect:NSInsetRect([lines rectAtIndex:i], -1.0, -1.0)]; if ([selection hasCharacters]) [selections addObject:selection]; } return [PDFSelection selectionByAddingSelections:selections]; } - (BOOL)hitTest:(NSPoint)point { if ([super hitTest:point] == NO) return NO; NSPointerArray *lines = [self lineRects]; NSUInteger i = [lines count]; BOOL isContained = NO; while (i-- && 
NO == isContained) isContained = NSPointInRect(point, [lines rectAtIndex:i]); return isContained; } - (CGFloat)boundsOrder { NSPointerArray *lines = [self lineRects]; NSRect bounds = [lines count] > 0 ? [lines rectAtIndex:0] : [self bounds]; return [[self page] sortOrderForBounds:bounds]; } - (NSRect)displayRectForBounds:(NSRect)bounds lineWidth:(CGFloat)lineWidth { bounds = [super displayRectForBounds:bounds lineWidth:lineWidth]; if ([self markupType] == kPDFMarkupTypeHighlight) { CGFloat delta = 0.03 * NSHeight(bounds); bounds.origin.y -= delta; bounds.size.height += delta; } return bounds; } - (void)drawSelectionHighlightForView:(PDFView *)pdfView { if (NSIsEmptyRect([self bounds])) return; BOOL active = [[pdfView window] isKeyWindow] && [[[pdfView window] firstResponder] isDescendantOf:pdfView]; NSPointerArray *lines = [self lineRects]; NSUInteger i, iMax = [lines count]; CGFloat lineWidth = 1.0 / [pdfView scaleFactor]; PDFPage *page = [self page]; [NSGraphicsContext saveGraphicsState]; [(active ? [NSColor alternateSelectedControlColor] : [NSColor disabledControlTextColor]) setFill]; for (i = 0; i < iMax; i++) NSFrameRectWithWidth([pdfView convertRect:NSIntegralRect([pdfView convertRect:[lines rectAtIndex:i] fromPage:page]) toPage:page], lineWidth); [NSGraphicsContext restoreGraphicsState]; } - (BOOL)isMarkup { return YES; } - (BOOL)hasBorder { return NO; } - (BOOL)isConvertibleAnnotation { return YES; } - (NSString *)colorDefaultKey { switch ([self markupType]) { case kPDFMarkupTypeUnderline: return SKUnderlineNoteColorKey; case kPDFMarkupTypeStrikeOut: return SKStrikeOutNoteColorKey; case kPDFMarkupTypeHighlight: return SKHighlightNoteColorKey; } return nil; } - (void)autoUpdateString { if ([[NSUserDefaults standardUserDefaults] boolForKey:SKDisableUpdateContentsFromEnclosedTextKey]) return; NSString *selString = [[self selection] cleanedString]; if ([selString length]) [self setString:selString]; } #pragma mark Scripting support + (NSSet *)customScriptingKeys { static NSSet *customMarkupScriptingKeys = nil; if (customMarkupScriptingKeys == nil) { NSMutableSet *customKeys = [[super customScriptingKeys] mutableCopy]; [customKeys addObject:SKPDFAnnotationSelectionSpecifierKey]; [customKeys addObject:SKPDFAnnotationScriptingPointListsKey]; [customKeys removeObject:SKNPDFAnnotationLineWidthKey]; [customKeys removeObject:SKPDFAnnotationScriptingBorderStyleKey]; [customKeys removeObject:SKNPDFAnnotationDashPatternKey]; customMarkupScriptingKeys = [customKeys copy]; [customKeys release]; } return customMarkupScriptingKeys; } - (id)selectionSpecifier { PDFSelection *sel = [self selection]; return [sel hasCharacters] ? [sel objectSpecifier] : [NSArray array]; } - (NSArray *)scriptingPointLists { NSPoint origin = [self bounds].origin; NSMutableArray *pointLists = [NSMutableArray array]; NSMutableArray *pointValues; NSPoint point; NSInteger i, j, iMax = [[self quadrilateralPoints] count] / 4; for (i = 0; i < iMax; i++) { pointValues = [[NSMutableArray alloc] initWithCapacity:iMax]; for (j = 0; j < 4; j++) { point = [[[self quadrilateralPoints] objectAtIndex:4 * i + j] pointValue]; [pointValues addObject:[NSData dataWithPointAsQDPoint:SKAddPoints(point, origin)]]; } [pointLists addObject:pointValues]; [pointValues release]; } return pointLists; } @end
{ "content_hash": "85d228dea83bba9a49d7d1276ecd9396", "timestamp": "", "source": "github", "line_count": 328, "max_line_length": 210, "avg_line_length": 36.93292682926829, "alnum_prop": 0.6514776291893677, "repo_name": "ycaihua/skim-app", "id": "1b39c84401e93bd2280edc66da45e7dd2a60c233", "size": "13665", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "PDFAnnotationMarkup_SKExtensions.m", "mode": "33188", "license": "bsd-3-clause", "language": [ { "name": "C", "bytes": "34446" }, { "name": "C++", "bytes": "7053" }, { "name": "CSS", "bytes": "7668" }, { "name": "Mathematica", "bytes": "158347" }, { "name": "Objective-C", "bytes": "2983467" }, { "name": "Perl", "bytes": "862414" }, { "name": "Python", "bytes": "2366" }, { "name": "Shell", "bytes": "1533" }, { "name": "TeX", "bytes": "8257" } ] }
/**
 * Created by zml on 2017/8/30.
 */
import { Component, OnInit } from '@angular/core';
import { ActivatedRoute, ParamMap } from '@angular/router';
import { Location } from '@angular/common';
import 'rxjs/add/operator/switchMap'; // import the switchMap operator to use later with the route parameters Observable.
import { HeroService } from './hero.service';
import { Hero } from './hero';

@Component({
  selector: 'hero-detail',
  templateUrl: './hero-detail.component.html',
  styleUrls: ['./hero-detail.component.css']
})
export class HeroDetailComponent implements OnInit {
  hero: Hero;

  constructor(
    private heroService: HeroService,
    private route: ActivatedRoute,
    private location: Location
  ) {}

  ngOnInit(): void {
    this.route.paramMap
      // id is a number; route parameters are always strings, converted with the JavaScript (+) operator
      .switchMap((params: ParamMap) => this.heroService.getHero(+params.get('id')))
      .subscribe(hero => this.hero = hero);
    // If a user re-navigates to this component while a getHero() request is still
    // processing, switchMap cancels the old request and then calls getHero() again.
  }

  goBack(): void {
    this.location.back();
  }

  save(): void {
    this.heroService.update(this.hero)
      .then(() => this.goBack());
  }
}
{ "content_hash": "7d8ca8b09d3d36acd343b50f12a45b26", "timestamp": "", "source": "github", "line_count": 46, "max_line_length": 148, "avg_line_length": 29.58695652173913, "alnum_prop": 0.6620132255694342, "repo_name": "zhangbuji/tour-heros", "id": "002d2b4d2a03a51375d5a110db44146bb3390131", "size": "1361", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "src/app/hero-detail.component.ts", "mode": "33188", "license": "mit", "language": [ { "name": "CSS", "bytes": "3617" }, { "name": "HTML", "bytes": "2751" }, { "name": "JavaScript", "bytes": "14954" }, { "name": "TypeScript", "bytes": "12677" } ] }
/**
 * @file Check whether an instance has already been started
 * @author ZHL
 */
var path = require('path');
var fs = require('fs');
var dirtool = require('../helpers/dirTool');

var hiproxyDir = dirtool.getHiproxyDir();

module.exports = function () {
  var pidFile = path.join(hiproxyDir, 'hiproxy.pid');
  var existsPid = fs.existsSync(pidFile);
  var binPath = path.resolve(__filename, '../../../bin/cli.js');

  return new Promise(function (resolve, reject) {
    if (existsPid) {
      var exec = require('child_process').exec;
      var pid = fs.readFileSync(pidFile, 'utf8');
      var cmd = process.platform === 'win32' ? 'tasklist' : 'ps aux';

      exec(cmd, function (err, stdout, stderr) {
        if (err) {
          resolve();
        }

        stdout.split('\n').forEach(function (line) {
          var p = line.trim();
          if (p.indexOf(pid) > -1 && p.indexOf(binPath) > -1) {
            reject(new Error('There is an instance of hiproxy service running, please don\'t run another server.'));
          }
          return false;
        });

        resolve();
      });
    } else {
      resolve();
    }
  });
};
{ "content_hash": "a4562190819a873b43225473fbe9afb9", "timestamp": "", "source": "github", "line_count": 38, "max_line_length": 116, "avg_line_length": 29.026315789473685, "alnum_prop": 0.5611967361740707, "repo_name": "hiproxy/hiproxy", "id": "8329c9f187b3e024cfea456c67a15a95d44224c4", "size": "1125", "binary": false, "copies": "2", "ref": "refs/heads/master", "path": "src/helpers/checkServerStarted.js", "mode": "33188", "license": "mit", "language": [ { "name": "HTML", "bytes": "21471" }, { "name": "JavaScript", "bytes": "339523" } ] }
package io.vitess.client; import com.google.gson.Gson; import com.google.gson.JsonSyntaxException; import com.google.gson.reflect.TypeToken; import io.vitess.proto.Query; import io.vitess.proto.Topodata.TabletType; import java.io.BufferedReader; import java.io.InputStreamReader; import java.lang.reflect.Type; import java.net.InetSocketAddress; import java.util.HashMap; import java.util.List; import java.util.Map; import org.apache.log4j.LogManager; import org.apache.log4j.Logger; import org.joda.time.Duration; import org.junit.Assert; import vttest.Vttest.VTTestTopology; public class TestUtil { static final Logger logger = LogManager.getLogger(TestUtil.class.getName()); public static final String PROPERTY_KEY_CLIENT_TEST_ENV = "vitess.client.testEnv"; public static final String PROPERTY_KEY_CLIENT_TEST_PORT = "vitess.client.testEnv.portName"; public static final String PROPERTY_KEY_CLIENT_FACTORY_CLASS = "vitess.client.factory"; /** * Setup MySQL, Vttablet and VtGate instances required for the tests. This uses the py/vttest * framework to start and stop instances. */ public static void setupTestEnv(TestEnv testEnv) throws Exception { List<String> command = testEnv.getSetupCommand(15000); logger.info("setup command: " + command.toString()); ProcessBuilder pb = new ProcessBuilder(command); pb.redirectErrorStream(true); Process p = pb.start(); BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream())); // Read the vtgate port from stdout as JSON with a "port" field. String line; while ((line = br.readLine()) != null) { logger.info("run_local_database: " + line); if (!line.startsWith("{")) { continue; } try { Type mapType = new TypeToken<Map<String, Object>>() {}.getType(); Map<String, Object> map = new Gson().fromJson(line, mapType); testEnv.setPythonScriptProcess(p); testEnv.setPort(((Double)map.get(System.getProperty(PROPERTY_KEY_CLIENT_TEST_PORT))).intValue()); return; } catch (JsonSyntaxException e) { logger.error("JsonSyntaxException parsing setup command output: " + line, e); } } Assert.fail("setup script failed to parse vtgate port"); } /** * Teardown the test instances, if any. 
*/ public static void teardownTestEnv(TestEnv testEnv) throws Exception { Process process = testEnv.getPythonScriptProcess(); if (process != null) { logger.info("sending empty line to run_local_database to stop test setup"); process.getOutputStream().write("\n".getBytes()); process.getOutputStream().flush(); process.waitFor(); testEnv.setPythonScriptProcess(null); } testEnv.clearTestOutput(); } public static TestEnv getTestEnv(String keyspace, VTTestTopology topology) { String testEnvClass = System.getProperty(PROPERTY_KEY_CLIENT_TEST_ENV); try { Class<?> clazz = Class.forName(testEnvClass); TestEnv env = (TestEnv) clazz.newInstance(); env.setKeyspace(keyspace); env.setTopology(topology); return env; } catch (ClassNotFoundException | IllegalAccessException | InstantiationException e) { throw new RuntimeException(e); } } public static RpcClientFactory getRpcClientFactory() { String rpcClientFactoryClass = System.getProperty(PROPERTY_KEY_CLIENT_FACTORY_CLASS); try { Class<?> clazz = Class.forName(rpcClientFactoryClass); return (RpcClientFactory) clazz.newInstance(); } catch (ClassNotFoundException | IllegalAccessException | InstantiationException e) { throw new RuntimeException(e); } } public static VTGateBlockingConn getBlockingConn(TestEnv testEnv) { // Dial timeout Context ctx = Context.getDefault().withDeadlineAfter(Duration.millis(5000)); return new VTGateBlockingConn( getRpcClientFactory().create(ctx, new InetSocketAddress("localhost", testEnv.getPort())), testEnv.getKeyspace()); } public static void insertRows(TestEnv testEnv, int startId, int count) throws Exception { try (VTGateBlockingConn conn = getBlockingConn(testEnv)) { // Deadline for the overall insert loop Context ctx = Context.getDefault().withDeadlineAfter(Duration.millis(5000)); VTGateBlockingTx tx = conn.begin(ctx); String insertSql = "insert into vtgate_test " + "(id, name, age, percent) values (:id, :name, :age, :percent)"; Map<String, Object> bindVars = new HashMap<>(); for (int id = startId; id - startId < count; id++) { bindVars.put("id", id); bindVars.put("name", "name_" + id); bindVars.put("age", id % 10); bindVars.put("percent", id / 100.0); tx.execute(ctx, insertSql, bindVars, TabletType.MASTER, Query.ExecuteOptions.IncludedFields.ALL); } tx.commit(ctx); } } }
{ "content_hash": "0438dcd1a82bacb4b81175e4bd0e1c86", "timestamp": "", "source": "github", "line_count": 124, "max_line_length": 105, "avg_line_length": 39.38709677419355, "alnum_prop": 0.6977886977886978, "repo_name": "theskyinflames/bpulse-go-client", "id": "9974f3ebe9d3766b21d590045179131c77725d93", "size": "4884", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "vendor/github.com/youtube/vitess/java/client/src/test/java/io/vitess/client/TestUtil.java", "mode": "33188", "license": "apache-2.0", "language": [ { "name": "Go", "bytes": "133726" }, { "name": "Shell", "bytes": "2415" } ] }
<?php //Configuration $hubspotApiKey = "REPLACE_WITH_YOUR_HUBSPOT_API_KEY"; $wordsToAvoid = array( "free", "help", "reminder", "cancelled", "Re:", "Fwd:", "Fw:" ); $phrasesToAvoid = array( "percent off" ); $charactersToAvoid = array( "%", "$", "!" ); ?> <!DOCTYPE html> <html> <head> <title>Email Subject Line Tester</title> </head> <body> <h1>Email Subject Line Tester</h1> <?php if(empty($_POST['subjectLine']) || empty($_POST['email'])){ ?> <form method="post" action=""> <input type="text" name="subjectLine" placeholder="Enter your subject line"><br> <input type="email" name="email" placeholder="Enter your email address"><br> <button type="submit">Test my subject line</button> </form> <?php } else{ $subjectLine = $_POST['subjectLine']; $email = $_POST['email']; //add email address to HubSpot $array = array( 'properties' => array( array( 'property' => 'email', 'value' => $email ) ) ); $json = json_encode($array); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "https://api.hubapi.com/contacts/v1/contact?hapikey=$hubspotApiKey"); curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'POST'); curl_setopt($ch, CURLOPT_POSTFIELDS, $json); curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json')); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); $response = curl_exec($ch); curl_close($ch); //check if subject line is 50 characters or less if(strlen($subjectLine) > 50){ echo " <h2>Make it shorter</h2> <p>Your subject line is longer than 50 characters. Try removing a few words to make it 50 characters or less.</p> "; } //check if the subject contains any words to avoid foreach($wordsToAvoid as $word){ if(strpos($subjectLine,$word) !== false){ echo " <h2>Remove the word \"$word\"</h2> <p>Try removing the word \"$word\" from your subject line to see if it improves your open rate.</p> "; } } //check if the subject contains any phrases to avoid foreach($phrasesToAvoid as $phrase){ if(strpos($subjectLine,$phrase) !== false){ echo " <h2>Remove the phrase \"$phrase\"</h2> <p>Try removing the phrase \"$phrase\" from your subject line to see if it improves your open rate.</p> "; } } //check if the subject contains any characters to avoid foreach($charactersToAvoid as $character){ if(strpos($subjectLine,$character) !== false){ echo " <h2>Remove the \"$character\" character</h2> <p>Try removing the \"$character\" character from your subject line to see if it improves your open rate.</p> "; } } //check for ALL CAPS words or phrases $subjectLineWithNoPunctuationOrNumbers = preg_replace("/[^A-Za-z ]/", "", $subjectLine); $wordsThatMakeUpSubjectLine = explode(" ", $subjectLineWithNoPunctuationOrNumbers); foreach ($wordsThatMakeUpSubjectLine as $word) { if (ctype_upper($word)) { echo " <h2>Use normal capitalization</h2> <p>Try to avoid using words and phrases in ALL CAPS to see if it improves your open rate.</p> "; } } //check for "you" or "your" if(strpos($subjectLine,"you") === false && strpos($subjectLine,"your") === false){ echo " <h2>Consider using \"you\" or \"your\"</h2> <p>Try adding \"you\" or \"your\" to see if it improves your open rate.</p> "; } //check for a number at the beginning $firstWordOfSubjectLine = strtok($subjectLine, " "); if(!is_numeric($firstWordOfSubjectLine)){ echo " <h2>Consider starting with a number</h2> <p>You could also try starting your subject line with a number. 
For example: \"10 tips that will help you...\"</p> "; } //check for a question mark if(strpos($subjectLine,"?") === false){ echo " <h2>Consider asking a question</h2> <p>You could also try asking a question and ending your subject line with a question mark (?) to see if it improves your open rate.</p> "; } //give a link to try another subject line echo "<p><strong><a href='" . $_SERVER['REQUEST_URI'] . "'>Test another subject line</a></strong></p>"; } ?> </body> </html>
{ "content_hash": "2d1951d368127930efb8873039eae966", "timestamp": "", "source": "github", "line_count": 162, "max_line_length": 141, "avg_line_length": 26.246913580246915, "alnum_prop": 0.6218250235183443, "repo_name": "reganstarr/subject-line-tester", "id": "e515636e0b51d82408571b891c1de18845b4876f", "size": "4252", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "index.php", "mode": "33188", "license": "mit", "language": [ { "name": "PHP", "bytes": "4252" } ] }
{% extends 'moj_template/base.html' %}
{% load staticfiles %}

{% block content %}
<div class="content inner cf">
    <header class="page-header group">
        <div>
            <h1>Thanks</h1>
        </div>
    </header>
    <p>Thanks a lot for your feedback. We will get back to you soon.</p>
    <p><a href="/">Back to the site</a></p>
</div>
{% endblock %}
{ "content_hash": "e2c31a77c536f97ee5fce7577e1cc1ac", "timestamp": "", "source": "github", "line_count": 18, "max_line_length": 68, "avg_line_length": 18.38888888888889, "alnum_prop": 0.6163141993957704, "repo_name": "ministryofjustice/open-data-platform", "id": "42c17d27d98328490f3b5421d8b3a8573887a3fe", "size": "331", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "feedback/templates/feedback/thanks.html", "mode": "33188", "license": "mit", "language": [ { "name": "JavaScript", "bytes": "35086" }, { "name": "Python", "bytes": "35158" } ] }
package com.google.android.apps.exposurenotification.logging;

import androidx.lifecycle.Lifecycle;
import androidx.lifecycle.LifecycleObserver;
import androidx.lifecycle.LifecycleOwner;
import androidx.lifecycle.OnLifecycleEvent;
import com.google.android.apps.exposurenotification.proto.UiInteraction.EventType;

/**
 * Lifecycle observer that logs APP_OPENED every time the app is moved
 * back into foreground. This does not record activity transitions between the app,
 * but covers all calls into the app from settings or an exposure notification.
 */
public class ApplicationObserver implements LifecycleObserver {

  private final AnalyticsLogger analyticsLogger;

  ApplicationObserver(AnalyticsLogger analyticsLogger) {
    this.analyticsLogger = analyticsLogger;
  }

  public void observeLifecycle(LifecycleOwner lifecycleOwner) {
    lifecycleOwner.getLifecycle().addObserver(this);
  }

  @OnLifecycleEvent(Lifecycle.Event.ON_START)
  void onForeground() {
    analyticsLogger.logUiInteraction(EventType.APP_OPENED);
  }
}
{ "content_hash": "72badd59ae38a4098cae23d1c88989c7", "timestamp": "", "source": "github", "line_count": 32, "max_line_length": 83, "avg_line_length": 32.46875, "alnum_prop": 0.8084696823869105, "repo_name": "google/exposure-notifications-android", "id": "05459dd0ee294a8ee42a04fab1e7a6971de1f2ba", "size": "1637", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "app/src/main/java/com/google/android/apps/exposurenotification/logging/ApplicationObserver.java", "mode": "33188", "license": "apache-2.0", "language": [ { "name": "Java", "bytes": "2492039" } ] }
<div class="umb-content-grid"> <div class="umb-content-grid__item umb-outline umb-outline--surrounding" ng-repeat="item in content" ng-class="{'-selected': item.selected}" ng-click="clickItem(item, $event, $index)"> <div class="umb-content-grid__content"> <a class="umb-content-grid__item-name umb-outline" ng-href="{{'#' + item.editPath}}" ng-click="clickItemName(item, $event, $index)" ng-class="{'-light': !item.published && item.updater != null}"> <umb-icon icon="{{item.icon}}" class="{{item.icon}} umb-content-grid__icon"></umb-icon> <span>{{item.name}}</span> </a> <ul class="umb-content-grid__details-list" ng-class="{'-light': !item.published && item.updater != null}"> <li class="umb-content-grid__details-item" ng-if="item.state"> <div class="umb-content-grid__details-label"><localize key="general_status"></localize>:</div> <div class="umb-content-grid__details-value"><umb-variant-state variant="item"></umb-variant-state></div> </li> <li class="umb-content-grid__details-item" ng-repeat="property in contentProperties"> <div class="umb-content-grid__details-label">{{ property.header }}:</div> <div class="umb-content-grid__details-value">{{ item[property.alias] }}</div> </li> </ul> </div> </div> <umb-empty-state ng-if="!content" position="center"> <localize key="content_noItemsToShow">There are no items to show</localize> </umb-empty-state> </div>
{ "content_hash": "5b228c6d9dba6cfea443ba07c56ad7cb", "timestamp": "", "source": "github", "line_count": 40, "max_line_length": 121, "avg_line_length": 41.2, "alnum_prop": 0.5776699029126213, "repo_name": "leekelleher/Umbraco-CMS", "id": "5f6fd2d4852b5742f20d97d94dcb017fc8556a44", "size": "1648", "binary": false, "copies": "1", "ref": "refs/heads/v8/contrib", "path": "src/Umbraco.Web.UI.Client/src/views/components/umb-content-grid.html", "mode": "33188", "license": "mit", "language": [ { "name": "ASP", "bytes": "484235" }, { "name": "Batchfile", "bytes": "16156" }, { "name": "C#", "bytes": "16505882" }, { "name": "CSS", "bytes": "676666" }, { "name": "HTML", "bytes": "776273" }, { "name": "JavaScript", "bytes": "4045587" }, { "name": "PowerShell", "bytes": "18034" }, { "name": "Python", "bytes": "876" }, { "name": "Ruby", "bytes": "765" }, { "name": "XSLT", "bytes": "50045" } ] }
require "spec_helper" module CC::Analyzer describe EnginesRunner do include FileSystemHelpers around do |test| within_temp_dir { test.call } end before do system("git init > /dev/null") end it "builds and runs enabled engines from the registry with the formatter" do config = config_with_engine("an_engine") registry = registry_with_engine("an_engine") formatter = null_formatter expect_engine_run("an_engine", "/code", formatter) EnginesRunner.new(registry, formatter, "/code", config).run end it "raises for no enabled engines" do config = double(engines: {}, exclude_paths: []) runner = EnginesRunner.new({}, null_formatter, "/code", config) expect { runner.run }.to raise_error(EnginesRunner::NoEnabledEngines) end describe "when the formatter does not respond to #close" do let(:config) { config_with_engine("an_engine") } let(:formatter) do formatter = double(started: nil, write: nil, run: nil, finished: nil) allow(formatter).to receive(:engine_running).and_yield formatter end let(:registry) { registry_with_engine("an_engine") } it "does not call #close" do expect_engine_run("an_engine", "/code", formatter) EnginesRunner.new(registry, formatter, "/code", config).run end end def registry_with_engine(name) { name => { "channels" => { "stable" => "codeclimate/codeclimate-#{name}" } } } end def config_with_engine(name) CC::Yaml.parse(<<-EOYAML) engines: #{name}: enabled: true EOYAML end def expect_engine_run(name, source_dir, formatter, engine_config = nil) engine = double(name: name) expect(engine).to receive(:run). with(formatter, kind_of(ContainerListener)) image = "codeclimate/codeclimate-#{name}" engine_config ||= { "enabled" => true, include_paths: ["./"] } expect(Engine).to receive(:new). and_return(engine) # with(name, { "image" => image }, source_dir, engine_config, anything). # and_return(engine) end def null_formatter formatter = double(started: nil, write: nil, run: nil, finished: nil, close: nil) allow(formatter).to receive(:engine_running).and_yield formatter end end end
{ "content_hash": "e86e15cc7ccb0127f652ce14e202f9b0", "timestamp": "", "source": "github", "line_count": 88, "max_line_length": 87, "avg_line_length": 27.795454545454547, "alnum_prop": 0.6022076860179886, "repo_name": "mrb/codeclimate", "id": "92f14dcd46e5c51b0047ab8388901d99e14b0a25", "size": "2446", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "spec/cc/analyzer/engines_runner_spec.rb", "mode": "33188", "license": "mit", "language": [ { "name": "HTML", "bytes": "2183" }, { "name": "Makefile", "bytes": "1228" }, { "name": "Ruby", "bytes": "198493" }, { "name": "Shell", "bytes": "4326" } ] }
class Providers: # CONSTRUCTOR def __init__(self, name, services, link): self.name = name self.services = services self.link = link # METHODS # require restful api def getInfo(self, selector): # unique selector if(len(selector) == 1): # get summary for service in self.services: print(service[1]) print(self.link.get(service[1]))
{ "content_hash": "01d902f1cf7859afa3cf7fc03d58a9a9", "timestamp": "", "source": "github", "line_count": 16, "max_line_length": 43, "avg_line_length": 24.25, "alnum_prop": 0.6211340206185567, "repo_name": "flowgunso/server_manager", "id": "29a44be31ce91d5ff20e9ece06aed768f0d33bfe", "size": "388", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "server_manager/providers.py", "mode": "33188", "license": "mit", "language": [ { "name": "Python", "bytes": "2306" } ] }
package com.goit.Lesson9; /** * Created by 1 on 27.05.2015. */ public class Person { public String name; public Person(String name){ this.name = name; } }
{ "content_hash": "039ef5894cedc9555f69e09113838837", "timestamp": "", "source": "github", "line_count": 12, "max_line_length": 31, "avg_line_length": 14.916666666666666, "alnum_prop": 0.6033519553072626, "repo_name": "viktozhu/java-beginners", "id": "cf7390b0943046ac33abd64c569c4baac82c2d24", "size": "179", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "src/com/goit/Lesson9/Person.java", "mode": "33188", "license": "apache-2.0", "language": [ { "name": "HTML", "bytes": "97" }, { "name": "Java", "bytes": "21461" } ] }
<!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> **Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* - [Changelog](#changelog) - [v1.2.1](#v121) - [Bug Fixes](#bug-fixes) - [v1.2.0](#v120) - [Enhancements](#enhancements) - [Bug Fixes](#bug-fixes-1) - [v1.1.0](#v110) - [Enhancements](#enhancements-1) - [Bug Fixes](#bug-fixes-2) - [v1.0.0](#v100) - [Incompatible Changes](#incompatible-changes) - [v0.8.1](#v081) - [Enhancements](#enhancements-2) - [v0.8.0](#v080) - [Enhancements](#enhancements-3) - [v0.7.1](#v071) - [Bug Fixes](#bug-fixes-3) - [v0.7.0](#v070) - [Bug Fixes](#bug-fixes-4) - [Incompatible Changes](#incompatible-changes-1) - [v0.6.0](#v060) - [Enhancements](#enhancements-4) - [v0.5.1](#v051) - [Bug Fixes](#bug-fixes-5) - [v0.5.0](#v050) - [Enhancements](#enhancements-5) - [Incompatible Changes](#incompatible-changes-2) - [v0.4.1](#v041) - [Bug Fixes](#bug-fixes-6) - [v0.4.0](#v040) - [Enhancements](#enhancements-6) - [Bug Fixes](#bug-fixes-7) - [Incompatible Changes](#incompatible-changes-3) - [v0.3.0](#v030) - [Enhancements](#enhancements-7) - [Bug Fixes](#bug-fixes-8) - [v0.2.3](#v023) - [Enhancements](#enhancements-8) - [Bug Fixes](#bug-fixes-9) - [Upgrading](#upgrading) - [v0.2.1](#v021) - [Bug Fixes](#bug-fixes-10) - [v0.2.0](#v020) - [Enhancements](#enhancements-9) - [Bug Fixes](#bug-fixes-11) - [Incompatible Changes](#incompatible-changes-4) - [v0.1.2](#v012) - [Enhancements](#enhancements-10) - [Bug Fixes](#bug-fixes-12) - [v0.1.1](#v011) - [Enhancements](#enhancements-11) - [Bug Fixes](#bug-fixes-13) - [Incompatible Changes](#incompatible-changes-5) <!-- END doctoc generated TOC please keep comment here to allow auto update --> # Changelog All significant changes in the project are documented here. ## v1.2.1 ### Bug Fixes * [#58](https://github.com/C-S-D/carrot_rpc/pull/58) - Don't convert all hash-like things to hashes: preserve `HashWithIndifferentAccess` after renaming keys with `rename_keys` refinement. - [@jeffutter](https://github.com/jeffutter) ## v1.2.0 ### Enhancements * [#57](https://github.com/C-S-D/carrot_rpc/pull/57) - Regression test to prevent rename_keys being called on `String` - [@KronicDeth](https://gitub.com/KronicDeth) ### Bug Fixes * [#57](https://github.com/C-S-D/carrot_rpc/pull/57) - Only rename key in values that are `Array`s or `Hash`es. - [@KronicDeth](https://gitub.com/KronicDeth) ## v1.1.0 ### Enhancements * [#55](https://github.com/C-S-D/carrot_rpc/pull/55) - [@shamil614](https://github.com/shamil614) * `ActiveSupport::Notifications` to enable universal metrics gathering * `client.SERVER_QUEUE.remote_call` will include `correlation_id` of request * `server.SERVER_QUEUE.consume` will include `correlation_id` of request * [#56](https://github.com/C-S-D/carrot_rpc/pull/56) - [@KronicDeth](https://gitub.com/KronicDeth) * Match client log tags to server log tags. * Client tags will start with `client` as server tags start with `server`. * When available, it will have `server_queue=SERVER_QUEUE_NAME`. * Finally, it will always have `correlation_id=CORRELATION_ID`. ### Bug Fixes * [#56](https://github.com/C-S-D/carrot_rpc/pull/56) - `client.SERVER_QUEUE_NAME.remote_call` `ActiveSupport::Notification` was using the server queue object instead of name, so it interpolated as `#<...>`. 
- [@KronicDeth](https://gitub.com/KronicDeth) ## v1.0.0 ### Incompatible Changes * [#48](https://github.com/C-S-D/carrot_rpc/pull/48) - Remove queue for correlation_id when RpcClient#wait_for_result raises an exception. -[@nward](https://github.com/nward) * [#50](https://github.com/C-S-D/carrot_rpc/pull/50) - Raise an exception for error responses to let consuming application handle response. -[@nward](https://github.com/nward) * [#52](https://github.com/C-S-D/carrot_rpc/pull/52) - Allow custom queue options to be set -[@nward](https://github.com/nward) * [#53](https://github.com/C-S-D/carrot_rpc/pull/53) - Rename keys for any hashes inside arrays. Fixes issue [#35](https://github.com/C-S-D/carrot_rpc/issues/35) -[@thewalkingtoast](https://github.com/thewalkingtoast) ## v0.8.1 ### Enhancements * Update to Ruby 2.2.6 and have tests run for Ruby 2.3 and 2.4. ## v0.8.0 ### Enhancements * Don't assume that Bunny already has a connection to RabbitMQ. * Attempt to start Bunny for the servers * This allows the implementing application to decide when to start the connection when using a forking web server ## v0.7.1 ### Bug Fixes * [#40](https://github.com/C-S-D/carrot_rpc/pull/41) - Deletes Queues immediately after the last consumer is unsubscribed. Reduces memory load. API remains the same. - [@shamil614](https://github.com/C-S-D/carrot_rpc/pull/34) ## v0.7.0 ### Bug Fixes * [#38](https://github.com/C-S-D/carrot_rpc/pull/38) - The `until quit` busy-wait loop consumes ~1 core for each instance of `carrot_rpc` as the default `sleep 0` does the minimal amount of sleep before waking up to check the boolean `quit`. I've replaced it with an `IO.pipe` and `IO.select` that does not consume any resources while it waits. **NOTE: A Queue could not be used here because the MRI VM blocks use of Mutexes inside signal handlers to prevent deadlocks because the Mutex code is not-reentrant (i.e signal-interrupt-safe). If a Queue is used the thread silently fails with an exception and the signal is ignored.** - [@KronicDeth](https://github.com/KronicDeth) ### Incompatible Changes * [#38](https://github.com/C-S-D/carrot_rpc/pull/38) - Removal of the busy-wait removes the `-s` (`--runloop_sleep`) option as it is no longer needed. - [@KronicDeth](https://github.com/KronicDeth) ## v0.6.0 ### Enhancements * [#34](https://github.com/C-S-D/carrot_rpc/pull/34) - `--server_test_mode` options for `carrot_rpc` sets `CarrotRpc.configuration.server_test_mode` to `true`. When `server_test_mode` is true, `_test` is appended to the queue name used by `CarrotRpc::RpcServer` and `CarrotRpc::RpcClient`, so that tests don't use the same queue as production or development. - [@shamil614](https://github.com/C-S-D/carrot_rpc/pull/34) * [#36](https://github.com/C-S-D/carrot_rpc/pull/36) - Request in thread-local variable, so it can be used for client request - [@KronicDeth](https://github.com/KronicDeth) * `carrot_rpc --thread-request VARIABLE` allows the request payload to be put in a Thread-local `VARIABLE, so that client that are invoked during an RPC server request can use parts of the request, most importantly, parts of "meta" in their own requests. This is needed to pass along ownership information for db_connection in Ecto 2.0. * Tag the log with the correlation_id, as it can be filtered for then. * Clarify whether a request or response is being published or received, so that server vs client logging can be distinguished. 
## v0.5.1 ### Bug Fixes * [#31](https://github.com/C-S-D/carrot_rpc/pull/31) - If the server does not respond to a method in the `request_message`, then return a "Method not found" JSONRPC 2.0 error instead of the server crashing with `NoMethodError` exception. - [@KronicDeth](https://github.com/KronicDeth) ## v0.5.0 ### Enhancements * [#25](https://github.com/C-S-D/carrot_rpc/pull/25) - [@shamil614](https://github.com/shamil614) * Timeout RpcClient requests when response is not received. * Default timeout is 5 seconds. * Timeout is configurable. * [#27](https://github.com/C-S-D/carrot_rpc/pull/27) - [@shamil614](https://github.com/shamil614) * Simplify RpcClient usage. * Each request which goes through `RpcClient.remote_request` ultimately needs to use a unique `reply_queue` on eqch request. * By closing the channel and opening a new channel on each request we ensure that cleanup takes place by the deletion of the `reply_queue`. * [#29](https://github.com/C-S-D/carrot_rpc/pull/29) - [@shamil614](https://github.com/shamil614) * Implementations of the RpcClient need to be flexible with the key formatter. * Formatting can be set globally via `Configuration`, overridden via passing Configuration object upon initializing client, or redefine `response_key_formatter` `request_key_formatter` methods. ### Incompatible Changes * [#27](https://github.com/C-S-D/carrot_rpc/pull/27) - [@shamil614](https://github.com/shamil614) * Calling `rpc_client.start` and `rpc_client.channel.close` are no longer required when calling `rpc_client.remote_call` or the methods that call it (`index` `create`, etc). * Calling `rpc_client.channel.close` after `rpc_client.remote_call` will cause an Exception to be raised as the channel is already closed. * [#29](https://github.com/C-S-D/carrot_rpc/pull/29) - [@shamil614](https://github.com/shamil614) * Replaced hard coded key formatter in place of a configurable option. * Need to set the following in config to maintain previous behavior ```ruby CarrotRpc.configure do |config| # RpcServers expect the params to be dashed. config.rpc_client_request_key_format = :dasherize # In most cases the RpcClient instances use JSONAPI::Resource classes and the keys need to be transformed. config.rpc_client_response_key_format = :underscore end ``` ## v0.4.1 ### Bug Fixes * [#23](https://githb.com/C-S-D/carrot_rpc/pull/23) - [@shamil614](https://github.com/shamil614) * Fixes errors for non-hash results being called with hash methods. * RPC client parses response to account for jsonrpc error object as well as jsonrpc result object. ## v0.4.0 ### Enhancements * [#20](https://githb.com/C-S-D/carrot_rpc/pull/20) - `config.before_request` may be set with a `#call(params) :: params` that is passed the `params` and returns altered `params` that are published to the queue. - [@shamil614](https://github.com/shamil614) ### Bug Fixes * [#19](https://githb.com/C-S-D/carrot_rpc/pull/19) - [@KronicDeth](http://github.com/kronicdeth) * Put JSONAPI errors documents into the JSONRPC error fields instead of returning as normal results as consumers, such as `Rpc.Generic.Client` are expecting all errors to be in JSONRPC's error field and not have to check if the non-error `result` contains a JSONAPI level error. This achieves parity with the behavior in the Elixir `Rpc.Generic.Server`. * Scrub JSONAPI error fields that are `nil` so they don't get transmitted as `null`. 
JSONAPI spec is quite clear that `null` columns shouldn't be transmitted except in the case of `null` data to signal a missing singleton resource. This achieves compatibility with the error parsing in `Rpc.Generic.Client` in Elixir. ### Incompatible Changes * [#20](https://githb.com/C-S-D/carrot_rpc/pull/20) - `base_url`, which must be implemented by any RPC server that `include CarrotRpc::RpcServer::JSONAPIResources`, changes from `base_url() :: String` to `base_url(JSONAPI::OperationResult, JSONAPI::Request) :: String` - [@shamil614](https://github.com/shamil614) ## v0.3.0 ### Enhancements * [#11](https://githb.com/C-S-D/carrot_rpc/pull/11) - Add CodeClimate badge to README - [@thewalkingtoast](https://github.com/thewalkingtoast) * [#13](https://githb.com/C-S-D/carrot_rpc/pull/13) - Document `queue_name` - [@shamil614](https://github.com/shamil614) * [#14](https://githb.com/C-S-D/carrot_rpc/pull/14) - Pass `rpc_request: true` in the `JSONAPI::Request` `context`, so resources can differentiate between API and RPC calls - [@shamil614](https://github.com/shamil614) ### Bug Fixes * [#12](https://githb.com/C-S-D/carrot_rpc/pull/12) - Pass `request` to `render_errors` when handling exceptions in `CarrotRpc::RpcServer::JSONAPIResources` - [@shamil614](https://github.com/shamil614) * [#15](https://githb.com/C-S-D/carrot_rpc/pull/15) - Fix argument error bug when passing block to `CarrotRpc::TaggedLog` methods by allowing either a message or a block like standard `Logger` interface - [@shamil614](https://github.com/shamil614) * [#17](https://githb.com/C-S-D/carrot_rpc/pull/17) - New rubocop versions add new cops or deprecate old config settings, so it is not safe to have `"rubocop"` without a version in the gemspec. - [@KronicDeth](http://github.com/kronicdeth) ## v0.2.3 ### Enhancements * [#9](https://github.com/C-S-D/carrot_rpc/pull/9) - [@KronicDeth](http://github.com/kronicdeth) * `CarrotRpc::RpcServer` subclasses can `include CarrotRpc::RpcServer::JSONAPIResources` to get [`JSONAPI::ActsAsResourceController`](https://github.com/cerebris/jsonapi-resources/blob/8e85d68dfbaf9181344c7618b0b29b4cfd362034/lib/jsonapi/acts_as_resource_controller.rb) helper methods for processing JSONAPI requests in server methods. * The primary entry point is `#process_request_params`, which expects an `ActionController::Parameters` (to do strong parameters) with `:action` set to the method name and `:controller` set to the name of the controller that corresponds to the `JSONAPI::Resource` subclass, such as `"api/v1/post"` to load `API::V1::PostResource`. * You need to define the following methods: * `base_url` * `resource_klass` * `CarrotRpc::RpcServer` subclasses, when including `CarrotRpc::Rpc::JSONAPIResources` can `extend CarrotRpc::Rpc::JSONAPIResources::Actions` to gain access to an `actions` DSL that takes a list of actions and defines methods that call `process_request_params` with the correct options. * You need to define the following methods: * `base_url` * `controller` * `resource_klass` ### Bug Fixes * [#9](https://github.com/C-S-D/carrot_rpc/pull/9) - [@KronicDeth](http://github.com/KronicDeth) * `CarrotRpc::Error` was moved from the incorrect `lib/carrot_rpc/rpc_server/error.rb` path to the correct `lib/carrot_rpc/error.rb` path. * `CarrotRpc::Error::Code` was moved from the incorrect `lib/carrot_rpc/rpc_server/error/code.rb` path to the correct `lib/carrot_rpc/error/code.rb` path. 
### Upgrading * [#9](https://github.com/C-S-D/carrot_rpc/pull/9) - [@KronicDeth](http://github.com/KronicDeth) * If you previously loaded `CarrotRpc::Error` directly with `require "carrot_rpc/rpc_server/error"` you now need to `require "carrot_rpc/error"`, which is the corrected path. `CarrotRpc::Error` is autoloaded, so you don't need to require it. * If you previously loaded `CarrotRpc::Error::Code` directly with `require "carrot_rpc/rpc_server/error/code"` you now need to `require "carrot_rpc/error/code"`, which is the corrected path. `CarrotRpc::Error::Code` is autoloaded, so you don't need to require it. ## v0.2.1 ### Bug Fixes * [#6](https://github.com/C-S-D/carrot_rpc/pull/6) - [@shamil614](https://github.com/shamil614) * Error class not loaded in RpcServer * RpcServer should not rename json keys * RpcClient dasherizes keys before serializing hash to json. Better conformity to json property naming conventions. * RpcClient underscores keys after receiving response from server. Better conformity to ruby naming conventions. * [#7](https://github.com/C-S-D/carrot_rpc/pull/7) - [@shamil614](https://github.com/shamil614) * Make sure hash keys are strings before renaming ## v0.2.0 ### Enhancements * [#5](https://github.com/C-S-D/carrot_rpc/pull/5) - [@KronicDeth](http://github.com/KronicDeth) * Gems ordered and documented in gemspec and `Gemfile` * Temorpary (`#`) files removed from git * Rubocop is enabled and used on CircleCI * Unused variables are prefixed with `_` * `fail` is used instead of `raise` when first raising an exception * Remove usage of deprecated methods * Print error if `byebug` can't be loaded instead of failing silently, but CLI still starts * Stop shadowing outer local variables in blocks * Remove unused assignments * Set and enforce max line length to 120 * Use `find` instead of `select {}.first` for better performance * `queue_name` will retrieve the current queue name while `queue_name(new_name)` will set it. * Align hashes * Align parameters * Favor symbolic `&&` over `and`. (They have different precedence too) * Remove block comments * Assign to variable outside conditionals instead of on each branch * Remove extra empty lines * Don't favor guard clauses as they prevent break pointing the body and guard separately and obscure bodies that don't have code coverage. * Remove extra spacing * Correct indentation * Freeze `CarrotRpc::VERSION` so it is immutable * Use `until` instead of negated `while` * Use `_` to separate digits in large numerals * Use `( )` for sigils * Remove redundant `self.` for method calls * Use `%r{}` instead of `//` for regexps * Use newlines instead of `;` * Add spacing around blocks and braces * Enforce double quotes for all strings as double quotes work for strings in both Ruby and Elixir. (Single quotes are for Char Lists in Elixir) * Use `&:<method>` instead of calling a non-args method in blocks * Use `attr_reader` instead of trivial accessor methods * Remove unneed interpolation * Use double quotes instead of `%q` * Use `%w` for word arrays * Extract methods to lower to AbcSize metric and Method Length * Extract classes and modules to lower Class Length * Use `const_get` and `constantize` instead of security risk `eval` * Enable all RSpec 3 recommended options * Fix order-dependency of specs. * Use `autoload` to delay loading * Use compact class and module children to prevent parent from being missed when loading. 
* Add `rake spec` * Add Luke Imhoff as an author * Set gem home page to this repository * Semantic block delimiters, so we always think about procedural vs functional blocks to make Elixir coding easier. ### Bug Fixes * [#5](https://github.com/C-S-D/carrot_rpc/pull/5) - [@KronicDeth](http://github.com/KronicDeth) * `ClientServer::ClassMethods` has been moved under `CarrotRpc` namespace as `CarrotRpc::ClientServer` * `HashExtensions` has been moved under `CarrotRpc` namespace as `CarrotRpc::HashExtensions` ### Incompatible Changes * [#5](https://github.com/C-S-D/carrot_rpc/pull/5) - [@KronicDeth](http://github.com/KronicDeth) * `ClientServer::ClassMethods` renamed to `CarrotRpc::ClientServer` * `HashExtensions` renamed to `CarrotRpc::HashExtensions` * `ClientServer::ClassMethods#get_queue_name` renamed to `CarrotRpc::ClientServer#queue_name()` (no args is the reader, one argument is the writer) ## v0.1.2 ### Enhancements * [#4](https://github.com/C-S-D/carrot_rpc/pull1) - [@shamil614](https://github.com/shamil614) * Rename the keys in the parsed payload from '-' to '_' * Added integration specs to test functionality * Logging to test.log file * Setup for circleci integration tests to rabbitmq ### Bug Fixes * [#4](https://github.com/C-S-D/carrot_rpc/pull1) - [@shamil614](https://github.com/shamil614) * Some require statements not properly loading modules * Consistent use of require vs require_relative ## v0.1.1 ### Enhancements * [#1](https://github.com/C-S-D/carrot_rpc/pull1) - [@shamil614](https://github.com/shamil614) * `CarrotRpc.configuration.bunny` can be set to custom [`Bunny` instance](http://www.rubydoc.info/gems/bunny/Bunny#new-class_method). * `CarrotRpc::RpcClient` and `CarrotRpc::RpcServer` subclasses can set their queue name with the `queue_name` class method. (It can be retrieved with `get_queue_name`. * `carrot_rpc`'s `--autoload_rails` boolean flag determines whether to load Rails environment. The Rails path is assumed to the be the current working directory. * If a `CarrotRpc::RpcServer` method invoked from a JSON RPC `:method` raises an `CarrotRpc::Error`, then that error is converted to a JSON RPC error and sent back to the client. ### Bug Fixes * [#1](https://github.com/C-S-D/carrot_rpc/pull/1) - [@shamil614](https://github.com/shamil614) * Send `jsonrpc` key instead of incorrect `json_rpc` key in JSON RPC response messages * All files under `bin` are marked as gem executables instead of just `carrot_rpc` * Fix files not loading properly when using `carrot_rpc` * Fix bug in logger file setup * The logger for each `CarrotRpc::RpcServer` is set before the server is started in `CarrotRpc::ServerRunner#run_servers` to prevent a race condition where `#start` may try to use the logger. ### Incompatible Changes * [#1](https://github.com/C-S-D/carrot_rpc/pull/1) - [@shamil614](https://github.com/shamil614) * `CarrotRpc.configuration.bunny` **MUST** be set to a [`Bunny` instance](http://www.rubydoc.info/gems/bunny/Bunny#new-class_method), usually using `Bunny.new`. * `CarrotRpc::RpcClient` and `CarrotRpc::RpcServer` subclasses **MUST** set their queue name with the `queue_name` class method. * `:channel` keyword argument is no longer accepted in `CarrotRpc::RpcClient.new`. The channel had already been created from the `config.bunny.create_channel`, so the keyword argument was unused. * `CarrotRpc::RpcClient#logger` is now read-only and is set from `config.logger`. * `CarrotRpc::RpcServer#logger` is now read-only and is set from `config.logger`. 
* `CarrotRpc.configuration.logger` is set to the `CarrotRpc::ServerServer#logger`. * `carrot_rpc`'s `--rails_path PATH` flag has been replaced with `--autoload_rails` boolean flag that automatically assumes the Rails path is the current working directory. * `CarrotRpc.connfiguration.rails_path` no longer exists. The Rails path is assumed to be the current working directory.
{ "content_hash": "75752df7cea97be99c893627871b52a2", "timestamp": "", "source": "github", "line_count": 355, "max_line_length": 678, "avg_line_length": 61.16056338028169, "alnum_prop": 0.714719970523213, "repo_name": "C-S-D/carrot_rpc", "id": "b0686abdba6ae52d5f5ab97c46326c627160cbf8", "size": "21712", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "CHANGELOG.md", "mode": "33188", "license": "mit", "language": [ { "name": "Ruby", "bytes": "89744" }, { "name": "Shell", "bytes": "115" } ] }
package core.userDefinedTask.manualBuild.steps; import argo.jdom.JsonNode; import core.controller.Core; import core.userDefinedTask.manualBuild.ManuallyBuildStep; import utilities.KeyEventCodeToString; public class KeyboardReleaseKeyStep extends ManuallyBuildStep { private int key; public static KeyboardReleaseKeyStep of(int key) { KeyboardReleaseKeyStep result = new KeyboardReleaseKeyStep(); result.key = key; return result; } @Override public void execute(Core controller) throws InterruptedException { controller.keyBoard().press(key); } @Override public String getDisplayString() { return String.format("release key %s", KeyEventCodeToString.codeToString(key).toUpperCase()); } public static KeyboardReleaseKeyStep parseJSON(JsonNode node) { KeyboardReleaseKeyStep result = new KeyboardReleaseKeyStep(); result.parse(node); return result; } @Override public String getJsonSignature() { return "keyboard_release_key"; } }
{ "content_hash": "0f1003660b0637e37deca66eb8618d70", "timestamp": "", "source": "github", "line_count": 38, "max_line_length": 95, "avg_line_length": 25.473684210526315, "alnum_prop": 0.7871900826446281, "repo_name": "repeats/Repeat", "id": "d4cc00d1f6cf15aad8ae8b6902bbccfb23f0d67a", "size": "968", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "src/core/userDefinedTask/manualBuild/steps/KeyboardReleaseKeyStep.java", "mode": "33188", "license": "apache-2.0", "language": [ { "name": "C#", "bytes": "910" }, { "name": "CSS", "bytes": "110441" }, { "name": "Java", "bytes": "1067303" }, { "name": "JavaScript", "bytes": "86809" }, { "name": "Python", "bytes": "24667" }, { "name": "SCSS", "bytes": "83382" } ] }
define(['exports', 'module', 'react', 'classnames', './BootstrapMixin', './FadeMixin', './utils/CustomPropTypes'], function (exports, module, _react, _classnames, _BootstrapMixin, _FadeMixin, _utilsCustomPropTypes) { 'use strict'; var _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; }; function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; } function _defineProperty(obj, key, value) { return Object.defineProperty(obj, key, { value: value, enumerable: true, configurable: true, writable: true }); } /* eslint-disable react/no-multi-comp */ var _React = _interopRequireDefault(_react); var _classNames = _interopRequireDefault(_classnames); var _BootstrapMixin2 = _interopRequireDefault(_BootstrapMixin); var _FadeMixin2 = _interopRequireDefault(_FadeMixin); var _CustomPropTypes = _interopRequireDefault(_utilsCustomPropTypes); console.warn('This file is deprecated, and will be removed in v0.24.0. Use react-bootstrap.js or react-bootstrap.min.js instead.'); console.warn('You can read more about it at https://github.com/react-bootstrap/react-bootstrap/issues/693'); var Tooltip = _React['default'].createClass({ displayName: 'Tooltip', mixins: [_BootstrapMixin2['default'], _FadeMixin2['default']], propTypes: { /** * An html id attribute, necessary for accessibility * @type {string} * @required */ id: _CustomPropTypes['default'].isRequiredForA11y(_React['default'].PropTypes.string), /** * Sets the direction the Tooltip is positioned towards. */ placement: _React['default'].PropTypes.oneOf(['top', 'right', 'bottom', 'left']), /** * The "left" position value for the Tooltip. */ positionLeft: _React['default'].PropTypes.number, /** * The "top" position value for the Tooltip. */ positionTop: _React['default'].PropTypes.number, /** * The "left" position value for the Tooltip arrow. */ arrowOffsetLeft: _React['default'].PropTypes.oneOfType([_React['default'].PropTypes.number, _React['default'].PropTypes.string]), /** * The "top" position value for the Tooltip arrow. */ arrowOffsetTop: _React['default'].PropTypes.oneOfType([_React['default'].PropTypes.number, _React['default'].PropTypes.string]), /** * Title text */ title: _React['default'].PropTypes.node, /** * Specify whether the Tooltip should be use show and hide animations. 
*/ animation: _React['default'].PropTypes.bool }, getDefaultProps: function getDefaultProps() { return { placement: 'right', animation: true }; }, render: function render() { var _classes; var classes = (_classes = { 'tooltip': true }, _defineProperty(_classes, this.props.placement, true), _defineProperty(_classes, 'in', !this.props.animation && (this.props.positionLeft != null || this.props.positionTop != null)), _defineProperty(_classes, 'fade', this.props.animation), _classes); var style = { 'left': this.props.positionLeft, 'top': this.props.positionTop }; var arrowStyle = { 'left': this.props.arrowOffsetLeft, 'top': this.props.arrowOffsetTop }; return _React['default'].createElement( 'div', _extends({ role: 'tooltip' }, this.props, { className: (0, _classNames['default'])(this.props.className, classes), style: style }), _React['default'].createElement('div', { className: 'tooltip-arrow', style: arrowStyle }), _React['default'].createElement( 'div', { className: 'tooltip-inner' }, this.props.children ) ); } }); module.exports = Tooltip; }); // in class will be added by the FadeMixin when the animation property is true
{ "content_hash": "f21821b65164a6febab3d4f3f32f94d3", "timestamp": "", "source": "github", "line_count": 109, "max_line_length": 259, "avg_line_length": 37.88073394495413, "alnum_prop": 0.6321143133930733, "repo_name": "UMNLibraries/ikidowinan", "id": "e3216ffab43bc24a12702c8a1e0947581ce9c805", "size": "4129", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "vendor/assets/components/react-bootstrap/lib/Tooltip.js", "mode": "33188", "license": "mit", "language": [ { "name": "CSS", "bytes": "22806" }, { "name": "CoffeeScript", "bytes": "261" }, { "name": "Dockerfile", "bytes": "1558" }, { "name": "HTML", "bytes": "161411" }, { "name": "JavaScript", "bytes": "71211" }, { "name": "Ruby", "bytes": "499120" } ] }
begin; drop table if exists ncaa_pbp.play_by_play; create table ncaa_pbp.play_by_play ( game_id integer, period_id integer, event_id integer, time text, score text, -- time interval, team_player text, team_event text, team_text text, team_score text, --integer, opponent_score text, --integer, opponent_player text, opponent_event text, opponent_text text, extra text ); --truncate table ncaa_pbp.play_by_play; copy ncaa_pbp.play_by_play from '/tmp/ncaa_games_play_by_play.tsv' with delimiter as E'\t' csv; -- header; /* delete from ncaa_pbp.pbp where game_id=1380752; alter table ncaa_pbp.pbp alter column time type interval using time::interval; alter table ncaa_pbp.play_by_play add column id integer; create temporary table reorder ( game_id integer, period integer, event_id integer, id serial, primary key (game_id,period,event_id) ); insert into reorder (game_id,period,event_id) ( select game_id,period,event_id from ncaa_pbp.pbp order by game_id asc,period asc,time desc, (case when coalesce(team_event,opponent_event)='Enters Games' then 1 when coalesce(team_event,opponent_event)='Leaves Games' then 3 else 2 end) asc ); update ncaa_pbp.pbp set id=r.id from reorder r where (r.game_id,r.period,r.event_id)=(pbp.game_id,pbp.period,pbp.event_id); create index on ncaa_pbp.pbp (id); --alter table ncaa.games add column game_id serial primary key; --update ncaa.games --set game_length = trim(both ' -' from game_length); */ commit;
{ "content_hash": "e91b2f9ac5e09605076e8d4c5580760c", "timestamp": "", "source": "github", "line_count": 69, "max_line_length": 95, "avg_line_length": 24.115942028985508, "alnum_prop": 0.6508413461538461, "repo_name": "octonion/volleyball-w", "id": "a8e2fea3756395258a82047c381b9e9068615e17", "size": "1664", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "ncaa_pbp/loaders_tsv/load_ncaa_games_play_by_play.sql", "mode": "33188", "license": "mit", "language": [ { "name": "PLpgSQL", "bytes": "128172" }, { "name": "Python", "bytes": "14935" }, { "name": "R", "bytes": "15222" }, { "name": "Ruby", "bytes": "123080" }, { "name": "Shell", "bytes": "11639" } ] }
<?php namespace Dan\PluginBundle\Plugin; class PluginManager { private $logger; private $plugins; public function __construct() { $this->plugins = array(); } public function setLogger($logger) { $this->logger = $logger; } public function addPlugin(AbstractPlugin $plugin) { $this->log('added '.$plugin->getCode()); $this->plugins[$plugin->getOrder()][] = $plugin; } public function getPlugins() { $result = array(); foreach($this->plugins as $order => $plugins) { $result = $result + $plugins; } return $result; } private function log($message, $param=array()) { if ($this->logger) { $this->logger->info('[PLUGIN] '.$message, $param); } } }
{ "content_hash": "b569bbbd06e718249e819714450a7c81", "timestamp": "", "source": "github", "line_count": 40, "max_line_length": 62, "avg_line_length": 20.45, "alnum_prop": 0.5354523227383863, "repo_name": "danielsan80/blab_old", "id": "3bea136b64e18694770643e9aba6206828eb7496", "size": "818", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "src/Dan/PluginBundle/Plugin/PluginManager.php", "mode": "33188", "license": "mit", "language": [ { "name": "ApacheConf", "bytes": "2907" }, { "name": "CSS", "bytes": "602843" }, { "name": "HTML", "bytes": "906715" }, { "name": "JavaScript", "bytes": "940311" }, { "name": "Makefile", "bytes": "3999" }, { "name": "PHP", "bytes": "370045" } ] }
package com.nian.firstproject.server.common; import java.util.Comparator; public class PatientComparator implements Comparator<Patient> { @Override public int compare(Patient p0, Patient p1) { if (p0.time - p1.time > 0) { return 1; } else if (p0.time - p1.time < 0) { return -1; } else { return 0; } } }
{ "content_hash": "74a1cd5e2832b559a1e046b7994b71c8", "timestamp": "", "source": "github", "line_count": 20, "max_line_length": 63, "avg_line_length": 17.5, "alnum_prop": 0.6228571428571429, "repo_name": "jhtorch/prognosticsys", "id": "87d2f898e6dc333403b01476fa2294bf2ecc7d61", "size": "350", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "src/src/com/nian/firstproject/server/common/PatientComparator.java", "mode": "33188", "license": "apache-2.0", "language": [ { "name": "CSS", "bytes": "73159" }, { "name": "Java", "bytes": "189789" } ] }
import { ChangeDetectionStrategy, Component, Input } from '@angular/core'; import { Application } from 'app/model/application.model'; import { Environment } from 'app/model/environment.model'; import { Pipeline } from 'app/model/pipeline.model'; import { Project } from 'app/model/project.model'; import { Workflow } from 'app/model/workflow.model'; @Component({ selector: 'app-usage', templateUrl: './usage.component.html', styleUrls: ['./usage.component.scss'], changeDetection: ChangeDetectionStrategy.OnPush }) export class UsageComponent { @Input() project: Project; @Input() workflows: Array<Workflow>; @Input() applications: Array<Application>; @Input() pipelines: Array<Pipeline>; @Input() environments: Array<Environment>; constructor() { } }
{ "content_hash": "73fe25e263546ae956c224b6f6435fc0", "timestamp": "", "source": "github", "line_count": 23, "max_line_length": 74, "avg_line_length": 34.56521739130435, "alnum_prop": 0.7081761006289308, "repo_name": "ovh/cds", "id": "1d6a73188d262a7a0129b16c4b9b3fe64195757f", "size": "795", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "ui/src/app/shared/usage/usage.component.ts", "mode": "33188", "license": "bsd-3-clause", "language": [ { "name": "Dockerfile", "bytes": "1616" }, { "name": "Go", "bytes": "7822995" }, { "name": "HTML", "bytes": "594997" }, { "name": "JavaScript", "bytes": "47672" }, { "name": "Less", "bytes": "793" }, { "name": "Makefile", "bytes": "79754" }, { "name": "PLpgSQL", "bytes": "38853" }, { "name": "SCSS", "bytes": "114372" }, { "name": "Shell", "bytes": "14838" }, { "name": "TypeScript", "bytes": "1760477" } ] }
<?php /** * Created by PhpStorm. * User: yunsong * Date: 16-5-16 * Time: 下午3:03 */ namespace yunsong\search; use yunsong\search\interfaces\Engine; class Elastic implements Engine { public function query($q) { // TODO: Implement query() method. } public function indexAdd($params) { // TODO: Implement indexAdd() method. } public function indexUpdate($params) { // TODO: Implement indexUpdate() method. } public function indexDelete($params) { // TODO: Implement indexDelete() method. } }
{ "content_hash": "5786ed294a4b3e52864fa532b82e10ab", "timestamp": "", "source": "github", "line_count": 36, "max_line_length": 48, "avg_line_length": 16.11111111111111, "alnum_prop": 0.6068965517241379, "repo_name": "awebc/web_yi", "id": "97ea3d5e435f3405e6a301f467116089f8b9f7e7", "size": "584", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "yunsong/search/Elastic.php", "mode": "33261", "license": "bsd-3-clause", "language": [ { "name": "ApacheConf", "bytes": "138" }, { "name": "Batchfile", "bytes": "1541" }, { "name": "CSS", "bytes": "804515" }, { "name": "Groff", "bytes": "233777" }, { "name": "HTML", "bytes": "221365" }, { "name": "JavaScript", "bytes": "755518" }, { "name": "PHP", "bytes": "2153812" }, { "name": "Shell", "bytes": "113" } ] }
using content::BrowserContext; using content::BrowserThread; namespace { // Shorter names for fileapi::* constants. const fileapi::FileSystemType kTemporary = fileapi::kFileSystemTypeTemporary; const fileapi::FileSystemType kPersistent = fileapi::kFileSystemTypePersistent; // We'll use these three distinct origins for testing, both as strings and as // GURLs in appropriate contexts. const char kTestOrigin1[] = "http://host1:1/"; const char kTestOrigin2[] = "http://host2:2/"; const char kTestOrigin3[] = "http://host3:3/"; const GURL kOrigin1(kTestOrigin1); const GURL kOrigin2(kTestOrigin2); const GURL kOrigin3(kTestOrigin3); // TODO(mkwst): Update this size once the discussion in http://crbug.com/86114 // is concluded. const int kEmptyFileSystemSize = 0; typedef std::list<BrowsingDataFileSystemHelper::FileSystemInfo> FileSystemInfoList; typedef scoped_ptr<FileSystemInfoList> ScopedFileSystemInfoList; // The FileSystem APIs are all asynchronous; this testing class wraps up the // boilerplate code necessary to deal with waiting for responses. In a nutshell, // any async call whose response we want to test ought to be followed by a call // to BlockUntilNotified(), which will (shockingly!) block until Notify() is // called. For this to work, you'll need to ensure that each async call is // implemented as a class method that that calls Notify() at an appropriate // point. class BrowsingDataFileSystemHelperTest : public testing::Test { public: BrowsingDataFileSystemHelperTest() : ui_thread_(BrowserThread::UI, &message_loop_), db_thread_(BrowserThread::DB, &message_loop_), webkit_thread_(BrowserThread::WEBKIT_DEPRECATED, &message_loop_), file_thread_(BrowserThread::FILE, &message_loop_), file_user_blocking_thread_( BrowserThread::FILE_USER_BLOCKING, &message_loop_), io_thread_(BrowserThread::IO, &message_loop_) { profile_.reset(new TestingProfile()); helper_ = BrowsingDataFileSystemHelper::Create(profile_.get()); canned_helper_ = new CannedBrowsingDataFileSystemHelper(profile_.get()); } virtual ~BrowsingDataFileSystemHelperTest() { // Avoid memory leaks. profile_.reset(); message_loop_.RunAllPending(); } TestingProfile* GetProfile() { return profile_.get(); } // Blocks on the current MessageLoop until Notify() is called. void BlockUntilNotified() { MessageLoop::current()->Run(); } // Unblocks the current MessageLoop. Should be called in response to some sort // of async activity in a callback method. void Notify() { MessageLoop::current()->Quit(); } // Callback that should be executed in response to // fileapi::SandboxMountPointProvider::ValidateFileSystemRoot void ValidateFileSystemCallback(base::PlatformFileError error) { validate_file_system_result_ = error; Notify(); } // Calls fileapi::SandboxMountPointProvider::ValidateFileSystemRootAndGetURL // to verify the existence of a file system for a specified type and origin, // blocks until a response is available, then returns the result // synchronously to it's caller. bool FileSystemContainsOriginAndType(const GURL& origin, fileapi::FileSystemType type) { sandbox_->ValidateFileSystemRoot( origin, type, false, base::Bind( &BrowsingDataFileSystemHelperTest::ValidateFileSystemCallback, base::Unretained(this))); BlockUntilNotified(); return validate_file_system_result_ == base::PLATFORM_FILE_OK; } // Callback that should be executed in response to StartFetching(), and stores // found file systems locally so that they are available via GetFileSystems(). 
void CallbackStartFetching( const std::list<BrowsingDataFileSystemHelper::FileSystemInfo>& file_system_info_list) { file_system_info_list_.reset( new std::list<BrowsingDataFileSystemHelper::FileSystemInfo>( file_system_info_list)); Notify(); } // Calls StartFetching() on the test's BrowsingDataFileSystemHelper // object, then blocks until the callback is executed. void FetchFileSystems() { helper_->StartFetching( base::Bind(&BrowsingDataFileSystemHelperTest::CallbackStartFetching, base::Unretained(this))); BlockUntilNotified(); } // Calls StartFetching() on the test's CannedBrowsingDataFileSystemHelper // object, then blocks until the callback is executed. void FetchCannedFileSystems() { canned_helper_->StartFetching( base::Bind(&BrowsingDataFileSystemHelperTest::CallbackStartFetching, base::Unretained(this))); BlockUntilNotified(); } // Sets up kOrigin1 with a temporary file system, kOrigin2 with a persistent // file system, and kOrigin3 with both. virtual void PopulateTestFileSystemData() { sandbox_ = BrowserContext::GetFileSystemContext(profile_.get())-> sandbox_provider(); CreateDirectoryForOriginAndType(kOrigin1, kTemporary); CreateDirectoryForOriginAndType(kOrigin2, kPersistent); CreateDirectoryForOriginAndType(kOrigin3, kTemporary); CreateDirectoryForOriginAndType(kOrigin3, kPersistent); EXPECT_FALSE(FileSystemContainsOriginAndType(kOrigin1, kPersistent)); EXPECT_TRUE(FileSystemContainsOriginAndType(kOrigin1, kTemporary)); EXPECT_TRUE(FileSystemContainsOriginAndType(kOrigin2, kPersistent)); EXPECT_FALSE(FileSystemContainsOriginAndType(kOrigin2, kTemporary)); EXPECT_TRUE(FileSystemContainsOriginAndType(kOrigin3, kPersistent)); EXPECT_TRUE(FileSystemContainsOriginAndType(kOrigin3, kTemporary)); } // Uses the fileapi methods to create a filesystem of a given type for a // specified origin. void CreateDirectoryForOriginAndType(const GURL& origin, fileapi::FileSystemType type) { FilePath target = sandbox_->GetFileSystemRootPathOnFileThread( origin, type, FilePath(), true); EXPECT_TRUE(file_util::DirectoryExists(target)); } // Returns a list of the FileSystemInfo objects gathered in the most recent // call to StartFetching(). FileSystemInfoList* GetFileSystems() { return file_system_info_list_.get(); } // Temporary storage to pass information back from callbacks. base::PlatformFileError validate_file_system_result_; ScopedFileSystemInfoList file_system_info_list_; scoped_refptr<BrowsingDataFileSystemHelper> helper_; scoped_refptr<CannedBrowsingDataFileSystemHelper> canned_helper_; private: // message_loop_, as well as all the threads associated with it must be // defined before profile_ to prevent explosions. The threads also must be // defined in the order they're listed here. Oh how I love C++. MessageLoopForUI message_loop_; content::TestBrowserThread ui_thread_; content::TestBrowserThread db_thread_; content::TestBrowserThread webkit_thread_; content::TestBrowserThread file_thread_; content::TestBrowserThread file_user_blocking_thread_; content::TestBrowserThread io_thread_; scoped_ptr<TestingProfile> profile_; // We don't own this pointer: don't delete it. fileapi::SandboxMountPointProvider* sandbox_; DISALLOW_COPY_AND_ASSIGN(BrowsingDataFileSystemHelperTest); }; // Verifies that the BrowsingDataFileSystemHelper correctly finds the test file // system data, and that each file system returned contains the expected data. 
TEST_F(BrowsingDataFileSystemHelperTest, FetchData) { PopulateTestFileSystemData(); FetchFileSystems(); EXPECT_EQ(3UL, file_system_info_list_->size()); // Order is arbitrary, verify all three origins. bool test_hosts_found[3] = {false, false, false}; for (std::list<BrowsingDataFileSystemHelper::FileSystemInfo>::iterator info = file_system_info_list_->begin(); info != file_system_info_list_->end(); ++info) { if (info->origin == kOrigin1) { EXPECT_FALSE(test_hosts_found[0]); test_hosts_found[0] = true; EXPECT_FALSE(info->has_persistent); EXPECT_TRUE(info->has_temporary); EXPECT_EQ(0, info->usage_persistent); EXPECT_EQ(kEmptyFileSystemSize, info->usage_temporary); } else if (info->origin == kOrigin2) { EXPECT_FALSE(test_hosts_found[1]); test_hosts_found[1] = true; EXPECT_TRUE(info->has_persistent); EXPECT_FALSE(info->has_temporary); EXPECT_EQ(kEmptyFileSystemSize, info->usage_persistent); EXPECT_EQ(0, info->usage_temporary); } else if (info->origin == kOrigin3) { EXPECT_FALSE(test_hosts_found[2]); test_hosts_found[2] = true; EXPECT_TRUE(info->has_persistent); EXPECT_TRUE(info->has_temporary); EXPECT_EQ(kEmptyFileSystemSize, info->usage_persistent); EXPECT_EQ(kEmptyFileSystemSize, info->usage_temporary); } else { ADD_FAILURE() << info->origin.spec() << " isn't an origin we added."; } } for (size_t i = 0; i < arraysize(test_hosts_found); i++) { EXPECT_TRUE(test_hosts_found[i]); } } // Verifies that the BrowsingDataFileSystemHelper correctly deletes file // systems via DeleteFileSystemOrigin(). TEST_F(BrowsingDataFileSystemHelperTest, DeleteData) { PopulateTestFileSystemData(); helper_->DeleteFileSystemOrigin(kOrigin1); helper_->DeleteFileSystemOrigin(kOrigin2); FetchFileSystems(); EXPECT_EQ(1UL, file_system_info_list_->size()); BrowsingDataFileSystemHelper::FileSystemInfo info = *(file_system_info_list_->begin()); EXPECT_EQ(kOrigin3, info.origin); EXPECT_TRUE(info.has_persistent); EXPECT_TRUE(info.has_temporary); EXPECT_EQ(kEmptyFileSystemSize, info.usage_persistent); EXPECT_EQ(kEmptyFileSystemSize, info.usage_temporary); } // Verifies that the CannedBrowsingDataFileSystemHelper correctly reports // whether or not it currently contains file systems. TEST_F(BrowsingDataFileSystemHelperTest, Empty) { ASSERT_TRUE(canned_helper_->empty()); canned_helper_->AddFileSystem(kOrigin1, kTemporary, 0); ASSERT_FALSE(canned_helper_->empty()); canned_helper_->Reset(); ASSERT_TRUE(canned_helper_->empty()); } // Verifies that AddFileSystem correctly adds file systems, and that both // the type and usage metadata are reported as provided. TEST_F(BrowsingDataFileSystemHelperTest, CannedAddFileSystem) { canned_helper_->AddFileSystem(kOrigin1, kPersistent, 200); canned_helper_->AddFileSystem(kOrigin2, kTemporary, 100); FetchCannedFileSystems(); EXPECT_EQ(2U, file_system_info_list_->size()); std::list<BrowsingDataFileSystemHelper::FileSystemInfo>::iterator info = file_system_info_list_->begin(); EXPECT_EQ(kOrigin1, info->origin); EXPECT_TRUE(info->has_persistent); EXPECT_FALSE(info->has_temporary); EXPECT_EQ(200, info->usage_persistent); EXPECT_EQ(0, info->usage_temporary); info++; EXPECT_EQ(kOrigin2, info->origin); EXPECT_FALSE(info->has_persistent); EXPECT_TRUE(info->has_temporary); EXPECT_EQ(0, info->usage_persistent); EXPECT_EQ(100, info->usage_temporary); } } // namespace
{ "content_hash": "9917c461952ea31dec0c9fc51ada37db", "timestamp": "", "source": "github", "line_count": 281, "max_line_length": 80, "avg_line_length": 39.270462633451956, "alnum_prop": 0.7230629814227458, "repo_name": "rogerwang/chromium", "id": "8ed961bb360f1a0429f90380ec28d2d4c4e387cc", "size": "11796", "binary": false, "copies": "2", "ref": "refs/heads/node", "path": "chrome/browser/browsing_data_file_system_helper_unittest.cc", "mode": "33188", "license": "bsd-3-clause", "language": [ { "name": "Assembly", "bytes": "1178292" }, { "name": "C", "bytes": "73237787" }, { "name": "C++", "bytes": "116793287" }, { "name": "F#", "bytes": "381" }, { "name": "Go", "bytes": "10440" }, { "name": "Java", "bytes": "23296" }, { "name": "JavaScript", "bytes": "8698365" }, { "name": "Objective-C", "bytes": "5351255" }, { "name": "PHP", "bytes": "97796" }, { "name": "Perl", "bytes": "918286" }, { "name": "Python", "bytes": "5933085" }, { "name": "R", "bytes": "524" }, { "name": "Shell", "bytes": "4149150" }, { "name": "Tcl", "bytes": "277077" } ] }
package com.doubleleft.hook; import org.apache.http.Header; import org.json.JSONException; import org.json.JSONObject; import org.json.JSONTokener; import com.loopj.android.http.*; import android.content.Context; import android.content.SharedPreferences; import android.util.Log; /** * Created by glaet on 2/28/14. */ public class Auth { protected static String AUTH_TOKEN_KEY = "hook-auth-token"; protected static String AUTH_DATA_KEY = "hook-auth-data"; protected SharedPreferences localStorage; protected JSONObject _currentUser; protected Client client; public Auth(Client client) { this.client = client; if (Client.context != null) { localStorage = Client.context.getSharedPreferences("hook-localStorage-" + client.getAppId(), Context.MODE_PRIVATE); if (localStorage != null) { String currentUser = localStorage.getString(client.getAppId() + "-" + AUTH_DATA_KEY, null); if (currentUser != null) { try { JSONObject user = (JSONObject) new JSONTokener(currentUser).nextValue(); setCurrentUser(user); } catch (JSONException e) { Log.d("hook", "error on Auth module " + e.toString()); } } } } } public void register(JSONObject data, final JsonHttpResponseHandler responseHandler) { client.post("auth/email", data, new JsonHttpResponseHandler() { @Override public void onSuccess(int statusCode, Header[] headers, JSONObject response) { registerToken(response); responseHandler.onSuccess(statusCode, headers, response); } @Override public void onFailure(int statusCode, Header[] headers, Throwable throwable, JSONObject errorResponse) { responseHandler.onFailure(statusCode, headers, throwable, errorResponse); } }); } public void login(JSONObject data, final JsonHttpResponseHandler responseHandler) { client.post("auth/email/login", data, new JsonHttpResponseHandler() { @Override public void onSuccess(int statusCode, Header[] headers, JSONObject response) { registerToken(response); responseHandler.onSuccess(statusCode, headers, response); } @Override public void onFailure(int statusCode, Header[] headers, Throwable throwable, JSONObject errorResponse) { responseHandler.onFailure(statusCode, headers, throwable, errorResponse); } }); } public void forgotPassword(JSONObject data, AsyncHttpResponseHandler responder) { client.post("auth/email/forgotPassword", data, responder); } public void resetPassword(JSONObject data, AsyncHttpResponseHandler responder) { client.post("auth/email/resetPassword", data, responder); } public void logout() { setCurrentUser(null); } public boolean hasAuthToken() { return getAuthToken() != null; } public String getAuthToken() { return localStorage != null ? localStorage.getString(client.getAppId() + "-" + AUTH_TOKEN_KEY, null) : null; } protected void setCurrentUser(JSONObject data) { _currentUser = data; if (localStorage != null) { SharedPreferences.Editor editor = localStorage.edit(); if (_currentUser == null) { editor.remove(client.getAppId() + "-" + AUTH_TOKEN_KEY); editor.remove(client.getAppId() + "-" + AUTH_DATA_KEY); } else { editor.putString(client.getAppId() + "-" + AUTH_DATA_KEY, _currentUser.toString()); } editor.commit(); } } public JSONObject getCurrentUser() { return _currentUser; } protected void registerToken(JSONObject data) { JSONObject tokenObject = data.optJSONObject("token"); if (tokenObject != null) { if (localStorage != null) { SharedPreferences.Editor editor = localStorage.edit(); editor.putString(client.getAppId() + "-" + AUTH_TOKEN_KEY, tokenObject.optString("token")); editor.commit(); } setCurrentUser(data); } } }
{ "content_hash": "927971b2d715896035d346e10fbf4e70", "timestamp": "", "source": "github", "line_count": 131, "max_line_length": 118, "avg_line_length": 28.68702290076336, "alnum_prop": 0.7136774880255455, "repo_name": "doubleleft/hook-android", "id": "d7375d3ba5903a60611df0c9e796257053ccb0b8", "size": "3758", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "Hook/app/src/main/java/com/doubleleft/hook/Auth.java", "mode": "33188", "license": "mit", "language": [ { "name": "Java", "bytes": "24928" } ] }
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "https://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/> <meta http-equiv="X-UA-Compatible" content="IE=9"/> <meta name="generator" content="Doxygen 1.9.2"/> <meta name="viewport" content="width=device-width, initial-scale=1"/> <title>GrPPI: testing::internal::IgnoredValue Class Reference</title> <link href="tabs.css" rel="stylesheet" type="text/css"/> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript" src="dynsections.js"></script> <link href="search/search.css" rel="stylesheet" type="text/css"/> <script type="text/javascript" src="search/searchdata.js"></script> <script type="text/javascript" src="search/search.js"></script> <link href="doxygen.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="top"><!-- do not remove this div, it is closed by doxygen! --> <div id="titlearea"> <table cellspacing="0" cellpadding="0"> <tbody> <tr style="height: 56px;"> <td id="projectlogo"><img alt="Logo" src="logo.svg"/></td> <td id="projectalign" style="padding-left: 0.5em;"> <div id="projectname">GrPPI &#160;<span id="projectnumber">1.0</span> </div> <div id="projectbrief">Generic and Reusable Parallel Pattern Interface</div> </td> </tr> </tbody> </table> </div> <!-- end header part --> <!-- Generated by Doxygen 1.9.2 --> <script type="text/javascript"> /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */ var searchBox = new SearchBox("searchBox", "search",false,'Search','.html'); /* @license-end */ </script> <script type="text/javascript" src="menudata.js"></script> <script type="text/javascript" src="menu.js"></script> <script type="text/javascript"> /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */ $(function() { initMenu('',true,false,'search.php','Search'); $(document).ready(function() { init_search(); }); }); /* @license-end */</script> <div id="main-nav"></div> <!-- window showing the filter options --> <div id="MSearchSelectWindow" onmouseover="return searchBox.OnSearchSelectShow()" onmouseout="return searchBox.OnSearchSelectHide()" onkeydown="return searchBox.OnSearchSelectKey(event)"> </div> <!-- iframe showing the search results (closed by default) --> <div id="MSearchResultsWindow"> <iframe src="javascript:void(0)" frameborder="0" name="MSearchResults" id="MSearchResults"> </iframe> </div> <div id="nav-path" class="navpath"> <ul> <li class="navelem"><a class="el" href="namespacetesting.html">testing</a></li><li class="navelem"><a class="el" href="namespacetesting_1_1internal.html">internal</a></li><li class="navelem"><a class="el" href="classtesting_1_1internal_1_1_ignored_value.html">IgnoredValue</a></li> </ul> </div> </div><!-- top --> <div class="header"> <div class="summary"> <a href="#nested-classes">Classes</a> &#124; <a href="#pub-methods">Public Member Functions</a> &#124; <a href="classtesting_1_1internal_1_1_ignored_value-members.html">List of all members</a> </div> <div class="headertitle"> <div class="title">testing::internal::IgnoredValue Class Reference</div> </div> </div><!--header--> <div class="contents"> <p><code>#include &lt;<a class="el" href="cmake-build-debug_2googletest-src_2googletest_2include_2gtest_2internal_2gtest-internal_8h_source.html">gtest-internal.h</a>&gt;</code></p> <table class="memberdecls"> <tr class="heading"><td 
colspan="2"><h2 class="groupheader"><a name="pub-methods"></a> Public Member Functions</h2></td></tr> <tr class="memitem:a851d14f6c0f584d5c5a49ddbc06d6538"><td class="memTemplParams" colspan="2">template&lt;typename T , typename std::enable_if&lt;!std::is_convertible&lt; T, Sink &gt;::value, int &gt;::type = 0&gt; </td></tr> <tr class="memitem:a851d14f6c0f584d5c5a49ddbc06d6538"><td class="memTemplItemLeft" align="right" valign="top">&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="classtesting_1_1internal_1_1_ignored_value.html#a851d14f6c0f584d5c5a49ddbc06d6538">IgnoredValue</a> (const T &amp;)</td></tr> <tr class="separator:a851d14f6c0f584d5c5a49ddbc06d6538"><td class="memSeparator" colspan="2">&#160;</td></tr> <tr class="memitem:a851d14f6c0f584d5c5a49ddbc06d6538"><td class="memTemplParams" colspan="2">template&lt;typename T , typename std::enable_if&lt;!std::is_convertible&lt; T, Sink &gt;::value, int &gt;::type = 0&gt; </td></tr> <tr class="memitem:a851d14f6c0f584d5c5a49ddbc06d6538"><td class="memTemplItemLeft" align="right" valign="top">&#160;</td><td class="memTemplItemRight" valign="bottom"><a class="el" href="classtesting_1_1internal_1_1_ignored_value.html#a851d14f6c0f584d5c5a49ddbc06d6538">IgnoredValue</a> (const T &amp;)</td></tr> <tr class="separator:a851d14f6c0f584d5c5a49ddbc06d6538"><td class="memSeparator" colspan="2">&#160;</td></tr> </table> <h2 class="groupheader">Constructor &amp; Destructor Documentation</h2> <a id="a851d14f6c0f584d5c5a49ddbc06d6538"></a> <h2 class="memtitle"><span class="permalink"><a href="#a851d14f6c0f584d5c5a49ddbc06d6538">&#9670;&nbsp;</a></span>IgnoredValue() <span class="overload">[1/2]</span></h2> <div class="memitem"> <div class="memproto"> <div class="memtemplate"> template&lt;typename T , typename std::enable_if&lt;!std::is_convertible&lt; T, Sink &gt;::value, int &gt;::type = 0&gt; </div> <table class="mlabels"> <tr> <td class="mlabels-left"> <table class="memname"> <tr> <td class="memname">testing::internal::IgnoredValue::IgnoredValue </td> <td>(</td> <td class="paramtype">const T &amp;&#160;</td> <td class="paramname"></td><td>)</td> <td></td> </tr> </table> </td> <td class="mlabels-right"> <span class="mlabels"><span class="mlabel">inline</span></span> </td> </tr> </table> </div><div class="memdoc"> </div> </div> <a id="a851d14f6c0f584d5c5a49ddbc06d6538"></a> <h2 class="memtitle"><span class="permalink"><a href="#a851d14f6c0f584d5c5a49ddbc06d6538">&#9670;&nbsp;</a></span>IgnoredValue() <span class="overload">[2/2]</span></h2> <div class="memitem"> <div class="memproto"> <div class="memtemplate"> template&lt;typename T , typename std::enable_if&lt;!std::is_convertible&lt; T, Sink &gt;::value, int &gt;::type = 0&gt; </div> <table class="mlabels"> <tr> <td class="mlabels-left"> <table class="memname"> <tr> <td class="memname">testing::internal::IgnoredValue::IgnoredValue </td> <td>(</td> <td class="paramtype">const T &amp;&#160;</td> <td class="paramname"></td><td>)</td> <td></td> </tr> </table> </td> <td class="mlabels-right"> <span class="mlabels"><span class="mlabel">inline</span></span> </td> </tr> </table> </div><div class="memdoc"> </div> </div> <hr/>The documentation for this class was generated from the following files:<ul> <li><a class="el" href="cmake-build-debug_2googletest-src_2googletest_2include_2gtest_2internal_2gtest-internal_8h_source.html">cmake-build-debug/googletest-src/googletest/include/gtest/internal/gtest-internal.h</a></li> <li><a class="el" 
href="cmake-build-release_2googletest-src_2googletest_2include_2gtest_2internal_2gtest-internal_8h_source.html">cmake-build-release/googletest-src/googletest/include/gtest/internal/gtest-internal.h</a></li> </ul> </div><!-- contents --> <!-- start footer part --> <hr class="footer"/><address class="footer"><small> Generated by&#160;<a href="https://www.doxygen.org/index.html"><img class="footer" src="doxygen.svg" width="104" height="31" alt="doxygen"/></a> 1.9.2 </small></address> </body> </html>
{ "content_hash": "756cd4afb2bc211f29bd9e797d603b21", "timestamp": "", "source": "github", "line_count": 158, "max_line_length": 312, "avg_line_length": 49.36708860759494, "alnum_prop": 0.6874358974358974, "repo_name": "arcosuc3m/grppi", "id": "6d815427445a98a12bf4e2ef2449482378657836", "size": "7800", "binary": false, "copies": "1", "ref": "refs/heads/master", "path": "docs/0.4/classtesting_1_1internal_1_1_ignored_value.html", "mode": "33188", "license": "apache-2.0", "language": [ { "name": "C", "bytes": "1463" }, { "name": "C++", "bytes": "474323" }, { "name": "CMake", "bytes": "10167" } ] }
End of preview.
