code            stringlengths   5 – 1.01M
repo_name       stringlengths   5 – 84
path            stringlengths   4 – 311
language        stringclasses   30 values
license         stringclasses   15 values
size            int64           5 – 1.01M
input_ids       listlengths     502 – 502
token_type_ids  listlengths     502 – 502
attention_mask  listlengths     502 – 502
labels          listlengths     502 – 502
<html> <head> <META http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Chapter&nbsp;4.&nbsp;Transfer Tool</title> <link href="../docbook.css" rel="stylesheet" type="text/css"> <meta content="DocBook XSL-NS Stylesheets V1.74.0" name="generator"> <meta name="keywords" content="Hsqldb, Transfer"> <meta name="keywords" content="HyperSQL, Hsqldb, Hypersonic, Database, JDBC, Java"> <link rel="home" href="index.html" title="HyperSQL Utilities Guide"> <link rel="up" href="index.html" title="HyperSQL Utilities Guide"> <link rel="prev" href="dbm-chapt.html" title="Chapter&nbsp;3.&nbsp;Database Manager"> <link rel="next" href="apa.html" title="Appendix&nbsp;A.&nbsp;HyperSQL File Links"> </head> <body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"> <div class="navheader"> <table summary="Navigation header" width="100%"> <tr> <td align="left" width="30%"><a accesskey="p" href="dbm-chapt.html"><img src="../images/db/prev.png" alt="Prev"></a>&nbsp;</td><td align="center" width="40%" style="font-weight:bold;">Chapter&nbsp;4.&nbsp;Transfer Tool</td><td align="right" width="30%">&nbsp;<a accesskey="n" href="apa.html"><img src="../images/db/next.png" alt="Next"></a></td> </tr> <tr> <td valign="top" align="left" width="30%">Chapter&nbsp;3.&nbsp;Database Manager&nbsp;</td><td align="center" width="40%"><a accesskey="h" href="index.html"><img src="../images/db/home.png" alt="Home"></a></td><td valign="top" align="right" width="30%">&nbsp;Appendix&nbsp;A.&nbsp;HyperSQL File Links</td> </tr> </table> </div> <HR> <div class="chapter" lang="en"> <div class="titlepage"> <div> <div> <h2 class="title"> <a name="transfer-tool-chapt"></a>Chapter&nbsp;4.&nbsp;Transfer Tool</h2> </div> <div> <div class="authorgroup"> <div class="author"> <h3 class="author"> <span class="firstname">Fred</span> <span class="surname">Toussi</span> </h3> <div class="affiliation"> <span class="orgname">The HSQL Development Group<br> </span> </div> </div> </div> </div> 
</div> </div> <div class="toc"> <p> <b>Table of Contents</b> </p> <dl> <dt> <span class="section"><a href="transfer-tool-chapt.html#trantool_intro-sect">Brief Introduction</a></span> </dt> </dl> </div> <div class="section" lang="en"> <div class="titlepage"> <div> <div> <h2 class="title" style="clear: both"> <a name="trantool_intro-sect"></a>Brief Introduction</h2> </div> </div> </div> <p>Transfer Tool is a GUI program for transferring SQL schema and data from one JDBC source to another. Source and destination can be different database engines or different databases on the same server.</p> <p>Transfer Tool works in two different modes. Direct transfer maintains a connection to both source and destination and performs the transfer. Dump and Restore mode is invoked once to transfer the data from the source to a text file (Dump), then again to transfer the data from the text file to the destination (Restore). With Dump and Restore, it is possible to make any changes to database object definitions and data prior to restoring it to the target.</p> <p>Dump and Restore modes can be set via the command line with -d (--dump) or -r (--restore) options. Alternatively the Transfer Tool can be started with any of the three modes from the Database Manager's Tools menu.</p> <p>The connection dialogue allows you to save the settings for the connection you are about to make. You can then access the connection in future sessions. These settings are shared with those from the Database Manager tool. See the appendix on Database Manager for details of the connection dialogue box.</p> <p>From version 1.8.0 Transfer Tool is no longer part of the <code class="filename">hsqldb.jar</code>. You can build the <code class="filename">hsqldbutil.jar</code> using the Ant command of the same name, to build a jar that includes Transfer Tool and the Database Manager.</p> <p>When collecting meta-data, Transfer Tool performs SELECT * FROM &lt;table&gt; queries on all the tables in the source database. 
This may take a long time with some database engines. When the source database is HSQLDB, this means memory should be available for the result sets returned from the queries. Therefore, the memory allocation of the java process in which Transfer Tool is executed may have to be high.</p> <p>The current version of Transfer is far from ideal, as it has not been actively developed for several years. The program also lacks the ability to create UNIQUE constraints and creates UNIQUE indexes instead. However, some bugs have been fixed in the latest version and the program can be used with most of the supported databases. The best way to use the program is the DUMP and RESTORE modes, which allow you to manually change the SQL statements in the dump file before restoring to a database. A useful idea is to dump and restore the database definition separately from the database data.</p> </div> </div> <HR xmlns:xi="http://www.w3.org/2001/XInclude"> <P xmlns:xi="http://www.w3.org/2001/XInclude" class="svnrev">$Revision: 3539 $</P> <div class="navfooter"> <hr> <table summary="Navigation footer" width="100%"> <tr> <td align="left" width="40%"><a accesskey="p" href="dbm-chapt.html"><img src="../images/db/prev.png" alt="Prev"></a>&nbsp;</td><td align="center" width="20%">&nbsp;</td><td align="right" width="40%">&nbsp;<a accesskey="n" href="apa.html"><img src="../images/db/next.png" alt="Next"></a></td> </tr> <tr> <td valign="top" align="left" width="40%">Chapter&nbsp;3.&nbsp;Database Manager&nbsp;</td><td align="center" width="20%"><a accesskey="h" href="index.html"><img src="../images/db/home.png" alt="Home"></a></td><td valign="top" align="right" width="40%">&nbsp;Appendix&nbsp;A.&nbsp;HyperSQL File Links</td> </tr> </table> </div> </body> </html>
virtix/mut4j
lib/hsqldb-2.0.0/hsqldb/doc/util-guide/transfer-tool-chapt.html
HTML
mit
5,954
[ 30522, 1026, 16129, 1028, 1026, 2132, 1028, 1026, 18804, 8299, 1011, 1041, 15549, 2615, 1027, 1000, 4180, 1011, 2828, 1000, 4180, 1027, 1000, 3793, 1013, 16129, 1025, 25869, 13462, 30524, 1004, 1050, 5910, 2361, 1025, 1018, 1012, 1004, 1050...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
package conformance func stringGet(s string, index int) byte { return s[index] } func stringLen(s string) int { return len(s) } func substring(s string, low, high int) string { switch { case low >= 0 && high >= 0: return s[low:high] case low >= 0: return s[low:] case high >= 0: return s[:high] default: return s } }
Quasilyte/goism
src/emacs/conformance/12_strings_test.go
GO
mit
336
[ 30522, 7427, 23758, 6651, 4569, 2278, 5164, 18150, 1006, 1055, 5164, 1010, 5950, 20014, 1007, 24880, 1063, 2709, 1055, 1031, 5950, 1033, 1065, 4569, 2278, 5164, 7770, 1006, 1055, 5164, 1007, 20014, 1063, 2709, 18798, 1006, 1055, 1007, 1065,...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
module.exports = function(grunt) { grunt.config.set('sass', { options: { loadPath: 'src/styles' }, dev: { files: { 'build/dist/styles.css': 'build/tmp/styles.scss' } }, prod: { files: { 'build/dist/styles.css': 'build/tmp/styles.scss' } } }); grunt.loadNpmTasks('grunt-contrib-sass'); };
dlabey/Angular-Grunt-SASS-Workflow-Skeleton
build/tasks/contrib-sass.js
JavaScript
mit
440
[ 30522, 30524, 4487, 3367, 1013, 6782, 1012, 20116, 2015, 1005, 1024, 1005, 3857, 1013, 1056, 8737, 1013, 6782, 1012, 8040, 4757, 1005, 1065, 1065, 1010, 4013, 2094, 1024, 1063, 6764, 1024, 1063, 1005, 3857, 1013, 4487, 3367, 1013, 6782, 1...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
/** * @license Copyright (c) 2003-2019, CKSource - Frederico Knabben. All rights reserved. * For licensing, see https://ckeditor.com/legal/ckeditor-oss-license */ CKEDITOR.editorConfig = function( config ) { // Define changes to default configuration here. // For complete reference see: // https://ckeditor.com/docs/ckeditor4/latest/api/CKEDITOR_config.html // The toolbar groups arrangement, optimized for a single toolbar row. config.toolbarGroups = [ { name: 'document', groups: [ 'mode', 'document', 'doctools' ] }, { name: 'clipboard', groups: [ 'clipboard', 'undo' ] }, { name: 'editing', groups: [ 'find', 'selection', 'spellchecker' ] }, { name: 'forms' }, { name: 'basicstyles', groups: [ 'basicstyles', 'cleanup' ] }, { name: 'paragraph', groups: [ 'list', 'indent', 'blocks', 'align', 'bidi' ] }, { name: 'links' }, { name: 'insert' }, { name: 'styles' }, { name: 'colors' }, { name: 'tools' }, { name: 'others' }, { name: 'about' } ]; // The default plugins included in the basic setup define some buttons that // are not needed in a basic editor. They are removed here. config.removeButtons = 'Cut,Copy,Paste,Undo,Redo,Anchor,Underline,Strike,Subscript,Superscript'; // Dialog windows are also simplified. config.removeDialogTabs = 'link:advanced'; };
CRLbazin/agoraexmachina
public/js/ckeditor/config.js
JavaScript
gpl-2.0
1,321
[ 30522, 1013, 1008, 1008, 1008, 1030, 6105, 9385, 1006, 1039, 1007, 2494, 1011, 10476, 30524, 1012, 3559, 8663, 8873, 2290, 1027, 3853, 1006, 9530, 8873, 2290, 1007, 1063, 1013, 1013, 9375, 3431, 2000, 12398, 9563, 2182, 1012, 1013, 1013, ...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
// -*-Mode: C++;-*- // * BeginRiceCopyright ***************************************************** // // $HeadURL$ // $Id$ // // -------------------------------------------------------------------------- // Part of HPCToolkit (hpctoolkit.org) // // Information about sources of support for research and development of // HPCToolkit is at 'hpctoolkit.org' and in 'README.Acknowledgments'. // -------------------------------------------------------------------------- // // Copyright ((c)) 2002-2015, Rice University // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // // * Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // // * Neither the name of Rice University (RICE) nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // This software is provided by RICE and contributors "as is" and any // express or implied warranties, including, but not limited to, the // implied warranties of merchantability and fitness for a particular // purpose are disclaimed. 
In no event shall RICE or contributors be // liable for any direct, indirect, incidental, special, exemplary, or // consequential damages (including, but not limited to, procurement of // substitute goods or services; loss of use, data, or profits; or // business interruption) however caused and on any theory of liability, // whether in contract, strict liability, or tort (including negligence // or otherwise) arising in any way out of the use of this software, even // if advised of the possibility of such damage. // // ******************************************************* EndRiceCopyright * //*************************************************************************** // // File: // $HeadURL$ // // Purpose: // [The purpose of this file] // // Description: // [The set of functions, macros, etc. defined in the file] // //*************************************************************************** //************************* System Include Files **************************** #include <fstream> #include <string> using std::string; #include <cstring> //*************************** User Include Files **************************** #include "xml.hpp" #include <lib/support/diagnostics.h> //*************************** Forward Declarations *************************** using namespace xml; const string xml::SPC = " "; // space const string xml::eleB = "<"; // element begin, initial const string xml::eleBf = "</"; // element begin, final const string xml::eleE = ">"; // element end, normal const string xml::eleEc = "/>"; // element end, compact: <.../> const string xml::attB = "=\""; // attribute value begin const string xml::attE = "\""; // attribute value end //**************************************************************************** // Read //**************************************************************************** // Reads from 'attB' to and including 'attE'. 
bool xml::ReadAttrStr(std::istream& is, string& s, int flags) { bool STATE = true; // false indicates an error is >> std::ws; STATE &= IOUtil::Skip(is, "="); is >> std::ws; STATE &= IOUtil::Skip(is, "\""); is >> std::ws; s = IOUtil::Get(is, '"'); if (flags & UNESC_TRUE) { s = UnEscapeStr(s); } STATE &= IOUtil::Skip(is, "\""); is >> std::ws; return STATE; } //**************************************************************************** // Write //**************************************************************************** // Writes attribute value, beginning with 'attB' and ending with 'attE'. bool xml::WriteAttrStr(std::ostream& os, const char* s, int flags) { string str = ((flags & ESC_TRUE) ? EscapeStr(s) : s); os << attB << str << attE; return (!os.fail()); } //**************************************************************************** // //**************************************************************************** // 'EscapeStr' and 'UnEscapeStr': Returns the string with all // necessary characters (un)escaped; will not modify 'str' namespace xml { static string substitute(const char* str, const string* fromStrs, const string* toStrs); static const int numSubs = 4; // number of substitutes static const string RegStrs[] = {"<", ">", "&", "\""}; static const string EscStrs[] = {"&lt;", "&gt;", "&amp;", "&quot;"}; } string xml::EscapeStr(const char* str) { return substitute(str, RegStrs, EscStrs); } string xml::UnEscapeStr(const char* str) { return substitute(str, EscStrs, RegStrs); } static string xml::substitute(const char* str, const string* fromStrs, const string* toStrs) { static string newStr = string("", 512); string retStr = str; if (!str) { return retStr; } // Iterate over 'str' and substitute patterns newStr = ""; int strLn = strlen(str); for (int i = 0; str[i] != '\0'; /* */) { // Attempt to find a pattern for substitution at this position int curSub = 0, curSubLn = 0; for (/*curSub = 0*/; curSub < numSubs; curSub++) { curSubLn = 
fromStrs[curSub].length(); if ((strLn-i >= curSubLn) && (strncmp(str+i, fromStrs[curSub].c_str(), curSubLn) == 0)) { break; // only one substitution possible per position } } // Substitute or copy current position; Adjust iteration to // inspect next character. (resizes if necessary) if (curSub < numSubs) { // we found a string to substitute newStr += toStrs[curSub]; i += curSubLn; } else { newStr += str[i]; i++; } } retStr = newStr; return retStr; } //**************************************************************************** // //****************************************************************************
zcth428/hpctoolkit111
src/lib/xml/xml.cpp
C++
bsd-3-clause
6,180
[ 30522, 1013, 1013, 1011, 1008, 1011, 5549, 1024, 1039, 1009, 1009, 1025, 1011, 1008, 1011, 1013, 1013, 1008, 4088, 17599, 3597, 7685, 15950, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1008, 1...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta charset="utf-8" /> <title>statsmodels.base.model.GenericLikelihoodModelResults.remove_data &#8212; statsmodels v0.10.0 documentation</title> <link rel="stylesheet" href="../../_static/nature.css" type="text/css" /> <link rel="stylesheet" href="../../_static/pygments.css" type="text/css" /> <link rel="stylesheet" type="text/css" href="../../_static/graphviz.css" /> <script type="text/javascript" id="documentation_options" data-url_root="../../" src="../../_static/documentation_options.js"></script> <script type="text/javascript" src="../../_static/jquery.js"></script> <script type="text/javascript" src="../../_static/underscore.js"></script> <script type="text/javascript" src="../../_static/doctools.js"></script> <script type="text/javascript" src="../../_static/language_data.js"></script> <script async="async" type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-AMS-MML_HTMLorMML"></script> <link rel="shortcut icon" href="../../_static/statsmodels_hybi_favico.ico"/> <link rel="author" title="About these documents" href="../../about.html" /> <link rel="index" title="Index" href="../../genindex.html" /> <link rel="search" title="Search" href="../../search.html" /> <link rel="next" title="statsmodels.base.model.GenericLikelihoodModelResults.save" href="statsmodels.base.model.GenericLikelihoodModelResults.save.html" /> <link rel="prev" title="statsmodels.base.model.GenericLikelihoodModelResults.pvalues" href="statsmodels.base.model.GenericLikelihoodModelResults.pvalues.html" /> <link rel="stylesheet" href="../../_static/examples.css" type="text/css" /> <link rel="stylesheet" href="../../_static/facebox.css" type="text/css" /> <script type="text/javascript" src="../../_static/scripts.js"> </script> <script type="text/javascript" src="../../_static/facebox.js"> </script> <script type="text/javascript"> $.facebox.settings.closeImage = 
"../../_static/closelabel.png" $.facebox.settings.loadingImage = "../../_static/loading.gif" </script> <script> $(document).ready(function() { $.getJSON("../../../versions.json", function(versions) { var dropdown = document.createElement("div"); dropdown.className = "dropdown"; var button = document.createElement("button"); button.className = "dropbtn"; button.innerHTML = "Other Versions"; var content = document.createElement("div"); content.className = "dropdown-content"; dropdown.appendChild(button); dropdown.appendChild(content); $(".header").prepend(dropdown); for (var i = 0; i < versions.length; i++) { if (versions[i].substring(0, 1) == "v") { versions[i] = [versions[i], versions[i].substring(1)]; } else { versions[i] = [versions[i], versions[i]]; }; }; for (var i = 0; i < versions.length; i++) { var a = document.createElement("a"); a.innerHTML = versions[i][1]; a.href = "../../../" + versions[i][0] + "/index.html"; a.title = versions[i][1]; $(".dropdown-content").append(a); }; }); }); </script> </head><body> <div class="headerwrap"> <div class = "header"> <a href = "../../index.html"> <img src="../../_static/statsmodels_hybi_banner.png" alt="Logo" style="padding-left: 15px"/></a> </div> </div> <div class="related" role="navigation" aria-label="related navigation"> <h3>Navigation</h3> <ul> <li class="right" style="margin-right: 10px"> <a href="../../genindex.html" title="General Index" accesskey="I">index</a></li> <li class="right" > <a href="../../py-modindex.html" title="Python Module Index" >modules</a> |</li> <li class="right" > <a href="statsmodels.base.model.GenericLikelihoodModelResults.save.html" title="statsmodels.base.model.GenericLikelihoodModelResults.save" accesskey="N">next</a> |</li> <li class="right" > <a href="statsmodels.base.model.GenericLikelihoodModelResults.pvalues.html" title="statsmodels.base.model.GenericLikelihoodModelResults.pvalues" accesskey="P">previous</a> |</li> <li><a href ="../../install.html">Install</a></li> &nbsp;|&nbsp; 
<li><a href="https://groups.google.com/forum/?hl=en#!forum/pystatsmodels">Support</a></li> &nbsp;|&nbsp; <li><a href="https://github.com/statsmodels/statsmodels/issues">Bugs</a></li> &nbsp;|&nbsp; <li><a href="../index.html">Develop</a></li> &nbsp;|&nbsp; <li><a href="../../examples/index.html">Examples</a></li> &nbsp;|&nbsp; <li><a href="../../faq.html">FAQ</a></li> &nbsp;|&nbsp; <li class="nav-item nav-item-1"><a href="../index.html" >Developer Page</a> |</li> <li class="nav-item nav-item-2"><a href="../internal.html" >Internal Classes</a> |</li> <li class="nav-item nav-item-3"><a href="statsmodels.base.model.GenericLikelihoodModelResults.html" accesskey="U">statsmodels.base.model.GenericLikelihoodModelResults</a> |</li> </ul> </div> <div class="document"> <div class="documentwrapper"> <div class="bodywrapper"> <div class="body" role="main"> <div class="section" id="statsmodels-base-model-genericlikelihoodmodelresults-remove-data"> <h1>statsmodels.base.model.GenericLikelihoodModelResults.remove_data<a class="headerlink" href="#statsmodels-base-model-genericlikelihoodmodelresults-remove-data" title="Permalink to this headline">¶</a></h1> <p>method</p> <dl class="method"> <dt id="statsmodels.base.model.GenericLikelihoodModelResults.remove_data"> <code class="sig-prename descclassname">GenericLikelihoodModelResults.</code><code class="sig-name descname">remove_data</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#statsmodels.base.model.GenericLikelihoodModelResults.remove_data" title="Permalink to this definition">¶</a></dt> <dd><p>remove data arrays, all nobs arrays from result and model</p> <p>This reduces the size of the instance, so it can be pickled with less memory. 
Currently tested for use with predict from an unpickled results and model instance.</p> <div class="admonition warning"> <p class="admonition-title">Warning</p> <p>Since data and some intermediate results have been removed calculating new statistics that require them will raise exceptions. The exception will occur the first time an attribute is accessed that has been set to None.</p> </div> <p>Not fully tested for time series models, tsa, and might delete too much for prediction or not all that would be possible.</p> <p>The lists of arrays to delete are maintained as attributes of the result and model instance, except for cached values. These lists could be changed before calling remove_data.</p> <p>The attributes to remove are named in:</p> <dl class="simple"> <dt>model._data_attr<span class="classifier">arrays attached to both the model instance</span></dt><dd><p>and the results instance with the same attribute name.</p> </dd> <dt>result.data_in_cache<span class="classifier">arrays that may exist as values in</span></dt><dd><p>result._cache (TODO : should privatize name)</p> </dd> <dt>result._data_attr_model<span class="classifier">arrays attached to the model</span></dt><dd><p>instance but not to the results instance</p> </dd> </dl> </dd></dl> </div> </div> </div> </div> <div class="sphinxsidebar" role="navigation" aria-label="main navigation"> <div class="sphinxsidebarwrapper"> <h4>Previous topic</h4> <p class="topless"><a href="statsmodels.base.model.GenericLikelihoodModelResults.pvalues.html" title="previous chapter">statsmodels.base.model.GenericLikelihoodModelResults.pvalues</a></p> <h4>Next topic</h4> <p class="topless"><a href="statsmodels.base.model.GenericLikelihoodModelResults.save.html" title="next chapter">statsmodels.base.model.GenericLikelihoodModelResults.save</a></p> <div role="note" aria-label="source link"> <h3>This Page</h3> <ul class="this-page-menu"> <li><a 
href="../../_sources/dev/generated/statsmodels.base.model.GenericLikelihoodModelResults.remove_data.rst.txt" rel="nofollow">Show Source</a></li> </ul> </div> <div id="searchbox" style="display: none" role="search"> <h3 id="searchlabel">Quick search</h3> <div class="searchformwrapper"> <form class="search" action="../../search.html" method="get"> <input type="text" name="q" aria-labelledby="searchlabel" /> <input type="submit" value="Go" /> </form> </div> </div> <script type="text/javascript">$('#searchbox').show(0);</script> </div> </div> <div class="clearer"></div> </div> <div class="footer" role="contentinfo"> &#169; Copyright 2009-2018, Josef Perktold, Skipper Seabold, Jonathan Taylor, statsmodels-developers. Created using <a href="http://sphinx-doc.org/">Sphinx</a> 2.1.2. </div> </body> </html>
statsmodels/statsmodels.github.io
v0.10.0/dev/generated/statsmodels.base.model.GenericLikelihoodModelResults.remove_data.html
HTML
bsd-3-clause
9,140
[ 30522, 1026, 999, 9986, 13874, 16129, 1028, 1026, 16129, 20950, 3619, 1027, 1000, 8299, 1024, 1013, 1013, 7479, 1012, 1059, 2509, 1012, 8917, 1013, 2639, 1013, 1060, 11039, 19968, 1000, 1028, 1026, 2132, 1028, 1026, 18804, 25869, 13462, 102...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
jsonp({"cep":"40313050","logradouro":"Avenida Cl\u00e9ber","bairro":"Cidade Nova","cidade":"Salvador","uf":"BA","estado":"Bahia"});
lfreneda/cepdb
api/v1/40313050.jsonp.js
JavaScript
cc0-1.0
132
[ 30522, 1046, 3385, 2361, 1006, 1063, 1000, 8292, 2361, 1000, 1024, 1000, 28203, 17134, 2692, 12376, 1000, 1010, 1000, 8833, 12173, 8162, 2080, 1000, 1024, 1000, 13642, 3490, 2850, 18856, 1032, 1057, 8889, 2063, 2683, 5677, 1000, 1010, 1000,...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
<?php class CaptchaHelper extends CComponent { // public $sess_captcha = 'captcha'; protected $chars = 'ABCDEFGHKLMNPQRSTUVWYZ13456789'; protected $randomStr = ''; protected $image = null; protected $string = ''; public $location = ''; public function output( $width=100, $height=30, $length=3 ) { $location = $this->location; $resultStr = $this->randomStr; $chars = $this->chars; // Generating the captcha string for ( $i = 0; $i < $length; $i++ ) { $pos = mt_rand( 0, strlen( $chars )-1 ); $resultStr .= substr( $chars, $pos, 1 ); } $newImage = imagecreatefromjpeg( "$location/img.jpg" ); $textColor = imagecolorallocate( $newImage, 0, 0, 0 ); //$line_clr = imagecolorallocate($newImage, 0, 255, 11); //Top left to Bottom Left //imageline($newImage, 0, $height-22, $width, $height-1, $line_clr); // Bottom Left to Bottom Right //imageline($newImage, $width-1, 0, $width-100, $height, $line_clr); //imageline($newImage, $height-1, 0, $width-100, $width, $line_clr); //imageline($newImage, $width-1, 0, $height-1, $width, $line_clr); // Print the string $result = imagestring( $newImage, 5, 20, 6, $resultStr, $textColor ); $this->image = $newImage; $this->string = $resultStr; return $result; } public function getCaptchaImage() { return $this->image; } public function getCaptchaString() { return $this->string; } }
zmiftah/yiihippo
system/protected/admin/components/CaptchaHelper.php
PHP
mit
1,403
[ 30522, 1026, 1029, 25718, 2465, 14408, 7507, 16001, 4842, 8908, 10507, 25377, 5643, 3372, 1063, 1013, 1013, 2270, 1002, 7367, 4757, 1035, 14408, 7507, 1027, 1005, 14408, 7507, 1005, 1025, 5123, 1002, 25869, 2015, 1027, 1005, 5925, 3207, 254...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
/* The MIT License (MIT) Copyright (c) <2010-2020> <wenshengming> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/
#include "LClientManager.h"
#include "LClient.h"
#include "LLoginServerMainLogic.h"

LClientManager::~LClientManager()
{
    m_pLSMainLogic = NULL;
}

LClientManager::LClientManager()
{
}

// Initialize the connection pool
bool LClientManager::Initialize(unsigned int unMaxClientServ)
{
    if (m_pLSMainLogic == NULL)
    {
        return false;
    }
    if (unMaxClientServ == 0)
    {
        unMaxClientServ = 1000;
    }
    for (unsigned int ui = 0; ui < unMaxClientServ; ++ui)
    {
        LClient *pClient = new LClient;
        if (pClient == NULL)
        {
            return false;
        }
        m_queueClientPool.push(pClient);
    }
    return true;
}

// Network-layer session ID; tsa carries the accepted-connection info
bool LClientManager::AddNewUpSession(uint64_t u64SessionID, t_Session_Accepted& tsa)
{
    map<uint64_t, LClient*>::iterator _ito = m_mapClientManagerBySessionID.find(u64SessionID);
    if (_ito != m_mapClientManagerBySessionID.end())
    {
        return false;
    }
    LClient* pClient = GetOneClientFromPool();
    if (pClient == NULL)
    {
        return false;
    }
    pClient->SetClientInfo(u64SessionID, tsa);
    m_mapClientManagerBySessionID[u64SessionID] = pClient;
    return true;
}

void LClientManager::RemoveOneSession(uint64_t u64SessionID)
{
    map<uint64_t, LClient*>::iterator _ito = m_mapClientManagerBySessionID.find(u64SessionID);
    if (_ito != m_mapClientManagerBySessionID.end())
    {
        FreeOneClientToPool(_ito->second);
        m_mapClientManagerBySessionID.erase(_ito);
    }
}

LClient* LClientManager::GetOneClientFromPool()
{
    if (m_queueClientPool.empty())
    {
        return NULL;
    }
    LClient* pClient = m_queueClientPool.front();
    m_queueClientPool.pop();
    return pClient;
}

void LClientManager::FreeOneClientToPool(LClient* pClient)
{
    if (pClient == NULL)
    {
        return;
    }
    m_queueClientPool.push(pClient);
}

LClient* LClientManager::FindClientBySessionID(uint64_t u64SessionID)
{
    map<uint64_t, LClient*>::iterator _ito = m_mapClientManagerBySessionID.find(u64SessionID);
    if (_ito == m_mapClientManagerBySessionID.end())
    {
        return NULL;
    }
    return _ito->second;
}

void LClientManager::SetLSMainLogic(LLoginServerMainLogic* plsml)
{
    m_pLSMainLogic = plsml;
}

LLoginServerMainLogic* LClientManager::GetLSMainLogic()
{
    return m_pLSMainLogic;
}

unsigned int LClientManager::GetClientCount()
{
    return m_mapClientManagerBySessionID.size();
}
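The file above is a classic pre-allocated object pool: LClient objects are created up front, handed out per accepted session, indexed by session ID, and recycled on disconnect. A minimal standalone sketch of the same pattern; the SessionPool/Client names and the int payload are illustrative, not part of the original server:

```cpp
#include <cstdint>
#include <map>
#include <queue>

// Illustrative stand-in for LClient: one pooled per-session object.
struct Client {
    uint64_t session_id = 0;
};

// Queue-backed free pool plus a session-ID index, mirroring
// m_queueClientPool / m_mapClientManagerBySessionID above.
class SessionPool {
 public:
    explicit SessionPool(unsigned int capacity) {
        for (unsigned int i = 0; i < capacity; ++i)
            pool_.push(new Client);
    }
    ~SessionPool() {
        while (!pool_.empty()) { delete pool_.front(); pool_.pop(); }
        for (auto& kv : by_session_) delete kv.second;
    }
    // Analogue of AddNewUpSession: refuses duplicates and an empty pool.
    bool AddSession(uint64_t sid) {
        if (by_session_.count(sid) || pool_.empty()) return false;
        Client* c = pool_.front();
        pool_.pop();
        c->session_id = sid;
        by_session_[sid] = c;
        return true;
    }
    // Analogue of RemoveOneSession: returns the object to the free pool.
    void RemoveSession(uint64_t sid) {
        auto it = by_session_.find(sid);
        if (it == by_session_.end()) return;
        pool_.push(it->second);
        by_session_.erase(it);
    }
    Client* Find(uint64_t sid) {
        auto it = by_session_.find(sid);
        return it == by_session_.end() ? nullptr : it->second;
    }
    std::size_t Count() const { return by_session_.size(); }

 private:
    std::queue<Client*> pool_;
    std::map<uint64_t, Client*> by_session_;
};
```

One design consequence, also visible in AddNewUpSession: once the pool is exhausted, new sessions are simply refused rather than allocated on demand, which bounds memory at the cost of a hard connection cap.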
wenshengming/linuxgameserver
LoginServer/LClientManager.cpp
C++
mit
3,333
[ 30522, 1013, 1008, 1996, 10210, 6105, 1006, 10210, 1007, 9385, 1006, 1039, 1007, 1026, 2230, 1011, 12609, 1028, 1026, 19181, 4095, 13159, 6562, 1028, 6656, 2003, 2182, 3762, 4379, 1010, 2489, 1997, 3715, 1010, 2000, 2151, 2711, 11381, 1037,...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
/* * Copyright (C) 2010, 2011 Apple Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. 
*/
#ifndef THIRD_PARTY_BLINK_RENDERER_CORE_SCROLL_SCROLL_ANIMATOR_MAC_H_
#define THIRD_PARTY_BLINK_RENDERER_CORE_SCROLL_SCROLL_ANIMATOR_MAC_H_

#include <memory>

#include "base/mac/scoped_nsobject.h"
#include "base/single_thread_task_runner.h"
#include "third_party/blink/renderer/core/scroll/scroll_animator_base.h"
#include "third_party/blink/renderer/platform/geometry/float_point.h"
#include "third_party/blink/renderer/platform/geometry/float_size.h"
#include "third_party/blink/renderer/platform/geometry/int_rect.h"
#include "third_party/blink/renderer/platform/heap/handle.h"
#include "third_party/blink/renderer/platform/scheduler/public/post_cancellable_task.h"
#include "third_party/blink/renderer/platform/timer.h"

@class BlinkScrollAnimationHelperDelegate;
@class BlinkScrollbarPainterControllerDelegate;
@class BlinkScrollbarPainterDelegate;

typedef id ScrollbarPainterController;

namespace blink {

class Scrollbar;

// ScrollAnimatorMac implements keyboard-triggered scroll offset animations,
// scrollbar painting, and scrollbar opacity animations by delegating to native
// Cocoa APIs.
//
// Scroll offset animations are also known as "smooth scrolling". For the
// non-Mac implementation of user input smooth scrolling, see ScrollAnimator.
// For programmatic (CSSOM) smooth scrolls, see ProgrammaticScrollAnimator.
//
// Unlike ScrollAnimator, ScrollAnimatorMac only smooth-scrolls keyboard
// scrolls, and not mouse wheel scrolls. It also does not use compositor
// animations or any of the standard Blink animation machinery.
//
// This divergence is mostly historical. We could probably switch Mac to use
// ScrollAnimator for smooth scrolls if we factored out the scrollbar-related
// logic. See crbug.com/574283 and crbug.com/682209.
//
// ScrollAnimatorMac's scroll offset animations are implemented by
// NSScrollAnimationHelper which invokes a BlinkScrollAnimationHelperDelegate to
// service an animation frame by performing an immediate scroll to the requested
// offset (via NotifyOffsetChanged).
//
// The "scrollbar painter controller" is an NSScrollerImpPair object, which
// calls back into Blink via BlinkScrollbarPainterControllerDelegate.
//
// The "scrollbar painter" is an NSScrollerImp object, which calls back into
// Blink via BlinkScrollbarPainterDelegate. The scrollbar painter is registered
// with ScrollbarThemeMac, so that the ScrollbarTheme painting APIs can call
// into it.
//
// The scrollbar painter initiates an overlay scrollbar fade-out animation by
// calling animateKnobAlphaTo on the delegate. This starts a timer inside the
// BlinkScrollbarPartAnimationTimer. Each tick evaluates a cubic bezier
// function to obtain the current opacity, which is stored in the scrollbar
// painter with setKnobAlpha.
//
// If the scroller is composited, the opacity value stored on the scrollbar
// painter is subsequently read out through ScrollbarThemeMac::ThumbOpacity and
// plumbed into PaintedScrollbarLayerImpl::thumb_opacity_.
//
// TODO: explain other types of animations (TrackAlpha, UIStateTransition,
// ExpansionTransition), scrollbar paint timer, plumbing of scrollbar paint
// invalidations.
class CORE_EXPORT ScrollAnimatorMac : public ScrollAnimatorBase {
  USING_PRE_FINALIZER(ScrollAnimatorMac, Dispose);

 public:
  ScrollAnimatorMac(ScrollableArea*);
  ~ScrollAnimatorMac() override;

  void Dispose() override;

  void ImmediateScrollToOffsetForScrollAnimation(
      const ScrollOffset& new_offset);
  bool HaveScrolledSincePageLoad() const {
    return have_scrolled_since_page_load_;
  }
  void UpdateScrollerStyle();
  bool ScrollbarPaintTimerIsActive() const;
  void StartScrollbarPaintTimer();
  void StopScrollbarPaintTimer();
  void SendContentAreaScrolledSoon(const ScrollOffset& scroll_delta);

  void Trace(Visitor* visitor) override { ScrollAnimatorBase::Trace(visitor); }

 private:
  base::scoped_nsobject<id> scroll_animation_helper_;
  base::scoped_nsobject<BlinkScrollAnimationHelperDelegate>
      scroll_animation_helper_delegate_;
  base::scoped_nsobject<ScrollbarPainterController>
      scrollbar_painter_controller_;
  base::scoped_nsobject<BlinkScrollbarPainterControllerDelegate>
      scrollbar_painter_controller_delegate_;
  base::scoped_nsobject<BlinkScrollbarPainterDelegate>
      horizontal_scrollbar_painter_delegate_;
  base::scoped_nsobject<BlinkScrollbarPainterDelegate>
      vertical_scrollbar_painter_delegate_;

  void InitialScrollbarPaintTask();
  TaskHandle initial_scrollbar_paint_task_handle_;

  void SendContentAreaScrolledTask();
  TaskHandle send_content_area_scrolled_task_handle_;

  scoped_refptr<base::SingleThreadTaskRunner> task_runner_;
  ScrollOffset content_area_scrolled_timer_scroll_delta_;

  ScrollResult UserScroll(ScrollGranularity,
                          const ScrollOffset& delta,
                          ScrollableArea::ScrollCallback on_finish) override;
  void ScrollToOffsetWithoutAnimation(const ScrollOffset&) override;
  void CancelAnimation() override;
  void ContentAreaWillPaint() const override;
  void MouseEnteredContentArea() const override;
  void MouseExitedContentArea() const override;
  void MouseMovedInContentArea() const override;
  void MouseEnteredScrollbar(Scrollbar&) const override;
  void MouseExitedScrollbar(Scrollbar&) const override;
  void ContentsResized() const override;
  void ContentAreaDidShow() const override;
  void ContentAreaDidHide() const override;

  void FinishCurrentScrollAnimations() override;

  void DidAddVerticalScrollbar(Scrollbar&) override;
  void WillRemoveVerticalScrollbar(Scrollbar&) override;
  void DidAddHorizontalScrollbar(Scrollbar&) override;
  void WillRemoveHorizontalScrollbar(Scrollbar&) override;

  void NotifyContentAreaScrolled(const ScrollOffset& delta,
                                 mojom::blink::ScrollType) override;

  bool SetScrollbarsVisibleForTesting(bool) override;

  ScrollOffset AdjustScrollOffsetIfNecessary(const ScrollOffset&) const;

  void ImmediateScrollTo(const ScrollOffset&);

  bool have_scrolled_since_page_load_;
  bool needs_scroller_style_update_;
};

}  // namespace blink

#endif  // THIRD_PARTY_BLINK_RENDERER_CORE_SCROLL_SCROLL_ANIMATOR_MAC_H_
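The fade-out description in the header comments says each BlinkScrollbarPartAnimationTimer tick evaluates a cubic bezier function to obtain the current knob opacity. A sketch of that per-tick computation; the control values and helper names here are hypothetical, since the actual curve parameters are not part of this header:

```cpp
#include <cmath>

// Generic cubic Bezier over scalar control values p0..p3, evaluated at
// t in [0, 1]: B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3.
double CubicBezier(double p0, double p1, double p2, double p3, double t) {
    const double u = 1.0 - t;
    return u * u * u * p0
         + 3.0 * u * u * t * p1
         + 3.0 * u * t * t * p2
         + t * t * t * p3;
}

// One animation tick: map elapsed/duration to an opacity, fading from
// 1.0 (opaque) to 0.0 (hidden). The control values (1, 1, 0, 0) give an
// ease-in fade and are illustrative, not Blink's actual curve.
double KnobAlphaAt(double elapsed, double duration) {
    double t = duration <= 0.0 ? 1.0 : elapsed / duration;
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return CubicBezier(1.0, 1.0, 0.0, 0.0, t);
}
```

Each tick would store the returned value on the scrollbar painter (the role setKnobAlpha plays above), so the native painter and the compositor read a single, consistent opacity.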
endlessm/chromium-browser
third_party/blink/renderer/core/scroll/scroll_animator_mac.h
C
bsd-3-clause
7,569
[ 30522, 1013, 1008, 1008, 9385, 1006, 1039, 1007, 2230, 1010, 2249, 6207, 4297, 1012, 2035, 2916, 9235, 1012, 1008, 1008, 25707, 1998, 2224, 1999, 3120, 1998, 12441, 3596, 1010, 2007, 2030, 2302, 1008, 14080, 1010, 2024, 7936, 3024, 2008, ...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to you under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.hadoop.hbase.quotas; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotNull; import java.io.IOException; import java.util.Arrays; import java.util.HashSet; import java.util.Map; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.SnapshotDescription; import org.apache.hadoop.hbase.client.SnapshotType; import 
org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.quotas.SnapshotQuotaObserverChore.SnapshotWithSize; import org.apache.hadoop.hbase.quotas.SpaceQuotaHelperForTests.NoFilesToDischarge; import org.apache.hadoop.hbase.quotas.SpaceQuotaHelperForTests.SpaceQuotaSnapshotPredicate; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.AfterClass; import org.junit.Before; import org.junit.BeforeClass; import org.junit.Rule; import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; import org.apache.hadoop.hbase.shaded.com.google.common.collect.HashMultimap; import org.apache.hadoop.hbase.shaded.com.google.common.collect.Iterables; import org.apache.hadoop.hbase.shaded.com.google.common.collect.Multimap; /** * Test class for the {@link SnapshotQuotaObserverChore}. */ @Category(MediumTests.class) public class TestSnapshotQuotaObserverChore { private static final Log LOG = LogFactory.getLog(TestSnapshotQuotaObserverChore.class); private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static final AtomicLong COUNTER = new AtomicLong(); @Rule public TestName testName = new TestName(); private Connection conn; private Admin admin; private SpaceQuotaHelperForTests helper; private HMaster master; private SnapshotQuotaObserverChore testChore; @BeforeClass public static void setUp() throws Exception { Configuration conf = TEST_UTIL.getConfiguration(); SpaceQuotaHelperForTests.updateConfigForQuotas(conf); // Clean up the compacted files faster than normal (15s instead of 2mins) conf.setInt("hbase.hfile.compaction.discharger.interval", 15 * 1000); TEST_UTIL.startMiniCluster(1); } @AfterClass public static void tearDown() throws Exception { TEST_UTIL.shutdownMiniCluster(); } @Before public void setup() throws Exception { conn = TEST_UTIL.getConnection(); admin = TEST_UTIL.getAdmin(); helper = new 
SpaceQuotaHelperForTests(TEST_UTIL, testName, COUNTER); master = TEST_UTIL.getHBaseCluster().getMaster(); helper.removeAllQuotas(conn); testChore = new SnapshotQuotaObserverChore( TEST_UTIL.getConnection(), TEST_UTIL.getConfiguration(), master.getFileSystem(), master, null); } @Test public void testSnapshotSizePersistence() throws IOException { final Admin admin = TEST_UTIL.getAdmin(); final TableName tn = TableName.valueOf("quota_snapshotSizePersistence"); if (admin.tableExists(tn)) { admin.disableTable(tn); admin.deleteTable(tn); } HTableDescriptor desc = new HTableDescriptor(tn); desc.addFamily(new HColumnDescriptor(QuotaTableUtil.QUOTA_FAMILY_USAGE)); admin.createTable(desc); Multimap<TableName,SnapshotWithSize> snapshotsWithSizes = HashMultimap.create(); try (Table table = conn.getTable(tn)) { // Writing no values will result in no records written. verify(table, () -> { testChore.persistSnapshotSizes(table, snapshotsWithSizes); assertEquals(0, count(table)); }); verify(table, () -> { TableName originatingTable = TableName.valueOf("t1"); snapshotsWithSizes.put(originatingTable, new SnapshotWithSize("ss1", 1024L)); snapshotsWithSizes.put(originatingTable, new SnapshotWithSize("ss2", 4096L)); testChore.persistSnapshotSizes(table, snapshotsWithSizes); assertEquals(2, count(table)); assertEquals(1024L, extractSnapshotSize(table, originatingTable, "ss1")); assertEquals(4096L, extractSnapshotSize(table, originatingTable, "ss2")); }); snapshotsWithSizes.clear(); verify(table, () -> { snapshotsWithSizes.put(TableName.valueOf("t1"), new SnapshotWithSize("ss1", 1024L)); snapshotsWithSizes.put(TableName.valueOf("t2"), new SnapshotWithSize("ss2", 4096L)); snapshotsWithSizes.put(TableName.valueOf("t3"), new SnapshotWithSize("ss3", 8192L)); testChore.persistSnapshotSizes(table, snapshotsWithSizes); assertEquals(3, count(table)); assertEquals(1024L, extractSnapshotSize(table, TableName.valueOf("t1"), "ss1")); assertEquals(4096L, extractSnapshotSize(table, 
TableName.valueOf("t2"), "ss2")); assertEquals(8192L, extractSnapshotSize(table, TableName.valueOf("t3"), "ss3")); }); } } @Test public void testSnapshotsFromTables() throws Exception { TableName tn1 = helper.createTableWithRegions(1); TableName tn2 = helper.createTableWithRegions(1); TableName tn3 = helper.createTableWithRegions(1); // Set a space quota on table 1 and 2 (but not 3) admin.setQuota(QuotaSettingsFactory.limitTableSpace( tn1, SpaceQuotaHelperForTests.ONE_GIGABYTE, SpaceViolationPolicy.NO_INSERTS)); admin.setQuota(QuotaSettingsFactory.limitTableSpace( tn2, SpaceQuotaHelperForTests.ONE_GIGABYTE, SpaceViolationPolicy.NO_INSERTS)); // Create snapshots on each table (we didn't write any data, so just skipflush) admin.snapshot(new SnapshotDescription(tn1 + "snapshot", tn1, SnapshotType.SKIPFLUSH)); admin.snapshot(new SnapshotDescription(tn2 + "snapshot", tn2, SnapshotType.SKIPFLUSH)); admin.snapshot(new SnapshotDescription(tn3 + "snapshot", tn3, SnapshotType.SKIPFLUSH)); Multimap<TableName,String> mapping = testChore.getSnapshotsToComputeSize(); assertEquals(2, mapping.size()); assertEquals(1, mapping.get(tn1).size()); assertEquals(tn1 + "snapshot", mapping.get(tn1).iterator().next()); assertEquals(1, mapping.get(tn2).size()); assertEquals(tn2 + "snapshot", mapping.get(tn2).iterator().next()); admin.snapshot(new SnapshotDescription(tn2 + "snapshot1", tn2, SnapshotType.SKIPFLUSH)); admin.snapshot(new SnapshotDescription(tn3 + "snapshot1", tn3, SnapshotType.SKIPFLUSH)); mapping = testChore.getSnapshotsToComputeSize(); assertEquals(3, mapping.size()); assertEquals(1, mapping.get(tn1).size()); assertEquals(tn1 + "snapshot", mapping.get(tn1).iterator().next()); assertEquals(2, mapping.get(tn2).size()); assertEquals( new HashSet<String>(Arrays.asList(tn2 + "snapshot", tn2 + "snapshot1")), mapping.get(tn2)); } @Test public void testSnapshotsFromNamespaces() throws Exception { NamespaceDescriptor ns = NamespaceDescriptor.create("snapshots_from_namespaces").build(); 
admin.createNamespace(ns); TableName tn1 = helper.createTableWithRegions(ns.getName(), 1); TableName tn2 = helper.createTableWithRegions(ns.getName(), 1); TableName tn3 = helper.createTableWithRegions(1); // Set a space quota on the namespace admin.setQuota(QuotaSettingsFactory.limitNamespaceSpace( ns.getName(), SpaceQuotaHelperForTests.ONE_GIGABYTE, SpaceViolationPolicy.NO_INSERTS)); // Create snapshots on each table (we didn't write any data, so just skipflush) admin.snapshot(new SnapshotDescription( tn1.getQualifierAsString() + "snapshot", tn1, SnapshotType.SKIPFLUSH)); admin.snapshot(new SnapshotDescription( tn2.getQualifierAsString() + "snapshot", tn2, SnapshotType.SKIPFLUSH)); admin.snapshot(new SnapshotDescription( tn3.getQualifierAsString() + "snapshot", tn3, SnapshotType.SKIPFLUSH)); Multimap<TableName,String> mapping = testChore.getSnapshotsToComputeSize(); assertEquals(2, mapping.size()); assertEquals(1, mapping.get(tn1).size()); assertEquals(tn1.getQualifierAsString() + "snapshot", mapping.get(tn1).iterator().next()); assertEquals(1, mapping.get(tn2).size()); assertEquals(tn2.getQualifierAsString() + "snapshot", mapping.get(tn2).iterator().next()); admin.snapshot(new SnapshotDescription( tn2.getQualifierAsString() + "snapshot1", tn2, SnapshotType.SKIPFLUSH)); admin.snapshot(new SnapshotDescription( tn3.getQualifierAsString() + "snapshot2", tn3, SnapshotType.SKIPFLUSH)); mapping = testChore.getSnapshotsToComputeSize(); assertEquals(3, mapping.size()); assertEquals(1, mapping.get(tn1).size()); assertEquals(tn1.getQualifierAsString() + "snapshot", mapping.get(tn1).iterator().next()); assertEquals(2, mapping.get(tn2).size()); assertEquals( new HashSet<String>(Arrays.asList(tn2.getQualifierAsString() + "snapshot", tn2.getQualifierAsString() + "snapshot1")), mapping.get(tn2)); } @Test public void testSnapshotSize() throws Exception { // Create a table and set a quota TableName tn1 = helper.createTableWithRegions(5); 
admin.setQuota(QuotaSettingsFactory.limitTableSpace( tn1, SpaceQuotaHelperForTests.ONE_GIGABYTE, SpaceViolationPolicy.NO_INSERTS)); // Write some data and flush it helper.writeData(tn1, 256L * SpaceQuotaHelperForTests.ONE_KILOBYTE); admin.flush(tn1); final AtomicReference<Long> lastSeenSize = new AtomicReference<>(); // Wait for the Master chore to run to see the usage (with a fudge factor) TEST_UTIL.waitFor(30_000, new SpaceQuotaSnapshotPredicate(conn, tn1) { @Override boolean evaluate(SpaceQuotaSnapshot snapshot) throws Exception { lastSeenSize.set(snapshot.getUsage()); return snapshot.getUsage() > 230L * SpaceQuotaHelperForTests.ONE_KILOBYTE; } }); // Create a snapshot on the table final String snapshotName = tn1 + "snapshot"; admin.snapshot(new SnapshotDescription(snapshotName, tn1, SnapshotType.SKIPFLUSH)); // Get the snapshots Multimap<TableName,String> snapshotsToCompute = testChore.getSnapshotsToComputeSize(); assertEquals( "Expected to see the single snapshot: " + snapshotsToCompute, 1, snapshotsToCompute.size()); // Get the size of our snapshot Multimap<TableName,SnapshotWithSize> snapshotsWithSize = testChore.computeSnapshotSizes( snapshotsToCompute); assertEquals(1, snapshotsWithSize.size()); SnapshotWithSize sws = Iterables.getOnlyElement(snapshotsWithSize.get(tn1)); assertEquals(snapshotName, sws.getName()); // The snapshot should take up no space since the table refers to it completely assertEquals(0, sws.getSize()); // Write some more data, flush it, and then major_compact the table helper.writeData(tn1, 256L * SpaceQuotaHelperForTests.ONE_KILOBYTE); admin.flush(tn1); TEST_UTIL.compact(tn1, true); // Test table should reflect it's original size since ingest was deterministic TEST_UTIL.waitFor(30_000, new SpaceQuotaSnapshotPredicate(conn, tn1) { @Override boolean evaluate(SpaceQuotaSnapshot snapshot) throws Exception { LOG.debug("Current usage=" + snapshot.getUsage() + " lastSeenSize=" + lastSeenSize.get()); return closeInSize( snapshot.getUsage(), 
lastSeenSize.get(), SpaceQuotaHelperForTests.ONE_KILOBYTE); } }); // Wait for no compacted files on the regions of our table TEST_UTIL.waitFor(30_000, new NoFilesToDischarge(TEST_UTIL.getMiniHBaseCluster(), tn1)); // Still should see only one snapshot snapshotsToCompute = testChore.getSnapshotsToComputeSize(); assertEquals( "Expected to see the single snapshot: " + snapshotsToCompute, 1, snapshotsToCompute.size()); snapshotsWithSize = testChore.computeSnapshotSizes( snapshotsToCompute); assertEquals(1, snapshotsWithSize.size()); sws = Iterables.getOnlyElement(snapshotsWithSize.get(tn1)); assertEquals(snapshotName, sws.getName()); // The snapshot should take up the size the table originally took up assertEquals(lastSeenSize.get().longValue(), sws.getSize()); } @Test public void testPersistingSnapshotsForNamespaces() throws Exception { Multimap<TableName,SnapshotWithSize> snapshotsWithSizes = HashMultimap.create(); TableName tn1 = TableName.valueOf("ns1:tn1"); TableName tn2 = TableName.valueOf("ns1:tn2"); TableName tn3 = TableName.valueOf("ns2:tn1"); TableName tn4 = TableName.valueOf("ns2:tn2"); TableName tn5 = TableName.valueOf("tn1"); snapshotsWithSizes.put(tn1, new SnapshotWithSize("", 1024L)); snapshotsWithSizes.put(tn2, new SnapshotWithSize("", 1024L)); snapshotsWithSizes.put(tn3, new SnapshotWithSize("", 512L)); snapshotsWithSizes.put(tn4, new SnapshotWithSize("", 1024L)); snapshotsWithSizes.put(tn5, new SnapshotWithSize("", 3072L)); Map<String,Long> nsSizes = testChore.groupSnapshotSizesByNamespace(snapshotsWithSizes); assertEquals(3, nsSizes.size()); assertEquals(2048L, (long) nsSizes.get("ns1")); assertEquals(1536L, (long) nsSizes.get("ns2")); assertEquals(3072L, (long) nsSizes.get(NamespaceDescriptor.DEFAULT_NAMESPACE_NAME_STR)); } private long count(Table t) throws IOException { try (ResultScanner rs = t.getScanner(new Scan())) { long sum = 0; for (Result r : rs) { while (r.advance()) { sum++; } } return sum; } } private long extractSnapshotSize( Table 
quotaTable, TableName tn, String snapshot) throws IOException { Get g = QuotaTableUtil.makeGetForSnapshotSize(tn, snapshot); Result r = quotaTable.get(g); assertNotNull(r); CellScanner cs = r.cellScanner(); cs.advance(); Cell c = cs.current(); assertNotNull(c); return QuotaTableUtil.extractSnapshotSize( c.getValueArray(), c.getValueOffset(), c.getValueLength()); } private void verify(Table t, IOThrowingRunnable test) throws IOException { admin.disableTable(t.getName()); admin.truncateTable(t.getName(), false); test.run(); } @FunctionalInterface private interface IOThrowingRunnable { void run() throws IOException; } /** * Computes if {@code size2} is within {@code delta} of {@code size1}, inclusive. */ boolean closeInSize(long size1, long size2, long delta) { long lower = size1 - delta; long upper = size1 + delta; return lower <= size2 && size2 <= upper; } }
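testPersistingSnapshotsForNamespaces exercises grouping snapshot sizes by the namespace prefix of the table name ("ns:qualifier"), with unqualified names falling into the default namespace. A standalone C++ sketch of that grouping step; the function and constant names are illustrative, and HBase's real implementation lives in SnapshotQuotaObserverChore:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative default-namespace label (HBase uses
// NamespaceDescriptor.DEFAULT_NAMESPACE_NAME_STR).
const std::string kDefaultNamespace = "default";

// Sum snapshot sizes per namespace: a table name "ns:qualifier" belongs
// to namespace "ns"; a name without ':' belongs to the default namespace.
std::map<std::string, long> GroupSizesByNamespace(
    const std::vector<std::pair<std::string, long>>& snapshot_sizes) {
    std::map<std::string, long> ns_sizes;
    for (const auto& entry : snapshot_sizes) {
        const std::string& table = entry.first;
        const auto colon = table.find(':');
        const std::string ns = colon == std::string::npos
                                   ? kDefaultNamespace
                                   : table.substr(0, colon);
        ns_sizes[ns] += entry.second;
    }
    return ns_sizes;
}
```

Feeding in the same sizes as the test (ns1: 1024+1024, ns2: 512+1024, bare tn1: 3072) reproduces its expected totals of 2048, 1536, and 3072.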
gustavoanatoly/hbase
hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestSnapshotQuotaObserverChore.java
Java
apache-2.0
15,981
[ 30522, 1013, 1008, 1008, 7000, 2000, 1996, 15895, 4007, 3192, 1006, 2004, 2546, 1007, 2104, 2028, 2030, 2062, 1008, 12130, 6105, 10540, 1012, 2156, 1996, 5060, 5371, 5500, 2007, 1008, 2023, 2147, 2005, 3176, 2592, 4953, 9385, 6095, 1012, ...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
// <auto-generated> // Copyright (c) Microsoft Corporation. All rights reserved. // Licensed under the MIT License. See License.txt in the project root for // license information. // // Code generated by Microsoft (R) AutoRest Code Generator. // Changes may cause incorrect behavior and will be lost if the code is // regenerated. // </auto-generated> namespace Microsoft.Azure.Management.CosmosDB { using Microsoft.Rest; using Microsoft.Rest.Azure; using Models; using Newtonsoft.Json; using System.Collections; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Threading; using System.Threading.Tasks; /// <summary> /// PercentileSourceTargetOperations operations. /// </summary> internal partial class PercentileSourceTargetOperations : IServiceOperations<CosmosDBManagementClient>, IPercentileSourceTargetOperations { /// <summary> /// Initializes a new instance of the PercentileSourceTargetOperations class. /// </summary> /// <param name='client'> /// Reference to the service client. /// </param> /// <exception cref="System.ArgumentNullException"> /// Thrown when a required parameter is null /// </exception> internal PercentileSourceTargetOperations(CosmosDBManagementClient client) { if (client == null) { throw new System.ArgumentNullException("client"); } Client = client; } /// <summary> /// Gets a reference to the CosmosDBManagementClient /// </summary> public CosmosDBManagementClient Client { get; private set; } /// <summary> /// Retrieves the metrics determined by the given filter for the given account, /// source and target region. This url is only for PBS and Replication Latency /// data /// </summary> /// <param name='resourceGroupName'> /// The name of the resource group. The name is case insensitive. /// </param> /// <param name='accountName'> /// Cosmos DB database account name. /// </param> /// <param name='sourceRegion'> /// Source region from which data is written. 
Cosmos DB region, with spaces /// between words and each word capitalized. /// </param> /// <param name='targetRegion'> /// Target region to which data is written. Cosmos DB region, with spaces /// between words and each word capitalized. /// </param> /// <param name='filter'> /// An OData filter expression that describes a subset of metrics to return. /// The parameters that can be filtered are name.value (name of the metric, can /// have an or of multiple names), startTime, endTime, and timeGrain. The /// supported operator is eq. /// </param> /// <param name='customHeaders'> /// Headers that will be added to request. /// </param> /// <param name='cancellationToken'> /// The cancellation token. /// </param> /// <exception cref="CloudException"> /// Thrown when the operation returned an invalid status code /// </exception> /// <exception cref="SerializationException"> /// Thrown when unable to deserialize the response /// </exception> /// <exception cref="ValidationException"> /// Thrown when a required parameter is null /// </exception> /// <exception cref="System.ArgumentNullException"> /// Thrown when a required parameter is null /// </exception> /// <return> /// A response object containing the response body and response headers. 
/// </return> public async Task<AzureOperationResponse<IEnumerable<PercentileMetric>>> ListMetricsWithHttpMessagesAsync(string resourceGroupName, string accountName, string sourceRegion, string targetRegion, string filter, Dictionary<string, List<string>> customHeaders = null, CancellationToken cancellationToken = default(CancellationToken)) { if (Client.SubscriptionId == null) { throw new ValidationException(ValidationRules.CannotBeNull, "this.Client.SubscriptionId"); } if (Client.SubscriptionId != null) { if (Client.SubscriptionId.Length < 1) { throw new ValidationException(ValidationRules.MinLength, "Client.SubscriptionId", 1); } } if (resourceGroupName == null) { throw new ValidationException(ValidationRules.CannotBeNull, "resourceGroupName"); } if (resourceGroupName != null) { if (resourceGroupName.Length > 90) { throw new ValidationException(ValidationRules.MaxLength, "resourceGroupName", 90); } if (resourceGroupName.Length < 1) { throw new ValidationException(ValidationRules.MinLength, "resourceGroupName", 1); } } if (accountName == null) { throw new ValidationException(ValidationRules.CannotBeNull, "accountName"); } if (accountName != null) { if (accountName.Length > 50) { throw new ValidationException(ValidationRules.MaxLength, "accountName", 50); } if (accountName.Length < 3) { throw new ValidationException(ValidationRules.MinLength, "accountName", 3); } if (!System.Text.RegularExpressions.Regex.IsMatch(accountName, "^[a-z0-9]+(-[a-z0-9]+)*")) { throw new ValidationException(ValidationRules.Pattern, "accountName", "^[a-z0-9]+(-[a-z0-9]+)*"); } } if (sourceRegion == null) { throw new ValidationException(ValidationRules.CannotBeNull, "sourceRegion"); } if (targetRegion == null) { throw new ValidationException(ValidationRules.CannotBeNull, "targetRegion"); } if (Client.ApiVersion == null) { throw new ValidationException(ValidationRules.CannotBeNull, "this.Client.ApiVersion"); } if (Client.ApiVersion != null) { if (Client.ApiVersion.Length < 1) { throw new 
ValidationException(ValidationRules.MinLength, "Client.ApiVersion", 1); } } if (filter == null) { throw new ValidationException(ValidationRules.CannotBeNull, "filter"); } // Tracing bool _shouldTrace = ServiceClientTracing.IsEnabled; string _invocationId = null; if (_shouldTrace) { _invocationId = ServiceClientTracing.NextInvocationId.ToString(); Dictionary<string, object> tracingParameters = new Dictionary<string, object>(); tracingParameters.Add("resourceGroupName", resourceGroupName); tracingParameters.Add("accountName", accountName); tracingParameters.Add("sourceRegion", sourceRegion); tracingParameters.Add("targetRegion", targetRegion); tracingParameters.Add("filter", filter); tracingParameters.Add("cancellationToken", cancellationToken); ServiceClientTracing.Enter(_invocationId, this, "ListMetrics", tracingParameters); } // Construct URL var _baseUrl = Client.BaseUri.AbsoluteUri; var _url = new System.Uri(new System.Uri(_baseUrl + (_baseUrl.EndsWith("/") ? "" : "/")), "subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/sourceRegion/{sourceRegion}/targetRegion/{targetRegion}/percentile/metrics").ToString(); _url = _url.Replace("{subscriptionId}", System.Uri.EscapeDataString(Client.SubscriptionId)); _url = _url.Replace("{resourceGroupName}", System.Uri.EscapeDataString(resourceGroupName)); _url = _url.Replace("{accountName}", System.Uri.EscapeDataString(accountName)); _url = _url.Replace("{sourceRegion}", System.Uri.EscapeDataString(sourceRegion)); _url = _url.Replace("{targetRegion}", System.Uri.EscapeDataString(targetRegion)); List<string> _queryParameters = new List<string>(); if (Client.ApiVersion != null) { _queryParameters.Add(string.Format("api-version={0}", System.Uri.EscapeDataString(Client.ApiVersion))); } if (filter != null) { _queryParameters.Add(string.Format("$filter={0}", System.Uri.EscapeDataString(filter))); } if (_queryParameters.Count > 0) { _url += 
(_url.Contains("?") ? "&" : "?") + string.Join("&", _queryParameters); } // Create HTTP transport objects var _httpRequest = new HttpRequestMessage(); HttpResponseMessage _httpResponse = null; _httpRequest.Method = new HttpMethod("GET"); _httpRequest.RequestUri = new System.Uri(_url); // Set Headers if (Client.GenerateClientRequestId != null && Client.GenerateClientRequestId.Value) { _httpRequest.Headers.TryAddWithoutValidation("x-ms-client-request-id", System.Guid.NewGuid().ToString()); } if (Client.AcceptLanguage != null) { if (_httpRequest.Headers.Contains("accept-language")) { _httpRequest.Headers.Remove("accept-language"); } _httpRequest.Headers.TryAddWithoutValidation("accept-language", Client.AcceptLanguage); } if (customHeaders != null) { foreach(var _header in customHeaders) { if (_httpRequest.Headers.Contains(_header.Key)) { _httpRequest.Headers.Remove(_header.Key); } _httpRequest.Headers.TryAddWithoutValidation(_header.Key, _header.Value); } } // Serialize Request string _requestContent = null; // Set Credentials if (Client.Credentials != null) { cancellationToken.ThrowIfCancellationRequested(); await Client.Credentials.ProcessHttpRequestAsync(_httpRequest, cancellationToken).ConfigureAwait(false); } // Send Request if (_shouldTrace) { ServiceClientTracing.SendRequest(_invocationId, _httpRequest); } cancellationToken.ThrowIfCancellationRequested(); _httpResponse = await Client.HttpClient.SendAsync(_httpRequest, cancellationToken).ConfigureAwait(false); if (_shouldTrace) { ServiceClientTracing.ReceiveResponse(_invocationId, _httpResponse); } HttpStatusCode _statusCode = _httpResponse.StatusCode; cancellationToken.ThrowIfCancellationRequested(); string _responseContent = null; if ((int)_statusCode != 200) { var ex = new CloudException(string.Format("Operation returned an invalid status code '{0}'", _statusCode)); try { _responseContent = await _httpResponse.Content.ReadAsStringAsync().ConfigureAwait(false); CloudError _errorBody = 
Rest.Serialization.SafeJsonConvert.DeserializeObject<CloudError>(_responseContent, Client.DeserializationSettings); if (_errorBody != null) { ex = new CloudException(_errorBody.Message); ex.Body = _errorBody; } } catch (JsonException) { // Ignore the exception } ex.Request = new HttpRequestMessageWrapper(_httpRequest, _requestContent); ex.Response = new HttpResponseMessageWrapper(_httpResponse, _responseContent); if (_httpResponse.Headers.Contains("x-ms-request-id")) { ex.RequestId = _httpResponse.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } if (_shouldTrace) { ServiceClientTracing.Error(_invocationId, ex); } _httpRequest.Dispose(); if (_httpResponse != null) { _httpResponse.Dispose(); } throw ex; } // Create Result var _result = new AzureOperationResponse<IEnumerable<PercentileMetric>>(); _result.Request = _httpRequest; _result.Response = _httpResponse; if (_httpResponse.Headers.Contains("x-ms-request-id")) { _result.RequestId = _httpResponse.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } // Deserialize Response if ((int)_statusCode == 200) { _responseContent = await _httpResponse.Content.ReadAsStringAsync().ConfigureAwait(false); try { _result.Body = Rest.Serialization.SafeJsonConvert.DeserializeObject<Page<PercentileMetric>>(_responseContent, Client.DeserializationSettings); } catch (JsonException ex) { _httpRequest.Dispose(); if (_httpResponse != null) { _httpResponse.Dispose(); } throw new SerializationException("Unable to deserialize the response.", _responseContent, ex); } } if (_shouldTrace) { ServiceClientTracing.Exit(_invocationId, _result); } return _result; } } }
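The generated client above follows a fixed pattern: validate each parameter against its Swagger constraints, substitute percent-encoded values into a URL template, then append `api-version` and `$filter` as query parameters. A minimal Python sketch of just the URL-construction step (the function name and parameter list are illustrative, not part of the SDK):

```python
from urllib.parse import quote

def build_percentile_metrics_url(base_url, subscription_id, resource_group,
                                 account_name, source_region, target_region,
                                 api_version, filter_expr):
    # Hypothetical helper mirroring the generated code: each path segment is
    # percent-encoded (so spaces in region names become %20) before it is
    # substituted into the route template.
    path = ("subscriptions/{}/resourceGroups/{}/providers/Microsoft.DocumentDB/"
            "databaseAccounts/{}/sourceRegion/{}/targetRegion/{}/percentile/metrics"
            ).format(*(quote(s, safe="") for s in (
                subscription_id, resource_group, account_name,
                source_region, target_region)))
    query = []
    if api_version is not None:
        query.append("api-version=" + quote(api_version, safe=""))
    if filter_expr is not None:
        query.append("$filter=" + quote(filter_expr, safe=""))
    url = base_url.rstrip("/") + "/" + path
    if query:
        url += "?" + "&".join(query)
    return url
```

As in the generated code, query parameters are only appended when present, and the base URL's trailing slash is normalized before the path is joined.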
Azure/azure-sdk-for-net
sdk/cosmosdb/Microsoft.Azure.Management.CosmosDB/src/Generated/PercentileSourceTargetOperations.cs
C#
mit
14,682
# Copyright (C) 2001-2017 Nominum, Inc. # # Permission to use, copy, modify, and distribute this software and its # documentation for any purpose with or without fee is hereby granted, # provided that the above copyright notice and this permission notice # appear in all copies. # # THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES # WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF # MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR # ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES # WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN # ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT # OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. """DNS rdatasets (an rdataset is a set of rdatas of a given type and class)""" import random from io import StringIO import struct import dns.exception import dns.rdatatype import dns.rdataclass import dns.rdata import dns.set from ._compat import string_types # define SimpleSet here for backwards compatibility SimpleSet = dns.set.Set class DifferingCovers(dns.exception.DNSException): """An attempt was made to add a DNS SIG/RRSIG whose covered type is not the same as that of the other rdatas in the rdataset.""" class IncompatibleTypes(dns.exception.DNSException): """An attempt was made to add DNS RR data of an incompatible type.""" class Rdataset(dns.set.Set): """A DNS rdataset.""" __slots__ = ['rdclass', 'rdtype', 'covers', 'ttl'] def __init__(self, rdclass, rdtype, covers=dns.rdatatype.NONE, ttl=0): """Create a new rdataset of the specified class and type. *rdclass*, an ``int``, the rdataclass. *rdtype*, an ``int``, the rdatatype. *covers*, an ``int``, the covered rdatatype. *ttl*, an ``int``, the TTL. 
""" super(Rdataset, self).__init__() self.rdclass = rdclass self.rdtype = rdtype self.covers = covers self.ttl = ttl def _clone(self): obj = super(Rdataset, self)._clone() obj.rdclass = self.rdclass obj.rdtype = self.rdtype obj.covers = self.covers obj.ttl = self.ttl return obj def update_ttl(self, ttl): """Perform TTL minimization. Set the TTL of the rdataset to be the lesser of the set's current TTL or the specified TTL. If the set contains no rdatas, set the TTL to the specified TTL. *ttl*, an ``int``. """ if len(self) == 0: self.ttl = ttl elif ttl < self.ttl: self.ttl = ttl def add(self, rd, ttl=None): """Add the specified rdata to the rdataset. If the optional *ttl* parameter is supplied, then ``self.update_ttl(ttl)`` will be called prior to adding the rdata. *rd*, a ``dns.rdata.Rdata``, the rdata *ttl*, an ``int``, the TTL. Raises ``dns.rdataset.IncompatibleTypes`` if the type and class do not match the type and class of the rdataset. Raises ``dns.rdataset.DifferingCovers`` if the type is a signature type and the covered type does not match that of the rdataset. """ # # If we're adding a signature, do some special handling to # check that the signature covers the same type as the # other rdatas in this rdataset. If this is the first rdata # in the set, initialize the covers field. 
# if self.rdclass != rd.rdclass or self.rdtype != rd.rdtype: raise IncompatibleTypes if ttl is not None: self.update_ttl(ttl) if self.rdtype == dns.rdatatype.RRSIG or \ self.rdtype == dns.rdatatype.SIG: covers = rd.covers() if len(self) == 0 and self.covers == dns.rdatatype.NONE: self.covers = covers elif self.covers != covers: raise DifferingCovers if dns.rdatatype.is_singleton(rd.rdtype) and len(self) > 0: self.clear() super(Rdataset, self).add(rd) def union_update(self, other): self.update_ttl(other.ttl) super(Rdataset, self).union_update(other) def intersection_update(self, other): self.update_ttl(other.ttl) super(Rdataset, self).intersection_update(other) def update(self, other): """Add all rdatas in other to self. *other*, a ``dns.rdataset.Rdataset``, the rdataset from which to update. """ self.update_ttl(other.ttl) super(Rdataset, self).update(other) def __repr__(self): if self.covers == 0: ctext = '' else: ctext = '(' + dns.rdatatype.to_text(self.covers) + ')' return '<DNS ' + dns.rdataclass.to_text(self.rdclass) + ' ' + \ dns.rdatatype.to_text(self.rdtype) + ctext + ' rdataset>' def __str__(self): return self.to_text() def __eq__(self, other): if not isinstance(other, Rdataset): return False if self.rdclass != other.rdclass or \ self.rdtype != other.rdtype or \ self.covers != other.covers: return False return super(Rdataset, self).__eq__(other) def __ne__(self, other): return not self.__eq__(other) def to_text(self, name=None, origin=None, relativize=True, override_rdclass=None, **kw): """Convert the rdataset into DNS master file format. See ``dns.name.Name.choose_relativity`` for more information on how *origin* and *relativize* determine the way names are emitted. Any additional keyword arguments are passed on to the rdata ``to_text()`` method. *name*, a ``dns.name.Name``. If name is not ``None``, emit RRs with *name* as the owner name. *origin*, a ``dns.name.Name`` or ``None``, the origin for relative names. *relativize*, a ``bool``. 
If ``True``, names will be relativized to *origin*. """ if name is not None: name = name.choose_relativity(origin, relativize) ntext = str(name) pad = ' ' else: ntext = '' pad = '' s = StringIO() if override_rdclass is not None: rdclass = override_rdclass else: rdclass = self.rdclass if len(self) == 0: # # Empty rdatasets are used for the question section, and in # some dynamic updates, so we don't need to print out the TTL # (which is meaningless anyway). # s.write(u'%s%s%s %s\n' % (ntext, pad, dns.rdataclass.to_text(rdclass), dns.rdatatype.to_text(self.rdtype))) else: for rd in self: s.write(u'%s%s%d %s %s %s\n' % (ntext, pad, self.ttl, dns.rdataclass.to_text(rdclass), dns.rdatatype.to_text(self.rdtype), rd.to_text(origin=origin, relativize=relativize, **kw))) # # We strip off the final \n for the caller's convenience in printing # return s.getvalue()[:-1] def to_wire(self, name, file, compress=None, origin=None, override_rdclass=None, want_shuffle=True): """Convert the rdataset to wire format. *name*, a ``dns.name.Name`` is the owner name to use. *file* is the file where the name is emitted (typically a BytesIO file). *compress*, a ``dict``, is the compression table to use. If ``None`` (the default), names will not be compressed. *origin* is a ``dns.name.Name`` or ``None``. If the name is relative and origin is not ``None``, then *origin* will be appended to it. *override_rdclass*, an ``int``, is used as the class instead of the class of the rdataset. This is useful when rendering rdatasets associated with dynamic updates. *want_shuffle*, a ``bool``. If ``True``, then the order of the Rdatas within the Rdataset will be shuffled before rendering. Returns an ``int``, the number of records emitted. 
""" if override_rdclass is not None: rdclass = override_rdclass want_shuffle = False else: rdclass = self.rdclass file.seek(0, 2) if len(self) == 0: name.to_wire(file, compress, origin) stuff = struct.pack("!HHIH", self.rdtype, rdclass, 0, 0) file.write(stuff) return 1 else: if want_shuffle: l = list(self) random.shuffle(l) else: l = self for rd in l: name.to_wire(file, compress, origin) stuff = struct.pack("!HHIH", self.rdtype, rdclass, self.ttl, 0) file.write(stuff) start = file.tell() rd.to_wire(file, compress, origin) end = file.tell() assert end - start < 65536 file.seek(start - 2) stuff = struct.pack("!H", end - start) file.write(stuff) file.seek(0, 2) return len(self) def match(self, rdclass, rdtype, covers): """Returns ``True`` if this rdataset matches the specified class, type, and covers. """ if self.rdclass == rdclass and \ self.rdtype == rdtype and \ self.covers == covers: return True return False def from_text_list(rdclass, rdtype, ttl, text_rdatas): """Create an rdataset with the specified class, type, and TTL, and with the specified list of rdatas in text format. Returns a ``dns.rdataset.Rdataset`` object. """ if isinstance(rdclass, string_types): rdclass = dns.rdataclass.from_text(rdclass) if isinstance(rdtype, string_types): rdtype = dns.rdatatype.from_text(rdtype) r = Rdataset(rdclass, rdtype) r.update_ttl(ttl) for t in text_rdatas: rd = dns.rdata.from_text(r.rdclass, r.rdtype, t) r.add(rd) return r def from_text(rdclass, rdtype, ttl, *text_rdatas): """Create an rdataset with the specified class, type, and TTL, and with the specified rdatas in text format. Returns a ``dns.rdataset.Rdataset`` object. """ return from_text_list(rdclass, rdtype, ttl, text_rdatas) def from_rdata_list(ttl, rdatas): """Create an rdataset with the specified TTL, and with the specified list of rdata objects. Returns a ``dns.rdataset.Rdataset`` object. 
""" if len(rdatas) == 0: raise ValueError("rdata list must not be empty") r = None for rd in rdatas: if r is None: r = Rdataset(rd.rdclass, rd.rdtype) r.update_ttl(ttl) r.add(rd) return r def from_rdata(ttl, *rdatas): """Create an rdataset with the specified TTL, and with the specified rdata objects. Returns a ``dns.rdataset.Rdataset`` object. """ return from_rdata_list(ttl, rdatas)
pbaesse/Sissens
lib/python2.7/site-packages/eventlet/support/dns/rdataset.py
Python
gpl-3.0
11,374
#ifndef __LINUX_COMPILER_H #error "Please don't include <linux/compiler-gcc4.h> directly, include <linux/compiler.h> instead." #endif /* GCC 4.1.[01] miscompiles __weak */ #ifdef __KERNEL__ # if __GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL__ <= 1 # error Your version of gcc miscompiles the __weak directive # endif #endif #define __used __attribute__((__used__)) #define __must_check __attribute__((warn_unused_result)) #define __compiler_offsetof(a,b) __builtin_offsetof(a,b) #if __GNUC_MINOR__ >= 3 /* Mark functions as cold. gcc will assume any path leading to a call to them will be unlikely. This means a lot of manual unlikely()s are unnecessary now for any paths leading to the usual suspects like BUG(), printk(), panic() etc. [but let's keep them for now for older compilers] Early snapshots of gcc 4.3 don't support this and we can't detect this in the preprocessor, but we can live with this because they're unreleased. Maketime probing would be overkill here. gcc also has a __attribute__((__hot__)) to move hot functions into a special section, but I don't see any sense in this right now in the kernel context */ #define __cold __attribute__((__cold__)) /* * GCC 'asm goto' miscompiles certain code sequences: * * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670 * * Work it around via a compiler barrier quirk suggested by Jakub Jelinek. * Fixed in GCC 4.8.2 and later versions. * * (asm goto is automatically volatile - the naming reflects this.) */ #if GCC_VERSION <= 40801 # define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0) #else # define asm_volatile_goto(x...) do { asm goto(x); } while (0) #endif #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__) #if __GNUC_MINOR__ >= 5 /* * Mark a position in code as unreachable. This can be used to * suppress control flow warnings after asm blocks that transfer * control elsewhere. 
* * Early snapshots of gcc 4.5 don't support this and we can't detect * this in the preprocessor, but we can live with this because they're * unreleased. Really, we need to have autoconf for the kernel. */ #define unreachable() __builtin_unreachable() /* Mark a function definition as prohibited from being cloned. */ #define __noclone __attribute__((__noclone__)) #endif #endif #if __GNUC_MINOR__ >= 6 /* * Tell the optimizer that something else uses this function or variable. */ #define __visible __attribute__((externally_visible)) #endif #if __GNUC_MINOR__ > 0 #define __compiletime_object_size(obj) __builtin_object_size(obj, 0) #endif #if __GNUC_MINOR__ >= 4 && !defined(__CHECKER__) #define __compiletime_warning(message) __attribute__((warning(message))) #define __compiletime_error(message) __attribute__((error(message))) #endif
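The header above gates each attribute macro on `__GNUC_MINOR__`, so the set of available annotations grows with the GCC 4.x minor version. A small Python sketch of that version gating (the function is illustrative; the thresholds come from the `#if` checks in the header):

```python
def gcc4_macros(minor):
    # Which optional macros compiler-gcc4.h defines at a given GCC 4.x
    # minor version, per the #if __GNUC_MINOR__ checks above.
    macros = {"__used", "__must_check", "__compiler_offsetof"}
    if minor >= 3:
        macros.add("__cold")
    if minor >= 4:
        # Also requires !defined(__CHECKER__), i.e. not building under sparse.
        macros.update({"__compiletime_warning", "__compiletime_error"})
    if minor >= 5:
        macros.update({"unreachable", "__noclone"})
    if minor >= 6:
        macros.add("__visible")
    return macros
```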
Minia89/MetallizedKernelRebased
include/linux/compiler-gcc4.h
C
gpl-2.0
2,805
<?php

namespace Wikibase\Test;

use Wikibase\SqlIdGenerator;

/**
 * @covers Wikibase\SqlIdGenerator
 *
 * @group Wikibase
 * @group WikibaseStore
 * @group WikibaseRepo
 * @group Database
 *
 * @group medium
 *
 * @licence GNU GPL v2+
 * @author Katie Filbert < aude.wiki@gmail.com >
 */
class SqlIdGeneratorTest extends \MediaWikiTestCase {

	public function testGetNewId() {
		$generator = new SqlIdGenerator( wfGetLB() );
		$id = $generator->getNewId( 'wikibase-kittens' );
		$this->assertSame( 1, $id );
	}

	public function testIdBlacklisting() {
		$generator = new SqlIdGenerator( wfGetLB(), array( 1, 2 ) );
		$id = $generator->getNewId( 'wikibase-blacklist' );
		$this->assertSame( 3, $id );
	}

}
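The two tests encode the generator's contract: ids are allocated sequentially starting at 1, and blacklisted values are skipped over. A language-agnostic sketch of that allocation rule (hypothetical helper, not the MediaWiki implementation):

```python
def next_id(current_max, blacklist=frozenset()):
    # Return the next id above current_max, skipping any blacklisted values,
    # matching the behaviour the PHPUnit tests above assert.
    candidate = current_max + 1
    while candidate in blacklist:
        candidate += 1
    return candidate
```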
JeroenDeDauw/mediawiki-extensions-Wikibase
repo/tests/phpunit/includes/store/sql/SqlIdGeneratorTest.php
PHP
gpl-2.0
710
<?php /** * @version SEBLOD 3.x Core ~ $Id: new.php sebastienheraud $ * @package SEBLOD (App Builder & CCK) // SEBLOD nano (Form Builder) * @url http://www.seblod.com * @editor Octopoos - www.octopoos.com * @copyright Copyright (C) 2009 - 2016 SEBLOD. All Rights Reserved. * @license GNU General Public License version 2 or later; see _LICENSE.php **/ defined( '_JEXEC' ) or die; $elem = JText::_( 'COM_CCK_'._C4_TEXT ); Helper_Include::addDependencies( $this->getName(), $this->getLayout() ); $options = array(); $options[] = JHtml::_( 'select.option', 0, '- '.JText::_( 'COM_CCK_NONE' ).' -', 'value', 'text' ); $options2 = JCckDatabase::loadObjectList( 'SELECT a.title AS text, a.name AS value FROM #__cck_core_types AS a WHERE a.published = 1 ORDER BY a.title' ); if ( count( $options2 ) ) { $options[] = JHtml::_( 'select.option', '<OPTGROUP>', JText::_( 'COM_CCK_CONTENT_TYPES' ) ); $options = array_merge( $options, $options2 ); $options[] = JHtml::_( 'select.option', '</OPTGROUP>', '' ); } $template = Helper_Admin::getDefaultTemplate(); $lists['featured'] = JHtml::_( 'select.genericlist', $options, 'featured', 'class="inputbox" size="1"', 'value', 'text', '', 'featured' ); $doc = JFactory::getDocument(); $js = ' (function ($){ JCck.Dev = { submit: function() { var content_type = $("#featured").val(); var tpl_s = $("#tpl_search").val(); var tpl_f = $("#tpl_filter").val(); var tpl_l = ""; var tpl_i = $("#tpl_item").val(); var url = "index.php?option=com_cck&task=search.add&content_type="+content_type+"&tpl_s="+tpl_s+"&tpl_f="+tpl_f+"&tpl_l="+tpl_l+"&tpl_i="+tpl_i; top.location.href = url; return false; } } })(jQuery); '; $doc->addScriptDeclaration( $js ); ?> <form action="<?php echo JRoute::_( 'index.php' ); ?>" method="post" id="adminForm" name="adminForm"> <div class="seblod"> <div class="legend top center" style="font-size: 42px; font-style:italic;"> <?php echo JText::_( 'JTOOLBAR_NEW' ) .' '. 
$elem; ?> </div> <div class="legend top center" style="margin-top: 10px; font-style:italic;"> <?php echo JText::sprintf( 'COM_CCK_SEARCH_SPLASH_DESC', $elem ); ?> </div> <div style="text-align: center; margin-top: 30px;"> <ul class="adminformlist"> <li><label><?php echo JText::_( 'COM_CCK_'._C2_TEXT ); ?></label><?php echo $lists['featured']; ?></li> <li><label></label><button type="button" class="inputbutton" onclick="JCck.Dev.submit();"><?php echo JText::_( 'COM_CCK_CREATE' ) .' '. $elem; ?></button></li> </ul> </div> </div> <div class="clr"></div> <input type="hidden" id="tpl_search" name="tpl_search" value="<?php echo $template; ?>" /> <input type="hidden" id="tpl_filter" name="tpl_filter" value="<?php echo $template; ?>" /> <input type="hidden" id="tpl_list" name="tpl_list" value="" /> <input type="hidden" id="tpl_item" name="tpl_item" value="<?php echo $template; ?>" /> <input type="hidden" id="task" name="task" value="" /> <?php echo JHtml::_('form.token'); ?> </form> <?php Helper_Display::quickCopyright(); ?>
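The inline `JCck.Dev.submit()` handler above simply concatenates the field values into a redirect URL; note that it does not URL-encode them. The equivalent string construction, as a sketch (function name is illustrative):

```python
def build_search_add_url(content_type, tpl_s, tpl_f, tpl_i, tpl_l=""):
    # Mirrors the JavaScript concatenation in the script block above; values
    # are inserted verbatim, without URL-encoding, just as in the original.
    return ("index.php?option=com_cck&task=search.add"
            "&content_type={0}&tpl_s={1}&tpl_f={2}&tpl_l={3}&tpl_i={4}"
            ).format(content_type, tpl_s, tpl_f, tpl_l, tpl_i)
```

`tpl_l` is always empty in the original handler (`var tpl_l = "";`), which is why the sketch defaults it to the empty string.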
Silasfelipegarcia/pdvoluntario
administrator/components/com_cck/views/search/tmpl/new.php
PHP
gpl-2.0
3,114
package appeng.api.parts.layers;

import ic2.api.energy.tile.IEnergySink;
import net.minecraft.tileentity.TileEntity;
import net.minecraftforge.common.ForgeDirection;
import appeng.api.parts.IBusPart;
import appeng.api.parts.LayerBase;

public class LayerIEnergySink extends LayerBase implements IEnergySink
{

	@Override
	public boolean acceptsEnergyFrom(TileEntity emitter, ForgeDirection direction)
	{
		IBusPart part = getPart( direction );
		if ( part instanceof IEnergySink )
			return ((IEnergySink) part).acceptsEnergyFrom( emitter, direction );
		return false;
	}

	@Override
	public double demandedEnergyUnits()
	{
		// this is a flawed implementation, that requires a change to the IC2 API.
		double maxRequired = 0;
		for (ForgeDirection dir : ForgeDirection.VALID_DIRECTIONS)
		{
			IBusPart part = getPart( dir );
			if ( part instanceof IEnergySink )
			{
				// use lower number cause ic2 deletes power it sends that isn't recieved.
				maxRequired = Math.min( maxRequired, ((IEnergySink) part).demandedEnergyUnits() );
			}
		}
		return maxRequired;
	}

	@Override
	public double injectEnergyUnits(ForgeDirection directionFrom, double amount)
	{
		IBusPart part = getPart( directionFrom );
		if ( part instanceof IEnergySink )
			return ((IEnergySink) part).injectEnergyUnits( directionFrom, amount );
		return amount;
	}

	@Override
	public int getMaxSafeInput()
	{
		return Integer.MAX_VALUE; // no real options here...
	}
}
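The source's own comment flags `demandedEnergyUnits()` as flawed: the accumulator starts at `0` and is combined with `Math.min`, so the method can never report a positive demand. A sketch of the aggregation the "use lower number" comment appears to intend, taking the minimum over the sinks that are actually present (helper name is made up):

```python
def min_demand(demands):
    # Take the lowest demand among attached sinks, ignoring empty sides
    # (None); return 0.0 when no sink is present at all. Unlike the Java
    # method above, the accumulator is not pre-seeded with 0.
    present = [d for d in demands if d is not None]
    return min(present) if present else 0.0
```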
Gamoholic/Applied-Energistics-2-API
parts/layers/LayerIEnergySink.java
Java
mit
1,448
/* =========================================================================== Copyright (C) 1999-2010 id Software LLC, a ZeniMax Media company. This file is part of Spearmint Source Code. Spearmint Source Code is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version. Spearmint Source Code is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with Spearmint Source Code. If not, see <http://www.gnu.org/licenses/>. In addition, Spearmint Source Code is also subject to certain additional terms. You should have received a copy of these additional terms immediately following the terms and conditions of the GNU General Public License. If not, please request a copy in writing from id Software at the address below. If you have questions concerning this license or the applicable additional terms, you may contact in writing id Software LLC, c/o ZeniMax Media Inc., Suite 120, Rockville, Maryland 20850 USA. 
=========================================================================== */

/*****************************************************************************
 * name:		l_precomp.h
 *
 * desc:		pre compiler
 *
 * $Archive: /source/code/botlib/l_precomp.h $
 *
 *****************************************************************************/

#ifndef MAX_PATH
#define MAX_PATH			MAX_QPATH
#endif

#ifndef PATH_SEPERATORSTR
#if defined(WIN32)|defined(_WIN32)|defined(__NT__)|defined(__WINDOWS__)|defined(__WINDOWS_386__)
#define PATHSEPERATOR_STR	"\\"
#else
#define PATHSEPERATOR_STR	"/"
#endif
#endif
#ifndef PATH_SEPERATORCHAR
#if defined(WIN32)|defined(_WIN32)|defined(__NT__)|defined(__WINDOWS__)|defined(__WINDOWS_386__)
#define PATHSEPERATOR_CHAR	'\\'
#else
#define PATHSEPERATOR_CHAR	'/'
#endif
#endif

#if defined(BSPC) && !defined(QDECL)
#define QDECL
#endif

#define DEFINE_FIXED		0x0001

#define BUILTIN_LINE		1
#define BUILTIN_FILE		2
#define BUILTIN_DATE		3
#define BUILTIN_TIME		4
#define BUILTIN_STDC		5

#define INDENT_IF			0x0001
#define INDENT_ELSE			0x0002
#define INDENT_ELIF			0x0004
#define INDENT_IFDEF		0x0008
#define INDENT_IFNDEF		0x0010

//macro definitions
typedef struct define_s
{
	char *name;						//define name
	int flags;						//define flags
	int builtin;					// > 0 if builtin define
	int numparms;					//number of define parameters
	token_t *parms;					//define parameters
	token_t *tokens;				//macro tokens (possibly containing parm tokens)
	struct define_s *next;			//next defined macro in a list
	struct define_s *hashnext;		//next define in the hash chain
} define_t;

//indents used for conditional compilation directives:
//#if, #else, #elif, #ifdef, #ifndef
typedef struct indent_s
{
	int type;						//indent type
	int skip;						//true if skipping current indent
	script_t *script;				//script the indent was in
	struct indent_s *next;			//next indent on the indent stack
} indent_t;

//source file
typedef struct source_s
{
	char filename[1024];			//file name of the script
	char includepath[1024];			//path to include files
	punctuation_t *punctuations;	//punctuations to use
	script_t *scriptstack;			//stack with scripts of the source
	token_t *tokens;				//tokens to read first
	define_t *defines;				//list with macro definitions
	define_t **definehash;			//hash chain with defines
	indent_t *indentstack;			//stack with indents
	int skip;						// > 0 if skipping conditional code
	token_t token;					//last read token
} source_t;

//read a token from the source
int PC_ReadToken(source_t *source, token_t *token);
//expect a certain token
int PC_ExpectTokenString(source_t *source, char *string);
//expect a certain token type
int PC_ExpectTokenType(source_t *source, int type, int subtype, token_t *token);
//expect a token
int PC_ExpectAnyToken(source_t *source, token_t *token);
//returns true when the token is available
int PC_CheckTokenString(source_t *source, char *string);
//returns true and reads the token when a token with the given type is available
int PC_CheckTokenType(source_t *source, int type, int subtype, token_t *token);
//skip tokens until the given token string is read
int PC_SkipUntilString(source_t *source, char *string);
//unread the last token read from the script
void PC_UnreadLastToken(source_t *source);
//unread the given token
void PC_UnreadToken(source_t *source, token_t *token);
//read a token only if on the same line, lines are concatenated with a slash
int PC_ReadLine(source_t *source, token_t *token);
//returns true if there was a white space in front of the token
int PC_WhiteSpaceBeforeToken(token_t *token);
//add a define to the source
int PC_AddDefine(source_t *source, char *string);
//add a global define that will be added to all opened sources
int PC_AddGlobalDefine(char *string);
//remove the given global define
int PC_RemoveGlobalDefine(char *name);
//remove all global defines
void PC_RemoveAllGlobalDefines(void);
//add builtin defines
void PC_AddBuiltinDefines(source_t *source);
//set the source include path
void PC_SetIncludePath(source_t *source, char *path);
//set the punctuation set
void PC_SetPunctuations(source_t *source, punctuation_t *p);
//set the base folder to load files from
void PC_SetBaseFolder(const char *path);
//load a source file
source_t *LoadSourceFile(const char *filename);
//load a source from memory
source_t *LoadSourceMemory(char *ptr, int length, char *name);
//free the given source
void FreeSource(source_t *source);
//print a source error
void QDECL SourceError(source_t *source, char *str, ...) __attribute__ ((format (printf, 2, 3)));
//print a source warning
void QDECL SourceWarning(source_t *source, char *str, ...) __attribute__ ((format (printf, 2, 3)));

#ifdef BSPC
// some of BSPC source does include qcommon/q_shared.h and some does not
// we define pc_token_s pc_token_t if needed (yes, it's ugly)
#ifndef __Q_SHARED_H
#define MAX_TOKENLENGTH		1024
typedef struct pc_token_s
{
	int type;
	int subtype;
	int intvalue;
	float floatvalue;
	char string[MAX_TOKENLENGTH];
} pc_token_t;
#endif //!_Q_SHARED_H
#endif //BSPC

//
int PC_LoadSourceHandle(const char *filename, const char *basepath);
int PC_FreeSourceHandle(int handle);
int PC_ReadTokenHandle(int handle, pc_token_t *pc_token);
void PC_UnreadLastTokenHandle( int handle );
int PC_SourceFileAndLine(int handle, char *filename, int *line);
void PC_CheckOpenSourceHandles(void);
mecwerks/spearmint-ios
code/botlib/l_precomp.h
C
gpl-3.0
6,785
CS 395T and PHL 391, Spring 1996, Foundations of Mathematics, TT 2:00-3:30, Taylor 3.144

Course blurb: There are many approaches to formal reasoning. The objective of specifying computer programs, including the formalization of worlds with which programs are to interact, has led to the creation of numerous tools for formal reasoning. We will examine some systems for formal reasoning while examining a number of mechanical formal methods tools that support these different systems. Examples of such system/tool pairs are:

System: Tool
Primitive Recursive Arithmetic: Boyer-Moore Prover, ACL2
First Order Logic: Otter, Nelson's qed
Higher Order Logic: HOL, IMPS
Equational Reasoning: OBJ
Set Theory: Mizar, Quaife/Otter, PVS
Type Theory: NuPrl, Lego, Coq

Students will choose, with the help of the instructor, a system and/or tool to examine, and the grade will be based upon presentations about these.

The QED Project
HTML Version of the QED Manifesto
Plain text version of the QED Manifesto
Bowen's Formal Methods Web Page and a backup copy.

The chief assignment. Select a formal methods system, e.g., from Bowen's Formal Methods Web Page above, and report via in-class, oral presentations on either its logical foundations or upon its use. Many of these systems have good, freely available implementations. Consult with me before making a final choice. No tests, no final. Only the presentation(s). I hope to have a number of guest presentations from the local formal methods community.

*Very* Tentative Schedule
April 16 -- Rick Tanney -- Coq continued
April 18 -- Trevor Hicks -- Otter
April 23 -- Ruben Gamboa on ACL2 and Square root of 2
April 25 -- Samuel Guyer -- Circal and process algebras
April 30 -- Sawada -- PVS
May 2 -- Russell Turpin (SES) -- Galois
ML-SWAT/Web2KnowledgeBase
naive_bayes/course_train/untag_http:^^www.cs.utexas.edu^users^boyer^courses^cs395t-spring96.html
HTML
mit
1,967
// Type definitions for bufferstream v0.6.2
// Project: https://github.com/dodo/node-bufferstream
// Definitions by: Bart van der Schoor <https://github.com/Bartvds>
// Definitions: https://github.com/DefinitelyTyped/DefinitelyTyped

/// <reference path="../node/node.d.ts" />

declare module 'bufferstream' {

	import stream = require('stream');

	export = BufferStream;

	class BufferStream extends stream.Duplex {

		constructor(options?: BufferStream.Opts);

		/*
		different buffer behaviors can be triggered by size:

		none: when output drains, bufferstream drains too
		flexible: buffers everything that it gets and is not piping out
		<number>: TODO: buffer has given size. buffers everything until buffer is full.
		when buffer is full then the stream will drain
		*/
		setSize(size: string): void; // can be one of ['none', 'flexible', <number>]
		setSize(size: number): void; // can be one of ['none', 'flexible', <number>]

		/*
		enables stream buffering (default)
		*/
		enable(): void;

		/*
		flushes buffer and disables stream buffering. BufferStream now pipes all data
		as long as the output is accepting data. when the output is draining,
		BufferStream will buffer all input temporarily.

		token[s]: buffer splitters (should be String or Buffer)
		disables given tokens. won't flush until no splitter tokens are left.
		*/
		disable(): void;
		disable(token: string, ...tokens: string[]): void;
		disable(tokens: string[]): void; // Array
		disable(token: Buffer, ...tokens: Buffer[]): void;
		disable(tokens: Buffer[]): void; // Array

		/*
		each time BufferStream finds a splitter token in the input data it will
		emit a split event. this also works for binary data.

		token[s]: buffer splitters (should be String or Buffer)
		*/
		split(token: string, ...tokens: string[]): void;
		split(tokens: string[]): void; // Array
		split(token: Buffer, ...tokens: Buffer[]): void;
		split(tokens: Buffer[]): void; // Array

		/*
		returns Buffer.
		*/
		getBuffer(): Buffer;

		/*
		returns Buffer.
		*/
		buffer: Buffer;

		/*
		shortcut for buffer.toString()
		*/
		toString(): string;

		/*
		shortcut for buffer.length
		*/
		length: number;
	}

	namespace BufferStream {

		export interface Opts {
			/*
			default encoding for writing strings
			*/
			encoding?: string;
			/*
			if true and the source is a child_process, the stream will block the
			entire process (timeouts won't work anymore, but splitting and
			listening on data still works, because they work sync)
			*/
			blocking?: boolean;
			/*
			defines buffer level or sets buffer to given size (see setSize for more)
			*/
			size?: any;
			/*
			immediately call disable
			*/
			disabled?: boolean;
			/*
			short form for: split(token, function (chunk) {emit('data', chunk)})
			*/
			// String or Buffer
			split?: any;
		}

		export var fn: {warn: boolean};
	}
}

declare module 'bufferstream/postbuffer' {

	import http = require('http');
	import BufferStream = require('bufferstream');

	class PostBuffer extends BufferStream {
		/*
		for if you want to get all the post data from a http server request and
		do some db request before. (http client buffer)
		*/
		constructor(req: http.IncomingMessage);

		/*
		set a callback to get all post data from a http server request
		*/
		onEnd(callback: (data: any) => void): void;

		/*
		pumps data into another stream to allow incoming streams;
		given options will be passed to Stream.pipe
		*/
		pipe(stream: NodeJS.WritableStream, options?: BufferStream.Opts): NodeJS.ReadableStream;
	}

	export = PostBuffer;
}
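The split-on-token behavior that the declarations above describe can be illustrated with a small self-contained sketch. Note that `splitOnToken` is a hypothetical helper written here for illustration only, not part of the bufferstream API; it shows the core scanning logic, while the real library emits `split` events as data streams in.

```typescript
// Minimal sketch of splitting a Buffer on a delimiter token, mirroring
// what BufferStream's split() event does for streamed input.
function splitOnToken(data: Buffer, token: Buffer): { chunks: Buffer[]; rest: Buffer } {
    const chunks: Buffer[] = [];
    let start = 0;
    let idx: number;
    // collect every complete chunk that appears before a token occurrence
    while ((idx = data.indexOf(token, start)) !== -1) {
        chunks.push(data.subarray(start, idx));
        start = idx + token.length; // skip past the token itself
    }
    // bytes after the last token stay buffered until more data (or end) arrives
    return { chunks, rest: data.subarray(start) };
}

const { chunks, rest } = splitOnToken(Buffer.from("a\nbb\nccc"), Buffer.from("\n"));
console.log(chunks.map(c => c.toString())); // [ 'a', 'bb' ]
console.log(rest.toString());               // 'ccc'
```

The same logic works for binary delimiters, since everything stays in `Buffer` space rather than being decoded to strings first.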
yuit/DefinitelyTyped
bufferstream/bufferstream.d.ts
TypeScript
mit
3,540
/* Copyright (C) 2010 Willow Garage <http://www.willowgarage.com> Copyright (C) 2004 - 2010 Ivo van Doorn <IvDoorn@gmail.com> <http://rt2x00.serialmonkey.com> This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, see <http://www.gnu.org/licenses/>. */ /* Module: rt2x00usb Abstract: rt2x00 generic usb device routines. */ #include <linux/kernel.h> #include <linux/module.h> #include <linux/slab.h> #include <linux/usb.h> #include <linux/bug.h> #include "rt2x00.h" #include "rt2x00usb.h" /* * Interfacing with the HW. */ int rt2x00usb_vendor_request(struct rt2x00_dev *rt2x00dev, const u8 request, const u8 requesttype, const u16 offset, const u16 value, void *buffer, const u16 buffer_length, const int timeout) { struct usb_device *usb_dev = to_usb_device_intf(rt2x00dev->dev); int status; unsigned int pipe = (requesttype == USB_VENDOR_REQUEST_IN) ? usb_rcvctrlpipe(usb_dev, 0) : usb_sndctrlpipe(usb_dev, 0); unsigned long expire = jiffies + msecs_to_jiffies(timeout); if (!test_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags)) return -ENODEV; do { status = usb_control_msg(usb_dev, pipe, request, requesttype, value, offset, buffer, buffer_length, timeout / 2); if (status >= 0) return 0; if (status == -ENODEV) { /* Device has disappeared. 
*/ clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags); break; } } while (time_before(jiffies, expire)); rt2x00_err(rt2x00dev, "Vendor Request 0x%02x failed for offset 0x%04x with error %d\n", request, offset, status); return status; } EXPORT_SYMBOL_GPL(rt2x00usb_vendor_request); int rt2x00usb_vendor_req_buff_lock(struct rt2x00_dev *rt2x00dev, const u8 request, const u8 requesttype, const u16 offset, void *buffer, const u16 buffer_length, const int timeout) { int status; BUG_ON(!mutex_is_locked(&rt2x00dev->csr_mutex)); /* * Check for Cache availability. */ if (unlikely(!rt2x00dev->csr.cache || buffer_length > CSR_CACHE_SIZE)) { rt2x00_err(rt2x00dev, "CSR cache not available\n"); return -ENOMEM; } if (requesttype == USB_VENDOR_REQUEST_OUT) memcpy(rt2x00dev->csr.cache, buffer, buffer_length); status = rt2x00usb_vendor_request(rt2x00dev, request, requesttype, offset, 0, rt2x00dev->csr.cache, buffer_length, timeout); if (!status && requesttype == USB_VENDOR_REQUEST_IN) memcpy(buffer, rt2x00dev->csr.cache, buffer_length); return status; } EXPORT_SYMBOL_GPL(rt2x00usb_vendor_req_buff_lock); int rt2x00usb_vendor_request_buff(struct rt2x00_dev *rt2x00dev, const u8 request, const u8 requesttype, const u16 offset, void *buffer, const u16 buffer_length) { int status = 0; unsigned char *tb; u16 off, len, bsize; mutex_lock(&rt2x00dev->csr_mutex); tb = (char *)buffer; off = offset; len = buffer_length; while (len && !status) { bsize = min_t(u16, CSR_CACHE_SIZE, len); status = rt2x00usb_vendor_req_buff_lock(rt2x00dev, request, requesttype, off, tb, bsize, REGISTER_TIMEOUT); tb += bsize; len -= bsize; off += bsize; } mutex_unlock(&rt2x00dev->csr_mutex); return status; } EXPORT_SYMBOL_GPL(rt2x00usb_vendor_request_buff); int rt2x00usb_regbusy_read(struct rt2x00_dev *rt2x00dev, const unsigned int offset, const struct rt2x00_field32 field, u32 *reg) { unsigned int i; if (!test_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags)) return -ENODEV; for (i = 0; i < REGISTER_USB_BUSY_COUNT; i++) { 
*reg = rt2x00usb_register_read_lock(rt2x00dev, offset); if (!rt2x00_get_field32(*reg, field)) return 1; udelay(REGISTER_BUSY_DELAY); } rt2x00_err(rt2x00dev, "Indirect register access failed: offset=0x%.08x, value=0x%.08x\n", offset, *reg); *reg = ~0; return 0; } EXPORT_SYMBOL_GPL(rt2x00usb_regbusy_read); struct rt2x00_async_read_data { __le32 reg; struct usb_ctrlrequest cr; struct rt2x00_dev *rt2x00dev; bool (*callback)(struct rt2x00_dev *, int, u32); }; static void rt2x00usb_register_read_async_cb(struct urb *urb) { struct rt2x00_async_read_data *rd = urb->context; if (rd->callback(rd->rt2x00dev, urb->status, le32_to_cpu(rd->reg))) { usb_anchor_urb(urb, rd->rt2x00dev->anchor); if (usb_submit_urb(urb, GFP_ATOMIC) < 0) { usb_unanchor_urb(urb); kfree(rd); } } else kfree(rd); } void rt2x00usb_register_read_async(struct rt2x00_dev *rt2x00dev, const unsigned int offset, bool (*callback)(struct rt2x00_dev*, int, u32)) { struct usb_device *usb_dev = to_usb_device_intf(rt2x00dev->dev); struct urb *urb; struct rt2x00_async_read_data *rd; rd = kmalloc(sizeof(*rd), GFP_ATOMIC); if (!rd) return; urb = usb_alloc_urb(0, GFP_ATOMIC); if (!urb) { kfree(rd); return; } rd->rt2x00dev = rt2x00dev; rd->callback = callback; rd->cr.bRequestType = USB_VENDOR_REQUEST_IN; rd->cr.bRequest = USB_MULTI_READ; rd->cr.wValue = 0; rd->cr.wIndex = cpu_to_le16(offset); rd->cr.wLength = cpu_to_le16(sizeof(u32)); usb_fill_control_urb(urb, usb_dev, usb_rcvctrlpipe(usb_dev, 0), (unsigned char *)(&rd->cr), &rd->reg, sizeof(rd->reg), rt2x00usb_register_read_async_cb, rd); usb_anchor_urb(urb, rt2x00dev->anchor); if (usb_submit_urb(urb, GFP_ATOMIC) < 0) { usb_unanchor_urb(urb); kfree(rd); } usb_free_urb(urb); } EXPORT_SYMBOL_GPL(rt2x00usb_register_read_async); /* * TX data handlers. */ static void rt2x00usb_work_txdone_entry(struct queue_entry *entry) { /* * If the transfer to hardware succeeded, it does not mean the * frame was sent out correctly. 
It only means the frame * was successfully pushed to the hardware, we have no * way to determine the transmission status right now. * (Only indirectly by looking at the failed TX counters * in the register). */ if (test_bit(ENTRY_DATA_IO_FAILED, &entry->flags)) rt2x00lib_txdone_noinfo(entry, TXDONE_FAILURE); else rt2x00lib_txdone_noinfo(entry, TXDONE_UNKNOWN); } static void rt2x00usb_work_txdone(struct work_struct *work) { struct rt2x00_dev *rt2x00dev = container_of(work, struct rt2x00_dev, txdone_work); struct data_queue *queue; struct queue_entry *entry; tx_queue_for_each(rt2x00dev, queue) { while (!rt2x00queue_empty(queue)) { entry = rt2x00queue_get_entry(queue, Q_INDEX_DONE); if (test_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags) || !test_bit(ENTRY_DATA_STATUS_PENDING, &entry->flags)) break; rt2x00usb_work_txdone_entry(entry); } } } static void rt2x00usb_interrupt_txdone(struct urb *urb) { struct queue_entry *entry = (struct queue_entry *)urb->context; struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev; if (!test_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags)) return; /* * Check if the frame was correctly uploaded */ if (urb->status) set_bit(ENTRY_DATA_IO_FAILED, &entry->flags); /* * Report the frame as DMA done */ rt2x00lib_dmadone(entry); if (rt2x00dev->ops->lib->tx_dma_done) rt2x00dev->ops->lib->tx_dma_done(entry); /* * Schedule the delayed work for reading the TX status * from the device. 
*/ if (!rt2x00_has_cap_flag(rt2x00dev, REQUIRE_TXSTATUS_FIFO) || !kfifo_is_empty(&rt2x00dev->txstatus_fifo)) queue_work(rt2x00dev->workqueue, &rt2x00dev->txdone_work); } static bool rt2x00usb_kick_tx_entry(struct queue_entry *entry, void *data) { struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev; struct usb_device *usb_dev = to_usb_device_intf(rt2x00dev->dev); struct queue_entry_priv_usb *entry_priv = entry->priv_data; u32 length; int status; if (!test_and_clear_bit(ENTRY_DATA_PENDING, &entry->flags) || test_bit(ENTRY_DATA_STATUS_PENDING, &entry->flags)) return false; /* * USB devices require certain padding at the end of each frame * and urb. Those paddings are not included in skbs. Pass entry * to the driver to determine what the overall length should be. */ length = rt2x00dev->ops->lib->get_tx_data_len(entry); status = skb_padto(entry->skb, length); if (unlikely(status)) { /* TODO: report something more appropriate than IO_FAILED. */ rt2x00_warn(rt2x00dev, "TX SKB padding error, out of memory\n"); set_bit(ENTRY_DATA_IO_FAILED, &entry->flags); rt2x00lib_dmadone(entry); return false; } usb_fill_bulk_urb(entry_priv->urb, usb_dev, usb_sndbulkpipe(usb_dev, entry->queue->usb_endpoint), entry->skb->data, length, rt2x00usb_interrupt_txdone, entry); status = usb_submit_urb(entry_priv->urb, GFP_ATOMIC); if (status) { if (status == -ENODEV) clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags); set_bit(ENTRY_DATA_IO_FAILED, &entry->flags); rt2x00lib_dmadone(entry); } return false; } /* * RX data handlers. 
*/ static void rt2x00usb_work_rxdone(struct work_struct *work) { struct rt2x00_dev *rt2x00dev = container_of(work, struct rt2x00_dev, rxdone_work); struct queue_entry *entry; struct skb_frame_desc *skbdesc; u8 rxd[32]; while (!rt2x00queue_empty(rt2x00dev->rx)) { entry = rt2x00queue_get_entry(rt2x00dev->rx, Q_INDEX_DONE); if (test_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags) || !test_bit(ENTRY_DATA_STATUS_PENDING, &entry->flags)) break; /* * Fill in desc fields of the skb descriptor */ skbdesc = get_skb_frame_desc(entry->skb); skbdesc->desc = rxd; skbdesc->desc_len = entry->queue->desc_size; /* * Send the frame to rt2x00lib for further processing. */ rt2x00lib_rxdone(entry, GFP_KERNEL); } } static void rt2x00usb_interrupt_rxdone(struct urb *urb) { struct queue_entry *entry = (struct queue_entry *)urb->context; struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev; if (!test_and_clear_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags)) return; /* * Report the frame as DMA done */ rt2x00lib_dmadone(entry); /* * Check if the received data is simply too small * to be actually valid, or if the urb is signaling * a problem. */ if (urb->actual_length < entry->queue->desc_size || urb->status) set_bit(ENTRY_DATA_IO_FAILED, &entry->flags); /* * Schedule the delayed work for reading the RX status * from the device. 
*/ queue_work(rt2x00dev->workqueue, &rt2x00dev->rxdone_work); } static bool rt2x00usb_kick_rx_entry(struct queue_entry *entry, void *data) { struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev; struct usb_device *usb_dev = to_usb_device_intf(rt2x00dev->dev); struct queue_entry_priv_usb *entry_priv = entry->priv_data; int status; if (test_and_set_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags) || test_bit(ENTRY_DATA_STATUS_PENDING, &entry->flags)) return false; rt2x00lib_dmastart(entry); usb_fill_bulk_urb(entry_priv->urb, usb_dev, usb_rcvbulkpipe(usb_dev, entry->queue->usb_endpoint), entry->skb->data, entry->skb->len, rt2x00usb_interrupt_rxdone, entry); status = usb_submit_urb(entry_priv->urb, GFP_ATOMIC); if (status) { if (status == -ENODEV) clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags); set_bit(ENTRY_DATA_IO_FAILED, &entry->flags); rt2x00lib_dmadone(entry); } return false; } void rt2x00usb_kick_queue(struct data_queue *queue) { switch (queue->qid) { case QID_AC_VO: case QID_AC_VI: case QID_AC_BE: case QID_AC_BK: if (!rt2x00queue_empty(queue)) rt2x00queue_for_each_entry(queue, Q_INDEX_DONE, Q_INDEX, NULL, rt2x00usb_kick_tx_entry); break; case QID_RX: if (!rt2x00queue_full(queue)) rt2x00queue_for_each_entry(queue, Q_INDEX, Q_INDEX_DONE, NULL, rt2x00usb_kick_rx_entry); break; default: break; } } EXPORT_SYMBOL_GPL(rt2x00usb_kick_queue); static bool rt2x00usb_flush_entry(struct queue_entry *entry, void *data) { struct rt2x00_dev *rt2x00dev = entry->queue->rt2x00dev; struct queue_entry_priv_usb *entry_priv = entry->priv_data; struct queue_entry_priv_usb_bcn *bcn_priv = entry->priv_data; if (!test_bit(ENTRY_OWNER_DEVICE_DATA, &entry->flags)) return false; usb_kill_urb(entry_priv->urb); /* * Kill guardian urb (if required by driver). 
*/ if ((entry->queue->qid == QID_BEACON) && (rt2x00_has_cap_flag(rt2x00dev, REQUIRE_BEACON_GUARD))) usb_kill_urb(bcn_priv->guardian_urb); return false; } void rt2x00usb_flush_queue(struct data_queue *queue, bool drop) { struct work_struct *completion; unsigned int i; if (drop) rt2x00queue_for_each_entry(queue, Q_INDEX_DONE, Q_INDEX, NULL, rt2x00usb_flush_entry); /* * Obtain the queue completion handler */ switch (queue->qid) { case QID_AC_VO: case QID_AC_VI: case QID_AC_BE: case QID_AC_BK: completion = &queue->rt2x00dev->txdone_work; break; case QID_RX: completion = &queue->rt2x00dev->rxdone_work; break; default: return; } for (i = 0; i < 10; i++) { /* * Check if the driver is already done, otherwise we * have to sleep a little while to give the driver/hw * the opportunity to complete the interrupt process itself. */ if (rt2x00queue_empty(queue)) break; /* * Schedule the completion handler manually, when this * worker function runs, it should cleanup the queue. */ queue_work(queue->rt2x00dev->workqueue, completion); /* * Wait for a little while to give the driver * the opportunity to recover itself. 
*/ msleep(50); } } EXPORT_SYMBOL_GPL(rt2x00usb_flush_queue); static void rt2x00usb_watchdog_tx_dma(struct data_queue *queue) { rt2x00_warn(queue->rt2x00dev, "TX queue %d DMA timed out, invoke forced reset\n", queue->qid); rt2x00queue_stop_queue(queue); rt2x00queue_flush_queue(queue, true); rt2x00queue_start_queue(queue); } static int rt2x00usb_dma_timeout(struct data_queue *queue) { struct queue_entry *entry; entry = rt2x00queue_get_entry(queue, Q_INDEX_DMA_DONE); return rt2x00queue_dma_timeout(entry); } void rt2x00usb_watchdog(struct rt2x00_dev *rt2x00dev) { struct data_queue *queue; tx_queue_for_each(rt2x00dev, queue) { if (!rt2x00queue_empty(queue)) { if (rt2x00usb_dma_timeout(queue)) rt2x00usb_watchdog_tx_dma(queue); } } } EXPORT_SYMBOL_GPL(rt2x00usb_watchdog); /* * Radio handlers */ void rt2x00usb_disable_radio(struct rt2x00_dev *rt2x00dev) { rt2x00usb_vendor_request_sw(rt2x00dev, USB_RX_CONTROL, 0, 0, REGISTER_TIMEOUT); } EXPORT_SYMBOL_GPL(rt2x00usb_disable_radio); /* * Device initialization handlers. 
*/ void rt2x00usb_clear_entry(struct queue_entry *entry) { entry->flags = 0; if (entry->queue->qid == QID_RX) rt2x00usb_kick_rx_entry(entry, NULL); } EXPORT_SYMBOL_GPL(rt2x00usb_clear_entry); static void rt2x00usb_assign_endpoint(struct data_queue *queue, struct usb_endpoint_descriptor *ep_desc) { struct usb_device *usb_dev = to_usb_device_intf(queue->rt2x00dev->dev); int pipe; queue->usb_endpoint = usb_endpoint_num(ep_desc); if (queue->qid == QID_RX) { pipe = usb_rcvbulkpipe(usb_dev, queue->usb_endpoint); queue->usb_maxpacket = usb_maxpacket(usb_dev, pipe, 0); } else { pipe = usb_sndbulkpipe(usb_dev, queue->usb_endpoint); queue->usb_maxpacket = usb_maxpacket(usb_dev, pipe, 1); } if (!queue->usb_maxpacket) queue->usb_maxpacket = 1; } static int rt2x00usb_find_endpoints(struct rt2x00_dev *rt2x00dev) { struct usb_interface *intf = to_usb_interface(rt2x00dev->dev); struct usb_host_interface *intf_desc = intf->cur_altsetting; struct usb_endpoint_descriptor *ep_desc; struct data_queue *queue = rt2x00dev->tx; struct usb_endpoint_descriptor *tx_ep_desc = NULL; unsigned int i; /* * Walk through all available endpoints to search for "bulk in" * and "bulk out" endpoints. When we find such endpoints collect * the information we need from the descriptor and assign it * to the queue. */ for (i = 0; i < intf_desc->desc.bNumEndpoints; i++) { ep_desc = &intf_desc->endpoint[i].desc; if (usb_endpoint_is_bulk_in(ep_desc)) { rt2x00usb_assign_endpoint(rt2x00dev->rx, ep_desc); } else if (usb_endpoint_is_bulk_out(ep_desc) && (queue != queue_end(rt2x00dev))) { rt2x00usb_assign_endpoint(queue, ep_desc); queue = queue_next(queue); tx_ep_desc = ep_desc; } } /* * At least 1 endpoint for RX and 1 endpoint for TX must be available. */ if (!rt2x00dev->rx->usb_endpoint || !rt2x00dev->tx->usb_endpoint) { rt2x00_err(rt2x00dev, "Bulk-in/Bulk-out endpoints not found\n"); return -EPIPE; } /* * It might be possible not all queues have a dedicated endpoint. 
* Loop through all TX queues and copy the endpoint information * which we have gathered from already assigned endpoints. */ txall_queue_for_each(rt2x00dev, queue) { if (!queue->usb_endpoint) rt2x00usb_assign_endpoint(queue, tx_ep_desc); } return 0; } static int rt2x00usb_alloc_entries(struct data_queue *queue) { struct rt2x00_dev *rt2x00dev = queue->rt2x00dev; struct queue_entry_priv_usb *entry_priv; struct queue_entry_priv_usb_bcn *bcn_priv; unsigned int i; for (i = 0; i < queue->limit; i++) { entry_priv = queue->entries[i].priv_data; entry_priv->urb = usb_alloc_urb(0, GFP_KERNEL); if (!entry_priv->urb) return -ENOMEM; } /* * If this is not the beacon queue or * no guardian byte was required for the beacon, * then we are done. */ if (queue->qid != QID_BEACON || !rt2x00_has_cap_flag(rt2x00dev, REQUIRE_BEACON_GUARD)) return 0; for (i = 0; i < queue->limit; i++) { bcn_priv = queue->entries[i].priv_data; bcn_priv->guardian_urb = usb_alloc_urb(0, GFP_KERNEL); if (!bcn_priv->guardian_urb) return -ENOMEM; } return 0; } static void rt2x00usb_free_entries(struct data_queue *queue) { struct rt2x00_dev *rt2x00dev = queue->rt2x00dev; struct queue_entry_priv_usb *entry_priv; struct queue_entry_priv_usb_bcn *bcn_priv; unsigned int i; if (!queue->entries) return; for (i = 0; i < queue->limit; i++) { entry_priv = queue->entries[i].priv_data; usb_kill_urb(entry_priv->urb); usb_free_urb(entry_priv->urb); } /* * If this is not the beacon queue or * no guardian byte was required for the beacon, * then we are done. 
*/ if (queue->qid != QID_BEACON || !rt2x00_has_cap_flag(rt2x00dev, REQUIRE_BEACON_GUARD)) return; for (i = 0; i < queue->limit; i++) { bcn_priv = queue->entries[i].priv_data; usb_kill_urb(bcn_priv->guardian_urb); usb_free_urb(bcn_priv->guardian_urb); } } int rt2x00usb_initialize(struct rt2x00_dev *rt2x00dev) { struct data_queue *queue; int status; /* * Find endpoints for each queue */ status = rt2x00usb_find_endpoints(rt2x00dev); if (status) goto exit; /* * Allocate DMA */ queue_for_each(rt2x00dev, queue) { status = rt2x00usb_alloc_entries(queue); if (status) goto exit; } return 0; exit: rt2x00usb_uninitialize(rt2x00dev); return status; } EXPORT_SYMBOL_GPL(rt2x00usb_initialize); void rt2x00usb_uninitialize(struct rt2x00_dev *rt2x00dev) { struct data_queue *queue; usb_kill_anchored_urbs(rt2x00dev->anchor); hrtimer_cancel(&rt2x00dev->txstatus_timer); cancel_work_sync(&rt2x00dev->rxdone_work); cancel_work_sync(&rt2x00dev->txdone_work); queue_for_each(rt2x00dev, queue) rt2x00usb_free_entries(queue); } EXPORT_SYMBOL_GPL(rt2x00usb_uninitialize); /* * USB driver handlers. 
*/ static void rt2x00usb_free_reg(struct rt2x00_dev *rt2x00dev) { kfree(rt2x00dev->rf); rt2x00dev->rf = NULL; kfree(rt2x00dev->eeprom); rt2x00dev->eeprom = NULL; kfree(rt2x00dev->csr.cache); rt2x00dev->csr.cache = NULL; } static int rt2x00usb_alloc_reg(struct rt2x00_dev *rt2x00dev) { rt2x00dev->csr.cache = kzalloc(CSR_CACHE_SIZE, GFP_KERNEL); if (!rt2x00dev->csr.cache) goto exit; rt2x00dev->eeprom = kzalloc(rt2x00dev->ops->eeprom_size, GFP_KERNEL); if (!rt2x00dev->eeprom) goto exit; rt2x00dev->rf = kzalloc(rt2x00dev->ops->rf_size, GFP_KERNEL); if (!rt2x00dev->rf) goto exit; return 0; exit: rt2x00_probe_err("Failed to allocate registers\n"); rt2x00usb_free_reg(rt2x00dev); return -ENOMEM; } int rt2x00usb_probe(struct usb_interface *usb_intf, const struct rt2x00_ops *ops) { struct usb_device *usb_dev = interface_to_usbdev(usb_intf); struct ieee80211_hw *hw; struct rt2x00_dev *rt2x00dev; int retval; usb_dev = usb_get_dev(usb_dev); usb_reset_device(usb_dev); hw = ieee80211_alloc_hw(sizeof(struct rt2x00_dev), ops->hw); if (!hw) { rt2x00_probe_err("Failed to allocate hardware\n"); retval = -ENOMEM; goto exit_put_device; } usb_set_intfdata(usb_intf, hw); rt2x00dev = hw->priv; rt2x00dev->dev = &usb_intf->dev; rt2x00dev->ops = ops; rt2x00dev->hw = hw; rt2x00_set_chip_intf(rt2x00dev, RT2X00_CHIP_INTF_USB); INIT_WORK(&rt2x00dev->rxdone_work, rt2x00usb_work_rxdone); INIT_WORK(&rt2x00dev->txdone_work, rt2x00usb_work_txdone); hrtimer_init(&rt2x00dev->txstatus_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); retval = rt2x00usb_alloc_reg(rt2x00dev); if (retval) goto exit_free_device; rt2x00dev->anchor = devm_kmalloc(&usb_dev->dev, sizeof(struct usb_anchor), GFP_KERNEL); if (!rt2x00dev->anchor) { retval = -ENOMEM; goto exit_free_reg; } init_usb_anchor(rt2x00dev->anchor); retval = rt2x00lib_probe_dev(rt2x00dev); if (retval) goto exit_free_anchor; return 0; exit_free_anchor: usb_kill_anchored_urbs(rt2x00dev->anchor); exit_free_reg: rt2x00usb_free_reg(rt2x00dev); exit_free_device: 
ieee80211_free_hw(hw); exit_put_device: usb_put_dev(usb_dev); usb_set_intfdata(usb_intf, NULL); return retval; } EXPORT_SYMBOL_GPL(rt2x00usb_probe); void rt2x00usb_disconnect(struct usb_interface *usb_intf) { struct ieee80211_hw *hw = usb_get_intfdata(usb_intf); struct rt2x00_dev *rt2x00dev = hw->priv; /* * Free all allocated data. */ rt2x00lib_remove_dev(rt2x00dev); rt2x00usb_free_reg(rt2x00dev); ieee80211_free_hw(hw); /* * Free the USB device data. */ usb_set_intfdata(usb_intf, NULL); usb_put_dev(interface_to_usbdev(usb_intf)); } EXPORT_SYMBOL_GPL(rt2x00usb_disconnect); #ifdef CONFIG_PM int rt2x00usb_suspend(struct usb_interface *usb_intf, pm_message_t state) { struct ieee80211_hw *hw = usb_get_intfdata(usb_intf); struct rt2x00_dev *rt2x00dev = hw->priv; return rt2x00lib_suspend(rt2x00dev, state); } EXPORT_SYMBOL_GPL(rt2x00usb_suspend); int rt2x00usb_resume(struct usb_interface *usb_intf) { struct ieee80211_hw *hw = usb_get_intfdata(usb_intf); struct rt2x00_dev *rt2x00dev = hw->priv; return rt2x00lib_resume(rt2x00dev); } EXPORT_SYMBOL_GPL(rt2x00usb_resume); #endif /* CONFIG_PM */ /* * rt2x00usb module information. */ MODULE_AUTHOR(DRV_PROJECT); MODULE_VERSION(DRV_VERSION); MODULE_DESCRIPTION("rt2x00 usb library"); MODULE_LICENSE("GPL");
Isopod/linux
drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
C
gpl-2.0
23,035
package ru.maximum13.wsdemo.util;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.concurrent.TimeUnit;

/**
 * A set of utility methods for working with dates and times.
 *
 * @author MAXIMUM13
 */
public final class DateTimeUtils {

    public static Calendar calendarOf(int year, int month, int day) {
        Calendar calendar = Calendar.getInstance();
        calendar.set(year, month, day);
        return calendar;
    }

    public static Calendar calendarOf(int year, int month, int day, int hours, int min) {
        Calendar calendar = Calendar.getInstance();
        calendar.set(year, month, day, hours, min);
        return calendar;
    }

    public static Calendar calendarOf(int year, int month, int day, int hours, int min, int sec) {
        Calendar calendar = Calendar.getInstance();
        calendar.set(year, month, day, hours, min, sec);
        return calendar;
    }

    public static Calendar calendarOf(int year, int month, int day, int hours, int min, int sec, int ms) {
        Calendar calendar = calendarOf(year, month, day, hours, min, sec);
        calendar.set(Calendar.MILLISECOND, ms);
        return calendar;
    }

    public static Date dateOf(int year, int month, int day) {
        return calendarOf(year, month, day).getTime();
    }

    public static Date dateOf(int year, int month, int day, int hours, int min) {
        return calendarOf(year, month, day, hours, min).getTime();
    }

    public static Date dateOf(int year, int month, int day, int hours, int min, int sec) {
        return calendarOf(year, month, day, hours, min, sec).getTime();
    }

    public static Date dateOf(int year, int month, int day, int hours, int min, int sec, int ms) {
        return calendarOf(year, month, day, hours, min, sec, ms).getTime();
    }

    /**
     * Checks whether the first date follows the second or is equal to it (greater or equal).
     *
     * @param firstDate
     *            the first date
     * @param secondDate
     *            the second date
     */
    public static boolean ge(Date firstDate, Date secondDate) {
        return firstDate.compareTo(secondDate) >= 0;
    }

    /**
     * Checks whether the first date precedes the second or is equal to it (less or equal).
     *
     * @param firstDate
     *            the first date
     * @param secondDate
     *            the second date
     */
    public static boolean le(Date firstDate, Date secondDate) {
        return firstDate.compareTo(secondDate) <= 0;
    }

    /**
     * Returns a string with the current date formatted according to the given pattern.
     *
     * @param dateFormat
     *            the date pattern
     */
    public static String formatCurrentDate(final String dateFormat) {
        return format(new Date(), dateFormat);
    }

    /**
     * Returns a string with the date formatted according to the given pattern.
     *
     * @param date
     *            the date
     * @param dateFormat
     *            the date pattern
     */
    public static String format(final Date date, final String dateFormat) {
        return new SimpleDateFormat(dateFormat).format(date);
    }

    /**
     * Zeroes out the hours, minutes, seconds and milliseconds of the given calendar.
     */
    public static void resetTime(final Calendar calendar) {
        calendar.set(Calendar.HOUR_OF_DAY, 0);
        calendar.set(Calendar.MINUTE, 0);
        calendar.set(Calendar.SECOND, 0);
        calendar.set(Calendar.MILLISECOND, 0);
    }

    /**
     * Returns a string containing the current time in milliseconds.
     */
    public static String currentTimeMillisString() {
        return Long.toString(System.currentTimeMillis());
    }

    /**
     * Returns a new date that differs from the given one by the specified amount.
     *
     * @param date
     *            the original date
     * @param value
     *            the amount of the time unit by which to change the date
     * @param unit
     *            the time unit in which the change is expressed
     */
    public static Date changeDate(final Date date, final long value, final TimeUnit unit) {
        Calendar calendar = new GregorianCalendar();
        calendar.setTimeInMillis(date.getTime() + unit.toMillis(value));
        return calendar.getTime();
    }

    private DateTimeUtils() {
        ErrorUtils.throwPrivateMethodAccessError(this);
    }
}
MAXIMUM13/websocket-demo
src/main/java/ru/maximum13/wsdemo/util/DateTimeUtils.java
Java
apache-2.0
5,002
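The `calendarOf` helpers above pass `year`/`month`/`day` straight through to `Calendar.set`, so callers inherit `java.util.Calendar`'s zero-based month numbering. A minimal standalone sketch of that pitfall (the `DateTimeUtils` class itself is not reproduced here; this uses only the JDK calls it wraps):

```java
import java.util.Calendar;

public class CalendarMonthDemo {
    public static void main(String[] args) {
        // Calendar months are zero-based: Calendar.DECEMBER == 11, so passing
        // a human-readable "12" to calendarOf(...) would roll over into January.
        Calendar c = Calendar.getInstance();
        c.set(2020, Calendar.DECEMBER, 31);
        System.out.println(c.get(Calendar.MONTH));        // 11
        System.out.println(c.get(Calendar.DAY_OF_MONTH)); // 31
    }
}
```

Using the named `Calendar.DECEMBER`-style constants at call sites avoids the off-by-one entirely.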
// Test that we are able to introduce a negative constraint that
// `MyType: !MyTrait` along with other "fundamental" wrappers.

// aux-build:coherence_copy_like_lib.rs

#![allow(dead_code)]

extern crate coherence_copy_like_lib as lib;

struct MyType { x: i32 }

// These are all legal because they are all fundamental types:

// Tuples are not fundamental, so this is not a local impl.
impl lib::MyCopy for (MyType,) { }
//~^ ERROR E0117

fn main() { }
aidancully/rust
src/test/ui/coherence/coherence_local_err_tuple.rs
Rust
apache-2.0
455
package org.cohorte.utilities.sql.pool;

import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

import org.cohorte.utilities.sql.DBException;
import org.cohorte.utilities.sql.IDBConnection;
import org.cohorte.utilities.sql.IDBPool;
import org.cohorte.utilities.sql.exec.CDBConnectionFactory;
import org.cohorte.utilities.sql.exec.CDBConnectionInfos;
import org.psem2m.utilities.logging.CActivityLoggerNull;
import org.psem2m.utilities.logging.IActivityLogger;

/**
 * @author ogattaz
 */
public class CDBPool implements IDBPool {

    // The maximum unused duration of a db connection (default 15 minutes =
    // 15*60*1000)
    private static long DB_POOL_MAX_UNUSED_DURATION = 15 * 60 * 1000;

    private final List<IDBConnection> pConnections = new LinkedList<IDBConnection>();

    private CDBConnectionInfos pDBConnectionInfos;

    private IActivityLogger pLogger;

    /**
     * Management of the unused connections in the connection pool of the
     * agiliumdb database.
     */
    private final long pMaxUnusedDuration;

    private final AtomicBoolean pOpened = new AtomicBoolean();

    private final CDBPoolMonitor pPoolMonitor;

    public CDBPool(final CDBConnectionInfos aDBConnectionInfos) {
        this(null, aDBConnectionInfos);
    }

    public CDBPool(final IActivityLogger aLogger,
            final CDBConnectionInfos aDBConnectionInfos) {
        super();
        setLogger(aLogger);
        setDBConnectionInfos(aDBConnectionInfos);
        pMaxUnusedDuration = DB_POOL_MAX_UNUSED_DURATION;
        pPoolMonitor = new CDBPoolMonitor(this);
        pLogger.logInfo(this, "<init>",
                "MaxUnusedDuration=[%d] NbConnection=[%d]", pMaxUnusedDuration,
                getNbConnection());
    }

    /**
     * @throws DBException
     */
    IDBConnection addNewDbConnection() throws DBException {
        return addNewDbConnection(createDbConnection());
    }

    /**
     * @return
     * @throws DBException
     */
    private IDBConnection addNewDbConnection(final IDBConnection aConnection)
            throws DBException {
        if (!aConnection.isOpened()) {
            boolean wOpened = aConnection.open();
            pLogger.logInfo(this, "addNewDbConnection",
                    "NbConnection=[%d] Idx=[%d] Opened=[%b]", getNbConnection(),
                    aConnection.getIdx(), wOpened);
        }
        synchronized (pConnections) {
            pConnections.add(aConnection);
        }
        return aConnection;
    }

    /*
     * (non-Javadoc)
     *
     * @see fr.agilium.ng.commons.sql.IDBPool#checkIn(fr.agilium.ng.commons.sql.
     * IDBConnection)
     */
    @Override
    public void checkIn(final IDBConnection aDbConn) {
        // MOD_172
        if (aDbConn != null) {
            // if no SQLException occurred during this usage
            if (aDbConn.isValid()) {
                aDbConn.setBusyOff();
            } else {
                aDbConn.close();
                synchronized (pConnections) {
                    pConnections.remove(aDbConn);
                }
            }
        }
    }

    /*
     * (non-Javadoc)
     *
     * @see fr.agilium.ng.commons.sql.IDBPool#checkOut()
     */
    @Override
    public IDBConnection checkOut() throws IllegalStateException, Exception {
        if (!isConnected()) {
            throw new IllegalStateException("DBPool is not opened");
        }
        IDBConnection wConnection = findFirstFree();
        if (wConnection == null) {
            wConnection = createDbConnection();
            wConnection.setBusyOn();
            addNewDbConnection(wConnection);
            pLogger.logInfo(this, "checkOut",
                    "NbConnection=[%d] NewConnectionIdx=[%d]", getNbConnection(),
                    wConnection.getIdx());
        }
        return wConnection;
    }

    private void close() {
        pPoolMonitor.stopMonitor();
        pLogger.logInfo(this, "close(): NbConnection to close=[%d]",
                getNbConnection());
        synchronized (pConnections) {
            for (IDBConnection wConnection : pConnections) {
                wConnection.close();
            }
            pConnections.clear();
        }
    }

    /**
     * @return
     * @throws DBException
     */
    IDBConnection createDbConnection() throws DBException {
        return CDBConnectionFactory
                .newDbConnection(pLogger, pDBConnectionInfos);
    }

    /*
     * (non-Javadoc)
     *
     * @see fr.agilium.ng.commons.sql.IDBBase#dbClose()
     */
    @Override
    public boolean dbClose() throws IllegalStateException {
        if (!isConnected()) {
            throw new IllegalStateException("DBPool is not opened");
        }
        close();
        pOpened.set(false);
        return true;
    }

    /*
     * (non-Javadoc)
     *
     * @see fr.agilium.ng.commons.sql.IDBBase#dbOpen()
     */
    @Override
    public boolean dbOpen() throws IllegalStateException, Exception {
        if (isConnected()) {
            throw new IllegalStateException("DBPool is already opened");
        }
        try {
            // add a new connection according to the current DBConnectionInfos
            addNewDbConnection();
            pOpened.set(true);
        } catch (Exception e) {
            pLogger.logSevere(this, "dbOpen",
                    "Unable to create a connection to open the pool: %s", e);
        }
        // report whether the pool was actually opened (the original always
        // returned false, even on success)
        return pOpened.get();
    }

    @Override
    public boolean dbOpen(final CDBConnectionInfos aDBConnectionInfos)
            throws Exception {
        if (isConnected()) {
            throw new Exception("Pool is already opened");
        }
        setDBConnectionInfos(aDBConnectionInfos);
        return dbOpen();
    }

    /**
     * Detects and invalidates the connections that have been unused for longer
     * than the maximum authorized unused duration.
     *
     * @return the first non-busy db connection found in the list
     */
    private IDBConnection findFirstFree() {
        synchronized (pConnections) {
            for (IDBConnection wConnection : pConnections) {
                // if not used and still valid
                if (!wConnection.isBusy() && wConnection.isValid()) {
                    // if unused for too long
                    if (wConnection.isUnusedTooLoong()) {
                        wConnection.invalidate();
                    } else {
                        wConnection.setBusyOn();
                        return wConnection;
                    }
                }
            }
        }
        return null;
    }

    /**
     * MOD_99
     *
     * @return
     */
    List<IDBConnection> getConnections() {
        return pConnections;
    }

    /*
     * (non-Javadoc)
     *
     * @see fr.agilium.ng.commons.sql.IDBBase#getDBConnection()
     */
    @Override
    public IDBConnection getDBConnection() throws Exception {
        return checkOut();
    }

    /**
     * @return
     */
    @Override
    public CDBConnectionInfos getDBConnectionInfos() {
        return pDBConnectionInfos;
    }

    /**
     * @return
     */
    IActivityLogger getLogger() {
        return pLogger;
    }

    /**
     * @return
     */
    public int getNbConnection() {
        synchronized (pConnections) {
            return pConnections.size();
        }
    }

    /*
     * (non-Javadoc)
     *
     * @see fr.agilium.ng.commons.sql.IDBBase#isConnected()
     */
    @Override
    public boolean isConnected() {
        return pOpened.get();
    }

    /**
     * @param aDBConnectionInfos
     */
    private void setDBConnectionInfos(
            final CDBConnectionInfos aDBConnectionInfos) {
        pDBConnectionInfos = aDBConnectionInfos;
    }

    /**
     * @param aLogger
     *            the logger
     */
    public void setLogger(final IActivityLogger aLogger) {
        pLogger = (aLogger != null) ? aLogger : CActivityLoggerNull
                .getInstance();
    }
}
isandlaTech/cohorte-utilities
extra/org.cohorte.utilities.sql/src/org/cohorte/utilities/sql/pool/CDBPool.java
Java
gpl-2.0
6,719
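`CDBPool` implements the classic check-out/check-in pattern: `checkOut` reuses an idle connection or creates a new one, and `checkIn` returns it to the pool. A minimal self-contained sketch of that pattern (not the Cohorte API; hypothetical `StringBuilder` objects stand in for `IDBConnection`, and validity/expiry checks are omitted):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TinyPool {
    private final Deque<StringBuilder> idle = new ArrayDeque<>();
    private int created = 0; // total objects ever created

    public synchronized StringBuilder checkOut() {
        StringBuilder c = idle.poll();
        if (c == null) {          // no free object: create a new one
            c = new StringBuilder();
            created++;
        }
        return c;
    }

    public synchronized void checkIn(StringBuilder c) {
        c.setLength(0);           // "reset" the object before reuse
        idle.push(c);
    }

    public synchronized int size() { return created; }

    public static void main(String[] args) {
        TinyPool pool = new TinyPool();
        StringBuilder a = pool.checkOut();
        pool.checkIn(a);
        StringBuilder b = pool.checkOut(); // reuses a instead of allocating
        System.out.println(b == a);        // true
        System.out.println(pool.size());   // 1
    }
}
```

The real class adds what the sketch leaves out: validity checks on check-in, invalidation of connections unused longer than `pMaxUnusedDuration`, and a background `CDBPoolMonitor`.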
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <!-- NewPage --> <html lang="ru"> <head> <!-- Generated by javadoc (1.8.0_102) on Tue Oct 04 23:04:32 MSK 2016 --> <title>c1c.v8fs.assemble (v8fs latest-SNAPSHOT API)</title> <meta name="date" content="2016-10-04"> <link rel="stylesheet" type="text/css" href="../../../stylesheet.css" title="Style"> <script type="text/javascript" src="../../../script.js"></script> </head> <body> <script type="text/javascript"><!-- try { if (location.href.indexOf('is-external=true') == -1) { parent.document.title="c1c.v8fs.assemble (v8fs latest-SNAPSHOT API)"; } } catch(err) { } //--> </script> <noscript> <div>JavaScript is disabled on your browser.</div> </noscript> <!-- ========= START OF TOP NAVBAR ======= --> <div class="topNav"><a name="navbar.top"> <!-- --> </a> <div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div> <a name="navbar.top.firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../../../overview-summary.html">Overview</a></li> <li class="navBarCell1Rev">Package</li> <li>Class</li> <li><a href="package-tree.html">Tree</a></li> <li><a href="../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../index-all.html">Index</a></li> <li><a href="../../../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li><a href="../../../c1c/v8fs/package-summary.html">Prev&nbsp;Package</a></li> <li><a href="../../../c1c/v8fs/jaxb/package-summary.html">Next&nbsp;Package</a></li> </ul> <ul class="navList"> <li><a href="../../../index.html?c1c/v8fs/assemble/package-summary.html" target="_top">Frames</a></li> <li><a href="package-summary.html" target="_top">No&nbsp;Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_top"> <li><a href="../../../allclasses-noframe.html">All&nbsp;Classes</a></li> </ul> <div> <script type="text/javascript"><!-- 
allClassesLink = document.getElementById("allclasses_navbar_top"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip.navbar.top"> <!-- --> </a></div> <!-- ========= END OF TOP NAVBAR ========= --> <div class="header"> <h1 title="Package" class="title">Package&nbsp;c1c.v8fs.assemble</h1> </div> <div class="contentContainer"> <ul class="blockList"> <li class="blockList"> <table class="typeSummary" border="0" cellpadding="3" cellspacing="0" summary="Class Summary table, listing classes, and an explanation"> <caption><span>Class Summary</span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Class</th> <th class="colLast" scope="col">Description</th> </tr> <tbody> <tr class="altColor"> <td class="colFirst"><a href="../../../c1c/v8fs/assemble/ContainerAssembler.html" title="class in c1c.v8fs.assemble">ContainerAssembler</a></td> <td class="colLast">&nbsp;</td> </tr> </tbody> </table> </li> </ul> </div> <!-- ======= START OF BOTTOM NAVBAR ====== --> <div class="bottomNav"><a name="navbar.bottom"> <!-- --> </a> <div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div> <a name="navbar.bottom.firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../../../overview-summary.html">Overview</a></li> <li class="navBarCell1Rev">Package</li> <li>Class</li> <li><a href="package-tree.html">Tree</a></li> <li><a href="../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../index-all.html">Index</a></li> <li><a href="../../../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li><a href="../../../c1c/v8fs/package-summary.html">Prev&nbsp;Package</a></li> <li><a href="../../../c1c/v8fs/jaxb/package-summary.html">Next&nbsp;Package</a></li> </ul> <ul class="navList"> <li><a 
href="../../../index.html?c1c/v8fs/assemble/package-summary.html" target="_top">Frames</a></li> <li><a href="package-summary.html" target="_top">No&nbsp;Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_bottom"> <li><a href="../../../allclasses-noframe.html">All&nbsp;Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = document.getElementById("allclasses_navbar_bottom"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip.navbar.bottom"> <!-- --> </a></div> <!-- ======== END OF BOTTOM NAVBAR ======= --> </body> </html>
psyriccio/v8fs
docs/c1c/v8fs/assemble/package-summary.html
HTML
lgpl-3.0
4,680
/* */
package othello.algoritmo;

import othello.Utils.Casilla;
import othello.Utils.Heuristica;
import othello.Utils.Tablero;

import java.util.ArrayList;

/**
 * @author gusamasan
 */
public class AlgoritmoPodaAlfaBeta extends Algoritmo {

    // ----------------------------------------------------------------------
    // ----------------------------------------------------------------------

    /** Constructors **************************************************/

    private int playerColor;

    public AlgoritmoPodaAlfaBeta() {
    }

    /*******************************************************************/

    @Override
    public Tablero obtenerNuevaConfiguracionTablero(Tablero tablero, short turno) {
        System.out.println("analizando siguiente jugada con ALFABETA");
        this.playerColor = turno;
        Tablero tableroJugada = tablero.copiarTablero();
        try {
            int beta = Integer.MAX_VALUE;
            int alfa = Integer.MIN_VALUE;
            alfaBeta(tableroJugada, this.getProfundidad(), playerColor, alfa, beta);
            Thread.sleep(1000);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return (tableroJugada);
    }

    /**
     * This is the method we have to implement.
     *
     * Alpha-beta algorithm to determine the next best move.
     *
     * @param tablero
     *            current board configuration
     * @param prof
     *            search depth
     * @param jugadorActual
     *            tells us which player (FICHA_BLANCA or FICHA_NEGRA) moves next
     * @param alfa
     * @param beta
     *            alpha and beta parameters of the algorithm
     * @return
     */
    public int alfaBeta(Tablero tablero, int prof, int jugadorActual, int alfa, int beta) {
        // if the game has reached the end or we cannot search any deeper
        if (tablero.EsFinalDeJuego() || prof == 0) {
            int value = Heuristica.h2(tablero, playerColor);
            return value;
        }
        // if this player cannot move, the turn passes
        if (!tablero.PuedeJugar(jugadorActual)) {
            int value = alfaBeta(tablero, prof, -jugadorActual, alfa, beta);
            return value;
        }
        // collect the squares on which we can play
        ArrayList<Casilla> casillas = tablero.generarMovimiento(jugadorActual);
        // now we have to find the current best move
        Casilla bestMovement = null;
        for (Casilla cas : casillas) {
            // make a copy of the board object
            Tablero currentTablero = tablero.copiarTablero();
            // make the move on the copied board
            if (jugadorActual == 1)
                cas.asignarFichaBlanca();
            else if (jugadorActual == -1)
                cas.asignarFichaNegra();
            currentTablero.ponerFicha(cas);
            currentTablero.imprimirTablero();
            // evaluate whether the move is a good one
            int valorActual = alfaBeta(currentTablero, prof - 1, -jugadorActual, alfa, beta);
            // maximizing player
            if (jugadorActual == this.playerColor) {
                if (valorActual > alfa) {
                    alfa = valorActual;
                    bestMovement = cas;
                }
                // prune?
                if (alfa >= beta)
                    return alfa;
            }
            // minimizing player
            else {
                if (valorActual < beta) {
                    beta = valorActual;
                    bestMovement = cas;
                }
                // prune?
                if (alfa >= beta)
                    return beta;
            }
        }
        // now actually make the best available move
        if (bestMovement != null) {
            tablero.ponerFicha(bestMovement);
        }
        // return the value for the move
        if (jugadorActual == this.playerColor)
            return alfa;
        else
            return beta;
    }
}
nicomda/AI
Othello/src/othello/algoritmo/AlgoritmoPodaAlfaBeta.java
Java
gpl-3.0
4,165
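The `alfaBeta` method above interleaves pruning with Othello board mechanics. The pruning idea on its own can be shown on a fixed toy game tree (the tree and its leaf values are invented for illustration, not taken from the Othello code):

```java
public class AlphaBetaDemo {
    // Leaf values of a depth-2 game tree: the root is a maximizing node with
    // three children; each child is a minimizing node over three leaves.
    static final int[][] TREE = { {3, 5, 6}, {2, 9, 1}, {0, -1, 4} };

    // Minimizing node: once its running minimum drops to alpha or below,
    // the remaining leaves cannot affect the root and are pruned.
    static int minNode(int child, int alpha, int beta) {
        int best = Integer.MAX_VALUE;
        for (int leaf : TREE[child]) {
            best = Math.min(best, leaf);
            beta = Math.min(beta, best);
            if (alpha >= beta) break;  // prune remaining leaves
        }
        return best;
    }

    // Maximizing root: tightens alpha after each child.
    static int root() {
        int alpha = Integer.MIN_VALUE, beta = Integer.MAX_VALUE;
        int best = Integer.MIN_VALUE;
        for (int c = 0; c < TREE.length; c++) {
            best = Math.max(best, minNode(c, alpha, beta));
            alpha = Math.max(alpha, best);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(root()); // 3: the minimax value of the tree
    }
}
```

After the first child establishes alpha = 3, the second and third children are each cut off after their first leaf, exactly the `alfa >= beta` test in the Othello code.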
void SetProgramOptions(); void InitializeProtein(); void DetermineTriplets(); void SetRadii(); void SetHardCore(); void SetContactDistance(); void ReadNative(); void GetResidueInfo(struct atom *, struct residue *, int, int); void GetPhiPsi(struct atom *, struct residue *, int); void GetChi(); void InitializeData(); void ReadSidechainTorsionData(); void InitializeSidechainRotationData(); void InitializeBackboneRotationData(); void CheckCorrelation(struct contact_data **, struct atom *, struct residue *, int); void ReadPotential(); void ReadDistPotential(); int SkipSelf(int, int, struct atom *, struct residue *); int SkipNeighbors(int, int, struct atom *, struct residue *); void TurnOffNativeClashes(int); void ReadAlignment(); void SetupAlignmentStructure(); void SetupAlignmentPotential(); void ReadAvgChis(void); void ReadHelicityData(); void InitializeHydrogenBonding(); int MatchAtomname(char *); /*================================================*/ /* main initialization routine */ /*================================================*/ void ReadTypesFile(void) { // read the type_file: including atom_name,res_name and type_num; Also refresh MAX_TYPES int type_num, i; char res_name[4], atom_name[4]; // twenty_res_typing FILE *type_file; if((type_file = fopen(atom_type_file, "r"))==NULL) { fprintf(STATUS,"ERROR: Can't open the file: %s!\n", atom_type_file); exit(1); } if (type_file == NULL) { fprintf(STATUS,"Error: Type file non-existant\n"); exit(-1); } i = -1; while(fscanf(type_file,"%s %s %d", atom_name, res_name, &type_num)!= EOF) { ++i; strcpy(atom_type_list[i].atom_name, atom_name); // copy atom_name into atom_type_list (struct) strcpy(atom_type_list[i].res_name, res_name); atom_type_list[i].type_num = type_num; if (!strcmp(res_name,"XXX")) { if (!strcmp(atom_name,"N")) bb_N_type = type_num; // bb_N_type? O_type? OXT_type? 
if (!strcmp(atom_name,"O")) bb_O_type = type_num; if (!strcmp(atom_name,"OXT")) bb_OXT_type = type_num; } } natom_type_list = i+1; // Because it begins from 0. In addition, total items are more than 84 kinds! see the file, some items has the same type_num MAX_TYPES = atom_type_list[i].type_num + 1; // total 84 kinds fclose(type_file); } void SetupMuPotential (void) { // calcluate mu potential: the first item of total energy, the contact energy, defination of mu int i, j, k, l; Float n_nat, avg; int **nat_cons, **nnat_cons; // both: max_types*max_types matrix record contacts or not nat_cons = (int **) calloc(MAX_TYPES, sizeof(int *)); nnat_cons = (int **) calloc(MAX_TYPES, sizeof(int *)); for (i = 0; i < MAX_TYPES; ++i) { nat_cons[i] = (int *) calloc(MAX_TYPES, sizeof(int)); nnat_cons[i] = (int *) calloc(MAX_TYPES, sizeof(int)); } for (k = 0; k < natoms; ++k) for (l = k+1; l < natoms; ++l) { if (data[k][l].contacts) { // where is data (contact matrix) from if (native[k].smogtype <= native[l].smogtype) // smogtype: maybe arangement indices of 20 aa nat_cons[native[k].smogtype][native[l].smogtype]++; else nat_cons[native[l].smogtype][native[k].smogtype]++; } else { if (native[k].smogtype <= native[l].smogtype) nnat_cons[native[k].smogtype][native[l].smogtype]++; else nnat_cons[native[l].smogtype][native[k].smogtype]++; } } avg = 0; n_nat = 0; for (i = 0; i < MAX_TYPES; ++i) for(j = i; j < MAX_TYPES; ++j) { if (nat_cons[i][j] != 0) { avg += nnat_cons[i][j]/nat_cons[i][j]; n_nat = n_nat + 1; } } mu = (avg/(float) n_nat)/(1 + avg/(float) n_nat); // defination of mu fprintf(STATUS,"mu: %f\n", mu); for (i = 0; i < MAX_TYPES; ++i) for(j = i; j < MAX_TYPES; ++j) if ((nat_cons[i][j] !=0) || (nnat_cons[i][j]!=0)) { if (mu == 1) { // why mu could equal 1? 
if (nat_cons[i][j] != 0) potential[i][j] = -1; else potential[i][j] = 1; } else if (mu == 0) { if (nnat_cons[i][j] != 0) potential[i][j] = 1; else potential[i][j] = -1; } else potential[i][j] = ((1-mu)*nnat_cons[i][j] - mu*nat_cons[i][j])/(mu*nat_cons[i][j] + (1-mu)*nnat_cons[i][j]); // mu potential potential[j][i] = potential[i][j]; } for (i = 0; i < MAX_TYPES; ++i) { free(nat_cons[i]); free(nnat_cons[i]); } free(nat_cons); free(nnat_cons); } void InitializeProtein() { // Very Important!! Center! int i; /* Reset variables */ fprintf(STATUS,"---MODEL---\n"); fprintf(STATUS," file:\t\t%s \n", native_file); native = (struct atom *) calloc(MAX_ATOMS,sizeof(struct atom)); // array of struct(atom) native_Emin = (struct atom *) calloc(MAX_ATOMS,sizeof(struct atom)); native_RMSDmin = (struct atom *) calloc(MAX_ATOMS,sizeof(struct atom)); prev_native = (struct atom *) calloc(MAX_ATOMS,sizeof(struct atom)); orig_native = (struct atom *) calloc(MAX_ATOMS,sizeof(struct atom)); natoms=0; fprintf(STATUS,"Initialized molecular data structures.\n"); /* Initialize static data */ amino_acids = (struct amino *) calloc(20, sizeof(struct amino)); // 20 kinds of aa fprintf(STATUS,"Made amino_acids structure\n"); ReadTypesFile(); fprintf(STATUS,"Read Types File\n"); ReadHelicityData(); // align.h fprintf(STATUS,"Read Helicity Data\n"); /* Read native */ ReadNative(native_file,native,&natoms); fprintf(STATUS,"Read Native\n"); nresidues=0; for(i=0; i<natoms; i++) if (!strncmp(native[i].atomname,"CA",2)) nresidues++; // strncmp:compare first 2 letters between them if (nresidues!=(native[natoms-1].res_num+1)) fprintf(STATUS,"FILE: %s MISSING RESIDUES!!!\n", native_file); fprintf(STATUS," pdb length:\t\t%d\n # of CA's:\t\t%d\n\n", nresidues, native[natoms-1].res_num+1); buf_in = (float *) calloc(3*natoms, sizeof(float)); buf_out = (float *) calloc(3*natoms, sizeof(float)); ReadAlignment(); // align.h fprintf(STATUS,"Read Alignment\n"); get_template(); // loop.h fprintf(STATUS,"Read Template 
Information\n"); if (USE_GO_POTENTIAL) MAX_TYPES = natoms; SetRadii(); SetHardCore(); SetContactDistance(); /* Allocate potential-related structures */ potential = (Float **) calloc(MAX_TYPES,sizeof(Float *)); for(i=0; i<MAX_TYPES; i++) { potential[i] = (Float *) calloc(MAX_TYPES,sizeof(Float)); } CenterProtein(&native,natoms); // to locate the protein in the center (0,0,0) for(i=0;i<natoms; i++) FindLatticeCoordinates(&native[i]); /* this also computes the integer value of the coordinates */ /* Get residue info */ native_residue = (struct residue *) calloc(nresidues,sizeof(struct residue)); cur_rotamers = (int *) calloc(nresidues, sizeof(int)); GetResidueInfo(native, native_residue, nresidues, natoms); /* Allocate memory for data structures */ InitializeData(); /* Determine phi-psi angles */ GetPhiPsi(native, native_residue, nresidues); /* Set up correlation matrix */ CheckCorrelation(data,native,native_residue,natoms); // whether check contacts, clashes or ohter /* Set up side-chain rotation structure */ if (USE_SIDECHAINS) { /* Initialize sidechain rotation data */ ReadSidechainTorsionData(); InitializeSidechainRotationData(); /* initialize sidechain torsions */ GetChi(); if (USE_ROTAMERS) { ReadAvgChis(); /* this will generate 1 side-chain move in the absence of backbone moves: */ SIDECHAIN_MOVES = 1; } } else SIDECHAIN_MOVES = 0; /* initialize moved atoms data structures */ ab = (struct pair *) calloc(natoms*natoms, sizeof(struct pair)); cd = (struct pair *) calloc(natoms*natoms, sizeof(struct pair)); initialize_torsion(); // these four potentials are all in loop.h initialize_sct(); initialize_aromatic(); initialize_secstr(); read_cluster(); // loop.h if (weight_hbond){ InitializeHydrogenBonding(); // hbonds.h hbond_pair = (struct pair *) calloc(natoms*natoms, sizeof(struct pair)); } /* Determine residue triplets */ DetermineTriplets(); /* Initialize backbone rotation data */ InitializeBackboneRotationData(); /* Get contacts */ Contacts(); // initialize 
contacts setting in contacts.h /* Setup potential and related structures */ SetupAlignmentStructure(); // align.h if (USE_GO_POTENTIAL){ mu = 1; SetupAlignmentPotential(); // align.h } else if (READ_POTENTIAL) ReadPotential(); else SetupMuPotential(); // if (!USE_GO_POTENTIAL) { // potential[bb_O_type][bb_N_type] = hydrogen_bond; // potential[bb_N_type][bb_O_type] = hydrogen_bond; // potential[bb_OXT_type][bb_N_type] = hydrogen_bond; // potential[bb_N_type][bb_OXT_type] = hydrogen_bond; // } TypeContacts(); // record the type of atoms for each contact ; contacts.h native_E = FullAtomEnergy(); // erergy (for different pair of atom types) * pair of atom types // free(amino_acids); return; } void TurnOffNativeClashes(int ReportBack) { int i, j; for(i=0; i<natoms; i++) for(j=i+1; j<natoms; j++) if (data[i][j].clashes) { if (ReportBack) fprintf(STATUS,"native clash\t%d %s - %s\t%d %s - %s \t %.3f %.3f\n",i,native[i].res,native[i].atomname,j,native[j].res,native[j].atomname, sqrt(D2(native[i].xyz,native[j].xyz)), sqrt(hard_core[native[i].smogtype][native[j].smogtype])/100); data[i][j].clashes = data[j][i].clashes = 0; nclashes--; } } /*================================================*/ /* routines for initializing static data */ /*================================================*/ void SetHardCore() { int i, j, k; Float temp; Float rad1, rad2; hard_core = (long int **) calloc(MAX_TYPES,sizeof(long int *)); // MAX_TYPES*MAX_TYPES matrix for(i=0; i<MAX_TYPES; i++) hard_core[i] = (long int *) calloc(MAX_TYPES,sizeof(long int)); /* smogtype is always used for atom sizes */ for(i=0; i<MAX_TYPES; i++) for(j=0; j<MAX_TYPES; j++) { if (!USE_GO_POTENTIAL) { for (k = 0; k < natom_type_list; ++k) if (i == atom_type_list[k].type_num) break; rad1 = radii[TypeAtom(atom_type_list[k].atom_name, atom_type_list[k].res_name)]; for (k = 0; k < natom_type_list; ++k) if (j == atom_type_list[k].type_num) break; rad2 = radii[TypeAtom(atom_type_list[k].atom_name, atom_type_list[k].res_name)]; } 
else { rad1 = radii[native[i].atomtype]; rad2 = radii[native[j].atomtype]; } temp = ALPHA*(rad1 + rad2); hard_core[i][j] = (long int) (temp*temp*INT_PRECISION*INT_PRECISION); // INT_PRECISION in define.h } return; } void SetContactDistance() { Float rad1, rad2; Float temp; Float temp00; int i, j, k; contact_distance = (struct cutoff **) calloc(MAX_TYPES,sizeof(struct cutoff *)); for(i=0; i<MAX_TYPES; i++) contact_distance[i] = (struct cutoff *) calloc(MAX_TYPES,sizeof(struct cutoff)); for(i=0; i<MAX_TYPES; i++) for(j=0; j<MAX_TYPES; j++) { if (!USE_GO_POTENTIAL) { for (k = 0; k < natom_type_list; ++k) if (i == atom_type_list[k].type_num) break; rad1 = radii[TypeAtom(atom_type_list[k].atom_name, atom_type_list[k].res_name)]; for (k = 0; k < natom_type_list; ++k) if (j == atom_type_list[k].type_num) break; rad2 = radii[TypeAtom(atom_type_list[k].atom_name, atom_type_list[k].res_name)]; } else { rad1 = radii[native[i].atomtype]; rad2 = radii[native[j].atomtype]; } temp = LAMBDA*ALPHA*(rad1 + rad2); contact_distance[i][j].b = (long int) (temp*temp*INT_PRECISION*INT_PRECISION); // contact reterion: [a,b] contact_distance[i][j].a = 0; beta = 0.; temp00 = beta*(rad1 + rad2); contact_distance[i][j].a = (long int) (temp00*temp00*INT_PRECISION*INT_PRECISION); } /* sets up proper distances for h-bonding */ if (!USE_GO_POTENTIAL) { // contact_distance[bb_O_type][bb_N_type].b = 3.25*3.25*INT_PRECISION*INT_PRECISION; // contact_distance[bb_N_type][bb_O_type].b = 3.25*3.25*INT_PRECISION*INT_PRECISION; // contact_distance[bb_OXT_type][bb_N_type].b = 3.25*3.25*INT_PRECISION*INT_PRECISION; // contact_distance[bb_N_type][bb_OXT_type].b = 3.25*3.25*INT_PRECISION*INT_PRECISION; // contact_distance[bb_O_type][bb_N_type].a = 2.75*2.75*INT_PRECISION*INT_PRECISION; // contact_distance[bb_N_type][bb_O_type].a = 2.75*2.75*INT_PRECISION*INT_PRECISION; // contact_distance[bb_OXT_type][bb_N_type].a = 2.75*2.75*INT_PRECISION*INT_PRECISION; // contact_distance[bb_N_type][bb_OXT_type].a = 
2.75*2.75*INT_PRECISION*INT_PRECISION; /* turns off all backbone-sidechain contacts */ for (j = bb_N_type; j <= bb_OXT_type; ++j) for (i = 0; i < MAX_TYPES; ++i) if ((i < bb_N_type) || (i > bb_OXT_type)) { contact_distance[i][j].a = contact_distance[j][i].a = 0; contact_distance[i][j].b = contact_distance[j][i].b = 0; } } return; } void SetRadii() { // 13 atom radius types in total (indices 0-12) radii = (Float *) calloc(13,sizeof(Float)); /* carbons */ radii[0] = 1.61; radii[1] = 1.76; radii[2] = radii[3] = radii[4] = 1.88; /* nitrogens */ radii[5] = radii[6] = radii[7] = radii[8] = 1.64; /* oxygens */ radii[9] = 1.42; radii[10] = 1.46; /* sulfurs */ radii[11] = radii[12] = 1.77; return; } /*=============================================*/ /* routines for initializing and reading */ /* the native pdb */ /*=============================================*/ void ReadNative(char *file_name, struct atom *protein, int *Natoms) { // read the pdb file FILE *the_file; char line[250]; *Natoms = 0; if((the_file = fopen(file_name,"r"))==NULL) { fprintf(STATUS,"ERROR: Can't open the file: %s!\n", file_name); exit(1); } while(fgets(line,100,the_file) != NULL) // read at most 99 chars per line ParsePDBLine(line,protein,Natoms); fclose(the_file); return; } void GetResidueInfo(struct atom *Chain, struct residue *Residue, int Nres, int Natoms) { int i,j; for(i=0; i<Natoms; i++) for(j=0; j<40; j++) Residue[Chain[i].res_num].atomnumber[j] = -1; for(i=0; i<Natoms; i++) { j = MatchAtomname(Chain[i].atomname); // j indexes the atom name within the residue (range 0 to 39)
Residue[Chain[i].res_num].atomnumber[j] = i; if (!strncmp(Chain[i].atomname,"CA",2)) { Residue[Chain[i].res_num].CA = i; strcpy(Residue[Chain[i].res_num].res,Chain[i].res); Residue[Chain[i].res_num].amino_num = GetAminoNumber(Chain[i].res); Residue[Chain[i].res_num].psi=0; Residue[Chain[i].res_num].phi=0; Residue[Chain[i].res_num].chi[0]=0; Residue[Chain[i].res_num].chi[1]=0; Residue[Chain[i].res_num].chi[2]=0; Residue[Chain[i].res_num].chi[3]=0; Residue[Chain[i].res_num].is_core=Chain[i].is_core; Residue[Chain[i].res_num].is_designed=Chain[i].is_designed; } else if (!strcmp(Chain[i].atomname, "N")) Residue[Chain[i].res_num].N = i; else if (!strcmp(Chain[i].atomname, "C")) Residue[Chain[i].res_num].C = i; else if (!strcmp(Chain[i].atomname, "O")) Residue[Chain[i].res_num].O = i; else if (!strcmp(Chain[i].atomname, "CB")) Residue[Chain[i].res_num].CB = i; else if (!strcmp(Chain[i].atomname, "CG")) Residue[Chain[i].res_num].CG = i; else if (!strcmp(Chain[i].atomname, "CE1")) Residue[Chain[i].res_num].CE1 = i; else if (!strcmp(Chain[i].atomname, "CE2")) Residue[Chain[i].res_num].CE2 = i; else if (!strcmp(Chain[i].atomname, "CZ2")) Residue[Chain[i].res_num].CZ2 = i; else if (!strcmp(Chain[i].atomname, "CZ3")) Residue[Chain[i].res_num].CZ3 = i; } /* Handle CB atoms for glycine */ for(i=0; i<Nres; i++) { if (!strcmp(Residue[i].res,"GLY")) { Residue[i].CB = -999; } /* AddCB(native,native_residue[i]); */ } return; } void GetPhiPsi(struct atom *Chain, struct residue *Residue, int Nres) { /* Phi/Psi angles are stored in radians */ int i; for(i=0; i<Nres; i++) { if (i!=0) Residue[i].phi = PI/180.0*Phi(Residue[i],Residue[i-1],Chain); else Residue[i].phi = -999; if (i!=Nres-1) Residue[i].psi = PI/180.0*Psi(Residue[i],Residue[i+1],Chain); else Residue[i].psi = -999; } return; } void GetChi() { /* Chi angles are stored in radians */ int i, j; /* this routine also resets the native chis to the current values */ for(i=0; i<nresidues; i++) for(j=0; 
j<amino_acids[native_residue[i].amino_num].ntorsions; j++) { native_residue[i].chi[j] = PI/180.0*CalculateTorsion(native, sidechain_torsion[i][j][0], sidechain_torsion[i][j][1], sidechain_torsion[i][j][2], sidechain_torsion[i][j][3], 0); native_residue[i].native_chi[j] = native_residue[i].chi[j]; native_residue[i].tmpchi[j] = native_residue[i].chi[j]; // why use three arrays? } return; } void ReadAvgChis (void) { // no_chi_list int i; char name[4],line[200]; Float X, Y, Z, W; float value; float sX, sY, sZ, sW; rotamer_angles = calloc(nresidues, sizeof(struct angles)); if((DATA = fopen(rotamer_data_file,"r"))==NULL) // bbind02.May.lib { fprintf(STATUS,"ERROR: Can't open the file: %s!\n", rotamer_data_file); exit(1); } while (fgets(line,150,DATA)!=NULL) { sscanf(line, "%s %*d%*d%*d%*d %*d%*d %f%*f%*f%*f %f%f %f%f %f%f %f%f", name, &value, &X,&sX, &Y,&sY, &Z,&sZ, &W,&sW); for (i = 0; i < nresidues; ++i) { if(strcmp(name,native_residue[i].res)==0) { rotamer_angles[i].chis[no_chi_list[GetAminoNumber(name)]][0] = X; rotamer_angles[i].chis[no_chi_list[GetAminoNumber(name)]][1] = Y; rotamer_angles[i].chis[no_chi_list[GetAminoNumber(name)]][2] = Z; rotamer_angles[i].chis[no_chi_list[GetAminoNumber(name)]][3] = W; } } deviation_ang[GetAminoNumber(name)][no_chi_list[GetAminoNumber(name)]][0] = sX; deviation_ang[GetAminoNumber(name)][no_chi_list[GetAminoNumber(name)]][1] = sY; deviation_ang[GetAminoNumber(name)][no_chi_list[GetAminoNumber(name)]][2] = sZ; deviation_ang[GetAminoNumber(name)][no_chi_list[GetAminoNumber(name)]][3] = sW; prob_ang[GetAminoNumber(name)][no_chi_list[GetAminoNumber(name)]] = value; no_chi_list[GetAminoNumber(name)]++; } fclose(DATA); // for (i = 0; i < nresidues; ++i) // fprintf(STATUS,"%3d %s %2d %3d %8.3f %8.3f %8.3f\n", // i, native_residue[i].res, native_residue[i].amino_num, // no_chi_list[native_residue[i].amino_num], rotamer_angles[i].chis[2][0], // deviation_ang[native_residue[i].amino_num][2][0], prob_ang[native_residue[i].amino_num][2]); // 
exit(0); } void ReadSidechainTorsionData() { // amino_torsion.data // ntorsions: how many torsion angles in the side chain of an amino acid int temp_x, i, j, k; // nrotamers is derived from ntorsions for most amino acids, but some are exceptions, see below line593-601 short temp; char AA[4], BB[4], CC[4], DD[4], symbol[2], name[4], line[250]; if((DATA = fopen(amino_data_file,"r"))==NULL) { fprintf(STATUS,"ERROR: Can't open the file: %s!\n", amino_data_file); exit(1); } temp =0; while (fgets(line,150,DATA)!=NULL) { if (!strncmp(line,"*",1)) temp++; else if (strncmp(line,"!!",2)) { if (temp == 0) sscanf(line,"%*s %*s"); else if (temp == 1) { sscanf(line,"%d %s %*d %d %*s %s",&i, name, &j, symbol); // %*d skips this number in the input strcpy(amino_acids[i].name,name); strcpy(amino_acids[i].symbol,symbol); amino_acids[i].ntorsions = j; if (!strcmp(name,"PRO")) amino_acids[i].nrotamers = 2; else if (!strcmp(name,"TYR") || !strcmp(name,"HIS") || !strcmp(name,"PHE")) amino_acids[i].nrotamers = 6; else if (!strcmp(name,"GLY")) amino_acids[i].nrotamers = 0; else amino_acids[i].nrotamers = (int) three[amino_acids[i].ntorsions]; } else if (temp ==2) { sscanf(line, "%s %d %s %s %s %s %*f", name, &temp_x, AA, BB, CC, DD); for(i=0; i<20; i++) if (!strcmp(name,amino_acids[i].name)) break; strcpy(amino_acids[i].torsion[temp_x][0],AA); strcpy(amino_acids[i].torsion[temp_x][1],BB); strcpy(amino_acids[i].torsion[temp_x][2],CC); strcpy(amino_acids[i].torsion[temp_x][3],DD); } else if (temp == 3) { strncpy(name,line,3); for(i=0; i<20; i++) if (!strncmp(name,amino_acids[i].name,3)) break; strtok(line," \t"); j = atoi(strtok(NULL," \t\n")); amino_acids[i].rotate_natoms[j] = atoi(strtok(NULL," \t\n")); // delimiter: either space, \t, or \n for(k=0; k<amino_acids[i].rotate_natoms[j]; k++) strcpy(amino_acids[i].rotate_atom[j][k],strtok(NULL," \t\n")); } } } fclose(DATA); return; } /*=============================================================*/ /* Read full atom potential */
/*=============================================================*/ void ReadPotential(){ // p178_conrange4_potential_0.995054 record the potential energy of a pair of 84 kinds atoms (84*84) FILE *pot_file; int i, j; float val; /* read max types potential */ if((pot_file = fopen(potential_file, "r"))==NULL) { fprintf(STATUS,"ERROR: Can't open the file: %s!\n", potential_file); exit(1); } while (fscanf(pot_file, "%d %d %f", &i, &j, &val)!= EOF) { potential[i][j] = potential[j][i] = val; } fclose(pot_file); } /*=============================================================*/ /* allocate data structures */ /*=============================================================*/ void InitializeData() { int i; #if DEBUG debug_contacts = (unsigned char **) calloc(natoms,sizeof(unsigned char *)); for(i=0; i<natoms; i++) debug_contacts[i] = (unsigned char *) calloc(natoms,sizeof(unsigned char)); debug_dcontacts = (unsigned char **) calloc(natoms,sizeof(unsigned char *)); // dcontacts?? for(i=0; i<natoms; i++) debug_dcontacts[i] = (unsigned char *) calloc(natoms,sizeof(unsigned char)); debug_clashes = (unsigned char **) calloc(natoms,sizeof(unsigned char *)); for(i=0; i<natoms; i++) debug_clashes[i] = (unsigned char *) calloc(natoms,sizeof(unsigned char)); #endif data = (struct contact_data **) calloc(natoms,sizeof(struct contact_data *)); for(i=0; i<natoms; i++) data[i] = (struct contact_data *) calloc(natoms,sizeof(struct contact_data)); type_contacts = (short **) calloc(MAX_TYPES,sizeof(short *)); for(i=0; i<MAX_TYPES; i++) type_contacts[i] = (short *) calloc(MAX_TYPES,sizeof(short)); is_rotated = (unsigned char *) calloc(natoms, sizeof(unsigned char)); for(i=0; i<natoms; i++) is_rotated[i]=0; return; } /*=============================================================*/ /* routines for initializing the move set */ /*=============================================================*/ void DetermineTriplets() { // record coordinates of a single residue/ double residues or a triplet int i, 
j, k; residue_triplets = (struct triplet *) calloc((nresidues-5)*81, sizeof(struct triplet)); // /* this is way more memory than needed, given that the exact number of triplets is 16n - 55 */ // yes! single:n; double:5n-15; triple:10n-40 total_triplets = 0; for(i = 0; i <nresidues; i++) { // for single residue_triplets[total_triplets].a = i; residue_triplets[total_triplets].b = -1; residue_triplets[total_triplets++].c = -1; } TOTAL_SINGLE_LOOP_MOVES = total_triplets; for(i = 0; i < nresidues; i++) // for double and local (residue separation < 6) for(j = i+1; j < nresidues; j++) if (j<i+6) { residue_triplets[total_triplets].a = i; residue_triplets[total_triplets].b = j; residue_triplets[total_triplets++].c = -1; } TOTAL_DOUBLE_LOOP_MOVES = total_triplets-TOTAL_SINGLE_LOOP_MOVES; // equals the number of local residue pairs for(i = 0; i < nresidues; i++) // for triple and local (all three residues within a separation of 6) for(j = i+1; j < nresidues; j++) for(k = j+1; k < nresidues; k++) if (k<i+6 && j<i+6) { residue_triplets[total_triplets].a = i; residue_triplets[total_triplets].b = j; residue_triplets[total_triplets++].c = k; } TOTAL_TRIPLE_LOOP_MOVES = total_triplets-TOTAL_SINGLE_LOOP_MOVES-TOTAL_DOUBLE_LOOP_MOVES; return; } void InitializeBackboneRotationData() { int i, j, k; // !!! line 731-794: define a global move that rotates the shorter part of the chain divided by the residue; line796-944: a loop move for each residue in the single, double or triple based on the previous definition; line 945-952: init_rotate: rotate the psi angle for the first residue or the phi angle for the last residue of the chain; that is why the latter part of the chain first records phi then psi, and the former part first records psi then phi !!!
/* rotate_atom[0=psi, 1=phi][which residue][list of atoms] */ yang_rotated_atoms = (short *) calloc(natoms, sizeof(short)); yang_not_rotated = (char *) calloc(natoms,sizeof(char)); rotate_natoms = (short **) calloc(2, sizeof(short *)); //2*n1 where n1:nresidues n2:natoms rotate_atom = (short ***) calloc(2, sizeof(short **)); // 2*n1*n2 not_rotated = (char ***) calloc(2,sizeof(char **)); // 2*n1*n2 for(i=0; i<2; i++) { rotate_atom[i] = (short **) calloc(nresidues, sizeof(short *)); not_rotated[i] = (char **) calloc(nresidues, sizeof(char *)); rotate_natoms[i] = (short *) calloc(nresidues,sizeof(short)); for(j=0; j<nresidues; j++) { rotate_atom[i][j] = (short *) calloc(natoms, sizeof(short)); not_rotated[i][j] = (char *) calloc(natoms,sizeof(char)); } } /* rotate the short end of the chain for each residue */ // Yes!! Only rotate the short part of the chain divided by the residue /* and determine which atoms were rotated, for either phi or psi rotation */ for(i=0; i<nresidues; i++) { rotate_natoms[0][i]=0; rotate_natoms[1][i]=0; if (i > nresidues/2.0) { for(j=0; j<natoms; j++) if (native[j].res_num > i) { // record the atoms whose psi and phi must rotate for residues in the latter part of the chain rotate_atom[0][i][rotate_natoms[0][i]++] = j; // rotate_natoms[0][i]++ : running count for residue i, starting at 0 rotate_atom[1][i][rotate_natoms[1][i]++] = j; // same as above } else if (native[j].res_num == i) { // record the atoms in the same residue whose psi/phi must rotate, except for N and CA if (j != native_residue[i].N && j!= native_residue[i].CA) rotate_atom[1][i][rotate_natoms[1][i]++] = j; if (j == native_residue[i].O) rotate_atom[0][i][rotate_natoms[0][i]++] = j; } } else { // same as above; together the two branches rotate only the short part of the chain divided by the residue for(j=0; j<natoms; j++) if (native[j].res_num < i) {
rotate_atom[0][i][rotate_natoms[0][i]++] = j; rotate_atom[1][i][rotate_natoms[1][i]++] = j; } else if (native[j].res_num == i) { if (j != native_residue[i].O && j != native_residue[i].C && j != native_residue[i].CA) rotate_atom[0][i][rotate_natoms[0][i]++] = j; } } } /* set up the not_rotated array, with 1's at every unrotated atom */ for(i=0; i<nresidues; i++) { for(j=0; j<rotate_natoms[0][i]; j++) not_rotated[0][i][rotate_atom[0][i][j]]=1; for(j=0; j<rotate_natoms[1][i]; j++) not_rotated[1][i][rotate_atom[1][i][j]]=1; for(j=0; j<natoms; j++) { not_rotated[0][i][j]=!not_rotated[0][i][j]; not_rotated[1][i][j]=!not_rotated[1][i][j]; // after inversion: 0 = rotated, 1 = not rotated } } /* store atoms rotated by loop moves */ // note the differences among these 5 arrays: the two count arrays are total_triplets*6; the other three are total_triplets*6*natoms loop_rotate_natoms = (short **) calloc(total_triplets, sizeof(short *)); loop_int_rotate_natoms = (short **) calloc(total_triplets, sizeof(short *)); loop_rotate_atoms = (short ***) calloc(total_triplets, sizeof(short **)); loop_int_rotate_atoms = (short ***) calloc(total_triplets, sizeof(short **)); loop_not_rotated = (char ***) calloc(total_triplets, sizeof(char **)); for(i=0; i<total_triplets; i++){ /* there are up to 6 bonds to rotate for each loop move */ // single:phi+psi; double:2*(phi+psi); triple:3*(phi+psi) loop_rotate_natoms[i] = (short *) calloc(6, sizeof(short)); loop_int_rotate_natoms[i] = (short *) calloc(6, sizeof(short)); loop_rotate_atoms[i] = (short **) calloc(6, sizeof(short *)); loop_int_rotate_atoms[i] = (short **) calloc(6, sizeof(short *)); loop_not_rotated[i] = (char **) calloc(6, sizeof(char *)); /* all loop moves have at least 2 bonds */ loop_rotate_atoms[i][0] = (short *) calloc(natoms, sizeof(short)); loop_rotate_atoms[i][1] = (short *) calloc(natoms, sizeof(short)); loop_int_rotate_atoms[i][0] = (short *) calloc(natoms, sizeof(short)); loop_int_rotate_atoms[i][1] = (short *) calloc(natoms,
sizeof(short)); loop_not_rotated[i][0] = (char *) calloc(natoms, sizeof(char)); loop_not_rotated[i][1] = (char *) calloc(natoms, sizeof(char)); if (residue_triplets[i].b >=0) { /* double and triple loop moves */ loop_rotate_atoms[i][2] = (short *) calloc(natoms, sizeof(short)); loop_rotate_atoms[i][3] = (short *) calloc(natoms, sizeof(short)); loop_int_rotate_atoms[i][2] = (short *) calloc(natoms, sizeof(short)); loop_int_rotate_atoms[i][3] = (short *) calloc(natoms, sizeof(short)); loop_not_rotated[i][2] = (char *) calloc(natoms, sizeof(char)); loop_not_rotated[i][3] = (char *) calloc(natoms, sizeof(char)); } else { /* single loop moves */ loop_rotate_atoms[i][2] = (short *) calloc(1, sizeof(short)); /* why are these allocated? */ loop_rotate_atoms[i][3] = (short *) calloc(1, sizeof(short)); loop_int_rotate_atoms[i][2] = (short *) calloc(1, sizeof(short)); loop_int_rotate_atoms[i][3] = (short *) calloc(1, sizeof(short)); loop_not_rotated[i][2] = (char *) calloc(natoms, sizeof(char)); loop_not_rotated[i][3] = (char *) calloc(natoms, sizeof(char)); } if (residue_triplets[i].c >=0) { /* triple loop moves */ loop_rotate_atoms[i][4] = (short *) calloc(natoms, sizeof(short)); loop_rotate_atoms[i][5] = (short *) calloc(natoms, sizeof(short)); loop_int_rotate_atoms[i][4] = (short *) calloc(natoms, sizeof(short)); loop_int_rotate_atoms[i][5] = (short *) calloc(natoms, sizeof(short)); loop_not_rotated[i][4] = (char *) calloc(natoms, sizeof(char)); loop_not_rotated[i][5] = (char *) calloc(natoms, sizeof(char)); } else { /* single and double loop moves */ loop_rotate_atoms[i][4] = (short *) calloc(1, sizeof(short)); loop_rotate_atoms[i][5] = (short *) calloc(1, sizeof(short)); loop_int_rotate_atoms[i][4] = (short *) calloc(1, sizeof(short)); loop_int_rotate_atoms[i][5] = (short *) calloc(1, sizeof(short)); loop_not_rotated[i][4] = (char *) calloc(natoms, sizeof(char)); loop_not_rotated[i][5] = (char *) calloc(natoms, sizeof(char)); } } for(i=0; i<total_triplets; i++) { if 
(residue_triplets[i].a > nresidues/2.0) { /* phi then psi */ loop_rotate_natoms[i][0] = rotate_natoms[1][residue_triplets[i].a]; /* phi */ for(j=0; j < loop_rotate_natoms[i][0]; j++) loop_rotate_atoms[i][0][j] = rotate_atom[1][residue_triplets[i].a][j]; // record all need to rotate phi of atoms (mark with the larger atom index) for a single loop_rotate_natoms[i][1] = rotate_natoms[0][residue_triplets[i].a]; /* psi */ for(j=0; j < loop_rotate_natoms[i][1]; j++) loop_rotate_atoms[i][1][j] = rotate_atom[0][residue_triplets[i].a][j]; // record all need to rotate psi of atoms (mark with the larger atom index) for a single if (residue_triplets[i].b >=0) { loop_rotate_natoms[i][2] = rotate_natoms[1][residue_triplets[i].b]; /* phi */ // record all need to rotate phi of atoms (mark with the larger atom index) for the second residue of a double for(j=0; j < loop_rotate_natoms[i][2]; j++) loop_rotate_atoms[i][2][j] = rotate_atom[1][residue_triplets[i].b][j]; loop_rotate_natoms[i][3] = rotate_natoms[0][residue_triplets[i].b]; /* psi */ // record all need to rotate psi of atoms (mark with the larger atom index) for the second residue of a double for(j=0; j < loop_rotate_natoms[i][3]; j++) loop_rotate_atoms[i][3][j] = rotate_atom[0][residue_triplets[i].b][j]; } else { loop_rotate_natoms[i][2] = 0; loop_rotate_natoms[i][3] = 0; } if (residue_triplets[i].c >=0) { loop_rotate_natoms[i][4] = rotate_natoms[1][residue_triplets[i].c]; /* phi */ // record all need to rotate phi of atoms (mark with the larger atom index) for the third residue of a triple for(j=0; j < loop_rotate_natoms[i][4]; j++) loop_rotate_atoms[i][4][j] = rotate_atom[1][residue_triplets[i].c][j]; loop_rotate_natoms[i][5] = rotate_natoms[0][residue_triplets[i].c]; /* psi */ // record all need to rotate psi of atoms (mark with the larger atom index) for the third residue of a triple for(j=0; j < loop_rotate_natoms[i][5]; j++) loop_rotate_atoms[i][5][j] = rotate_atom[0][residue_triplets[i].c][j]; } else { 
loop_rotate_natoms[i][4] = 0; loop_rotate_natoms[i][5] = 0; } } else { /* psi then phi */ j=0; if (residue_triplets[i].c >=0) { for(k=0; k<natoms; k++) if (native[k].res_num < residue_triplets[i].c) { loop_rotate_atoms[i][j][loop_rotate_natoms[i][j]++] = k; /* psi */ // also record need to rotate psi like above for all, but more clean ,but not very clear loop_rotate_atoms[i][j+1][loop_rotate_natoms[i][j+1]++] = k; /* phi */ } else if (native[k].res_num == residue_triplets[i].c) { if (k != native_residue[residue_triplets[i].c].O && k != native_residue[residue_triplets[i].c].C && k != native_residue[residue_triplets[i].c].CA) // except for O, C and CA in the same residue loop_rotate_atoms[i][j][loop_rotate_natoms[i][j]++] = k; /* psi */ } j+=2; } else { loop_rotate_natoms[i][4] = 0; loop_rotate_natoms[i][5] = 0; } if (residue_triplets[i].b >=0) { for(k=0; k<natoms; k++) if (native[k].res_num < residue_triplets[i].b) { loop_rotate_atoms[i][j][loop_rotate_natoms[i][j]++] = k; /* psi */ loop_rotate_atoms[i][j+1][loop_rotate_natoms[i][j+1]++] = k; /* phi */ } else if (native[k].res_num == residue_triplets[i].b) { if (k != native_residue[residue_triplets[i].b].O && k != native_residue[residue_triplets[i].b].C && k != native_residue[residue_triplets[i].b].CA) loop_rotate_atoms[i][j][loop_rotate_natoms[i][j]++] = k; /* psi */ } j+=2; } else { loop_rotate_natoms[i][2] = 0; loop_rotate_natoms[i][3] = 0; } loop_rotate_natoms[i][j] = rotate_natoms[0][residue_triplets[i].a]; /* psi */ for(k=0; k < loop_rotate_natoms[i][j]; k++) loop_rotate_atoms[i][j][k] = rotate_atom[0][residue_triplets[i].a][k]; loop_rotate_natoms[i][j+1] = rotate_natoms[1][residue_triplets[i].a]; /* phi */ for(k=0; k < loop_rotate_natoms[i][j+1]; k++) loop_rotate_atoms[i][j+1][k] = rotate_atom[1][residue_triplets[i].a][k]; } } for(i=0; i<total_triplets; i++) for(j=0; j<6; j++) { for(k=0; k<loop_rotate_natoms[i][j]; k++) loop_not_rotated[i][j][loop_rotate_atoms[i][j][k]]=1; for(k=0; k<natoms; k++) 
loop_not_rotated[i][j][k]=!loop_not_rotated[i][j][k]; } for(i=0; i<total_triplets; i++) // line945-952: init rotate: rotate a residue using only phi or only psi (such a residue may be at the border of the chain) for(j=5; j>0; j--) { if (loop_rotate_natoms[i][j] && loop_rotate_natoms[i][j-1]) { for(k=0; k<loop_rotate_natoms[i][j-1]; k++) if (loop_not_rotated[i][j][loop_rotate_atoms[i][j-1][k]]) loop_int_rotate_atoms[i][j-1][loop_int_rotate_natoms[i][j-1]++]=loop_rotate_atoms[i][j-1][k]; } } return; } void InitializeSidechainRotationData() { int i, j, k, l, m, n; sidechain_torsion = (short ***) calloc(nresidues,sizeof(short **)); // nresidues * (torsions per amino acid) * 4 (a torsion angle is defined by 4 atoms) for(i=0; i<nresidues; i++) { sidechain_torsion[i] = (short **) calloc(amino_acids[native_residue[i].amino_num].ntorsions,sizeof(short *)); for(j=0; j<amino_acids[native_residue[i].amino_num].ntorsions; j++) sidechain_torsion[i][j] = (short *) calloc(4,sizeof(short)); // 4 atoms define a torsion, see line1040-1041 } sct_E = (short *****) calloc(nresidues,sizeof(short ****)); // nresidues * 12*12*12*12, see line 968-981 for(i=0; i<nresidues; i++) { sct_E[i] = (short ****) calloc(12,sizeof(short ***)); for(j=0; j<12; j++) { sct_E[i][j] = (short ***) calloc(12,sizeof(short **)); for(k=0; k<12; k++) { sct_E[i][j][k] = (short **) calloc(12,sizeof(short *)); for(l=0; l<12; l++) { sct_E[i][j][k][l] = (short *) calloc(12,sizeof(short)); } } } } for(i=0; i<nresidues; i++) { native_residue[i].ntorsions = amino_acids[native_residue[i].amino_num].ntorsions; native_residue[i].nrotamers = amino_acids[native_residue[i].amino_num].nrotamers; /* copy rotamer angles from amino_acid structure into native_residue structure */ for(j=0; j<4; j++) for(k=0; k<4; k++) for(l=0; l<4; l++) for(m=0; m<4; m++) for(n=0; n<4; n++) native_residue[i].avg_angle[j][k][l][m][n] = amino_acids[native_residue[i].amino_num].avg_angle[j][k][l][m][n]; /* rot_position gives the 4-digit base 3
representation of the rotamer */ native_residue[i].rot_position = (short **) calloc(native_residue[i].nrotamers, sizeof(short *)); for(j=0; j<native_residue[i].nrotamers; j++) native_residue[i].rot_position[j] = (short *) calloc(4,sizeof(short)); if (native_residue[i].nrotamers>1) for(k=0; k<native_residue[i].nrotamers; k++) { j=native_residue[i].ntorsions; l = k; do { native_residue[i].rot_position[k][j-1] = l/three[j-1]; l -= three[j-1]*native_residue[i].rot_position[k][j-1]; // ??? equal 0 native_residue[i].rot_position[k][j-1] +=1; j--; } while (j>0); } } /* these structures keep track of which sidechain atoms rotate at each torsion */ rotate_sidechain_atom = (short ***) calloc(nresidues,sizeof(short **)); sidechain_not_rotated = (char ***) calloc(nresidues,sizeof(char **)); for(i=0; i<nresidues; i++) { rotate_sidechain_atom[i] = (short **) calloc(native_residue[i].ntorsions,sizeof(short *)); sidechain_not_rotated[i] = (char **) calloc(native_residue[i].ntorsions,sizeof(char *)); for(j=0; j<native_residue[i].ntorsions; j++) { rotate_sidechain_atom[i][j] = (short *) calloc(amino_acids[native_residue[i].amino_num].rotate_natoms[j],sizeof(short)); // element type is short, not short *; nresidues*ntorsions*(atoms affected by the torsion) sidechain_not_rotated[i][j] = (char *) calloc(natoms,sizeof(char)); // nresidues*ntorsions*natoms (more memory than rotate_sidechain_atom; not-rotated atom = 1, rotated atom = 0, see line1070) } } rotate_sidechain_natoms = (short **) calloc(nresidues,sizeof(short *)); for(i=0; i<nresidues; i++) rotate_sidechain_natoms[i] = (short *) calloc(native_residue[i].ntorsions,sizeof(short)); for(i=0; i<nresidues; i++) { for(j=0; j<native_residue[i].ntorsions; j++) rotate_sidechain_natoms[i][j] = amino_acids[native_residue[i].amino_num].rotate_natoms[j]; for(j=0; j<native_residue[i].ntorsions; j++) for(k=0; k<4; k++) { for(l=0; l<natoms; l++) if (native[l].res_num == i && !strcmp(native[l].atomname,amino_acids[native_residue[i].amino_num].torsion[j][k])) { /* sidechain_torsion
records the 4 atoms that define each torsion */ sidechain_torsion[i][j][k] = l; // see the above note; use four atoms to define a torsion break; } if (l==natoms) { fprintf(STATUS,"WARNING -- atom %s, residue %d %s not found!\n",amino_acids[native_residue[i].amino_num].torsion[j][k],i,native_residue[i].res); exit(1); } } for(j=0; j<native_residue[i].ntorsions; j++) for(k=0; k<rotate_sidechain_natoms[i][j]; k++) { for(l=0; l<natoms; l++) if (native[l].res_num == i && !strcmp(native[l].atomname,amino_acids[native_residue[i].amino_num].rotate_atom[j][k])) { rotate_sidechain_atom[i][j][k] = l; // record the atom affected by the torsion, see line 1025 break; } if (l==natoms) { fprintf(STATUS,"WARNING -- atom %s, residue %d %s not found!\n",amino_acids[native_residue[i].amino_num].rotate_atom[j][k],i,native_residue[i].res); exit(1); } } } for(i=0; i<nresidues; i++) for(j=0; j< native_residue[i].ntorsions; j++) { for(k=0; k<natoms; k++) sidechain_not_rotated[i][j][k]=1; for(k=0; k<rotate_sidechain_natoms[i][j]; k++) sidechain_not_rotated[i][j][rotate_sidechain_atom[i][j][k]]=0; } return; } /*=============================================================*/ /* set up correlation data */ /*=============================================================*/ int SkipSelf(int s, int b, struct atom *Protein, struct residue *Residue) { /* returns 1 if sidechain-backbone atom pair is separated by less than 3 bonds, or 0 if not */ if ((b == Residue[Protein[b].res_num].C || b == Residue[Protein[b].res_num].N || b == Residue[Protein[b].res_num].CA) && s == Residue[Protein[s].res_num].CB) return 1; else if (Residue[Protein[s].res_num].amino_num==14) return 1; else if (b == Residue[Protein[b].res_num].CA && Protein[s].atomname[1] == 'G') return 1; else return 0; } int SkipNeighbors(int i, int j, struct atom *Protein, struct residue *Residue) { int first,second; if (Protein[i].res_num < Protein[j].res_num) { first = i; second = j; } else { second = i; first = j; } if (!(first == 
Residue[Protein[first].res_num].C && !strcmp(Protein[second].atomname,"CD") && Residue[Protein[second].res_num].amino_num == 14) && !(first == Residue[Protein[first].res_num].CA && !strcmp(Protein[second].atomname,"CD") && Residue[Protein[second].res_num].amino_num == 14)) if ((first == Residue[Protein[first].res_num].N) || (second != Residue[Protein[second].res_num].CA && second != Residue[Protein[second].res_num].N) || Protein[first].is_sidechain || Protein[second].is_sidechain) return 0; return 1; } int Disulfide(int a, int b, struct atom *Protein, struct residue *Residue) { if ((Residue[Protein[a].res_num].amino_num == 4) && (Residue[Protein[b].res_num].amino_num == 4) && !strcmp(Protein[a].atomname,"SG") && !strcmp(Protein[b].atomname,"SG")) return 1; else return 0; } void CheckCorrelation(struct contact_data **Data, struct atom *Protein, struct residue *Residue, int Natoms) { int i, j; /*====================================================================================*/ /* non-local residues check clashes and contacts of all non-disulfide pairs */ /* non-local disulfides check only contacts of S-S atom pairs */ /* self check clashes of only non-bonded sidechain-backbone pairs */ /* i-i+1 check clashes of non-bonded pairs */ /* i-i+2 check clashes of all pairs */ /* relevant bb atoms check clashes of all pairs */ /*====================================================================================*/ for (i=0; i<Natoms; i++) for(j=i+1; j<Natoms; j++) { /* non-local residues */ if (fabs(Protein[i].res_num-Protein[j].res_num)>SKIP_LOCAL_CONTACT_RANGE) { if (!Disulfide(i,j,Protein,Residue)) { Data[i][j].check_clashes=1; Data[j][i].check_clashes=1; Data[i][j].check_contacts=1; Data[j][i].check_contacts=1; } else { /* disulfide */ Data[i][j].disulfide=1; Data[j][i].disulfide=1; Data[i][j].check_clashes=0; Data[j][i].check_clashes=0; Data[i][j].check_contacts=1; Data[j][i].check_contacts=1; } } /* self */ else if (Protein[i].res_num == Protein[j].res_num) { if 
(Protein[i].is_sidechain && !Protein[j].is_sidechain) { /* sidechain - backbone */ // ??? if (!SkipSelf(i,j,Protein,Residue)) { Data[i][j].check_clashes=1; Data[j][i].check_clashes=1; Data[i][j].check_contacts=0; Data[j][i].check_contacts=0; } } else if (!Protein[i].is_sidechain && Protein[j].is_sidechain) { /* backbone - sidechain */ // ??? if (!SkipSelf(j,i,Protein,Residue)) { Data[i][j].check_clashes=1; Data[j][i].check_clashes=1; Data[i][j].check_contacts=0; Data[j][i].check_contacts=0; } } } /* i-i+1 */ else if (fabs(Protein[i].res_num-Protein[j].res_num)==1) { if (!SkipNeighbors(i,j,Protein,Residue)) { Data[i][j].check_clashes=1; Data[j][i].check_clashes=1; Data[i][j].check_contacts=0; Data[j][i].check_contacts=0; } } /* all other local residues */ else if (fabs(Protein[i].res_num-Protein[j].res_num)<=SKIP_LOCAL_CONTACT_RANGE && fabs(Protein[i].res_num-Protein[j].res_num)>=2) { Data[i][j].check_clashes=1; Data[j][i].check_clashes=1; Data[i][j].check_contacts=0; Data[j][i].check_contacts=0; } else { fprintf(STATUS,"Uncategorizable pair\n"); exit(0); } /* Skip relevant backbone contacts */ if (!IsSidechainAtom(Protein[i].atomname) && !IsSidechainAtom(Protein[j].atomname) && fabs(Protein[i].res_num - Protein[j].res_num) <= SKIP_BB_CONTACT_RANGE) { Data[i][j].check_contacts=0; Data[j][i].check_contacts=0; } /* up to here, backbone contacts below the SKIP_BB_CONTACT_RANGE are turned off */ } return; } void SetProgramOptions(int argc, char *argv[]) { char line[150]; char token[50]; char name[50]; float value; int find_yang_move = 0; int find_yang_scale = 0; int find_frag_move = 0; char cfg_file[200]; int l=0, ls, MPI_STOP=0; memset(cfg_file,'\0',200); if(myrank == 0) { if(argc != 2) { /* check argc before dereferencing argv[1] */ fprintf(STATUS,"ERROR!!! Usage is like this: ./fold_potential config_file, argc : %d\n", argc); for(l=0;l<argc;l++){ fprintf(STATUS,"argc : %3d, argv : %s\n", l, argv[l]); } MPI_STOP=1; } else strcpy(cfg_file,argv[1]); } ierr=MPI_Bcast(&MPI_STOP,1,MPI_INT,0,mpi_world_comm); // broadcast a message from the process with rank "root" to all other processes of the communicator if(MPI_STOP == 1) { MPI_Finalize(); exit(1); } else { ierr=MPI_Bcast(cfg_file,200,MPI_CHAR,0,mpi_world_comm); } Tnode = (float *) calloc(nprocs,sizeof(float)); Enode = (float *) calloc(nprocs,sizeof(float)); replica_index = (int *) calloc(nprocs,sizeof(int)); accepted_replica = (int *) calloc(nprocs,sizeof(int)); rejected_replica = (int *) calloc(nprocs,sizeof(int)); for(l=0;l<nprocs;l++){ accepted_replica[l]=rejected_replica[l]=0; } if((DATA = fopen(cfg_file,"r"))==NULL) { fprintf(STATUS,"ERROR: Can't open the file: %s!\n", cfg_file); exit(1); } fprintf(STATUS,"Temperature range for replica exchange!\n"); for(l=0;l<nprocs;l++){ Tnode[l]=MC_TEMP_MIN * expf(l*logf(10.0)/(nprocs-1)); fprintf(STATUS,"%4d : %5.3f\n", l, Tnode[l]); } fprintf(STATUS,"myrank : %4d, cfg_file : %s\n", myrank, cfg_file); fflush(STATUS); while(fgets(line,150,DATA) != NULL) { /* fprintf(STATUS,"%s",line);*/ if(strncmp(line,"!",1)){ sscanf(line,"%s %f",token,&value); if (!strcmp(token,"NATIVE_FILE")) { sscanf(line,"%*s %s",name); strcpy(native_file,name); } else if (!strcmp(token,"STRUCTURE_FILE")) { sscanf(line,"%*s %s",name); strcpy(structure_file,name); } else if (!strcmp(token,"TEMPLATE_FILE")) { sscanf(line,"%*s %s",name); strcpy(template_file,name); } else if (!strcmp(token,"ALIGNMENT_FILE")) { sscanf(line,"%*s %s",name); strcpy(alignment_file,name); } else if (!strcmp(token,"FRAGLIB_FILE")) { sscanf(line,"%*s %s",name); strcpy(fraglib_file,name); //printf("%s\n",fraglib_file); } else if (!strcmp(token,"TRIPLET_ENERGY_FILE")) { sscanf(line,"%*s %s",name); strcpy(triplet_file,name); } else if (!strcmp(token,"SIDECHAIN_TORSION_FILE")) { sscanf(line,"%*s %s",name);
strcpy(sctorsion_file,name); } else if (!strcmp(token,"SECONDARY_STRUCTURE_FILE")) { sscanf(line,"%*s %s",name); strcpy(sec_str_file,name); } else if (!strcmp(token,"AMINO_DATA_FILE")) { sscanf(line,"%*s %s",name); strcpy(amino_data_file,name); } else if (!strcmp(token,"ATOM_TYPE_FILE")) { sscanf(line,"%*s %s",name); strcpy(atom_type_file,name); } else if (!strcmp(token,"ROTAMER_DATA_FILE")) { sscanf(line,"%*s %s",name); strcpy(rotamer_data_file,name); } else if (!strcmp(token,"PDB_OUT_FILE")) { sscanf(line,"%*s %s",name); strcpy(pdb_out_file,name); ls = strlen(name); sprintf(pdb_out_file+ls,"_%5.3f", MC_TEMP); } else if (!strcmp(token,"POTENTIAL_DATA")) { sscanf(line,"%*s %s",name); strcpy(potential_file,name); } else if (!strcmp(token,"HELICITY_DATA")) { sscanf(line,"%*s %s",name); strcpy(helicity_data,name); } else if (!strcmp(token,"HYDROGEN_BONDING_DATA")) { sscanf(line,"%*s %s",name); strcpy(hydrogen_bonding_data,name); } else if (!strcmp(token,"AROMATIC_FILE")) { sscanf(line,"%*s %s",name); strcpy(aromatic_file,name); } else if (!strcmp(token,"PROTEIN_NAME")) { sscanf(line,"%*s %s",name); strcpy(PROTEIN_NAME,name); } else if (!strcmp(token,"PRINT_PDB")) PRINT_PDB = (int) value; else if (!strcmp(token,"MC_STEPS")) MC_STEPS = (long int) value; else if (!strcmp(token,"MC_ANNEAL_STEPS")) MC_ANNEAL_STEPS = (long int) value; else if (!strcmp(token,"MC_STEP_SIZE")) STEP_SIZE = value*PI/180.0; else if (!strcmp(token,"SIDECHAIN_NOISE")) SIDECHAIN_NOISE = value*PI/180.0; else if (!strcmp(token,"SIDECHAIN_MOVES")) SIDECHAIN_MOVES = (int) value; else if (!strcmp(token,"MC_PRINT_STEPS")) MC_PRINT_STEPS = (int) value; else if (!strcmp(token,"MC_PDB_PRINT_STEPS")) MC_PDB_PRINT_STEPS = (int) value; else if (!strcmp(token,"ALPHA")) ALPHA = value; else if (!strcmp(token,"LAMBDA")) LAMBDA = value; else if (!strcmp(token,"NATIVE_ATTRACTION")) NATIVE_ATTRACTION = value; else if (!strcmp(token,"NON_NATIVE_REPULSION")) NON_NATIVE_REPULSION = value; else if 
(!strcmp(token,"USE_ROT_PROB")) USE_ROT_PROB = value; else if (!strcmp(token,"SEQ_DEP_HB")) SEQ_DEP_HB = value; else if (!strcmp(token,"CLASH_WEIGHT")) weight_clash = value; else if (!strcmp(token,"HYDROGEN_BOND")) hydrogen_bond = value; else if (!strcmp(token,"RMS_WEIGHT")) weight_rms = value; else if (!strcmp(token,"NON_SPECIFIC_ENERGY")) NON_SPECIFIC_ENERGY = value; else if (!strcmp(token,"USE_GLOBAL_BB_MOVES")) USE_GLOBAL_BB_MOVES = value; else if (!strcmp(token,"YANG_MOVE")) { YANG_MOVE = value; find_yang_move = 1; } else if (!strcmp(token,"FRAG_MOVE")) { FRAG_MOVE = value; find_frag_move = 1; } else if (!strcmp(token,"USE_CRANK_MOVE")) { USE_CRANK_MOVE = value; } else if (!strcmp(token,"NOISE_RANGE_PHI")) { NOISE_RANGE_PHI = value; } else if (!strcmp(token,"NOISE_RANGE_PSI")) { NOISE_RANGE_PSI = value; } else if (!strcmp(token,"YANG_SCALE")) { YANG_SCALE = value; find_yang_scale = 1; } else if (!strcmp(token,"SKIP_LOCAL_CONTACT_RANGE")) SKIP_LOCAL_CONTACT_RANGE = (int) value; else if (!strcmp(token,"SKIP_BB_CONTACT_RANGE")) SKIP_BB_CONTACT_RANGE = (int) value; else if (!strcmp(token,"USE_SIDECHAINS")) USE_SIDECHAINS = (int) value; else if (!strcmp(token,"NO_NEW_CLASHES")) NO_NEW_CLASHES = (int) value; else if (!strcmp(token,"USE_ROTAMERS")) USE_ROTAMERS = (int) value; else if (!strcmp(token,"READ_POTENTIAL")) READ_POTENTIAL = (int) value; else if (!strcmp(token,"USE_GO_POTENTIAL")) USE_GO_POTENTIAL = (int) value; else if (!strcmp(token,"MC_REPLICA_STEPS")) MC_REPLICA_STEPS = (int) value; else if (!strcmp(token,"MAX_EXCHANGE")) MAX_EXCHANGE = (int) value; else { printf("config file option not found: %s\n",token); //exit(0); } } } fclose(DATA); if (find_yang_move == 0) { fprintf(STATUS,"There is nothing on YANG_MOVE!\n"); exit(1); } if (find_yang_scale == 0) { fprintf(STATUS,"There is nothing on YANG_SCALE!\n"); exit(1); } if (find_frag_move == 0) { fprintf(STATUS,"There is nothing on FRAG_MOVE!\n"); exit(1); } /* lattice parameters */ if (DISTANCE_DEPENDENCE) 
LATTICE_SIZE = 1.0/5.25; else LATTICE_SIZE = 1/(LAMBDA*ALPHA*(1.88+1.88)); MATRIX_SIZE = 20; HALF_MATRIX_SIZE = MATRIX_SIZE/2; return; }
ElwynWang/REMC
src_mpi/init.h
C
gpl-3.0
54,046
/* * * Copyright (C) 2011 HTC Corporation. * * This software is licensed under the terms of the GNU General Public * License version 2, as published by the Free Software Foundation, and * may be copied, distributed, and modified under those terms. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * */ #include <linux/init.h> #include <linux/module.h> #include <linux/kernel.h> #include <linux/err.h> #include <linux/power_supply.h> #include <linux/platform_device.h> #include <linux/debugfs.h> #include <linux/wakelock.h> #include <linux/gpio.h> #include <linux/rtc.h> #include <linux/workqueue.h> #include <mach/htc_battery_core.h> #include <linux/android_alarm.h> #include <mach/board_htc.h> static ssize_t htc_battery_show_property(struct device *dev, struct device_attribute *attr, char *buf); static ssize_t htc_battery_rt_attr_show(struct device *dev, struct device_attribute *attr, char *buf); static int htc_power_get_property(struct power_supply *psy, enum power_supply_property psp, union power_supply_propval *val); static int htc_battery_get_property(struct power_supply *psy, enum power_supply_property psp, union power_supply_propval *val); static ssize_t htc_battery_charger_ctrl_timer(struct device *dev, struct device_attribute *attr, const char *buf, size_t count); #if 1 #define HTC_BATTERY_ATTR(_name) \ { \ .attr = { .name = #_name, .mode = S_IRUGO}, \ .show = htc_battery_show_property, \ .store = NULL, \ } #else #define HTC_BATTERY_ATTR(_name) \ { \ .attr = { .name = #_name, .mode = S_IRUGO, .owner = THIS_MODULE }, \ .show = htc_battery_show_property, \ .store = NULL, \ } #endif struct htc_battery_core_info { int present; int htc_charge_full; unsigned long update_time; struct mutex info_lock; struct battery_info_reply rep; struct htc_battery_core func; }; static struct 
htc_battery_core_info battery_core_info; static int battery_register = 1; static int battery_over_loading; static struct alarm batt_charger_ctrl_alarm; static struct work_struct batt_charger_ctrl_work; struct workqueue_struct *batt_charger_ctrl_wq; static unsigned int charger_ctrl_stat; static int test_power_monitor; static enum power_supply_property htc_battery_properties[] = { POWER_SUPPLY_PROP_STATUS, POWER_SUPPLY_PROP_HEALTH, POWER_SUPPLY_PROP_PRESENT, POWER_SUPPLY_PROP_TECHNOLOGY, POWER_SUPPLY_PROP_CAPACITY, POWER_SUPPLY_PROP_OVERLOAD, }; static enum power_supply_property htc_power_properties[] = { POWER_SUPPLY_PROP_ONLINE, }; static char *supply_list[] = { "battery", }; static struct power_supply htc_power_supplies[] = { { .name = "battery", .type = POWER_SUPPLY_TYPE_BATTERY, .properties = htc_battery_properties, .num_properties = ARRAY_SIZE(htc_battery_properties), .get_property = htc_battery_get_property, }, { .name = "usb", .type = POWER_SUPPLY_TYPE_USB, .supplied_to = supply_list, .num_supplicants = ARRAY_SIZE(supply_list), .properties = htc_power_properties, .num_properties = ARRAY_SIZE(htc_power_properties), .get_property = htc_power_get_property, }, { .name = "ac", .type = POWER_SUPPLY_TYPE_MAINS, .supplied_to = supply_list, .num_supplicants = ARRAY_SIZE(supply_list), .properties = htc_power_properties, .num_properties = ARRAY_SIZE(htc_power_properties), .get_property = htc_power_get_property, }, { .name = "wireless", .type = POWER_SUPPLY_TYPE_WIRELESS, .supplied_to = supply_list, .num_supplicants = ARRAY_SIZE(supply_list), .properties = htc_power_properties, .num_properties = ARRAY_SIZE(htc_power_properties), .get_property = htc_power_get_property, }, }; static BLOCKING_NOTIFIER_HEAD(wireless_charger_notifier_list); int register_notifier_wireless_charger(struct notifier_block *nb) { return blocking_notifier_chain_register(&wireless_charger_notifier_list, nb); } int unregister_notifier_wireless_charger(struct notifier_block *nb) { return 
blocking_notifier_chain_unregister(&wireless_charger_notifier_list, nb); } static int zcharge_enabled; int htc_battery_get_zcharge_mode(void) { return zcharge_enabled; } static int __init enable_zcharge_setup(char *str) { int rc; unsigned long cal; rc = strict_strtoul(str, 10, &cal); if (rc) return rc; zcharge_enabled = cal; return 1; } __setup("enable_zcharge=", enable_zcharge_setup); static int htc_battery_get_charging_status(void) { enum charger_type_t charger; int ret; mutex_lock(&battery_core_info.info_lock); charger = battery_core_info.rep.charging_source; mutex_unlock(&battery_core_info.info_lock); if (battery_core_info.rep.batt_id == 255) charger = CHARGER_UNKNOWN; switch (charger) { case CHARGER_BATTERY: ret = POWER_SUPPLY_STATUS_NOT_CHARGING; break; case CHARGER_USB: case CHARGER_AC: case CHARGER_9V_AC: case CHARGER_WIRELESS: case CHARGER_MHL_AC: case CHARGER_DETECTING: case CHARGER_UNKNOWN_USB: if (battery_core_info.htc_charge_full) ret = POWER_SUPPLY_STATUS_FULL; else { if (battery_core_info.rep.charging_enabled != 0) ret = POWER_SUPPLY_STATUS_CHARGING; else ret = POWER_SUPPLY_STATUS_DISCHARGING; } break; default: ret = POWER_SUPPLY_STATUS_UNKNOWN; } return ret; } static ssize_t htc_battery_show_batt_attr(struct device *dev, struct device_attribute *attr, char *buf) { return battery_core_info.func.func_show_batt_attr(attr, buf); } static ssize_t htc_battery_show_cc_attr(struct device *dev, struct device_attribute *attr, char *buf) { return battery_core_info.func.func_show_cc_attr(attr, buf); } static ssize_t htc_battery_set_delta(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { unsigned long delta = 0; delta = simple_strtoul(buf, NULL, 10); if (delta > 100) return -EINVAL; return count; } static ssize_t htc_battery_debug_flag(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { unsigned long debug_flag; debug_flag = simple_strtoul(buf, NULL, 10); if (debug_flag > 100 || debug_flag == 0) 
return -EINVAL; return 0; } static ssize_t htc_battery_set_full_level(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { int rc = 0; unsigned long percent = 100; rc = strict_strtoul(buf, 10, &percent); if (rc) return rc; if (percent > 100 || percent == 0) return -EINVAL; if (!battery_core_info.func.func_set_full_level) { BATT_ERR("No set full level function!"); return -ENOENT; } battery_core_info.func.func_set_full_level(percent); return count; } int htc_battery_charger_disable() { int rc = 0; if (!battery_core_info.func.func_charger_control) { BATT_ERR("No charger control function!"); return -ENOENT; } rc = battery_core_info.func.func_charger_control(STOP_CHARGER); if (rc < 0) BATT_ERR("charger control failed!"); return rc; } int htc_battery_pwrsrc_disable() { int rc = 0; if (!battery_core_info.func.func_charger_control) { BATT_ERR("No charger control function!"); return -ENOENT; } rc = battery_core_info.func.func_charger_control(DISABLE_PWRSRC); if (rc < 0) BATT_ERR("charger control failed!"); return rc; } static ssize_t htc_battery_charger_stat(struct device *dev, struct device_attribute *attr, char *buf) { int i = 0; i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", charger_ctrl_stat); return i; } static ssize_t htc_battery_charger_switch(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { unsigned long enable = 0; int rc = 0; rc = strict_strtoul(buf, 10, &enable); if (rc) return rc; BATT_LOG("Set charger_control:%lu", enable); if (enable >= END_CHARGER) return -EINVAL; if (!battery_core_info.func.func_charger_control) { BATT_ERR("No charger control function!"); return -ENOENT; } rc = battery_core_info.func.func_charger_control(enable); if (rc < 0) { BATT_ERR("charger control failed!"); return rc; } charger_ctrl_stat = enable; alarm_cancel(&batt_charger_ctrl_alarm); return count; } static ssize_t htc_battery_set_phone_call(struct device *dev, struct device_attribute *attr, const char *buf, size_t 
count) { unsigned long phone_call = 0; int rc = 0; rc = strict_strtoul(buf, 10, &phone_call); if (rc) return rc; BATT_LOG("set context phone_call=%lu", phone_call); if (!battery_core_info.func.func_context_event_handler) { BATT_ERR("No context_event_notify function!"); return -ENOENT; } if (phone_call) battery_core_info.func.func_context_event_handler(EVENT_TALK_START); else battery_core_info.func.func_context_event_handler(EVENT_TALK_STOP); return count; } static ssize_t htc_battery_set_network_search(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { unsigned long network_search = 0; int rc = 0; rc = strict_strtoul(buf, 10, &network_search); if (rc) return rc; BATT_LOG("Set context network_search=%lu", network_search); if (!battery_core_info.func.func_context_event_handler) { BATT_ERR("No context_event_notify function!"); return -ENOENT; } if (network_search) { battery_core_info.func.func_context_event_handler( EVENT_NETWORK_SEARCH_START); } else { battery_core_info.func.func_context_event_handler( EVENT_NETWORK_SEARCH_STOP); } return count; } static ssize_t htc_battery_set_navigation(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { unsigned long navigation = 0; int rc = 0; rc = strict_strtoul(buf, 10, &navigation); if (rc) return rc; BATT_LOG("Set context navigation=%lu", navigation); if (!battery_core_info.func.func_context_event_handler) { BATT_ERR("No context_event_notify function!"); return -ENOENT; } if (navigation) { battery_core_info.func.func_context_event_handler( EVENT_NAVIGATION_START); } else { battery_core_info.func.func_context_event_handler( EVENT_NAVIGATION_STOP); } return count; } static struct device_attribute htc_battery_attrs[] = { HTC_BATTERY_ATTR(batt_id), HTC_BATTERY_ATTR(batt_vol), HTC_BATTERY_ATTR(batt_temp), HTC_BATTERY_ATTR(batt_current), HTC_BATTERY_ATTR(charging_source), HTC_BATTERY_ATTR(charging_enabled), HTC_BATTERY_ATTR(full_bat), HTC_BATTERY_ATTR(over_vchg), 
HTC_BATTERY_ATTR(batt_state), __ATTR(batt_attr_text, S_IRUGO, htc_battery_show_batt_attr, NULL), __ATTR(batt_power_meter, S_IRUGO, htc_battery_show_cc_attr, NULL), }; static struct device_attribute htc_set_delta_attrs[] = { __ATTR(delta, S_IWUSR | S_IWGRP, NULL, htc_battery_set_delta), __ATTR(full_level, S_IWUSR | S_IWGRP, NULL, htc_battery_set_full_level), __ATTR(batt_debug_flag, S_IWUSR | S_IWGRP, NULL, htc_battery_debug_flag), __ATTR(charger_control, S_IWUSR | S_IWGRP, htc_battery_charger_stat, htc_battery_charger_switch), __ATTR(charger_timer, S_IWUSR | S_IWGRP, NULL, htc_battery_charger_ctrl_timer), __ATTR(phone_call, S_IWUSR | S_IWGRP, NULL, htc_battery_set_phone_call), __ATTR(network_search, S_IWUSR | S_IWGRP, NULL, htc_battery_set_network_search), __ATTR(navigation, S_IWUSR | S_IWGRP, NULL, htc_battery_set_navigation), }; static struct device_attribute htc_battery_rt_attrs[] = { __ATTR(batt_vol_now, S_IRUGO, htc_battery_rt_attr_show, NULL), __ATTR(batt_current_now, S_IRUGO, htc_battery_rt_attr_show, NULL), __ATTR(batt_temp_now, S_IRUGO, htc_battery_rt_attr_show, NULL), }; static int htc_battery_create_attrs(struct device *dev) { int i = 0, j = 0, k = 0, rc = 0; for (i = 0; i < ARRAY_SIZE(htc_battery_attrs); i++) { rc = device_create_file(dev, &htc_battery_attrs[i]); if (rc) goto htc_attrs_failed; } for (j = 0; j < ARRAY_SIZE(htc_set_delta_attrs); j++) { rc = device_create_file(dev, &htc_set_delta_attrs[j]); if (rc) goto htc_delta_attrs_failed; } for (k = 0; k < ARRAY_SIZE(htc_battery_rt_attrs); k++) { rc = device_create_file(dev, &htc_battery_rt_attrs[k]); if (rc) goto htc_rt_attrs_failed; } goto succeed; htc_rt_attrs_failed: while (k--) device_remove_file(dev, &htc_battery_rt_attrs[k]); htc_delta_attrs_failed: while (j--) device_remove_file(dev, &htc_set_delta_attrs[j]); htc_attrs_failed: while (i--) device_remove_file(dev, &htc_battery_attrs[i]); succeed: return rc; } static int htc_battery_get_property(struct power_supply *psy, enum power_supply_property 
psp, union power_supply_propval *val) { switch (psp) { case POWER_SUPPLY_PROP_STATUS: val->intval = htc_battery_get_charging_status(); break; case POWER_SUPPLY_PROP_HEALTH: val->intval = POWER_SUPPLY_HEALTH_GOOD; if (battery_core_info.rep.temp_fault != -1) { if (battery_core_info.rep.temp_fault == 1) val->intval = POWER_SUPPLY_HEALTH_OVERHEAT; } else if (battery_core_info.rep.batt_temp >= 480 || battery_core_info.rep.batt_temp <= 0) val->intval = POWER_SUPPLY_HEALTH_OVERHEAT; break; case POWER_SUPPLY_PROP_PRESENT: val->intval = battery_core_info.present; break; case POWER_SUPPLY_PROP_TECHNOLOGY: val->intval = POWER_SUPPLY_TECHNOLOGY_LION; break; case POWER_SUPPLY_PROP_CAPACITY: mutex_lock(&battery_core_info.info_lock); val->intval = battery_core_info.rep.level; mutex_unlock(&battery_core_info.info_lock); break; case POWER_SUPPLY_PROP_OVERLOAD: val->intval = battery_core_info.rep.overload; break; default: return -EINVAL; } return 0; } static int htc_power_get_property(struct power_supply *psy, enum power_supply_property psp, union power_supply_propval *val) { enum charger_type_t charger; mutex_lock(&battery_core_info.info_lock); charger = battery_core_info.rep.charging_source; #if 0 if (battery_core_info.rep.batt_id == 255) charger = CHARGER_BATTERY; #endif mutex_unlock(&battery_core_info.info_lock); switch (psp) { case POWER_SUPPLY_PROP_ONLINE: if (psy->type == POWER_SUPPLY_TYPE_MAINS) { if (charger == CHARGER_AC || charger == CHARGER_9V_AC || charger == CHARGER_MHL_AC) val->intval = 1; else val->intval = 0; } else if (psy->type == POWER_SUPPLY_TYPE_USB) { if (charger == CHARGER_USB || charger == CHARGER_UNKNOWN_USB || charger == CHARGER_DETECTING) val->intval = 1; else val->intval = 0; } else if (psy->type == POWER_SUPPLY_TYPE_WIRELESS) val->intval = (charger == CHARGER_WIRELESS ? 
1 : 0); else val->intval = 0; break; default: return -EINVAL; } return 0; } static ssize_t htc_battery_show_property(struct device *dev, struct device_attribute *attr, char *buf) { int i = 0; const ptrdiff_t off = attr - htc_battery_attrs; mutex_lock(&battery_core_info.info_lock); switch (off) { case BATT_ID: i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.batt_id); break; case BATT_VOL: i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.batt_vol); break; case BATT_TEMP: i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.batt_temp); break; case BATT_CURRENT: i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.batt_current); break; case CHARGING_SOURCE: if(battery_core_info.rep.charging_source == CHARGER_MHL_AC) { i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", CHARGER_AC); } else { i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.charging_source); } break; case CHARGING_ENABLED: i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.charging_enabled); break; case FULL_BAT: i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.full_bat); break; case OVER_VCHG: i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.over_vchg); break; case BATT_STATE: i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.batt_state); break; case OVERLOAD: i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", battery_core_info.rep.overload); break; default: i = -EINVAL; } mutex_unlock(&battery_core_info.info_lock); if (i < 0) BATT_ERR("%s: battery: attribute is not supported: %d", __func__, off); return i; } static ssize_t htc_battery_rt_attr_show(struct device *dev, struct device_attribute *attr, char *buf) { int i = 0; int val = 0; int rc = 0; const ptrdiff_t attr_index = attr - htc_battery_rt_attrs; if (!battery_core_info.func.func_get_batt_rt_attr) { BATT_ERR("%s: func_get_batt_rt_attr does not exist", __func__); return -EINVAL; } rc = 
battery_core_info.func.func_get_batt_rt_attr(attr_index, &val); if (rc) { BATT_ERR("%s: get_batt_rt_attrs[%d] failed", __func__, attr_index); return -EINVAL; } i += scnprintf(buf + i, PAGE_SIZE - i, "%d\n", val); return i; } static ssize_t htc_battery_charger_ctrl_timer(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { int rc; unsigned long time_out = 0; ktime_t interval; ktime_t next_alarm; rc = strict_strtoul(buf, 10, &time_out); if (rc) return rc; if (time_out > 65536) return -EINVAL; if (time_out > 0) { rc = battery_core_info.func.func_charger_control(STOP_CHARGER); if (rc < 0) { BATT_ERR("charger control failed!"); return rc; } interval = ktime_set(time_out, 0); next_alarm = ktime_add(alarm_get_elapsed_realtime(), interval); alarm_start_range(&batt_charger_ctrl_alarm, next_alarm, next_alarm); charger_ctrl_stat = STOP_CHARGER; } else if (time_out == 0) { rc = battery_core_info.func.func_charger_control( ENABLE_CHARGER); if (rc < 0) { BATT_ERR("charger control failed!"); return rc; } alarm_cancel(&batt_charger_ctrl_alarm); charger_ctrl_stat = ENABLE_CHARGER; } return count; } static void batt_charger_ctrl_func(struct work_struct *work) { int rc; rc = battery_core_info.func.func_charger_control(ENABLE_CHARGER); if (rc) { BATT_ERR("charger control failed!"); return; } charger_ctrl_stat = (unsigned int)ENABLE_CHARGER; } static void batt_charger_ctrl_alarm_handler(struct alarm *alarm) { BATT_LOG("charger control alarm is timeout."); queue_work(batt_charger_ctrl_wq, &batt_charger_ctrl_work); } int htc_battery_core_update_changed(void) { struct battery_info_reply new_batt_info_rep; int is_send_batt_uevent = 0; int is_send_usb_uevent = 0; int is_send_ac_uevent = 0; int is_send_wireless_charger_uevent = 0; static int batt_temp_over_68c_count = 0; if (battery_register) { BATT_ERR("No battery driver exists."); return -1; } mutex_lock(&battery_core_info.info_lock); memcpy(&new_batt_info_rep, &battery_core_info.rep, sizeof(struct 
battery_info_reply)); mutex_unlock(&battery_core_info.info_lock); if (battery_core_info.func.func_get_battery_info) { battery_core_info.func.func_get_battery_info(&new_batt_info_rep); } else { BATT_ERR("no func_get_battery_info hooked."); return -EINVAL; } mutex_lock(&battery_core_info.info_lock); if (battery_core_info.rep.charging_source != new_batt_info_rep.charging_source) { if (CHARGER_BATTERY == battery_core_info.rep.charging_source || CHARGER_BATTERY == new_batt_info_rep.charging_source) is_send_batt_uevent = 1; if (CHARGER_USB == battery_core_info.rep.charging_source || CHARGER_USB == new_batt_info_rep.charging_source) is_send_usb_uevent = 1; if (CHARGER_AC == battery_core_info.rep.charging_source || CHARGER_AC == new_batt_info_rep.charging_source) is_send_ac_uevent = 1; if (CHARGER_MHL_AC == battery_core_info.rep.charging_source || CHARGER_MHL_AC == new_batt_info_rep.charging_source) is_send_ac_uevent = 1; if (CHARGER_WIRELESS == battery_core_info.rep.charging_source || CHARGER_WIRELESS == new_batt_info_rep.charging_source) is_send_wireless_charger_uevent = 1; } if ((!is_send_batt_uevent) && ((battery_core_info.rep.level != new_batt_info_rep.level) || (battery_core_info.rep.batt_vol != new_batt_info_rep.batt_vol) || (battery_core_info.rep.over_vchg != new_batt_info_rep.over_vchg) || (battery_core_info.rep.batt_temp != new_batt_info_rep.batt_temp))) { is_send_batt_uevent = 1; } if ((battery_core_info.rep.charging_enabled != 0) && (new_batt_info_rep.charging_enabled != 0)) { if (battery_core_info.rep.level > new_batt_info_rep.level) battery_over_loading++; else battery_over_loading = 0; } memcpy(&battery_core_info.rep, &new_batt_info_rep, sizeof(struct battery_info_reply)); if (battery_core_info.rep.batt_temp > 680) { batt_temp_over_68c_count++; if (batt_temp_over_68c_count < 3) { pr_info("[BATT] batt_temp_over_68c_count=%d, (temp=%d)\n", batt_temp_over_68c_count, battery_core_info.rep.batt_temp); battery_core_info.rep.batt_temp = 680; } } else { 
batt_temp_over_68c_count = 0; } if (test_power_monitor) { BATT_LOG("test_power_monitor is set: overwrite fake batt info."); battery_core_info.rep.batt_id = 77; battery_core_info.rep.batt_temp = 330; battery_core_info.rep.level = 77; battery_core_info.rep.temp_fault = 0; } if (battery_core_info.rep.charging_source <= 0) { if (battery_core_info.rep.batt_id == 255) { pr_info("[BATT] Ignore invalid id when no charging_source"); battery_core_info.rep.batt_id = 66; } } #if 0 battery_core_info.rep.batt_vol = new_batt_info_rep.batt_vol; battery_core_info.rep.batt_id = new_batt_info_rep.batt_id; battery_core_info.rep.batt_temp = new_batt_info_rep.batt_temp; battery_core_info.rep.batt_current = new_batt_info_rep.batt_current; battery_core_info.rep.batt_discharg_current = new_batt_info_rep.batt_discharg_current; battery_core_info.rep.level = new_batt_info_rep.level; battery_core_info.rep.charging_source = new_batt_info_rep.charging_source; battery_core_info.rep.charging_enabled = new_batt_info_rep.charging_enabled; battery_core_info.rep.full_bat = new_batt_info_rep.full_bat; battery_core_info.rep.over_vchg = new_batt_info_rep.over_vchg; battery_core_info.rep.temp_fault = new_batt_info_rep.temp_fault; battery_core_info.rep.batt_state = new_batt_info_rep.batt_state; #endif if (battery_core_info.rep.charging_source == CHARGER_BATTERY) battery_core_info.htc_charge_full = 0; else { if (battery_core_info.htc_charge_full && (battery_core_info.rep.level == 100)) battery_core_info.htc_charge_full = 1; else { if (battery_core_info.rep.level == 100) battery_core_info.htc_charge_full = 1; else battery_core_info.htc_charge_full = 0; } if (battery_over_loading >= 2) { battery_core_info.htc_charge_full = 0; battery_over_loading = 0; } } battery_core_info.update_time = jiffies; mutex_unlock(&battery_core_info.info_lock); BATT_LOG("ID=%d,level=%d,level_raw=%d,vol=%d,temp=%d,current=%d," "chg_src=%d,chg_en=%d,full_bat=%d,over_vchg=%d," "batt_state=%d,overload=%d,ui_chg_full=%d", 
battery_core_info.rep.batt_id, battery_core_info.rep.level, battery_core_info.rep.level_raw, battery_core_info.rep.batt_vol, battery_core_info.rep.batt_temp, battery_core_info.rep.batt_current, battery_core_info.rep.charging_source, battery_core_info.rep.charging_enabled, battery_core_info.rep.full_bat, battery_core_info.rep.over_vchg, battery_core_info.rep.batt_state, battery_core_info.rep.overload, battery_core_info.htc_charge_full); if (is_send_batt_uevent) { power_supply_changed(&htc_power_supplies[BATTERY_SUPPLY]); BATT_LOG("power_supply_changed: battery"); } if (is_send_usb_uevent) { power_supply_changed(&htc_power_supplies[USB_SUPPLY]); BATT_LOG("power_supply_changed: usb"); } if (is_send_ac_uevent) { power_supply_changed(&htc_power_supplies[AC_SUPPLY]); BATT_LOG("power_supply_changed: ac"); } if (is_send_wireless_charger_uevent) { power_supply_changed(&htc_power_supplies[WIRELESS_SUPPLY]); BATT_LOG("power_supply_changed: wireless"); } return 0; } EXPORT_SYMBOL_GPL(htc_battery_core_update_changed); int htc_battery_core_register(struct device *dev, struct htc_battery_core *htc_battery) { int i, rc = 0; if (!battery_register) { BATT_ERR("Only one battery driver could exist."); return -1; } battery_register = 0; test_power_monitor = (get_kernel_flag() & KERNEL_FLAG_TEST_PWR_SUPPLY) ? 
1 : 0; mutex_init(&battery_core_info.info_lock); if (htc_battery->func_get_batt_rt_attr) battery_core_info.func.func_get_batt_rt_attr = htc_battery->func_get_batt_rt_attr; if (htc_battery->func_show_batt_attr) battery_core_info.func.func_show_batt_attr = htc_battery->func_show_batt_attr; if (htc_battery->func_show_cc_attr) battery_core_info.func.func_show_cc_attr = htc_battery->func_show_cc_attr; if (htc_battery->func_get_battery_info) battery_core_info.func.func_get_battery_info = htc_battery->func_get_battery_info; if (htc_battery->func_charger_control) battery_core_info.func.func_charger_control = htc_battery->func_charger_control; if (htc_battery->func_context_event_handler) battery_core_info.func.func_context_event_handler = htc_battery->func_context_event_handler; if (htc_battery->func_set_full_level) battery_core_info.func.func_set_full_level = htc_battery->func_set_full_level; for (i = 0; i < ARRAY_SIZE(htc_power_supplies); i++) { rc = power_supply_register(dev, &htc_power_supplies[i]); if (rc) BATT_ERR("Failed to register power supply" " (%d)\n", rc); } htc_battery_create_attrs(htc_power_supplies[CHARGER_BATTERY].dev); charger_ctrl_stat = ENABLE_CHARGER; INIT_WORK(&batt_charger_ctrl_work, batt_charger_ctrl_func); alarm_init(&batt_charger_ctrl_alarm, ANDROID_ALARM_ELAPSED_REALTIME_WAKEUP, batt_charger_ctrl_alarm_handler); batt_charger_ctrl_wq = create_singlethread_workqueue("charger_ctrl_timer"); battery_core_info.update_time = jiffies; battery_core_info.present = 1; battery_core_info.htc_charge_full = 0; battery_core_info.rep.charging_source = CHARGER_BATTERY; battery_core_info.rep.batt_id = 1; battery_core_info.rep.batt_vol = 4000; battery_core_info.rep.batt_temp = 285; battery_core_info.rep.batt_current = 162; battery_core_info.rep.level = 66; battery_core_info.rep.level_raw = 0; battery_core_info.rep.full_bat = 1580000; battery_core_info.rep.full_level = 100; battery_core_info.rep.temp_fault = -1; battery_core_info.rep.batt_state = 0; 
battery_core_info.rep.overload = 0; battery_over_loading = 0; return 0; } EXPORT_SYMBOL_GPL(htc_battery_core_register); const struct battery_info_reply* htc_battery_core_get_batt_info_rep(void) { return &battery_core_info.rep; } EXPORT_SYMBOL_GPL(htc_battery_core_get_batt_info_rep);
xiaolvmu/villec2-kernel
arch/arm/mach-msm/htc_battery_core.c
C
gpl-2.0
27,232
/** * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.camel.support.jsse; import java.util.ArrayList; import java.util.List; /** * Represents a list of TLS/SSL cipher suite names. */ public class CipherSuitesParameters { private List<String> cipherSuite; /** * Returns a live reference to the list of cipher suite names. * * @return a reference to the list, never {@code null} */ public List<String> getCipherSuite() { if (this.cipherSuite == null) { this.cipherSuite = new ArrayList<>(); } return this.cipherSuite; } /** * Sets the cipher suite. It creates a copy of the given cipher suite. * * @param cipherSuite cipher suite */ public void setCipherSuite(List<String> cipherSuite) { this.cipherSuite = cipherSuite == null ? null : new ArrayList<>(cipherSuite); } @Override public String toString() { StringBuilder builder = new StringBuilder(); builder.append("CipherSuitesParameters[cipherSuite="); builder.append(getCipherSuite()); builder.append("]"); return builder.toString(); } }
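The setter in `CipherSuitesParameters` stores a defensive copy (`new ArrayList<>(cipherSuite)`), while the getter lazily initializes and returns a live reference. A small demo of why the copy matters, using a local stand-in class (`CipherSuitesParametersDemo` is a hypothetical name; the cipher-suite strings are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

class CipherSuitesParametersDemo {
    private List<String> cipherSuite;

    /** Lazily initialized; returns a live reference, never null. */
    public List<String> getCipherSuite() {
        if (cipherSuite == null) {
            cipherSuite = new ArrayList<>();
        }
        return cipherSuite;
    }

    /** Stores a defensive copy, so later caller-side mutation cannot leak in. */
    public void setCipherSuite(List<String> cipherSuite) {
        this.cipherSuite = cipherSuite == null ? null : new ArrayList<>(cipherSuite);
    }

    public static void main(String[] args) {
        CipherSuitesParametersDemo p = new CipherSuitesParametersDemo();
        List<String> mine = new ArrayList<>();
        mine.add("TLS_AES_128_GCM_SHA256");
        p.setCipherSuite(mine);
        mine.add("TLS_AES_256_GCM_SHA384"); // mutates the caller's list only
        System.out.println(p.getCipherSuite().size());
    }
}
```

The copy-in/live-reference-out split means callers can safely reuse their input list, but anything obtained from the getter is shared mutable state.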
punkhorn/camel-upstream
core/camel-api/src/main/java/org/apache/camel/support/jsse/CipherSuitesParameters.java
Java
apache-2.0
1,929
[ 30522, 1013, 1008, 1008, 1008, 7000, 2000, 1996, 15895, 4007, 3192, 1006, 2004, 2546, 1007, 2104, 2028, 2030, 2062, 1008, 12130, 6105, 10540, 1012, 2156, 1996, 5060, 5371, 5500, 2007, 1008, 2023, 2147, 2005, 3176, 2592, 4953, 9385, 6095, ...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
/* * Copyright (C) 2017 - 2018 FIX94 * * This software may be modified and distributed under the terms * of the MIT license. See the LICENSE file for details. */ #include <stdio.h> #include <inttypes.h> #include <stdbool.h> #include "../ppu.h" #include "../mapper.h" #include "../mem.h" #include "../mapper_h/mmc3.h" #include "../mapper_h/mmc3add.h" #include "../mapper_h/common.h" extern uint32_t mmc3_prgROMadd; extern uint32_t mmc3_prgROMand; extern uint32_t mmc3_chrROMand; static bool mmc3add_regLock; static bool mmc3add_prgRAMenable; extern void mmc3SetPrgROMBankPtr(); extern void mmc3SetChrROMBankPtr(); void m37_init(uint8_t *prgROMin, uint32_t prgROMsizeIn, uint8_t *prgRAMin, uint32_t prgRAMsizeIn, uint8_t *chrROMin, uint32_t chrROMsizeIn) { mmc3init(prgROMin, prgROMsizeIn, prgRAMin, prgRAMsizeIn, chrROMin, chrROMsizeIn); m37_reset(); //start with default config printf("Mapper 37 (Mapper 4 Game Select) inited\n"); } void m44_init(uint8_t *prgROMin, uint32_t prgROMsizeIn, uint8_t *prgRAMin, uint32_t prgRAMsizeIn, uint8_t *chrROMin, uint32_t chrROMsizeIn) { mmc3init(prgROMin, prgROMsizeIn, prgRAMin, prgRAMsizeIn, chrROMin, chrROMsizeIn); m44_reset(); //start with default config printf("Mapper 44 (Mapper 4 Game Select) inited\n"); } static uint8_t m45_curReg, m45_chrROMandVal; void m45_init(uint8_t *prgROMin, uint32_t prgROMsizeIn, uint8_t *prgRAMin, uint32_t prgRAMsizeIn, uint8_t *chrROMin, uint32_t chrROMsizeIn) { mmc3init(prgROMin, prgROMsizeIn, prgRAMin, prgRAMsizeIn, chrROMin, chrROMsizeIn); m45_reset(); //start with default config printf("Mapper 45 (Mapper 4 Game Select) inited\n"); } void m47_init(uint8_t *prgROMin, uint32_t prgROMsizeIn, uint8_t *prgRAMin, uint32_t prgRAMsizeIn, uint8_t *chrROMin, uint32_t chrROMsizeIn) { mmc3init(prgROMin, prgROMsizeIn, prgRAMin, prgRAMsizeIn, chrROMin, chrROMsizeIn); m47_reset(); //start with default config printf("Mapper 47 (Mapper 4 Game Select) inited\n"); } static uint8_t m49_prgmode; static uint8_t m49_prgreg; void 
m49_init(uint8_t *prgROMin, uint32_t prgROMsizeIn, uint8_t *prgRAMin, uint32_t prgRAMsizeIn, uint8_t *chrROMin, uint32_t chrROMsizeIn) { mmc3init(prgROMin, prgROMsizeIn, prgRAMin, prgRAMsizeIn, chrROMin, chrROMsizeIn); //for prg mode prg32init(prgROMin,prgROMsizeIn); m49_reset(); //start with default config printf("Mapper 49 (Mapper 4 Game Select) inited\n"); } void m52_init(uint8_t *prgROMin, uint32_t prgROMsizeIn, uint8_t *prgRAMin, uint32_t prgRAMsizeIn, uint8_t *chrROMin, uint32_t chrROMsizeIn) { mmc3init(prgROMin, prgROMsizeIn, prgRAMin, prgRAMsizeIn, chrROMin, chrROMsizeIn); m52_reset(); printf("Mapper 52 (Mapper 4 Game Select) inited\n"); } void m205_init(uint8_t *prgROMin, uint32_t prgROMsizeIn, uint8_t *prgRAMin, uint32_t prgRAMsizeIn, uint8_t *chrROMin, uint32_t chrROMsizeIn) { mmc3init(prgROMin, prgROMsizeIn, prgRAMin, prgRAMsizeIn, chrROMin, chrROMsizeIn); m205_reset(); //start with default config printf("Mapper 205 (Mapper 4 Game Select) inited\n"); } void mmc3NoRAMInitGet8(uint16_t addr) { if(addr >= 0x8000) mmc3initGet8(addr); } static void mmc3addSetParamsAXX1(uint16_t addr, uint8_t val) { (void)addr; mmc3add_prgRAMenable = (!!(val&0x80)) && ((val&0x40) == 0); } void m49_initGet8(uint16_t addr) { if(addr >= 0x8000) { if(m49_prgmode == 0) prg32initGet8(addr); else mmc3initGet8(addr); } } static void m37_setParams6XXX(uint16_t addr, uint8_t val) { (void)addr; if(!mmc3add_prgRAMenable) return; val &= 7; if(val < 3) { mmc3_prgROMadd = 0; mmc3_prgROMand = 0xFFFF; mmc3SetChrROMadd(0); mmc3_chrROMand = 0x1FFFF; } else if(val == 3) { mmc3_prgROMadd = 0x10000; mmc3_prgROMand = 0xFFFF; mmc3SetChrROMadd(0); mmc3_chrROMand = 0x1FFFF; } else if(val < 7) { mmc3_prgROMadd = 0x20000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0x20000); mmc3_chrROMand = 0x1FFFF; } else //val == 7 { mmc3_prgROMadd = 0x30000; mmc3_prgROMand = 0xFFFF; mmc3SetChrROMadd(0x20000); mmc3_chrROMand = 0x1FFFF; } mmc3SetPrgROMBankPtr(); mmc3SetChrROMBankPtr(); } void m37_initSet8(uint16_t 
addr) { //special select regs if(addr >= 0x6000 && addr < 0x8000) memInitMapperSetPointer(addr, m37_setParams6XXX); else if((addr&0xE001) == 0xA001) //reg enable/disable memInitMapperSetPointer(addr, mmc3addSetParamsAXX1); else //do normal mmc3 sets mmc3initSet8(addr); } static void m44_setParamsAXX1(uint16_t addr, uint8_t val) { (void)addr; val &= 7; switch(val) { case 0: mmc3_prgROMadd = 0; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0); mmc3_chrROMand = 0x1FFFF; break; case 1: mmc3_prgROMadd = 0x20000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0x20000); mmc3_chrROMand = 0x1FFFF; break; case 2: mmc3_prgROMadd = 0x40000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0x40000); mmc3_chrROMand = 0x1FFFF; break; case 3: mmc3_prgROMadd = 0x60000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0x60000); mmc3_chrROMand = 0x1FFFF; break; case 4: mmc3_prgROMadd = 0x80000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0x80000); mmc3_chrROMand = 0x1FFFF; break; case 5: mmc3_prgROMadd = 0xA0000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0xA0000); mmc3_chrROMand = 0x1FFFF; break; default: //6,7 mmc3_prgROMadd = 0xC0000; mmc3_prgROMand = 0x3FFFF; mmc3SetChrROMadd(0xC0000); mmc3_chrROMand = 0x3FFFF; break; } mmc3SetPrgROMBankPtr(); mmc3SetChrROMBankPtr(); } void m44_initSet8(uint16_t addr) { //special select regs if((addr&0xE001) == 0xA001) memInitMapperSetPointer(addr, m44_setParamsAXX1); else //do normal mmc3 sets mmc3initSet8(addr); } static void m45_setChrROMand() { if(mmc3GetChrROMadd() || m45_chrROMandVal) //default chr rom mask generation mmc3_chrROMand = (((0xFF >> ((~m45_chrROMandVal)&0xF))+1)<<10)-1; else //this code is just a guess, if no chr rom or/and value, use full and value mmc3_chrROMand = (((0xFF >> ((~0xF)&0xF))+1)<<10)-1; } static void m45_setParams6XXX(uint16_t addr, uint8_t val) { (void)addr; if(m45_curReg == 0) { mmc3SetChrROMadd((mmc3GetChrROMadd() & ~0x3FFFF) | (val<<10)); m45_setChrROMand(); //update because of new add value //printf("mmc3_chrROMadd r0 %08x 
inVal %02x\n", mmc3GetChrROMadd(), val); } else if(m45_curReg == 1) { mmc3_prgROMadd = val<<13; //printf("mmc3_prgROMadd %08x inVal %02x\n", mmc3_prgROMadd, val); } else if(m45_curReg == 2) { mmc3SetChrROMadd((mmc3GetChrROMadd() & 0x3FFFF) | ((val>>4)<<18)); m45_chrROMandVal = val&0xF; m45_setChrROMand(); //printf("mmc3_chrROMand %08x mmc3_chrROMadd r1 %08x inVal %02x\n", mmc3_chrROMand, mmc3GetChrROMadd(), val); } else if(m45_curReg == 3) { mmc3add_regLock = ((val&0x40) != 0); mmc3_prgROMand = ((((val^0x3F)&0x3F)+1)<<13)-1; //printf("mmc3add_regLock %d mmc3_prgROMand %08x inVal %02x\n", mmc3add_regLock, mmc3_prgROMand, val); if(mmc3add_regLock) { //this will allow ram writes //printf("allowing prg ram\n"); uint16_t ramaddr; for(ramaddr = 0x6000; ramaddr < 0x8000; ramaddr++) m45_initSet8(ramaddr); } } mmc3SetPrgROMBankPtr(); mmc3SetChrROMBankPtr(); m45_curReg++; m45_curReg&=3; } void m45_initSet8(uint16_t addr) { //special select regs if(addr >= 0x6000 && addr < 0x8000 && !mmc3add_regLock) memInitMapperSetPointer(addr, m45_setParams6XXX); else //do normal mmc3 sets mmc3initSet8(addr); } static void m47_setParams6XXX(uint16_t addr, uint8_t val) { (void)addr; if(!mmc3add_prgRAMenable) return; val &= 1; if(val == 0) { mmc3_prgROMadd = 0; mmc3SetChrROMadd(0); } else { mmc3_prgROMadd = 0x20000; mmc3SetChrROMadd(0x20000); } mmc3SetPrgROMBankPtr(); mmc3SetChrROMBankPtr(); } void m47_initSet8(uint16_t addr) { //special select regs if(addr >= 0x6000 && addr < 0x8000) memInitMapperSetPointer(addr, m47_setParams6XXX); else if((addr&0xE001) == 0xA001) //prg ram enable/disable memInitMapperSetPointer(addr, mmc3addSetParamsAXX1); else //do normal mmc3 sets mmc3initSet8(addr); } static void m49_setParams6XXX(uint16_t addr, uint8_t val) { (void)addr; if(!mmc3add_prgRAMenable) return; switch(val>>6) { case 0: mmc3_prgROMadd = 0; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0); mmc3_chrROMand = 0x1FFFF; break; case 1: mmc3_prgROMadd = 0x20000; mmc3_prgROMand = 0x1FFFF; 
mmc3SetChrROMadd(0x20000); mmc3_chrROMand = 0x1FFFF; break; case 2: mmc3_prgROMadd = 0x40000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0x40000); mmc3_chrROMand = 0x1FFFF; break; case 3: mmc3_prgROMadd = 0x60000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0x60000); mmc3_chrROMand = 0x1FFFF; break; default: break; } mmc3SetPrgROMBankPtr(); mmc3SetChrROMBankPtr(); //for prgmode 0 m49_prgreg = (val>>4)&3; prg32setBank0((m49_prgreg<<15)|mmc3_prgROMadd); //reset how prg rom is read if(!!(val&1) ^ m49_prgmode) { m49_prgmode = val&1; //printf("Switched to prg mode %d\n", m49_prgmode); uint32_t addr; for(addr = 0x8000; addr <= 0xFFFF; addr++) m49_initGet8(addr); } } void m49_initSet8(uint16_t addr) { //special select regs if(addr >= 0x6000 && addr < 0x8000) memInitMapperSetPointer(addr, m49_setParams6XXX); else if((addr&0xE001) == 0xA001) //regs enable/disable memInitMapperSetPointer(addr, mmc3addSetParamsAXX1); else //do normal mmc3 sets mmc3initSet8(addr); } static void m52_setParams6XXX(uint16_t addr, uint8_t val) { (void)addr; if(!mmc3add_prgRAMenable) return; if((val&8) != 0) { mmc3_prgROMand = 0x1FFFF; mmc3_prgROMadd = (val&7)<<17; } else { mmc3_prgROMand = 0x3FFFF; mmc3_prgROMadd = (val&6)<<17; } uint8_t chrVal = ((val>>4)&3) | (val&4); if((val&0x40) != 0) { mmc3_chrROMand = 0x1FFFF; mmc3SetChrROMadd((chrVal&7)<<17); } else { mmc3_chrROMand = 0x3FFFF; mmc3SetChrROMadd((chrVal&6)<<17); } mmc3SetPrgROMBankPtr(); mmc3SetChrROMBankPtr(); mmc3add_regLock = ((val&0x80) != 0); if(mmc3add_regLock) { //this will allow ram writes //printf("allowing prg ram\n"); uint16_t ramaddr; for(ramaddr = 0x6000; ramaddr < 0x8000; ramaddr++) m52_initSet8(ramaddr); } } void m52_initSet8(uint16_t addr) { //special select regs if(addr >= 0x6000 && addr < 0x8000 && !mmc3add_regLock) memInitMapperSetPointer(addr, m52_setParams6XXX); else if((addr&0xE001) == 0xA001) //regs enable/disable memInitMapperSetPointer(addr, mmc3addSetParamsAXX1); else //do normal mmc3 sets mmc3initSet8(addr); } void 
m205_setParams6XXX(uint16_t addr, uint8_t val) { (void)addr; val &= 3; if(val == 0) { mmc3_prgROMadd = 0; mmc3_prgROMand = 0x3FFFF; mmc3SetChrROMadd(0); mmc3_chrROMand = 0x3FFFF; } else if(val == 1) { mmc3_prgROMadd = 0x20000; mmc3_prgROMand = 0x3FFFF; mmc3SetChrROMadd(0x20000); mmc3_chrROMand = 0x3FFFF; } else if(val == 2) { mmc3_prgROMadd = 0x40000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0x40000); mmc3_chrROMand = 0x1FFFF; } else //val == 3 { mmc3_prgROMadd = 0x60000; mmc3_prgROMand = 0x1FFFF; mmc3SetChrROMadd(0x60000); mmc3_chrROMand = 0x1FFFF; } mmc3SetPrgROMBankPtr(); mmc3SetChrROMBankPtr(); } void m205_initSet8(uint16_t addr) { //special select regs if(addr >= 0x6000 && addr < 0x8000) memInitMapperSetPointer(addr, m205_setParams6XXX); else //do normal mmc3 sets mmc3initSet8(addr); } void m37_reset() { mmc3_prgROMadd = 0; mmc3_prgROMand = 0xFFFF; mmc3SetPrgROMBankPtr(); mmc3SetChrROMadd(0); mmc3_chrROMand = 0x1FFFF; mmc3SetChrROMBankPtr(); mmc3add_regLock = false; mmc3add_prgRAMenable = false; } void m44_reset() { mmc3_prgROMadd = 0; mmc3_prgROMand = 0x1FFFF; mmc3SetPrgROMBankPtr(); mmc3SetChrROMadd(0); mmc3_chrROMand = 0x1FFFF; mmc3SetChrROMBankPtr(); mmc3add_regLock = false; mmc3add_prgRAMenable = false; } void m45_reset() { mmc3_prgROMadd = 0; mmc3_prgROMand = 0x7FFFF; mmc3SetPrgROMBankPtr(); mmc3SetChrROMadd(0); m45_chrROMandVal = 0; m45_setChrROMand(); mmc3SetChrROMBankPtr(); mmc3add_regLock = false; mmc3add_prgRAMenable = false; //make sure to unlock reg writes of course //printf("allowing reg writes\n"); uint16_t regaddr; for(regaddr = 0x6000; regaddr < 0x8000; regaddr++) m45_initSet8(regaddr); m45_curReg = 0; } void m47_reset() { mmc3_prgROMadd = 0; mmc3_prgROMand = 0x1FFFF; mmc3SetPrgROMBankPtr(); mmc3SetChrROMadd(0); mmc3_chrROMand = 0x1FFFF; mmc3SetChrROMBankPtr(); mmc3add_regLock = false; mmc3add_prgRAMenable = false; } void m49_reset() { mmc3_prgROMadd = 0; mmc3_prgROMand = 0x1FFFF; mmc3SetPrgROMBankPtr(); mmc3SetChrROMadd(0); mmc3_chrROMand = 
0x1FFFF; mmc3SetChrROMBankPtr(); mmc3add_regLock = false; mmc3add_prgRAMenable = false; //for prgmode 0 m49_prgreg = 0; prg32setBank0((m49_prgreg<<15)|mmc3_prgROMadd); //reset how prg rom is read m49_prgmode = 0; //printf("Switched to prg mode 0\n"); uint32_t addr; for(addr = 0x8000; addr <= 0xFFFF; addr++) m49_initGet8(addr); } void m52_reset() { mmc3_prgROMadd = 0; mmc3_prgROMand = 0x3FFFF; mmc3SetPrgROMBankPtr(); mmc3SetChrROMadd(0); mmc3_chrROMand = 0x3FFFF; mmc3SetChrROMBankPtr(); mmc3add_regLock = false; mmc3add_prgRAMenable = false; //make sure to unlock reg writes of course //printf("allowing reg writes\n"); uint16_t regaddr; for(regaddr = 0x6000; regaddr < 0x8000; regaddr++) m52_initSet8(regaddr); } void m205_reset() { mmc3_prgROMadd = 0; mmc3_prgROMand = 0x3FFFF; mmc3SetChrROMadd(0); mmc3_chrROMand = 0x3FFFF; mmc3SetPrgROMBankPtr(); mmc3SetChrROMBankPtr(); mmc3add_regLock = false; }
FIX94/fixNES
mapper/mmc3add.c
C
mit
13,613
[ 30522, 1013, 1008, 1008, 9385, 1006, 1039, 1007, 2418, 1011, 2760, 8081, 2683, 2549, 1008, 1008, 2023, 4007, 2089, 2022, 6310, 1998, 5500, 2104, 1996, 3408, 1008, 1997, 1996, 10210, 6105, 1012, 2156, 1996, 6105, 5371, 2005, 4751, 1012, 10...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
package fiddle.repository; import java.io.IOException; import com.google.common.base.Optional; import com.google.common.collect.ImmutableList; public class FallbackRepository<ReadRsc, WriteRsc, Id> implements Repository<ReadRsc, WriteRsc, Id> { private final Repository<ReadRsc, WriteRsc, Id> main; private final Repository<ReadRsc, WriteRsc, Id> fallback; public FallbackRepository(final Repository<ReadRsc, WriteRsc, Id> main, final Repository<ReadRsc, WriteRsc, Id> fallback) { this.main = main; this.fallback = fallback; } @Override public Optional<ReadRsc> open(Id id) { return main.open(id).or(fallback.open(id)); } @Override public void write(Id id, WriteRsc rsc) throws IOException { main.write(id, rsc); } @Override public ImmutableList<Id> ids() { return main.ids(); } }
cedricbou/FiddleWith
repository/src/main/java/fiddle/repository/FallbackRepository.java
Java
apache-2.0
818
[ 30522, 7427, 15888, 1012, 22409, 1025, 12324, 9262, 1012, 22834, 1012, 22834, 10288, 24422, 1025, 12324, 4012, 1012, 8224, 1012, 2691, 1012, 2918, 1012, 11887, 1025, 12324, 4012, 1012, 8224, 1012, 2691, 1012, 8145, 1012, 10047, 28120, 3085, ...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
//setup Dependencies var connect = require('connect'); //Setup Express var express = require('express'); var path = require('path'); let app = express(); var server = require('http').Server(app); var io = require('socket.io')(server); var keypress = require('keypress'); var port = (process.env.PORT || 8081); var muted = false; const debug = true; app.set("view engine", "pug"); app.set("views", path.join(__dirname, "views")); app.use(express.static(path.join(__dirname, "public"))); app.set('env', 'development'); server.listen(port); var message = ''; var main_socket; var auksalaq_mode = 'chatMode'; //Setup Socket.IO io.on('connection', function(socket){ if(debug){ console.log('Client Connected'); } main_socket = socket; //start time setInterval(sendTime, 1000); //ceiling socket.on('ceiling_newuser', function (data) { if(debug){ console.log('new user added! ' + data.username); console.log(data); } socket.emit('ceiling_user_confirmed', data); socket.broadcast.emit('ceiling_user_confirmed', data); }); //see NomadsMobileClient.js for data var socket.on('ceiling_message', function(data){ socket.broadcast.emit('ceiling_proc_update',data); //send data to all clients for processing sketch socket.broadcast.emit('ceiling_client_update',data); //send data back to all clients? if(debug){ console.log(data); } }); //auksalaq socket.on('auksalaq_newuser', function (data) { if(debug){ console.log('new user added! 
' + data.username); console.log(data); } data.mode = auksalaq_mode; data.muted = muted; socket.emit('auksalaq_user_confirmed', data); socket.broadcast.emit('auksalaq_user_confirmed', data); }); //see NomadsMobileClient.js for data var socket.on('auksalaq_message', function(data){ //socket.broadcast.emit('auksalaq_proc_update',data); //send data to all clients for processing sketch socket.broadcast.emit('auksalaq_client_update',data); socket.emit('auksalaq_client_update',data); if(debug){ console.log(data); } }); //mode change from controller socket.on('auksalaq_mode', function(data){ socket.broadcast.emit('auksalaq_mode', data); auksalaq_mode = data; if(debug){ console.log(data); } }); socket.on('mute_state', function(data){ muted = data; socket.broadcast.emit('mute_state', data); console.log(data); }); //clocky socket.on('clock_start', function(data){ socket.broadcast.emit('clock_start', data); if(debug){ console.log(data); } }); socket.on('clock_stop', function(data){ socket.broadcast.emit('clock_stop', data); if(debug){ console.log(data); } }); socket.on('clock_reset', function(data){ socket.broadcast.emit('clock_reset', data); if(debug){ console.log("resettting clock"); } }); /* socket.on('begin_ceiling', function(){ ; }); socket.on('begin_auksalak', function(){ ; }); socket.on('stop_ceiling', function(){ ; }); socket.on('stop_auksalak', function(){ ; }); */ socket.on('disconnect', function(){ if(debug){ console.log('Client Disconnected.'); } }); }); /////////////////////////////////////////// // Routes // /////////////////////////////////////////// /////// ADD ALL YOUR ROUTES HERE ///////// app.get('/', function(req,res){ //res.send('hello world'); res.render('index.pug', { locals : { title : 'Nomads' ,description: 'Nomads System' ,author: 'TThatcher' ,analyticssiteid: 'XXXXXXX' ,cache: 'false' } }); }); // The Ceiling Floats Away Routes app.get('/ceiling', function(req,res){ res.render('ceiling/ceiling_client.pug', { locals : { title : 'The Ceiling Floats 
Away' ,description: 'The Ceiling Floats Away' ,author: 'TThatcher' ,analyticssiteid: 'XXXXXXX' } }); }); app.get('/ceiling_display', function(req,res){ res.render('ceiling/ceiling_display.pug', { locals : { title : 'The Ceiling Floats Away' ,description: 'Ceiling Nomads message display' ,author: 'TThatcher' ,analyticssiteid: 'XXXXXXX' } }); }); app.get('/ceiling_control', function(req,res){ res.render('ceiling/ceiling_control.pug', { locals : { title : 'The Ceiling Floats Away Control' ,description: 'Ceiling Nomads System Control' ,author: 'TThatcher' ,analyticssiteid: 'XXXXXXX' } }); }); // Auksalaq Routes app.get('/auksalaq', function(req,res){ res.render('auksalaq/auksalaq_client.pug', { locals : { title : 'Auksalaq' ,description: 'Auksalaq Nomads System' ,author: 'TThatcher' ,analyticssiteid: 'XXXXXXX' } }); }); app.get('/auksalaq_display', function(req,res){ res.render('auksalaq/auksalaq_display.pug', { locals : { title : 'Auksalaq' ,description: 'Auksalaq Nomads message display' ,author: 'TThatcher' ,analyticssiteid: 'XXXXXXX' } }); }); app.get('/auksalaq_control', function(req,res){ res.render('auksalaq/auksalaq_control.pug', { locals : { title : 'Auksalaq Control' ,description: 'Auksalaq Nomads System Control' ,author: 'TThatcher' ,analyticssiteid: 'XXXXXXX' } }); }); app.get('/auksalaq_clock', function(req,res){ res.render('auksalaq/auksalaq_clock.pug', { locals : { title : 'Auksalaq Clock' ,description: 'Auksalaq Nomads System Clock' ,author: 'TThatcher' ,analyticssiteid: 'XXXXXXX' } }); }); // catch 404 and forward to error handler app.use(function(req, res, next) { var err = new Error('Not Found '+req); err.status = 404; next(err); }); // error handler app.use(function(err, req, res, next) { // set locals, only providing error in development res.locals.message = err.message; res.locals.error = req.app.get('env') === 'development' ? err : {}; // very basic! 
if(debug){ console.error(err.stack); } // render the error page res.status(err.status || 500); res.render('404'); }); function NotFound(msg){ this.name = 'NotFound'; Error.call(this, msg); Error.captureStackTrace(this, arguments.callee); } if(debug){ console.log('Listening on http://127.0.0.1:' + port ); } //for testing sendChat = function(data, type){ if(debug) console.log("sending data ", data); var messageToSend = {}; messageToSend.id = 123; messageToSend.username = "Nomads_Server"; messageToSend.type = type; messageToSend.messageText = data; messageToSend.location = 0; messageToSend.latitude = 0; messageToSend.longitude = 0; messageToSend.x = 0; messageToSend.y = 0; var date = new Date(); d = date.getMonth()+1+"."+date.getDate()+"."+date.getFullYear()+ " at " + date.getHours()+":"+date.getMinutes()+":"+date.getSeconds(); messageToSend.timestamp = d; main_socket.broadcast.emit('auksalaq_client_update', messageToSend); } sendTime = function(){ var d = new Date(); main_socket.broadcast.emit('clock_update', d.getTime()); }
nomads2/new-nomads
server.js
JavaScript
mit
7,491
[ 30522, 1013, 1013, 16437, 12530, 15266, 13075, 7532, 1027, 5478, 1006, 1005, 7532, 1005, 1007, 1025, 1013, 1013, 16437, 4671, 13075, 4671, 1027, 5478, 1006, 1005, 4671, 1005, 1007, 1025, 13075, 4130, 1027, 5478, 1006, 1005, 4130, 1005, 1007...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
jsonp({"cep":"21515001","logradouro":"Avenida Brasil","bairro":"Barros Filho","cidade":"Rio de Janeiro","uf":"RJ","estado":"Rio de Janeiro"});
lfreneda/cepdb
api/v1/21515001.jsonp.js
JavaScript
cc0-1.0
143
[ 30522, 1046, 3385, 2361, 1006, 1063, 1000, 8292, 2361, 1000, 1024, 1000, 17405, 16068, 8889, 2487, 1000, 1010, 1000, 8833, 12173, 8162, 2080, 1000, 1024, 1000, 13642, 3490, 2850, 21133, 1000, 1010, 1000, 21790, 18933, 1000, 1024, 1000, 1982...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
/* * PROJECT: Aura Operating System Development * CONTENT: DNS Client * PROGRAMMERS: Valentin Charbonnier <valentinbreiz@gmail.com> */ using Cosmos.System.Network.Config; using Cosmos.HAL; using System; using System.Collections.Generic; using System.Text; namespace Cosmos.System.Network.IPv4.UDP.DNS { /// <summary> /// DnsClient class. Used to manage the DNS connection to a server. /// </summary> public class DnsClient : UdpClient { /// <summary> /// Domain Name query string /// </summary> private string queryurl; /// <summary> /// Create new instance of the <see cref="DnsClient"/> class. /// </summary> /// <exception cref="ArgumentOutOfRangeException">Thrown on fatal error (contact support).</exception> /// <exception cref="ArgumentException">Thrown if UdpClient with localPort 53 exists.</exception> public DnsClient() : base(53) { } /// <summary> /// Connect to client. /// </summary> /// <param name="address">Destination address.</param> public void Connect(Address address) { Connect(address, 53); } /// <summary> /// Send DNS Ask for Domain Name string /// </summary> /// <param name="url">Domain Name string.</param> public void SendAsk(string url) { Address source = IPConfig.FindNetwork(destination); queryurl = url; var askpacket = new DNSPacketAsk(source, destination, url); OutgoingBuffer.AddPacket(askpacket); NetworkStack.Update(); } /// <summary> /// Receive data /// </summary> /// <param name="timeout">timeout value, default 5000ms</param> /// <returns>Address from Domain Name</returns> /// <exception cref="InvalidOperationException">Thrown on fatal error (contact support).</exception> public Address Receive(int timeout = 5000) { int second = 0; int _deltaT = 0; while (rxBuffer.Count < 1) { if (second > (timeout / 1000)) { return null; } if (_deltaT != RTC.Second) { second++; _deltaT = RTC.Second; } } var packet = new DNSPacketAnswer(rxBuffer.Dequeue().RawData); if ((ushort)(packet.DNSFlags & 0x0F) == (ushort)ReplyCode.OK) { if (packet.Queries.Count > 0 && 
packet.Queries[0].Name == queryurl) { if (packet.Answers.Count > 0 && packet.Answers[0].Address.Length == 4) { return new Address(packet.Answers[0].Address, 0); } } } return null; } } }
CosmosOS/Cosmos
source/Cosmos.System2/Network/IPv4/UDP/DNS/DNSClient.cs
C#
bsd-3-clause
3,003
[ 30522, 1013, 1008, 1008, 2622, 1024, 15240, 4082, 2291, 2458, 1008, 4180, 1024, 1040, 3619, 7396, 1008, 28547, 1024, 24632, 25869, 11735, 14862, 1026, 24632, 13578, 10993, 1030, 20917, 4014, 1012, 4012, 1028, 1008, 1013, 2478, 21182, 1012, ...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
<?php class ModelOpenbayAmazonPatch extends Model { public function runPatch($manual = true) { $this->load->model('setting/setting'); $settings = $this->model_setting_setting->getSetting('openbay_amazon'); if ($settings) { $this->db->query(" CREATE TABLE IF NOT EXISTS `" . DB_PREFIX . "amazon_product_search` ( `product_id` int(11) NOT NULL, `marketplace` enum('uk','de','es','it','fr') NOT NULL, `status` enum('searching','finished') NOT NULL, `matches` int(11) DEFAULT NULL, `data` text, PRIMARY KEY (`product_id`,`marketplace`) ) DEFAULT COLLATE=utf8_general_ci;" ); $this->db->query(" CREATE TABLE IF NOT EXISTS `" . DB_PREFIX . "amazon_listing_report` ( `marketplace` enum('uk','de','fr','es','it') NOT NULL, `sku` varchar(255) NOT NULL, `quantity` int(10) unsigned NOT NULL, `asin` varchar(255) NOT NULL, `price` decimal(10,4) NOT NULL, PRIMARY KEY (`marketplace`,`sku`) ) DEFAULT COLLATE=utf8_general_ci;" ); if (!$this->config->get('openbay_amazon_processing_listing_reports')) { $settings['openbay_amazon_processing_listing_reports'] = array(); } $this->model_setting_setting->editSetting('openbay_amazon', $settings); } return true; } }
villagedefrance/OpenCart-Overclocked
upload/admin/model/openbay/amazon_patch.php
PHP
gpl-3.0
1,268
[ 30522, 1026, 1029, 25718, 2465, 2944, 26915, 15907, 8067, 11597, 4502, 10649, 8908, 2944, 1063, 2270, 3853, 2448, 4502, 10649, 1006, 1002, 6410, 1027, 2995, 1007, 1063, 1002, 2023, 1011, 1028, 7170, 1011, 1028, 2944, 1006, 1005, 4292, 1013,...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
<?php // no direct access defined( '_JEXEC' ) or die( 'Restricted access' ); ?> <fieldset class="adminform"> <legend><?php echo JText::_( 'Mail Settings' ); ?></legend> <table class="admintable" cellspacing="1"> <tbody> <tr> <td width="185" class="key"> <span class="editlinktip hasTip" title="<?php echo JText::_( 'Mailer' ); ?>::<?php echo JText::_( 'TIPMAILER' ); ?>"> <?php echo JText::_( 'Mailer' ); ?> </span> </td> <td> <?php echo $lists['mailer']; ?> </td> </tr> <tr> <td class="key"> <span class="editlinktip hasTip" title="<?php echo JText::_( 'Mail From' ); ?>::<?php echo JText::_( 'TIPMAILFROM' ); ?>"> <?php echo JText::_( 'Mail From' ); ?> </span> </td> <td> <input class="text_area" type="text" name="mailfrom" size="30" value="<?php echo $row->mailfrom; ?>" /> </td> </tr> <tr> <td class="key"> <span class="editlinktip hasTip" title="<?php echo JText::_( 'From Name' ); ?>::<?php echo JText::_( 'TIPFROMNAME' ); ?>"> <?php echo JText::_( 'From Name' ); ?> </span> </td> <td> <input class="text_area" type="text" name="fromname" size="30" value="<?php echo $row->fromname; ?>" /> </td> </tr> <tr> <td class="key"> <span class="editlinktip hasTip" title="<?php echo JText::_( 'Sendmail Path' ); ?>::<?php echo JText::_( 'TIPSENDMAILPATH' ); ?>"> <?php echo JText::_( 'Sendmail Path' ); ?> </span> </td> <td> <input class="text_area" type="text" name="sendmail" size="30" value="<?php echo $row->sendmail; ?>" /> </td> </tr> <tr> <td class="key"> <span class="editlinktip hasTip" title="<?php echo JText::_( 'SMTP Auth' ); ?>::<?php echo JText::_( 'TIPSMTPAUTH' ); ?>"> <?php echo JText::_( 'SMTP Auth' ); ?> </span> </td> <td> <?php echo $lists['smtpauth']; ?> </td> </tr> <tr> <td class="key"> <span class="editlinktip hasTip" title="<?php echo JText::_( 'SMTP Security' ); ?>::<?php echo JText::_( 'TIPSMTPSECURITY' ); ?>"> <?php echo JText::_( 'SMTP Security' ); ?> </span> </td> <td> <?php echo $lists['smtpsecure']; ?> </td> </tr> <tr> <td class="key"> <span class="editlinktip 
hasTip" title="<?php echo JText::_( 'SMTP Port' ); ?>::<?php echo JText::_( 'TIPSMTPPORT' ); ?>"> <?php echo JText::_( 'SMTP Port' ); ?> </span> </td> <td> <input class="text_area" type="text" name="smtpport" size="30" value="<?php echo (isset($row->smtpport) ? $row->smtpport : ''); ?>" /> </td> </tr> <tr> <td class="key"> <span class="editlinktip hasTip" title="<?php echo JText::_( 'SMTP User' ); ?>::<?php echo JText::_( 'TIPSMTPUSER' ); ?>"> <?php echo JText::_( 'SMTP User' ); ?> </span> </td> <td> <input class="text_area" type="text" name="smtpuser" size="30" value="<?php echo $row->smtpuser; ?>" /> </td> </tr> <tr> <td class="key"> <span class="editlinktip hasTip" title="<?php echo JText::_( 'SMTP Pass' ); ?>::<?php echo JText::_( 'TIPSMTPPASS' ); ?>"> <?php echo JText::_( 'SMTP Pass' ); ?> </span> </td> <td> <input class="text_area" type="password" name="smtppass" size="30" value="<?php echo $row->smtppass; ?>" /> </td> </tr> <tr> <td class="key"> <span class="editlinktip hasTip" title="<?php echo JText::_( 'SMTP Host' ); ?>::<?php echo JText::_( 'TIPSMTPHOST' ); ?>"> <?php echo JText::_( 'SMTP Host' ); ?> </span> </td> <td> <input class="text_area" type="text" name="smtphost" size="30" value="<?php echo $row->smtphost; ?>" /> </td> </tr> </tbody> </table> </fieldset>
viollarr/alab
site2011/administrator/components/com_config/views/application/tmpl/config_mail.php
PHP
gpl-2.0
3,771
/* * Copyright (C) 2014 The Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.google.android.exoplayer.demo.simple; import com.google.android.exoplayer.ExoPlaybackException; import com.google.android.exoplayer.ExoPlayer; import com.google.android.exoplayer.MediaCodecAudioTrackRenderer; import com.google.android.exoplayer.MediaCodecTrackRenderer.DecoderInitializationException; import com.google.android.exoplayer.MediaCodecVideoTrackRenderer; import com.google.android.exoplayer.VideoSurfaceView; import com.google.android.exoplayer.demo.DemoUtil; import com.google.android.exoplayer.demo.R; import com.google.android.exoplayer.util.PlayerControl; import android.app.Activity; import android.content.Intent; import android.media.MediaCodec.CryptoException; import android.net.Uri; import android.os.Bundle; import android.os.Handler; import android.util.Log; import android.view.MotionEvent; import android.view.Surface; import android.view.SurfaceHolder; import android.view.View; import android.view.View.OnTouchListener; import android.widget.MediaController; import android.widget.Toast; /** * An activity that plays media using {@link ExoPlayer}. */ public class SimplePlayerActivity extends Activity implements SurfaceHolder.Callback, ExoPlayer.Listener, MediaCodecVideoTrackRenderer.EventListener { /** * Builds renderers for the player. 
*/ public interface RendererBuilder { void buildRenderers(RendererBuilderCallback callback); } public static final int RENDERER_COUNT = 2; public static final int TYPE_VIDEO = 0; public static final int TYPE_AUDIO = 1; private static final String TAG = "PlayerActivity"; private MediaController mediaController; private Handler mainHandler; private View shutterView; private VideoSurfaceView surfaceView; private ExoPlayer player; private RendererBuilder builder; private RendererBuilderCallback callback; private MediaCodecVideoTrackRenderer videoRenderer; private boolean autoPlay = true; private long playerPosition; private Uri contentUri; private int contentType; private String contentId; // Activity lifecycle @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Intent intent = getIntent(); contentUri = intent.getData(); contentType = intent.getIntExtra(DemoUtil.CONTENT_TYPE_EXTRA, DemoUtil.TYPE_OTHER); contentId = intent.getStringExtra(DemoUtil.CONTENT_ID_EXTRA); mainHandler = new Handler(getMainLooper()); builder = getRendererBuilder(); setContentView(R.layout.player_activity_simple); View root = findViewById(R.id.root); root.setOnTouchListener(new OnTouchListener() { @Override public boolean onTouch(View arg0, MotionEvent arg1) { if (arg1.getAction() == MotionEvent.ACTION_DOWN) { toggleControlsVisibility(); } return true; } }); mediaController = new MediaController(this); mediaController.setAnchorView(root); shutterView = findViewById(R.id.shutter); surfaceView = (VideoSurfaceView) findViewById(R.id.surface_view); surfaceView.getHolder().addCallback(this); DemoUtil.setDefaultCookieManager(); } @Override public void onResume() { super.onResume(); // Setup the player player = ExoPlayer.Factory.newInstance(RENDERER_COUNT, 1000, 5000); player.addListener(this); player.seekTo(playerPosition); // Build the player controls mediaController.setMediaPlayer(new PlayerControl(player)); mediaController.setEnabled(true); // Request the 
renderers callback = new RendererBuilderCallback(); builder.buildRenderers(callback); } @Override public void onPause() { super.onPause(); // Release the player if (player != null) { playerPosition = player.getCurrentPosition(); player.release(); player = null; } callback = null; videoRenderer = null; shutterView.setVisibility(View.VISIBLE); } // Public methods public Handler getMainHandler() { return mainHandler; } // Internal methods private void toggleControlsVisibility() { if (mediaController.isShowing()) { mediaController.hide(); } else { mediaController.show(0); } } private RendererBuilder getRendererBuilder() { String userAgent = DemoUtil.getUserAgent(this); switch (contentType) { case DemoUtil.TYPE_SS: return new SmoothStreamingRendererBuilder(this, userAgent, contentUri.toString(), contentId); case DemoUtil.TYPE_DASH: return new DashRendererBuilder(this, userAgent, contentUri.toString(), contentId); default: return new DefaultRendererBuilder(this, contentUri); } } private void onRenderers(RendererBuilderCallback callback, MediaCodecVideoTrackRenderer videoRenderer, MediaCodecAudioTrackRenderer audioRenderer) { if (this.callback != callback) { return; } this.callback = null; this.videoRenderer = videoRenderer; player.prepare(videoRenderer, audioRenderer); maybeStartPlayback(); } private void maybeStartPlayback() { Surface surface = surfaceView.getHolder().getSurface(); if (videoRenderer == null || surface == null || !surface.isValid()) { // We're not ready yet. 
return; } player.sendMessage(videoRenderer, MediaCodecVideoTrackRenderer.MSG_SET_SURFACE, surface); if (autoPlay) { player.setPlayWhenReady(true); autoPlay = false; } } private void onRenderersError(RendererBuilderCallback callback, Exception e) { if (this.callback != callback) { return; } this.callback = null; onError(e); } private void onError(Exception e) { Log.e(TAG, "Playback failed", e); Toast.makeText(this, R.string.failed, Toast.LENGTH_SHORT).show(); finish(); } // ExoPlayer.Listener implementation @Override public void onPlayerStateChanged(boolean playWhenReady, int playbackState) { // Do nothing. } @Override public void onPlayWhenReadyCommitted() { // Do nothing. } @Override public void onPlayerError(ExoPlaybackException e) { onError(e); } // MediaCodecVideoTrackRenderer.Listener @Override public void onVideoSizeChanged(int width, int height, float pixelWidthHeightRatio) { surfaceView.setVideoWidthHeightRatio( height == 0 ? 1 : (pixelWidthHeightRatio * width) / height); } @Override public void onDrawnToSurface(Surface surface) { shutterView.setVisibility(View.GONE); } @Override public void onDroppedFrames(int count, long elapsed) { Log.d(TAG, "Dropped frames: " + count); } @Override public void onDecoderInitializationError(DecoderInitializationException e) { // This is for informational purposes only. Do nothing. } @Override public void onCryptoError(CryptoException e) { // This is for informational purposes only. Do nothing. } // SurfaceHolder.Callback implementation @Override public void surfaceCreated(SurfaceHolder holder) { maybeStartPlayback(); } @Override public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { // Do nothing. 
} @Override public void surfaceDestroyed(SurfaceHolder holder) { if (videoRenderer != null) { player.blockingSendMessage(videoRenderer, MediaCodecVideoTrackRenderer.MSG_SET_SURFACE, null); } } /* package */ final class RendererBuilderCallback { public void onRenderers(MediaCodecVideoTrackRenderer videoRenderer, MediaCodecAudioTrackRenderer audioRenderer) { SimplePlayerActivity.this.onRenderers(this, videoRenderer, audioRenderer); } public void onRenderersError(Exception e) { SimplePlayerActivity.this.onRenderersError(this, e); } } }
bboyfeiyu/ExoPlayer
demo/src/main/java/com/google/android/exoplayer/demo/simple/SimplePlayerActivity.java
Java
apache-2.0
8,386
#!/usr/bin/env python import errno import os import re import tempfile from hashlib import md5 class _FileCacheError(Exception): """Base exception class for FileCache related errors""" class _FileCache(object): DEPTH = 3 def __init__(self, root_directory=None): self._InitializeRootDirectory(root_directory) def Get(self, key): path = self._GetPath(key) if os.path.exists(path): with open(path) as f: return f.read() else: return None def Set(self, key, data): path = self._GetPath(key) directory = os.path.dirname(path) if not os.path.exists(directory): os.makedirs(directory) if not os.path.isdir(directory): raise _FileCacheError('%s exists but is not a directory' % directory) temp_fd, temp_path = tempfile.mkstemp() temp_fp = os.fdopen(temp_fd, 'w') temp_fp.write(data) temp_fp.close() if not path.startswith(self._root_directory): raise _FileCacheError('%s does not appear to live under %s' % (path, self._root_directory)) if os.path.exists(path): os.remove(path) os.rename(temp_path, path) def Remove(self, key): path = self._GetPath(key) if not path.startswith(self._root_directory): raise _FileCacheError('%s does not appear to live under %s' % (path, self._root_directory )) if os.path.exists(path): os.remove(path) def GetCachedTime(self, key): path = self._GetPath(key) if os.path.exists(path): return os.path.getmtime(path) else: return None def _GetUsername(self): """Attempt to find the username in a cross-platform fashion.""" try: return os.getenv('USER') or \ os.getenv('LOGNAME') or \ os.getenv('USERNAME') or \ os.getlogin() or \ 'nobody' except (AttributeError, IOError, OSError): return 'nobody' def _GetTmpCachePath(self): username = self._GetUsername() cache_directory = 'python.cache_' + username return os.path.join(tempfile.gettempdir(), cache_directory) def _InitializeRootDirectory(self, root_directory): if not root_directory: root_directory = self._GetTmpCachePath() root_directory = os.path.abspath(root_directory) try: os.mkdir(root_directory) except OSError as e: 
if e.errno == errno.EEXIST and os.path.isdir(root_directory): # directory already exists pass else: # exists but is a file, or no permissions, or... raise self._root_directory = root_directory def _GetPath(self, key): try: hashed_key = md5(key.encode('utf-8')).hexdigest() except TypeError: hashed_key = md5.new(key).hexdigest() return os.path.join(self._root_directory, self._GetPrefix(hashed_key), hashed_key) def _GetPrefix(self, hashed_key): return os.path.sep.join(hashed_key[0:_FileCache.DEPTH]) class ParseTweet(object): # compile once on import regexp = {"RT": "^RT", "MT": r"^MT", "ALNUM": r"(@[a-zA-Z0-9_]+)", "HASHTAG": r"(#[\w\d]+)", "URL": r"([http://]?[a-zA-Z\d\/]+[\.]+[a-zA-Z\d\/\.]+)"} regexp = dict((key, re.compile(value)) for key, value in list(regexp.items())) def __init__(self, timeline_owner, tweet): """ timeline_owner : twitter handle of user account. tweet - 140 chars from feed; object does all computation on construction properties: RT, MT - boolean URLs - list of URL Hashtags - list of tags """ self.Owner = timeline_owner self.tweet = tweet self.UserHandles = ParseTweet.getUserHandles(tweet) self.Hashtags = ParseTweet.getHashtags(tweet) self.URLs = ParseTweet.getURLs(tweet) self.RT = ParseTweet.getAttributeRT(tweet) self.MT = ParseTweet.getAttributeMT(tweet) # additional intelligence if ( self.RT and len(self.UserHandles) > 0 ): # change the owner of tweet? 
self.Owner = self.UserHandles[0] return def __str__(self): """ for display method """ return "owner %s, urls: %d, hashtags %d, user_handles %d, len_tweet %d, RT = %s, MT = %s" % ( self.Owner, len(self.URLs), len(self.Hashtags), len(self.UserHandles), len(self.tweet), self.RT, self.MT) @staticmethod def getAttributeRT(tweet): """ see if tweet is a RT """ return re.search(ParseTweet.regexp["RT"], tweet.strip()) is not None @staticmethod def getAttributeMT(tweet): """ see if tweet is a MT """ return re.search(ParseTweet.regexp["MT"], tweet.strip()) is not None @staticmethod def getUserHandles(tweet): """ given a tweet we try and extract all user handles in order of occurrence""" return re.findall(ParseTweet.regexp["ALNUM"], tweet) @staticmethod def getHashtags(tweet): """ return all hashtags""" return re.findall(ParseTweet.regexp["HASHTAG"], tweet) @staticmethod def getURLs(tweet): """ URL : [http://]?[\w\.?/]+""" return re.findall(ParseTweet.regexp["URL"], tweet)
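The ParseTweet class above drives all of its extraction through three precompiled regular expressions. A minimal standalone sketch of the same logic, with the patterns copied verbatim from ParseTweet.regexp (the sample tweet text is invented for illustration):

```python
import re

# Patterns copied from ParseTweet.regexp above
RT = re.compile(r"^RT")                  # retweet marker at the start of the text
ALNUM = re.compile(r"(@[a-zA-Z0-9_]+)")  # @user handles
HASHTAG = re.compile(r"(#[\w\d]+)")      # #hashtags

tweet = "RT @alice: loving #python and #regex today"

is_rt = RT.search(tweet.strip()) is not None
handles = ALNUM.findall(tweet)           # returned in order of occurrence
hashtags = HASHTAG.findall(tweet)

print(is_rt, handles, hashtags)  # True ['@alice'] ['#python', '#regex']
```

This mirrors what getAttributeRT, getUserHandles, and getHashtags do internally; the class merely wraps these calls and, when a tweet is a retweet, reassigns Owner to the first extracted handle.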
milmd90/TwitterBot
twitter/_file_cache.py
Python
apache-2.0
5,588
.window { font-size:12px; position:absolute; overflow:hidden; background:transparent url('images/panel_title.png'); background1:#878787; padding:5px; border:1px solid #99BBE8; -moz-border-radius:5px; -webkit-border-radius: 5px; } .window-shadow{ position:absolute; background:#ddd; -moz-border-radius:5px; -webkit-border-radius: 5px; -moz-box-shadow: 2px 2px 3px rgba(0, 0, 0, 0.2); -webkit-box-shadow: 2px 2px 3px rgba(0, 0, 0, 0.2); filter: progid:DXImageTransform.Microsoft.Blur(pixelRadius=2,MakeShadow=false,ShadowOpacity=0.2); } .window .window-header{ background:transparent; padding:2px 0px 4px 0px; } .window .window-body{ background:#fff; border:1px solid #99BBE8; border-top-width:0px; } .window .window-header .panel-icon{ left:1px; top:1px; } .window .window-header .panel-with-icon{ padding-left:18px; } .window .window-header .panel-tool{ top:0px; right:1px; } .window-proxy{ position:absolute; overflow:hidden; border:1px dashed #15428b; } .window-mask{ position:absolute; left:0; top:0; width:100%; height:100%; filter:alpha(opacity=40); opacity:0.40; background:#ccc; display1:none; font-size:1px; *zoom:1; overflow:hidden; }
Y2MD/blog
web/weixin/js/themes/default/window.css
CSS
bsd-3-clause
1,244
/* * #%L * Alfresco Repository * %% * Copyright (C) 2005 - 2016 Alfresco Software Limited * %% * This file is part of the Alfresco software. * If the software was purchased under a paid Alfresco license, the terms of * the paid license agreement will prevail. Otherwise, the software is * provided under the following open source license terms: * * Alfresco is free software: you can redistribute it and/or modify * it under the terms of the GNU Lesser General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * Alfresco is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public License * along with Alfresco. If not, see <http://www.gnu.org/licenses/>. * #L% */ package org.alfresco.opencmis.search; import java.util.HashMap; import java.util.Map; import java.util.Set; import org.alfresco.opencmis.dictionary.CMISDictionaryService; import org.alfresco.opencmis.search.CMISQueryOptions.CMISQueryMode; import org.alfresco.repo.search.impl.lucene.PagingLuceneResultSet; import org.alfresco.repo.search.impl.querymodel.Query; import org.alfresco.repo.search.impl.querymodel.QueryEngine; import org.alfresco.repo.search.impl.querymodel.QueryEngineResults; import org.alfresco.repo.search.impl.querymodel.QueryModelException; import org.alfresco.repo.security.permissions.impl.acegi.FilteringResultSet; import org.alfresco.service.cmr.dictionary.DictionaryService; import org.alfresco.service.cmr.repository.NodeRef; import org.alfresco.service.cmr.repository.NodeService; import org.alfresco.service.cmr.repository.StoreRef; import org.alfresco.service.cmr.search.LimitBy; import org.alfresco.service.cmr.search.QueryConsistency; import 
org.alfresco.service.cmr.search.ResultSet; import org.alfresco.util.Pair; import org.apache.chemistry.opencmis.commons.enums.BaseTypeId; import org.apache.chemistry.opencmis.commons.enums.CapabilityJoin; import org.apache.chemistry.opencmis.commons.enums.CapabilityQuery; /** * @author andyh */ public class CMISQueryServiceImpl implements CMISQueryService { private CMISDictionaryService cmisDictionaryService; private QueryEngine luceneQueryEngine; private QueryEngine dbQueryEngine; private NodeService nodeService; private DictionaryService alfrescoDictionaryService; public void setOpenCMISDictionaryService(CMISDictionaryService cmisDictionaryService) { this.cmisDictionaryService = cmisDictionaryService; } /** * @param queryEngine * the luceneQueryEngine to set */ public void setLuceneQueryEngine(QueryEngine queryEngine) { this.luceneQueryEngine = queryEngine; } /** * @param queryEngine * the dbQueryEngine to set */ public void setDbQueryEngine(QueryEngine queryEngine) { this.dbQueryEngine = queryEngine; } /** * @param nodeService * the nodeService to set */ public void setNodeService(NodeService nodeService) { this.nodeService = nodeService; } /** * @param alfrescoDictionaryService * the Alfresco Dictionary Service to set */ public void setAlfrescoDictionaryService(DictionaryService alfrescoDictionaryService) { this.alfrescoDictionaryService = alfrescoDictionaryService; } public CMISResultSet query(CMISQueryOptions options) { Pair<Query, QueryEngineResults> resultPair = executeQuerySwitchingImpl(options); Query query = resultPair.getFirst(); QueryEngineResults results = resultPair.getSecond(); Map<String, ResultSet> wrapped = new HashMap<String, ResultSet>(); Map<Set<String>, ResultSet> map = results.getResults(); for (Set<String> group : map.keySet()) { ResultSet current = map.get(group); for (String selector : group) { wrapped.put(selector, filterNotExistingNodes(current)); } } LimitBy limitBy = null; if ((null != results.getResults()) && 
!results.getResults().isEmpty() && (null != results.getResults().values()) && !results.getResults().values().isEmpty()) { limitBy = results.getResults().values().iterator().next().getResultSetMetaData().getLimitedBy(); } CMISResultSet cmis = new CMISResultSet(wrapped, options, limitBy, nodeService, query, cmisDictionaryService, alfrescoDictionaryService); return cmis; } private Pair<Query, QueryEngineResults> executeQuerySwitchingImpl(CMISQueryOptions options) { switch (options.getQueryConsistency()) { case TRANSACTIONAL_IF_POSSIBLE : { try { return executeQueryUsingEngine(dbQueryEngine, options); } catch(QueryModelException qme) { return executeQueryUsingEngine(luceneQueryEngine, options); } } case TRANSACTIONAL : { return executeQueryUsingEngine(dbQueryEngine, options); } case EVENTUAL : case DEFAULT : default : { return executeQueryUsingEngine(luceneQueryEngine, options); } } } private Pair<Query, QueryEngineResults> executeQueryUsingEngine(QueryEngine queryEngine, CMISQueryOptions options) { CapabilityJoin joinSupport = getJoinSupport(); if (options.getQueryMode() == CMISQueryOptions.CMISQueryMode.CMS_WITH_ALFRESCO_EXTENSIONS) { joinSupport = CapabilityJoin.INNERANDOUTER; } // TODO: Refactor to avoid duplication of valid scopes here and in // CMISQueryParser BaseTypeId[] validScopes = (options.getQueryMode() == CMISQueryMode.CMS_STRICT) ? 
CmisFunctionEvaluationContext.STRICT_SCOPES : CmisFunctionEvaluationContext.ALFRESCO_SCOPES; CmisFunctionEvaluationContext functionContext = new CmisFunctionEvaluationContext(); functionContext.setCmisDictionaryService(cmisDictionaryService); functionContext.setNodeService(nodeService); functionContext.setValidScopes(validScopes); CMISQueryParser parser = new CMISQueryParser(options, cmisDictionaryService, joinSupport); QueryConsistency queryConsistency = options.getQueryConsistency(); if (queryConsistency == QueryConsistency.DEFAULT) { options.setQueryConsistency(QueryConsistency.EVENTUAL); } Query query = parser.parse(queryEngine.getQueryModelFactory(), functionContext); QueryEngineResults queryEngineResults = queryEngine.executeQuery(query, options, functionContext); return new Pair<Query, QueryEngineResults>(query, queryEngineResults); } /* MNT-8804 filter ResultSet for nodes with corrupted indexes */ private ResultSet filterNotExistingNodes(ResultSet resultSet) { if (resultSet instanceof PagingLuceneResultSet) { ResultSet wrapped = ((PagingLuceneResultSet)resultSet).getWrapped(); if (wrapped instanceof FilteringResultSet) { FilteringResultSet filteringResultSet = (FilteringResultSet)wrapped; for (int i = 0; i < filteringResultSet.length(); i++) { NodeRef nodeRef = filteringResultSet.getNodeRef(i); /* filter node if it does not exist */ if (!nodeService.exists(nodeRef)) { filteringResultSet.setIncluded(i, false); } } } } return resultSet; } public CMISResultSet query(String query, StoreRef storeRef) { CMISQueryOptions options = new CMISQueryOptions(query, storeRef); return query(options); } public boolean getPwcSearchable() { return true; } public boolean getAllVersionsSearchable() { return false; } public CapabilityQuery getQuerySupport() { return CapabilityQuery.BOTHCOMBINED; } public CapabilityJoin getJoinSupport() { return CapabilityJoin.NONE; } }
Alfresco/alfresco-repository
src/main/java/org/alfresco/opencmis/search/CMISQueryServiceImpl.java
Java
lgpl-3.0
8,952
/* ======================================================================== * Bootstrap: iconset-typicon-2.0.6.js by @recktoner * https://victor-valencia.github.com/bootstrap-iconpicker * * Iconset: Typicons 2.0.6 * https://github.com/stephenhutchings/typicons.font * ======================================================================== * Copyright 2013-2014 Victor Valencia Rico. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * ======================================================================== */ ;(function($){ $.iconset_typicon = { iconClass: 'typcn', iconClassFix: 'typcn-', icons: [ 'adjust-brightness', 'adjust-contrast', 'anchor-outline', 'anchor', 'archive', 'arrow-back-outline', 'arrow-back', 'arrow-down-outline', 'arrow-down-thick', 'arrow-down', 'arrow-forward-outline', 'arrow-forward', 'arrow-left-outline', 'arrow-left-thick', 'arrow-left', 'arrow-loop-outline', 'arrow-loop', 'arrow-maximise-outline', 'arrow-maximise', 'arrow-minimise-outline', 'arrow-minimise', 'arrow-move-outline', 'arrow-move', 'arrow-repeat-outline', 'arrow-repeat', 'arrow-right-outline', 'arrow-right-thick', 'arrow-right', 'arrow-shuffle', 'arrow-sorted-down', 'arrow-sorted-up', 'arrow-sync-outline', 'arrow-sync', 'arrow-unsorted', 'arrow-up-outline', 'arrow-up-thick', 'arrow-up', 'at', 'attachment-outline', 'attachment', 'backspace-outline', 'backspace', 'battery-charge', 'battery-full', 'battery-high', 'battery-low', 'battery-mid', 'beaker', 'beer', 'bell', 'book', 'bookmark', 'briefcase', 
'brush', 'business-card', 'calculator', 'calendar-outline', 'calendar', 'camera-outline', 'camera', 'cancel-outline', 'cancel', 'chart-area-outline', 'chart-area', 'chart-bar-outline', 'chart-bar', 'chart-line-outline', 'chart-line', 'chart-pie-outline', 'chart-pie', 'chevron-left-outline', 'chevron-left', 'chevron-right-outline', 'chevron-right', 'clipboard', 'cloud-storage', 'cloud-storage-outline', 'code-outline', 'code', 'coffee', 'cog-outline', 'cog', 'compass', 'contacts', 'credit-card', 'css3', 'database', 'delete-outline', 'delete', 'device-desktop', 'device-laptop', 'device-phone', 'device-tablet', 'directions', 'divide-outline', 'divide', 'document-add', 'document-delete', 'document-text', 'document', 'download-outline', 'download', 'dropbox', 'edit', 'eject-outline', 'eject', 'equals-outline', 'equals', 'export-outline', 'export', 'eye-outline', 'eye', 'feather', 'film', 'filter', 'flag-outline', 'flag', 'flash-outline', 'flash', 'flow-children', 'flow-merge', 'flow-parallel', 'flow-switch', 'folder-add', 'folder-delete', 'folder-open', 'folder', 'gift', 'globe-outline', 'globe', 'group-outline', 'group', 'headphones', 'heart-full-outline', 'heart-half-outline', 'heart-outline', 'heart', 'home-outline', 'home', 'html5', 'image-outline', 'image', 'infinity-outline', 'infinity', 'info-large-outline', 'info-large', 'info-outline', 'info', 'input-checked-outline', 'input-checked', 'key-outline', 'key', 'keyboard', 'leaf', 'lightbulb', 'link-outline', 'link', 'location-arrow-outline', 'location-arrow', 'location-outline', 'location', 'lock-closed-outline', 'lock-closed', 'lock-open-outline', 'lock-open', 'mail', 'map', 'media-eject-outline', 'media-eject', 'media-fast-forward-outline', 'media-fast-forward', 'media-pause-outline', 'media-pause', 'media-play-outline', 'media-play-reverse-outline', 'media-play-reverse', 'media-play', 'media-record-outline', 'media-record', 'media-rewind-outline', 'media-rewind', 'media-stop-outline', 'media-stop', 
'message-typing', 'message', 'messages', 'microphone-outline', 'microphone', 'minus-outline', 'minus', 'mortar-board', 'news', 'notes-outline', 'notes', 'pen', 'pencil', 'phone-outline', 'phone', 'pi-outline', 'pi', 'pin-outline', 'pin', 'pipette', 'plane-outline', 'plane', 'plug', 'plus-outline', 'plus', 'point-of-interest-outline', 'point-of-interest', 'power-outline', 'power', 'printer', 'puzzle-outline', 'puzzle', 'radar-outline', 'radar', 'refresh-outline', 'refresh', 'rss-outline', 'rss', 'scissors-outline', 'scissors', 'shopping-bag', 'shopping-cart', 'social-at-circular', 'social-dribbble-circular', 'social-dribbble', 'social-facebook-circular', 'social-facebook', 'social-flickr-circular', 'social-flickr', 'social-github-circular', 'social-github', 'social-google-plus-circular', 'social-google-plus', 'social-instagram-circular', 'social-instagram', 'social-last-fm-circular', 'social-last-fm', 'social-linkedin-circular', 'social-linkedin', 'social-pinterest-circular', 'social-pinterest', 'social-skype-outline', 'social-skype', 'social-tumbler-circular', 'social-tumbler', 'social-twitter-circular', 'social-twitter', 'social-vimeo-circular', 'social-vimeo', 'social-youtube-circular', 'social-youtube', 'sort-alphabetically-outline', 'sort-alphabetically', 'sort-numerically-outline', 'sort-numerically', 'spanner-outline', 'spanner', 'spiral', 'star-full-outline', 'star-half-outline', 'star-half', 'star-outline', 'star', 'starburst-outline', 'starburst', 'stopwatch', 'support', 'tabs-outline', 'tag', 'tags', 'th-large-outline', 'th-large', 'th-list-outline', 'th-list', 'th-menu-outline', 'th-menu', 'th-small-outline', 'th-small', 'thermometer', 'thumbs-down', 'thumbs-ok', 'thumbs-up', 'tick-outline', 'tick', 'ticket', 'time', 'times-outline', 'times', 'trash', 'tree', 'upload-outline', 'upload', 'user-add-outline', 'user-add', 'user-delete-outline', 'user-delete', 'user-outline', 'user', 'vendor-android', 'vendor-apple', 'vendor-microsoft', 'video-outline', 
'video', 'volume-down', 'volume-mute', 'volume-up', 'volume', 'warning-outline', 'warning', 'watch', 'waves-outline', 'waves', 'weather-cloudy', 'weather-downpour', 'weather-night', 'weather-partly-sunny', 'weather-shower', 'weather-snow', 'weather-stormy', 'weather-sunny', 'weather-windy-cloudy', 'weather-windy', 'wi-fi-outline', 'wi-fi', 'wine', 'world-outline', 'world', 'zoom-in-outline', 'zoom-in', 'zoom-out-outline', 'zoom-out', 'zoom-outline', 'zoom' ]}; })(jQuery);
ahsina/StudExpo
wp-content/plugins/tiny-bootstrap-elements-light/assets/js/iconset/iconset-typicon-2.0.6.js
JavaScript
gpl-2.0
10,551
import {test} from '../qunit'; import {localeModule} from '../qunit-locale'; import moment from '../../moment'; localeModule('hu'); test('parse', function (assert) { var tests = 'január jan_február feb_március márc_április ápr_május máj_június jún_július júl_augusztus aug_szeptember szept_október okt_november nov_december dec'.split('_'), i; function equalTest(input, mmm, i) { assert.equal(moment(input, mmm).month(), i, input + ' should be month ' + (i + 1)); } for (i = 0; i < 12; i++) { tests[i] = tests[i].split(' '); equalTest(tests[i][0], 'MMM', i); equalTest(tests[i][1], 'MMM', i); equalTest(tests[i][0], 'MMMM', i); equalTest(tests[i][1], 'MMMM', i); equalTest(tests[i][0].toLocaleLowerCase(), 'MMMM', i); equalTest(tests[i][1].toLocaleLowerCase(), 'MMMM', i); equalTest(tests[i][0].toLocaleUpperCase(), 'MMMM', i); equalTest(tests[i][1].toLocaleUpperCase(), 'MMMM', i); } }); test('format', function (assert) { var a = [ ['dddd, MMMM Do YYYY, HH:mm:ss', 'vasárnap, február 14. 2010, 15:25:50'], ['ddd, HH', 'vas, 15'], ['M Mo MM MMMM MMM', '2 2. 02 február feb'], ['YYYY YY', '2010 10'], ['D Do DD', '14 14. 14'], ['d do dddd ddd dd', '0 0. vasárnap vas v'], ['DDD DDDo DDDD', '45 45. 045'], ['w wo ww', '6 6. 06'], ['H HH', '15 15'], ['m mm', '25 25'], ['s ss', '50 50'], ['[az év] DDDo [napja]', 'az év 45. napja'], ['LTS', '15:25:50'], ['L', '2010.02.14.'], ['LL', '2010. február 14.'], ['LLL', '2010. február 14. 15:25'], ['LLLL', '2010. február 14., vasárnap 15:25'], ['l', '2010.2.14.'], ['ll', '2010. feb 14.'], ['lll', '2010. feb 14. 15:25'], ['llll', '2010. 
feb 14., vas 15:25'] ], b = moment(new Date(2010, 1, 14, 15, 25, 50, 125)), i; for (i = 0; i < a.length; i++) { assert.equal(b.format(a[i][0]), a[i][1], a[i][0] + ' ---> ' + a[i][1]); } }); test('meridiem', function (assert) { assert.equal(moment([2011, 2, 23, 0, 0]).format('a'), 'de', 'am'); assert.equal(moment([2011, 2, 23, 11, 59]).format('a'), 'de', 'am'); assert.equal(moment([2011, 2, 23, 12, 0]).format('a'), 'du', 'pm'); assert.equal(moment([2011, 2, 23, 23, 59]).format('a'), 'du', 'pm'); assert.equal(moment([2011, 2, 23, 0, 0]).format('A'), 'DE', 'AM'); assert.equal(moment([2011, 2, 23, 11, 59]).format('A'), 'DE', 'AM'); assert.equal(moment([2011, 2, 23, 12, 0]).format('A'), 'DU', 'PM'); assert.equal(moment([2011, 2, 23, 23, 59]).format('A'), 'DU', 'PM'); }); test('format ordinal', function (assert) { assert.equal(moment([2011, 0, 1]).format('DDDo'), '1.', '1.'); assert.equal(moment([2011, 0, 2]).format('DDDo'), '2.', '2.'); assert.equal(moment([2011, 0, 3]).format('DDDo'), '3.', '3.'); assert.equal(moment([2011, 0, 4]).format('DDDo'), '4.', '4.'); assert.equal(moment([2011, 0, 5]).format('DDDo'), '5.', '5.'); assert.equal(moment([2011, 0, 6]).format('DDDo'), '6.', '6.'); assert.equal(moment([2011, 0, 7]).format('DDDo'), '7.', '7.'); assert.equal(moment([2011, 0, 8]).format('DDDo'), '8.', '8.'); assert.equal(moment([2011, 0, 9]).format('DDDo'), '9.', '9.'); assert.equal(moment([2011, 0, 10]).format('DDDo'), '10.', '10.'); assert.equal(moment([2011, 0, 11]).format('DDDo'), '11.', '11.'); assert.equal(moment([2011, 0, 12]).format('DDDo'), '12.', '12.'); assert.equal(moment([2011, 0, 13]).format('DDDo'), '13.', '13.'); assert.equal(moment([2011, 0, 14]).format('DDDo'), '14.', '14.'); assert.equal(moment([2011, 0, 15]).format('DDDo'), '15.', '15.'); assert.equal(moment([2011, 0, 16]).format('DDDo'), '16.', '16.'); assert.equal(moment([2011, 0, 17]).format('DDDo'), '17.', '17.'); assert.equal(moment([2011, 0, 18]).format('DDDo'), '18.', '18.'); 
assert.equal(moment([2011, 0, 19]).format('DDDo'), '19.', '19.'); assert.equal(moment([2011, 0, 20]).format('DDDo'), '20.', '20.'); assert.equal(moment([2011, 0, 21]).format('DDDo'), '21.', '21.'); assert.equal(moment([2011, 0, 22]).format('DDDo'), '22.', '22.'); assert.equal(moment([2011, 0, 23]).format('DDDo'), '23.', '23.'); assert.equal(moment([2011, 0, 24]).format('DDDo'), '24.', '24.'); assert.equal(moment([2011, 0, 25]).format('DDDo'), '25.', '25.'); assert.equal(moment([2011, 0, 26]).format('DDDo'), '26.', '26.'); assert.equal(moment([2011, 0, 27]).format('DDDo'), '27.', '27.'); assert.equal(moment([2011, 0, 28]).format('DDDo'), '28.', '28.'); assert.equal(moment([2011, 0, 29]).format('DDDo'), '29.', '29.'); assert.equal(moment([2011, 0, 30]).format('DDDo'), '30.', '30.'); assert.equal(moment([2011, 0, 31]).format('DDDo'), '31.', '31.'); }); test('format month', function (assert) { var expected = 'január jan_február feb_március márc_április ápr_május máj_június jún_július júl_augusztus aug_szeptember szept_október okt_november nov_december dec'.split('_'), i; for (i = 0; i < expected.length; i++) { assert.equal(moment([2011, i, 1]).format('MMMM MMM'), expected[i], expected[i]); } }); test('format week', function (assert) { var expected = 'vasárnap vas_hétfő hét_kedd kedd_szerda sze_csütörtök csüt_péntek pén_szombat szo'.split('_'), i; for (i = 0; i < expected.length; i++) { assert.equal(moment([2011, 0, 2 + i]).format('dddd ddd'), expected[i], expected[i]); } }); test('from', function (assert) { var start = moment([2007, 1, 28]); assert.equal(start.from(moment([2007, 1, 28]).add({s: 44}), true), 'néhány másodperc', '44 másodperc = néhány másodperc'); assert.equal(start.from(moment([2007, 1, 28]).add({s: 45}), true), 'egy perc', '45 másodperc = egy perc'); assert.equal(start.from(moment([2007, 1, 28]).add({s: 89}), true), 'egy perc', '89 másodperc = egy perc'); assert.equal(start.from(moment([2007, 1, 28]).add({s: 90}), true), '2 perc', '90 másodperc = 2 
perc'); assert.equal(start.from(moment([2007, 1, 28]).add({m: 44}), true), '44 perc', '44 perc = 44 perc'); assert.equal(start.from(moment([2007, 1, 28]).add({m: 45}), true), 'egy óra', '45 perc = egy óra'); assert.equal(start.from(moment([2007, 1, 28]).add({m: 89}), true), 'egy óra', '89 perc = egy óra'); assert.equal(start.from(moment([2007, 1, 28]).add({m: 90}), true), '2 óra', '90 perc = 2 óra'); assert.equal(start.from(moment([2007, 1, 28]).add({h: 5}), true), '5 óra', '5 óra = 5 óra'); assert.equal(start.from(moment([2007, 1, 28]).add({h: 21}), true), '21 óra', '21 óra = 21 óra'); assert.equal(start.from(moment([2007, 1, 28]).add({h: 22}), true), 'egy nap', '22 óra = egy nap'); assert.equal(start.from(moment([2007, 1, 28]).add({h: 35}), true), 'egy nap', '35 óra = egy nap'); assert.equal(start.from(moment([2007, 1, 28]).add({h: 36}), true), '2 nap', '36 óra = 2 nap'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 1}), true), 'egy nap', '1 nap = egy nap'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 5}), true), '5 nap', '5 nap = 5 nap'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 25}), true), '25 nap', '25 nap = 25 nap'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 26}), true), 'egy hónap', '26 nap = egy hónap'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 30}), true), 'egy hónap', '30 nap = egy hónap'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 43}), true), 'egy hónap', '45 nap = egy hónap'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 46}), true), '2 hónap', '46 nap = 2 hónap'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 74}), true), '2 hónap', '75 nap = 2 hónap'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 76}), true), '3 hónap', '76 nap = 3 hónap'); assert.equal(start.from(moment([2007, 1, 28]).add({M: 1}), true), 'egy hónap', '1 hónap = egy hónap'); assert.equal(start.from(moment([2007, 1, 28]).add({M: 5}), true), '5 hónap', '5 hónap = 5 hónap'); 
assert.equal(start.from(moment([2007, 1, 28]).add({d: 345}), true), 'egy év', '345 nap = egy év'); assert.equal(start.from(moment([2007, 1, 28]).add({d: 548}), true), '2 év', '548 nap = 2 év'); assert.equal(start.from(moment([2007, 1, 28]).add({y: 1}), true), 'egy év', '1 év = egy év'); assert.equal(start.from(moment([2007, 1, 28]).add({y: 5}), true), '5 év', '5 év = 5 év'); }); test('suffix', function (assert) { assert.equal(moment(30000).from(0), 'néhány másodperc múlva', 'prefix'); assert.equal(moment(0).from(30000), 'néhány másodperce', 'suffix'); }); test('now from now', function (assert) { assert.equal(moment().fromNow(), 'néhány másodperce', 'now from now should display as in the past'); }); test('fromNow', function (assert) { assert.equal(moment().add({s: 30}).fromNow(), 'néhány másodperc múlva', 'néhány másodperc múlva'); assert.equal(moment().add({d: 5}).fromNow(), '5 nap múlva', '5 nap múlva'); }); test('calendar day', function (assert) { var a = moment().hours(12).minutes(0).seconds(0); assert.equal(moment(a).calendar(), 'ma 12:00-kor', 'today at the same time'); assert.equal(moment(a).add({m: 25}).calendar(), 'ma 12:25-kor', 'Now plus 25 min'); assert.equal(moment(a).add({h: 1}).calendar(), 'ma 13:00-kor', 'Now plus 1 hour'); assert.equal(moment(a).add({d: 1}).calendar(), 'holnap 12:00-kor', 'tomorrow at the same time'); assert.equal(moment(a).subtract({h: 1}).calendar(), 'ma 11:00-kor', 'Now minus 1 hour'); assert.equal(moment(a).subtract({d: 1}).calendar(), 'tegnap 12:00-kor', 'yesterday at the same time'); }); test('calendar next week', function (assert) { var i, m, days = 'vasárnap_hétfőn_kedden_szerdán_csütörtökön_pénteken_szombaton'.split('_'); for (i = 2; i < 7; i++) { m = moment().add({d: i}); assert.equal(m.calendar(), m.format('[' + days[m.day()] + '] LT[-kor]'), 'today + ' + i + ' days current time'); m.hours(0).minutes(0).seconds(0).milliseconds(0); assert.equal(m.calendar(), m.format('[' + days[m.day()] + '] LT[-kor]'), 'today + ' + i + ' 
days beginning of day'); m.hours(23).minutes(59).seconds(59).milliseconds(999); assert.equal(m.calendar(), m.format('[' + days[m.day()] + '] LT[-kor]'), 'today + ' + i + ' days end of day'); } }); test('calendar last week', function (assert) { var i, m, days = 'vasárnap_hétfőn_kedden_szerdán_csütörtökön_pénteken_szombaton'.split('_'); for (i = 2; i < 7; i++) { m = moment().subtract({d: i}); assert.equal(m.calendar(), m.format('[múlt ' + days[m.day()] + '] LT[-kor]'), 'today - ' + i + ' days current time'); m.hours(0).minutes(0).seconds(0).milliseconds(0); assert.equal(m.calendar(), m.format('[múlt ' + days[m.day()] + '] LT[-kor]'), 'today - ' + i + ' days beginning of day'); m.hours(23).minutes(59).seconds(59).milliseconds(999); assert.equal(m.calendar(), m.format('[múlt ' + days[m.day()] + '] LT[-kor]'), 'today - ' + i + ' days end of day'); } }); test('calendar all else', function (assert) { var weeksAgo = moment().subtract({w: 1}), weeksFromNow = moment().add({w: 1}); assert.equal(weeksAgo.calendar(), weeksAgo.format('L'), 'egy héte'); assert.equal(weeksFromNow.calendar(), weeksFromNow.format('L'), 'egy hét múlva'); weeksAgo = moment().subtract({w: 2}); weeksFromNow = moment().add({w: 2}); assert.equal(weeksAgo.calendar(), weeksAgo.format('L'), '2 hete'); assert.equal(weeksFromNow.calendar(), weeksFromNow.format('L'), '2 hét múlva'); }); test('weeks year starting sunday formatted', function (assert) { assert.equal(moment([2011, 11, 26]).format('w ww wo'), '52 52 52.', 'Dec 26 2011 should be week 52'); assert.equal(moment([2012, 0, 1]).format('w ww wo'), '52 52 52.', 'Jan 1 2012 should be week 52'); assert.equal(moment([2012, 0, 2]).format('w ww wo'), '1 01 1.', 'Jan 2 2012 should be week 1'); assert.equal(moment([2012, 0, 8]).format('w ww wo'), '1 01 1.', 'Jan 8 2012 should be week 1'); assert.equal(moment([2012, 0, 9]).format('w ww wo'), '2 02 2.', 'Jan 9 2012 should be week 2'); });
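The assertions above exercise two simple rules of the Hungarian moment locale: the meridiem token is 'de' (délelőtt) strictly before noon and 'du' (délután) from noon on, and every ordinal is just the number followed by a period. A minimal standalone sketch of those two rules (written in Python rather than JavaScript purely for illustration; the function names `hu_meridiem` and `hu_ordinal` are invented here, not part of moment):

```python
# Sketch of the two Hungarian locale rules the tests above check:
# - meridiem: 'de' before noon, 'du' at or after noon
# - ordinals: every ordinal is the number followed by '.'
def hu_meridiem(hour, minute=0, lowercase=True):
    """Return the Hungarian meridiem token for a given time of day."""
    token = 'de' if hour < 12 else 'du'
    return token if lowercase else token.upper()

def hu_ordinal(number):
    """Hungarian ordinals take a trailing period regardless of value."""
    return '%d.' % number

print(hu_meridiem(11, 59))        # 'de'  (the 11:59 'am' case above)
print(hu_meridiem(12, 0, False))  # 'DU'  (the 12:00 'PM' case above)
print(hu_ordinal(45))             # '45.' (the 'az év 45. napja' case above)
```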
Oire/moment
src/test/locale/hu.js
JavaScript
mit
13,562
""" SleekXMPP: The Sleek XMPP Library Copyright (C) 2011 Nathanael C. Fritz This file is part of SleekXMPP. See the file LICENSE for copying permission. """ import logging from sleekxmpp.xmlstream import JID from sleekxmpp.xmlstream.handler import Callback from sleekxmpp.xmlstream.matcher import StanzaPath from sleekxmpp.plugins.base import BasePlugin from sleekxmpp.plugins.xep_0060 import stanza log = logging.getLogger(__name__) class XEP_0060(BasePlugin): """ XEP-0060 Publish Subscribe """ name = 'xep_0060' description = 'XEP-0060: Publish-Subscribe' dependencies = set(['xep_0030', 'xep_0004']) stanza = stanza def plugin_init(self): self.node_event_map = {} self.xmpp.register_handler( Callback('Pubsub Event: Items', StanzaPath('message/pubsub_event/items'), self._handle_event_items)) self.xmpp.register_handler( Callback('Pubsub Event: Purge', StanzaPath('message/pubsub_event/purge'), self._handle_event_purge)) self.xmpp.register_handler( Callback('Pubsub Event: Delete', StanzaPath('message/pubsub_event/delete'), self._handle_event_delete)) self.xmpp.register_handler( Callback('Pubsub Event: Configuration', StanzaPath('message/pubsub_event/configuration'), self._handle_event_configuration)) self.xmpp.register_handler( Callback('Pubsub Event: Subscription', StanzaPath('message/pubsub_event/subscription'), self._handle_event_subscription)) def plugin_end(self): self.xmpp.remove_handler('Pubsub Event: Items') self.xmpp.remove_handler('Pubsub Event: Purge') self.xmpp.remove_handler('Pubsub Event: Delete') self.xmpp.remove_handler('Pubsub Event: Configuration') self.xmpp.remove_handler('Pubsub Event: Subscription') def _handle_event_items(self, msg): """Raise events for publish and retraction notifications.""" node = msg['pubsub_event']['items']['node'] multi = len(msg['pubsub_event']['items']) > 1 values = {} if multi: values = msg.values del values['pubsub_event'] for item in msg['pubsub_event']['items']: event_name = self.node_event_map.get(node, None) event_type = 
'publish' if item.name == 'retract': event_type = 'retract' if multi: condensed = self.xmpp.Message() condensed.values = values condensed['pubsub_event']['items']['node'] = node condensed['pubsub_event']['items'].append(item) self.xmpp.event('pubsub_%s' % event_type, msg) if event_name: self.xmpp.event('%s_%s' % (event_name, event_type), condensed) else: self.xmpp.event('pubsub_%s' % event_type, msg) if event_name: self.xmpp.event('%s_%s' % (event_name, event_type), msg) def _handle_event_purge(self, msg): """Raise events for node purge notifications.""" node = msg['pubsub_event']['purge']['node'] event_name = self.node_event_map.get(node, None) self.xmpp.event('pubsub_purge', msg) if event_name: self.xmpp.event('%s_purge' % event_name, msg) def _handle_event_delete(self, msg): """Raise events for node deletion notifications.""" node = msg['pubsub_event']['delete']['node'] event_name = self.node_event_map.get(node, None) self.xmpp.event('pubsub_delete', msg) if event_name: self.xmpp.event('%s_delete' % event_name, msg) def _handle_event_configuration(self, msg): """Raise events for node configuration notifications.""" node = msg['pubsub_event']['configuration']['node'] event_name = self.node_event_map.get(node, None) self.xmpp.event('pubsub_config', msg) if event_name: self.xmpp.event('%s_config' % event_name, msg) def _handle_event_subscription(self, msg): """Raise events for node subscription notifications.""" node = msg['pubsub_event']['subscription']['node'] event_name = self.node_event_map.get(node, None) self.xmpp.event('pubsub_subscription', msg) if event_name: self.xmpp.event('%s_subscription' % event_name, msg) def map_node_event(self, node, event_name): """ Map node names to events. When a pubsub event is received for the given node, raise the provided event. 
For example:: map_node_event('http://jabber.org/protocol/tune', 'user_tune') will produce the events 'user_tune_publish' and 'user_tune_retract' when the respective notifications are received from the node 'http://jabber.org/protocol/tune', among other events. Arguments: node -- The node name to map to an event. event_name -- The name of the event to raise when a notification from the given node is received. """ self.node_event_map[node] = event_name def create_node(self, jid, node, config=None, ntype=None, ifrom=None, block=True, callback=None, timeout=None): """ Create and configure a new pubsub node. A server MAY use a different name for the node than the one provided, so be sure to check the result stanza for a server assigned name. If no configuration form is provided, the node will be created using the server's default configuration. To get the default configuration use get_node_config(). Arguments: jid -- The JID of the pubsub service. node -- Optional name of the node to create. If no name is provided, the server MAY generate a node ID for you. The server can also assign a different name than the one you provide; check the result stanza to see if the server assigned a name. config -- Optional XEP-0004 data form of configuration settings. ntype -- The type of node to create. Servers typically default to using 'leaf' if no type is provided. ifrom -- Specify the sender's JID. block -- Specify if the send call will block until a response is received, or a timeout occurs. Defaults to True. timeout -- The length of time (in seconds) to wait for a response before exiting the send call if blocking is used. Defaults to sleekxmpp.xmlstream.RESPONSE_TIMEOUT callback -- Optional reference to a stream handler function. Will be executed when a reply stanza is received. 
""" iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub']['create']['node'] = node if config is not None: form_type = 'http://jabber.org/protocol/pubsub#node_config' if 'FORM_TYPE' in config['fields']: config.field['FORM_TYPE']['value'] = form_type else: config.add_field(var='FORM_TYPE', ftype='hidden', value=form_type) if ntype: if 'pubsub#node_type' in config['fields']: config.field['pubsub#node_type']['value'] = ntype else: config.add_field(var='pubsub#node_type', value=ntype) iq['pubsub']['configure'].append(config) return iq.send(block=block, callback=callback, timeout=timeout) def subscribe(self, jid, node, bare=True, subscribee=None, options=None, ifrom=None, block=True, callback=None, timeout=None): """ Subscribe to updates from a pubsub node. The rules for determining the JID that is subscribing to the node are: 1. If subscribee is given, use that as provided. 2. If ifrom was given, use the bare or full version based on bare. 3. Otherwise, use self.xmpp.boundjid based on bare. Arguments: jid -- The pubsub service JID. node -- The node to subscribe to. bare -- Indicates if the subscribee is a bare or full JID. Defaults to True for a bare JID. subscribee -- The JID that is subscribing to the node. options -- ifrom -- Specify the sender's JID. block -- Specify if the send call will block until a response is received, or a timeout occurs. Defaults to True. timeout -- The length of time (in seconds) to wait for a response before exiting the send call if blocking is used. Defaults to sleekxmpp.xmlstream.RESPONSE_TIMEOUT callback -- Optional reference to a stream handler function. Will be executed when a reply stanza is received. 
""" iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub']['subscribe']['node'] = node if subscribee is None: if ifrom: if bare: subscribee = JID(ifrom).bare else: subscribee = ifrom else: if bare: subscribee = self.xmpp.boundjid.bare else: subscribee = self.xmpp.boundjid iq['pubsub']['subscribe']['jid'] = subscribee if options is not None: iq['pubsub']['options'].append(options) return iq.send(block=block, callback=callback, timeout=timeout) def unsubscribe(self, jid, node, subid=None, bare=True, subscribee=None, ifrom=None, block=True, callback=None, timeout=None): """ Unsubscribe from updates from a pubsub node. The rules for determining the JID that is unsubscribing from the node are: 1. If subscribee is given, use that as provided. 2. If ifrom was given, use the bare or full version based on bare. 3. Otherwise, use self.xmpp.boundjid based on bare. Arguments: jid -- The pubsub service JID. node -- The node to unsubscribe from. subid -- The specific subscription, if multiple subscriptions exist for this JID/node combination. bare -- Indicates if the subscribee is a bare or full JID. Defaults to True for a bare JID. subscribee -- The JID that is unsubscribing from the node. ifrom -- Specify the sender's JID. block -- Specify if the send call will block until a response is received, or a timeout occurs. Defaults to True. timeout -- The length of time (in seconds) to wait for a response before exiting the send call if blocking is used. Defaults to sleekxmpp.xmlstream.RESPONSE_TIMEOUT callback -- Optional reference to a stream handler function. Will be executed when a reply stanza is received. 
""" iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub']['unsubscribe']['node'] = node if subscribee is None: if ifrom: if bare: subscribee = JID(ifrom).bare else: subscribee = ifrom else: if bare: subscribee = self.xmpp.boundjid.bare else: subscribee = self.xmpp.boundjid iq['pubsub']['unsubscribe']['jid'] = subscribee iq['pubsub']['unsubscribe']['subid'] = subid return iq.send(block=block, callback=callback, timeout=timeout) def get_subscriptions(self, jid, node=None, ifrom=None, block=True, callback=None, timeout=None): iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='get') iq['pubsub']['subscriptions']['node'] = node return iq.send(block=block, callback=callback, timeout=timeout) def get_affiliations(self, jid, node=None, ifrom=None, block=True, callback=None, timeout=None): iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='get') iq['pubsub']['affiliations']['node'] = node return iq.send(block=block, callback=callback, timeout=timeout) def get_subscription_options(self, jid, node=None, user_jid=None, ifrom=None, block=True, callback=None, timeout=None): iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='get') if user_jid is None: iq['pubsub']['default']['node'] = node else: iq['pubsub']['options']['node'] = node iq['pubsub']['options']['jid'] = user_jid return iq.send(block=block, callback=callback, timeout=timeout) def set_subscription_options(self, jid, node, user_jid, options, ifrom=None, block=True, callback=None, timeout=None): iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub']['options']['node'] = node iq['pubsub']['options']['jid'] = user_jid iq['pubsub']['options'].append(options) return iq.send(block=block, callback=callback, timeout=timeout) def get_node_config(self, jid, node=None, ifrom=None, block=True, callback=None, timeout=None): """ Retrieve the configuration for a node, or the pubsub service's default configuration for new nodes. Arguments: jid -- The JID of the pubsub service. 
node -- The node to retrieve the configuration for. If None, the default configuration for new nodes will be requested. Defaults to None. ifrom -- Specify the sender's JID. block -- Specify if the send call will block until a response is received, or a timeout occurs. Defaults to True. timeout -- The length of time (in seconds) to wait for a response before exiting the send call if blocking is used. Defaults to sleekxmpp.xmlstream.RESPONSE_TIMEOUT callback -- Optional reference to a stream handler function. Will be executed when a reply stanza is received. """ iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='get') if node is None: iq['pubsub_owner']['default'] else: iq['pubsub_owner']['configure']['node'] = node return iq.send(block=block, callback=callback, timeout=timeout) def get_node_subscriptions(self, jid, node, ifrom=None, block=True, callback=None, timeout=None): """ Retrieve the subscriptions associated with a given node. Arguments: jid -- The JID of the pubsub service. node -- The node to retrieve subscriptions from. ifrom -- Specify the sender's JID. block -- Specify if the send call will block until a response is received, or a timeout occurs. Defaults to True. timeout -- The length of time (in seconds) to wait for a response before exiting the send call if blocking is used. Defaults to sleekxmpp.xmlstream.RESPONSE_TIMEOUT callback -- Optional reference to a stream handler function. Will be executed when a reply stanza is received. """ iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='get') iq['pubsub_owner']['subscriptions']['node'] = node return iq.send(block=block, callback=callback, timeout=timeout) def get_node_affiliations(self, jid, node, ifrom=None, block=True, callback=None, timeout=None): """ Retrieve the affiliations associated with a given node. Arguments: jid -- The JID of the pubsub service. node -- The node to retrieve affiliations from. ifrom -- Specify the sender's JID. 
block -- Specify if the send call will block until a response is received, or a timeout occurs. Defaults to True. timeout -- The length of time (in seconds) to wait for a response before exiting the send call if blocking is used. Defaults to sleekxmpp.xmlstream.RESPONSE_TIMEOUT callback -- Optional reference to a stream handler function. Will be executed when a reply stanza is received. """ iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='get') iq['pubsub_owner']['affiliations']['node'] = node return iq.send(block=block, callback=callback, timeout=timeout) def delete_node(self, jid, node, ifrom=None, block=True, callback=None, timeout=None): """ Delete a pubsub node. Arguments: jid -- The JID of the pubsub service. node -- The node to delete. ifrom -- Specify the sender's JID. block -- Specify if the send call will block until a response is received, or a timeout occurs. Defaults to True. timeout -- The length of time (in seconds) to wait for a response before exiting the send call if blocking is used. Defaults to sleekxmpp.xmlstream.RESPONSE_TIMEOUT callback -- Optional reference to a stream handler function. Will be executed when a reply stanza is received. """ iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub_owner']['delete']['node'] = node return iq.send(block=block, callback=callback, timeout=timeout) def set_node_config(self, jid, node, config, ifrom=None, block=True, callback=None, timeout=None): iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub_owner']['configure']['node'] = node iq['pubsub_owner']['configure']['form'].values = config.values return iq.send(block=block, callback=callback, timeout=timeout) def publish(self, jid, node, id=None, payload=None, options=None, ifrom=None, block=True, callback=None, timeout=None): """ Add a new item to a node, or edit an existing item. For services that support it, you can use the publish command as an event signal by not including an ID or payload. 
When including a payload and you do not provide an ID then the service will generally create an ID for you. Publish options may be specified, and how those options are processed is left to the service, such as treating the options as preconditions that the node's settings must match. Arguments: jid -- The JID of the pubsub service. node -- The node to publish the item to. id -- Optionally specify the ID of the item. payload -- The item content to publish. options -- A form of publish options. ifrom -- Specify the sender's JID. block -- Specify if the send call will block until a response is received, or a timeout occurs. Defaults to True. timeout -- The length of time (in seconds) to wait for a response before exiting the send call if blocking is used. Defaults to sleekxmpp.xmlstream.RESPONSE_TIMEOUT callback -- Optional reference to a stream handler function. Will be executed when a reply stanza is received. """ iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub']['publish']['node'] = node if id is not None: iq['pubsub']['publish']['item']['id'] = id if payload is not None: iq['pubsub']['publish']['item']['payload'] = payload iq['pubsub']['publish_options'] = options return iq.send(block=block, callback=callback, timeout=timeout) def retract(self, jid, node, id, notify=None, ifrom=None, block=True, callback=None, timeout=None): """ Delete a single item from a node. """ iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub']['retract']['node'] = node iq['pubsub']['retract']['notify'] = notify iq['pubsub']['retract']['item']['id'] = id return iq.send(block=block, callback=callback, timeout=timeout) def purge(self, jid, node, ifrom=None, block=True, callback=None, timeout=None): """ Remove all items from a node. 
""" iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub_owner']['purge']['node'] = node return iq.send(block=block, callback=callback, timeout=timeout) def get_nodes(self, *args, **kwargs): """ Discover the nodes provided by a Pubsub service, using disco. """ return self.xmpp['xep_0030'].get_items(*args, **kwargs) def get_item(self, jid, node, item_id, ifrom=None, block=True, callback=None, timeout=None): """ Retrieve the content of an individual item. """ iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='get') item = stanza.Item() item['id'] = item_id iq['pubsub']['items']['node'] = node iq['pubsub']['items'].append(item) return iq.send(block=block, callback=callback, timeout=timeout) def get_items(self, jid, node, item_ids=None, max_items=None, iterator=False, ifrom=None, block=False, callback=None, timeout=None): """ Request the contents of a node's items. The desired items can be specified, or a query for the last few published items can be used. Pubsub services may use result set management for nodes with many items, so an iterator can be returned if needed. """ iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='get') iq['pubsub']['items']['node'] = node iq['pubsub']['items']['max_items'] = max_items if item_ids is not None: for item_id in item_ids: item = stanza.Item() item['id'] = item_id iq['pubsub']['items'].append(item) if iterator: return self.xmpp['xep_0059'].iterate(iq, 'pubsub') else: return iq.send(block=block, callback=callback, timeout=timeout) def get_item_ids(self, jid, node, ifrom=None, block=True, callback=None, timeout=None, iterator=False): """ Retrieve the ItemIDs hosted by a given node, using disco. 
""" return self.xmpp['xep_0030'].get_items(jid, node, ifrom=ifrom, block=block, callback=callback, timeout=timeout, iterator=iterator) def modify_affiliations(self, jid, node, affiliations=None, ifrom=None, block=True, callback=None, timeout=None): iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub_owner']['affiliations']['node'] = node if affiliations is None: affiliations = [] for jid, affiliation in affiliations: aff = stanza.OwnerAffiliation() aff['jid'] = jid aff['affiliation'] = affiliation iq['pubsub_owner']['affiliations'].append(aff) return iq.send(block=block, callback=callback, timeout=timeout) def modify_subscriptions(self, jid, node, subscriptions=None, ifrom=None, block=True, callback=None, timeout=None): iq = self.xmpp.Iq(sto=jid, sfrom=ifrom, stype='set') iq['pubsub_owner']['subscriptions']['node'] = node if subscriptions is None: subscriptions = [] for jid, subscription in subscriptions: sub = stanza.OwnerSubscription() sub['jid'] = jid sub['subscription'] = subscription iq['pubsub_owner']['subscriptions'].append(sub) return iq.send(block=block, callback=callback, timeout=timeout)
tiancj/emesene
emesene/e3/xmpp/SleekXMPP/sleekxmpp/plugins/xep_0060/pubsub.py
Python
gpl-3.0
25,426
package net.tofweb.starlite; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; import java.util.LinkedList; import org.junit.Before; import org.junit.Test; public class CellSpaceTest { CellSpace space; // Test values double costA = 1.0; double gA = 12.12435565298214; double rhsA = 12.12435565298214; double gB = 17.320508075688775; double gC = 8.774964387392123; Double costPlusHeuristicA = 26.302685732130897; double costB = 3.7416573867739413; double gD = 3.7416573867739413; double rhsD = 4.0; double gE = 3.7416573867739413; @Before public void setup() { space = new CellSpace(); space.setGoalCell(10, 10, 10); space.setStartCell(-5, -5, -5); } @Test public void testGetInfo() { // Happy path Cell cell = space.makeNewCell(3, 3, 3); CellInfo returnedInfo = space.getInfo(cell); assertNotNull(returnedInfo); assertTrue(costA == returnedInfo.getCost()); assertTrue(gA == returnedInfo.getG()); assertTrue(rhsA == returnedInfo.getRhs()); // Null condition assertNull(space.getInfo(null)); /* * Illegal argument case - CellSpace managed cells should be made with * makeCell */ Cell illegalCell = new Cell(); assertNull(space.getInfo(illegalCell)); illegalCell.setX(100); illegalCell.setY(100); illegalCell.setZ(100); assertNull(space.getInfo(illegalCell)); } @Test public void testUpdateCellCost() { Cell cell = space.makeNewCell(3, 3, 3); CellInfo returnedInfo = space.getInfo(cell); // Existing state, cost = 1 assertNotNull(returnedInfo); assertTrue(1 == returnedInfo.getCost()); // Happy path, set cost = 2 space.updateCellCost(cell, 2); returnedInfo = space.getInfo(cell); assertNotNull(returnedInfo); assertTrue(2 == returnedInfo.getCost()); // Null condition A space.updateCellCost(null, 3); returnedInfo = space.getInfo(cell); assertNotNull(returnedInfo); assertTrue(2 == returnedInfo.getCost()); } @Test public void 
testGetG() { Cell cell = space.makeNewCell(3, 3, 3); // Existing state assertTrue(gA == space.getG(cell)); // Null conditions assertTrue(0.0 == space.getG(null)); assertTrue(0.0 == space.getG(new Cell())); Cell illegalCell = new Cell(); illegalCell.setX(100); illegalCell.setY(100); illegalCell.setZ(100); assertNull(space.getInfo(illegalCell)); } @Test public void testMakeNewCellIntIntInt() { Cell cell = space.makeNewCell(5, 4, 6); assertNotNull(cell); CellInfo info = space.getInfo(cell); assertNotNull(info); assertTrue(gC == info.getG()); assertTrue(gC == info.getRhs()); assertTrue(costA == info.getCost()); } @Test public void testMakeNewCellIntIntIntCosts() { Costs k = new Costs(3.14, 21.0); Cell cell = space.makeNewCell(7, 8, 9, k); assertNotNull(cell); assertEquals(costPlusHeuristicA, cell.getKey().getCostPlusHeuristic()); assertTrue(costB == cell.getKey().getCost()); CellInfo info = space.getInfo(cell); assertNotNull(info); assertTrue(gD == info.getG()); assertTrue(rhsD == info.getRhs()); assertTrue(costA == info.getCost()); } @Test public void testSetStartCell() { space.setStartCell(7, 8, 9); Cell startCell = space.getStartCell(); assertNotNull(startCell); assertEquals(7, startCell.getX()); assertEquals(8, startCell.getY()); assertEquals(9, startCell.getZ()); CellInfo info = space.getInfo(startCell); assertNotNull(info); assertTrue(gE == info.getG()); assertTrue(gE == info.getRhs()); assertTrue(costA == info.getCost()); } @Test public void testSetGoalCell() { space.setGoalCell(10, 11, 12); Cell goalCell = space.getGoalCell(); assertNotNull(goalCell); assertEquals(10, goalCell.getX()); assertEquals(11, goalCell.getY()); assertEquals(12, goalCell.getZ()); CellInfo info = space.getInfo(goalCell); assertNotNull(info); assertTrue(0.0 == info.getG()); assertTrue(0.0 == info.getRhs()); assertTrue(costA == info.getCost()); } @Test public void testIsClose() { assertTrue(space.isClose(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY)); 
assertFalse(space.isClose(Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY)); assertFalse(space.isClose(1.0, 2.0)); assertTrue(space.isClose(1.0, 1.000009)); assertFalse(space.isClose(1.0, 1.00001)); } @Test public void testGetSuccessors() { Cell cell = space.makeNewCell(20, 20, 20); LinkedList<Cell> neighbors = space.getSuccessors(cell); assertNotNull(neighbors); assertTrue(6 == neighbors.size()); assertEquals(20, neighbors.getFirst().getX()); assertEquals(20, neighbors.getFirst().getY()); assertEquals(19, neighbors.getFirst().getZ()); assertEquals(21, neighbors.getLast().getX()); assertEquals(20, neighbors.getLast().getY()); assertEquals(20, neighbors.getLast().getZ()); } @Test public void testGetPredecessors() { Cell cell = space.makeNewCell(20, 20, 20); LinkedList<Cell> neighbors = space.getPredecessors(cell); assertNotNull(neighbors); assertTrue(6 == neighbors.size()); assertEquals(20, neighbors.getFirst().getX()); assertEquals(20, neighbors.getFirst().getY()); assertEquals(19, neighbors.getFirst().getZ()); assertEquals(21, neighbors.getLast().getX()); assertEquals(20, neighbors.getLast().getY()); assertEquals(20, neighbors.getLast().getZ()); } }
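The tolerance behaviour exercised by `testIsClose` (equal infinities compare close; finite values compare close under an absolute epsilon of about 1e-5) can be sketched as below. The epsilon is inferred from the test's assertions, not taken from the `CellSpace` source:

```python
import math

# Sketch of the closeness check implied by testIsClose above; the
# 1e-5 threshold is inferred from the assertions, not from CellSpace.
EPSILON = 0.00001

def is_close(a, b):
    # Equal infinities are considered close; opposite signs are not.
    if math.isinf(a) or math.isinf(b):
        return a == b
    return abs(a - b) < EPSILON
```

Note the strict `<`: a difference of exactly 0.00001 is not close, which is why the test accepts 1.000009 but rejects 1.00001.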
LynnOwens/starlite
src/test/java/net/tofweb/starlite/CellSpaceTest.java
Java
mit
5,440
package org.wso2.developerstudio.eclipse.gmf.esb.diagram.edit.parts; import java.util.ArrayList; import java.util.LinkedList; import java.util.List; import org.eclipse.draw2d.IFigure; import org.eclipse.draw2d.Shape; import org.eclipse.draw2d.StackLayout; import org.eclipse.draw2d.geometry.Dimension; import org.eclipse.gef.EditPart; import org.eclipse.gef.EditPolicy; import org.eclipse.gef.Request; import org.eclipse.gef.commands.Command; import org.eclipse.gef.editpolicies.LayoutEditPolicy; import org.eclipse.gef.editpolicies.NonResizableEditPolicy; import org.eclipse.gef.requests.CreateRequest; import org.eclipse.gmf.runtime.diagram.ui.editparts.AbstractBorderItemEditPart; import org.eclipse.gmf.runtime.diagram.ui.editpolicies.EditPolicyRoles; import org.eclipse.gmf.runtime.emf.type.core.IElementType; import org.eclipse.gmf.runtime.gef.ui.figures.DefaultSizeNodeFigure; import org.eclipse.gmf.runtime.gef.ui.figures.NodeFigure; import org.eclipse.gmf.runtime.notation.View; import org.eclipse.swt.graphics.Color; import org.wso2.developerstudio.eclipse.gmf.esb.diagram.custom.AbstractEndpointInputConnectorEditPart; import org.wso2.developerstudio.eclipse.gmf.esb.diagram.custom.EastPointerShape; import org.wso2.developerstudio.eclipse.gmf.esb.diagram.edit.policies.FailoverEndPointInputConnector2ItemSemanticEditPolicy; import org.wso2.developerstudio.eclipse.gmf.esb.diagram.providers.EsbElementTypes; /** * @generated NOT */ public class FailoverEndPointInputConnector2EditPart extends AbstractEndpointInputConnectorEditPart { /** * @generated */ public static final int VISUAL_ID = 3650; /** * @generated */ protected IFigure contentPane; /** * @generated */ protected IFigure primaryShape; /** * @generated */ public FailoverEndPointInputConnector2EditPart(View view) { super(view); } /** * @generated */ protected void createDefaultEditPolicies() { super.createDefaultEditPolicies(); installEditPolicy(EditPolicy.PRIMARY_DRAG_ROLE, getPrimaryDragEditPolicy()); 
installEditPolicy(EditPolicyRoles.SEMANTIC_ROLE, new FailoverEndPointInputConnector2ItemSemanticEditPolicy()); installEditPolicy(EditPolicy.LAYOUT_ROLE, createLayoutEditPolicy()); // XXX need an SCR to runtime to have another abstract superclass that would let children add reasonable editpolicies // removeEditPolicy(org.eclipse.gmf.runtime.diagram.ui.editpolicies.EditPolicyRoles.CONNECTION_HANDLES_ROLE); } /** * @generated */ protected LayoutEditPolicy createLayoutEditPolicy() { org.eclipse.gmf.runtime.diagram.ui.editpolicies.LayoutEditPolicy lep = new org.eclipse.gmf.runtime.diagram.ui.editpolicies.LayoutEditPolicy() { protected EditPolicy createChildEditPolicy(EditPart child) { EditPolicy result = child .getEditPolicy(EditPolicy.PRIMARY_DRAG_ROLE); if (result == null) { result = new NonResizableEditPolicy(); } return result; } protected Command getMoveChildrenCommand(Request request) { return null; } protected Command getCreateCommand(CreateRequest request) { return null; } }; return lep; } /** * @generated */ protected IFigure createNodeShape() { return primaryShape = new EastPointerFigure(); } /** * @generated */ public EastPointerFigure getPrimaryShape() { return (EastPointerFigure) primaryShape; } /** * @generated */ protected NodeFigure createNodePlate() { DefaultSizeNodeFigure result = new DefaultSizeNodeFigure(12, 10); //FIXME: workaround for #154536 result.getBounds().setSize(result.getPreferredSize()); return result; } /** * Creates figure for this edit part. * * Body of this method does not depend on settings in generation model * so you may safely remove <i>generated</i> tag and modify it. 
* * @generated NOT */ protected NodeFigure createNodeFigure() { NodeFigure figure = createNodePlate(); figure.setLayoutManager(new StackLayout()); IFigure shape = createNodeShapeForward(); figure.add(shape); contentPane = setupContentPane(shape); figure_ = figure; createNodeShapeReverse(); return figure; } /** * Default implementation treats passed figure as content pane. * Respects layout one may have set for generated figure. * @param nodeShape instance of generated figure class * @generated */ protected IFigure setupContentPane(IFigure nodeShape) { return nodeShape; // use nodeShape itself as contentPane } /** * @generated */ public IFigure getContentPane() { if (contentPane != null) { return contentPane; } return super.getContentPane(); } /** * @generated */ protected void setForegroundColor(Color color) { if (primaryShape != null) { primaryShape.setForegroundColor(color); } } /** * @generated */ protected void setBackgroundColor(Color color) { if (primaryShape != null) { primaryShape.setBackgroundColor(color); } } /** * @generated */ protected void setLineWidth(int width) { if (primaryShape instanceof Shape) { ((Shape) primaryShape).setLineWidth(width); } } /** * @generated */ protected void setLineType(int style) { if (primaryShape instanceof Shape) { ((Shape) primaryShape).setLineStyle(style); } } /** * @generated */ public List<IElementType> getMARelTypesOnTarget() { ArrayList<IElementType> types = new ArrayList<IElementType>(1); types.add(EsbElementTypes.EsbLink_4001); return types; } /** * @generated */ public List<IElementType> getMATypesForSource(IElementType relationshipType) { LinkedList<IElementType> types = new LinkedList<IElementType>(); if (relationshipType == EsbElementTypes.EsbLink_4001) { types.add(EsbElementTypes.ProxyOutputConnector_3002); types.add(EsbElementTypes.PropertyMediatorOutputConnector_3034); types.add(EsbElementTypes.ThrottleMediatorOutputConnector_3122); types.add(EsbElementTypes.ThrottleMediatorOnAcceptOutputConnector_3581); 
types.add(EsbElementTypes.ThrottleMediatorOnRejectOutputConnector_3582); types.add(EsbElementTypes.FilterMediatorOutputConnector_3534); types.add(EsbElementTypes.FilterMediatorPassOutputConnector_3011); types.add(EsbElementTypes.FilterMediatorFailOutputConnector_3012); types.add(EsbElementTypes.LogMediatorOutputConnector_3019); types.add(EsbElementTypes.EnrichMediatorOutputConnector_3037); types.add(EsbElementTypes.XSLTMediatorOutputConnector_3040); types.add(EsbElementTypes.SwitchCaseBranchOutputConnector_3043); types.add(EsbElementTypes.SwitchDefaultBranchOutputConnector_3044); types.add(EsbElementTypes.SwitchMediatorOutputConnector_3499); types.add(EsbElementTypes.SequenceOutputConnector_3050); types.add(EsbElementTypes.EventMediatorOutputConnector_3053); types.add(EsbElementTypes.EntitlementMediatorOutputConnector_3056); types.add(EsbElementTypes.ClassMediatorOutputConnector_3059); types.add(EsbElementTypes.SpringMediatorOutputConnector_3062); types.add(EsbElementTypes.ScriptMediatorOutputConnector_3065); types.add(EsbElementTypes.FaultMediatorOutputConnector_3068); types.add(EsbElementTypes.XQueryMediatorOutputConnector_3071); types.add(EsbElementTypes.CommandMediatorOutputConnector_3074); types.add(EsbElementTypes.DBLookupMediatorOutputConnector_3077); types.add(EsbElementTypes.DBReportMediatorOutputConnector_3080); types.add(EsbElementTypes.SmooksMediatorOutputConnector_3083); types.add(EsbElementTypes.SendMediatorOutputConnector_3086); types.add(EsbElementTypes.SendMediatorEndpointOutputConnector_3539); types.add(EsbElementTypes.HeaderMediatorOutputConnector_3101); types.add(EsbElementTypes.CloneMediatorOutputConnector_3104); types.add(EsbElementTypes.CloneMediatorTargetOutputConnector_3133); types.add(EsbElementTypes.CacheMediatorOutputConnector_3107); types.add(EsbElementTypes.CacheMediatorOnHitOutputConnector_3618); types.add(EsbElementTypes.IterateMediatorOutputConnector_3110); types.add(EsbElementTypes.IterateMediatorTargetOutputConnector_3606); 
types.add(EsbElementTypes.CalloutMediatorOutputConnector_3116); types.add(EsbElementTypes.TransactionMediatorOutputConnector_3119); types.add(EsbElementTypes.RMSequenceMediatorOutputConnector_3125); types.add(EsbElementTypes.RuleMediatorOutputConnector_3128); types.add(EsbElementTypes.RuleMediatorChildMediatorsOutputConnector_3640); types.add(EsbElementTypes.OAuthMediatorOutputConnector_3131); types.add(EsbElementTypes.AggregateMediatorOutputConnector_3113); types.add(EsbElementTypes.AggregateMediatorOnCompleteOutputConnector_3132); types.add(EsbElementTypes.StoreMediatorOutputConnector_3590); types.add(EsbElementTypes.BuilderMediatorOutputConector_3593); types.add(EsbElementTypes.CallTemplateMediatorOutputConnector_3596); types.add(EsbElementTypes.PayloadFactoryMediatorOutputConnector_3599); types.add(EsbElementTypes.EnqueueMediatorOutputConnector_3602); types.add(EsbElementTypes.URLRewriteMediatorOutputConnector_3622); types.add(EsbElementTypes.ValidateMediatorOutputConnector_3625); types.add(EsbElementTypes.ValidateMediatorOnFailOutputConnector_3626); types.add(EsbElementTypes.RouterMediatorOutputConnector_3630); types.add(EsbElementTypes.RouterMediatorTargetOutputConnector_3631); types.add(EsbElementTypes.ConditionalRouterMediatorOutputConnector_3637); types.add(EsbElementTypes.ConditionalRouterMediatorAdditionalOutputConnector_3638); types.add(EsbElementTypes.DefaultEndPointOutputConnector_3022); types.add(EsbElementTypes.AddressEndPointOutputConnector_3031); types.add(EsbElementTypes.FailoverEndPointOutputConnector_3090); types.add(EsbElementTypes.FailoverEndPointWestOutputConnector_3097); types.add(EsbElementTypes.WSDLEndPointOutputConnector_3093); types.add(EsbElementTypes.NamedEndpointOutputConnector_3662); types.add(EsbElementTypes.LoadBalanceEndPointOutputConnector_3096); types.add(EsbElementTypes.LoadBalanceEndPointWestOutputConnector_3098); types.add(EsbElementTypes.APIResourceEndpointOutputConnector_3676); 
types.add(EsbElementTypes.MessageOutputConnector_3047); types.add(EsbElementTypes.MergeNodeOutputConnector_3016); types.add(EsbElementTypes.SequencesOutputConnector_3617); types.add(EsbElementTypes.DefaultEndPointOutputConnector_3645); types.add(EsbElementTypes.AddressEndPointOutputConnector_3648); types.add(EsbElementTypes.FailoverEndPointOutputConnector_3651); types.add(EsbElementTypes.FailoverEndPointWestOutputConnector_3652); types.add(EsbElementTypes.WSDLEndPointOutputConnector_3655); types.add(EsbElementTypes.LoadBalanceEndPointOutputConnector_3658); types.add(EsbElementTypes.LoadBalanceEndPointWestOutputConnector_3659); types.add(EsbElementTypes.APIResourceOutputConnector_3671); types.add(EsbElementTypes.ComplexEndpointsOutputConnector_3679); } return types; } /** * @generated */ public class EastPointerFigure extends EastPointerShape { /** * @generated */ public EastPointerFigure() { this.setBackgroundColor(THIS_BACK); this.setPreferredSize(new Dimension(getMapMode().DPtoLP(12), getMapMode().DPtoLP(10))); } } /** * @generated */ static final Color THIS_BACK = new Color(null, 50, 50, 50); }
rajeevanv89/developer-studio
esb/org.wso2.developerstudio.eclipse.gmf.esb.diagram/src/org/wso2/developerstudio/eclipse/gmf/esb/diagram/edit/parts/FailoverEndPointInputConnector2EditPart.java
Java
apache-2.0
11,273
using System; using System.Collections.Generic; using System.ComponentModel.DataAnnotations; using System.Runtime.Serialization; using Eurofurence.App.Domain.Model.Fragments; namespace Eurofurence.App.Domain.Model.Knowledge { [DataContract] public class KnowledgeEntryRecord : EntityBase { [Required] [DataMember] public Guid KnowledgeGroupId { get; set; } [Required] [DataMember] public string Title { get; set; } [Required] [DataMember] public string Text { get; set; } [Required] [DataMember] public int Order { get; set; } [DataMember] public LinkFragment[] Links { get; set; } [DataMember] public Guid[] ImageIds { get; set; } } }
Pinselohrkater/ef_app-backend-dotnet_core
src/Eurofurence.App.Domain.Model/Knowledge/KnowledgeEntryRecord.cs
C#
mit
788
/************************************************************************** ** ** This file is part of . ** https://github.com/HamedMasafi/ ** ** is free software: you can redistribute it and/or modify ** it under the terms of the GNU Lesser General Public License as published by ** the Free Software Foundation, either version 3 of the License, or ** (at your option) any later version. ** ** is distributed in the hope that it will be useful, ** but WITHOUT ANY WARRANTY; without even the implied warranty of ** MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ** GNU Lesser General Public License for more details. ** ** You should have received a copy of the GNU Lesser General Public License ** along with . If not, see <http://www.gnu.org/licenses/>. ** **************************************************************************/ #include <QEventLoop> #include <QtCore/QDebug> #include <QtNetwork/QTcpSocket> #include "abstracthub_p.h" #include "serverhub.h" #include "serverhub_p.h" NEURON_BEGIN_NAMESPACE ServerHubPrivate::ServerHubPrivate() : serverThread(nullptr), connectionEventLoop(nullptr) { } ServerHub::ServerHub(QObject *parent) : AbstractHub(parent), d(new ServerHubPrivate) { } ServerHub::ServerHub(AbstractSerializer *serializer, QObject *parent) : AbstractHub(serializer, parent), d(new ServerHubPrivate) { } ServerHub::ServerHub(QTcpSocket *socket, QObject *parent) : AbstractHub(parent), d(new ServerHubPrivate) { this->socket = socket; } ServerHub::~ServerHub() { // QList<SharedObject *> soList = sharedObjects(); // foreach (SharedObject *so, soList) { // if(so) // removeSharedObject(so); // } // while(sharedObjects().count()){ // removeSharedObject(sharedObjects().at(0)); // } auto so = sharedObjectHash(); QHashIterator<const QString, SharedObject*> i(so); while (i.hasNext()) { i.next(); // cout << i.key() << ": " << i.value() << endl; detachSharedObject(i.value()); } } ServerThread *ServerHub::serverThread() const { return d->serverThread; } 
qlonglong ServerHub::hi(qlonglong hubId) { initalizeMutex.lock(); setHubId(hubId); // emit connected(); K_TRACE_DEBUG; // invokeOnPeer(THIS_HUB, "hi", hubId); if (d->connectionEventLoop) { d->connectionEventLoop->quit(); d->connectionEventLoop->deleteLater(); } initalizeMutex.unlock(); setStatus(Connected); return this->hubId(); } bool ServerHub::setSocketDescriptor(qintptr socketDescriptor, bool waitForConnect) { bool ok = socket->setSocketDescriptor(socketDescriptor); if(waitForConnect) socket->waitForReadyRead(); return ok; } void ServerHub::setServerThread(ServerThread *serverThread) { if(d->serverThread != serverThread) d->serverThread = serverThread; } void ServerHub::beginConnection() { K_TRACE_DEBUG; d->connectionEventLoop = new QEventLoop; K_REG_OBJECT(d->connectionEventLoop); d->connectionEventLoop->exec(); } NEURON_END_NAMESPACE
HamedMasafi/Noron
src/serverhub.cpp
C++
lgpl-3.0
3,066
package de.kueken.ethereum.party.publishing; // Start of user code ShortBlogTest.customImports import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; import java.util.concurrent.CompletableFuture; import org.adridadou.ethereum.propeller.keystore.AccountProvider; import org.adridadou.ethereum.propeller.solidity.SolidityContractDetails; import org.adridadou.ethereum.propeller.values.EthAddress; import org.junit.Before; import org.junit.Test; import de.kueken.ethereum.party.basics.ManageableTest; // End of user code /** * Test for the ShortBlog contract. * */ public class ShortBlogTest extends ManageableTest{ private ShortBlog fixture; // Start of user code ShortBlogTest.attributes // private String senderAddressS = "5db10750e8caff27f906b41c71b3471057dd2004"; // End of user code @Override protected String getContractName() { return "ShortBlog"; } @Override protected String getQuallifiedContractName() { return "publishing.sol:ShortBlog"; } /** * Read the contract from the file and deploys the contract code. * @throws Exception */ @Before public void prepareTest() throws Exception { //Start of user code prepareTest createFixture(); //End of user code } /** * Create a new fixture by deploying the contract source. * @throws Exception */ protected void createFixture() throws Exception { //Start of user code createFixture SolidityContractDetails compiledContract = getCompiledContract("/mix/combine.json"); //TODO: set the constructor args String _name = "_name"; CompletableFuture<EthAddress> address = ethereum.publishContract(compiledContract, sender , _name); fixtureAddress = address.get(); setFixture(ethereum.createContractProxy(compiledContract, fixtureAddress, sender, ShortBlog.class)); //End of user code } protected void setFixture(ShortBlog f) { this.fixture = f; super.setFixture(f); } /** * Test method for sendMessage(String message,String hash,String er). 
* see {@link ShortBlog#sendMessage( String, String, String)} * @throws Exception */ @Test public void testSendMessage_string_string_string() throws Exception { //Start of user code testSendMessage_string_string_string assertEquals(0, fixture.messageCount().intValue()); String message = "test1"; String hash = "h1"; String er = "er1"; fixture.sendMessage(message, hash, er).get(); assertEquals(1, fixture.messageCount().intValue()); Integer lastMessageDate = fixture.lastMessageDate(); System.out.println("-->" + lastMessageDate); ShortBlogMessage messages = fixture.messages(0); assertEquals(message, messages.getMessage()); assertEquals(hash, messages.getHashValue()); assertEquals(er, messages.getExternalResource()); // End of user code } //Start of user code customTests /** * Test method for sendMessage(String message,String hash,String er). see * {@link ShortBlog#sendMessage( String, String, String)} * * @throws Exception */ @Test public void testSendMessage_No_Manager() throws Exception { assertEquals(0, fixture.messageCount().intValue()); fixture.sendMessage("test1", "h1", "er1").get(); assertEquals(1, fixture.messageCount().intValue()); fixture.addManager(AccountProvider.fromPrivateKey((java.math.BigInteger.valueOf(100002L))).getAddress()).get(); assertEquals(2, fixture.mangerCount().intValue()); fixture.removeManager(sender.getAddress()).get(); assertEquals(1, fixture.mangerCount().intValue()); try { fixture.sendMessage("test1", "h1", "er1").get(); fail("Should have thrown an exception"); } catch (Exception e) { } assertEquals(1, fixture.messageCount().intValue()); } /** * Test method for sendMessage(String message,String hash,String er). 
* see {@link ShortBlog#sendMessage( String, String, String)} * @throws Exception */ @Test public void testSendMessage_Often() throws Exception { assertEquals(0, fixture.messageCount().intValue()); String message = "test1"; String hash = "h1"; String er = "er1"; fixture.sendMessage(message, hash, er).get(); assertEquals(1, fixture.messageCount().intValue()); ShortBlogMessage messages = fixture.messages(0); assertEquals(message, messages.getMessage()); assertEquals(hash, messages.getHashValue()); assertEquals(er, messages.getExternalResource()); Integer lastMessageDate = fixture.lastMessageDate(); System.out.println("-->" + lastMessageDate); message = "message1"; hash = "hash1"; er = "external resource 1"; fixture.sendMessage(message, hash, er).get(); assertEquals(2, fixture.messageCount().intValue()); messages = fixture.messages(1); assertEquals(message, messages.getMessage()); assertEquals(hash, messages.getHashValue()); assertEquals(er, messages.getExternalResource()); assertTrue(lastMessageDate<fixture.lastMessageDate()); } // End of user code }
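The manager gating that `testSendMessage_No_Manager` exercises (a send from an address that is no longer a manager must fail, leaving the message count unchanged) can be sketched as a plain in-memory model. The class and method names below are illustrative only, not the ShortBlog contract's ABI:

```python
# In-memory sketch of the manager-gated blog behaviour tested by
# ShortBlogTest. Illustrative names only; not the contract's ABI.

class ShortBlogModel:
    def __init__(self, owner):
        self.managers = {owner}     # deployer starts as the sole manager
        self.messages = []

    def add_manager(self, who):
        self.managers.add(who)

    def remove_manager(self, who):
        self.managers.discard(who)

    def send_message(self, sender, message, hash_value, external_resource):
        # Mirrors the contract rejecting sends from non-managers.
        if sender not in self.managers:
            raise PermissionError('sender is not a manager')
        self.messages.append((message, hash_value, external_resource))
```

As in the test, removing the original sender from the manager set makes a further `send_message` from that sender raise, and the message list stays at its previous length.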
KuekenPartei/party-contracts
src/test/java/de/kueken/ethereum/party/publishing/ShortBlogTest.java
Java
gpl-3.0
4,929
/******************************************************************************* * Copyright (c) 2009-2014, MAV'RIC Development Team * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright notice, * this list of conditions and the following disclaimer in the documentation * and/or other materials provided with the distribution. * * 3. Neither the name of the copyright holder nor the names of its contributors * may be used to endorse or promote products derived from this software without * specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. 
******************************************************************************/ /******************************************************************************* * \file i2c_driver.c * * \author MAV'RIC Team * \author Felix Schill * * \brief The i2c driver * ******************************************************************************/ #include "i2c_driver.h" #include "gpio.h" #include "pdca.h" #include "sysclk.h" #include "print_util.h" static volatile i2c_schedule_event_t schedule[I2C_DEVICES][I2C_SCHEDULE_SLOTS]; static volatile int8_t current_schedule_slot[I2C_DEVICES]; /*! The PDCA interrupt handler. */ __attribute__((__interrupt__)) static void pdca_int_handler_i2c0(void) { AVR32_TWIM0.cr = AVR32_TWIM_CR_MDIS_MASK; pdca_disable(TWI0_DMA_CH); pdca_disable_interrupt_transfer_complete(TWI0_DMA_CH); // call the callback function at the end of the transfer to process the data, and maybe add some more data schedule[0][current_schedule_slot[0]].transfer_in_progress = 0; if (schedule[0][current_schedule_slot[0]].callback) { schedule[0][current_schedule_slot[0]].callback(); } print_util_dbg_print( "!"); } int32_t i2c_driver_init(uint8_t i2c_device) { int32_t i; volatile avr32_twim_t *twim; switch (i2c_device) { case 0: twim = &AVR32_TWIM0; // Register PDCA IRQ interrupt. INTC_register_interrupt( (__int_handler) &pdca_int_handler_i2c0, TWI0_DMA_IRQ, AVR32_INTC_INT0); gpio_enable_module_pin(AVR32_TWIMS0_TWCK_0_0_PIN, AVR32_TWIMS0_TWCK_0_0_FUNCTION); gpio_enable_module_pin(AVR32_TWIMS0_TWD_0_0_PIN, AVR32_TWIMS0_TWD_0_0_FUNCTION); break; case 1: twim = &AVR32_TWIM1; // Register PDCA IRQ interrupt. 
INTC_register_interrupt( (__int_handler) &pdca_int_handler_i2c0, TWI1_DMA_IRQ, AVR32_INTC_INT0); gpio_enable_module_pin(AVR32_TWIMS1_TWCK_0_0_PIN, AVR32_TWIMS1_TWCK_0_0_FUNCTION); gpio_enable_module_pin(AVR32_TWIMS1_TWD_0_0_PIN, AVR32_TWIMS1_TWD_0_0_FUNCTION); gpio_enable_pin_pull_up(AVR32_TWIMS1_TWCK_0_0_PIN); gpio_enable_pin_pull_up(AVR32_TWIMS1_TWD_0_0_PIN); break; default: // invalid device ID return -1; } for (i = 0; i < I2C_SCHEDULE_SLOTS; i++) { schedule[i2c_device][i].active = -1; } bool global_interrupt_enabled = cpu_irq_is_enabled (); // Disable TWI interrupts if (global_interrupt_enabled) { cpu_irq_disable (); } twim->idr = ~0UL; // Enable master transfer twim->cr = AVR32_TWIM_CR_MEN_MASK; // Reset TWI twim->cr = AVR32_TWIM_CR_SWRST_MASK; if (global_interrupt_enabled) { cpu_irq_enable (); } // Clear SR twim->scr = ~0UL; // Register twim_master_interrupt_handler interrupt on level CONF_TWIM_IRQ_LEVEL // irqflags_t flags = cpu_irq_save(); // irq_register_handler(twim_master_interrupt_handler, // CONF_TWIM_IRQ_LINE, CONF_TWIM_IRQ_LEVEL); // cpu_irq_restore(flags); // Select the speed if (twim_set_speed(twim, 100000, sysclk_get_pba_hz()) == ERR_INVALID_ARG) { return ERR_INVALID_ARG; } return STATUS_OK; } int8_t i2c_driver_reset(uint8_t i2c_device) { volatile avr32_twim_t *twim; switch (i2c_device) { case 0: twim = &AVR32_TWIM0; break; case 1: twim = &AVR32_TWIM1; break; default: // invalid device ID return -1; } bool global_interrupt_enabled = cpu_irq_is_enabled (); // Disable TWI interrupts if (global_interrupt_enabled) { cpu_irq_disable (); } twim->idr = ~0UL; // Enable master transfer twim->cr = AVR32_TWIM_CR_MEN_MASK; // Reset TWI twim->cr = AVR32_TWIM_CR_SWRST_MASK; if (global_interrupt_enabled) { cpu_irq_enable (); } // Clear SR twim->scr = ~0UL; return STATUS_OK; } int8_t i2c_driver_add_request(uint8_t i2c_device, i2c_schedule_event_t* new_event) { // find free schedule slot int32_t i = 0; for (i = 0; i < I2C_SCHEDULE_SLOTS; i++) { 
if (schedule[i2c_device][i].active < 0) { break; } } // add request to schedule if (i < I2C_SCHEDULE_SLOTS) { new_event->schedule_slot = i; new_event->transfer_in_progress = 0; new_event->active = 1; schedule[i2c_device][i] = *new_event; } else { i = -1; } // return assigned schedule slot return i; } int8_t i2c_driver_change_request(uint8_t i2c_device, i2c_schedule_event_t* new_event) { int32_t i = new_event->schedule_slot; if ((i >= 0) && (i < I2C_SCHEDULE_SLOTS)) { new_event->transfer_in_progress = 0; new_event->active = 1; schedule[i2c_device][i] = *new_event; } return i; } int8_t i2c_driver_trigger_request(uint8_t i2c_device, uint8_t schedule_slot) { // initiate transfer of given request // set up DMA channel volatile avr32_twim_t *twim; i2c_packet_conf_t* conf = &schedule[i2c_device][schedule_slot].config; static pdca_channel_options_t PDCA_OPTIONS = { .addr = 0, // memory address .pid = AVR32_TWIM0_PDCA_ID_TX, // select peripheral .size = 4, // transfer counter .r_addr = NULL, // next memory address .r_size = 0, // next transfer counter .transfer_size = PDCA_TRANSFER_SIZE_BYTE // select size of the transfer }; switch (i2c_device) { case 0: twim = &AVR32_TWIM0; twim->cr = AVR32_TWIM_CR_MEN_MASK; twim->cr = AVR32_TWIM_CR_SWRST_MASK; twim->cr = AVR32_TWIM_CR_MDIS_MASK; switch (conf->direction) { case I2C_WRITE1_THEN_READ: case I2C_READ: PDCA_OPTIONS.pid = AVR32_TWIM0_PDCA_ID_RX; PDCA_OPTIONS.addr = (void *)conf->read_data; PDCA_OPTIONS.size = conf->read_count; // Init PDCA channel with the pdca_options. pdca_init_channel(TWI0_DMA_CH, &PDCA_OPTIONS); break; case I2C_WRITE: PDCA_OPTIONS.pid = AVR32_TWIM0_PDCA_ID_TX; PDCA_OPTIONS.addr = (void *)conf->write_data; PDCA_OPTIONS.size = conf->write_count; // Init PDCA channel with the pdca_options. pdca_init_channel(TWI0_DMA_CH, &PDCA_OPTIONS); 
pdca_load_channel(TWI0_DMA_CH, (void *)conf->write_data, conf->write_count); break; } //pdca_load_channel(TWI0_DMA_CH, (void *)schedule[i2c_device][schedule_slot].config.write_data, schedule[i2c_device][schedule_slot].config.write_count); // Enable pdca interrupt each time the reload counter reaches zero, i.e. each time // the whole block was received pdca_enable_interrupt_transfer_complete(TWI0_DMA_CH); pdca_enable_interrupt_transfer_error(TWI0_DMA_CH); break; case 1: twim = &AVR32_TWIM1; break; default: // invalid device ID return -1; } // set up I2C speed and mode //twim_set_speed(twim, 100000, sysclk_get_pba_hz()); switch (conf->direction) { case I2C_READ: twim->cmdr = (conf->slave_address << AVR32_TWIM_CMDR_SADR_OFFSET) | (conf->read_count << AVR32_TWIM_CMDR_NBYTES_OFFSET) | (AVR32_TWIM_CMDR_VALID_MASK) | (AVR32_TWIM_CMDR_START_MASK) | (0 << AVR32_TWIM_CMDR_STOP_OFFSET) | (0 << AVR32_TWIM_CMDR_READ_OFFSET); break; case I2C_WRITE1_THEN_READ: print_util_dbg_print( "wr"); // set up next command register for the burst read transfer // set up command register to initiate the write transfer. The DMA will take care of the reading once this is done. 
twim->cmdr = (conf->slave_address << AVR32_TWIM_CMDR_SADR_OFFSET) | (1 << AVR32_TWIM_CMDR_NBYTES_OFFSET) | (AVR32_TWIM_CMDR_VALID_MASK) | (AVR32_TWIM_CMDR_START_MASK) | (0 << AVR32_TWIM_CMDR_STOP_OFFSET) ; twim->ncmdr = (conf->slave_address << AVR32_TWIM_CMDR_SADR_OFFSET) | ((conf->read_count) << AVR32_TWIM_CMDR_NBYTES_OFFSET) | (AVR32_TWIM_CMDR_VALID_MASK) | (AVR32_TWIM_CMDR_START_MASK) | (0 << AVR32_TWIM_CMDR_STOP_OFFSET) | (0 << AVR32_TWIM_CMDR_READ_OFFSET); // set up writing of one byte (usually a slave register index) //twim->cr = AVR32_TWIM_CR_MEN_MASK; twim->thr = conf->write_then_read_preamble; twim->cr = AVR32_TWIM_CR_MEN_MASK; break; case I2C_WRITE: print_util_dbg_print( "w"); twim->cmdr = (conf->slave_address << AVR32_TWIM_CMDR_SADR_OFFSET) | ((conf->write_count) << AVR32_TWIM_CMDR_NBYTES_OFFSET) | (AVR32_TWIM_CMDR_VALID_MASK) | (AVR32_TWIM_CMDR_START_MASK) | (0 << AVR32_TWIM_CMDR_STOP_OFFSET) ; twim->ncmdr = (conf->slave_address << AVR32_TWIM_CMDR_SADR_OFFSET) | ((conf->write_count) << AVR32_TWIM_CMDR_NBYTES_OFFSET) //| (AVR32_TWIM_CMDR_VALID_MASK) | (AVR32_TWIM_CMDR_START_MASK) | (0 << AVR32_TWIM_CMDR_STOP_OFFSET) ; break; } // start transfer current_schedule_slot[i2c_device] = schedule_slot; schedule[i2c_device][schedule_slot].transfer_in_progress = 1; twim->cr = AVR32_TWIM_CR_MEN_MASK; pdca_enable(TWI0_DMA_CH); return 0; } int8_t i2c_driver_pause_request(uint8_t i2c_device, uint8_t schedule_slot) { // pause scheduler // if this request is currently active, wait for current transfer to finish // deactivate request // resume scheduler return 0; } int8_t i2c_driver_enable_request(uint8_t i2c_device, uint8_t schedule_slot){ return 0; } int8_t i2c_driver_remove_request(uint8_t i2c_device, uint8_t schedule_slot){ return 0; }
gburri/MAVRIC_Library
hal/i2c_driver.c
C
bsd-3-clause
10,872
/* Copyright The kNet Project. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ #pragma once /** @file SerializedDataIterator.h @brief The SerializedDataIterator class. */ #include "SharedPtr.h" #include "MessageListParser.h" namespace kNet { class SerializedDataIterator : public RefCountable { public: SerializedDataIterator(const SerializedMessageDesc &desc_) :desc(desc_) { ResetTraversal(); } BasicSerializedDataType NextElementType() const; const SerializedElementDesc *NextElementDesc() const; void ProceedToNextVariable(); void ProceedNVariables(int count); /// Sets the number of instances in a varying element. When iterating over /// the message to insert data into serialized form, this information needs /// to be passed to this iterator in order to continue. void SetVaryingElemSize(u32 count); void ResetTraversal(); private: struct ElemInfo { /// The element we are accessing next. SerializedElementDesc *elem; /// The index of the elem we are accessing next. int nextElem; /// The index of the instance we are accessing next. int nextIndex; /// The total number of instances of this element we are accessing. int count; /// If this element is a dynamic-count one, this tracks whether the count has been passed in. bool dynamicCountSpecified; }; void ProceedToNextElement(); void DescendIntoStructure(); /// Stores the tree traversal progress. std::vector<ElemInfo> currentElementStack; /// The type of the message we are building. const SerializedMessageDesc &desc; }; } // ~kNet
benjcooley/Urhonimo
Urho3D-1.32/Source/ThirdParty/kNet/include/kNet/SerializedDataIterator.h
C
mit
2,141
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <!--NewPage--> <HTML> <HEAD> <!-- Generated by javadoc (build 1.6.0_24) on Mon Apr 09 23:07:47 CEST 2012 --> <TITLE> Uses of Package es.arturocandela.android.mislugares.image </TITLE> <META NAME="date" CONTENT="2012-04-09"> <LINK REL ="stylesheet" TYPE="text/css" HREF="../../../../../stylesheet.css" TITLE="Style"> <SCRIPT type="text/javascript"> function windowTitle() { if (location.href.indexOf('is-external=true') == -1) { parent.document.title="Uses of Package es.arturocandela.android.mislugares.image"; } } </SCRIPT> <NOSCRIPT> </NOSCRIPT> </HEAD> <BODY BGCOLOR="white" onload="windowTitle();"> <HR> <!-- ========= START OF TOP NAVBAR ======= --> <A NAME="navbar_top"><!-- --></A> <A HREF="#skip-navbar_top" title="Skip navigation links"></A> <TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY=""> <TR> <TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A NAME="navbar_top_firstrow"><!-- --></A> <TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY=""> <TR ALIGN="center" VALIGN="top"> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <FONT CLASS="NavBarFont1">Class</FONT>&nbsp;</TD> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> &nbsp;<FONT CLASS="NavBarFont1Rev"><B>Use</B></FONT>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-tree.html"><FONT CLASS="NavBarFont1"><B>Tree</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A 
HREF="../../../../../index-files/index-1.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A>&nbsp;</TD> </TR> </TABLE> </TD> <TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM> </EM> </TD> </TR> <TR> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> &nbsp;PREV&nbsp; &nbsp;NEXT</FONT></TD> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> <A HREF="../../../../../index.html?es/arturocandela/android/mislugares/image/package-use.html" target="_top"><B>FRAMES</B></A> &nbsp; &nbsp;<A HREF="package-use.html" target="_top"><B>NO FRAMES</B></A> &nbsp; &nbsp;<SCRIPT type="text/javascript"> <!-- if(window==top) { document.writeln('<A HREF="../../../../../allclasses-noframe.html"><B>All Classes</B></A>'); } //--> </SCRIPT> <NOSCRIPT> <A HREF="../../../../../allclasses-noframe.html"><B>All Classes</B></A> </NOSCRIPT> </FONT></TD> </TR> </TABLE> <A NAME="skip-navbar_top"></A> <!-- ========= END OF TOP NAVBAR ========= --> <HR> <CENTER> <H2> <B>Uses of Package<br>es.arturocandela.android.mislugares.image</B></H2> </CENTER> No usage of es.arturocandela.android.mislugares.image <P> <HR> <!-- ======= START OF BOTTOM NAVBAR ====== --> <A NAME="navbar_bottom"><!-- --></A> <A HREF="#skip-navbar_bottom" title="Skip navigation links"></A> <TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY=""> <TR> <TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A NAME="navbar_bottom_firstrow"><!-- --></A> <TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY=""> <TR ALIGN="center" VALIGN="top"> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> 
<FONT CLASS="NavBarFont1">Class</FONT>&nbsp;</TD> <TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> &nbsp;<FONT CLASS="NavBarFont1Rev"><B>Use</B></FONT>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-tree.html"><FONT CLASS="NavBarFont1"><B>Tree</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../index-files/index-1.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A>&nbsp;</TD> <TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A>&nbsp;</TD> </TR> </TABLE> </TD> <TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM> </EM> </TD> </TR> <TR> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> &nbsp;PREV&nbsp; &nbsp;NEXT</FONT></TD> <TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2"> <A HREF="../../../../../index.html?es/arturocandela/android/mislugares/image/package-use.html" target="_top"><B>FRAMES</B></A> &nbsp; &nbsp;<A HREF="package-use.html" target="_top"><B>NO FRAMES</B></A> &nbsp; &nbsp;<SCRIPT type="text/javascript"> <!-- if(window==top) { document.writeln('<A HREF="../../../../../allclasses-noframe.html"><B>All Classes</B></A>'); } //--> </SCRIPT> <NOSCRIPT> <A HREF="../../../../../allclasses-noframe.html"><B>All Classes</B></A> </NOSCRIPT> </FONT></TD> </TR> </TABLE> <A NAME="skip-navbar_bottom"></A> <!-- ======== END OF BOTTOM NAVBAR ======= --> <HR> </BODY> </HTML>
arturocandela/mislugares
doc/es/arturocandela/android/mislugares/image/package-use.html
HTML
mit
5,693
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DOCUMENT %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Essential LaTeX headers \documentclass{standalone} \usepackage{tikz} % Graph \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TIKZ STYLE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Colors \definecolor{nothing}{HTML}{FFFFFF} \definecolor{player1}{HTML}{FF0000} \definecolor{player2}{HTML}{00FF00} \definecolor{coin}{HTML}{E9DF42} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TIKZ FIGURE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{tikzpicture} % Nodes \node[draw, circle, fill=nothing] at (0, 8) (0-8) {}; \node[draw, circle, fill=nothing] at (1, 8) (1-8) {}; \node[draw, circle, fill=nothing] at (2, 8) (2-8) {}; \node[draw, circle, fill=nothing] at (3, 8) (3-8) {}; \node[draw, circle, fill=nothing] at (4, 8) (4-8) {}; \node[draw, circle, fill=nothing] at (5, 8) (5-8) {}; \node[draw, circle, fill=nothing] at (6, 8) (6-8) {}; \node[draw, circle, fill=nothing] at (7, 8) (7-8) {}; \node[draw, circle, fill=coin] at (8, 8) (8-8) {}; \node[draw, circle, fill=nothing] at (0, 7) (0-7) {}; \node[draw, circle, fill=nothing] at (1, 7) (1-7) {}; \node[draw, circle, fill=nothing] at (2, 7) (2-7) {}; \node[draw, circle, fill=nothing] at (3, 7) (3-7) {}; \node[draw, circle, fill=nothing] at (4, 7) (4-7) {}; \node[draw, circle, fill=nothing] at (5, 7) (5-7) {}; \node[draw, circle, fill=nothing] at (6, 7) (6-7) {}; \node[draw, circle, fill=nothing] at (7, 7) (7-7) {}; \node[draw, circle, fill=nothing] at (8, 7) (8-7) {}; \node[draw, circle, fill=nothing] at (0, 6) (0-6) {}; \node[draw, circle, fill=nothing] at (1, 6) (1-6) {}; \node[draw, circle, fill=nothing] at (2, 6) (2-6) {}; \node[draw, circle, fill=nothing] at (3, 6) (3-6) {}; \node[draw, circle, fill=nothing] at (4, 6) (4-6) {}; \node[draw, circle, fill=nothing] at (5, 6) (5-6) {}; \node[draw, circle, fill=nothing] at (6, 6) (6-6) {}; \node[draw, circle, fill=nothing] at (7, 6) (7-6) {}; \node[draw, circle, fill=nothing] at (8, 6) (8-6) {}; \node[draw, circle, fill=nothing] at (0, 5) (0-5) {}; \node[draw, circle, fill=nothing] at (1, 5) (1-5) {}; \node[draw, circle, fill=nothing] at (2, 5) (2-5) {}; \node[draw, circle, fill=nothing] at (3, 5) (3-5) {}; \node[draw, circle, fill=nothing] at (4, 5) (4-5) {}; \node[draw, circle, fill=nothing] 
at (5, 5) (5-5) {}; \node[draw, circle, fill=nothing] at (6, 5) (6-5) {}; \node[draw, circle, fill=nothing] at (7, 5) (7-5) {}; \node[draw, circle, fill=nothing] at (8, 5) (8-5) {}; \node[draw, circle, fill=nothing] at (0, 4) (0-4) {}; \node[draw, circle, fill=nothing] at (1, 4) (1-4) {}; \node[draw, circle, fill=nothing] at (2, 4) (2-4) {}; \node[draw, circle, fill=nothing] at (3, 4) (3-4) {}; \node[draw, circle, fill=nothing] at (4, 4) (4-4) {}; \node[draw, circle, fill=nothing] at (5, 4) (5-4) {}; \node[draw, circle, fill=nothing] at (6, 4) (6-4) {}; \node[draw, circle, fill=nothing] at (7, 4) (7-4) {}; \node[draw, circle, fill=nothing] at (8, 4) (8-4) {}; \node[draw, circle, fill=nothing] at (0, 3) (0-3) {}; \node[draw, circle, fill=nothing] at (1, 3) (1-3) {}; \node[draw, circle, fill=nothing] at (2, 3) (2-3) {}; \node[draw, circle, fill=nothing] at (3, 3) (3-3) {}; \node[draw, circle, fill=nothing] at (4, 3) (4-3) {}; \node[draw, circle, fill=nothing] at (5, 3) (5-3) {}; \node[draw, circle, fill=nothing] at (6, 3) (6-3) {}; \node[draw, circle, fill=nothing] at (7, 3) (7-3) {}; \node[draw, circle, fill=nothing] at (8, 3) (8-3) {}; \node[draw, circle, fill=nothing] at (0, 2) (0-2) {}; \node[draw, circle, fill=nothing] at (1, 2) (1-2) {}; \node[draw, circle, fill=nothing] at (2, 2) (2-2) {}; \node[draw, circle, fill=nothing] at (3, 2) (3-2) {}; \node[draw, circle, fill=nothing] at (4, 2) (4-2) {}; \node[draw, circle, fill=nothing] at (5, 2) (5-2) {}; \node[draw, circle, fill=nothing] at (6, 2) (6-2) {}; \node[draw, circle, fill=nothing] at (7, 2) (7-2) {}; \node[draw, circle, fill=nothing] at (8, 2) (8-2) {}; \node[draw, circle, fill=nothing] at (0, 1) (0-1) {}; \node[draw, circle, fill=nothing] at (1, 1) (1-1) {}; \node[draw, circle, fill=nothing] at (2, 1) (2-1) {}; \node[draw, circle, fill=nothing] at (3, 1) (3-1) {}; \node[draw, circle, fill=nothing] at (4, 1) (4-1) {}; \node[draw, circle, fill=nothing] at (5, 1) (5-1) {}; \node[draw, circle, fill=nothing] 
at (6, 1) (6-1) {}; \node[draw, circle, fill=nothing] at (7, 1) (7-1) {}; \node[draw, circle, fill=nothing] at (8, 1) (8-1) {}; \node[draw, circle, fill=player1] at (0, 0) (0-0) {}; \node[draw, circle, fill=nothing] at (1, 0) (1-0) {}; \node[draw, circle, fill=nothing] at (2, 0) (2-0) {}; \node[draw, circle, fill=nothing] at (3, 0) (3-0) {}; \node[draw, circle, fill=nothing] at (4, 0) (4-0) {}; \node[draw, circle, fill=nothing] at (5, 0) (5-0) {}; \node[draw, circle, fill=nothing] at (6, 0) (6-0) {}; \node[draw, circle, fill=nothing] at (7, 0) (7-0) {}; \node[draw, circle, fill=nothing] at (8, 0) (8-0) {}; % Edges \draw[] (0-8) -- (0-7); \draw[] (1-8) -- (2-8); \draw[] (1-8) -- (1-7); \draw[] (2-8) -- (3-8); \draw[] (2-8) -- (2-7); \draw[] (3-8) -- (3-7); \draw[] (4-8) -- (5-8); \draw[] (4-8) -- (4-7); \draw[] (5-8) -- (6-8); \draw[] (5-8) -- (5-7); \draw[] (6-8) -- (7-8); \draw[] (6-8) -- (6-7); \draw[] (7-8) -- (8-8); \draw[] (7-8) -- (7-7); \draw[] (8-8) -- (8-7); \draw[] (0-7) -- (1-7); \draw[] (0-7) -- (0-6); \draw[] (1-7) -- (2-7); \draw[] (1-7) -- (1-6); \draw[] (2-7) -- (3-7); \draw[] (2-7) -- (2-6); \draw[] (4-7) -- (5-7); \draw[] (4-7) -- (4-6); \draw[] (5-7) -- (6-7); \draw[] (5-7) -- (5-6); \draw[] (6-7) -- (6-6); \draw[] (0-6) -- (0-5); \draw[] (1-6) -- (2-6); \draw[] (1-6) -- (1-5); \draw[] (2-6) -- (3-6); \draw[] (2-6) -- (2-5); \draw[] (3-6) -- (4-6); \draw[] (3-6) -- (3-5); \draw[] (4-6) -- (5-6); \draw[] (4-6) -- (4-5); \draw[] (5-6) -- (6-6); \draw[] (6-6) -- (7-6); \draw[] (6-6) -- (6-5); \draw[] (7-6) -- (8-6); \draw[] (7-6) -- (7-5); \draw[] (8-6) -- (8-5); \draw[] (0-5) -- (1-5); \draw[] (0-5) -- (0-4); \draw[] (1-5) -- (2-5); \draw[] (1-5) -- (1-4); \draw[] (2-5) -- (3-5); \draw[] (2-5) -- (2-4); \draw[] (3-5) -- (3-4); \draw[] (4-5) -- (5-5); \draw[] (5-5) -- (5-4); \draw[] (6-5) -- (7-5); \draw[] (6-5) -- (6-4); \draw[] (7-5) -- (8-5); \draw[] (8-5) -- (8-4); \draw[] (0-4) -- (1-4); \draw[] (0-4) -- (0-3); \draw[] (1-4) -- (1-3); \draw[] 
(2-4) -- (3-4); \draw[] (2-4) -- (2-3); \draw[] (4-4) -- (4-3); \draw[] (5-4) -- (6-4); \draw[] (6-4) -- (7-4); \draw[] (6-4) -- (6-3); \draw[] (7-4) -- (8-4); \draw[] (0-3) -- (0-2); \draw[] (1-3) -- (1-2); \draw[] (2-3) -- (3-3); \draw[] (3-3) -- (4-3); \draw[] (3-3) -- (3-2); \draw[] (4-3) -- (4-2); \draw[] (5-3) -- (5-2); \draw[] (6-3) -- (7-3); \draw[] (6-3) -- (6-2); \draw[] (7-3) -- (8-3); \draw[] (7-3) -- (7-2); \draw[] (8-3) -- (8-2); \draw[] (2-2) -- (3-2); \draw[] (2-2) -- (2-1); \draw[] (3-2) -- (4-2); \draw[] (3-2) -- (3-1); \draw[] (4-2) -- (5-2); \draw[] (4-2) -- (4-1); \draw[] (5-2) -- (6-2); \draw[] (5-2) -- (5-1); \draw[] (6-2) -- (7-2); \draw[] (6-2) -- (6-1); \draw[] (7-2) -- (7-1); \draw[] (8-2) -- (8-1); \draw[] (0-1) -- (1-1); \draw[] (0-1) -- (0-0); \draw[] (1-1) -- (1-0); \draw[] (2-1) -- (2-0); \draw[] (3-1) -- (4-1); \draw[] (3-1) -- (3-0); \draw[] (4-1) -- (5-1); \draw[] (4-1) -- (4-0); \draw[] (5-1) -- (6-1); \draw[] (5-1) -- (5-0); \draw[] (6-1) -- (7-1); \draw[] (6-1) -- (6-0); \draw[] (8-1) -- (8-0); \draw[] (0-0) -- (1-0); \draw[] (1-0) -- (2-0); \draw[] (3-0) -- (4-0); \draw[] (6-0) -- (7-0); \draw[] (7-0) -- (8-0); \end{tikzpicture} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
dimtion/jml
outputFiles/statistics/archives/ourIA/closest.py/0.6/7/game.tex
TeX
mit
13,286
# Native Packaging for iOS and Android This guide describes how to package a Sencha Touch app to run natively on iOS or Android devices using the Sencha Touch Native Packager tool. ## Native app packaging general procedures The app packaging process is very much the same whether you target iOS or Android devices. The main difference is that each environment requires you to complete a different prerequisite. Additionally, some of the details of creating the config file differ between the two environments. Here are the basic steps for app packaging: 1. **Provisioning.** For **iOS**, you first need to complete iOS provisioning on the [Apple iOS provisioning portal][1], including certificates and devices set up through the provisioning portal and Xcode. **Android** provisioning involves obtaining an appropriate Android-ready certificate (debug or release) for signing the application. 2. **Installation.** Install the packager, part of [Sencha SDK Tools 2.0](http://www.sencha.com/products/sdk-tools/). 3. **Create config file.** Create a packaging configuration file to be used with the Native Packager. 4. **Create package.** Run the packager to create a packaged `<application>.app` file for iOS or an `.apk` file for Android. Each of these steps is detailed further in this guide, with special care given to detailing the differences between iOS and Android packaging procedures. ### Required software Before you begin, make sure your computer is running the following: - **iOS packaging**: Mac OS X 10.6+ or Windows Vista+ and Xcode - **Android packaging**: [Android SDK Tools](http://developer.android.com/sdk/index.html) (Revision 16+) and [Eclipse](http://www.eclipse.org/) (optional). 
## Step 1: Provisioning Provisioning differs between the two environments, as follows: **iOS:** Refer to the [Native iOS Provisioning Guide](#!/guide/native_provisioning) and use the [Apple iOS provisioning portal][1] to set up the appropriate development and distribution certifications and profiles. Create an App ID and finish provisioning your application. You need your App ID and App Name to complete the packaging process. Refer to the How-To section in the [Apple iOS provisioning portal][1] for help. **Android:** The Android Keytool included in the Android SDK tools is one way of creating a certificate for signing Android applications. Below is an example of a Keytool command that generates a private key: $ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000 See the Android developers guide [Signing Your Applications](http://developer.android.com/tools/publishing/app-signing.html) for more information about creating certificates and signing applications. ## Step 2: Install the packager - Run the [Sencha SDK Tools][5] installation: SenchaSDKTools (SenchaSDKTools-2.0.0-Beta) - The `sencha` command that includes the package option will be installed to the specified location during installation (default: Applications/SenchaSDKTools-2.0.0-Beta/command). ## Step 3: Create a packaging configuration file Create a configuration file template by running the following command in the Terminal: sencha package generate <configTemplate.json> `<configTemplate.json>` is the name of the configuration file. It cannot contain any spaces. The configuration file should have the following format. Parameters unique to **iOS** or **Android** are noted. Note that the parameters do not have to follow any particular order in an actual config file. 
    {
        "applicationName": "<AppName>",
        "applicationId": "<AppID>",
        "bundleSeedId": "<String>",                                 (iOS only)
        "versionString": "<AppVersion>",
        "versionCode": "<BuildNumber>",                             (Android only)
        "icon": "<Object>",
        "inputPath": "<AppPackageInputPath>",
        "outputPath": "<AppPackageOutputPath>",
        "rawConfig": "<Info.plistKeys>",                            (iOS only)
        "configuration": "<Release | Debug>",
        "notificationConfiguration": "<Release | Debug>",           (iOS only; optional)
        "platform": "<iOSSimulator | iOS | Android | AndroidEmulator>",
        "deviceType": "<iPhone | iPad | Universal>",                (iOS only)
        "certificatePath": "<CertificateLocation>",
        "certificateAlias": "<CertificateAlias>",                   (Optional)
        "certificatePassword": "<Password>",                        (Optional)
        "permissions": "<ApplicationPermissions>",                  (Android only)
        "sdkPath": "<SDKLocation>",                                 (Android only)
        "androidAPILevel": "<VersionNumber>",                       (Android only)
        "minOSVersion": "<VersionNumber>",                          (iOS only)
        "orientations": "<Direction>"
    }

The rest of this section provides details about each parameter, noting environment-specific settings.

### `applicationName`

The name of your application, which a device displays to the user after the app is installed.

**iOS:** The application name should match the name provided in the [iOS Provisioning Portal][1], in the App IDs section. Here's an example iOS App ID, showing both the name and the ID:

{@img idScreen.png App ID}

This example uses the following:

- AppName: Sencha Touch 2 Packaging
- AppID: com.Sencha.Touch2Packaging

*Note:* The App ID is the same as the one you put in the Identifier field in Xcode.

**Android:** The output file will have the name `<AppName>.apk`.

### `applicationId`

The ID for your app. It's suggested that you use a namespace for your app, such as `com.sencha.Touch2Package`, as shown in the example above. For iOS, this can also be found in the provisioning portal.

### `bundleSeedId` (iOS only)

The ten-character string in front of the iOS application ID obtained from the [iOS Provisioning Portal][1].
In the example shown above under `applicationName`, it's `H8A8ADYR7H`.

### `versionString`

The version number of your application, usually a string such as `1.0`.

### `versionCode` (Android only)

The build number of an Android app, also called the integer version code.

### `icon`

The icon displayed to the user along with your app name.

**iOS:** Specifies the icon file to be used for your application. A retina icon is specified with `@2x` at the end of the icon name: a regular icon name looks like `icon.png`, while the corresponding retina icon looks like `icon@2x.png`. If a retina icon with the `@2x.png` suffix exists, the packager includes the retina icon. You should also specify the target devices for the app, as follows:

    "icon": {
        "57": "resources/icons/Icon.png",
        "72": "resources/icons/Icon~ipad.png",
        "114": "resources/icons/Icon@2x.png",
        "144": "resources/icons/Icon~ipad@2x.png"
    }

Refer to the [Apple documentation](https://developer.apple.com/library/ios/#documentation/userexperience/conceptual/mobilehig/IconsImages/IconsImages.html) for specific information about icon sizes.

**Android:** Specifies the launcher icon file to be used for your application. Refer to the [Android Launcher Icons guide](http://developer.android.com/guide/practices/ui_guidelines/icon_design_launcher.html) for more information.

### `inputPath`

The location of your Sencha Touch 2 application, relative to the configuration file.

### `outputPath`

The output location of the packaged application, that is, where the built application file will be saved.

### `rawConfig` (iOS only)

"Raw" keys that can be included in the `info.plist` configuration of iOS apps. `info.plist` is the name of an information property list file, a structured text file with configuration information for a bundled executable.
See [Information Property List Files](https://developer.apple.com/library/ios/#documentation/MacOSX/Conceptual/BPRuntimeConfig/Articles/ConfigFiles.html) in the iOS Developer Library for more information.

### `configuration`

Indicates whether you are building the debug or release configuration of your application. Use `Debug` unless you are submitting your app to an online store, in which case use `Release`.

### `notificationConfiguration` (iOS only)

Optional for apps that use push notifications. Use `Debug` unless you are submitting your app to an online store, in which case use `Release`. If the app doesn't use push notifications, leave this blank or remove the parameter.

### `platform`

Indicates the platform on which your application will run.

- **iOS:** Options are `iOSSimulator` or `iOS`.
- **Android:** Options are `Android` or `AndroidEmulator`.

### `deviceType` (iOS only)

Indicates the iOS device type that your application will run on. Available options are:

- iPhone
- iPad
- Universal

### `certificatePath`

The location of your certificate. This is required when you are developing for Android or developing on Windows.

### `certificateAlias` (Optional)

Indicates the name of your certificate. If this is not specified when developing on OS X, the packaging tool automatically tries to find the certificate using the `applicationId`. It can be a simple substring matcher: for example, if your certificate name is "iPhone Developer: Robert Dougan (ABCDEFGHIJ)", you can just enter `iPhone Developer`. Not required when using a `certificatePath` on Windows.

### `certificatePassword` (Optional)

Indicates the password set for the certificate. Use this only if a password was specified when generating the certificate for an Android release build (on OS X or Windows) or for any iOS build on Windows. If no password was set, leave this blank or remove the parameter.
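Before running the packager, it can help to sanity-check a config file against the parameters described above. The key sets in the following sketch are assumptions drawn from this guide, not the packager's documented validation rules.

```python
import json

# Assumed minimal key sets for a sanity check; the packager's actual
# validation rules may differ, so treat this purely as a sketch.
COMMON = {"applicationName", "applicationId", "versionString",
          "inputPath", "outputPath", "configuration", "platform"}
ANDROID_ONLY = {"versionCode", "sdkPath", "androidAPILevel"}

def missing_keys(config_text):
    """Return the required keys absent from a packaging config JSON string."""
    cfg = json.loads(config_text)
    required = set(COMMON)
    # Android configs additionally need the Android-only parameters.
    if cfg.get("platform", "").lower().startswith("android"):
        required |= ANDROID_ONLY
    return sorted(required - cfg.keys())
```

Running this over a freshly generated `configTemplate.json` would flag any placeholder values you forgot to fill in before invoking `sencha package`.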
### `permissions` (Android only)

An array of permissions for services called from an Android app, including coarse location, fine location, information about networks, the camera, and so on. See the [complete list](http://developer.android.com/reference/android/Manifest.permission.html) of permissions for Android app services.

### `sdkPath` (Android only)

Indicates the path to the Android SDK.

### `androidAPILevel` (Android only)

Indicates the Android API level, the version of the Android SDK to use. For more information, see [What is API Level](http://developer.android.com/guide/appendix/api-levels.html) in the Android SDK documentation. Be sure to install the corresponding platform API in the Android SDK manager (*android_sdk/tools/android*).

### `minOSVersion` (iOS only)

Indicates the lowest iOS version required for the app to run.

### `orientations`

Indicates the device orientations in which the application can run. Options are:

- portrait
- landscapeLeft
- landscapeRight
- portraitUpsideDown

*Note:* If this is omitted, the default is all four orientations.

## Sample iOS config file

The following is an example iOS config file.

    {
        "applicationName": "SenchaAPI",
        "applicationId": "com.sencha.api",
        "outputPath": "~/stbuild/app/",
        "versionString": "1.2",
        "inputPath": "~/stbuild/webapp",
        "icon": {
            "57": "resources/icons/Icon.png",
            "72": "resources/icons/Icon~ipad.png",
            "114": "resources/icons/Icon@2x.png",
            "144": "resources/icons/Icon~ipad@2x.png"
        },
        "rawConfig": "<key>UIPrerenderedIcon</key><true/>",
        "configuration": "debug",
        "notificationConfiguration": "debug",
        "platform": "iOS",
        "deviceType": "iPhone",
        "certificatePath": "stbuild.keystore",
        "certificateAlias": "iPhone Developer",
        "certificatePassword": "stbuild",
        "minOSVersion": "4.0",
        "bundleSeedId": "KPXFEPZ6EF",
        "orientations": [
            "portrait",
            "landscapeLeft",
            "landscapeRight",
            "portraitUpsideDown"
        ]
    }

## Sample Android config file

The following is an example Android config file.
    {
        "applicationName": "SenchaAPI",
        "applicationId": "com.sencha.api",
        "outputPath": "~/stbuild/app/",
        "versionString": "1.2",
        "versionCode": "12",
        "inputPath": "~/stbuild/webapp",
        "icon": {
            "57": "resources/icons/Icon.png",
            "72": "resources/icons/Icon~ipad.png",
            "114": "resources/icons/Icon@2x.png",
            "144": "resources/icons/Icon~ipad@2x.png"
        },
        "configuration": "debug",
        "platform": "android",
        "certificatePath": "stbuild.keystore",
        "certificateAlias": "Android Developer",
        "certificatePassword": "stbuild",
        "permissions": [
            "INTERNET",
            "ACCESS_NETWORK_STATE",
            "CAMERA",
            "VIBRATE",
            "ACCESS_FINE_LOCATION",
            "ACCESS_COARSE_LOCATION",
            "CALL_PHONE"
        ],
        "sdkPath": "/android_sdk-mac_86/",
        "androidAPILevel": "7",
        "orientations": [
            "portrait",
            "landscapeLeft",
            "landscapeRight",
            "portraitUpsideDown"
        ]
    }

## Step 4: Run the packager to create the packaged application

After creating the config file, the next step is to package the app. Here are the procedures for packaging both debug and release versions of an app for both iOS and Android.

### iOS: Package a debug application

The appropriate `platform` and `configuration` settings need to be made in the config file, for example:

    platform: iOSSimulator
    configuration: Debug

If `platform` and `configuration` are not set, the packaged app will not run correctly. With these configs set properly, issue the following command in the Terminal:

    sencha package run <configFile.json>

In this example, which targets the iOS Simulator in the `platform` config parameter, successful completion of the `package` command launches the iOS Simulator with the application running natively. Note that the `deviceType` identifier, `iPhone` or `iPad`, has to be set properly to trigger the appropriate simulator.

### iOS: Package a release application

To package a signed application to run on the device, issue the following command in the terminal:

    sencha package <configFile.json>

*Note:* An `<AppName.app>` file is created in the specified output location.
This is the application that you can deploy to an iOS device.

### Android: Package a debug application and run it on the Android Emulator

The appropriate `platform` and `configuration` settings need to be made in the config file, for example:

    platform: AndroidEmulator
    configuration: Debug

If `platform` and `configuration` are not set, the packaged app will not run correctly. With these configs set properly, start the Android Emulator and issue the following command:

    sencha package run <configFile.json>

In this example, which targets the Android Emulator in the `platform` config parameter, successful completion of the `package` command launches the app in the already running emulator. If `package` is successful, an `.apk` file is available in the application output location for you to manually test on an Android Emulator or a device. More information about the Android Emulator can be found in [Android Developer Guide: Using the Android Emulator](http://developer.android.com/tools/devices/emulator.html).

### Android: Package an application for distribution

To package a signed application to run on the device, issue the following command:

    sencha package <configFile.json>

An `<AppName.apk>` file is created in the specified output location. This is the application that you can release for distribution.

## Additional resources

### iOS resources

1. [Native iOS Provisioning](#!/guide/native_provisioning)
2. [Apple iOS provisioning portal][1]
3. [iOS Icon guideline][4]

### Android resources

1. [Signing Your Applications](http://developer.android.com/tools/publishing/app-signing.html)
2. [Installing the ADT Plugin for Eclipse](http://developer.android.com/tools/sdk/eclipse-adt.html)
3. [Eclipse](http://www.eclipse.org/)
4. [Managing Virtual Devices for Android Emulator](http://developer.android.com/tools/publishing/app-signing.html), "Setting up Virtual Devices"
[1]: https://developer.apple.com/ios/manage/overview/index.action
[3]: http://developer.apple.com/library/ios/%23documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/BuildTimeConfiguration/BuildTimeConfiguration.html%23//apple_ref/doc/uid/TP40007072-CH7-SW1
[4]: http://developer.apple.com/library/ios/%23documentation/userexperience/conceptual/mobilehig/IconsImages/IconsImages.html
[5]: http://www.sencha.com/products/sdk-tools/
msmeeks/sencha-keypad
js/sencha-touch-2.1.0/docs/guides/native_packaging/README.md
Markdown
mit
16,871
enetworking
===========

Educations Networking
PrasanthCreations/enetworking
README.md
Markdown
unlicense
47
/* YUI 3.5.1 (build 22) Copyright 2012 Yahoo! Inc. All rights reserved. Licensed under the BSD License. http://yuilibrary.com/license/ */ YUI.add('sortable', function(Y) { /** * The class allows you to create a Drag & Drop reordered list. * @module sortable */ /** * The class allows you to create a Drag & Drop reordered list. * @class Sortable * @extends Base * @constructor */ var Sortable = function(o) { Sortable.superclass.constructor.apply(this, arguments); }, CURRENT_NODE = 'currentNode', OPACITY_NODE = 'opacityNode', CONT = 'container', ID = 'id', ZINDEX = 'zIndex', OPACITY = 'opacity', PARENT_NODE = 'parentNode', NODES = 'nodes', NODE = 'node'; Y.extend(Sortable, Y.Base, { /** * @property delegate * @type DD.Delegate * @description A reference to the DD.Delegate instance. */ delegate: null, initializer: function() { var id = 'sortable-' + Y.guid(), c, delConfig = { container: this.get(CONT), nodes: this.get(NODES), target: true, invalid: this.get('invalid'), dragConfig: { groups: [ id ] } }, del; if (this.get('handles')) { delConfig.handles = this.get('handles'); } del = new Y.DD.Delegate(delConfig); this.set(ID, id); del.dd.plug(Y.Plugin.DDProxy, { moveOnEnd: false, cloneNode: true }); c = new Y.DD.Drop({ node: this.get(CONT), bubbleTarget: del, groups: del.dd.get('groups') }).on('drop:over', Y.bind(this._onDropOver, this)); del.on({ 'drag:start': Y.bind(this._onDragStart, this), 'drag:end': Y.bind(this._onDragEnd, this), 'drag:over': Y.bind(this._onDragOver, this), 'drag:drag': Y.bind(this._onDrag, this) }); this.delegate = del; Sortable.reg(this); }, _up: null, _y: null, _onDrag: function(e) { if (e.pageY < this._y) { this._up = true; } else if (e.pageY > this._y) { this._up = false; } this._y = e.pageY; }, /** * @private * @method _onDropOver * @param Event e The Event Object * @description Handles the DropOver event to append a drop node to an empty target */ _onDropOver: function(e) { if (!e.drop.get(NODE).test(this.get(NODES))) { var nodes = 
e.drop.get(NODE).all(this.get(NODES)); if (nodes.size() === 0) { e.drop.get(NODE).append(e.drag.get(NODE)); } } }, /** * @private * @method _onDragOver * @param Event e The Event Object * @description Handles the DragOver event that moves the object in the list or to another list. */ _onDragOver: function(e) { if (!e.drop.get(NODE).test(this.get(NODES))) { return; } if (e.drag.get(NODE) == e.drop.get(NODE)) { return; } // is drop a child of drag? if (e.drag.get(NODE).contains(e.drop.get(NODE))) { return; } var same = false, dir, oldNode, newNode, dropsort, dropNode, moveType = this.get('moveType').toLowerCase(); if (e.drag.get(NODE).get(PARENT_NODE).contains(e.drop.get(NODE))) { same = true; } if (same && moveType == 'move') { moveType = 'insert'; } switch (moveType) { case 'insert': dir = ((this._up) ? 'before' : 'after'); dropNode = e.drop.get(NODE); if (Y.Sortable._test(dropNode, this.get(CONT))) { dropNode.append(e.drag.get(NODE)); } else { dropNode.insert(e.drag.get(NODE), dir); } break; case 'swap': Y.DD.DDM.swapNode(e.drag, e.drop); break; case 'move': case 'copy': dropsort = Y.Sortable.getSortable(e.drop.get(NODE).get(PARENT_NODE)); if (!dropsort) { Y.log('No delegate parent found', 'error', 'sortable'); return; } Y.DD.DDM.getDrop(e.drag.get(NODE)).addToGroup(dropsort.get(ID)); //Same List if (same) { Y.DD.DDM.swapNode(e.drag, e.drop); } else { if (this.get('moveType') == 'copy') { //New List oldNode = e.drag.get(NODE); newNode = oldNode.cloneNode(true); newNode.set(ID, ''); e.drag.set(NODE, newNode); dropsort.delegate.createDrop(newNode, [dropsort.get(ID)]); oldNode.setStyles({ top: '', left: '' }); } e.drop.get(NODE).insert(e.drag.get(NODE), 'before'); } break; } this.fire(moveType, { same: same, drag: e.drag, drop: e.drop }); this.fire('moved', { same: same, drag: e.drag, drop: e.drop }); }, /** * @private * @method _onDragStart * @param Event e The Event Object * @description Handles the DragStart event and initializes some settings. 
*/ _onDragStart: function(e) { this.delegate.get('lastNode').setStyle(ZINDEX, ''); this.delegate.get(this.get(OPACITY_NODE)).setStyle(OPACITY, this.get(OPACITY)); this.delegate.get(CURRENT_NODE).setStyle(ZINDEX, '999'); }, /** * @private * @method _onDragEnd * @param Event e The Event Object * @description Handles the DragEnd event that cleans up the settings in the drag:start event. */ _onDragEnd: function(e) { this.delegate.get(this.get(OPACITY_NODE)).setStyle(OPACITY, 1); this.delegate.get(CURRENT_NODE).setStyles({ top: '', left: '' }); this.sync(); }, /** * @method plug * @param Class cls The class to plug * @param Object config The class config * @description Passthrough to the DD.Delegate.ddplug method * @chainable */ plug: function(cls, config) { //I don't like this.. Not at all, need to discuss with the team if (cls && cls.NAME.substring(0, 4).toLowerCase() === 'sort') { this.constructor.superclass.plug.call(this, cls, config); } else { this.delegate.dd.plug(cls, config); } return this; }, /** * @method sync * @description Passthrough to the DD.Delegate syncTargets method. * @chainable */ sync: function() { this.delegate.syncTargets(); return this; }, destructor: function() { this.delegate.destroy(); Sortable.unreg(this); }, /** * @method join * @param Sortable sel The Sortable list to join with * @param String type The type of join to do: full, inner, outer, none. Default: full * @description Join this Sortable with another Sortable instance. 
* <ul> * <li>full: Exchange nodes with both lists.</li> * <li>inner: Items can go into this list from the joined list.</li> * <li>outer: Items can go out of the joined list into this list.</li> * <li>none: Removes the join.</li> * </ul> * @chainable */ join: function(sel, type) { if (!(sel instanceof Y.Sortable)) { Y.error('Sortable: join needs a Sortable Instance'); return this; } if (!type) { type = 'full'; } type = type.toLowerCase(); var method = '_join_' + type; if (this[method]) { this[method](sel); } return this; }, /** * @private * @method _join_none * @param Sortable sel The Sortable to remove the join from * @description Removes the join with the passed Sortable. */ _join_none: function(sel) { this.delegate.dd.removeFromGroup(sel.get(ID)); sel.delegate.dd.removeFromGroup(this.get(ID)); }, /** * @private * @method _join_full * @param Sortable sel The Sortable list to join with * @description Joins both of the Sortables together. */ _join_full: function(sel) { this.delegate.dd.addToGroup(sel.get(ID)); sel.delegate.dd.addToGroup(this.get(ID)); }, /** * @private * @method _join_outer * @param Sortable sel The Sortable list to join with * @description Allows this Sortable to accept items from the passed Sortable. */ _join_outer: function(sel) { this.delegate.dd.addToGroup(sel.get(ID)); }, /** * @private * @method _join_inner * @param Sortable sel The Sortable list to join with * @description Allows this Sortable to give items to the passed Sortable. */ _join_inner: function(sel) { sel.delegate.dd.addToGroup(this.get(ID)); }, /** * A custom callback to allow a user to extract some sort of id or any other data from the node to use in the "ordering list" and then that data should be returned from the callback. 
* @method getOrdering * @param Function callback * @return Array */ getOrdering: function(callback) { var ordering = []; if (!Y.Lang.isFunction(callback)) { callback = function (node) { return node; }; } Y.one(this.get(CONT)).all(this.get(NODES)).each(function(node) { ordering.push(callback(node)); }); return ordering; } }, { NAME: 'sortable', ATTRS: { /** * @attribute handles * @description Drag handles to pass on to the internal DD.Delegate instance. * @type Array */ handles: { value: false }, /** * @attribute container * @description A selector query to get the container to listen for mousedown events on. All "nodes" should be a child of this container. * @type String */ container: { value: 'body' }, /** * @attribute nodes * @description A selector query to get the children of the "container" to make draggable elements from. * @type String */ nodes: { value: '.dd-draggable' }, /** * @attribute opacity * @description The opacity to change the proxy item to when dragging. * @type String */ opacity: { value: '.75' }, /** * @attribute opacityNode * @description The node to set opacity on when dragging (dragNode or currentNode). Default: currentNode. * @type String */ opacityNode: { value: 'currentNode' }, /** * @attribute id * @description The id of this Sortable, used to get a reference to this Sortable list from another list. * @type String */ id: { value: null }, /** * @attribute moveType * @description How should an item move to another list: insert, swap, move, copy. Default: insert * @type String */ moveType: { value: 'insert' }, /** * @attribute invalid * @description A selector string to test if a list item is invalid and not sortable * @type String */ invalid: { value: '' } }, /** * @static * @property _sortables * @private * @type Array * @description Hash map of all Sortables on the page. */ _sortables: [], /** * @static * @method _test * @param {Node} node The node instance to test. 
* @param {String|Node} test The node instance or selector string to test against. * @description Test a Node or a selector for the container */ _test: function(node, test) { if (test instanceof Y.Node) { return (test === node); } else { return node.test(test); } }, /** * @static * @method getSortable * @param {String|Node} node The node instance or selector string to use to find a Sortable instance. * @description Get a Sortable instance back from a node reference or a selector string. */ getSortable: function(node) { var s = null; node = Y.one(node); Y.each(Y.Sortable._sortables, function(v) { if (Y.Sortable._test(node, v.get(CONT))) { s = v; } }); return s; }, /** * @static * @method reg * @param Sortable s A Sortable instance. * @description Register a Sortable instance with the singleton to allow lookups later. */ reg: function(s) { Y.Sortable._sortables.push(s); }, /** * @static * @method unreg * @param Sortable s A Sortable instance. * @description Unregister a Sortable instance with the singleton. */ unreg: function(s) { Y.each(Y.Sortable._sortables, function(v, k) { if (v === s) { Y.Sortable._sortables[k] = null; delete Sortable._sortables[k]; } }); } }); Y.Sortable = Sortable; /** * @event copy * @description A Sortable node was moved with a copy. * @param {Event.Facade} event An Event Facade object * @param {Boolean} event.same Moved to the same list. * @param {DD.Drag} event.drag The drag instance. * @param {DD.Drop} event.drop The drop instance. * @type {Event.Custom} */ /** * @event move * @description A Sortable node was moved with a move. * @param {Event.Facade} event An Event Facade object with the following specific property added: * @param {Boolean} event.same Moved to the same list. * @param {DD.Drag} event.drag The drag instance. * @param {DD.Drop} event.drop The drop instance. * @type {Event.Custom} */ /** * @event insert * @description A Sortable node was moved with an insert. 
* @param {Event.Facade} event An Event Facade object with the following specific property added: * @param {Boolean} event.same Moved to the same list. * @param {DD.Drag} event.drag The drag instance. * @param {DD.Drop} event.drop The drop instance. * @type {Event.Custom} */ /** * @event swap * @description A Sortable node was moved with a swap. * @param {Event.Facade} event An Event Facade object with the following specific property added: * @param {Boolean} event.same Moved to the same list. * @param {DD.Drag} event.drag The drag instance. * @param {DD.Drop} event.drop The drop instance. * @type {Event.Custom} */ /** * @event moved * @description A Sortable node was moved. * @param {Event.Facade} event An Event Facade object with the following specific property added: * @param {Boolean} event.same Moved to the same list. * @param {DD.Drag} event.drag The drag instance. * @param {DD.Drop} event.drop The drop instance. * @type {Event.Custom} */ }, '3.5.1' ,{requires:['dd-delegate', 'dd-drop-plugin', 'dd-proxy']});
sergiomt/zesped
src/webapp/js/yui/sortable/sortable-debug.js
JavaScript
agpl-3.0
17,368
@echo off
setlocal
call h2
start java -cp "%H2CP%" org.h2.tools.Server
jihwan/spring-data-jdbc
src/test/resources/h2database/run.bat
Batchfile
apache-2.0
71
---
description: Manager administration guide
keywords: docker, container, swarm, manager, raft
redirect_from:
- /engine/swarm/manager-administration-guide/
title: Administer and maintain a swarm of Docker Engines
---

When you run a swarm of Docker Engines, **manager nodes** are the key components for managing the swarm and storing the swarm state. It is important to understand some key features of manager nodes in order to properly deploy and maintain the swarm.

This article covers the following swarm administration tasks:

* [Using a static IP for manager node advertise address](#use-a-static-ip-for-manager-node-advertise-address)
* [Adding manager nodes for fault tolerance](#add-manager-nodes-for-fault-tolerance)
* [Distributing manager nodes](#distribute-manager-nodes)
* [Running manager-only nodes](#run-manager-only-nodes)
* [Backing up the swarm state](#back-up-the-swarm-state)
* [Monitoring the swarm health](#monitor-swarm-health)
* [Troubleshooting a manager node](#troubleshoot-a-manager-node)
* [Forcefully removing a node](#force-remove-a-node)
* [Recovering from disaster](#recover-from-disaster)
* [Forcing the swarm to rebalance](#forcing-the-swarm-to-rebalance)

Refer to [How nodes work](how-swarm-mode-works/nodes.md) for a brief overview of Docker Swarm mode and the difference between manager and worker nodes.

## Operating manager nodes in a swarm

Swarm manager nodes use the [Raft Consensus Algorithm](raft.md) to manage the swarm state. You only need to understand some general concepts of Raft in order to manage a swarm.

There is no limit on the number of manager nodes. The decision about how many manager nodes to implement is a trade-off between performance and fault tolerance. Adding manager nodes to a swarm makes the swarm more fault-tolerant. However, additional manager nodes reduce write performance because more nodes must acknowledge proposals to update the swarm state. This means more network round-trip traffic.
Raft requires a majority of managers, also called the quorum, to agree on proposed updates to the swarm, such as node additions or removals. Membership operations are subject to the same constraints as state replication.

### Maintaining the quorum of managers

If the swarm loses the quorum of managers, the swarm cannot perform management tasks. If your swarm has multiple managers, always have more than two. In order to maintain quorum, a majority of managers must be available. An odd number of managers is recommended, because the next even number does not make the quorum easier to keep. For instance, whether you have 3 or 4 managers, you can still only lose 1 manager and maintain the quorum. If you have 5 or 6 managers, you can still only lose two.

Even if a swarm loses the quorum of managers, swarm tasks on existing worker nodes continue to run. However, swarm nodes cannot be added, updated, or removed, and new or existing tasks cannot be started, stopped, moved, or updated.

See [Recovering from losing the quorum](#recovering-from-losing-the-quorum) for troubleshooting steps if you do lose the quorum of managers.

## Use a static IP for manager node advertise address

When initiating a swarm, you have to specify the `--advertise-addr` flag to advertise your address to other manager nodes in the swarm. For more information, see [Run Docker Engine in swarm mode](swarm-mode.md#configure-the-advertise-address). Because manager nodes are meant to be a stable component of the infrastructure, you should use a *fixed IP address* for the advertise address to prevent the swarm from becoming unstable on machine reboot.

If the whole swarm restarts and every manager node subsequently gets a new IP address, there is no way for any node to contact an existing manager. Therefore the swarm hangs while nodes try to contact one another at their old IP addresses.

Dynamic IP addresses are OK for worker nodes.
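The quorum arithmetic from "Maintaining the quorum of managers" above can be sketched in a few lines. This is a plain illustration of Raft majorities, not part of any Docker tooling:

```python
def majority(managers):
    """Smallest number of managers that constitutes a Raft quorum."""
    return managers // 2 + 1

def fault_tolerance(managers):
    """How many managers can fail while the quorum is preserved."""
    return managers - majority(managers)  # equivalently (managers - 1) // 2
```

This is why 3 and 4 managers both tolerate one failure, and 5 and 6 both tolerate two: stepping up to the next even number buys no extra tolerance.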
## Add manager nodes for fault tolerance

You should maintain an odd number of managers in the swarm to support manager node failures. Having an odd number of managers ensures that during a network partition, there is a higher chance that the quorum remains available to process requests if the network is partitioned into two sets. Keeping the quorum is not guaranteed if you encounter more than two network partitions.

| Swarm Size | Majority | Fault Tolerance |
|:----------:|:--------:|:---------------:|
|     1      |    1     |        0        |
|     2      |    2     |        0        |
|   **3**    |    2     |      **1**      |
|     4      |    3     |        1        |
|   **5**    |    3     |      **2**      |
|     6      |    4     |        2        |
|   **7**    |    4     |      **3**      |
|     8      |    5     |        3        |
|   **9**    |    5     |      **4**      |

For example, in a swarm with *5 nodes*, if you lose *3 nodes*, you don't have a quorum. Therefore you can't add or remove nodes until you recover one of the unavailable manager nodes or recover the swarm with disaster recovery commands. See [Recover from disaster](#recover-from-disaster).

While it is possible to scale a swarm down to a single manager node, it is impossible to demote the last manager node. This ensures you maintain access to the swarm and that the swarm can still process requests. Scaling down to a single manager is an unsafe operation and is not recommended. If the last node leaves the swarm unexpectedly during the demote operation, the swarm will become unavailable until you reboot the node or restart with `--force-new-cluster`.

You manage swarm membership with the `docker swarm` and `docker node` subsystems. Refer to [Add nodes to a swarm](join-nodes.md) for more information on how to add worker nodes and promote a worker node to be a manager.

## Distribute manager nodes

In addition to maintaining an odd number of manager nodes, pay attention to datacenter topology when placing managers. For optimal fault tolerance, distribute manager nodes across a minimum of 3 availability zones to support failures of an entire set of machines or common maintenance scenarios.
If you suffer a failure in any of those zones, the swarm should maintain the
quorum of manager nodes available to process requests and rebalance workloads.

| Swarm manager nodes | Repartition (on 3 Availability zones) |
|:-------------------:|:-------------------------------------:|
|          3          |                 1-1-1                 |
|          5          |                 2-2-1                 |
|          7          |                 3-2-2                 |
|          9          |                 3-3-3                 |

## Run manager-only nodes

By default manager nodes also act as worker nodes. This means the scheduler
can assign tasks to a manager node. For small and non-critical swarms,
assigning tasks to managers is relatively low-risk as long as you schedule
services using **resource constraints** for *cpu* and *memory*.

However, because manager nodes use the Raft consensus algorithm to replicate
data in a consistent way, they are sensitive to resource starvation. You
should isolate managers in your swarm from processes that might block swarm
operations like swarm heartbeat or leader elections.

To avoid interference with manager node operation, you can drain manager nodes
to make them unavailable as worker nodes:

```bash
docker node update --availability drain <NODE>
```

When you drain a node, the scheduler reassigns any tasks running on the node
to other available worker nodes in the swarm. It also prevents the scheduler
from assigning tasks to the node.

## Back up the swarm state

Docker manager nodes store the swarm state and manager logs in the following
directory:

```bash
/var/lib/docker/swarm/raft
```

Back up the `raft` data directory often so that you can use it in case of
[disaster recovery](#recover-from-disaster). Then you can take the `raft`
directory of one of the manager nodes to restore to a new swarm.

## Monitor swarm health

You can monitor the health of manager nodes by querying the docker `nodes` API
in JSON format through the `/nodes` HTTP endpoint. Refer to the
[nodes API documentation](../reference/api/docker_remote_api_v1.24.md#36-nodes)
for more information.
From the command line, run `docker node inspect <id-node>` to query the nodes.
For instance, to query the reachability of the node as a manager:

```bash{% raw %}
docker node inspect manager1 --format "{{ .ManagerStatus.Reachability }}"
reachable
{% endraw %}```

To query the status of the node as a worker that accepts tasks:

```bash{% raw %}
docker node inspect manager1 --format "{{ .Status.State }}"
ready
{% endraw %}```

From those commands, we can see that `manager1` is both at the status
`reachable` as a manager and `ready` as a worker.

An `unreachable` health status means that this particular manager node is
unreachable from other manager nodes. In this case you need to take action to
restore the unreachable manager:

- Restart the daemon and see if the manager comes back as reachable.
- Reboot the machine.
- If neither restarting nor rebooting works, you should add another manager
  node or promote a worker to be a manager node. You also need to cleanly
  remove the failed node entry from the manager set with
  `docker node demote <NODE>` and `docker node rm <id-node>`.

Alternatively you can also get an overview of the swarm health from a manager
node with `docker node ls`:

```bash
docker node ls

ID                           HOSTNAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
1mhtdwhvsgr3c26xxbnzdc3yp    node05    Accepted    Ready   Active
516pacagkqp2xc3fk9t1dhjor    node02    Accepted    Ready   Active        Reachable
9ifojw8of78kkusuc4a6c23fx *  node01    Accepted    Ready   Active        Leader
ax11wdpwrrb6db3mfjydscgk7    node04    Accepted    Ready   Active
bb1nrq2cswhtbg4mrsqnlx1ck    node03    Accepted    Ready   Active        Reachable
di9wxgz8dtuh9d2hn089ecqkf    node06    Accepted    Ready   Active
```

## Troubleshoot a manager node

You should never restart a manager node by copying the `raft` directory from
another node. The data directory is unique to a node ID. A node can only use a
node ID once to join the swarm. The node ID space should be globally unique.

To cleanly re-join a manager node to a cluster:

1. To demote the node to a worker, run `docker node demote <NODE>`.
2. To remove the node from the swarm, run `docker node rm <NODE>`.
3. Re-join the node to the swarm with a fresh state using `docker swarm join`.

For more information on joining a manager node to a swarm, refer to
[Join nodes to a swarm](join-nodes.md).

## Force remove a node

In most cases, you should shut down a node before removing it from a swarm
with the `docker node rm` command. If a node becomes unreachable,
unresponsive, or compromised, you can forcefully remove the node without
shutting it down by passing the `--force` flag. For instance, if `node9`
becomes compromised:

<!-- bash hint breaks block quote -->
```
$ docker node rm node9

Error response from daemon: rpc error: code = 9 desc = node node9 is not down and can't be removed

$ docker node rm --force node9

Node node9 removed from swarm
```

Before you forcefully remove a manager node, you must first demote it to the
worker role. Make sure that you always have an odd number of manager nodes if
you demote or remove a manager.

## Recover from disaster

Swarm is resilient to failures and the swarm can recover from any number of
temporary node failures (machine reboots or crash with restart) or other
transient errors. However, a swarm cannot automatically recover if it loses a
quorum. Tasks on existing worker nodes continue to run, but administrative
tasks are not possible, including scaling or updating services and joining or
removing nodes from the swarm. The best way to recover is to bring the missing
manager nodes back online. If that is not possible, continue reading for some
options for recovering your swarm.

In a swarm of `N` managers, a quorum (a majority) of manager nodes must always
be available. For example, in a swarm with 5 managers, a minimum of 3 must be
operational and in communication with each other.
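The majority requirement just stated is a strict one: management works only
while more than half the managers are reachable. A hedged shell sketch (the
`has_quorum` helper is illustrative, not a Docker command):

```bash
# A swarm keeps quorum only while reachable managers form a strict
# majority, i.e. reachable > total / 2.
has_quorum() {
  local reachable=$1 total=$2
  if [ $(( reachable * 2 )) -gt "$total" ]; then
    echo "quorum"
  else
    echo "no quorum"
  fi
}

has_quorum 3 5   # quorum: 2 of 5 managers may be down
has_quorum 2 5   # no quorum: management operations fail
has_quorum 2 4   # no quorum: an exact half is not a majority
```

Note the last case: with 4 managers, losing 2 already breaks quorum, which is
the fault-tolerance table above in a single inequality.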
In other words, the swarm can tolerate up to `(N-1)/2` permanent failures,
beyond which requests involving swarm management cannot be processed. These
types of failures include data corruption or hardware failures.

### Recovering from losing the quorum

If you lose the quorum of managers, you cannot administer the swarm. If you
have lost the quorum and you attempt to perform any management operation on
the swarm, an error occurs:

```no-highlight
Error response from daemon: rpc error: code = 4 desc = context deadline exceeded
```

The best way to recover from losing the quorum is to bring the failed nodes
back online. If you can't do that, the only way to recover from this state is
to use the `--force-new-cluster` action from a manager node. This removes all
managers except the manager the command was run from. The quorum is achieved
because there is now only one manager. Promote nodes to be managers until you
have the desired number of managers.

```bash
# From the node to recover
docker swarm init --force-new-cluster --advertise-addr node01:2377
```

When you run the `docker swarm init` command with the `--force-new-cluster`
flag, the Docker Engine where you run the command becomes the manager node of
a single-node swarm which is capable of managing and running services. The
manager has all the previous information about services and tasks, worker
nodes are still part of the swarm, and services are still running. You need to
add or re-add manager nodes to achieve your previous task distribution and
ensure that you have enough managers to maintain high availability and prevent
losing the quorum.

## Forcing the swarm to rebalance

Generally, you do not need to force the swarm to rebalance its tasks. When you
add a new node to a swarm, or a node reconnects to the swarm after a period of
unavailability, the swarm does not automatically give a workload to the idle
node. This is a design decision.
If the swarm periodically shifted tasks to different nodes for the sake of
balance, the clients using those tasks would be disrupted. The goal is to
avoid disrupting running services for the sake of balance across the swarm.
When new tasks start, or when a node with running tasks becomes unavailable,
those tasks are given to less busy nodes. The goal is eventual balance, with
minimal disruption to the end user.

In Docker 1.13 and higher, you can use the `--force` or `-f` flag with the
`docker service update` command to force the service to redistribute its tasks
across the available worker nodes. This causes the service tasks to restart.
Client applications may be disrupted. If you have configured it, your service
uses a [rolling update](swarm-tutorial.md#rolling-update).

If you use an earlier version and you want to achieve an even balance of load
across workers and don't mind disrupting running tasks, you can force your
swarm to re-balance by temporarily scaling the service upward. Use
`docker service inspect --pretty <servicename>` to see the configured scale of
a service. When you use `docker service scale`, the nodes with the lowest
number of tasks are targeted to receive the new workloads. There may be
multiple under-loaded nodes in your swarm. You may need to scale the service
up by modest increments a few times to achieve the balance you want across all
the nodes.

When the load is balanced to your satisfaction, you can scale the service back
down to the original scale. You can use `docker service ps` to assess the
current balance of your service across nodes.

See also
[`docker service scale`](../reference/commandline/service_scale.md) and
[`docker service ps`](../reference/commandline/service_ps.md).
jzwlqx/denverdino.github.io
engine/swarm/admin_guide.md
Markdown
apache-2.0
15,972
import Ember from "ember";
const { Route } = Ember;
const set = Ember.set;

export default Route.extend({
  setupController() {
    this.controllerFor('mixinStack').set('model', []);

    let port = this.get('port');
    port.on('objectInspector:updateObject', this, this.updateObject);
    port.on('objectInspector:updateProperty', this, this.updateProperty);
    port.on('objectInspector:updateErrors', this, this.updateErrors);
    port.on('objectInspector:droppedObject', this, this.droppedObject);
    port.on('deprecation:count', this, this.setDeprecationCount);
    port.send('deprecation:getCount');
  },

  deactivate() {
    let port = this.get('port');
    port.off('objectInspector:updateObject', this, this.updateObject);
    port.off('objectInspector:updateProperty', this, this.updateProperty);
    port.off('objectInspector:updateErrors', this, this.updateErrors);
    port.off('objectInspector:droppedObject', this, this.droppedObject);
    port.off('deprecation:count', this, this.setDeprecationCount);
  },

  updateObject(options) {
    const details = options.details,
          name = options.name,
          property = options.property,
          objectId = options.objectId,
          errors = options.errors;

    Ember.NativeArray.apply(details);
    details.forEach(arrayize);

    let controller = this.get('controller');

    if (options.parentObject) {
      controller.pushMixinDetails(name, property, objectId, details);
    } else {
      controller.activateMixinDetails(name, objectId, details, errors);
    }

    this.send('expandInspector');
  },

  setDeprecationCount(message) {
    this.controller.set('deprecationCount', message.count);
  },

  updateProperty(options) {
    const detail = this.controllerFor('mixinDetails').get('model.mixins').objectAt(options.mixinIndex);
    const property = Ember.get(detail, 'properties').findProperty('name', options.property);
    set(property, 'value', options.value);
  },

  updateErrors(options) {
    const mixinDetails = this.controllerFor('mixinDetails');
    if (mixinDetails.get('model.objectId') === options.objectId) {
      mixinDetails.set('model.errors', options.errors);
    }
  },

  droppedObject(message) {
    let controller = this.get('controller');
    controller.droppedObject(message.objectId);
  },

  actions: {
    expandInspector() {
      this.set("controller.inspectorExpanded", true);
    },
    toggleInspector() {
      this.toggleProperty("controller.inspectorExpanded");
    },
    inspectObject(objectId) {
      if (objectId) {
        this.get('port').send('objectInspector:inspectById', { objectId: objectId });
      }
    },
    setIsDragging(isDragging) {
      this.set('controller.isDragging', isDragging);
    },
    refreshPage() {
      // If the adapter defined a `reloadTab` method, it means
      // they prefer to handle the reload themselves
      if (typeof this.get('adapter').reloadTab === 'function') {
        this.get('adapter').reloadTab();
      } else {
        // inject ember_debug as quickly as possible in chrome
        // so that promises created on dom ready are caught
        this.get('port').send('general:refresh');
        this.get('adapter').willReload();
      }
    }
  }
});

function arrayize(mixin) {
  Ember.NativeArray.apply(mixin.properties);
}
jryans/ember-inspector
app/routes/application.js
JavaScript
mit
3,292
# Copyright 2015 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'google/apis/apigateway_v1/service.rb'
require 'google/apis/apigateway_v1/classes.rb'
require 'google/apis/apigateway_v1/representations.rb'

module Google
  module Apis
    # API Gateway API
    #
    # @see https://cloud.google.com/api-gateway/docs
    module ApigatewayV1
      VERSION = 'V1'
      REVISION = '20201211'

      # View and manage your data across Google Cloud Platform services
      AUTH_CLOUD_PLATFORM = 'https://www.googleapis.com/auth/cloud-platform'
    end
  end
end
googleapis/google-api-ruby-client
google-api-client/generated/google/apis/apigateway_v1.rb
Ruby
apache-2.0
1,091
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using MissionPlanner.Comms;

namespace MissionPlanner.Antenna
{
    class DegreeTracker : ITrackerOutput
    {
        public SerialPort ComPort { get; set; }

        /// <summary>
        /// 0-360
        /// </summary>
        public double TrimPan { get; set; }

        /// <summary>
        /// -90 - 90
        /// </summary>
        public double TrimTilt { get; set; }

        public int PanStartRange { get; set; }
        public int TiltStartRange { get; set; }
        public int PanEndRange { get; set; }
        public int TiltEndRange { get; set; }

        public int PanPWMRange { get; set; }
        public int TiltPWMRange { get; set; }

        public int PanPWMCenter { get; set; }
        public int TiltPWMCenter { get; set; }

        public bool PanReverse
        {
            get { return _panreverse == 1; }
            set { _panreverse = value == true ? -1 : 1; }
        }

        public bool TiltReverse
        {
            get { return _tiltreverse == 1; }
            set { _tiltreverse = value == true ? -1 : 1; }
        }

        int _panreverse = 1;
        int _tiltreverse = 1;

        int currentpan = 1500;
        int currenttilt = 1500;

        public bool Init()
        {
            /*
            if ((PanStartRange - PanEndRange) == 0)
            {
                System.Windows.Forms.CustomMessageBox.Show("Invalid Pan Range", "Error");
                return false;
            }

            if ((TiltStartRange - TiltEndRange) == 0)
            {
                System.Windows.Forms.CustomMessageBox.Show("Invalid Tilt Range", "Error");
                return false;
            }
            */

            try
            {
                ComPort.Open();
            }
            catch (Exception ex)
            {
                CustomMessageBox.Show("Connect failed " + ex.Message, "Error");
                return false;
            }
            return true;
        }

        public bool Setup()
        {
            return true;
        }

        double wrap_180(double input)
        {
            if (input > 180)
                return input - 360;
            if (input < -180)
                return input + 360;
            return input;
        }

        double wrap_range(double input, double range)
        {
            if (input > range)
                return input - 360;
            if (input < -range)
                return input + 360;
            return input;
        }

        public bool Pan(double Angle)
        {
            currentpan = (int)(Angle * 10);
            return false;
        }

        public bool Tilt(double Angle)
        {
            currenttilt = (int)(Angle * 10);
            return false;
        }

        public bool PanAndTilt(double pan, double tilt)
        {
            Tilt(tilt);
            Pan(pan);

            string command = string.Format("!!!PAN:{0:0000},TLT:{1:0000}\n", currentpan, currenttilt);
            Console.Write(command);
            ComPort.Write(command);
            return false;
        }

        public bool Close()
        {
            try
            {
                ComPort.Close();
            }
            catch { }
            return true;
        }

        short Constrain(double input, double min, double max)
        {
            if (input < min)
                return (short)min;
            if (input > max)
                return (short)max;
            return (short)input;
        }
    }
}
jlnaudin/x-drone
MissionPlanner-master/Antenna/DegreeTracker.cs
C#
gpl-3.0
3,407
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>18 --> 19</title>
  <link href="./../../assets/style.css" rel="stylesheet">
</head>
<body>
  <h2>You have to be fast</h2>
  <a href="./52b2fda3bd58f1a43a67b48b0bb8d03f82aae6247c19cb3e1f6cb057ca1c8718.html">Teleport</a>
  <hr>
  <a href="./../../about.md">About</a> (Spoilers! )
  <script src="./../../assets/md5.js"></script>
  <script>
    window.currentLevel = 7;
  </script>
  <script src="./../../assets/script.js"></script>
</body>
</html>
simonmysun/praxis
TAIHAO2019/pub/SmallGame/AsFastAsYouCan2/76fdc108858381c1bd203e3bd2fb2126995af3857aa91fa0c688c54727428c26.html
HTML
mit
550
# Copyright (c) 2012 NetApp, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the NetApp-specific NFS driver module.""" from lxml import etree import mock import mox from mox import IgnoreArg from mox import IsA import os from cinder import context from cinder import exception from cinder.image import image_utils from cinder.openstack.common.gettextutils import _ from cinder.openstack.common import log as logging from cinder import test from cinder.volume import configuration as conf from cinder.volume.drivers.netapp import api from cinder.volume.drivers.netapp import nfs as netapp_nfs from cinder.volume.drivers.netapp import utils from oslo.config import cfg CONF = cfg.CONF LOG = logging.getLogger(__name__) def create_configuration(): configuration = mox.MockObject(conf.Configuration) configuration.append_config_values(mox.IgnoreArg()) configuration.nfs_mount_point_base = '/mnt/test' configuration.nfs_mount_options = None return configuration class FakeVolume(object): def __init__(self, size=0): self.size = size self.id = hash(self) self.name = None def __getitem__(self, key): return self.__dict__[key] def __setitem__(self, key, val): self.__dict__[key] = val class FakeSnapshot(object): def __init__(self, volume_size=0): self.volume_name = None self.name = None self.volume_id = None self.volume_size = volume_size self.user_id = None self.status = None def __getitem__(self, key): return self.__dict__[key] class FakeResponse(object): def 
__init__(self, status): """Initialize FakeResponse. :param status: Either 'failed' or 'passed' """ self.Status = status if status == 'failed': self.Reason = 'Sample error' class NetappDirectCmodeNfsDriverTestCase(test.TestCase): """Test direct NetApp C Mode driver.""" def setUp(self): super(NetappDirectCmodeNfsDriverTestCase, self).setUp() self._custom_setup() def test_create_snapshot(self): """Test snapshot can be created and deleted.""" mox = self.mox drv = self._driver mox.StubOutWithMock(drv, '_clone_volume') drv._clone_volume(IgnoreArg(), IgnoreArg(), IgnoreArg()) mox.ReplayAll() drv.create_snapshot(FakeSnapshot()) mox.VerifyAll() def test_create_volume_from_snapshot(self): """Tests volume creation from snapshot.""" drv = self._driver mox = self.mox volume = FakeVolume(1) snapshot = FakeSnapshot(1) location = '127.0.0.1:/nfs' expected_result = {'provider_location': location} mox.StubOutWithMock(drv, '_clone_volume') mox.StubOutWithMock(drv, '_get_volume_location') mox.StubOutWithMock(drv, 'local_path') mox.StubOutWithMock(drv, '_discover_file_till_timeout') mox.StubOutWithMock(drv, '_set_rw_permissions_for_all') drv._clone_volume(IgnoreArg(), IgnoreArg(), IgnoreArg()) drv._get_volume_location(IgnoreArg()).AndReturn(location) drv.local_path(IgnoreArg()).AndReturn('/mnt') drv._discover_file_till_timeout(IgnoreArg()).AndReturn(True) drv._set_rw_permissions_for_all(IgnoreArg()) mox.ReplayAll() loc = drv.create_volume_from_snapshot(volume, snapshot) self.assertEqual(loc, expected_result) mox.VerifyAll() def _prepare_delete_snapshot_mock(self, snapshot_exists): drv = self._driver mox = self.mox mox.StubOutWithMock(drv, '_get_provider_location') mox.StubOutWithMock(drv, '_volume_not_present') mox.StubOutWithMock(drv, '_post_prov_deprov_in_ssc') if snapshot_exists: mox.StubOutWithMock(drv, '_execute') mox.StubOutWithMock(drv, '_get_volume_path') drv._get_provider_location(IgnoreArg()) drv._get_provider_location(IgnoreArg()) drv._volume_not_present(IgnoreArg(), 
IgnoreArg())\ .AndReturn(not snapshot_exists) if snapshot_exists: drv._get_volume_path(IgnoreArg(), IgnoreArg()) drv._execute('rm', None, run_as_root=True) drv._post_prov_deprov_in_ssc(IgnoreArg()) mox.ReplayAll() return mox def test_delete_existing_snapshot(self): drv = self._driver mox = self._prepare_delete_snapshot_mock(True) drv.delete_snapshot(FakeSnapshot()) mox.VerifyAll() def test_delete_missing_snapshot(self): drv = self._driver mox = self._prepare_delete_snapshot_mock(False) drv.delete_snapshot(FakeSnapshot()) mox.VerifyAll() def _custom_setup(self): kwargs = {} kwargs['netapp_mode'] = 'proxy' kwargs['configuration'] = create_configuration() self._driver = netapp_nfs.NetAppDirectCmodeNfsDriver(**kwargs) def test_check_for_setup_error(self): mox = self.mox drv = self._driver required_flags = [ 'netapp_transport_type', 'netapp_login', 'netapp_password', 'netapp_server_hostname', 'netapp_server_port'] # set required flags for flag in required_flags: setattr(drv.configuration, flag, None) # check exception raises when flags are not set self.assertRaises(exception.CinderException, drv.check_for_setup_error) # set required flags for flag in required_flags: setattr(drv.configuration, flag, 'val') setattr(drv, 'ssc_enabled', False) mox.StubOutWithMock(netapp_nfs.NetAppDirectNfsDriver, '_check_flags') netapp_nfs.NetAppDirectNfsDriver._check_flags() mox.ReplayAll() drv.check_for_setup_error() mox.VerifyAll() # restore initial FLAGS for flag in required_flags: delattr(drv.configuration, flag) def test_do_setup(self): mox = self.mox drv = self._driver mox.StubOutWithMock(netapp_nfs.NetAppNFSDriver, 'do_setup') mox.StubOutWithMock(drv, '_get_client') mox.StubOutWithMock(drv, '_do_custom_setup') netapp_nfs.NetAppNFSDriver.do_setup(IgnoreArg()) drv._get_client() drv._do_custom_setup(IgnoreArg()) mox.ReplayAll() drv.do_setup(IsA(context.RequestContext)) mox.VerifyAll() def _prepare_clone_mock(self, status): drv = self._driver mox = self.mox volume = FakeVolume() 
setattr(volume, 'provider_location', '127.0.0.1:/nfs') mox.StubOutWithMock(drv, '_get_host_ip') mox.StubOutWithMock(drv, '_get_export_path') mox.StubOutWithMock(drv, '_get_if_info_by_ip') mox.StubOutWithMock(drv, '_get_vol_by_junc_vserver') mox.StubOutWithMock(drv, '_clone_file') mox.StubOutWithMock(drv, '_post_prov_deprov_in_ssc') drv._get_host_ip(IgnoreArg()).AndReturn('127.0.0.1') drv._get_export_path(IgnoreArg()).AndReturn('/nfs') drv._get_if_info_by_ip('127.0.0.1').AndReturn( self._prepare_info_by_ip_response()) drv._get_vol_by_junc_vserver('openstack', '/nfs').AndReturn('nfsvol') drv._clone_file('nfsvol', 'volume_name', 'clone_name', 'openstack') drv._post_prov_deprov_in_ssc(IgnoreArg()) return mox def _prepare_info_by_ip_response(self): res = """<attributes-list> <net-interface-info> <address>127.0.0.1</address> <administrative-status>up</administrative-status> <current-node>fas3170rre-cmode-01</current-node> <current-port>e1b-1165</current-port> <data-protocols> <data-protocol>nfs</data-protocol> </data-protocols> <dns-domain-name>none</dns-domain-name> <failover-group/> <failover-policy>disabled</failover-policy> <firewall-policy>data</firewall-policy> <home-node>fas3170rre-cmode-01</home-node> <home-port>e1b-1165</home-port> <interface-name>nfs_data1</interface-name> <is-auto-revert>false</is-auto-revert> <is-home>true</is-home> <netmask>255.255.255.0</netmask> <netmask-length>24</netmask-length> <operational-status>up</operational-status> <role>data</role> <routing-group-name>c10.63.165.0/24</routing-group-name> <use-failover-group>disabled</use-failover-group> <vserver>openstack</vserver> </net-interface-info></attributes-list>""" response_el = etree.XML(res) return api.NaElement(response_el).get_children() def test_clone_volume(self): drv = self._driver mox = self._prepare_clone_mock('pass') mox.ReplayAll() volume_name = 'volume_name' clone_name = 'clone_name' volume_id = volume_name + str(hash(volume_name)) share = 'ip:/share' 
drv._clone_volume(volume_name, clone_name, volume_id, share) mox.VerifyAll() def test_register_img_in_cache_noshare(self): volume = {'id': '1', 'name': 'testvol'} volume['provider_location'] = '10.61.170.1:/share/path' drv = self._driver mox = self.mox mox.StubOutWithMock(drv, '_do_clone_rel_img_cache') drv._do_clone_rel_img_cache('testvol', 'img-cache-12345', '10.61.170.1:/share/path', 'img-cache-12345') mox.ReplayAll() drv._register_image_in_cache(volume, '12345') mox.VerifyAll() def test_register_img_in_cache_with_share(self): volume = {'id': '1', 'name': 'testvol'} volume['provider_location'] = '10.61.170.1:/share/path' drv = self._driver mox = self.mox mox.StubOutWithMock(drv, '_do_clone_rel_img_cache') drv._do_clone_rel_img_cache('testvol', 'img-cache-12345', '10.61.170.1:/share/path', 'img-cache-12345') mox.ReplayAll() drv._register_image_in_cache(volume, '12345') mox.VerifyAll() def test_find_image_in_cache_no_shares(self): drv = self._driver drv._mounted_shares = [] result = drv._find_image_in_cache('image_id') if not result: pass else: self.fail('Return result is unexpected') def test_find_image_in_cache_shares(self): drv = self._driver mox = self.mox drv._mounted_shares = ['testshare'] mox.StubOutWithMock(drv, '_get_mount_point_for_share') mox.StubOutWithMock(os.path, 'exists') drv._get_mount_point_for_share('testshare').AndReturn('/mnt') os.path.exists('/mnt/img-cache-id').AndReturn(True) mox.ReplayAll() result = drv._find_image_in_cache('id') (share, file_name) = result[0] mox.VerifyAll() drv._mounted_shares.remove('testshare') if (share == 'testshare' and file_name == 'img-cache-id'): pass else: LOG.warn(_("Share %(share)s and file name %(file_name)s") % {'share': share, 'file_name': file_name}) self.fail('Return result is unexpected') def test_find_old_cache_files_notexists(self): drv = self._driver mox = self.mox cmd = ['find', '/mnt', '-maxdepth', '1', '-name', 'img-cache*', '-amin', '+720'] setattr(drv.configuration, 'expiry_thres_minutes', 720) 
mox.StubOutWithMock(drv, '_get_mount_point_for_share') mox.StubOutWithMock(drv, '_execute') drv._get_mount_point_for_share(IgnoreArg()).AndReturn('/mnt') drv._execute(*cmd, run_as_root=True).AndReturn((None, '')) mox.ReplayAll() res = drv._find_old_cache_files('share') mox.VerifyAll() if len(res) == 0: pass else: self.fail('No files expected but got return values.') def test_find_old_cache_files_exists(self): drv = self._driver mox = self.mox cmd = ['find', '/mnt', '-maxdepth', '1', '-name', 'img-cache*', '-amin', '+720'] setattr(drv.configuration, 'expiry_thres_minutes', '720') files = '/mnt/img-id1\n/mnt/img-id2\n' r_files = ['img-id1', 'img-id2'] mox.StubOutWithMock(drv, '_get_mount_point_for_share') mox.StubOutWithMock(drv, '_execute') mox.StubOutWithMock(drv, '_shortlist_del_eligible_files') drv._get_mount_point_for_share('share').AndReturn('/mnt') drv._execute(*cmd, run_as_root=True).AndReturn((files, None)) drv._shortlist_del_eligible_files( IgnoreArg(), r_files).AndReturn(r_files) mox.ReplayAll() res = drv._find_old_cache_files('share') mox.VerifyAll() if len(res) == len(r_files): for f in res: r_files.remove(f) else: self.fail('Returned files not same as expected.') def test_delete_files_till_bytes_free_success(self): drv = self._driver mox = self.mox files = [('img-cache-1', 230), ('img-cache-2', 380)] mox.StubOutWithMock(drv, '_get_mount_point_for_share') mox.StubOutWithMock(drv, '_delete_file') drv._get_mount_point_for_share(IgnoreArg()).AndReturn('/mnt') drv._delete_file('/mnt/img-cache-2').AndReturn(True) drv._delete_file('/mnt/img-cache-1').AndReturn(True) mox.ReplayAll() drv._delete_files_till_bytes_free(files, 'share', bytes_to_free=1024) mox.VerifyAll() def test_clean_image_cache_exec(self): drv = self._driver mox = self.mox drv.configuration.thres_avl_size_perc_start = 20 drv.configuration.thres_avl_size_perc_stop = 50 drv._mounted_shares = ['testshare'] mox.StubOutWithMock(drv, '_find_old_cache_files') mox.StubOutWithMock(drv, 
'_delete_files_till_bytes_free') mox.StubOutWithMock(drv, '_get_capacity_info') drv._get_capacity_info('testshare').AndReturn((100, 19, 81)) drv._find_old_cache_files('testshare').AndReturn(['f1', 'f2']) drv._delete_files_till_bytes_free( ['f1', 'f2'], 'testshare', bytes_to_free=31) mox.ReplayAll() drv._clean_image_cache() mox.VerifyAll() drv._mounted_shares.remove('testshare') if not drv.cleaning: pass else: self.fail('Clean image cache failed.') def test_clean_image_cache_noexec(self): drv = self._driver mox = self.mox drv.configuration.thres_avl_size_perc_start = 20 drv.configuration.thres_avl_size_perc_stop = 50 drv._mounted_shares = ['testshare'] mox.StubOutWithMock(drv, '_get_capacity_info') drv._get_capacity_info('testshare').AndReturn((100, 30, 70)) mox.ReplayAll() drv._clean_image_cache() mox.VerifyAll() drv._mounted_shares.remove('testshare') if not drv.cleaning: pass else: self.fail('Clean image cache failed.') def test_clone_image_fromcache(self): drv = self._driver mox = self.mox volume = {'name': 'vol', 'size': '20'} mox.StubOutWithMock(drv, '_find_image_in_cache') mox.StubOutWithMock(drv, '_do_clone_rel_img_cache') mox.StubOutWithMock(drv, '_post_clone_image') mox.StubOutWithMock(drv, '_is_share_vol_compatible') drv._find_image_in_cache(IgnoreArg()).AndReturn( [('share', 'file_name')]) drv._is_share_vol_compatible(IgnoreArg(), IgnoreArg()).AndReturn(True) drv._do_clone_rel_img_cache('file_name', 'vol', 'share', 'file_name') drv._post_clone_image(volume) mox.ReplayAll() drv.clone_image(volume, ('image_location', None), 'image_id', {}) mox.VerifyAll() def get_img_info(self, format): class img_info(object): def __init__(self, fmt): self.file_format = fmt return img_info(format) def test_clone_image_cloneableshare_nospace(self): drv = self._driver mox = self.mox volume = {'name': 'vol', 'size': '20'} mox.StubOutWithMock(drv, '_find_image_in_cache') mox.StubOutWithMock(drv, '_is_cloneable_share') mox.StubOutWithMock(drv, '_is_share_vol_compatible') 
drv._find_image_in_cache(IgnoreArg()).AndReturn([]) drv._is_cloneable_share(IgnoreArg()).AndReturn('127.0.0.1:/share') drv._is_share_vol_compatible(IgnoreArg(), IgnoreArg()).AndReturn(False) mox.ReplayAll() (prop, cloned) = drv. clone_image( volume, ('nfs://127.0.0.1:/share/img-id', None), 'image_id', {}) mox.VerifyAll() if not cloned and not prop['provider_location']: pass else: self.fail('Expected not cloned, got cloned.') def test_clone_image_cloneableshare_raw(self): drv = self._driver mox = self.mox volume = {'name': 'vol', 'size': '20'} mox.StubOutWithMock(drv, '_find_image_in_cache') mox.StubOutWithMock(drv, '_is_cloneable_share') mox.StubOutWithMock(drv, '_get_mount_point_for_share') mox.StubOutWithMock(image_utils, 'qemu_img_info') mox.StubOutWithMock(drv, '_clone_volume') mox.StubOutWithMock(drv, '_discover_file_till_timeout') mox.StubOutWithMock(drv, '_set_rw_permissions_for_all') mox.StubOutWithMock(drv, '_resize_image_file') mox.StubOutWithMock(drv, '_is_share_vol_compatible') drv._find_image_in_cache(IgnoreArg()).AndReturn([]) drv._is_cloneable_share(IgnoreArg()).AndReturn('127.0.0.1:/share') drv._is_share_vol_compatible(IgnoreArg(), IgnoreArg()).AndReturn(True) drv._get_mount_point_for_share(IgnoreArg()).AndReturn('/mnt') image_utils.qemu_img_info('/mnt/img-id').AndReturn( self.get_img_info('raw')) drv._clone_volume( 'img-id', 'vol', share='127.0.0.1:/share', volume_id=None) drv._get_mount_point_for_share(IgnoreArg()).AndReturn('/mnt') drv._discover_file_till_timeout(IgnoreArg()).AndReturn(True) drv._set_rw_permissions_for_all('/mnt/vol') drv._resize_image_file({'name': 'vol'}, IgnoreArg()) mox.ReplayAll() drv. 
clone_image( volume, ('nfs://127.0.0.1:/share/img-id', None), 'image_id', {}) mox.VerifyAll() def test_clone_image_cloneableshare_notraw(self): drv = self._driver mox = self.mox volume = {'name': 'vol', 'size': '20'} mox.StubOutWithMock(drv, '_find_image_in_cache') mox.StubOutWithMock(drv, '_is_cloneable_share') mox.StubOutWithMock(drv, '_get_mount_point_for_share') mox.StubOutWithMock(image_utils, 'qemu_img_info') mox.StubOutWithMock(drv, '_clone_volume') mox.StubOutWithMock(drv, '_discover_file_till_timeout') mox.StubOutWithMock(drv, '_set_rw_permissions_for_all') mox.StubOutWithMock(drv, '_resize_image_file') mox.StubOutWithMock(image_utils, 'convert_image') mox.StubOutWithMock(drv, '_register_image_in_cache') mox.StubOutWithMock(drv, '_is_share_vol_compatible') drv._find_image_in_cache(IgnoreArg()).AndReturn([]) drv._is_cloneable_share('nfs://127.0.0.1/share/img-id').AndReturn( '127.0.0.1:/share') drv._is_share_vol_compatible(IgnoreArg(), IgnoreArg()).AndReturn(True) drv._get_mount_point_for_share('127.0.0.1:/share').AndReturn('/mnt') image_utils.qemu_img_info('/mnt/img-id').AndReturn( self.get_img_info('notraw')) image_utils.convert_image(IgnoreArg(), IgnoreArg(), 'raw') image_utils.qemu_img_info('/mnt/vol').AndReturn( self.get_img_info('raw')) drv._register_image_in_cache(IgnoreArg(), IgnoreArg()) drv._get_mount_point_for_share('127.0.0.1:/share').AndReturn('/mnt') drv._discover_file_till_timeout(IgnoreArg()).AndReturn(True) drv._set_rw_permissions_for_all('/mnt/vol') drv._resize_image_file({'name': 'vol'}, IgnoreArg()) mox.ReplayAll() drv. 
clone_image( volume, ('nfs://127.0.0.1/share/img-id', None), 'image_id', {}) mox.VerifyAll() def test_clone_image_file_not_discovered(self): drv = self._driver mox = self.mox volume = {'name': 'vol', 'size': '20'} mox.StubOutWithMock(drv, '_find_image_in_cache') mox.StubOutWithMock(drv, '_is_cloneable_share') mox.StubOutWithMock(drv, '_get_mount_point_for_share') mox.StubOutWithMock(image_utils, 'qemu_img_info') mox.StubOutWithMock(drv, '_clone_volume') mox.StubOutWithMock(drv, '_discover_file_till_timeout') mox.StubOutWithMock(image_utils, 'convert_image') mox.StubOutWithMock(drv, '_register_image_in_cache') mox.StubOutWithMock(drv, '_is_share_vol_compatible') mox.StubOutWithMock(drv, 'local_path') mox.StubOutWithMock(os.path, 'exists') mox.StubOutWithMock(drv, '_delete_file') drv._find_image_in_cache(IgnoreArg()).AndReturn([]) drv._is_cloneable_share('nfs://127.0.0.1/share/img-id').AndReturn( '127.0.0.1:/share') drv._is_share_vol_compatible(IgnoreArg(), IgnoreArg()).AndReturn(True) drv._get_mount_point_for_share('127.0.0.1:/share').AndReturn('/mnt') image_utils.qemu_img_info('/mnt/img-id').AndReturn( self.get_img_info('notraw')) image_utils.convert_image(IgnoreArg(), IgnoreArg(), 'raw') image_utils.qemu_img_info('/mnt/vol').AndReturn( self.get_img_info('raw')) drv._register_image_in_cache(IgnoreArg(), IgnoreArg()) drv.local_path(IgnoreArg()).AndReturn('/mnt/vol') drv._discover_file_till_timeout(IgnoreArg()).AndReturn(False) drv.local_path(IgnoreArg()).AndReturn('/mnt/vol') os.path.exists('/mnt/vol').AndReturn(True) drv._delete_file('/mnt/vol') mox.ReplayAll() vol_dict, result = drv. 
clone_image( volume, ('nfs://127.0.0.1/share/img-id', None), 'image_id', {}) mox.VerifyAll() self.assertFalse(result) self.assertFalse(vol_dict['bootable']) self.assertIsNone(vol_dict['provider_location']) def test_clone_image_resizefails(self): drv = self._driver mox = self.mox volume = {'name': 'vol', 'size': '20'} mox.StubOutWithMock(drv, '_find_image_in_cache') mox.StubOutWithMock(drv, '_is_cloneable_share') mox.StubOutWithMock(drv, '_get_mount_point_for_share') mox.StubOutWithMock(image_utils, 'qemu_img_info') mox.StubOutWithMock(drv, '_clone_volume') mox.StubOutWithMock(drv, '_discover_file_till_timeout') mox.StubOutWithMock(drv, '_set_rw_permissions_for_all') mox.StubOutWithMock(drv, '_resize_image_file') mox.StubOutWithMock(image_utils, 'convert_image') mox.StubOutWithMock(drv, '_register_image_in_cache') mox.StubOutWithMock(drv, '_is_share_vol_compatible') mox.StubOutWithMock(drv, 'local_path') mox.StubOutWithMock(os.path, 'exists') mox.StubOutWithMock(drv, '_delete_file') drv._find_image_in_cache(IgnoreArg()).AndReturn([]) drv._is_cloneable_share('nfs://127.0.0.1/share/img-id').AndReturn( '127.0.0.1:/share') drv._is_share_vol_compatible(IgnoreArg(), IgnoreArg()).AndReturn(True) drv._get_mount_point_for_share('127.0.0.1:/share').AndReturn('/mnt') image_utils.qemu_img_info('/mnt/img-id').AndReturn( self.get_img_info('notraw')) image_utils.convert_image(IgnoreArg(), IgnoreArg(), 'raw') image_utils.qemu_img_info('/mnt/vol').AndReturn( self.get_img_info('raw')) drv._register_image_in_cache(IgnoreArg(), IgnoreArg()) drv.local_path(IgnoreArg()).AndReturn('/mnt/vol') drv._discover_file_till_timeout(IgnoreArg()).AndReturn(True) drv._set_rw_permissions_for_all('/mnt/vol') drv._resize_image_file( IgnoreArg(), IgnoreArg()).AndRaise(exception.InvalidResults()) drv.local_path(IgnoreArg()).AndReturn('/mnt/vol') os.path.exists('/mnt/vol').AndReturn(True) drv._delete_file('/mnt/vol') mox.ReplayAll() vol_dict, result = drv. 
clone_image( volume, ('nfs://127.0.0.1/share/img-id', None), 'image_id', {}) mox.VerifyAll() self.assertFalse(result) self.assertFalse(vol_dict['bootable']) self.assertIsNone(vol_dict['provider_location']) def test_is_cloneable_share_badformats(self): drv = self._driver strgs = ['10.61.666.22:/share/img', 'nfs://10.61.666.22:/share/img', 'nfs://10.61.666.22//share/img', 'nfs://com.netapp.com:/share/img', 'nfs://com.netapp.com//share/img', 'com.netapp.com://share/im\g', 'http://com.netapp.com://share/img', 'nfs://com.netapp.com:/share/img', 'nfs://com.netapp.com:8080//share/img', 'nfs://com.netapp.com//img', 'nfs://[ae::sr::ty::po]/img'] for strg in strgs: res = drv._is_cloneable_share(strg) if res: msg = 'Invalid format matched for url %s.' % strg self.fail(msg) def test_is_cloneable_share_goodformat1(self): drv = self._driver mox = self.mox strg = 'nfs://10.61.222.333/share/img' mox.StubOutWithMock(drv, '_check_share_in_use') drv._check_share_in_use(IgnoreArg(), IgnoreArg()).AndReturn('share') mox.ReplayAll() drv._is_cloneable_share(strg) mox.VerifyAll() def test_is_cloneable_share_goodformat2(self): drv = self._driver mox = self.mox strg = 'nfs://10.61.222.333:8080/share/img' mox.StubOutWithMock(drv, '_check_share_in_use') drv._check_share_in_use(IgnoreArg(), IgnoreArg()).AndReturn('share') mox.ReplayAll() drv._is_cloneable_share(strg) mox.VerifyAll() def test_is_cloneable_share_goodformat3(self): drv = self._driver mox = self.mox strg = 'nfs://com.netapp:8080/share/img' mox.StubOutWithMock(drv, '_check_share_in_use') drv._check_share_in_use(IgnoreArg(), IgnoreArg()).AndReturn('share') mox.ReplayAll() drv._is_cloneable_share(strg) mox.VerifyAll() def test_is_cloneable_share_goodformat4(self): drv = self._driver mox = self.mox strg = 'nfs://netapp.com/share/img' mox.StubOutWithMock(drv, '_check_share_in_use') drv._check_share_in_use(IgnoreArg(), IgnoreArg()).AndReturn('share') mox.ReplayAll() drv._is_cloneable_share(strg) mox.VerifyAll() def
test_is_cloneable_share_goodformat5(self): drv = self._driver mox = self.mox strg = 'nfs://netapp.com/img' mox.StubOutWithMock(drv, '_check_share_in_use') drv._check_share_in_use(IgnoreArg(), IgnoreArg()).AndReturn('share') mox.ReplayAll() drv._is_cloneable_share(strg) mox.VerifyAll() def test_check_share_in_use_no_conn(self): drv = self._driver share = drv._check_share_in_use(None, '/dir') if share: self.fail('Unexpected share detected.') def test_check_share_in_use_invalid_conn(self): drv = self._driver share = drv._check_share_in_use(':8989', '/dir') if share: self.fail('Unexpected share detected.') def test_check_share_in_use_incorrect_host(self): drv = self._driver mox = self.mox mox.StubOutWithMock(utils, 'resolve_hostname') utils.resolve_hostname(IgnoreArg()).AndRaise(Exception()) mox.ReplayAll() share = drv._check_share_in_use('incorrect:8989', '/dir') mox.VerifyAll() if share: self.fail('Unexpected share detected.') def test_check_share_in_use_success(self): drv = self._driver mox = self.mox drv._mounted_shares = ['127.0.0.1:/dir/share'] mox.StubOutWithMock(utils, 'resolve_hostname') mox.StubOutWithMock(drv, '_share_match_for_ip') utils.resolve_hostname(IgnoreArg()).AndReturn('10.22.33.44') drv._share_match_for_ip( '10.22.33.44', ['127.0.0.1:/dir/share']).AndReturn('share') mox.ReplayAll() share = drv._check_share_in_use('127.0.0.1:8989', '/dir/share') mox.VerifyAll() if not share: self.fail('Expected share not detected') def test_construct_image_url_loc(self): drv = self._driver img_loc = (None, [{'metadata': {'share_location': 'nfs://host/path', 'mount_point': '/opt/stack/data/glance', 'type': 'nfs'}, 'url': 'file:///opt/stack/data/glance/image-id'}]) location = drv._construct_image_nfs_url(img_loc) if location != "nfs://host/path/image-id": self.fail("Unexpected direct url.") def test_construct_image_url_direct(self): drv = self._driver img_loc = ("nfs://host/path/image-id", None) location = drv._construct_image_nfs_url(img_loc) if location != 
"nfs://host/path/image-id": self.fail("Unexpected direct url.") class NetappDirectCmodeNfsDriverOnlyTestCase(test.TestCase): """Test direct NetApp C Mode driver only and not inherit.""" def setUp(self): super(NetappDirectCmodeNfsDriverOnlyTestCase, self).setUp() self._custom_setup() def _custom_setup(self): kwargs = {} kwargs['netapp_mode'] = 'proxy' kwargs['configuration'] = create_configuration() self._driver = netapp_nfs.NetAppDirectCmodeNfsDriver(**kwargs) self._driver.ssc_enabled = True self._driver.configuration.netapp_copyoffload_tool_path = 'cof_path' @mock.patch.object(netapp_nfs, 'get_volume_extra_specs') def test_create_volume(self, mock_volume_extra_specs): drv = self._driver drv.ssc_enabled = False extra_specs = {} mock_volume_extra_specs.return_value = extra_specs fake_share = 'localhost:myshare' with mock.patch.object(drv, '_ensure_shares_mounted'): with mock.patch.object(drv, '_find_shares', return_value=['localhost:myshare']): with mock.patch.object(drv, '_do_create_volume'): volume_info = self._driver.create_volume(FakeVolume(1)) self.assertEqual(volume_info.get('provider_location'), fake_share) @mock.patch.object(netapp_nfs, 'get_volume_extra_specs') def test_create_volume_with_qos_policy(self, mock_volume_extra_specs): drv = self._driver drv.ssc_enabled = False extra_specs = {'netapp:qos_policy_group': 'qos_policy_1'} fake_volume = FakeVolume(1) fake_share = 'localhost:myshare' fake_qos_policy = 'qos_policy_1' mock_volume_extra_specs.return_value = extra_specs with mock.patch.object(drv, '_ensure_shares_mounted'): with mock.patch.object(drv, '_find_shares', return_value=['localhost:myshare']): with mock.patch.object(drv, '_do_create_volume'): with mock.patch.object(drv, '_set_qos_policy_group_on_volume' ) as mock_set_qos: volume_info = self._driver.create_volume(fake_volume) self.assertEqual(volume_info.get('provider_location'), 'localhost:myshare') mock_set_qos.assert_called_once_with(fake_volume, fake_share, fake_qos_policy) def 
test_copy_img_to_vol_copyoffload_success(self): drv = self._driver context = object() volume = {'id': 'vol_id', 'name': 'name'} image_service = object() image_id = 'image_id' drv._client = mock.Mock() drv._client.get_api_version = mock.Mock(return_value=(1, 20)) drv._try_copyoffload = mock.Mock() drv._get_provider_location = mock.Mock(return_value='share') drv._get_vol_for_share = mock.Mock(return_value='vol') drv._update_stale_vols = mock.Mock() drv.copy_image_to_volume(context, volume, image_service, image_id) drv._try_copyoffload.assert_called_once_with(context, volume, image_service, image_id) drv._update_stale_vols.assert_called_once_with('vol') def test_copy_img_to_vol_copyoffload_failure(self): drv = self._driver context = object() volume = {'id': 'vol_id', 'name': 'name'} image_service = object() image_id = 'image_id' drv._client = mock.Mock() drv._client.get_api_version = mock.Mock(return_value=(1, 20)) drv._try_copyoffload = mock.Mock(side_effect=Exception()) netapp_nfs.NetAppNFSDriver.copy_image_to_volume = mock.Mock() drv._get_provider_location = mock.Mock(return_value='share') drv._get_vol_for_share = mock.Mock(return_value='vol') drv._update_stale_vols = mock.Mock() drv.copy_image_to_volume(context, volume, image_service, image_id) drv._try_copyoffload.assert_called_once_with(context, volume, image_service, image_id) netapp_nfs.NetAppNFSDriver.copy_image_to_volume.\ assert_called_once_with(context, volume, image_service, image_id) drv._update_stale_vols.assert_called_once_with('vol') def test_copy_img_to_vol_copyoffload_nonexistent_binary_path(self): drv = self._driver context = object() volume = {'id': 'vol_id', 'name': 'name'} image_service = mock.Mock() image_service.get_location.return_value = (mock.Mock(), mock.Mock()) image_service.show.return_value = {'size': 0} image_id = 'image_id' drv._client = mock.Mock() drv._client.get_api_version = mock.Mock(return_value=(1, 20)) drv._find_image_in_cache = mock.Mock(return_value=[]) 
drv._construct_image_nfs_url = mock.Mock(return_value="") drv._check_get_nfs_path_segs = mock.Mock(return_value=("test:test", "dr")) drv._get_ip_verify_on_cluster = mock.Mock(return_value="192.1268.1.1") drv._get_mount_point_for_share = mock.Mock(return_value='mnt_point') drv._get_host_ip = mock.Mock() drv._get_provider_location = mock.Mock() drv._get_export_path = mock.Mock(return_value="dr") drv._check_share_can_hold_size = mock.Mock() # Raise error as if the copyoffload file can not be found drv._clone_file_dst_exists = mock.Mock(side_effect=OSError()) # Verify the original error is propagated self.assertRaises(OSError, drv._try_copyoffload, context, volume, image_service, image_id) def test_copyoffload_frm_cache_success(self): drv = self._driver context = object() volume = {'id': 'vol_id', 'name': 'name'} image_service = object() image_id = 'image_id' drv._find_image_in_cache = mock.Mock(return_value=[('share', 'img')]) drv._copy_from_cache = mock.Mock(return_value=True) drv._try_copyoffload(context, volume, image_service, image_id) drv._copy_from_cache.assert_called_once_with(volume, image_id, [('share', 'img')]) def test_copyoffload_frm_img_service_success(self): drv = self._driver context = object() volume = {'id': 'vol_id', 'name': 'name'} image_service = object() image_id = 'image_id' drv._client = mock.Mock() drv._client.get_api_version = mock.Mock(return_value=(1, 20)) drv._find_image_in_cache = mock.Mock(return_value=[]) drv._copy_from_img_service = mock.Mock() drv._try_copyoffload(context, volume, image_service, image_id) drv._copy_from_img_service.assert_called_once_with(context, volume, image_service, image_id) def test_cache_copyoffload_workflow_success(self): drv = self._driver volume = {'id': 'vol_id', 'name': 'name', 'size': 1} image_id = 'image_id' cache_result = [('ip1:/openstack', 'img-cache-imgid')] drv._get_ip_verify_on_cluster = mock.Mock(return_value='ip1') drv._get_host_ip = mock.Mock(return_value='ip2') drv._get_export_path = 
mock.Mock(return_value='/exp_path') drv._execute = mock.Mock() drv._register_image_in_cache = mock.Mock() drv._get_provider_location = mock.Mock(return_value='/share') drv._post_clone_image = mock.Mock() copied = drv._copy_from_cache(volume, image_id, cache_result) self.assertTrue(copied) drv._get_ip_verify_on_cluster.assert_any_call('ip1') drv._get_export_path.assert_called_with('vol_id') drv._execute.assert_called_once_with('cof_path', 'ip1', 'ip1', '/openstack/img-cache-imgid', '/exp_path/name', run_as_root=False, check_exit_code=0) drv._post_clone_image.assert_called_with(volume) drv._get_provider_location.assert_called_with('vol_id') @mock.patch.object(image_utils, 'qemu_img_info') def test_img_service_raw_copyoffload_workflow_success(self, mock_qemu_img_info): drv = self._driver volume = {'id': 'vol_id', 'name': 'name', 'size': 1} image_id = 'image_id' context = object() image_service = mock.Mock() image_service.get_location.return_value = ('nfs://ip1/openstack/img', None) image_service.show.return_value = {'size': 1, 'disk_format': 'raw'} drv._check_get_nfs_path_segs = mock.Mock(return_value= ('ip1', '/openstack')) drv._get_ip_verify_on_cluster = mock.Mock(return_value='ip1') drv._get_host_ip = mock.Mock(return_value='ip2') drv._get_export_path = mock.Mock(return_value='/exp_path') drv._get_provider_location = mock.Mock(return_value='share') drv._execute = mock.Mock() drv._get_mount_point_for_share = mock.Mock(return_value='mnt_point') drv._discover_file_till_timeout = mock.Mock(return_value=True) img_inf = mock.Mock() img_inf.file_format = 'raw' mock_qemu_img_info.return_value = img_inf drv._check_share_can_hold_size = mock.Mock() drv._move_nfs_file = mock.Mock(return_value=True) drv._delete_file = mock.Mock() drv._clone_file_dst_exists = mock.Mock() drv._post_clone_image = mock.Mock() drv._copy_from_img_service(context, volume, image_service, image_id) drv._get_ip_verify_on_cluster.assert_any_call('ip1') drv._get_export_path.assert_called_with('vol_id') 
drv._check_share_can_hold_size.assert_called_with('share', 1) assert drv._execute.call_count == 1 drv._post_clone_image.assert_called_with(volume) @mock.patch.object(image_utils, 'convert_image') @mock.patch.object(image_utils, 'qemu_img_info') @mock.patch('os.path.exists') def test_img_service_qcow2_copyoffload_workflow_success(self, mock_exists, mock_qemu_img_info, mock_cvrt_image): drv = self._driver volume = {'id': 'vol_id', 'name': 'name', 'size': 1} image_id = 'image_id' context = object() image_service = mock.Mock() image_service.get_location.return_value = ('nfs://ip1/openstack/img', None) image_service.show.return_value = {'size': 1, 'disk_format': 'qcow2'} drv._check_get_nfs_path_segs = mock.Mock(return_value= ('ip1', '/openstack')) drv._get_ip_verify_on_cluster = mock.Mock(return_value='ip1') drv._get_host_ip = mock.Mock(return_value='ip2') drv._get_export_path = mock.Mock(return_value='/exp_path') drv._get_provider_location = mock.Mock(return_value='share') drv._execute = mock.Mock() drv._get_mount_point_for_share = mock.Mock(return_value='mnt_point') img_inf = mock.Mock() img_inf.file_format = 'raw' mock_qemu_img_info.return_value = img_inf drv._check_share_can_hold_size = mock.Mock() drv._move_nfs_file = mock.Mock(return_value=True) drv._delete_file = mock.Mock() drv._clone_file_dst_exists = mock.Mock() drv._post_clone_image = mock.Mock() drv._copy_from_img_service(context, volume, image_service, image_id) drv._get_ip_verify_on_cluster.assert_any_call('ip1') drv._get_export_path.assert_called_with('vol_id') drv._check_share_can_hold_size.assert_called_with('share', 1) assert mock_cvrt_image.call_count == 1 assert drv._execute.call_count == 1 assert drv._delete_file.call_count == 2 assert drv._clone_file_dst_exists.call_count == 1 drv._post_clone_image.assert_called_with(volume) class NetappDirect7modeNfsDriverTestCase(NetappDirectCmodeNfsDriverTestCase): """Test direct NetApp 7 Mode driver.""" def _custom_setup(self): self._driver =
netapp_nfs.NetAppDirect7modeNfsDriver( configuration=create_configuration()) def _prepare_delete_snapshot_mock(self, snapshot_exists): drv = self._driver mox = self.mox mox.StubOutWithMock(drv, '_get_provider_location') mox.StubOutWithMock(drv, '_volume_not_present') if snapshot_exists: mox.StubOutWithMock(drv, '_execute') mox.StubOutWithMock(drv, '_get_volume_path') drv._get_provider_location(IgnoreArg()) drv._volume_not_present(IgnoreArg(), IgnoreArg())\ .AndReturn(not snapshot_exists) if snapshot_exists: drv._get_volume_path(IgnoreArg(), IgnoreArg()) drv._execute('rm', None, run_as_root=True) mox.ReplayAll() return mox def test_check_for_setup_error_version(self): drv = self._driver drv._client = api.NaServer("127.0.0.1") # check exception raises when version not found self.assertRaises(exception.VolumeBackendAPIException, drv.check_for_setup_error) drv._client.set_api_version(1, 8) # check exception raises when not supported version self.assertRaises(exception.VolumeBackendAPIException, drv.check_for_setup_error) def test_check_for_setup_error(self): mox = self.mox drv = self._driver drv._client = api.NaServer("127.0.0.1") drv._client.set_api_version(1, 9) required_flags = [ 'netapp_transport_type', 'netapp_login', 'netapp_password', 'netapp_server_hostname', 'netapp_server_port'] # set required flags for flag in required_flags: setattr(drv.configuration, flag, None) # check exception raises when flags are not set self.assertRaises(exception.CinderException, drv.check_for_setup_error) # set required flags for flag in required_flags: setattr(drv.configuration, flag, 'val') mox.ReplayAll() drv.check_for_setup_error() mox.VerifyAll() # restore initial FLAGS for flag in required_flags: delattr(drv.configuration, flag) def test_do_setup(self): mox = self.mox drv = self._driver mox.StubOutWithMock(netapp_nfs.NetAppNFSDriver, 'do_setup') mox.StubOutWithMock(drv, '_get_client') mox.StubOutWithMock(drv, '_do_custom_setup') 
netapp_nfs.NetAppNFSDriver.do_setup(IgnoreArg()) drv._get_client() drv._do_custom_setup(IgnoreArg()) mox.ReplayAll() drv.do_setup(IsA(context.RequestContext)) mox.VerifyAll() def _prepare_clone_mock(self, status): drv = self._driver mox = self.mox volume = FakeVolume() setattr(volume, 'provider_location', '127.0.0.1:/nfs') mox.StubOutWithMock(drv, '_get_export_ip_path') mox.StubOutWithMock(drv, '_get_actual_path_for_export') mox.StubOutWithMock(drv, '_start_clone') mox.StubOutWithMock(drv, '_wait_for_clone_finish') if status == 'fail': mox.StubOutWithMock(drv, '_clear_clone') drv._get_export_ip_path( IgnoreArg(), IgnoreArg()).AndReturn(('127.0.0.1', '/nfs')) drv._get_actual_path_for_export(IgnoreArg()).AndReturn('/vol/vol1/nfs') drv._start_clone(IgnoreArg(), IgnoreArg()).AndReturn(('1', '2')) if status == 'fail': drv._wait_for_clone_finish('1', '2').AndRaise( api.NaApiError('error', 'error')) drv._clear_clone('1') else: drv._wait_for_clone_finish('1', '2') return mox def test_clone_volume_clear(self): drv = self._driver mox = self._prepare_clone_mock('fail') mox.ReplayAll() volume_name = 'volume_name' clone_name = 'clone_name' volume_id = volume_name + str(hash(volume_name)) try: drv._clone_volume(volume_name, clone_name, volume_id) except Exception as e: if isinstance(e, api.NaApiError): pass else: raise mox.VerifyAll()
github-borat/cinder
cinder/tests/test_netapp_nfs.py
Python
apache-2.0
47,799
[ 30522, 1001, 9385, 1006, 1039, 1007, 2262, 5658, 29098, 1010, 4297, 1012, 1001, 2035, 2916, 9235, 1012, 1001, 1001, 7000, 2104, 1996, 15895, 6105, 1010, 2544, 1016, 1012, 1014, 1006, 1996, 1000, 6105, 1000, 1007, 1025, 2017, 2089, 1001, 2...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>deriving: Not compatible 👼</title> <link rel="shortcut icon" type="image/png" href="../../../../../favicon.png" /> <link href="../../../../../bootstrap.min.css" rel="stylesheet"> <link href="../../../../../bootstrap-custom.css" rel="stylesheet"> <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet"> <script src="../../../../../moment.min.js"></script> <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries --> <!-- WARNING: Respond.js doesn't work if you view the page via file:// --> <!--[if lt IE 9]> <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script> <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script> <![endif]--> </head> <body> <div class="container"> <div class="navbar navbar-default" role="navigation"> <div class="container-fluid"> <div class="navbar-header"> <a class="navbar-brand" href="../../../../.."><i class="fa fa-lg fa-flag-checkered"></i> Coq bench</a> </div> <div id="navbar" class="collapse navbar-collapse"> <ul class="nav navbar-nav"> <li><a href="../..">clean / released</a></li> <li class="active"><a href="">8.5.0~camlp4 / deriving - 0.1.0</a></li> </ul> </div> </div> </div> <div class="article"> <div class="row"> <div class="col-md-12"> <a href="../..">« Up</a> <h1> deriving <small> 0.1.0 <span class="label label-info">Not compatible 👼</span> </small> </h1> <p>📅 <em><script>document.write(moment("2022-03-05 14:29:34 +0000", "YYYY-MM-DD HH:mm:ss Z").fromNow());</script> (2022-03-05 14:29:34 UTC)</em><p> <h2>Context</h2> <pre># Packages matching: installed # Name # Installed # Synopsis base-bigarray base base-num base Num library distributed with the OCaml compiler base-ocamlbuild base OCamlbuild binary and libraries distributed with the OCaml compiler base-threads base base-unix base camlp4 
4.02+7 Camlp4 is a system for writing extensible parsers for programming languages conf-findutils 1 Virtual package relying on findutils conf-which 1 Virtual package relying on which coq 8.5.0~camlp4 Formal proof management system num 0 The Num library for arbitrary-precision integer and rational arithmetic ocaml 4.02.3 The OCaml compiler (virtual package) ocaml-base-compiler 4.02.3 Official 4.02.3 release ocaml-config 1 OCaml Switch Configuration ocamlbuild 0 Build system distributed with the OCaml compiler since OCaml 3.10.0 # opam file: opam-version: &quot;2.0&quot; name: &quot;coq-deriving&quot; version: &quot;0.1.0&quot; maintainer: &quot;Arthur Azevedo de Amorim &lt;arthur.aa@gmail.com&gt;&quot; homepage: &quot;https://github.com/arthuraa/deriving&quot; bug-reports: &quot;https://github.com/arthuraa/deriving/issues&quot; dev-repo: &quot;git+https://github.com/arthuraa/deriving.git&quot; license: &quot;MIT&quot; build: [ make &quot;-j&quot; &quot;%{jobs}%&quot; &quot;test&quot; {with-test} ] install: [ make &quot;install&quot; ] depends: [ &quot;coq&quot; { (&gt;= &quot;8.11&quot; &amp; &lt; &quot;8.16~&quot;) | (= &quot;dev&quot;) } &quot;coq-mathcomp-ssreflect&quot; {&gt;= &quot;1.11&quot; | (= &quot;dev&quot;)} ] tags: [ &quot;keyword:generic programming&quot; &quot;category:Computer Science/Data Types and Data Structures&quot; &quot;logpath:deriving&quot; ] authors: [ &quot;Arthur Azevedo de Amorim&quot; ] synopsis: &quot;Generic instances of MathComp classes&quot; description: &quot;&quot;&quot; Deriving provides generic instances of MathComp classes for inductive data types. It includes native support for eqType, choiceType, countType and finType instances, and it allows users to define their own instances for other classes. 
&quot;&quot;&quot; url { src: &quot;https://github.com/arthuraa/deriving/archive/v0.1.0.tar.gz&quot; checksum: &quot;sha512=872bfdc6d919492e30fef4bd06de0f781ff6783d75f8a97394f2b62e8dff96c7c5fd58eb037635d47eb43eca3593b851764c9bfea2d96fc2a89483f784b5a040&quot; } </pre> <h2>Lint</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>true</code></dd> <dt>Return code</dt> <dd>0</dd> </dl> <h2>Dry install 🏜️</h2> <p>Dry install with the current Coq version:</p> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam install -y --show-action coq-deriving.0.1.0 coq.8.5.0~camlp4</code></dd> <dt>Return code</dt> <dd>5120</dd> <dt>Output</dt> <dd><pre>[NOTE] Package coq is already installed (current version is 8.5.0~camlp4). The following dependencies couldn&#39;t be met: - coq-deriving -&gt; coq &gt;= dev -&gt; ocaml &gt;= 4.05.0 base of this switch (use `--unlock-base&#39; to force) No solution found, exiting </pre></dd> </dl> <p>Dry install without Coq/switch base, to test if the problem was incompatibility with the current Coq/OCaml version:</p> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam remove -y coq; opam install -y --show-action --unlock-base coq-deriving.0.1.0</code></dd> <dt>Return code</dt> <dd>0</dd> </dl> <h2>Install dependencies</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>true</code></dd> <dt>Return code</dt> <dd>0</dd> <dt>Duration</dt> <dd>0 s</dd> </dl> <h2>Install 🚀</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>true</code></dd> <dt>Return code</dt> <dd>0</dd> <dt>Duration</dt> <dd>0 s</dd> </dl> <h2>Installation size</h2> <p>No files were installed.</p> <h2>Uninstall 🧹</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>true</code></dd> <dt>Return code</dt> <dd>0</dd> <dt>Missing removes</dt> <dd> none </dd> <dt>Wrong removes</dt> <dd> none </dd> </dl> </div> </div> </div> <hr/> <div class="footer"> <p class="text-center"> Sources are on <a href="https://github.com/coq-bench">GitHub</a> © Guillaume 
Claret 🐣 </p> </div> </div> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <script src="../../../../../bootstrap.min.js"></script> </body> </html>
coq-bench/coq-bench.github.io
clean/Linux-x86_64-4.02.3-2.0.6/released/8.5.0~camlp4/deriving/0.1.0.html
HTML
mit
7,470
[ 30522, 1026, 999, 9986, 13874, 16129, 1028, 1026, 16129, 11374, 1027, 1000, 4372, 1000, 1028, 1026, 2132, 1028, 1026, 18804, 25869, 13462, 1027, 1000, 21183, 2546, 1011, 1022, 1000, 1028, 1026, 18804, 2171, 1027, 1000, 3193, 6442, 1000, 418...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
package jp.co.thcomp.bluetoothhelper; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.Arrays; public class BleReceiveDataProvider extends BleDataProvider { public static final int AddPacketResultSuccess = -1; public static final int AddPacketResultAlreadyFinished = -2; public static final int AddPacketResultNoData = -3; private boolean mReceiveDataFinish = false; private byte[][] mReceiveDataArray; private int mLeftPacketCount = 0; private int mDataSize; private Short mReservedMessageId = null; private ArrayList<byte[]> mReservedPacketList = new ArrayList<>(); /** * @param packetData * @return AddPacketResultAlreadyFinished: packet added to an already-completed message (add failed) * AddPacketResultSuccess: add succeeded * 0-ShortMax: a different message is currently being assembled (add failed) */ public int addPacket(byte[] packetData) { int ret = AddPacketResultSuccess; if (packetData != null && packetData.length > 0) { if (!mReceiveDataFinish) { ByteBuffer tempBufferForShort = ByteBuffer.allocate(Short.SIZE / Byte.SIZE); ByteBuffer tempBufferForInt = ByteBuffer.allocate(Integer.SIZE / Byte.SIZE); // bytes 0-1: message ID (capped at Short.MAX_VALUE; reflects the send order from the peripheral, but the value wraps around) tempBufferForShort.position(0); tempBufferForShort.put(packetData, 0, LengthMessageID); short messageId = tempBufferForShort.getShort(0); // bytes 2-5: packet size; set to a value no larger than the MTU size tempBufferForInt.position(0); tempBufferForInt.put(packetData, IndexPacketSize, LengthPacketSize); int packetSize = tempBufferForInt.getInt(0); // bytes 6-9: packet position; 0 means a configuration packet, 1 or more means a data packet tempBufferForInt.position(0); tempBufferForInt.put(packetData, IndexPacketPosition, LengthPacketPosition); int packetPosition = tempBufferForInt.getInt(0); if (packetPosition == 0) { if (mMessageId == null) { boolean matchMessageId = true; if (mReservedMessageId != null) { // a MessageId has already been reserved, so configuration packets for any other message are not accepted if (messageId != mReservedMessageId) { matchMessageId = false; } } if (matchMessageId) { mMessageId = messageId; // configuration packet // 
bytes 10-13: packet count, including the configuration packet (capped at Integer.MAX_VALUE) tempBufferForInt.position(0); tempBufferForInt.put(packetData, IndexPacketCount, LengthPacketCount); mLeftPacketCount = tempBufferForInt.getInt(0) - 1; mReceiveDataArray = new byte[mLeftPacketCount][]; // bytes 14-17: data size (capped at Integer.MAX_VALUE) tempBufferForInt.position(0); tempBufferForInt.put(packetData, IndexDataSize, LengthDataSize); mDataSize = tempBufferForInt.getInt(0); if (mReservedMessageId != null && mReservedPacketList.size() > 0) { // replay the packets held for the reserved message for (byte[] reservedPacketData : mReservedPacketList) { addPacket(reservedPacketData); } } mReservedMessageId = null; mReservedPacketList.clear(); } } else { // a packet from a different message is being added, so return the newer message ID ret = messageId; } } else { if (mMessageId == null) { if (mReservedMessageId == null) { mReservedMessageId = messageId; } if (mReservedMessageId == messageId) { // no configuration packet has arrived yet, so hold this packet in the reserved list mReservedPacketList.add(packetData); } } else if (mMessageId == messageId) { // data packet if (mReceiveDataArray != null) { mLeftPacketCount--; // byte 10: flag for whether a next packet exists; 0: no next packet, 1: next packet exists tempBufferForInt.position(0); tempBufferForInt.put(packetData, IndexExistNextPacket, LengthExistNextPacket); int existNextPacket = tempBufferForInt.getInt(0); // after this header, the packet carries data of the size recorded in bytes 0-3 minus 9 bytes mReceiveDataArray[packetPosition - 1] = Arrays.copyOfRange(packetData, IndexDataStartPosition, packetSize); if ((mLeftPacketCount == 0) || (existNextPacket == NotExistNextPacket)) { // temporarily reset the remaining packet count to 0, then set it to the correct value based on what has actually been received mLeftPacketCount = 0; for (int i = 0, size = mReceiveDataArray.length; i < size; i++) { if (mReceiveDataArray[i] == null) { mLeftPacketCount++; } } if (mLeftPacketCount == 0) { mReceiveDataFinish = true; } } } } else { ret = messageId; } } } else { ret = AddPacketResultAlreadyFinished; } } else { ret = AddPacketResultNoData; } return ret; } public boolean isCompleted() { return mReceiveDataFinish; } @Override public byte[] getData() { byte[] ret = null; if (mReceiveDataFinish) { if (mData == null) { 
ByteArrayOutputStream stream = new ByteArrayOutputStream(); try { for (int i = 0, size = mReceiveDataArray.length; i < size; i++) { stream.write(mReceiveDataArray[i]); } mData = stream.toByteArray(); } catch (IOException e) { e.printStackTrace(); } } ret = super.getData(); } return ret; } @Override public Short getMessageId() { if (mReservedMessageId != null && mMessageId == null) { return mReservedMessageId; } else { return super.getMessageId(); } } }
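The packet header described in the comments above (bytes 0-1 message ID, bytes 2-5 packet size, bytes 6-9 packet position, with a configuration packet carrying the packet count and data size at bytes 10-17) can be sketched as a small Python round-trip. The big-endian byte order matches the Java `ByteBuffer` default; the helper names here are hypothetical, and the per-data-packet next-packet flag is omitted for brevity.

```python
import struct

HEADER = ">hii"                       # message ID (2), packet size (4), packet position (4)
HEADER_LEN = struct.calcsize(HEADER)  # 10 bytes

def make_config_packet(message_id, packet_count, data_size):
    # Position 0 marks a configuration packet; bytes 10-17 carry count and total size.
    body = struct.pack(">ii", packet_count, data_size)
    return struct.pack(HEADER, message_id, HEADER_LEN + len(body), 0) + body

def make_data_packet(message_id, position, payload):
    # Positions 1..N carry payload chunks; the header records the full packet size.
    return struct.pack(HEADER, message_id, HEADER_LEN + len(payload), position) + payload

def reassemble(packets):
    # Collect chunks by position, then join them once the configuration packet is known.
    chunks, expected = {}, None
    for pkt in packets:
        _message_id, packet_size, position = struct.unpack_from(HEADER, pkt)
        if position == 0:
            count, data_size = struct.unpack_from(">ii", pkt, HEADER_LEN)
            expected = (count - 1, data_size)   # data packets = total count minus config packet
        else:
            chunks[position] = pkt[HEADER_LEN:packet_size]
    assert expected is not None and len(chunks) == expected[0]
    data = b"".join(chunks[i] for i in sorted(chunks))
    assert len(data) == expected[1]
    return data
```

As in the Java class, the reassembly is order-tolerant: data packets received before the configuration packet are simply held until the count and size are known.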
thcomp/Android_BluetoothHelper
app/src/main/java/jp/co/thcomp/bluetoothhelper/BleReceiveDataProvider.java
Java
apache-2.0
7,875
[ 30522, 7427, 16545, 1012, 2522, 1012, 16215, 9006, 30524, 1012, 21183, 4014, 1012, 9140, 9863, 1025, 12324, 9262, 1012, 21183, 4014, 1012, 27448, 1025, 2270, 2465, 1038, 3917, 26005, 3512, 2850, 2696, 21572, 17258, 2121, 8908, 23919, 6790, ...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
namespace CAAssistant.Models { public class ClientFileViewModel { public ClientFileViewModel() { } public ClientFileViewModel(ClientFile clientFile) { Id = clientFile.Id; FileNumber = clientFile.FileNumber; ClientName = clientFile.ClientName; ClientContactPerson = clientFile.ClientContactPerson; AssociateReponsible = clientFile.AssociateReponsible; CaSign = clientFile.CaSign; DscExpiryDate = clientFile.DscExpiryDate; FileStatus = clientFile.FileStatus; } public string Id { get; set; } public int FileNumber { get; set; } public string ClientName { get; set; } public string ClientContactPerson { get; set; } public string AssociateReponsible { get; set; } public string CaSign { get; set; } public string DscExpiryDate { get; set; } public string FileStatus { get; set; } public string UserName { get; set; } public FileStatusModification InitialFileStatus { get; set; } } }
vishipayyallore/CAAssitant
CAAssistant/Models/ClientFileViewModel.cs
C#
mit
1,131
[ 30522, 3415, 15327, 6187, 12054, 23137, 2102, 1012, 4275, 1063, 2270, 2465, 7396, 8873, 20414, 2666, 2860, 5302, 9247, 1063, 2270, 7396, 8873, 20414, 2666, 2860, 5302, 9247, 1006, 1007, 1063, 1065, 2270, 7396, 8873, 20414, 2666, 2860, 5302,...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
/* * Copyright 2014 Soichiro Kashima * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.marshalchen.common.demoofui.observablescrollview; import android.os.Bundle; import android.support.v4.view.ViewCompat; import android.support.v7.app.ActionBarActivity; import android.support.v7.widget.Toolbar; import android.view.View; import com.github.ksoichiro.android.observablescrollview.ObservableScrollView; import com.github.ksoichiro.android.observablescrollview.ObservableScrollViewCallbacks; import com.github.ksoichiro.android.observablescrollview.ObservableWebView; import com.github.ksoichiro.android.observablescrollview.ScrollState; import com.marshalchen.common.demoofui.R; import com.nineoldandroids.view.ViewHelper; import com.nineoldandroids.view.ViewPropertyAnimator; public class ToolbarControlWebViewActivity extends ActionBarActivity { private View mHeaderView; private View mToolbarView; private ObservableScrollView mScrollView; private boolean mFirstScroll; private boolean mDragging; private int mBaseTranslationY; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.observable_scroll_view_activity_toolbarcontrolwebview); setSupportActionBar((Toolbar) findViewById(R.id.toolbar)); mHeaderView = findViewById(R.id.header); ViewCompat.setElevation(mHeaderView, getResources().getDimension(R.dimen.toolbar_elevation)); mToolbarView = findViewById(R.id.toolbar); mScrollView = (ObservableScrollView) 
findViewById(R.id.scroll); mScrollView.setScrollViewCallbacks(mScrollViewScrollCallbacks); ObservableWebView mWebView = (ObservableWebView) findViewById(R.id.web); mWebView.setScrollViewCallbacks(mWebViewScrollCallbacks); mWebView.loadUrl("file:///android_asset/lipsum.html"); } private ObservableScrollViewCallbacks mScrollViewScrollCallbacks = new ObservableScrollViewCallbacks() { @Override public void onScrollChanged(int scrollY, boolean firstScroll, boolean dragging) { if (mDragging) { int toolbarHeight = mToolbarView.getHeight(); if (mFirstScroll) { mFirstScroll = false; float currentHeaderTranslationY = ViewHelper.getTranslationY(mHeaderView); if (-toolbarHeight < currentHeaderTranslationY && toolbarHeight < scrollY) { mBaseTranslationY = scrollY; } } int headerTranslationY = Math.min(0, Math.max(-toolbarHeight, -(scrollY - mBaseTranslationY))); ViewPropertyAnimator.animate(mHeaderView).cancel(); ViewHelper.setTranslationY(mHeaderView, headerTranslationY); } } @Override public void onDownMotionEvent() { } @Override public void onUpOrCancelMotionEvent(ScrollState scrollState) { mDragging = false; mBaseTranslationY = 0; float headerTranslationY = ViewHelper.getTranslationY(mHeaderView); int toolbarHeight = mToolbarView.getHeight(); if (scrollState == ScrollState.UP) { if (toolbarHeight < mScrollView.getCurrentScrollY()) { if (headerTranslationY != -toolbarHeight) { ViewPropertyAnimator.animate(mHeaderView).cancel(); ViewPropertyAnimator.animate(mHeaderView).translationY(-toolbarHeight).setDuration(200).start(); } } } else if (scrollState == ScrollState.DOWN) { if (toolbarHeight < mScrollView.getCurrentScrollY()) { if (headerTranslationY != 0) { ViewPropertyAnimator.animate(mHeaderView).cancel(); ViewPropertyAnimator.animate(mHeaderView).translationY(0).setDuration(200).start(); } } } } }; private ObservableScrollViewCallbacks mWebViewScrollCallbacks = new ObservableScrollViewCallbacks() { @Override public void onScrollChanged(int scrollY, boolean firstScroll, 
boolean dragging) { } @Override public void onDownMotionEvent() { // Workaround: WebView inside a ScrollView absorbs down motion events, so observing // down motion event from the WebView is required. mFirstScroll = mDragging = true; } @Override public void onUpOrCancelMotionEvent(ScrollState scrollState) { } }; }
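The clamping arithmetic in `onScrollChanged` above — `Math.min(0, Math.max(-toolbarHeight, -(scrollY - mBaseTranslationY)))` — keeps the header offset between fully hidden and fully shown while dragging. A minimal sketch of the same formula (the function name is ours, not the library's):

```python
def header_translation_y(scroll_y: int, base_y: int, toolbar_height: int) -> int:
    """Clamp the header's Y offset between -toolbar_height (hidden) and 0 (shown),
    mirroring Math.min(0, Math.max(-toolbarHeight, -(scrollY - baseTranslationY)))."""
    return min(0, max(-toolbar_height, -(scroll_y - base_y)))
```

Scrolling down past the toolbar height pins the header at `-toolbar_height`; scrolling back up (or any negative delta from the drag's base offset) returns it to 0.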
cymcsg/UltimateAndroid
deprecated/UltimateAndroidGradle/demoofui/src/main/java/com/marshalchen/common/demoofui/observablescrollview/ToolbarControlWebViewActivity.java
Java
apache-2.0
5,205
[ 30522, 1013, 1008, 1008, 9385, 2297, 2061, 11319, 3217, 10556, 24772, 1008, 1008, 7000, 2104, 1996, 15895, 6105, 1010, 2544, 1016, 1012, 1014, 1006, 1996, 1000, 6105, 1000, 1007, 1025, 1008, 2017, 2089, 2025, 2224, 2023, 5371, 3272, 1999, ...
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
# -*- coding: utf-8 -*- # # Copyright (C) 2013-2016 DNAnexus, Inc. # # This file is part of dx-toolkit (DNAnexus platform client libraries). # # Licensed under the Apache License, Version 2.0 (the "License"); you may not # use this file except in compliance with the License. You may obtain a copy # of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. ''' This submodule contains helper functions for parsing and printing the contents of describe hashes for various DNAnexus entities (projects, containers, dataobjects, apps, and jobs). ''' from __future__ import print_function, unicode_literals, division, absolute_import import datetime, time, json, math, sys, copy import locale import subprocess from collections import defaultdict import dxpy from .printing import (RED, GREEN, BLUE, YELLOW, WHITE, BOLD, UNDERLINE, ENDC, DELIMITER, get_delimiter, fill) from ..compat import basestring, USING_PYTHON2 def JOB_STATES(state): if state == 'failed': return BOLD() + RED() + state + ENDC() elif state == 'done': return BOLD() + GREEN() + state + ENDC() elif state in ['running', 'in_progress']: return GREEN() + state + ENDC() elif state == 'partially_failed': return RED() + state + ENDC() else: return YELLOW() + state + ENDC() def DATA_STATES(state): if state == 'open': return YELLOW() + state + ENDC() elif state == 'closing': return YELLOW() + state + ENDC() elif state == 'closed': return GREEN() + state + ENDC() else: return state SIZE_LEVEL = ['bytes', 'KB', 'MB', 'GB', 'TB'] def get_size_str(size): """ Formats a byte size as a string. The returned string is no more than 9 characters long. 
""" if size is None: return "0 " + SIZE_LEVEL[0] if size == 0: magnitude = 0 level = 0 else: magnitude = math.floor(math.log(size, 10)) level = int(min(math.floor(magnitude // 3), 4)) return ('%d' if level == 0 else '%.2f') % (float(size) / 2**(level*10)) + ' ' + SIZE_LEVEL[level] def parse_typespec(thing): if isinstance(thing, basestring): return thing elif '$and' in thing: return '(' + ' AND '.join(map(parse_typespec, thing['$and'])) + ')' elif '$or' in thing: return '(' + ' OR '.join(map(parse_typespec, thing['$or'])) + ')' else: return 'Type spec could not be parsed' def get_io_desc(parameter, include_class=True, show_opt=True, app_help_version=False): # For interactive help, format array:CLASS inputs as: # -iNAME=CLASS [-iNAME=... [...]] # If input is required (needs >=1 inputs) # [-iNAME=CLASS [...]] # If input is optional (needs >=0 inputs if app_help_version and parameter["class"].startswith("array"): scalar_parameter = parameter.copy() # Munge the parameter dict (strip off "array:" to turn it into a # scalar) and recurse scalar_parameter["class"] = scalar_parameter["class"][6:] if "default" in parameter or parameter.get("optional"): return "[" + get_io_desc(scalar_parameter, include_class=include_class, show_opt=False, app_help_version=app_help_version) + " [-i%s=... [...]]]" % (parameter["name"],) else: return get_io_desc(scalar_parameter, include_class=include_class, show_opt=False, app_help_version=app_help_version) + " [-i%s=... 
[...]]" % (parameter["name"],) desc = "" is_optional = False if show_opt: if "default" in parameter or parameter.get("optional"): is_optional = True desc += "[" desc += ('-i' if app_help_version else '') + parameter["name"] include_parens = include_class or 'type' in parameter or 'default' in parameter if include_parens: desc += ("=" if app_help_version else " ") + "(" is_first = True if include_class: desc += parameter["class"] is_first = False if "type" in parameter: if not is_first: desc += ", " else: is_first = False desc += "type " + parse_typespec(parameter["type"]) if "default" in parameter: if not is_first: desc += ', ' desc += 'default=' + json.dumps(parameter['default']) if include_parens: desc += ")" if show_opt and is_optional: desc += "]" return desc def get_io_spec(spec, skip_fields=None): if spec is None: return 'null' if skip_fields is None: skip_fields = [] filtered_spec = [param for param in spec if param["name"] not in skip_fields] groups = defaultdict(list) for param in filtered_spec: groups[param.get('group')].append(param) list_of_params = [] for param in groups.get(None, []): list_of_params.append(get_io_desc(param)) for group in groups: if group is None: continue list_of_params.append("{g}:".format(g=group)) for param in groups[group]: list_of_params.append(" "+get_io_desc(param)) if len(skip_fields) > 0: list_of_params.append("<advanced inputs hidden; use --verbose to see more>") if len(list_of_params) == 0: return '-' if get_delimiter() is not None: return ('\n' + get_delimiter()).join(list_of_params) else: return ('\n' + ' '*16).join([fill(param, subsequent_indent=' '*18, width_adjustment=-18) for param in list_of_params]) def is_job_ref(thing, reftype=dict): ''' :param thing: something that might be a job-based object reference hash :param reftype: type that a job-based object reference would be (default is dict) ''' return isinstance(thing, reftype) and \ ((len(thing) == 2 and \ isinstance(thing.get('field'), basestring) and \ 
isinstance(thing.get('job'), basestring)) or \ (len(thing) == 1 and \ isinstance(thing.get('$dnanexus_link'), reftype) and \ isinstance(thing['$dnanexus_link'].get('field'), basestring) and \ isinstance(thing['$dnanexus_link'].get('job'), basestring))) def get_job_from_jbor(thing): ''' :returns: Job ID from a JBOR Assumes :func:`is_job_ref` evaluates to True ''' if '$dnanexus_link' in thing: return thing['$dnanexus_link']['job'] else: return thing['job'] def get_field_from_jbor(thing): ''' :returns: Output field name from a JBOR Assumes :func:`is_job_ref` evaluates to True ''' if '$dnanexus_link' in thing: return thing['$dnanexus_link']['field'] else: return thing['field'] def get_index_from_jbor(thing): ''' :returns: Array index of the JBOR if applicable; None otherwise Assumes :func:`is_job_ref` evaluates to True ''' if '$dnanexus_link' in thing: return thing['$dnanexus_link'].get('index') else: return None def is_metadata_ref(thing, reftype=dict): return isinstance(thing, reftype) and \ len(thing) == 1 and \ isinstance(thing.get('$dnanexus_link'), reftype) and \ isinstance(thing['$dnanexus_link'].get('metadata'), basestring) def jbor_to_str(val): ans = get_job_from_jbor(val) + ':' + get_field_from_jbor(val) index = get_index_from_jbor(val) if index is not None: ans += "." 
+ str(index) return ans def io_val_to_str(val): if is_job_ref(val): # Job-based object references return jbor_to_str(val) elif isinstance(val, dict) and '$dnanexus_link' in val: # DNAnexus link if isinstance(val['$dnanexus_link'], basestring): # simple link return val['$dnanexus_link'] elif 'project' in val['$dnanexus_link'] and 'id' in val['$dnanexus_link']: return val['$dnanexus_link']['project'] + ':' + val['$dnanexus_link']['id'] else: return json.dumps(val) elif isinstance(val, list): if len(val) == 0: return '[]' else: return '[ ' + ', '.join([io_val_to_str(item) for item in val]) + ' ]' elif isinstance(val, dict): return '{ ' + ', '.join([key + ': ' + io_val_to_str(value) for key, value in val.items()]) + ' }' else: return json.dumps(val) def job_output_to_str(job_output, prefix='\n', title="Output: ", title_len=None): if len(job_output) == 0: return prefix + title + "-" else: if title_len is None: title_len = len(title) return prefix + title + (prefix+' '*title_len).join([fill(key + ' = ' + io_val_to_str(value), subsequent_indent=' '*9, break_long_words=False) for key, value in job_output.items()]) def get_io_field(io_hash, defaults=None, delim='=', highlight_fields=()): def highlight_value(key, value): if key in highlight_fields: return YELLOW() + value + ENDC() else: return value if defaults is None: defaults = {} if io_hash is None: return '-' if len(io_hash) == 0 and len(defaults) == 0: return '-' if get_delimiter() is not None: return ('\n' + get_delimiter()).join([(key + delim + highlight_value(key, io_val_to_str(value))) for key, value in io_hash.items()] + [('[' + key + delim + io_val_to_str(value) + ']') for key, value in defaults.items()]) else: lines = [fill(key + ' ' + delim + ' ' + highlight_value(key, io_val_to_str(value)), initial_indent=' ' * FIELD_NAME_WIDTH, subsequent_indent=' ' * (FIELD_NAME_WIDTH + 1), break_long_words=False) for key, value in io_hash.items()] lines.extend([fill('[' + key + ' ' + delim + ' ' + io_val_to_str(value) + 
']', initial_indent=' ' * FIELD_NAME_WIDTH, subsequent_indent=' ' * (FIELD_NAME_WIDTH + 1), break_long_words=False) for key, value in defaults.items()]) return '\n'.join(lines)[FIELD_NAME_WIDTH:] def get_resolved_jbors(resolved_thing, orig_thing, resolved_jbors): if resolved_thing == orig_thing: return if is_job_ref(orig_thing): jbor_str = jbor_to_str(orig_thing) if jbor_str not in resolved_jbors: try: from dxpy.api import job_describe job_output = job_describe(get_job_from_jbor(orig_thing)).get('output') if job_output is not None: field_value = job_output.get(get_field_from_jbor(orig_thing)) jbor_index = get_index_from_jbor(orig_thing) if jbor_index is not None: if isinstance(field_value, list): resolved_jbors[jbor_str] = field_value[jbor_index] else: resolved_jbors[jbor_str] = field_value except: # Just don't report any resolved JBORs if there are # any problems pass elif isinstance(orig_thing, list): for i in range(len(orig_thing)): get_resolved_jbors(resolved_thing[i], orig_thing[i], resolved_jbors) elif isinstance(orig_thing, dict) and '$dnanexus_link' not in orig_thing: for key in orig_thing: get_resolved_jbors(resolved_thing[key], orig_thing[key], resolved_jbors) def render_bundleddepends(thing): from ..bindings.search import find_one_data_object from ..exceptions import DXError bundles = [] for item in thing: bundle_asset_record = dxpy.DXFile(item["id"]["$dnanexus_link"]).get_properties().get("AssetBundle") asset = None if bundle_asset_record: asset = dxpy.DXRecord(bundle_asset_record) if asset: try: bundles.append(asset.describe().get("name") + " (" + asset.get_id() + ")") except DXError: asset = None if not asset: bundles.append(item["name"] + " (" + item["id"]["$dnanexus_link"] + ")") return bundles def render_execdepends(thing): rendered = [] for item in thing: dep = copy.copy(item) dep.setdefault('package_manager', 'apt') dep['version'] = ' = '+dep['version'] if 'version' in dep else '' rendered.append("{package_manager}: 
{name}{version}".format(**dep)) return rendered def render_stage(title, stage, as_stage_of=None): lines_to_print = [] if stage['name'] is not None: lines_to_print.append((title, "{name} ({id})".format(name=stage['name'], id=stage['id']))) else: lines_to_print.append((title, stage['id'])) lines_to_print.append((' Executable', stage['executable'] + \ (" (" + RED() + "inaccessible" + ENDC() + ")" \ if stage.get('accessible') is False else ""))) if 'execution' in stage: is_cached_result = as_stage_of is not None and 'parentAnalysis' in stage['execution'] and \ stage['execution']['parentAnalysis'] != as_stage_of execution_id_str = stage['execution']['id'] if is_cached_result: execution_id_str = "[" + execution_id_str + "]" if 'state' in stage['execution']: lines_to_print.append((' Execution', execution_id_str + ' (' + JOB_STATES(stage['execution']['state']) + ')')) else: lines_to_print.append((' Execution', execution_id_str)) if is_cached_result: lines_to_print.append((' Cached from', stage['execution']['parentAnalysis'])) for line in lines_to_print: print_field(line[0], line[1]) def render_short_timestamp(timestamp): return str(datetime.datetime.fromtimestamp(timestamp//1000)) def render_timestamp(timestamp): return datetime.datetime.fromtimestamp(timestamp//1000).ctime() FIELD_NAME_WIDTH = 22 def print_field(label, value): if get_delimiter() is not None: sys.stdout.write(label + get_delimiter() + value + '\n') else: sys.stdout.write( label + " " * (FIELD_NAME_WIDTH-len(label)) + fill(value, subsequent_indent=' '*FIELD_NAME_WIDTH, width_adjustment=-FIELD_NAME_WIDTH) + '\n') def print_nofill_field(label, value): sys.stdout.write(label + DELIMITER(" " * (FIELD_NAME_WIDTH - len(label))) + value + '\n') def print_list_field(label, values): print_field(label, ('-' if len(values) == 0 else DELIMITER(', ').join(values))) def print_json_field(label, json_value): print_field(label, json.dumps(json_value, ensure_ascii=False)) def print_project_desc(desc, verbose=False): 
recognized_fields = [ 'id', 'class', 'name', 'summary', 'description', 'protected', 'restricted', 'created', 'modified', 'dataUsage', 'sponsoredDataUsage', 'tags', 'level', 'folders', 'objects', 'permissions', 'properties', 'appCaches', 'billTo', 'version', 'createdBy', 'totalSponsoredEgressBytes', 'consumedSponsoredEgressBytes', 'containsPHI', 'databaseUIViewOnly', 'region', 'storageCost', 'pendingTransfer','atSpendingLimit', # Following are app container-specific 'destroyAt', 'project', 'type', 'app', 'appName' ] # Basic metadata print_field("ID", desc["id"]) print_field("Class", desc["class"]) if "name" in desc: print_field("Name", desc["name"]) if 'summary' in desc: print_field("Summary", desc["summary"]) if 'description' in desc and (verbose or 'summary' not in desc): print_field("Description", desc['description']) if 'version' in desc and verbose: print_field("Version", str(desc['version'])) # Ownership and permissions if 'billTo' in desc: print_field("Billed to", desc['billTo'][5 if desc['billTo'].startswith('user-') else 0:]) if 'pendingTransfer' in desc and (verbose or desc['pendingTransfer'] is not None): print_json_field('Pending transfer to', desc['pendingTransfer']) if "level" in desc: print_field("Access level", desc["level"]) if 'region' in desc: print_field('Region', desc['region']) # Project settings if 'protected' in desc: print_json_field("Protected", desc["protected"]) if 'restricted' in desc: print_json_field("Restricted", desc["restricted"]) if 'containsPHI' in desc: print_json_field('Contains PHI', desc['containsPHI']) if 'databaseUIViewOnly' in desc and desc['databaseUIViewOnly']: print_json_field('Database UI View Only', desc['databaseUIViewOnly']) # Usage print_field("Created", render_timestamp(desc['created'])) if 'createdBy' in desc: print_field("Created by", desc['createdBy']['user'][desc['createdBy']['user'].find('-') + 1:]) print_field("Last modified", render_timestamp(desc['modified'])) print_field("Data usage", ('%.2f' % 
desc["dataUsage"]) + ' GB') if 'sponsoredDataUsage' in desc: print_field("Sponsored data", ('%.2f' % desc["sponsoredDataUsage"]) + ' GB') if 'storageCost' in desc: print_field("Storage cost", "$%.3f/month" % desc["storageCost"]) if 'totalSponsoredEgressBytes' in desc or 'consumedSponsoredEgressBytes' in desc: total_egress_str = '%.2f GB' % (desc['totalSponsoredEgressBytes'] / 1073741824.,) \ if 'totalSponsoredEgressBytes' in desc else '??' consumed_egress_str = '%.2f GB' % (desc['consumedSponsoredEgressBytes'] / 1073741824.,) \ if 'consumedSponsoredEgressBytes' in desc else '??' print_field('Sponsored egress', ('%s used of %s total' % (consumed_egress_str, total_egress_str))) if 'atSpendingLimit' in desc: print_json_field("At spending limit?", desc['atSpendingLimit']) # Misc metadata if "objects" in desc: print_field("# Files", str(desc["objects"])) if "folders" in desc: print_list_field("Folders", desc["folders"]) if "permissions" in desc: print_list_field( "Permissions", [key[5 if key.startswith('user-') else 0:] + ':' + value for key, value in desc["permissions"].items()] ) if 'tags' in desc: print_list_field("Tags", desc["tags"]) if "properties" in desc: print_list_field("Properties", [key + '=' + value for key, value in desc["properties"].items()]) if "appCaches" in desc: print_json_field("App caches", desc["appCaches"]) # Container-specific if 'type' in desc: print_field("Container type", desc["type"]) if 'project' in desc: print_field("Associated project", desc["project"]) if 'destroyAt' in desc: print_field("To be destroyed", render_timestamp(desc['modified'])) if 'app' in desc: print_field("Associated App ID", desc["app"]) if 'appName' in desc: print_field("Associated App", desc["appName"]) for field in desc: if field not in recognized_fields: print_json_field(field, desc[field]) def get_advanced_inputs(desc, verbose): details = desc.get("details") if not verbose and isinstance(details, dict): return details.get("advancedInputs", []) return [] def 
print_app_desc(desc, verbose=False): recognized_fields = ['id', 'class', 'name', 'version', 'aliases', 'createdBy', 'created', 'modified', 'deleted', 'published', 'title', 'subtitle', 'description', 'categories', 'access', 'dxapi', 'inputSpec', 'outputSpec', 'runSpec', 'resources', 'billTo', 'installed', 'openSource', 'summary', 'applet', 'installs', 'billing', 'details', 'developerNotes', 'authorizedUsers'] print_field("ID", desc["id"]) print_field("Class", desc["class"]) if 'billTo' in desc: print_field("Billed to", desc['billTo'][5 if desc['billTo'].startswith('user-') else 0:]) print_field("Name", desc["name"]) print_field("Version", desc["version"]) print_list_field("Aliases", desc["aliases"]) print_field("Created by", desc["createdBy"][5 if desc['createdBy'].startswith('user-') else 0:]) print_field("Created", render_timestamp(desc['created'])) print_field("Last modified", render_timestamp(desc['modified'])) print_field("Created from", desc["applet"]) print_json_field('Installed', desc['installed']) print_json_field('Open source', desc['openSource']) print_json_field('Deleted', desc['deleted']) if not desc['deleted']: advanced_inputs = [] details = desc["details"] if isinstance(details, dict) and "advancedInputs" in details: if not verbose: advanced_inputs = details["advancedInputs"] del details["advancedInputs"] if 'published' not in desc or desc["published"] < 0: print_field("Published", "-") else: print_field("Published", render_timestamp(desc['published'])) if "title" in desc and desc['title'] is not None: print_field("Title", desc["title"]) if "subtitle" in desc and desc['subtitle'] is not None: print_field("Subtitle", desc["subtitle"]) if 'summary' in desc and desc['summary'] is not None: print_field("Summary", desc['summary']) print_list_field("Categories", desc["categories"]) if 'details' in desc: print_json_field("Details", desc["details"]) print_json_field("Access", desc["access"]) print_field("API version", desc["dxapi"]) if 'inputSpec' in desc: 
print_nofill_field("Input Spec", get_io_spec(desc["inputSpec"], skip_fields=advanced_inputs)) print_nofill_field("Output Spec", get_io_spec(desc["outputSpec"])) print_field("Interpreter", desc["runSpec"]["interpreter"]) if "resources" in desc["runSpec"]: print_json_field("Resources", desc["runSpec"]["resources"]) if "bundledDepends" in desc["runSpec"]: print_list_field("bundledDepends", render_bundleddepends(desc["runSpec"]["bundledDepends"])) if "execDepends" in desc["runSpec"]: print_list_field("execDepends", render_execdepends(desc["runSpec"]["execDepends"])) if "systemRequirements" in desc['runSpec']: print_json_field('Sys Requirements', desc['runSpec']['systemRequirements']) if 'resources' in desc: print_field("Resources", desc['resources']) if 'installs' in desc: print_field('# Installs', str(desc['installs'])) if 'authorizedUsers' in desc: print_list_field('AuthorizedUsers', desc["authorizedUsers"]) for field in desc: if field not in recognized_fields: print_json_field(field, desc[field]) def print_globalworkflow_desc(desc, verbose=False): recognized_fields = ['id', 'class', 'name', 'version', 'aliases', 'createdBy', 'created', 'modified', 'deleted', 'published', 'title', 'description', 'categories', 'dxapi', 'billTo', 'summary', 'billing', 'developerNotes', 'authorizedUsers', 'regionalOptions'] is_locked_workflow = False print_field("ID", desc["id"]) print_field("Class", desc["class"]) if 'billTo' in desc: print_field("Billed to", desc['billTo'][5 if desc['billTo'].startswith('user-') else 0:]) print_field("Name", desc["name"]) print_field("Version", desc["version"]) print_list_field("Aliases", desc["aliases"]) print_field("Created by", desc["createdBy"][5 if desc['createdBy'].startswith('user-') else 0:]) print_field("Created", render_timestamp(desc['created'])) print_field("Last modified", render_timestamp(desc['modified'])) # print_json_field('Open source', desc['openSource']) print_json_field('Deleted', desc.get('deleted', False)) if not 
desc.get('deleted', False): if 'published' not in desc or desc["published"] < 0: print_field("Published", "-") else: print_field("Published", render_timestamp(desc['published'])) if "title" in desc and desc['title'] is not None: print_field("Title", desc["title"]) if "subtitle" in desc and desc['subtitle'] is not None: print_field("Subtitle", desc["subtitle"]) if 'summary' in desc and desc['summary'] is not None: print_field("Summary", desc['summary']) print_list_field("Categories", desc["categories"]) if 'details' in desc: print_json_field("Details", desc["details"]) print_field("API version", desc["dxapi"]) # Additionally, print inputs, outputs, stages of the underlying workflow # from the region of the current workspace current_project = dxpy.WORKSPACE_ID if current_project: region = dxpy.api.project_describe(current_project, input_params={"fields": {"region": True}})["region"] if region and region in desc['regionalOptions']: workflow_desc = desc['regionalOptions'][region]['workflowDescribe'] print_field("Workflow region", region) if 'id' in workflow_desc: print_field("Workflow ID", workflow_desc['id']) if workflow_desc.get('inputSpec') is not None and workflow_desc.get('inputs') is None: print_nofill_field("Input Spec", get_io_spec(workflow_desc['inputSpec'], skip_fields=get_advanced_inputs(workflow_desc, verbose))) if workflow_desc.get('outputSpec') is not None and workflow_desc.get('outputs') is None: print_nofill_field("Output Spec", get_io_spec(workflow_desc['outputSpec'])) if workflow_desc.get('inputs') is not None: is_locked_workflow = True print_nofill_field("Workflow Inputs", get_io_spec(workflow_desc['inputs'])) if workflow_desc.get('outputs') is not None: print_nofill_field("Workflow Outputs", get_io_spec(workflow_desc['outputs'])) if 'stages' in workflow_desc: for i, stage in enumerate(workflow_desc["stages"]): render_stage("Stage " + str(i), stage) if 'authorizedUsers' in desc: print_list_field('AuthorizedUsers', desc["authorizedUsers"]) if 
is_locked_workflow: print_locked_workflow_note() for field in desc: if field not in recognized_fields: print_json_field(field, desc[field]) def get_col_str(col_desc): return col_desc['name'] + DELIMITER(" (") + col_desc['type'] + DELIMITER(")") def print_data_obj_desc(desc, verbose=False): recognized_fields = ['id', 'class', 'project', 'folder', 'name', 'properties', 'tags', 'types', 'hidden', 'details', 'links', 'created', 'modified', 'state', 'title', 'subtitle', 'description', 'inputSpec', 'outputSpec', 'runSpec', 'summary', 'dxapi', 'access', 'createdBy', 'summary', 'sponsored', 'developerNotes', 'stages', 'inputs', 'outputs', 'latestAnalysis', 'editVersion', 'outputFolder', 'initializedFrom', 'temporary'] is_locked_workflow = False print_field("ID", desc["id"]) print_field("Class", desc["class"]) if 'project' in desc: print_field("Project", desc['project']) if 'folder' in desc: print_field("Folder", desc["folder"]) print_field("Name", desc["name"]) if 'state' in desc: print_field("State", DATA_STATES(desc['state'])) if 'hidden' in desc: print_field("Visibility", ("hidden" if desc["hidden"] else "visible")) if 'types' in desc: print_list_field("Types", desc['types']) if 'properties' in desc: print_list_field("Properties", ['='.join([k, v]) for k, v in desc['properties'].items()]) if 'tags' in desc: print_list_field("Tags", desc['tags']) if verbose and 'details' in desc: print_json_field("Details", desc["details"]) if 'links' in desc: print_list_field("Outgoing links", desc['links']) print_field("Created", render_timestamp(desc['created'])) if 'createdBy' in desc: print_field("Created by", desc['createdBy']['user'][5:]) if 'job' in desc["createdBy"]: print_field(" via the job", desc['createdBy']['job']) if verbose and 'executable' in desc['createdBy']: print_field(" running", desc['createdBy']['executable']) print_field("Last modified", render_timestamp(desc['modified'])) if "editVersion" in desc: print_field("Edit Version", str(desc['editVersion'])) if "title" 
in desc: print_field("Title", desc["title"]) if "subtitle" in desc: print_field("Subtitle", desc["subtitle"]) if 'summary' in desc: print_field("Summary", desc['summary']) if 'description' in desc and verbose: print_field("Description", desc["description"]) if 'outputFolder' in desc: print_field("Output Folder", desc["outputFolder"] if desc["outputFolder"] is not None else "-") if 'access' in desc: print_json_field("Access", desc["access"]) if 'dxapi' in desc: print_field("API version", desc["dxapi"]) # In case of a workflow: do not display "Input/Output Specs" that show stages IO # when the workflow has workflow-level input/output fields defined. if desc.get('inputSpec') is not None and desc.get('inputs') is None: print_nofill_field("Input Spec", get_io_spec(desc['inputSpec'], skip_fields=get_advanced_inputs(desc, verbose))) if desc.get('outputSpec') is not None and desc.get('outputs') is None: print_nofill_field("Output Spec", get_io_spec(desc['outputSpec'])) if desc.get('inputs') is not None: is_locked_workflow = True print_nofill_field("Workflow Inputs", get_io_spec(desc['inputs'])) if desc.get('outputs') is not None: print_nofill_field("Workflow Outputs", get_io_spec(desc['outputs'])) if 'runSpec' in desc: print_field("Interpreter", desc["runSpec"]["interpreter"]) if "resources" in desc['runSpec']: print_json_field("Resources", desc["runSpec"]["resources"]) if "bundledDepends" in desc["runSpec"]: print_list_field("bundledDepends", render_bundleddepends(desc["runSpec"]["bundledDepends"])) if "execDepends" in desc["runSpec"]: print_list_field("execDepends", render_execdepends(desc["runSpec"]["execDepends"])) if "systemRequirements" in desc['runSpec']: print_json_field('Sys Requirements', desc['runSpec']['systemRequirements']) if 'stages' in desc: for i, stage in enumerate(desc["stages"]): render_stage("Stage " + str(i), stage) if 'initializedFrom' in desc: print_field("initializedFrom", desc["initializedFrom"]["id"]) if 'latestAnalysis' in desc and 
desc['latestAnalysis'] is not None: print_field("Last execution", desc["latestAnalysis"]["id"]) print_field(" run at", render_timestamp(desc["latestAnalysis"]["created"])) print_field(" state", JOB_STATES(desc["latestAnalysis"]["state"])) for field in desc: if field in recognized_fields: continue else: if field == "media": print_field("Media type", desc['media']) elif field == "size": if desc["class"] == "file": sponsored_str = "" if 'sponsored' in desc and desc['sponsored']: sponsored_str = DELIMITER(", ") + "sponsored by DNAnexus" print_field("Size", get_size_str(desc['size']) + sponsored_str) else: print_field("Size", str(desc['size'])) elif field == "length": print_field("Length", str(desc['length'])) elif field == "columns": if len(desc['columns']) > 0: coldescs = "Columns" + DELIMITER(" " *(16-len("Columns"))) + get_col_str(desc["columns"][0]) for column in desc["columns"][1:]: coldescs += '\n' + DELIMITER(" "*16) + get_col_str(column) print(coldescs) else: print_list_field("Columns", desc['columns']) else: # Unhandled prettifying print_json_field(field, desc[field]) if is_locked_workflow: print_locked_workflow_note() def printable_ssh_host_key(ssh_host_key): try: keygen = subprocess.Popen(["ssh-keygen", "-lf", "/dev/stdin"], stdin=subprocess.PIPE, stdout=subprocess.PIPE) if USING_PYTHON2: (stdout, stderr) = keygen.communicate(ssh_host_key) else: (stdout, stderr) = keygen.communicate(ssh_host_key.encode()) except: return ssh_host_key.strip() else: if not USING_PYTHON2: stdout = stdout.decode() return stdout.replace(" no comment", "").strip() def print_execution_desc(desc): recognized_fields = ['id', 'class', 'project', 'workspace', 'region', 'app', 'applet', 'executable', 'workflow', 'state', 'rootExecution', 'parentAnalysis', 'parentJob', 'originJob', 'analysis', 'stage', 'function', 'runInput', 'originalInput', 'input', 'output', 'folder', 'launchedBy', 'created', 'modified', 'failureReason', 'failureMessage', 'stdout', 'stderr', 'waitingOnChildren', 
'dependsOn', 'resources', 'projectCache', 'details', 'tags', 'properties', 'name', 'instanceType', 'systemRequirements', 'executableName', 'failureFrom', 'billTo', 'startedRunning', 'stoppedRunning', 'stateTransitions', 'delayWorkspaceDestruction', 'stages', 'totalPrice', 'isFree', 'invoiceMetadata', 'priority', 'sshHostKey'] print_field("ID", desc["id"]) print_field("Class", desc["class"]) if "name" in desc and desc['name'] is not None: print_field("Job name", desc['name']) if "executableName" in desc and desc['executableName'] is not None: print_field("Executable name", desc['executableName']) print_field("Project context", desc["project"]) if 'region' in desc: print_field("Region", desc["region"]) if 'billTo' in desc: print_field("Billed to", desc['billTo'][5 if desc['billTo'].startswith('user-') else 0:]) if 'workspace' in desc: print_field("Workspace", desc["workspace"]) if 'projectCache' in desc: print_field('Cache workspace', desc['projectCache']) print_field('Resources', desc['resources']) if "app" in desc: print_field("App", desc["app"]) elif desc.get("executable", "").startswith("globalworkflow"): print_field("Workflow", desc["executable"]) elif "applet" in desc: print_field("Applet", desc["applet"]) elif "workflow" in desc: print_field("Workflow", desc["workflow"]["id"]) if "instanceType" in desc and desc['instanceType'] is not None: print_field("Instance Type", desc["instanceType"]) if "priority" in desc: print_field("Priority", desc["priority"]) print_field("State", JOB_STATES(desc["state"])) if "rootExecution" in desc: print_field("Root execution", desc["rootExecution"]) if "originJob" in desc: if desc["originJob"] is None: print_field("Origin job", "-") else: print_field("Origin job", desc["originJob"]) if desc["parentJob"] is None: print_field("Parent job", "-") else: print_field("Parent job", desc["parentJob"]) if "parentAnalysis" in desc: if desc["parentAnalysis"] is not None: print_field("Parent analysis", desc["parentAnalysis"]) if "analysis" in 
desc and desc["analysis"] is not None: print_field("Analysis", desc["analysis"]) print_field("Stage", desc["stage"]) if "stages" in desc: for i, (stage, analysis_stage) in enumerate(zip(desc["workflow"]["stages"], desc["stages"])): stage['execution'] = analysis_stage['execution'] render_stage("Stage " + str(i), stage, as_stage_of=desc["id"]) if "function" in desc: print_field("Function", desc["function"]) if 'runInput' in desc: default_fields = {k: v for k, v in desc["originalInput"].items() if k not in desc["runInput"]} print_nofill_field("Input", get_io_field(desc["runInput"], defaults=default_fields)) else: print_nofill_field("Input", get_io_field(desc["originalInput"])) resolved_jbors = {} input_with_jbors = desc.get('runInput', desc['originalInput']) for k in desc["input"]: if k in input_with_jbors and desc["input"][k] != input_with_jbors[k]: get_resolved_jbors(desc["input"][k], input_with_jbors[k], resolved_jbors) if len(resolved_jbors) != 0: print_nofill_field("Resolved JBORs", get_io_field(resolved_jbors, delim=(GREEN() + '=>' + ENDC()))) print_nofill_field("Output", get_io_field(desc["output"])) if 'folder' in desc: print_field('Output folder', desc['folder']) print_field("Launched by", desc["launchedBy"][5:]) print_field("Created", render_timestamp(desc['created'])) if 'startedRunning' in desc: if 'stoppedRunning' in desc: print_field("Started running", render_timestamp(desc['startedRunning'])) else: print_field("Started running", "{t} (running for {rt})".format(t=render_timestamp(desc['startedRunning']), rt=datetime.timedelta(seconds=int(time.time())-desc['startedRunning']//1000))) if 'stoppedRunning' in desc: print_field("Stopped running", "{t} (Runtime: {rt})".format( t=render_timestamp(desc['stoppedRunning']), rt=datetime.timedelta(seconds=(desc['stoppedRunning']-desc['startedRunning'])//1000))) if desc.get('class') == 'analysis' and 'stateTransitions' in desc and desc['stateTransitions']: # Display finishing time of the analysis if available if 
desc['stateTransitions'][-1]['newState'] in ['done', 'failed', 'terminated']: print_field("Finished", "{t} (Wall-clock time: {wt})".format( t=render_timestamp(desc['stateTransitions'][-1]['setAt']), wt=datetime.timedelta(seconds=(desc['stateTransitions'][-1]['setAt']-desc['created'])//1000))) print_field("Last modified", render_timestamp(desc['modified'])) if 'waitingOnChildren' in desc: print_list_field('Pending subjobs', desc['waitingOnChildren']) if 'dependsOn' in desc: print_list_field('Depends on', desc['dependsOn']) if "failureReason" in desc: print_field("Failure reason", desc["failureReason"]) if "failureMessage" in desc: print_field("Failure message", desc["failureMessage"]) if "failureFrom" in desc and desc['failureFrom'] is not None and desc['failureFrom']['id'] != desc['id']: print_field("Failure is from", desc['failureFrom']['id']) if 'systemRequirements' in desc: print_json_field("Sys Requirements", desc['systemRequirements']) if "tags" in desc: print_list_field("Tags", desc["tags"]) if "properties" in desc: print_list_field("Properties", [key + '=' + value for key, value in desc["properties"].items()]) if "details" in desc and "clonedFrom" in desc["details"]: cloned_hash = desc["details"]["clonedFrom"] if "id" in cloned_hash: print_field("Re-run of", cloned_hash["id"]) print_field(" named", cloned_hash["name"]) same_executable = cloned_hash["executable"] == desc.get("applet", desc.get("app", "")) print_field(" using", ("" if same_executable else YELLOW()) + \ cloned_hash["executable"] + \ (" (same)" if same_executable else ENDC())) same_project = cloned_hash["project"] == desc["project"] same_folder = cloned_hash["folder"] == desc["folder"] or not same_project print_field(" output folder", ("" if same_project else YELLOW()) + \ cloned_hash["project"] + \ ("" if same_project else ENDC()) + ":" + \ ("" if same_folder else YELLOW()) + \ cloned_hash["folder"] + \ (" (same)" if (same_project and same_folder) else "" if same_folder else ENDC())) 
different_inputs = [] for item in cloned_hash["runInput"]: if cloned_hash["runInput"][item] != desc["runInput"][item]: different_inputs.append(item) print_nofill_field(" input", get_io_field(cloned_hash["runInput"], highlight_fields=different_inputs)) cloned_sys_reqs = cloned_hash.get("systemRequirements") if isinstance(cloned_sys_reqs, dict): if cloned_sys_reqs == desc.get('systemRequirements'): print_nofill_field(" sys reqs", json.dumps(cloned_sys_reqs) + ' (same)') else: print_nofill_field(" sys reqs", YELLOW() + json.dumps(cloned_sys_reqs) + ENDC()) if not desc.get('isFree') and desc.get('totalPrice') is not None: print_field('Total Price', format_currency(desc['totalPrice'], meta=desc['currency'])) if desc.get('invoiceMetadata'): print_json_field("Invoice Metadata", desc['invoiceMetadata']) if desc.get('sshHostKey'): print_nofill_field("SSH Host Key", printable_ssh_host_key(desc['sshHostKey'])) for field in desc: if field not in recognized_fields: print_json_field(field, desc[field]) def locale_from_currency_code(dx_code): """ This is a (temporary) hardcoded mapping between currency_list.json in nucleus and standard locale string useful for further formatting :param dx_code: An id of nucleus/commons/pricing_models/currency_list.json collection :return: standardised locale, eg 'en_US'; None when no mapping found """ currency_locale_map = {0: 'en_US', 1: 'en_GB'} return currency_locale_map[dx_code] if dx_code in currency_locale_map else None def format_currency_from_meta(value, meta): """ Formats currency value into properly decorated currency string based on provided currency metadata. Please note that this is very basic solution missing some of the localisation features (such as negative symbol position and type. Better option is to use 'locale' module to reflect currency string decorations more accurately. See 'format_currency' :param value: :param meta: :return: """ prefix = '-' if value < 0 else '' # .. 
TODO: some locales position neg symbol elsewhere, missing meta prefix += meta['symbol'] if meta['symbolPosition'] == 'left' else '' suffix = ' %s' % meta["symbol"] if meta['symbolPosition'] == 'right' else '' # .. TODO: take the group and decimal separators from meta into account (US & UK are the same, so far we're safe) formatted_value = '{:,.2f}'.format(abs(value)) return prefix + formatted_value + suffix def format_currency(value, meta, currency_locale=None): """ Formats currency value into properly decorated currency string based on either locale (preferred) or if that is not available then currency metadata. Until locale is provided from the server a crude mapping between `currency.dxCode` and a locale string is used instead (eg 0: 'en_US') :param value: amount :param meta: server metadata (`currency`) :return: formatted currency string """ try: if currency_locale is None: currency_locale = locale_from_currency_code(meta['dxCode']) if currency_locale is None: return format_currency_from_meta(value, meta) else: locale.setlocale(locale.LC_ALL, currency_locale) return locale.currency(value, grouping=True) except locale.Error: # .. locale is probably not available -> fallback to format manually return format_currency_from_meta(value, meta) def print_user_desc(desc): print_field("ID", desc["id"]) print_field("Name", desc["first"] + " " + ((desc["middle"] + " ") if desc["middle"] != '' else '') + desc["last"]) if "email" in desc: print_field("Email", desc["email"]) bill_to_label = "Default bill to" if "billTo" in desc: print_field(bill_to_label, desc["billTo"]) if "appsInstalled" in desc: print_list_field("Apps installed", desc["appsInstalled"]) def print_generic_desc(desc): for field in desc: print_json_field(field, desc[field]) def print_desc(desc, verbose=False): ''' :param desc: The describe hash of a DNAnexus entity :type desc: dict Depending on the class of the entity, this method will print a formatted and human-readable string containing the data in *desc*. 
''' if desc['class'] in ['project', 'workspace', 'container']: print_project_desc(desc, verbose=verbose) elif desc['class'] == 'app': print_app_desc(desc, verbose=verbose) elif desc['class'] == 'globalworkflow': print_globalworkflow_desc(desc, verbose=verbose) elif desc['class'] in ['job', 'analysis']: print_execution_desc(desc) elif desc['class'] == 'user': print_user_desc(desc) elif desc['class'] in ['org', 'team']: print_generic_desc(desc) else: print_data_obj_desc(desc, verbose=verbose) def get_ls_desc(desc, print_id=False): addendum = ' : ' + desc['id'] if print_id is True else '' if desc['class'] in ['applet', 'workflow']: return BOLD() + GREEN() + desc['name'] + ENDC() + addendum else: return desc['name'] + addendum def print_ls_desc(desc, **kwargs): print(get_ls_desc(desc, **kwargs)) def get_ls_l_header(): return (BOLD() + 'State' + DELIMITER(' ') + 'Last modified' + DELIMITER(' ') + 'Size' + DELIMITER(' ') + 'Name' + DELIMITER(' (') + 'ID' + DELIMITER(')') + ENDC()) def print_ls_l_header(): print(get_ls_l_header()) def get_ls_l_desc_fields(): return { 'id': True, 'class': True, 'folder': True, 'length': True, 'modified': True, 'name': True, 'project': True, 'size': True, 'state': True } def get_ls_l_desc(desc, include_folder=False, include_project=False): """ desc must have at least all the fields given by get_ls_l_desc_fields. """ # If you make this method consume an additional field, you must add it to # get_ls_l_desc_fields above. 
if 'state' in desc: state_len = len(desc['state']) if desc['state'] != 'closed': state_str = YELLOW() + desc['state'] + ENDC() else: state_str = GREEN() + desc['state'] + ENDC() else: state_str = '' state_len = 0 name_str = '' if include_folder: name_str += desc['folder'] + ('/' if desc['folder'] != '/' else '') name_str += desc['name'] if desc['class'] in ['applet', 'workflow']: name_str = BOLD() + GREEN() + name_str + ENDC() size_str = '' if 'size' in desc and desc['class'] == 'file': size_str = get_size_str(desc['size']) elif 'length' in desc: size_str = str(desc['length']) + ' rows' size_padding = ' ' * max(0, 9 - len(size_str)) return (state_str + DELIMITER(' '*(8 - state_len)) + render_short_timestamp(desc['modified']) + DELIMITER(' ') + size_str + DELIMITER(size_padding + ' ') + name_str + DELIMITER(' (') + ((desc['project'] + DELIMITER(':')) if include_project else '') + desc['id'] + DELIMITER(')')) def print_ls_l_desc(desc, **kwargs): print(get_ls_l_desc(desc, **kwargs)) def get_find_executions_string(desc, has_children, single_result=False, show_outputs=True, is_cached_result=False): ''' :param desc: hash of execution's describe output :param has_children: whether the execution has children to be printed :param single_result: whether the execution is displayed as a single result or as part of an execution tree :param is_cached_result: whether the execution should be formatted as a cached result ''' is_not_subjob = desc['parentJob'] is None or desc['class'] == 'analysis' or single_result result = ("* " if is_not_subjob and get_delimiter() is None else "") canonical_execution_name = desc['executableName'] if desc['class'] == 'job': canonical_execution_name += ":" + desc['function'] execution_name = desc.get('name', '<no name>') # Format the name of the execution if is_cached_result: result += BOLD() + "[" + ENDC() result += BOLD() + BLUE() if desc['class'] == 'analysis': result += UNDERLINE() result += execution_name + ENDC() if execution_name != 
canonical_execution_name and execution_name+":main" != canonical_execution_name: result += ' (' + canonical_execution_name + ')' if is_cached_result: result += BOLD() + "]" + ENDC() # Format state result += DELIMITER(' (') + JOB_STATES(desc['state']) + DELIMITER(') ') + desc['id'] # Add unicode pipe to child if necessary result += DELIMITER('\n' + (u'│ ' if is_not_subjob and has_children else (" " if is_not_subjob else ""))) result += desc['launchedBy'][5:] + DELIMITER(' ') result += render_short_timestamp(desc['created']) cached_and_runtime_strs = [] if is_cached_result: cached_and_runtime_strs.append(YELLOW() + "cached" + ENDC()) if desc['class'] == 'job': # Only print runtime if it ever started running if desc.get('startedRunning'): if desc['state'] in ['done', 'failed', 'terminated', 'waiting_on_output']: runtime = datetime.timedelta(seconds=int(desc['stoppedRunning']-desc['startedRunning'])//1000) cached_and_runtime_strs.append("runtime " + str(runtime)) elif desc['state'] == 'running': seconds_running = max(int(time.time()-desc['startedRunning']//1000), 0) msg = "running for {rt}".format(rt=datetime.timedelta(seconds=seconds_running)) cached_and_runtime_strs.append(msg) if cached_and_runtime_strs: result += " (" + ", ".join(cached_and_runtime_strs) + ")" if show_outputs: prefix = DELIMITER('\n' + (u'│ ' if is_not_subjob and has_children else (" " if is_not_subjob else ""))) if desc.get("output") != None: result += job_output_to_str(desc['output'], prefix=prefix) elif desc['state'] == 'failed' and 'failureReason' in desc: result += prefix + BOLD() + desc['failureReason'] + ENDC() + ": " + fill(desc.get('failureMessage', ''), subsequent_indent=prefix.lstrip('\n')) return result def print_locked_workflow_note(): print_field('Note', 'This workflow has an explicit input specification (i.e. it is locked), and as such stage inputs cannot be modified at run-time.')
dnanexus/dx-toolkit
src/python/dxpy/utils/describe.py
Python
apache-2.0
52,416
#!/bin/bash
###########################################################################################
## Copyright 2003, 2015 IBM Corp                                                         ##
##                                                                                       ##
## Redistribution and use in source and binary forms, with or without modification,      ##
## are permitted provided that the following conditions are met:                         ##
## 1.Redistributions of source code must retain the above copyright notice,             ##
##   this list of conditions and the following disclaimer.                               ##
## 2.Redistributions in binary form must reproduce the above copyright notice, this     ##
##   list of conditions and the following disclaimer in the documentation and/or         ##
##   other materials provided with the distribution.                                     ##
##                                                                                       ##
## THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS AND ANY EXPRESS      ##
## OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF       ##
## MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL##
## THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,   ##
## EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF    ##
## SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)##
## HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, ##
## OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS ##
## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.                          ##
###########################################################################################
### File :       perl-HTML-Format.sh                                                     ##
### Description: This testcase tests the perl-HTML-Format package                        ##
### Author:      Ramya BS , ramya@linux.vnet.ibm.com                                     ##
###########################################################################################

######cd $(dirname $0)
#LTPBIN=${LTPBIN%/shared}/perl_HTML_Format
MAPPER_FILE="$LTPBIN/mapper_file"
source $LTPBIN/tc_utils.source
source $MAPPER_FILE
TESTS_DIR="${LTPBIN%/shared}/perl_HTML_Format/t"
required="perl"
LINK="/usr/share/perl5/vendor_perl/HTML"

function tc_local_setup()
{
	# check installation and environment
	tc_exec_or_break $required

	# install check
	tc_check_package "$PERL_HTML_FORMAT"
	tc_break_if_bad $? "$PERL_HTML_FORMAT not installed"

	# Create a lib folder and soft-link the installed modules into it (instead of
	# copying the lib folder from the test source, since the modules are installed
	# along with the package). One of the tests shipped with the source expects to
	# load these modules from a local lib folder.
	mkdir -p $TESTS_DIR/lib
	ln -s $LINK/FormatPS.pm $TESTS_DIR/lib
	ln -s $LINK/FormatRTF.pm $TESTS_DIR/lib
	ln -s $LINK/Formatter.pm $TESTS_DIR/lib
	ln -s $LINK/FormatText.pm $TESTS_DIR/lib
}

function tc_local_cleanup()
{
	rm -rf $TESTS_DIR/lib
}

################################################################################
# testcase functions                                                           #
################################################################################
#
# Function:    run_test
#
# Description: - test perl-HTML-Format
#
# Parameters:  - none
#
# Return       - zero on success
#              - return value from commands on failure
################################################################################
function run_test()
{
	pushd $TESTS_DIR &>/dev/null
	TESTS=`ls *.t`
	TST_TOTAL=`echo $TESTS | wc -w`
	for test in $TESTS; do
		tc_register "Test $test"
		perl $test >$stdout 2>$stderr
		rc=`grep "not ok" $stdout`
		[ -z "$rc" ]
		tc_pass_or_fail $? "Test $test fail"
	done
	popd &>/dev/null
}

##############################################
# MAIN                                       #
##############################################
TST_TOTAL=1
tc_setup && \
run_test
rajashreer7/autotest-client-tests
linux-tools/perl_HTML_Format/perl-HTML-Format.sh
Shell
gpl-2.0
4,310
/* Copyright_License {

  XCSoar Glide Computer - http://www.xcsoar.org/
  Copyright (C) 2000-2012 The XCSoar Project
  A detailed list of copyright holders can be found in the file "AUTHORS".

  This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
  as published by the Free Software Foundation; either version 2
  of the License, or (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program; if not, write to the Free Software
  Foundation, Inc., 59 Temple Place - Suite 330,
  Boston, MA 02111-1307, USA.
}
*/

#ifndef XCSOAR_VEGA_EMULATOR_HPP
#define XCSOAR_VEGA_EMULATOR_HPP

#include "DeviceEmulator.hpp"
#include "Device/Port/LineSplitter.hpp"
#include "Device/Internal.hpp"
#include "NMEA/InputLine.hpp"
#include "NMEA/Checksum.hpp"
#include "Util/Macros.hpp"
#include "Operation/ConsoleOperationEnvironment.hpp"

#include <string>
#include <map>

#include <stdio.h>
#include <string.h>

class VegaEmulator : public Emulator, PortLineSplitter {
  std::map<std::string, std::string> settings;

public:
  VegaEmulator() {
    handler = this;
  }

private:
  void PDVSC_S(NMEAInputLine &line) {
    char name[64], value[256];
    line.Read(name, ARRAY_SIZE(name));
    line.Read(value, ARRAY_SIZE(value));

    settings[name] = value;

    ConsoleOperationEnvironment env;

    char buffer[512];
    snprintf(buffer, ARRAY_SIZE(buffer), "PDVSC,A,%s,%s", name, value);
    PortWriteNMEA(*port, buffer, env);
  }

  void PDVSC_R(NMEAInputLine &line) {
    char name[64];
    line.Read(name, ARRAY_SIZE(name));

    auto i = settings.find(name);
    if (i == settings.end())
      return;

    const char *value = i->second.c_str();

    ConsoleOperationEnvironment env;

    char buffer[512];
    snprintf(buffer, ARRAY_SIZE(buffer), "PDVSC,A,%s,%s", name, value);
    PortWriteNMEA(*port, buffer, env);
  }

  void PDVSC(NMEAInputLine &line) {
    char command[4];
    line.Read(command, ARRAY_SIZE(command));

    if (strcmp(command, "S") == 0)
      PDVSC_S(line);
    else if (strcmp(command, "R") == 0)
      PDVSC_R(line);
  }

protected:
  virtual void DataReceived(const void *data, size_t length) {
    fwrite(data, 1, length, stdout);
    PortLineSplitter::DataReceived(data, length);
  }

  virtual void LineReceived(const char *_line) {
    if (!VerifyNMEAChecksum(_line))
      return;

    NMEAInputLine line(_line);

    if (line.ReadCompare("$PDVSC"))
      PDVSC(line);
  }
};

#endif
damianob/xcsoar
test/src/VegaEmulator.hpp
C++
gpl-2.0
2,789
import { SolidarityRunContext, SolidaritySettings } from '../../types'

module.exports = (settings: SolidaritySettings, context: SolidarityRunContext): void => {
  const { filesystem } = context
  if (settings.requirements) {
    // Write file
    filesystem.write('.solidarity', JSON.stringify(settings, null, 2), { atomic: true })
  } else {
    throw 'You must have a requirements key to be a valid solidarity file'
  }
}
infinitered/solidarity
src/extensions/functions/setSolidaritySettings.ts
TypeScript
mit
425
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="description" content="Filosofian Akatemian Timo Tiuraniemi esitt&auml;ytyy."> <title>Filosofian Akatemia | Ihmiset | Timo Tiuraniemi</title> <link rel="stylesheet" href="sites/default/files/newfa/css/normalize.css"> <link rel="stylesheet" href="sites/default/files/newfa/css/foundation.min.css"> <link rel="stylesheet" href="sites/default/files/newfa/css/app.css"> <link rel="apple-touch-icon" sizes="57x57" href="/sites/default/files/newfa/favicon/apple-icon-57x57.png"> <link rel="apple-touch-icon" sizes="60x60" href="/sites/default/files/newfa/favicon/apple-icon-60x60.png"> <link rel="apple-touch-icon" sizes="72x72" href="/sites/default/files/newfa/favicon/apple-icon-72x72.png"> <link rel="apple-touch-icon" sizes="76x76" href="/sites/default/files/newfa/favicon/apple-icon-76x76.png"> <link rel="apple-touch-icon" sizes="114x114" href="/sites/default/files/newfa/favicon/apple-icon-114x114.png"> <link rel="apple-touch-icon" sizes="120x120" href="/sites/default/files/newfa/favicon/apple-icon-120x120.png"> <link rel="apple-touch-icon" sizes="144x144" href="/sites/default/files/newfa/favicon/apple-icon-144x144.png"> <link rel="apple-touch-icon" sizes="152x152" href="/sites/default/files/newfa/favicon/apple-icon-152x152.png"> <link rel="apple-touch-icon" sizes="180x180" href="/sites/default/files/newfa/favicon/apple-icon-180x180.png"> <link rel="icon" type="image/png" sizes="192x192" href="/sites/default/files/newfa/favicon/android-icon-192x192.png"> <link rel="icon" type="image/png" sizes="32x32" href="/sites/default/files/newfa/favicon/favicon-32x32.png"> <link rel="icon" type="image/png" sizes="96x96" href="/sites/default/files/newfa/favicon/favicon-96x96.png"> <link rel="icon" type="image/png" sizes="16x16" href="/sites/default/files/newfa/favicon/favicon-16x16.png"> <link 
rel="manifest" href="/sites/default/files/newfa/favicon/manifest.json"> <meta name="msapplication-TileColor" content="#ffffff"> <meta name="msapplication-TileImage" content="/sites/default/files/newfa/favicon/ms-icon-144x144.png"> <meta name="theme-color" content="#ffffff"> <script src="//use.typekit.net/znj7soo.js"></script> <script>try{Typekit.load();}catch(e){}</script> <script> if (document.location.hostname.search("filosofianakatemia.fi") !== -1) { (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-5906151-4', 'auto'); ga('send', 'pageview'); } </script> </head> <body class="people-details"> <nav class="top-bar" data-topbar role="navigation"> <ul class="title-area"> <li class="name"> <div class="logo"><a href="etusivu.html"><span class="icon-logo-fill-name"></span></a></div> </li> <li class="toggle-topbar menu-icon"><a href="etusivu.html"><span></span></a></li> </ul> <section class="top-bar-section"> <ul class="right"> <li><a href="etusivu.html">Etusivu</a></li> <li class="active"><a href="ihmiset.html">Ihmiset</a></li> <li><a href="palvelut.html">Palvelut</a></li> <li><a href="tutkimus.html">Tutkimus</a></li> </ul> </section> </nav> <div class="main"> <div class="row breadcrumbs"> <div class="left-aligned large-10 large-offset-1 end columns"> <a href="ihmiset.html#thumbnails">Ihmiset<span class="icon-arrow-right"></span></a><span class="font-bold">Timo Tiuraniemi</span> </div> </div> <div class="section section-person row"> <div class="large-5 large-offset-1 left-aligned columns"> <h1>Timo Tiuraniemi</h1> <div class="description"> Timo Tiuraniemi (FM) kehitt&auml;&auml; Filosofian Akatemiassa <a href="http://ext.md" target="_blank">Extended Mind</a> -sovellusta, jonka tavoitteena on 
tukea ja auttaa ihmismielt&auml; arjenhallinnassa ja v&auml;hent&auml;&auml; stressi&auml;. H&auml;n vet&auml;&auml; my&ouml;s ajattelunhallintaan liittyvi&auml; valmennuksia. Timo haluaa tehd&auml; joka p&auml;iv&auml; mahdollisimman hyv&auml;&auml; ja kaunista ty&ouml;t&auml;, joka vie h&auml;nelle t&auml;rkeit&auml; asioita eteenp&auml;in. Yksi niist&auml; on ihmisten arjen sujuvoittaminen. El&auml;m&auml;nlaatu paranee, kun s&auml;hl&auml;ys v&auml;henee. </div> <h4>Erityisalueet:</h4> <p> <ul class="list-plain"> <li>Teknologia</li> <li>Ajattelunhallinta</li> <li>Mielenfilosofia</li> </ul> </p> <h4>Tutkimushankkeet:</h4> <p> <ul class="list-plain"> <li>Extended Medium Theory</li> <li><a href="http://ext.md" target="_blank">The Extended Mind</a></li> </ul> </p> <h4>Blogi:</h4> <p> <ul class="list-link"> <li><a href="http://extendedmind.org" target="_blank">extendedmind.org</a></li> </ul> </p> <h4>Twitter:</h4> <p><a href="http://twitter.com/ttiurani" target="_blank">@ttiurani</a></p> </div> <div class="large-5 columns end person-details"> <div class="show-for-large-up"><br/><br/></div> <img src="sites/default/files/newfa/img/timo-large.jpg"/> <div class="person-contact left-aligned"> <h4>Ota yhteytt&auml;:</h4> <p>+358 40 823 2355<br/> <a href="mailto:timo.tiuraniemi@filosofianakatemia.fi">timo.tiuraniemi@filosofianakatemia.fi</a> </p> </div> </div> </div> <div class="section wrapper-embedded row person-quote"> <div class="small-3 medium-offset-1 right-aligned columns"> <span class="icon-quote"></span> </div> <div class="small-9 medium-4 end left-aligned columns"> <p class="quote">&#x201c;Timossa yhdistyy insin&ouml;&ouml;rin ja filosofin parhaat puolet. 
H&auml;n kykenee nopeasti hahmottamaan laajoja kokonaisuuksia ja olemaan samalla todella t&auml;sm&auml;llinen.&#x201d;</p> <div class="row separator"> <div class="underline small-3 medium-2 large-1 end columns"></div> </div> <div class="quote-author">Jukka-Pekka Salo</div> </div> </div> </div> </div> <div class="footer"> <div class="row hide-for-small-only"> <div class="medium-12 large-8 large-offset-2 end columns"> <ul class="icon-bar four-up"> <li class="item"><a href="etusivu.html">Etusivu</a></li> <li class="item"><a href="ihmiset.html">Ihmiset</a></li> <li class="item"><a href="palvelut.html">Palvelut</a></li> <li class="item"><a href="tutkimus.html">Tutkimus</a></li> </ul> </div> </div> <div class="row"> <div class="small-12 medium-4 medium-offset-4 columns"> <div class="address"> <span class="font-bold">Filosofian Akatemia Oy</span><br/> <span>Bulevardi 9 B 28</span><br/> <span>00120 Helsinki</span><br/> <a href="mailto:info@filosofianakatemia.fi"><span class="font-bold">info@filosofianakatemia.fi</span></a><br/> <span>+358 44 7432 020</span> </div> </div> </div> </div> <script type='text/javascript' src='//code.jquery.com/jquery-2.1.3.min.js'></script> <script type='text/javascript' src='//cdnjs.cloudflare.com/ajax/libs/modernizr/2.8.3/modernizr.min.js'></script> <script type='text/javascript' src='sites/default/files/newfa/js/fastclick.js'></script> <script type='text/javascript' src='sites/default/files/newfa/js/foundation.min.js'></script> <script> $(document).foundation(); </script> </body> </html>
ttiurani/filosofianakatemia2
ihmiset_timo.html
HTML
agpl-3.0
8,085
DELETE FROM `weenie` WHERE `class_Id` = 2356;

INSERT INTO `weenie` (`class_Id`, `class_Name`, `type`, `last_Modified`)
VALUES (2356, 'portallighthousetop', 7, '2019-02-10 00:00:00') /* Portal */;

INSERT INTO `weenie_properties_int` (`object_Id`, `type`, `value`)
VALUES (2356,    1, 65536) /* ItemType - Portal */
     , (2356,   16,    32) /* ItemUseable - Remote */
     , (2356,   86,    18) /* MinLevel */
     , (2356,   93,  3084) /* PhysicsState - Ethereal, ReportCollisions, Gravity, LightingOn */
     , (2356,  111,    17) /* PortalBitmask - Unrestricted, NoSummon */
     , (2356,  133,     4) /* ShowableOnRadar - ShowAlways */
     , (2356, 8007,     0) /* PCAPRecordedAutonomousMovement */;

INSERT INTO `weenie_properties_bool` (`object_Id`, `type`, `value`)
VALUES (2356, 1, True ) /* Stuck */;

INSERT INTO `weenie_properties_float` (`object_Id`, `type`, `value`)
VALUES (2356, 54, -0.1) /* UseRadius */;

INSERT INTO `weenie_properties_string` (`object_Id`, `type`, `value`)
VALUES (2356,    1, 'Portal to Lighthouse') /* Name */
     , (2356, 8006, 'AAA9AAAAAAA=') /* PCAPRecordedCurrentMotionState */;

INSERT INTO `weenie_properties_d_i_d` (`object_Id`, `type`, `value`)
VALUES (2356,    1,  33555923) /* Setup */
     , (2356,    2, 150994947) /* MotionTable */
     , (2356,    8, 100667499) /* Icon */
     , (2356, 8001,   8388656) /* PCAPRecordedWeenieHeader - Usable, UseRadius, RadarBehavior */
     , (2356, 8003,    262164) /* PCAPRecordedObjectDesc - Stuck, Attackable, Portal */
     , (2356, 8005,     98307) /* PCAPRecordedPhysicsDesc - CSetup, MTable, Position, Movement */;

INSERT INTO `weenie_properties_position` (`object_Id`, `position_Type`, `obj_Cell_Id`, `origin_X`, `origin_Y`, `origin_Z`, `angles_W`, `angles_X`, `angles_Y`, `angles_Z`)
VALUES (2356, 8040, 612630579, 159.735, 56.6959, 339.937, -0.3378759, 0, 0, -0.9411907) /* PCAPRecordedLocation */
/* @teleloc 0x24840033 [159.735000 56.695900 339.937000] -0.337876 0.000000 0.000000 -0.941191 */;

INSERT INTO `weenie_properties_i_i_d` (`object_Id`, `type`, `value`)
VALUES (2356, 8000, 1917337603) /* PCAPRecordedObjectIID */;
LtRipley36706/ACE-World
Database/3-Core/9 WeenieDefaults/SQL/Portal/Portal/02356 Portal to Lighthouse.sql
SQL
agpl-3.0
2,166
\hypertarget{structnlohmann_1_1detail_1_1is__compatible__integer__type__impl}{}\section{nlohmann\+:\+:detail\+:\+:is\+\_\+compatible\+\_\+integer\+\_\+type\+\_\+impl$<$ bool, typename, typename $>$ Struct Template Reference} \label{structnlohmann_1_1detail_1_1is__compatible__integer__type__impl}\index{nlohmann\+::detail\+::is\+\_\+compatible\+\_\+integer\+\_\+type\+\_\+impl$<$ bool, typename, typename $>$@{nlohmann\+::detail\+::is\+\_\+compatible\+\_\+integer\+\_\+type\+\_\+impl$<$ bool, typename, typename $>$}} Inheritance diagram for nlohmann\+:\+:detail\+:\+:is\+\_\+compatible\+\_\+integer\+\_\+type\+\_\+impl$<$ bool, typename, typename $>$\+:\begin{figure}[H] \begin{center} \leavevmode \includegraphics[height=2.000000cm]{dd/d13/structnlohmann_1_1detail_1_1is__compatible__integer__type__impl} \end{center} \end{figure} The documentation for this struct was generated from the following file\+:\begin{DoxyCompactItemize} \item /home/mutzi/progs/linux\+\_\+workspace/alexa-\/avs-\/prototype/src/include/nlohmann\+\_\+json.\+h\end{DoxyCompactItemize}
blackmutzi/alexa-avs-prototype
docs/latex/dd/d13/structnlohmann_1_1detail_1_1is__compatible__integer__type__impl.tex
TeX
gpl-3.0
1,065
#include <windows.h>

#include "NativeCore.hpp"

bool RC_CallConv IsProcessValid(RC_Pointer handle)
{
	if (handle == nullptr)
	{
		return false;
	}

	const auto retn = WaitForSingleObject(handle, 0);
	if (retn == WAIT_FAILED)
	{
		return false;
	}

	return retn == WAIT_TIMEOUT;
}
KN4CK3R/ReClass.NET
NativeCore/Windows/IsProcessValid.cpp
C++
mit
281
#include <stdio.h> #include <stdlib.h> #include <string.h> #include <errno.h> #include <assert.h> #include "treestore.h" #ifdef WIN32 #include <malloc.h> #else #include <alloca.h> #endif struct ts_node *ts_text_load(struct ts_io *io); int ts_text_save(struct ts_node *tree, struct ts_io *io); static long io_read(void *buf, size_t bytes, void *uptr); static long io_write(const void *buf, size_t bytes, void *uptr); /* ---- ts_value implementation ---- */ int ts_init_value(struct ts_value *tsv) { memset(tsv, 0, sizeof *tsv); return 0; } void ts_destroy_value(struct ts_value *tsv) { int i; free(tsv->str); free(tsv->vec); for(i=0; i<tsv->array_size; i++) { ts_destroy_value(tsv->array + i); } free(tsv->array); } struct ts_value *ts_alloc_value(void) { struct ts_value *v = malloc(sizeof *v); if(!v || ts_init_value(v) == -1) { free(v); return 0; } return v; } void ts_free_value(struct ts_value *tsv) { ts_destroy_value(tsv); free(tsv); } int ts_copy_value(struct ts_value *dest, struct ts_value *src) { int i; if(dest == src) return 0; *dest = *src; dest->str = 0; dest->vec = 0; dest->array = 0; if(src->str) { if(!(dest->str = malloc(strlen(src->str) + 1))) { goto fail; } strcpy(dest->str, src->str); } if(src->vec && src->vec_size > 0) { if(!(dest->vec = malloc(src->vec_size * sizeof *src->vec))) { goto fail; } memcpy(dest->vec, src->vec, src->vec_size * sizeof *src->vec); } if(src->array && src->array_size > 0) { if(!(dest->array = calloc(src->array_size, sizeof *src->array))) { goto fail; } for(i=0; i<src->array_size; i++) { if(ts_copy_value(dest->array + i, src->array + i) == -1) { goto fail; } } } return 0; fail: free(dest->str); free(dest->vec); if(dest->array) { for(i=0; i<dest->array_size; i++) { ts_destroy_value(dest->array + i); } free(dest->array); } return -1; } #define MAKE_NUMSTR_FUNC(type, fmt) \ static char *make_##type##str(type x) \ { \ static char scrap[128]; \ char *str; \ int sz = snprintf(scrap, sizeof scrap, fmt, x); \ if(!(str = malloc(sz + 1))) return 
0; \ sprintf(str, fmt, x); \ return str; \ } MAKE_NUMSTR_FUNC(int, "%d") MAKE_NUMSTR_FUNC(float, "%g") struct val_list_node { struct ts_value val; struct val_list_node *next; }; int ts_set_value_str(struct ts_value *tsv, const char *str) { if(tsv->str) { ts_destroy_value(tsv); if(ts_init_value(tsv) == -1) { return -1; } } tsv->type = TS_STRING; if(!(tsv->str = malloc(strlen(str) + 1))) { return -1; } strcpy(tsv->str, str); #if 0 /* try to parse the string and see if it fits any of the value types */ if(*str == '[' || *str == '{') { /* try to parse as a vector */ struct val_list_node *list = 0, *tail = 0, *node; int nelem = 0; char endsym = *str++ + 2; /* ']' is '[' + 2 and '}' is '{' + 2 */ while(*str && *str != endsym) { float val = strtod(str, &endp); if(endp == str || !(node = malloc(sizeof *node))) { break; } ts_init_value(&node->val); ts_set_valuef(&node->val, val); node->next = 0; if(list) { tail->next = node; tail = node; } else { list = tail = node; } ++nelem; str = endp; } if(nelem && (tsv->array = malloc(nelem * sizeof *tsv->array)) && (tsv->vec = malloc(nelem * sizeof *tsv->vec))) { int idx = 0; while(list) { node = list; list = list->next; tsv->array[idx] = node->val; tsv->vec[idx] = node->val.fnum; ++idx; free(node); } tsv->type = TS_VECTOR; } } else if((tsv->fnum = strtod(str, &endp)), endp != str) { /* it's a number I guess... 
*/ tsv->type = TS_NUMBER; } #endif return 0; } int ts_set_valuei_arr(struct ts_value *tsv, int count, const int *arr) { int i; if(count < 1) return -1; if(count == 1) { if(!(tsv->str = make_intstr(*arr))) { return -1; } tsv->type = TS_NUMBER; tsv->fnum = (float)*arr; tsv->inum = *arr; return 0; } /* otherwise it's an array, we need to create the ts_value array, and * the simplified vector */ if(!(tsv->vec = malloc(count * sizeof *tsv->vec))) { return -1; } tsv->vec_size = count; for(i=0; i<count; i++) { tsv->vec[i] = arr[i]; } if(!(tsv->array = malloc(count * sizeof *tsv->array))) { free(tsv->vec); } tsv->array_size = count; for(i=0; i<count; i++) { ts_init_value(tsv->array + i); ts_set_valuef(tsv->array + i, arr[i]); } tsv->type = TS_VECTOR; return 0; } int ts_set_valueiv(struct ts_value *tsv, int count, ...) { int res; va_list ap; va_start(ap, count); res = ts_set_valueiv_va(tsv, count, ap); va_end(ap); return res; } int ts_set_valueiv_va(struct ts_value *tsv, int count, va_list ap) { int i, *vec; if(count < 1) return -1; if(count == 1) { int num = va_arg(ap, int); ts_set_valuei(tsv, num); return 0; } vec = alloca(count * sizeof *vec); for(i=0; i<count; i++) { vec[i] = va_arg(ap, int); } return ts_set_valuei_arr(tsv, count, vec); } int ts_set_valuei(struct ts_value *tsv, int inum) { return ts_set_valuei_arr(tsv, 1, &inum); } int ts_set_valuef_arr(struct ts_value *tsv, int count, const float *arr) { int i; if(count < 1) return -1; if(count == 1) { if(!(tsv->str = make_floatstr(*arr))) { return -1; } tsv->type = TS_NUMBER; tsv->fnum = *arr; tsv->inum = (int)*arr; return 0; } /* otherwise it's an array, we need to create the ts_value array, and * the simplified vector */ if(!(tsv->vec = malloc(count * sizeof *tsv->vec))) { return -1; } tsv->vec_size = count; for(i=0; i<count; i++) { tsv->vec[i] = arr[i]; } if(!(tsv->array = malloc(count * sizeof *tsv->array))) { free(tsv->vec); } tsv->array_size = count; for(i=0; i<count; i++) { ts_init_value(tsv->array + i); 
ts_set_valuef(tsv->array + i, arr[i]); } tsv->type = TS_VECTOR; return 0; } int ts_set_valuefv(struct ts_value *tsv, int count, ...) { int res; va_list ap; va_start(ap, count); res = ts_set_valuefv_va(tsv, count, ap); va_end(ap); return res; } int ts_set_valuefv_va(struct ts_value *tsv, int count, va_list ap) { int i; float *vec; if(count < 1) return -1; if(count == 1) { float num = va_arg(ap, double); ts_set_valuef(tsv, num); return 0; } vec = alloca(count * sizeof *vec); for(i=0; i<count; i++) { vec[i] = va_arg(ap, double); } return ts_set_valuef_arr(tsv, count, vec); } int ts_set_valuef(struct ts_value *tsv, float fnum) { return ts_set_valuef_arr(tsv, 1, &fnum); } int ts_set_value_arr(struct ts_value *tsv, int count, const struct ts_value *arr) { int i, allnum = 1; if(count <= 1) return -1; if(!(tsv->array = malloc(count * sizeof *tsv->array))) { return -1; } tsv->array_size = count; for(i=0; i<count; i++) { if(arr[i].type != TS_NUMBER) { allnum = 0; } if(ts_copy_value(tsv->array + i, (struct ts_value*)arr + i) == -1) { while(--i >= 0) { ts_destroy_value(tsv->array + i); } free(tsv->array); tsv->array = 0; return -1; } } if(allnum) { if(!(tsv->vec = malloc(count * sizeof *tsv->vec))) { ts_destroy_value(tsv); return -1; } tsv->type = TS_VECTOR; tsv->vec_size = count; for(i=0; i<count; i++) { tsv->vec[i] = tsv->array[i].fnum; } } else { tsv->type = TS_ARRAY; } return 0; } int ts_set_valuev(struct ts_value *tsv, int count, ...) 
{ int res; va_list ap; va_start(ap, count); res = ts_set_valuev_va(tsv, count, ap); va_end(ap); return res; } int ts_set_valuev_va(struct ts_value *tsv, int count, va_list ap) { int i; if(count <= 1) return -1; if(!(tsv->array = malloc(count * sizeof *tsv->array))) { return -1; } tsv->array_size = count; for(i=0; i<count; i++) { struct ts_value *src = va_arg(ap, struct ts_value*); if(ts_copy_value(tsv->array + i, src) == -1) { while(--i >= 0) { ts_destroy_value(tsv->array + i); } free(tsv->array); tsv->array = 0; return -1; } } return 0; } /* ---- ts_attr implementation ---- */ int ts_init_attr(struct ts_attr *attr) { memset(attr, 0, sizeof *attr); return ts_init_value(&attr->val); } void ts_destroy_attr(struct ts_attr *attr) { free(attr->name); ts_destroy_value(&attr->val); } struct ts_attr *ts_alloc_attr(void) { struct ts_attr *attr = malloc(sizeof *attr); if(!attr || ts_init_attr(attr) == -1) { free(attr); return 0; } return attr; } void ts_free_attr(struct ts_attr *attr) { ts_destroy_attr(attr); free(attr); } int ts_copy_attr(struct ts_attr *dest, struct ts_attr *src) { if(dest == src) return 0; if(ts_set_attr_name(dest, src->name) == -1) { return -1; } if(ts_copy_value(&dest->val, &src->val) == -1) { ts_destroy_attr(dest); return -1; } return 0; } int ts_set_attr_name(struct ts_attr *attr, const char *name) { char *n = malloc(strlen(name) + 1); if(!n) return -1; strcpy(n, name); free(attr->name); attr->name = n; return 0; } /* ---- ts_node implementation ---- */ int ts_init_node(struct ts_node *node) { memset(node, 0, sizeof *node); return 0; } void ts_destroy_node(struct ts_node *node) { if(!node) return; free(node->name); while(node->attr_list) { struct ts_attr *attr = node->attr_list; node->attr_list = node->attr_list->next; ts_free_attr(attr); } } struct ts_node *ts_alloc_node(void) { struct ts_node *node = malloc(sizeof *node); if(!node || ts_init_node(node) == -1) { free(node); return 0; } return node; } void ts_free_node(struct ts_node *node) { 
ts_destroy_node(node); free(node); } void ts_free_tree(struct ts_node *tree) { if(!tree) return; while(tree->child_list) { struct ts_node *child = tree->child_list; tree->child_list = tree->child_list->next; ts_free_tree(child); } ts_free_node(tree); } int ts_set_node_name(struct ts_node *node, const char *name) { char *n = malloc(strlen(name) + 1); if(!n) return -1; strcpy(n, name); free(node->name); node->name = n; return 0; } void ts_add_attr(struct ts_node *node, struct ts_attr *attr) { attr->next = 0; if(node->attr_list) { node->attr_tail->next = attr; node->attr_tail = attr; } else { node->attr_list = node->attr_tail = attr; } node->attr_count++; } struct ts_attr *ts_get_attr(struct ts_node *node, const char *name) { struct ts_attr *attr = node->attr_list; while(attr) { if(strcmp(attr->name, name) == 0) { return attr; } attr = attr->next; } return 0; } const char *ts_get_attr_str(struct ts_node *node, const char *aname, const char *def_val) { struct ts_attr *attr = ts_get_attr(node, aname); if(!attr || !attr->val.str) { return def_val; } return attr->val.str; } float ts_get_attr_num(struct ts_node *node, const char *aname, float def_val) { struct ts_attr *attr = ts_get_attr(node, aname); if(!attr || attr->val.type != TS_NUMBER) { return def_val; } return attr->val.fnum; } int ts_get_attr_int(struct ts_node *node, const char *aname, int def_val) { struct ts_attr *attr = ts_get_attr(node, aname); if(!attr || attr->val.type != TS_NUMBER) { return def_val; } return attr->val.inum; } float *ts_get_attr_vec(struct ts_node *node, const char *aname, float *def_val) { struct ts_attr *attr = ts_get_attr(node, aname); if(!attr || !attr->val.vec) { return def_val; } return attr->val.vec; } struct ts_value *ts_get_attr_array(struct ts_node *node, const char *aname, struct ts_value *def_val) { struct ts_attr *attr = ts_get_attr(node, aname); if(!attr || !attr->val.array) { return def_val; } return attr->val.array; } void ts_add_child(struct ts_node *node, struct ts_node 
*child) { if(child->parent) { if(child->parent == node) return; ts_remove_child(child->parent, child); } child->parent = node; child->next = 0; if(node->child_list) { node->child_tail->next = child; node->child_tail = child; } else { node->child_list = node->child_tail = child; } node->child_count++; } int ts_remove_child(struct ts_node *node, struct ts_node *child) { struct ts_node dummy, *iter = &dummy; dummy.next = node->child_list; while(iter->next && iter->next != child) { iter = iter->next; } if(!iter->next) { return -1; } child->parent = 0; iter->next = child->next; if(!iter->next) { node->child_tail = iter; } node->child_list = dummy.next; node->child_count--; assert(node->child_count >= 0); return 0; } struct ts_node *ts_get_child(struct ts_node *node, const char *name) { struct ts_node *res = node->child_list; while(res) { if(strcmp(res->name, name) == 0) { return res; } res = res->next; } return 0; } struct ts_node *ts_load(const char *fname) { FILE *fp; struct ts_node *root; if(!(fp = fopen(fname, "rb"))) { fprintf(stderr, "ts_load: failed to open file: %s: %s\n", fname, strerror(errno)); return 0; } root = ts_load_file(fp); fclose(fp); return root; } struct ts_node *ts_load_file(FILE *fp) { struct ts_io io = {0}; io.data = fp; io.read = io_read; return ts_load_io(&io); } struct ts_node *ts_load_io(struct ts_io *io) { return ts_text_load(io); } int ts_save(struct ts_node *tree, const char *fname) { FILE *fp; int res; if(!(fp = fopen(fname, "wb"))) { fprintf(stderr, "ts_save: failed to open file: %s: %s\n", fname, strerror(errno)); return 0; } res = ts_save_file(tree, fp); fclose(fp); return res; } int ts_save_file(struct ts_node *tree, FILE *fp) { struct ts_io io = {0}; io.data = fp; io.write = io_write; return ts_save_io(tree, &io); } int ts_save_io(struct ts_node *tree, struct ts_io *io) { return ts_text_save(tree, io); } static const char *pathtok(const char *path, char *tok) { int len; const char *dot = strchr(path, '.'); if(!dot) { strcpy(tok, 
path); return 0; } len = dot - path; memcpy(tok, path, len); tok[len] = 0; return dot + 1; } struct ts_attr *ts_lookup(struct ts_node *node, const char *path) { char *name = alloca(strlen(path) + 1); if(!node) return 0; if(!(path = pathtok(path, name)) || strcmp(name, node->name) != 0) { return 0; } while((path = pathtok(path, name)) && (node = ts_get_child(node, name))); if(path || !node) return 0; return ts_get_attr(node, name); } const char *ts_lookup_str(struct ts_node *root, const char *path, const char *def_val) { struct ts_attr *attr = ts_lookup(root, path); if(!attr || !attr->val.str) { return def_val; } return attr->val.str; } float ts_lookup_num(struct ts_node *root, const char *path, float def_val) { struct ts_attr *attr = ts_lookup(root, path); if(!attr || attr->val.type != TS_NUMBER) { return def_val; } return attr->val.fnum; } int ts_lookup_int(struct ts_node *root, const char *path, int def_val) { struct ts_attr *attr = ts_lookup(root, path); if(!attr || attr->val.type != TS_NUMBER) { return def_val; } return attr->val.inum; } float *ts_lookup_vec(struct ts_node *root, const char *path, float *def_val) { struct ts_attr *attr = ts_lookup(root, path); if(!attr || !attr->val.vec) { return def_val; } return attr->val.vec; } struct ts_value *ts_lookup_array(struct ts_node *node, const char *path, struct ts_value *def_val) { struct ts_attr *attr = ts_lookup(node, path); if(!attr || !attr->val.array) { return def_val; } return attr->val.array; } static long io_read(void *buf, size_t bytes, void *uptr) { size_t sz = fread(buf, 1, bytes, uptr); if(sz < bytes && errno) return -1; return sz; } static long io_write(const void *buf, size_t bytes, void *uptr) { size_t sz = fwrite(buf, 1, bytes, uptr); if(sz < bytes && errno) return -1; return sz; }
jtsiomb/libtreestore
src/treestore.c
C
mit
15,446
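The `ts_lookup` helpers in the treestore record above walk a dotted path such as `scene.light.color` through the node tree, returning a caller-supplied default when any segment is missing. A minimal sketch of the same traversal idea, written in JavaScript rather than C for brevity (the tree shape here is hypothetical, not libtreestore's actual structs):

```javascript
// Walk a dotted path through a nested node tree, returning a default
// when any segment is missing -- the same shape as ts_lookup_str above.
function lookup(node, path, defVal) {
  const parts = path.split('.');
  if (!node || node.name !== parts[0]) return defVal;
  let cur = node;
  // every middle segment names a child node; the last names an attribute
  for (let i = 1; i < parts.length - 1; i++) {
    cur = (cur.children || []).find(c => c.name === parts[i]);
    if (!cur) return defVal;
  }
  const attr = (cur.attributes || {})[parts[parts.length - 1]];
  return attr === undefined ? defVal : attr;
}

const tree = {
  name: 'scene',
  attributes: {},
  children: [{ name: 'light', attributes: { color: 'white' }, children: [] }],
};
console.log(lookup(tree, 'scene.light.color', 'none')); // white
console.log(lookup(tree, 'scene.camera.fov', 'none'));  // none
```

Note that, like the C version, the first path segment must match the root node's own name before descent begins.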
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Domain
{
    public class Meeting
    {
        public int ConsultantId { get; set; }
        public Consultant Consultant { get; set; }
        public int UserId { get; set; }
        public User User { get; set; }
        public DateTime BeginTime { get; set; }
        public DateTime EndTime { get; set; }

        public override string ToString()
        {
            return $"{BeginTime} -> {EndTime}";
        }
    }
}
rohansen/Code-Examples
Database/TransactionScopeWithGUI/Domain/Meeting.cs
C#
mit
559
/// <reference types="openlayers" />
import { OnInit } from '@angular/core';
import { source, ProjectionLike, format } from 'openlayers';
import { LayerVectorComponent } from '../layers';
import { SourceComponent } from './source.component';
export declare class SourceGeoJSONComponent extends SourceComponent implements OnInit {
    instance: source.Vector;
    format: format.Feature;
    defaultDataProjection: ProjectionLike;
    featureProjection: ProjectionLike;
    geometryName: string;
    url: string;
    constructor(layer: LayerVectorComponent);
    ngOnInit(): void;
}
karomamczi/ngx-openlayers
dist/components/sources/geojson.component.d.ts
TypeScript
mpl-2.0
582
.PHONY : clean

CC = g++
FLAGS = -std=c++11 -fPIC -g -c #-Wall
LDFLAG = -shared
SHELL = /bin/sh

LIBDIR = ../lib
INCLUDEDIR = ../include
TARGET = $(LIBDIR)/kdTree.so
SOURCES = $(shell echo ./*.cpp)
HEADERS = $(shell echo ../include/*.hpp)
OBJECTS = $(SOURCES:.cpp=.o)

EXEDIR = ../bin
TESTSOURCES = $(shell echo ../test/*.cpp)
TESTOBJECTS = ../src/testkdTree.o
TESTTARGET = $(EXEDIR)/testkdTree

all: $(TESTTARGET)

clean:
	rm -f $(OBJECTS) $(TARGET) $(TESTOBJECTS) $(TESTTARGET)

$(TESTTARGET): $(TESTOBJECTS) $(TARGET)
	$(CC) -L$(TARGET) -o $@ $(TESTOBJECTS) $(OBJECTS)

$(TESTOBJECTS): $(TESTSOURCES)
	$(CC) $(FLAGS) -I$(INCLUDEDIR) $(TESTSOURCES)

$(TARGET): $(OBJECTS)
	$(CC) $(LDFLAG) -Wl,-soname,$(TARGET) -o $(TARGET) $(OBJECTS)

$(OBJECTS): $(SOURCES)
	$(CC) $(FLAGS) -I$(INCLUDEDIR) $(SOURCES)
PoorniK/kdTree
src/Makefile
Makefile
bsd-2-clause
809
// Copyright 2010 Dolphin Emulator Project // Licensed under GPLv2+ // Refer to the license.txt file included. // --------------------------------------------------------------------------------------------- // GC graphics pipeline // --------------------------------------------------------------------------------------------- // 3d commands are issued through the fifo. The GPU draws to the 2MB EFB. // The efb can be copied back into ram in two forms: as textures or as XFB. // The XFB is the region in RAM that the VI chip scans out to the television. // So, after all rendering to EFB is done, the image is copied into one of two XFBs in RAM. // Next frame, that one is scanned out and the other one gets the copy. = double buffering. // --------------------------------------------------------------------------------------------- #include <cinttypes> #include <cmath> #include <string> #include "Common/Atomic.h" #include "Common/Event.h" #include "Common/Profiler.h" #include "Common/StringUtil.h" #include "Common/Timer.h" #include "Core/ConfigManager.h" #include "Core/Core.h" #include "Core/Host.h" #include "Core/Movie.h" #include "Core/FifoPlayer/FifoRecorder.h" #include "Core/HW/VideoInterface.h" #include "VideoCommon/AVIDump.h" #include "VideoCommon/BPMemory.h" #include "VideoCommon/CommandProcessor.h" #include "VideoCommon/CPMemory.h" #include "VideoCommon/Debugger.h" #include "VideoCommon/Fifo.h" #include "VideoCommon/FPSCounter.h" #include "VideoCommon/FramebufferManagerBase.h" #include "VideoCommon/MainBase.h" #include "VideoCommon/OpcodeDecoding.h" #include "VideoCommon/RenderBase.h" #include "VideoCommon/Statistics.h" #include "VideoCommon/TextureCacheBase.h" #include "VideoCommon/VideoConfig.h" #include "VideoCommon/XFMemory.h" // TODO: Move these out of here. 
int frameCount; int OSDChoice; static int OSDTime; Renderer *g_renderer = nullptr; std::mutex Renderer::s_criticalScreenshot; std::string Renderer::s_sScreenshotName; Common::Event Renderer::s_screenshotCompleted; volatile bool Renderer::s_bScreenshot; // The framebuffer size int Renderer::s_target_width; int Renderer::s_target_height; // TODO: Add functionality to reinit all the render targets when the window is resized. int Renderer::s_backbuffer_width; int Renderer::s_backbuffer_height; PostProcessingShaderImplementation* Renderer::m_post_processor; TargetRectangle Renderer::target_rc; int Renderer::s_last_efb_scale; bool Renderer::XFBWrited; PEControl::PixelFormat Renderer::prev_efb_format = PEControl::INVALID_FMT; unsigned int Renderer::efb_scale_numeratorX = 1; unsigned int Renderer::efb_scale_numeratorY = 1; unsigned int Renderer::efb_scale_denominatorX = 1; unsigned int Renderer::efb_scale_denominatorY = 1; Renderer::Renderer() : frame_data() , bLastFrameDumped(false) { UpdateActiveConfig(); TextureCache::OnConfigChanged(g_ActiveConfig); #if defined _WIN32 || defined HAVE_LIBAV bAVIDumping = false; #endif OSDChoice = 0; OSDTime = 0; } Renderer::~Renderer() { // invalidate previous efb format prev_efb_format = PEControl::INVALID_FMT; efb_scale_numeratorX = efb_scale_numeratorY = efb_scale_denominatorX = efb_scale_denominatorY = 1; #if defined _WIN32 || defined HAVE_LIBAV if (SConfig::GetInstance().m_DumpFrames && bLastFrameDumped && bAVIDumping) AVIDump::Stop(); #else if (pFrameDump.IsOpen()) pFrameDump.Close(); #endif } void Renderer::RenderToXFB(u32 xfbAddr, const EFBRectangle& sourceRc, u32 fbStride, u32 fbHeight, float Gamma) { CheckFifoRecording(); if (!fbStride || !fbHeight) return; XFBWrited = true; if (g_ActiveConfig.bUseXFB) { FramebufferManagerBase::CopyToXFB(xfbAddr, fbStride, fbHeight, sourceRc, Gamma); } else { // below div two to convert from bytes to pixels - it expects width, not stride Swap(xfbAddr, fbStride/2, fbStride/2, fbHeight, 
sourceRc, Gamma); } } int Renderer::EFBToScaledX(int x) { switch (g_ActiveConfig.iEFBScale) { case SCALE_AUTO: // fractional return FramebufferManagerBase::ScaleToVirtualXfbWidth(x); default: return x * (int)efb_scale_numeratorX / (int)efb_scale_denominatorX; }; } int Renderer::EFBToScaledY(int y) { switch (g_ActiveConfig.iEFBScale) { case SCALE_AUTO: // fractional return FramebufferManagerBase::ScaleToVirtualXfbHeight(y); default: return y * (int)efb_scale_numeratorY / (int)efb_scale_denominatorY; }; } void Renderer::CalculateTargetScale(int x, int y, int* scaledX, int* scaledY) { if (g_ActiveConfig.iEFBScale == SCALE_AUTO || g_ActiveConfig.iEFBScale == SCALE_AUTO_INTEGRAL) { *scaledX = x; *scaledY = y; } else { *scaledX = x * (int)efb_scale_numeratorX / (int)efb_scale_denominatorX; *scaledY = y * (int)efb_scale_numeratorY / (int)efb_scale_denominatorY; } } // return true if target size changed bool Renderer::CalculateTargetSize(unsigned int framebuffer_width, unsigned int framebuffer_height) { int newEFBWidth, newEFBHeight; newEFBWidth = newEFBHeight = 0; // TODO: Ugly. 
Clean up switch (s_last_efb_scale) { case SCALE_AUTO: case SCALE_AUTO_INTEGRAL: newEFBWidth = FramebufferManagerBase::ScaleToVirtualXfbWidth(EFB_WIDTH); newEFBHeight = FramebufferManagerBase::ScaleToVirtualXfbHeight(EFB_HEIGHT); if (s_last_efb_scale == SCALE_AUTO_INTEGRAL) { newEFBWidth = ((newEFBWidth-1) / EFB_WIDTH + 1) * EFB_WIDTH; newEFBHeight = ((newEFBHeight-1) / EFB_HEIGHT + 1) * EFB_HEIGHT; } efb_scale_numeratorX = newEFBWidth; efb_scale_denominatorX = EFB_WIDTH; efb_scale_numeratorY = newEFBHeight; efb_scale_denominatorY = EFB_HEIGHT; break; case SCALE_1X: efb_scale_numeratorX = efb_scale_numeratorY = 1; efb_scale_denominatorX = efb_scale_denominatorY = 1; break; case SCALE_1_5X: efb_scale_numeratorX = efb_scale_numeratorY = 3; efb_scale_denominatorX = efb_scale_denominatorY = 2; break; case SCALE_2X: efb_scale_numeratorX = efb_scale_numeratorY = 2; efb_scale_denominatorX = efb_scale_denominatorY = 1; break; case SCALE_2_5X: efb_scale_numeratorX = efb_scale_numeratorY = 5; efb_scale_denominatorX = efb_scale_denominatorY = 2; break; default: efb_scale_numeratorX = efb_scale_numeratorY = s_last_efb_scale - 3; efb_scale_denominatorX = efb_scale_denominatorY = 1; int maxSize; maxSize = GetMaxTextureSize(); if ((unsigned)maxSize < EFB_WIDTH * efb_scale_numeratorX / efb_scale_denominatorX) { efb_scale_numeratorX = efb_scale_numeratorY = (maxSize / EFB_WIDTH); efb_scale_denominatorX = efb_scale_denominatorY = 1; } break; } if (s_last_efb_scale > SCALE_AUTO_INTEGRAL) CalculateTargetScale(EFB_WIDTH, EFB_HEIGHT, &newEFBWidth, &newEFBHeight); if (newEFBWidth != s_target_width || newEFBHeight != s_target_height) { s_target_width = newEFBWidth; s_target_height = newEFBHeight; return true; } return false; } void Renderer::ConvertStereoRectangle(const TargetRectangle& rc, TargetRectangle& leftRc, TargetRectangle& rightRc) { // Resize target to half its original size TargetRectangle drawRc = rc; if (g_ActiveConfig.iStereoMode == STEREO_TAB) { // The height may be negative 
due to flipped rectangles int height = rc.bottom - rc.top; drawRc.top += height / 4; drawRc.bottom -= height / 4; } else { int width = rc.right - rc.left; drawRc.left += width / 4; drawRc.right -= width / 4; } // Create two target rectangle offset to the sides of the backbuffer leftRc = drawRc, rightRc = drawRc; if (g_ActiveConfig.iStereoMode == STEREO_TAB) { leftRc.top -= s_backbuffer_height / 4; leftRc.bottom -= s_backbuffer_height / 4; rightRc.top += s_backbuffer_height / 4; rightRc.bottom += s_backbuffer_height / 4; } else { leftRc.left -= s_backbuffer_width / 4; leftRc.right -= s_backbuffer_width / 4; rightRc.left += s_backbuffer_width / 4; rightRc.right += s_backbuffer_width / 4; } } void Renderer::SetScreenshot(const std::string& filename) { std::lock_guard<std::mutex> lk(s_criticalScreenshot); s_sScreenshotName = filename; s_bScreenshot = true; } // Create On-Screen-Messages void Renderer::DrawDebugText() { std::string final_yellow, final_cyan; if (g_ActiveConfig.bShowFPS || SConfig::GetInstance().m_ShowFrameCount) { if (g_ActiveConfig.bShowFPS) final_cyan += StringFromFormat("FPS: %d", g_renderer->m_fps_counter.m_fps); if (g_ActiveConfig.bShowFPS && SConfig::GetInstance().m_ShowFrameCount) final_cyan += " - "; if (SConfig::GetInstance().m_ShowFrameCount) { final_cyan += StringFromFormat("Frame: %llu", (unsigned long long) Movie::g_currentFrame); if (Movie::IsPlayingInput()) final_cyan += StringFromFormat(" / %llu", (unsigned long long) Movie::g_totalFrames); } final_cyan += "\n"; final_yellow += "\n"; } if (SConfig::GetInstance().m_ShowLag) { final_cyan += StringFromFormat("Lag: %" PRIu64 "\n", Movie::g_currentLagCount); final_yellow += "\n"; } if (SConfig::GetInstance().m_ShowInputDisplay) { final_cyan += Movie::GetInputDisplay(); final_yellow += "\n"; } // OSD Menu messages if (OSDChoice > 0) { OSDTime = Common::Timer::GetTimeMs() + 3000; OSDChoice = -OSDChoice; } if ((u32)OSDTime > Common::Timer::GetTimeMs()) { std::string res_text; switch 
(g_ActiveConfig.iEFBScale) { case SCALE_AUTO: res_text = "Auto (fractional)"; break; case SCALE_AUTO_INTEGRAL: res_text = "Auto (integral)"; break; case SCALE_1X: res_text = "Native"; break; case SCALE_1_5X: res_text = "1.5x"; break; case SCALE_2X: res_text = "2x"; break; case SCALE_2_5X: res_text = "2.5x"; break; default: res_text = StringFromFormat("%dx", g_ActiveConfig.iEFBScale - 3); break; } const char* ar_text = ""; switch (g_ActiveConfig.iAspectRatio) { case ASPECT_AUTO: ar_text = "Auto"; break; case ASPECT_STRETCH: ar_text = "Stretch"; break; case ASPECT_ANALOG: ar_text = "Force 4:3"; break; case ASPECT_ANALOG_WIDE: ar_text = "Force 16:9"; } const char* const efbcopy_text = g_ActiveConfig.bSkipEFBCopyToRam ? "to Texture" : "to RAM"; // The rows const std::string lines[] = { std::string("Internal Resolution: ") + res_text, std::string("Aspect Ratio: ") + ar_text + (g_ActiveConfig.bCrop ? " (crop)" : ""), std::string("Copy EFB: ") + efbcopy_text, std::string("Fog: ") + (g_ActiveConfig.bDisableFog ? 
"Disabled" : "Enabled"), }; enum { lines_count = sizeof(lines) / sizeof(*lines) }; // The latest changed setting in yellow for (int i = 0; i != lines_count; ++i) { if (OSDChoice == -i - 1) final_yellow += lines[i]; final_yellow += '\n'; } // The other settings in cyan for (int i = 0; i != lines_count; ++i) { if (OSDChoice != -i - 1) final_cyan += lines[i]; final_cyan += '\n'; } } final_cyan += Common::Profiler::ToString(); if (g_ActiveConfig.bOverlayStats) final_cyan += Statistics::ToString(); if (g_ActiveConfig.bOverlayProjStats) final_cyan += Statistics::ToStringProj(); //and then the text g_renderer->RenderText(final_cyan, 20, 20, 0xFF00FFFF); g_renderer->RenderText(final_yellow, 20, 20, 0xFFFFFF00); } void Renderer::UpdateDrawRectangle(int backbuffer_width, int backbuffer_height) { float FloatGLWidth = (float)backbuffer_width; float FloatGLHeight = (float)backbuffer_height; float FloatXOffset = 0; float FloatYOffset = 0; // The rendering window size const float WinWidth = FloatGLWidth; const float WinHeight = FloatGLHeight; // Update aspect ratio hack values // Won't take effect until next frame // Don't know if there is a better place for this code so there isn't a 1 frame delay if (g_ActiveConfig.bWidescreenHack) { float source_aspect = VideoInterface::GetAspectRatio(g_aspect_wide); float target_aspect; switch (g_ActiveConfig.iAspectRatio) { case ASPECT_STRETCH: target_aspect = WinWidth / WinHeight; break; case ASPECT_ANALOG: target_aspect = VideoInterface::GetAspectRatio(false); break; case ASPECT_ANALOG_WIDE: target_aspect = VideoInterface::GetAspectRatio(true); break; default: // ASPECT_AUTO target_aspect = source_aspect; break; } float adjust = source_aspect / target_aspect; if (adjust > 1) { // Vert+ g_Config.fAspectRatioHackW = 1; g_Config.fAspectRatioHackH = 1 / adjust; } else { // Hor+ g_Config.fAspectRatioHackW = adjust; g_Config.fAspectRatioHackH = 1; } } else { // Hack is disabled g_Config.fAspectRatioHackW = 1; g_Config.fAspectRatioHackH = 1; } // 
Check for force-settings and override. // The rendering window aspect ratio as a proportion of the 4:3 or 16:9 ratio float Ratio; switch (g_ActiveConfig.iAspectRatio) { case ASPECT_ANALOG_WIDE: Ratio = (WinWidth / WinHeight) / VideoInterface::GetAspectRatio(true); break; case ASPECT_ANALOG: Ratio = (WinWidth / WinHeight) / VideoInterface::GetAspectRatio(false); break; default: Ratio = (WinWidth / WinHeight) / VideoInterface::GetAspectRatio(g_aspect_wide); break; } if (g_ActiveConfig.iAspectRatio != ASPECT_STRETCH) { if (Ratio > 1.0f) { // Scale down and center in the X direction. FloatGLWidth /= Ratio; FloatXOffset = (WinWidth - FloatGLWidth) / 2.0f; } // The window is too high, we have to limit the height else { // Scale down and center in the Y direction. FloatGLHeight *= Ratio; FloatYOffset = FloatYOffset + (WinHeight - FloatGLHeight) / 2.0f; } } // ----------------------------------------------------------------------- // Crop the picture from Analog to 4:3 or from Analog (Wide) to 16:9. // Output: FloatGLWidth, FloatGLHeight, FloatXOffset, FloatYOffset // ------------------ if (g_ActiveConfig.iAspectRatio != ASPECT_STRETCH && g_ActiveConfig.bCrop) { switch (g_ActiveConfig.iAspectRatio) { case ASPECT_ANALOG_WIDE: Ratio = (16.0f / 9.0f) / VideoInterface::GetAspectRatio(true); break; case ASPECT_ANALOG: Ratio = (4.0f / 3.0f) / VideoInterface::GetAspectRatio(false); break; default: Ratio = (!g_aspect_wide ? 
(4.0f / 3.0f) : (16.0f / 9.0f)) / VideoInterface::GetAspectRatio(g_aspect_wide); break; } if (Ratio <= 1.0f) { Ratio = 1.0f / Ratio; } // The width and height we will add (calculate this before FloatGLWidth and FloatGLHeight is adjusted) float IncreasedWidth = (Ratio - 1.0f) * FloatGLWidth; float IncreasedHeight = (Ratio - 1.0f) * FloatGLHeight; // The new width and height FloatGLWidth = FloatGLWidth * Ratio; FloatGLHeight = FloatGLHeight * Ratio; // Adjust the X and Y offset FloatXOffset = FloatXOffset - (IncreasedWidth * 0.5f); FloatYOffset = FloatYOffset - (IncreasedHeight * 0.5f); } int XOffset = (int)(FloatXOffset + 0.5f); int YOffset = (int)(FloatYOffset + 0.5f); int iWhidth = (int)ceil(FloatGLWidth); int iHeight = (int)ceil(FloatGLHeight); iWhidth -= iWhidth % 4; // ensure divisibility by 4 to make it compatible with all the video encoders iHeight -= iHeight % 4; target_rc.left = XOffset; target_rc.top = YOffset; target_rc.right = XOffset + iWhidth; target_rc.bottom = YOffset + iHeight; } void Renderer::SetWindowSize(int width, int height) { if (width < 1) width = 1; if (height < 1) height = 1; // Scale the window size by the EFB scale. CalculateTargetScale(width, height, &width, &height); Host_RequestRenderWindowSize(width, height); } void Renderer::CheckFifoRecording() { bool wasRecording = g_bRecordFifoData; g_bRecordFifoData = FifoRecorder::GetInstance().IsRecording(); if (g_bRecordFifoData) { if (!wasRecording) { RecordVideoMemory(); } FifoRecorder::GetInstance().EndFrame(CommandProcessor::fifo.CPBase, CommandProcessor::fifo.CPEnd); } } void Renderer::RecordVideoMemory() { u32 *bpmem_ptr = (u32*)&bpmem; u32 cpmem[256]; // The FIFO recording format splits XF memory into xfmem and xfregs; follow // that split here. 
u32 *xfmem_ptr = (u32*)&xfmem; u32 *xfregs_ptr = (u32*)&xfmem + FifoDataFile::XF_MEM_SIZE; u32 xfregs_size = sizeof(XFMemory) / 4 - FifoDataFile::XF_MEM_SIZE; memset(cpmem, 0, 256 * 4); FillCPMemoryArray(cpmem); FifoRecorder::GetInstance().SetVideoMemory(bpmem_ptr, cpmem, xfmem_ptr, xfregs_ptr, xfregs_size); } void Renderer::Swap(u32 xfbAddr, u32 fbWidth, u32 fbStride, u32 fbHeight, const EFBRectangle& rc, float Gamma) { // TODO: merge more generic parts into VideoCommon g_renderer->SwapImpl(xfbAddr, fbWidth, fbStride, fbHeight, rc, Gamma); if (XFBWrited) g_renderer->m_fps_counter.Update(); frameCount++; GFX_DEBUGGER_PAUSE_AT(NEXT_FRAME, true); // Begin new frame // Set default viewport and scissor, for the clear to work correctly // New frame stats.ResetFrame(); Core::Callback_VideoCopiedToXFB(XFBWrited || (g_ActiveConfig.bUseXFB && g_ActiveConfig.bUseRealXFB)); XFBWrited = false; } void Renderer::PokeEFB(EFBAccessType type, const std::vector<EfbPokeData>& data) { for (EfbPokeData poke : data) { AccessEFB(type, poke.x, poke.y, poke.data); } }
aroulin/dolphin
Source/Core/VideoCommon/RenderBase.cpp
C++
gpl-2.0
17,004
const _parseHash = function (hash) {
  let name = '';
  let urlType = '';
  let hashParts = hash.split('_');
  if (hashParts && hashParts.length === 2) {
    name = hashParts[1];
    let type = hashParts[0];
    // take off the "#"
    let finalType = type.slice(1, type.length);
    switch (finalType) {
      case 'method':
        urlType = 'methods';
        break;
      case 'property':
        urlType = 'properties';
        break;
      case 'event':
        urlType = 'events';
        break;
      default:
        urlType = '';
    }
    return {
      urlType,
      name,
    };
  }
  return null;
};

function hashToUrl(window) {
  if (window && window.location && window.location.hash) {
    let hashInfo = _parseHash(window.location.hash);
    if (hashInfo) {
      return `${window.location.pathname}/${hashInfo.urlType}/${hashInfo.name}?anchor=${hashInfo.name}`;
    }
  }
  return null;
}

function hasRedirectableHash(window) {
  let canRedirect = false;
  if (window && window.location && window.location.hash) {
    let hashParts = window.location.hash.split('_');
    if (hashParts && hashParts.length === 2) {
      canRedirect = true;
    }
  }
  return canRedirect;
}

export { hashToUrl, hasRedirectableHash };
ember-learn/ember-api-docs
app/utils/hash-to-url.js
JavaScript
mit
1,240
{/include file="simpla/common/header.html"/} {/include file="simpla/common/left.html"/} <div id="main-content"> <h2>欢迎您 {/$_adminname/}</h2> <p id="page-intro">添加和编辑会员帐号。带<span class="red">*</span>为必填</p> <div class="clear"></div> <div class="content-box"> <div class="content-box-header"> <h3>添加编辑帐号</h3> <ul class="content-box-tabs"> <li><a href="{/get_url rule="/member/index"/}">帐号管理</a></li> <li><a href="#tab1" class="default-tab">添加帐号</a></li> </ul> <div class="clear"></div> </div> <div class="content-box-content"> <div class="tab-content default-tab" id="tab1"> <div class="form"> <form action="{/get_url rule='/member/addmember'/}" method="post" id="js-form"> <fieldset class="clearfix"> <input type="hidden" value="{/$member.mid/}" name="mid" /> <p> <label><font class="red"> * </font>真实名字:</label> <span> <input type="text" value="{/$member.realname/}" class="text-input small-input" name="realname" id="realname" /> </span> </p> <p> <label><font class="red"> * </font>会员卡卡号:</label> <span> <input type="text" class="text-input small-input" name="membercardid" value="{/$member.membercardid/}" id="membercardid" /> </span> </p> <p><font class="red"> * </font>用户组:<span> <select name="grade"> {/section name=i loop=$group/} <option value="{/$group[i].mgid/}" {/if $group[i].mgid eq $member.grade/}selected="selected"{//if/}>{/$group[i].mgroup_name/}-{/$group[i].discount/}%</option> {//section/} </select> </span> </p> <p>身份证号: <span> <input type="text" class="text-input small-input" name="cardid" value="{/$member.cardid/}" /> </span> </p> <p>用户状态: <span> <input name="state" type="radio" value="1" {/if $member.state eq 1/}checked="checked"{//if/}/> 启用 <input name="state" type="radio" value="0" {/if $member.state eq 0/}checked="checked"{//if/}/> 禁用</span> </p> <p>手机号码: <span> <input type="text" class="text-input min-input" name="mobile" value="{/$member.mobile/}" /> </span> </p> <p>座机号码: <span> <input type="text" class="text-input min-input" name="phone" 
value="{/$member.phone/}" /> </span> </p> <p>邮箱地址: <span> <input type="text" class="text-input min-input" name="email" value="{/$member.email/}" /> </span> </p> <p>地区选择: <span> <select id="province" name="prov_id" onChange="getcity(this.value)"> <option value="">---请选择省份---</option> </select> <select id="city" name="city_id"> <option value="">---请选择城市---</option> </select> </span> </p> <p>详细地址: <span> <input type="text" class="text-input small-input" name="address" value="{/$member.address/}" /> </span> </p> <p>邮政编码: <span> <input type="text" class="text-input small-input" name="zipcode" value="{/$member.zipcode/}" /> </span> </p> <dt> <input type="submit" name="" class="button" value="{/if $member.mid/}编辑{/else/}添加{//if/}" /> </dt> </fieldset> </form> </div> </div> </div> </div> {/include file="simpla/common/copy.html"/} </div> {/include file="simpla/common/footer.html"/} <script type="text/javascript" src="{/$root_dir/}/assets/js/g.js"></script> <script type="text/javascript"> function getprovince(rid,pid) { $.ajax({ url:'{/$root_dir/}/ajax/getregion', data:'exce=1&parent_id='+rid, success:function(json) { for(i=0;i<json.length;i++) { if(json[i].region_id == pid) { var slt; slt = document.getElementById('province'); slt.options.add(new Option(json[i].region_name,json[i].region_id)); slt.options[slt.options.length-1].selected='selected'; } else { slt = document.getElementById('province'); slt.options.add(new Option(json[i].region_name,json[i].region_id)); } } } }); } function getcity(rid,cid) { $.ajax({ url:'{/$root_dir/}/ajax/getregion', data:'parent_id='+rid, success:function(json) { document.getElementById('city').options.length = 0; document.getElementById('city').options.add(new Option('---请选择城市---','')); for(i=0;i<json.length;i++) { if(json[i].region_id == cid) { var slt; slt = document.getElementById('city'); slt.options.add(new Option(json[i].region_name,json[i].region_id)); slt.options[slt.options.length-1].selected='selected'; } else { slt = 
document.getElementById('city'); slt.options.add(new Option(json[i].region_name,json[i].region_id)); } } } }); } $(document).ready(function(){ getprovince('1','{/$member.prov_id/}'); {/if $member.prov_id/} getcity('{/$member.prov_id/}','{/$member.city_id/}'); {//if/} }) </script>
hVenus/bunny-erp-system
app/v/simpla/member/addmember.html
HTML
bsd-3-clause
5,785
//! \file       ArcSG.cs
//! \date       2018 Feb 01
//! \brief      'fSGX' multi-frame image container.
//
// Copyright (C) 2018 by morkt
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to
// deal in the Software without restriction, including without limitation the
// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
// sell copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
// IN THE SOFTWARE.
//

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.IO;

namespace GameRes.Formats.Ivory
{
    [Export(typeof(ArchiveFormat))]
    public class SgOpener : ArchiveFormat
    {
        public override string Tag { get { return "SG/cOBJ"; } }
        public override string Description { get { return "Ivory multi-frame image"; } }
        public override uint Signature { get { return 0x58475366; } } // 'fSGX'
        public override bool IsHierarchic { get { return false; } }
        public override bool CanWrite { get { return false; } }

        public override ArcFile TryOpen (ArcView file)
        {
            long offset = 8;
            var base_name = Path.GetFileNameWithoutExtension (file.Name);
            var dir = new List<Entry>();
            while (offset < file.MaxOffset && file.View.AsciiEqual (offset, "cOBJ"))
            {
                uint obj_size = file.View.ReadUInt32 (offset+4);
                if (0 == obj_size)
                    break;
                if (file.View.AsciiEqual (offset+0x10, "fSG "))
                {
                    var entry = new Entry {
                        Name = string.Format ("{0}#{1}", base_name, dir.Count),
                        Type = "image",
                        Offset = offset+0x10,
                        Size = file.View.ReadUInt32 (offset+0x14),
                    };
                    dir.Add (entry);
                }
                offset += obj_size;
            }
            if (0 == dir.Count)
                return null;
            return new ArcFile (file, this, dir);
        }
    }
}
morkt/GARbro
ArcFormats/Ivory/ArcSG.cs
C#
mit
2,865
# GTD <a name="top"></a> <a href="http://spacemacs.org"><img src="https://cdn.rawgit.com/syl20bnr/spacemacs/442d025779da2f62fc86c2082703697714db6514/assets/spacemacs-badge.svg" alt="Made with Spacemacs"> </a><a href="http://www.twitter.com/zwb_ict"><img src="http://i.imgur.com/tXSoThF.png" alt="Twitter" align="right"></a><br> *** Spacemacs GTD layer, which is based on org-query.
zwb-ict/.spacemacs.d
layers/gtd/README.md
Markdown
mit
380
export default function mapNodesToColumns({
  children = [],
  columns = 1,
  dimensions = [],
} = {}) {
  let nodes = []
  let heights = []

  if (columns === 1) {
    return children
  }

  // use dimensions to calculate the best column for each child
  if (dimensions.length && dimensions.length === children.length) {
    for (let i = 0; i < columns; i++) {
      nodes[i] = []
      heights[i] = 0
    }

    children.forEach((child, i) => {
      let { width, height } = dimensions[i]
      let index = heights.indexOf(Math.min(...heights))

      nodes[index].push(child)
      heights[index] += height / width
    })
  }
  // equally spread the children across the columns
  else {
    for (let i = 0; i < columns; i++) {
      nodes[i] = children.filter((child, j) => j % columns === i)
    }
  }

  return nodes
}
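For context, here is a minimal usage sketch of `mapNodesToColumns`. The function is reproduced inline (copied from the module above, minus the `export`) so the snippet is self-contained; the sample children and dimensions are hypothetical.

```javascript
// Copy of mapNodesToColumns from the module above, inlined for a runnable demo.
function mapNodesToColumns({ children = [], columns = 1, dimensions = [] } = {}) {
  let nodes = []
  let heights = []
  if (columns === 1) return children
  if (dimensions.length && dimensions.length === children.length) {
    for (let i = 0; i < columns; i++) { nodes[i] = []; heights[i] = 0 }
    children.forEach((child, i) => {
      let { width, height } = dimensions[i]
      // place each child in the currently shortest column, measuring
      // "height" as the aspect ratio (height / width) of each item
      let index = heights.indexOf(Math.min(...heights))
      nodes[index].push(child)
      heights[index] += height / width
    })
  } else {
    // no usable dimensions: spread children round-robin
    for (let i = 0; i < columns; i++) {
      nodes[i] = children.filter((child, j) => j % columns === i)
    }
  }
  return nodes
}

// Round-robin split when no dimensions are given:
const plain = mapNodesToColumns({ children: ['a', 'b', 'c', 'd', 'e'], columns: 2 })
// → [['a', 'c', 'e'], ['b', 'd']]

// With dimensions, each child lands in the shortest column so far,
// so the tall item ends up alone in one column:
const balanced = mapNodesToColumns({
  children: ['tall', 'short1', 'short2'],
  columns: 2,
  dimensions: [
    { width: 100, height: 300 },
    { width: 100, height: 100 },
    { width: 100, height: 100 },
  ],
})
// → [['tall'], ['short1', 'short2']]
```

Note that `columns === 1` returns the flat `children` array unwrapped, so callers have to handle that shape separately.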
novascreen/react-columns
src/mapNodesToColumns.js
JavaScript
mit
811
module Ldaptic

  class Error < ::RuntimeError
  end

  class EntryNotSaved < Error
  end

  # All server errors are instances of this class. The error message and error
  # code can be accessed with <tt>exception.message</tt> and
  # <tt>exception.code</tt> respectively.
  class ServerError < Error
    attr_accessor :code
  end

  # The module houses all subclasses of Ldaptic::ServerError. The methods
  # contained within are for internal use only.
  module Errors

    #{
    # 0=>"Success",
    # 1=>"Operations error",
    # 2=>"Protocol error",
    # 3=>"Time limit exceeded",
    # 4=>"Size limit exceeded",
    # 5=>"Compare False",
    # 6=>"Compare True",
    # 7=>"Authentication method not supported"
    # 8=>"Strong(er) authentication required",
    # 9=>"Partial results and referral received",
    # 10=>"Referral",
    # 11=>"Administrative limit exceeded",
    # 12=>"Critical extension is unavailable",
    # 13=>"Confidentiality required",
    # 14=>"SASL bind in progress",
    # 16=>"No such attribute",
    # 17=>"Undefined attribute type",
    # 18=>"Inappropriate matching",
    # 19=>"Constraint violation",
    # 20=>"Type or value exists",
    # 21=>"Invalid syntax",
    # 32=>"No such object",
    # 33=>"Alias problem",
    # 34=>"Invalid DN syntax",
    # 35=>"Entry is a leaf",
    # 36=>"Alias dereferencing problem",
    # 47=>"Proxy Authorization Failure",
    # 48=>"Inappropriate authentication",
    # 49=>"Invalid credentials",
    # 50=>"Insufficient access",
    # 51=>"Server is busy",
    # 52=>"Server is unavailable",
    # 53=>"Server is unwilling to perform",
    # 54=>"Loop detected",
    # 64=>"Naming violation",
    # 65=>"Object class violation",
    # 66=>"Operation not allowed on non-leaf",
    # 67=>"Operation not allowed on RDN",
    # 68=>"Already exists",
    # 69=>"Cannot modify object class",
    # 70=>"Results too large",
    # 71=>"Operation affects multiple DSAs",
    # 80=>"Internal (implementation specific) error",
    # 81=>"Can't contact LDAP server",
    # 82=>"Local error",
    # 83=>"Encoding error",
    # 84=>"Decoding error",
    # 85=>"Timed out",
    # 86=>"Unknown authentication method",
    # 87=>"Bad search filter",
    # 88=>"User cancelled operation",
    # 89=>"Bad parameter to an ldap routine",
    # 90=>"Out of memory",
    # 91=>"Connect error",
    # 92=>"Not Supported",
    # 93=>"Control not found",
    # 94=>"No results returned",
    # 95=>"More results to return",
    # 96=>"Client Loop",
    # 97=>"Referral Limit Exceeded",
    #}

    # Error code 32.
    class NoSuchObject < ServerError
    end

    # Error code 5.
    class CompareFalse < ServerError
    end

    # Error code 6.
    class CompareTrue < ServerError
    end

    EXCEPTIONS = {
      32 => NoSuchObject,
      5  => CompareFalse,
      6  => CompareTrue
    }

    class << self

      # Provides a backtrace minus all files shipped with Ldaptic.
      def application_backtrace
        dir = File.dirname(File.dirname(__FILE__))
        c = caller
        c.shift while c.first[0, dir.length] == dir
        c
      end

      # Raise an exception (object only, no strings or classes) with the
      # backtrace stripped of all Ldaptic files.
      def raise(exception)
        exception.set_backtrace(application_backtrace)
        Kernel.raise exception
      end

      def for(code, message = nil) #:nodoc:
        message ||= "Unknown error #{code}"
        klass = EXCEPTIONS[code] || ServerError
        exception = klass.new(message)
        exception.code = code
        exception
      end

      # Given an error code and a message, raise an Ldaptic::ServerError unless
      # the code is zero. The right subclass is selected automatically if it
      # is available.
      def raise_unless_zero(code, message = nil)
        raise self.for(code, message) unless code.zero?
      end

    end

  end
end
tpope/ldaptic
lib/ldaptic/errors.rb
Ruby
mit
3,960
/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ gTestfile = '15.9.5.30-1.js'; /** File Name: 15.9.5.30-1.js ECMA Section: 15.9.5.30 Date.prototype.setHours(hour [, min [, sec [, ms ]]] ) Description: If min is not specified, this behaves as if min were specified with the value getMinutes( ). If sec is not specified, this behaves as if sec were specified with the value getSeconds ( ). If ms is not specified, this behaves as if ms were specified with the value getMilliseconds( ). 1. Let t be the result of LocalTime(this time value). 2. Call ToNumber(hour). 3. If min is not specified, compute MinFromTime(t); otherwise, call ToNumber(min). 4. If sec is not specified, compute SecFromTime(t); otherwise, call ToNumber(sec). 5. If ms is not specified, compute msFromTime(t); otherwise, call ToNumber(ms). 6. Compute MakeTime(Result(2), Result(3), Result(4), Result(5)). 7. Compute UTC(MakeDate(Day(t), Result(6))). 8. Set the [[Value]] property of the this value to TimeClip(Result(7)). 9. Return the value of the [[Value]] property of the this value. 
Author: christine@netscape.com Date: 12 november 1997 */ var SECTION = "15.9.5.30-1"; var VERSION = "ECMA_1"; startTest(); writeHeaderToLog( SECTION + " Date.prototype.setHours( hour [, min, sec, ms] )"); addNewTestCase( 0,0,0,0,void 0, "TDATE = new Date(0);(TDATE).setHours(0);TDATE" ); addNewTestCase( 28800000, 23, 59, 999,void 0, "TDATE = new Date(28800000);(TDATE).setHours(23,59,999);TDATE" ); addNewTestCase( 28800000, 999, 999, void 0, void 0, "TDATE = new Date(28800000);(TDATE).setHours(999,999);TDATE" ); addNewTestCase( 28800000,999,0, void 0, void 0, "TDATE = new Date(28800000);(TDATE).setHours(999);TDATE" ); addNewTestCase( 28800000,-8, void 0, void 0, void 0, "TDATE = new Date(28800000);(TDATE).setHours(-8);TDATE" ); addNewTestCase( 946684800000,8760, void 0, void 0, void 0, "TDATE = new Date(946684800000);(TDATE).setHours(8760);TDATE" ); addNewTestCase( TIME_2000 - msPerDay, 23, 59, 59, 999, "d = new Date( " + (TIME_2000-msPerDay) +"); d.setHours(23,59,59,999)" ); addNewTestCase( TIME_2000 - msPerDay, 23, 59, 59, 1000, "d = new Date( " + (TIME_2000-msPerDay) +"); d.setHours(23,59,59,1000)" ); test(); function addNewTestCase( time, hours, min, sec, ms, DateString) { var UTCDate = UTCDateFromTime( SetHours( time, hours, min, sec, ms )); var LocalDate = LocalDateFromTime( SetHours( time, hours, min, sec, ms )); var DateCase = new Date( time ); if ( min == void 0 ) { DateCase.setHours( hours ); } else { if ( sec == void 0 ) { DateCase.setHours( hours, min ); } else { if ( ms == void 0 ) { DateCase.setHours( hours, min, sec ); } else { DateCase.setHours( hours, min, sec, ms ); } } } new TestCase( SECTION, DateString+".getTime()", UTCDate.value, DateCase.getTime() ); new TestCase( SECTION, DateString+".valueOf()", UTCDate.value, DateCase.valueOf() ); new TestCase( SECTION, DateString+".getUTCFullYear()", UTCDate.year, DateCase.getUTCFullYear() ); new TestCase( SECTION, DateString+".getUTCMonth()", UTCDate.month, DateCase.getUTCMonth() ); new TestCase( SECTION, 
DateString+".getUTCDate()", UTCDate.date, DateCase.getUTCDate() ); new TestCase( SECTION, DateString+".getUTCDay()", UTCDate.day, DateCase.getUTCDay() ); new TestCase( SECTION, DateString+".getUTCHours()", UTCDate.hours, DateCase.getUTCHours() ); new TestCase( SECTION, DateString+".getUTCMinutes()", UTCDate.minutes,DateCase.getUTCMinutes() ); new TestCase( SECTION, DateString+".getUTCSeconds()", UTCDate.seconds,DateCase.getUTCSeconds() ); new TestCase( SECTION, DateString+".getUTCMilliseconds()", UTCDate.ms, DateCase.getUTCMilliseconds() ); new TestCase( SECTION, DateString+".getFullYear()", LocalDate.year, DateCase.getFullYear() ); new TestCase( SECTION, DateString+".getMonth()", LocalDate.month, DateCase.getMonth() ); new TestCase( SECTION, DateString+".getDate()", LocalDate.date, DateCase.getDate() ); new TestCase( SECTION, DateString+".getDay()", LocalDate.day, DateCase.getDay() ); new TestCase( SECTION, DateString+".getHours()", LocalDate.hours, DateCase.getHours() ); new TestCase( SECTION, DateString+".getMinutes()", LocalDate.minutes, DateCase.getMinutes() ); new TestCase( SECTION, DateString+".getSeconds()", LocalDate.seconds, DateCase.getSeconds() ); new TestCase( SECTION, DateString+".getMilliseconds()", LocalDate.ms, DateCase.getMilliseconds() ); DateCase.toString = Object.prototype.toString; new TestCase( SECTION, DateString+".toString=Object.prototype.toString;"+DateString+".toString()", "[object Date]", DateCase.toString() ); } function MyDate() { this.year = 0; this.month = 0; this.date = 0; this.hours = 0; this.minutes = 0; this.seconds = 0; this.ms = 0; } function LocalDateFromTime(t) { t = LocalTime(t); return ( MyDateFromTime(t) ); } function UTCDateFromTime(t) { return ( MyDateFromTime(t) ); } function MyDateFromTime( t ) { var d = new MyDate(); d.year = YearFromTime(t); d.month = MonthFromTime(t); d.date = DateFromTime(t); d.hours = HourFromTime(t); d.minutes = MinFromTime(t); d.seconds = SecFromTime(t); d.ms = msFromTime(t); d.day = WeekDay( t 
); d.time = MakeTime( d.hours, d.minutes, d.seconds, d.ms ); d.value = TimeClip( MakeDate( MakeDay( d.year, d.month, d.date ), d.time ) ); return (d); } function SetHours( t, hour, min, sec, ms ) { var TIME = LocalTime(t); var HOUR = Number(hour); var MIN = ( min == void 0) ? MinFromTime(TIME) : Number(min); var SEC = ( sec == void 0) ? SecFromTime(TIME) : Number(sec); var MS = ( ms == void 0 ) ? msFromTime(TIME) : Number(ms); var RESULT6 = MakeTime( HOUR, MIN, SEC, MS ); var UTC_TIME = UTC( MakeDate(Day(TIME), RESULT6) ); return ( TimeClip(UTC_TIME) ); }
sam/htmlunit-rhino-fork
testsrc/tests/ecma/Date/15.9.5.30-1.js
JavaScript
mpl-2.0
6,427
# ES6 Shim Provides compatibility shims so that legacy JavaScript engines behave as closely as possible to ECMAScript 6 (Harmony). [![Build Status][1]][2] [![dependency status][3]][4] [![dev dependency status][5]][6] [![browser support](https://ci.testling.com/paulmillr/es6-shim.png)](https://ci.testling.com/paulmillr/es6-shim) ## Installation If you want to use it in browser: * Just include es6-shim before your scripts. * Include [es5-shim](https://github.com/kriskowal/es5-shim) if your browser doesn't support ECMAScript 5. * `component install paulmillr/es6-shim` if you’re using [component(1)](https://github.com/component/component). * `bower install es6-shim` if you’re using [Twitter Bower](http://bower.io/). For node.js: npm install es6-shim ## Safe shims * `Map`, `Set` (requires ES5) * `String`: * `fromCodePoint()` * `raw()` * `String.prototype`: * `codePointAt()` * `repeat()` * `startsWith()` * `endsWith()` * `contains()` * `Number`: * `MAX_SAFE_INTEGER` * `EPSILON` * `parseInt()` * `parseFloat()` * `isNaN()` * `isSafeInteger()` * `isFinite()` * `Number.prototype`: * `clz()` * `Array`: * `from()` * `of()` * `Array.prototype`: * `find()` * `findIndex()` * `keys()` (note: keys/values/entries return an `ArrayIterator` object) * `entries()` * `values()` * `Object`: * `getOwnPropertyDescriptors()` (ES5) * `getPropertyDescriptor()` (ES5) * `getPropertyNames()` (ES5) * `is()` * `assign()` * `mixin()` (ES5) * `Math`: * `sign()` * `log10()` * `log2()` * `log1p()` * `expm1()` * `cosh()` * `sinh()` * `tanh()` * `acosh()` * `asinh()` * `atanh()` * `hypot()` * `trunc()` * `imul()` Math functions accuracy is 1e-11. ## WeakMap shim It is not possible to implement WeakMap in pure javascript. The [es6-collections](https://github.com/WebReflection/es6-collections) implementation doesn't hold values strongly, which is critical for the collection. es6-shim decided to not include an incorrect shim. 
WeakMap has a very unusual use-case so you probably won't need it at all (use simple `Map` instead). ## Getting started ```javascript 'abc'.startsWith('a') // true 'abc'.endsWith('a') // false 'john alice'.contains('john') // true '123'.repeat(2) // '123123' Object.is(NaN, NaN) // Fixes ===. 0 isnt -0, NaN is NaN Object.assign({a: 1}, {b: 2}) // {a: 1, b: 2} Object.mixin({a: 1}, {get b: function() {return 2}}) // {a: 1, b: getter} Number.isNaN('123') // false. isNaN('123') will give true. Number.isFinite('asd') // false. Global isFinite() will give true. Number.toInteger(2.4) // 2. converts values to IEEE754 double precision integers // Tests if value is a number, finite, // >= -9007199254740992 && <= 9007199254740992 and floor(value) === value Number.isInteger(2.4) // false. Math.sign(400) // 1, 0 or -1 depending on sign. In this case 1. [5, 10, 15, 10].find(function(item) {return item / 2 === 5;}) // 10 [5, 10, 15, 10].findIndex(function(item) {return item / 2 === 5;}) // 1 // Replacement for `{}` key-value storage. // Keys can be anything. var map = new Map() map.set('John', 25) map.set('Alice', 400) map.set(['meh'], 555) map.get(['meh']) // undefined because you need to use exactly the same object. map.delete('Alice') map.keys() map.values() map.size // 2 // Useful for storing unique items. var set = new Set() set.add(1) set.add(5) set.has(1) set.has(4) // => false set.delete(5) ``` Other stuff: * [ECMAScript 6 drafts](http://wiki.ecmascript.org/doku.php?id=harmony:specification_drafts) * [Harmony proposals](http://wiki.ecmascript.org/doku.php?id=harmony:harmony) ## License The project was initially based on [es6-shim by Axel Rauschmayer](https://github.com/rauschma/es6-shim). 
The MIT License (MIT) Copyright (c) 2013 Paul Miller (http://paulmillr.com) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. [1]: https://travis-ci.org/paulmillr/es6-shim.png [2]: https://travis-ci.org/paulmillr/es6-shim [3]: https://david-dm.org/paulmillr/es6-shim.png [4]: https://david-dm.org/paulmillr/es6-shim [5]: https://david-dm.org/paulmillr/es6-shim/dev-status.png [6]: https://david-dm.org/paulmillr/es6-shim#info=devDependencies
jamesmorgan/photo-flicker
photo-flicker-web/public/libs/es6-shim/README.md
Markdown
mit
5,250
while(<>) { if(/^\s*#define\s+(SV_\S+)\s+/) { print "$1\n"; } }
sptim/legacy-sputils
scripts/extract_sysval_conv1.pl
Perl
mit
68
using System;
using System.Collections.Generic;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using QuickBooks.Net.Data.Models.Fields;

namespace QuickBooks.Net.Data.Models
{
    public enum Month
    {
        January,
        February,
        March,
        April,
        May,
        June,
        July,
        August,
        September,
        October,
        November,
        December
    }

    public class CompanyInfo : QuickBooksBaseModel
    {
        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public string CompanyName { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public string LegalName { get; set; }

        [JsonProperty("CompanyAddr", NullValueHandling = NullValueHandling.Ignore)]
        public Address CompanyAddress { get; set; }

        [JsonProperty("CustomerCommunicationAddr", NullValueHandling = NullValueHandling.Ignore)]
        public Address CustomerCommunicationAddress { get; set; }

        [JsonProperty("LegalAddr", NullValueHandling = NullValueHandling.Ignore)]
        public Address LegalAddress { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public PhoneNumber PrimaryPhone { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public DateTime? CompanyStartDate { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        [JsonConverter(typeof(StringEnumConverter))]
        public Month? FiscalYearStartMonth { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public string Country { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public EmailAddress Email { get; set; }

        [JsonProperty("WebAddr", NullValueHandling = NullValueHandling.Ignore)]
        public WebAddress WebAddress { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public string SupportedLanguages { get; set; }

        [JsonProperty("NameValue", NullValueHandling = NullValueHandling.Ignore)]
        public List<NameValuePair<string, string>> NameValues { get; set; }

        // Can't create company info
        internal override QuickBooksBaseModel CreateReturnObject()
        {
            throw new NotImplementedException();
        }

        // Can't update company info
        internal override QuickBooksBaseModel UpdateReturnObject()
        {
            throw new NotImplementedException();
        }

        // Can't delete company info
        internal override QuickBooksBaseModel DeleteReturnObject()
        {
            throw new NotImplementedException();
        }
    }
}
EduSource/QuickBooks.Net
QuickBooks.Net.Data/Models/CompanyInfo.cs
C#
mit
2,752
require 'spec_helper'

describe User do
  it { should have_db_column(:id) }
  it { should have_db_column(:name).of_type(:string) }
  it { should have_db_column(:login).of_type(:string) }
  it { should have_db_column(:persistence_token).of_type(:string) }

  it_should_behave_like "a timestamped model"

  it { should have_many(:role_associations) }
  it { should have_many(:roles) }
  it { should have_many(:schedules) }

  context "default order" do
    it "should order by login in an ascending way" do
      first_user = described_class.make!(:login => 'adam')
      second_user = described_class.make!(:login => 'zirbel')

      users = described_class.all

      users.index(first_user).should < users.index(second_user)
    end
  end
end
fslab/lilith
spec/models/user_spec.rb
Ruby
gpl-3.0
743
package com.ftchinese.jobs.common

import org.eclipse.jetty.server.handler.gzip.GzipHandler
import org.eclipse.jetty.server.handler.{ContextHandler, ContextHandlerCollection}
import org.eclipse.jetty.server.{Server, ServerConnector}

import scala.collection.mutable.ArrayBuffer

/**
 * An http server.
 * Created by wanbo on 15/8/21.
 */
class HttpServer(conf: JobsConfig) extends Logging {

    private var _server: Server = null
    private val _port: Int = conf.serverUIPort
    private var _handlers: ArrayBuffer[ContextHandler] = ArrayBuffer[ContextHandler]()

    def start(): Unit ={
        if(_server != null)
            throw new Exception("Server is already started.")
        else {
            doStart()
        }
    }

    /**
     * Actually start the HTTP server.
     */
    private def doStart(): Unit ={
        // The server
        _server = new Server()

        val connector = new ServerConnector(_server)
        connector.setHost(conf.serverHost)
        connector.setPort(_port)
        _server.addConnector(connector)

        // Set handlers
        if(_handlers.size > 0) {
            val collection = new ContextHandlerCollection

            val gzipHandlers = _handlers.map(h => {
                val gzipHandler = new GzipHandler
                gzipHandler.setHandler(h)
                gzipHandler
            })

            collection.setHandlers(gzipHandlers.toArray)
            _server.setHandler(collection)
        }

        // Start the server
        _server.start()
        _server.join()
    }

    def attachHandler(handler: ContextHandler): Unit ={
        _handlers += handler
    }

    def stop(): Unit ={
        if(_server == null)
            throw new Exception("Server is already stopped.")
        else {
            _server.stop()
            _server = null
        }
    }
}
FTChinese/push
src/main/scala/com/ftchinese/jobs/common/HttpServer.scala
Scala
mit
1,840
/* * Copyright (c) 2006, 2018, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 only, as * published by the Free Software Foundation. * * This code is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * version 2 for more details (a copy is included in the LICENSE file that * accompanied this code). * * You should have received a copy of the GNU General Public License version * 2 along with this work; if not, write to the Free Software Foundation, * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. * * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA * or visit www.oracle.com if you need additional information or have any * questions. */ /* * @test * @bug 6270982 * @summary Redefine a class so that the order of external classes in * the constant pool are changed. 
* @comment converted from test/jdk/com/sun/jdi/RedefineChangeClassOrder.sh * * @library /test/lib * @compile -g RedefineChangeClassOrder.java * @run main/othervm RedefineChangeClassOrder */ import jdk.test.lib.process.OutputAnalyzer; import lib.jdb.JdbCommand; import lib.jdb.JdbTest; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; class RedefineChangeClassOrderTarg { public static void main(String[] args) { new RedefineChangeClassOrderTarg().hi(false); new RedefineChangeClassOrderTarg().hi(true); // @1 breakpoint } public void hi(boolean expected) { boolean isNewVersion = false; // @1 commentout // @1 uncomment boolean isNewVersion = true; if (expected == isNewVersion) { System.out.println("PASS: expected and isNewVersion match."); } else { System.out.println("FAIL: expected and isNewVersion do not match."); System.out.println("expected=" + expected + " isNewVersion=" + isNewVersion); } Foo1 foo1 = new Foo1(); // @1 commentout foo1.hi(); // @1 commentout // This Hack code block exists to force some verification_type_info // objects of subtype Object_variable_info into the StackMapTable. // // In the redefined code, the above Foo1 code is effectively // moved after the Foo2 code below which causes things to be // layed out in a different order in the constant pool. The // cpool_index in the Object_variable_info has to be updated // in the redefined code's StackMapTable to refer to right /// constant pool index in the merged constant pool. 
Hack hack = getClass().getAnnotation(Hack.class); if (hack != null) { String class_annotation = hack.value(); System.out.println("class annotation is: " + class_annotation); if (isNewVersion) { if (class_annotation.equals("JUNK")) { System.out.println("class_annotation is JUNK."); } else { System.out.println("class_annotation is NOT JUNK."); } } } Foo2 foo2 = new Foo2(); foo2.hi(); // @1 uncomment Foo1 foo1 = new Foo1(); // @1 uncomment foo1.hi(); } } class Foo1 { public void hi() { System.out.println("Hello from " + getClass()); } } class Foo2 { public void hi() { System.out.println("Hello from " + getClass()); } } @Retention(RetentionPolicy.RUNTIME) @interface Hack { String value(); } public class RedefineChangeClassOrder extends JdbTest { public static void main(String argv[]) { new RedefineChangeClassOrder().run(); } private RedefineChangeClassOrder() { super(DEBUGGEE_CLASS, SOURCE_FILE); } private static final String DEBUGGEE_CLASS = RedefineChangeClassOrderTarg.class.getName(); private static final String SOURCE_FILE = "RedefineChangeClassOrder.java"; @Override protected void runCases() { setBreakpoints(1); jdb.command(JdbCommand.run()); redefineClass(1, "-g"); jdb.contToExit(1); new OutputAnalyzer(getDebuggeeOutput()) .shouldNotContain("FAIL:"); } }
md-5/jdk10
test/jdk/com/sun/jdi/RedefineChangeClassOrder.java
Java
gpl-2.0
4,549
// Copyright © 2016 Gary W. Hudson Esq.
// Released under GNU GPL 3.0
var lzwh = (function() {var z={
Decode:function(p)
{function f(){--h?k>>=1:(k=p.charCodeAt(q++)-32,h=15);return k&1}var h=1,q=0,k=0,e=[""],l=[],g=0,m=0,c="",d,a=0,n,b;
do{m&&(e[g-1]=c.charAt(0));m=a;l.push(c);d=0;for(a=g++;d!=a;)f()?d=(d+a>>1)+1:a=d+a>>1;if(d)c=l[d]+e[d],e[g]=c.charAt(0)
else{b=1;do for(n=8;n--;b*=2)d+=b*f();while(f());d&&(c=String.fromCharCode(d-1),e[g]="")}}while(d);return l.join("")},
Encode:function(p)
{function f(b){b&&(k|=e);16384==e?(q.push(String.fromCharCode(k+32)),e=1,k=0):e<<=1}function h(b,d){for(var a=0,e,c=l++;a!=c;)
e=a+c>>1,b>e?(a=e+1,f(1)):(c=e,f(0));if(!a){-1!=b&&(a=d+1);do{for(c=8;c--;a=(a-a%2)/2)f(a%2);f(a)}while(a)}}for(var q=[],k=0,
e=1,l=0,g=[],m=[],c=0,d=p.length,a,n,b=0;c<d;)a=p.charCodeAt(c++),g[b]?(n=g[b].indexOf(a),-1==n?(g[b].push(a),m[b].push(l+1),
c-=b?1:0,h(b,a),b=0):b=m[b][n]):(g[b]=[a],m[b]=[l+1],c-=b?1:0,h(b,a),b=0);b&&h(b,0);for(h(-1,0);1!=e;)f(0);return q.join("")}
};return z})();
if(typeof define==='function'&&define.amd)define(function(){return lzwh})
else if(typeof module!=='undefined'&&module!=null)module.exports=lzwh;
GWHudson/lzwh
lzwhutf16-min.js
JavaScript
gpl-3.0
1,186
/*******************************************************************************
 * Copyright (c) 1998, 2015 Oracle and/or its affiliates. All rights reserved.
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Public License v1.0 and Eclipse Distribution License v. 1.0
 * which accompanies this distribution.
 * The Eclipse Public License is available at http://www.eclipse.org/legal/epl-v10.html
 * and the Eclipse Distribution License is available at
 * http://www.eclipse.org/org/documents/edl-v10.php.
 *
 * Contributors:
 *     Oracle - initial API and implementation from Oracle TopLink
 ******************************************************************************/
package org.eclipse.persistence.internal.expressions;

import java.io.*;
import java.util.*;

import org.eclipse.persistence.exceptions.*;
import org.eclipse.persistence.expressions.Expression;
import org.eclipse.persistence.internal.helper.*;
import org.eclipse.persistence.queries.*;
import org.eclipse.persistence.internal.sessions.AbstractSession;

/**
 * <p><b>Purpose</b>: Print UPDATE statement.
 * <p><b>Responsibilities</b>:<ul>
 * <li> Print UPDATE statement.
 * </ul>
 * @author Dorin Sandu
 * @since TOPLink/Java 1.0
 */
public class SQLUpdateStatement extends SQLModifyStatement {

    /**
     * Append the string containing the SQL update string for the given table.
     */
    protected SQLCall buildCallWithoutReturning(AbstractSession session) {
        SQLCall call = new SQLCall();
        call.returnNothing();

        Writer writer = new CharArrayWriter(100);
        try {
            writer.write("UPDATE ");
            if (getHintString() != null) {
                writer.write(getHintString());
                writer.write(" ");
            }
            writer.write(getTable().getQualifiedNameDelimited(session.getPlatform()));
            writer.write(" SET ");

            ExpressionSQLPrinter printer = null;
            Vector fieldsForTable = new Vector();
            Enumeration valuesEnum = getModifyRow().getValues().elements();
            Vector values = new Vector();
            for (Enumeration fieldsEnum = getModifyRow().keys(); fieldsEnum.hasMoreElements();) {
                DatabaseField field = (DatabaseField)fieldsEnum.nextElement();
                Object value = valuesEnum.nextElement();
                if (field.getTable().equals(getTable()) || (!field.hasTableName())) {
                    fieldsForTable.addElement(field);
                    values.addElement(value);
                }
            }

            if (fieldsForTable.isEmpty()) {
                return null;
            }

            for (int i = 0; i < fieldsForTable.size(); i++) {
                DatabaseField field = (DatabaseField)fieldsForTable.elementAt(i);
                writer.write(field.getNameDelimited(session.getPlatform()));
                writer.write(" = ");
                if (values.elementAt(i) instanceof Expression) {
                    // the value in the modify row is an expression - assign it.
                    Expression exp = (Expression)values.elementAt(i);
                    if (printer == null) {
                        printer = new ExpressionSQLPrinter(session, getTranslationRow(), call, false, getBuilder());
                        printer.setWriter(writer);
                    }
                    printer.printExpression(exp);
                } else {
                    // the value in the modify row is ignored; the parameter corresponding to the key field will be assigned.
                    call.appendModify(writer, field);
                }

                if ((i + 1) < fieldsForTable.size()) {
                    writer.write(", ");
                }
            }

            if (!(getWhereClause() == null)) {
                writer.write(" WHERE ");
                if (printer == null) {
                    printer = new ExpressionSQLPrinter(session, getTranslationRow(), call, false, getBuilder());
                    printer.setWriter(writer);
                }
                printer.printExpression(getWhereClause());
            }

            call.setSQLString(writer.toString());
            return call;
        } catch (IOException exception) {
            throw ValidationException.fileError(exception);
        }
    }
}
RallySoftware/eclipselink.runtime
foundation/org.eclipse.persistence.core/src/org/eclipse/persistence/internal/expressions/SQLUpdateStatement.java
Java
epl-1.0
4,367
package cn.wh.bean;

import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "person")
public class Person implements Serializable {

    private static final long serialVersionUID = 8648652046877078029L;

    private Integer id;
    private String name;

    public Person() {}

    public Person(String name) {
        this.name = name;
    }

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    @Column(name = "name", length = 20, nullable = false)
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((id == null) ? 0 : id.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        Person other = (Person) obj;
        if (id == null) {
            if (other.id != null)
                return false;
        } else if (!id.equals(other.id))
            return false;
        return true;
    }
}
bingoogolapple/J2EENote
EJB/EntityBean/src/cn/wh/bean/Person.java
Java
apache-2.0
1,405
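The `Person` entity above defines equality purely in terms of its database `id`, which is the detail most worth sanity-checking. A minimal standalone sketch of that contract (using a simplified, hypothetical `Person` stand-in, since the real entity carries JPA annotations and lives in `cn.wh.bean`):

```java
// Simplified stand-in mirroring the id-based equals()/hashCode() of the entity above.
public class PersonEqualityDemo {
    static class Person {
        private final Integer id;

        Person(Integer id) { this.id = id; }

        @Override
        public int hashCode() {
            final int prime = 31;
            return prime * 1 + ((id == null) ? 0 : id.hashCode());
        }

        @Override
        public boolean equals(Object obj) {
            if (this == obj) return true;
            if (obj == null || getClass() != obj.getClass()) return false;
            Person other = (Person) obj;
            return (id == null) ? other.id == null : id.equals(other.id);
        }
    }

    public static void main(String[] args) {
        Person a = new Person(1);
        Person b = new Person(1);
        Person c = new Person(2);
        // Two entities with the same id are equal even as distinct objects,
        // and equal objects must share a hash code.
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode());
        // A different id breaks equality.
        System.out.println(a.equals(c));
    }
}
```

Note that id-only equality means two unsaved entities (both with a `null` id) compare equal, which is a common caveat with this generated-equals pattern.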
require 'highline/import'

require 'super_tues/board'

require "super_tues/repl/version"
require "super_tues/repl/game"
require "super_tues/repl/candidate_picker"
require "super_tues/repl/console_helpers"
require "super_tues/repl/day"
require "super_tues/repl/business_action"

module SuperTues
  module Repl
  end
end
jonlhouse/super_tues-repl
lib/super_tues/repl/all.rb
Ruby
mit
317
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
using LibGit2Sharp;

namespace PullRequestReleaseNotes
{
    public class UnreleasedCommitsProvider
    {
        private static readonly Regex ParseSemVer = new Regex(@"^(?<SemVer>(?<Major>\d+)(\.(?<Minor>\d+))(\.(?<Patch>\d+))?)(\.(?<FourthPart>\d+))?(-(?<Tag>[^\+]*))?(\+(?<BuildMetaData>.*))?$", RegexOptions.Compiled);

        public IEnumerable<Commit> GetAllUnreleasedMergeCommits(IRepository repo, string releaseBranchRef, bool annotatedTagOnly)
        {
            var releasedCommitsHash = new Dictionary<string, Commit>();
            var branchReference = repo.Branches[releaseBranchRef];
            var tagCommits = repo.Tags
                .Where(x => !annotatedTagOnly || x.IsAnnotated)
                .Where(t => ParseSemVer.Match(t.FriendlyName).Success)
                .Select(tag => tag.PeeledTarget.Peel<Commit>()).Where(x => x != null)
                .OrderByDescending(x => x.Author.When)
                .ToList()
                .AsParallel().Where(x => BranchContainsTag(repo, x, branchReference))
                .ToList();
            var branchAncestors = repo.Commits
                .QueryBy(new CommitFilter { IncludeReachableFrom = branchReference })
                .Where(commit => commit.Parents.Count() > 1);
            if (!tagCommits.Any())
                return branchAncestors;
            var checkedTags = new List<Commit>();
            // for each tagged commit walk down all its parents and collect a dictionary of unique commits
            foreach (var tagCommit in tagCommits)
            {
                var containedInOtherTag = TagContainedInOtherCheckedTags(repo, checkedTags, tagCommit);
                if (containedInOtherTag)
                {
                    // insert at the beginning so this tag will be checked first for the next tag,
                    // because this tag is probably the closest tag that contains the next one.
                    checkedTags.Insert(0, tagCommit);
                    continue;
                }
                var releasedCommits = repo.Commits
                    .QueryBy(new CommitFilter { IncludeReachableFrom = tagCommit.Id })
                    .Where(commit => commit.Parents.Count() > 1)
                    .ToDictionary(i => i.Sha, i => i);
                releasedCommitsHash.Merge(releasedCommits);
                checkedTags.Insert(0, tagCommit);
            }
            // remove released commits from the branch ancestor commits as they have been previously released
            return branchAncestors.Except(releasedCommitsHash.Values.AsEnumerable());
        }

        private static bool TagContainedInOtherCheckedTags(IRepository repo, IEnumerable<Commit> checkedTags, Commit tagCommit)
        {
            var containedInOtherTag = false;
            foreach (var checkedTag in checkedTags)
            {
                containedInOtherTag = repo.ObjectDatabase.FindMergeBase(checkedTag, tagCommit)?.Sha == tagCommit.Sha;
                if (containedInOtherTag)
                {
                    break;
                }
            }
            return containedInOtherTag;
        }

        private static bool BranchContainsTag(IRepository repo, Commit tagCommit, Branch branchReference)
        {
            var mergeBase = repo.ObjectDatabase.FindMergeBase(tagCommit, branchReference.Tip);
            var branchContainsTag = mergeBase?.Sha == tagCommit.Sha;
            return branchContainsTag;
        }
    }
}
jasminsehic/PullRequestReleaseNotes
src/PullRequestReleaseNotes/UnreleasedCommitsProvider.cs
C#
mit
3,578
// Type definitions for react-hammerjs 1.0
// Project: https://github.com/JedWatson/react-hammerjs#readme
// Definitions by: Jason Unger <https://github.com/jsonunger>
//                 Cecchi MacNaughton <https://github.com/cecchi>
// Definitions: https://github.com/DefinitelyTyped/DefinitelyTyped
// TypeScript Version: 2.8

import * as Hammer from "hammerjs";
import * as React from "react";

type Omit<T, K> = Pick<T, Exclude<keyof T, K>>;
type HammerOptionsWithRecognizers = Omit<HammerOptions, "recognizers"> & {
    recognizers?: { [gesture: string]: RecognizerOptions };
};

declare namespace ReactHammer {
    interface ReactHammerProps {
        direction?:
            | "DIRECTION_NONE"
            | "DIRECTION_LEFT"
            | "DIRECTION_RIGHT"
            | "DIRECTION_UP"
            | "DIRECTION_DOWN"
            | "DIRECTION_HORIZONTAL"
            | "DIRECTION_VERTICAL"
            | "DIRECTION_ALL";
        options?: HammerOptionsWithRecognizers;
        recognizeWith?: { [gesture: string]: Recognizer | string };
        vertical?: boolean;
        action?: HammerListener;
        onDoubleTap?: HammerListener;
        onPan?: HammerListener;
        onPanCancel?: HammerListener;
        onPanEnd?: HammerListener;
        onPanStart?: HammerListener;
        onPinch?: HammerListener;
        onPinchCancel?: HammerListener;
        onPinchEnd?: HammerListener;
        onPinchIn?: HammerListener;
        onPinchStart?: HammerListener;
        onPress?: HammerListener;
        onPressUp?: HammerListener;
        onRotate?: HammerListener;
        onRotateCancel?: HammerListener;
        onRotateEnd?: HammerListener;
        onRotateMove?: HammerListener;
        onRotateStart?: HammerListener;
        onSwipe?: HammerListener;
        onTap?: HammerListener;
    }
}
declare const ReactHammer: React.ComponentClass<ReactHammer.ReactHammerProps>;
export = ReactHammer;
AgentME/DefinitelyTyped
types/react-hammerjs/index.d.ts
TypeScript
mit
1,909
#ifndef QT_NO_QT_INCLUDE_WARN
#if defined(__GNUC__)
#warning "Inclusion of header files from include/Qt is deprecated."
#elif defined(_MSC_VER)
#pragma message("WARNING: Inclusion of header files from include/Qt is deprecated.")
#endif
#endif
#include "../QtDeclarative/qdeclarativeexpression.h"
kzhong1991/Flight-AR.Drone-2
src/3rdparty/Qt4.8.4/include/Qt/qdeclarativeexpression.h
C
bsd-3-clause
325
import urllib
import urllib2

from bs4 import BeautifulSoup

textToSearch = 'gorillaz'
query = urllib.quote(textToSearch)
url = "https://www.youtube.com/results?search_query=" + query
response = urllib2.urlopen(url)
html = response.read()
soup = BeautifulSoup(html)
for vid in soup.findAll(attrs={'class': 'yt-uix-tile-link'}):
    print 'https://www.youtube.com' + vid['href']
arbakker/yt-daemon
search_yt.py
Python
mit
380