I’ve made notes again at the 2019-04-11 Amsterdam Python meetup in the Byte office. Here are the summaries.

Computer systems are taking over the world; they’re influencing everything. A lot of good has come out of this, but there’s also a downside. At meetups like this one, we mostly talk about tech, not about the people who build it or the people affected by it. Nick Groenen calls that an ethics problem. As an example, Uber’s “greyball” was originally written for a good purpose but was later used to mislead governments. The same goes for Volkswagen’s diesel emissions scandal: detecting when an official test is running and adjusting the engine parameters to seem more environmentally friendly. How ethical is it to work on something like that?

The above examples are from big companies, but what about working for smaller companies? You face the same ethical challenges there. What are you doing with your logfiles? How much information do you mine about your customers? Do you hand customer data to your boss when he asks for it, even though your privacy policy doesn’t allow it? And how do you treat new programmers? Is there budget for training? How do you treat interns? That’s also an ethical question.

A good first step would be to actually acknowledge the problem in our IT industry. Ethics… we were never taught that. Some pointers to get you started:

- Talk to other departments in your organisation, like the legal department. Or your local UX designer.
- Try to apply ethics during design and code review.
- Design for evil: try to abuse what you’re building. It is actually quite a funny exercise!
- Understand the communication and decision-making structure of your company. Who do you have to talk to? Who makes the actual decisions?

Noam Tenne works at VanMoof, a bicycle company. He made his own testing framework, Nimoy, named after the actor who played Spock. TDD (Test Driven Development) ought to be a given by now. It isn’t always, but it ought to be.
What about BDD, Behaviour Driven Development? A test for a feature is written like this:

SCENARIO some description
GIVEN some starting point
WHEN some action
THEN some expected result

You can do it with the Python behave library. It works with one file per scenario, with decorators inside them. Which is a bit cumbersome, and not business-people-friendly.

Nimoy goes back to the old way: one test method per feature. The steps (“given”, “when”) are indicated by using the with statement:

from nimoy.specification import Specification

class ExampleSpec(Specification):
    def example_feature(self):
        with setup:
            a = 5
        with when:
            a += 1
        with expect:
            a == 6  # Look! No assert needed.

It functions like a DSL (domain specific language) this way. It isn’t really regular Python code, but it uses Python syntax in a specific way. (Note by Reinout: I suspected he was using metaclasses behind the scenes to adjust the way Python treats keywords and variables within the Specification classes, but he later mentioned he used the AST (Python’s Abstract Syntax Tree module): ast.NodeTransformer and so on.)

And DDT, Data Driven Testing? There’s support in Nimoy for parametrizing existing data (for instance from a file or database) and using that in the tests.

Nimoy also has some nice syntactic sugar for using mock classes: it is easy to specify expected output, for instance. He’s got a series of blog posts on how and why he created Nimoy. He showed some internal code examples afterwards, including Nimoy tests for Nimoy itself.
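That AST note can be illustrated with a toy sketch (my own illustration, not Nimoy's actual code): an ast.NodeTransformer that replaces a `with expect:` block with assert statements built from the bare comparisons inside it.

```python
import ast
import textwrap

class ExpectRewriter(ast.NodeTransformer):
    """Toy transformer: turn bare comparisons inside `with expect:`
    blocks into assert statements (not Nimoy's real implementation)."""

    def visit_With(self, node):
        self.generic_visit(node)
        ctx = node.items[0].context_expr
        if isinstance(ctx, ast.Name) and ctx.id == "expect":
            new_body = []
            for stmt in node.body:
                if isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Compare):
                    new_body.append(ast.Assert(test=stmt.value, msg=None))
                else:
                    new_body.append(stmt)
            # Replace the whole with-block by its rewritten body, so the
            # undefined name `expect` is never actually evaluated.
            return new_body
        return node

source = textwrap.dedent("""
    a = 5
    a += 1
    with expect:
        a == 6
""")

tree = ast.parse(source)
tree = ast.fix_missing_locations(ExpectRewriter().visit(tree))
exec(compile(tree, "<spec>", "exec"))  # raises AssertionError if a != 6
print("spec passed")
```

Nimoy applies this kind of rewriting to whole Specification classes; the point is that syntactically valid Python can be given DSL semantics before it ever runs.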
https://reinout.vanrees.org/weblog/2019/04/11/python-meetup-2019.html
CC-MAIN-2021-21
en
refinedweb
Date of establishment: March 28, 2020. Update date: April 22, 2020 (end). Personal collection: Tool.py website.

Note: feel free to quote or adapt this article; just credit the source and the author. The author does not guarantee that the content is absolutely correct; you are responsible for any consequences.

Title: the first Python programming challenge on the web. A website that challenges your understanding of Python and your IQ. Below is a detailed explanation of each problem. You will not solve all the problems at once; the content will be supplemented periodically (the questions are mainly in English).

Level 0

Hint: try to change the URL address.

[Explanation]
- Because this is level 0, to reach level 1 you change the 0 in the URL to 1. The page says "2**38 is much much larger", so let's figure out how big 2**38 is:

>>> print(2**38)
274877906944

- Put this number into the URL and you enter level 1. Moreover, the URL path changes to "map", hinting that "map" itself may be meaningful.

Level 1

"everybody thinks twice before solving this."

[Solution]
- There is a paragraph at the bottom of the image that looks like garbled text, and the picture shows K, O, E being converted into M, Q, G: a transcoding problem.
- The intuitive idea, of course, is to apply exactly that conversion to the chaotic content, but the result is still garbled:

# text holds the garbled paragraph copied from the image
text = text.replace('k', 'm').replace('o', 'q').replace('e', 'g')
print(text)
"""
g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr ammnsrcpq ypcdmp. bmglg gr gl zw fylb gq glcddgagclr ylb rfyr'q ufw rfgq rcvr gq qm jmlg. sqglg qrpglg.myicrpylq() gq pcammmclbcb. lmu ynnjw ml rfc spj.
"""

- Observe instead that each conversion moves a letter 2 positions forward in the alphabet, wrapping around so that y, z map to a, b.
Here the text is all lowercase, so we don't need to worry about upper case. Use the string's maketrans and translate methods to do the conversion:

table1 = ''.join([chr(ord('a') + i) for i in range(26)])  # abcd...xyz
table2 = table1[2:] + table1[:2]                          # cdef...zab
trans = table1.maketrans(table1, table2)
result = text.translate(trans)
print(result)
"""
i hope you didnt translate it by hand. thats what computers arefor. doing it in by hand is inefficient and that's why this text is so long. using string.maketrans() is recommended. now apply on the url.
"""

- Well, now we see a normal English sentence, but how do we apply it to the URL? Change it to "maketrans"? ... that's a handy little function, isn't it? Still wrong; that didn't open level 2.
- By the way, the URL path was "map"; try converting that too:

text = "map"
...
"""
ocr
"""

- Changing the URL to "ocr" really does enter level 2.

Level 2

"recognize the characters. maybe they are in the book, but MAYBE they are in the page source."

[Explanation] The words in the book image are too blurry to be seen clearly, so the answer should be in the page source, which means in the HTML: right-click to view the source code of the web page. In the source there is a special comment pointing us at the rare characters in a long string:

<!-- find rare characters in the mess below: -->
<!-- %%[email protected]_$^__#)^)&!_+]!*@&^}@[@%]()%+$&[([email protected]%+%$*^@$^!+]
!&_#)_*}{}}!}_]$[%}@[{[email protected]#_^{* @##&{#&{&)*%(]{{([*}@[@&]+!!*{)!}{%+{))])[!^})+)$]
#{*+^((@^@}$[**$&^{[email protected]#$%)[email protected](&
...
}!)$]&($)@](+(#{$)_%^%_^^#][{*[)%}+[##(##^{$}^]#&(
&*{)%)&][&{]&#]}[[^^&[!#}${@_( | #@}&$[[%]_&$+)$!%{(}$^$}* |
-->

- In order not to copy a large number of characters by hand, fetch the web page programmatically; this is where the crawling starts. Because the page is simple, we don't need BS4 or other packages to parse the HTML.
We set up a dictionary to record how many times each character appears:

from urllib import request

url = ""  # URL of the level 2 page
with request.urlopen(url) as response:
    data = response.read()
html = data.decode('utf-8')

start_string = "<!--\n"
length = len(start_string)
stop_string = "-->\n"
# There are two comments; we want the later one, so use rfind instead of find.
start = html.rfind(start_string)
stop = html.rfind(stop_string)
text = html[start + length:stop].replace('\n', '')

chars = {}
for char in text:
    chars[char] = chars[char] + 1 if char in chars else 1
avg = sum(chars.values()) / len(chars)
result = ''.join([char for char in chars if chars[char] < avg])
print(result)
"""
equality
"""

- Change the URL to "equality" and, as expected, we see level 3.
- We could also use a regular expression here, since we already know the rare characters turn out to be the lowercase letters a-z:

import re
print(''.join(re.findall('[a-z]', text)))
"""
equality
"""

<continue next time>

This work is published under a CC license; the author and a link to this article must be indicated in any reprint.
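The dictionary-counting step can also be written more compactly with collections.Counter from the standard library. The sample string and expected answer below are stand-ins, not the real puzzle data:

```python
from collections import Counter

def rare_characters(text: str) -> str:
    """Keep characters that occur less often than the average
    frequency, preserving first-seen order (Counter is ordered)."""
    counts = Counter(text)
    avg = sum(counts.values()) / len(counts)
    return ''.join(ch for ch in counts if counts[ch] < avg)

# Each letter appears once; '$' appears 36 times, far above average.
sample = "$$$$e$$$$q$$$$u$$$$a$$$$l$$$$i$$$$t$$$$y$$$$"
print(rare_characters(sample))  # -> equality
```

Counter also gives you `most_common()` for free if you want to inspect the frequency distribution while debugging.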
https://developpaper.com/the-first-python-programming-challenge-on-the-internet-end/
procmgr_ability()

Control a process's ability to perform certain operations

Synopsis:

#include <sys/procmgr.h>

extern int procmgr_ability( pid_t pid, unsigned ability, ... );

Arguments:

- pid - The process ID of the process whose abilities you want to control, or 0 to control those of the calling process. You need to be running as root in order to change a different process's abilities.
- ability - A list of the abilities. Each ability in the list is composed of three separate components that are ORed together:
  - An identifier (PROCMGR_AID_*) that tells procnto which particular ability you're modifying.
  - One or more operations (PROCMGR_AOP_*) that identify the operation you're performing on the ability.
  - One or more domains (PROCMGR_ADN_*) that indicate whether you're modifying what you can do with the ability while running as root or non-root.
  Terminate the list with PROCMGR_AID_EOL.
- lower, upper - If the ability is for a subrange (see below), follow it by two uint64_t arguments that specify the lower and upper limits on the range.

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The procmgr_ability() function takes a list of ability operations to control what the identified process is allowed to do. This function lets a process start running as root, set the abilities it needs, and then change its group and user IDs. This can make your system more secure by reducing the number of potentially vulnerable processes running as root.

The table below describes the identifier portion for each ability, indicates whether the operation is normally privileged (e.g. rebooting the system) or not (e.g. spawning and forking), and describes the subrange if applicable. If you want to specify a subrange, include PROCMGR_AOP_SUBRANGE in the operation flags.
You must OR in at least one of the following operations:

- PROCMGR_AOP_DENY - Disallow the performance of the operation in the specified domain(s).
- PROCMGR_AOP_ALLOW - Allow the performance of the operation in the specified domain(s). You must be root when doing this for privileged abilities.
- PROCMGR_AOP_SUBRANGE - Restrict the feature to set its parameter to a certain subrange in the specified domain(s). You must still have the PROCMGR_AOP_ALLOW flag set for the ability in the domain to successfully use the feature. The meaning of the parameter varies depending on the ability, as described above. Follow the ability entry in the function's parameter list with two uint64_t arguments that give the lower and upper bounds of the subrange. You must be root when doing this for privileged abilities. You can have multiple subrange entries in the list; they OR together to let you form a discontiguous set. To do this, specify the identifier-operation-domain argument again, followed by another pair of range arguments.
- PROCMGR_AOP_LOCK - Lock the current ability so that no further changes to it can be made.
- PROCMGR_AOP_INHERIT_YES - The changes to the ability are inherited across a spawn or exec.
- PROCMGR_AOP_INHERIT_NO - The changes to the ability aren't inherited across a spawn or exec. This is the default.

You must OR in at least one of the following for the domain portion:

- PROCMGR_ADN_ROOT - Modify the ability of the process when it's running as root.
- PROCMGR_ADN_NONROOT - Modify the ability of the process when it isn't running as root.

There's also a special ability identifier, PROCMGR_AID_EOL, that indicates the end of the list of abilities passed to this function. You can OR the domain flags and any of PROCMGR_AOP_DENY, PROCMGR_AOP_ALLOW, and PROCMGR_AOP_LOCK into the PROCMGR_AID_EOL macro that ends the list. If you do this, the operations are performed on all the unlocked abilities that you didn't specify in the list.
Returns:

- EOK - The abilities were successfully modified.
- EINVAL - There's something wrong with the parameters to the function.
- EPERM - A non-root process tried to give itself a privileged ability or tried to change a locked ability.

Examples:

Remove the ability for a root process to set the user ID when spawning a process:

procmgr_ability(0,
    PROCMGR_ADN_ROOT | PROCMGR_AOP_DENY | PROCMGR_AID_SPAWN_SETUID,
    PROCMGR_AID_EOL);

Regain the ability for a root process to set the user ID:

procmgr_ability(0,
    PROCMGR_ADN_ROOT | PROCMGR_AOP_ALLOW | PROCMGR_AID_SPAWN_SETUID,
    PROCMGR_AID_EOL);

Drop all abilities while running as root and lock out any further changes:

procmgr_ability(0,
    PROCMGR_ADN_ROOT | PROCMGR_AOP_DENY | PROCMGR_AOP_LOCK | PROCMGR_AID_EOL);

Allow a non-root process to set the user ID while spawning, but restrict the number to 10000 or higher and lock out any further changes. Remove all other abilities while running as root, and lock any further changes to them as well:

procmgr_ability(0,
    PROCMGR_ADN_NONROOT | PROCMGR_AOP_ALLOW | PROCMGR_AID_SPAWN_SETUID,
    PROCMGR_ADN_NONROOT | PROCMGR_AOP_SUBRANGE | PROCMGR_AOP_LOCK | PROCMGR_AID_SPAWN_SETUID,
        (uint64_t)10000, ~(uint64_t)0,
    PROCMGR_ADN_ROOT | PROCMGR_AOP_DENY | PROCMGR_AOP_LOCK | PROCMGR_AID_EOL);

Do the same as the above, but specify a discontiguous set of user IDs by using a second PROCMGR_AID_SPAWN_SETUID ability:

procmgr_ability(0,
    PROCMGR_ADN_NONROOT | PROCMGR_AOP_ALLOW | PROCMGR_AID_SPAWN_SETUID,
    PROCMGR_ADN_NONROOT | PROCMGR_AOP_SUBRANGE | PROCMGR_AID_SPAWN_SETUID,
        (uint64_t)1000, (uint64_t)1050,
    PROCMGR_ADN_NONROOT | PROCMGR_AOP_SUBRANGE | PROCMGR_AOP_LOCK | PROCMGR_AID_SPAWN_SETUID,
        (uint64_t)2000, (uint64_t)2013,
    PROCMGR_ADN_ROOT | PROCMGR_AOP_DENY | PROCMGR_AOP_LOCK | PROCMGR_AID_EOL);
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/p/procmgr_ability.html
The result class generated by the seqan3::search algorithm. More...

#include <seqan3/search/search_result.hpp>

The result class generated by the seqan3::search algorithm.

The seqan3::search algorithm returns a range of hits. A single hit is stored in a seqan3::search_result. By default, the search result contains the query id, the reference id where the query matched, and the begin position in the reference where the query sequence starts to match the reference sequence. This information can be accessed via the respective member functions.

Note that the index cursor is not included in a hit by default. If you try to use the respective member function, a static_assert will prevent you from doing so. You can configure the result of the search with the output configuration.

Returns the index cursor pointing to the suffix array range where the query was found.

Returns the reference id where the query was found. The reference id is an arithmetic value that corresponds to the index of the reference text in the index. The order is determined on construction of the index.
https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1search__result.html
A fast, zero-copy, circular buffer for Python

Project description

circbuf implements a circular buffer for Python. It allows for zero-copy operation, i.e. it uses memoryview to expose consumer and producer buffers. Access to the buffer is synchronised by locks, managed by context managers.

Example

import circbuf

buf = circbuf.CircBuf()

# Produce data
with buf.producer_buf as mv:
    mv[0] = 42
buf.produced(1)

print('First entry: {}'.format(next(iter(buf))))  # First entry: 42

Features

- Pure Python
- Minimises allocation of big memory chunks
- Automatic access synchronisation
- Tested on Python 3.2, 3.3, 3.4
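To see why memoryview enables zero-copy operation, here is a deliberately simplified ring buffer. This is an illustration of the idea only, not circbuf's actual implementation: it omits the locking, context managers, and wrap-around handling a real circular buffer needs.

```python
class TinyRing:
    """Toy circular buffer: producers and consumers work on writable
    views of one preallocated bytearray, so bytes are never copied."""

    def __init__(self, size=16):
        self._buf = bytearray(size)
        self._mv = memoryview(self._buf)
        self._start = 0   # read position
        self._len = 0     # bytes currently stored

    def producer_view(self):
        """Writable view of the free space after the data (no wrap handling)."""
        end = (self._start + self._len) % len(self._buf)
        return self._mv[end:]

    def produced(self, n):
        self._len += n

    def consumer_view(self):
        """Readable view of the stored data (no wrap handling)."""
        return self._mv[self._start:self._start + self._len]

    def consumed(self, n):
        self._start = (self._start + n) % len(self._buf)
        self._len -= n

ring = TinyRing()
ring.producer_view()[0] = 42    # write directly into the buffer, no copy
ring.produced(1)
print(ring.consumer_view()[0])  # -> 42
```

Because slicing a memoryview returns another view of the same memory, writes through `producer_view()` land directly in the shared bytearray; that is the zero-copy property circbuf builds on.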
https://pypi.org/project/circbuf/
iCelGraph Struct Reference

Interface for the CEL Graph. More...

#include <tools/celgraph.h>

Inheritance diagram for iCelGraph:

Detailed Description

Interface for the CEL Graph.

Definition at line 270 of file celgraph.h.

Member Function Documentation

- Adds an edge to the graph.
- Adds a node to the graph.
- Creates a node for this graph. The node will be added to the graph.
- Gets the closest node to a position.
- Gets the number of nodes.
- Gets the shortest path from node "from" to node "to".

The documentation for this struct was generated from the following file:

- tools/celgraph.h

Generated for CEL: Crystal Entity Layer 1.4.1 by doxygen 1.7.1
http://crystalspace3d.org/cel/docs/online/api-1.4.1/structiCelGraph.html
Using Query Notifications

Query notifications are a new feature available in Microsoft SQL Server 2005 and the System.Data.SqlClient namespace in ADO.NET 2.0.

Implementing Query Notifications

There are three ways you can implement query notifications using ADO.NET:

- The low-level implementation is provided by the SqlNotificationRequest class, which exposes server-side functionality, enabling you to execute a command with a notification request.
- The high-level implementation is provided by the SqlDependency class, which provides a high-level abstraction of notification functionality between the source application and SQL Server, enabling you to use a dependency to detect changes in the server. In most cases, this is the simplest and most effective way for managed client applications using the .NET Framework Data Provider for SQL Server to leverage the SQL Server 2005 notification capability.
- In addition, Web applications built using ASP.NET 2.0 can use the SqlCacheDependency helper classes.

Query notifications are useful for applications that need to refresh displays or caches (for example, in a DataGrid control or Web page) in response to changes in underlying data. Microsoft SQL Server 2005 allows .NET Framework applications to send a command to SQL Server and request notification if executing the same command would produce result sets different from those initially retrieved.

Query Notifications Architecture and Service Broker

The notifications infrastructure is built on top of a queuing feature introduced in SQL Server 2005. In general, notifications generated at the server are sent through these queues to be processed later. For more information, see the "Service Broker Architecture" section in SQL Server Books Online.

In This Section

- Enabling Query Notifications
  Discusses how to use query notifications, including the requirements for enabling and using them.
- Special Considerations When Using Query Notifications
  Discusses issues to be aware of when using query notifications.
- Using SqlNotificationRequest and Detecting Notifications
  Demonstrates how to use query notifications from a Windows Forms application.
- Using SqlDependency in a Windows Application
  Provides examples of query notifications in Windows Forms applications and ASP.NET applications.
- Using SqlDependency in an ASP.NET Application
  Demonstrates how to use query notifications from an ASP.NET application.
- Using SqlDependency to Detect Changes in the Server
  Demonstrates how to detect when query results will be different from those originally received.
- Executing a SqlCommand.
http://msdn.microsoft.com/en-US/library/t9x04ed2(v=vs.80).aspx
SqlConnection.EnlistTransaction Method

Enlists in the specified transaction as a distributed transaction.

Namespace: System.Data.SqlClient
Assembly: System.Data (in System.Data.dll)

Parameters

- transaction
  Type: System.Transactions.Transaction
  A reference to an existing Transaction in which to enlist.

You can use the EnlistTransaction method to enlist in a distributed transaction. Because it enlists a connection in a Transaction instance, EnlistTransaction takes advantage of functionality available in the System.Transactions namespace for managing distributed transactions, making it preferable to EnlistDistributedTransaction, which uses a System.EnterpriseServices.ITransaction object. It also has slightly different semantics: once a connection is explicitly enlisted in a transaction, it cannot be unenlisted or enlisted in another transaction until the first transaction finishes. For more information about distributed transactions, see Distributed Transactions.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.enlisttransaction.aspx?cs-save-lang=1&cs-lang=fsharp
collective.prettydate

Represents a date in a relative format: instead of '01/02/2012', it is displayed as '4 hours ago', 'yesterday' or 'last week', which is easier to read and understand for most people.

Project Description

collective.prettydate provides a 'date' method to convert a DateTime object:

from DateTime import DateTime

today = DateTime()
str_date = date_utility.date(today)  # date_utility: the prettydate utility

In the previous example, 'str_date' will be "now". The 'date' method also allows 2 additional parameters, 'short' and 'asdays', which modify the output to be in short format ('h' instead of 'hours', 'd' instead of 'days', etc.) or in whole days (it will use 'today' instead of '4 hours ago'). Example outputs:

- '4 hours ago'
- '4h ago' (short format)
- 'in 28 minutes'
- 'in 6 months'
- 'today'
- 'last week'
- 'yesterday'
- 'last year'

DateTime 3.0

collective.prettydate is fully compatible with DateTime 3.0, which provides a significantly smaller memory footprint.

Mostly Harmless

Have an idea? Found a bug? Let us know by opening a support ticket.

Self-Certification

[X] Internationalized
[X] Unit tests
[X] End-user documentation
[X] Internal documentation (documentation, interfaces, etc.)
[ ] Existed and maintained for at least 6 months
[X] Installs and uninstalls cleanly
[X] Code structure follows best practice

Current Release

collective.prettydate 1.2, released Mar 19, 2013; tested with Plone 4.1 and Plone 4.2.

Get collective.prettydate for all platforms

- collective.prettydate-1.2.zip

If you are using Plone 3.2 or higher, you probably want to install this product with buildout. See our tutorial on installing add-on products with buildout for more information.
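The idea behind the package can be sketched in plain Python (a toy illustration, not collective.prettydate's actual code): bucket the time delta into units and render it relative to "now", with a short-format option.

```python
from datetime import datetime, timedelta

def pretty(dt, now=None, short=False):
    """Render dt relative to now, e.g. '4 hours ago' or 'in 28 minutes'."""
    now = now or datetime.now()
    seconds = (now - dt).total_seconds()
    future = seconds < 0
    seconds = abs(seconds)
    for abbrev, name, size in [
        ("y", "years", 365 * 86400),
        ("mo", "months", 30 * 86400),
        ("d", "days", 86400),
        ("h", "hours", 3600),
        ("min", "minutes", 60),
    ]:
        if seconds >= size:
            n = int(seconds // size)
            if short:
                phrase = "%d%s" % (n, abbrev)
            else:
                # Singularize the unit name for n == 1 ('1 hour', not '1 hours').
                unit = name if n != 1 else name[:-1]
                phrase = "%d %s" % (n, unit)
            return "in " + phrase if future else phrase + " ago"
    return "now"

now = datetime(2020, 4, 22, 12, 0)
print(pretty(now - timedelta(hours=4), now))        # 4 hours ago
print(pretty(now - timedelta(hours=4), now, True))  # 4h ago
print(pretty(now + timedelta(minutes=28), now))     # in 28 minutes
```

The real package additionally handles calendar-aware phrases like 'yesterday' and 'last week', and internationalization.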
http://plone.org/products/collective.prettydate
Introduction

The Service Component Architecture (SCA) assembly model offers you the ability to assemble and wire service components using commonly used programming languages. The specification lets you concentrate on solving specific problems by enabling a diverse range of service components to interact and collaborate. The language-neutral design enables the extension of SCA concepts into willing application environments, regardless of whatever specific technologies might be the focus of those environments.

The SCA programming model provides metadata and APIs for enabling services to operate in an SOA-aware environment. The Java programming language is a popular choice for implementing a service. The Open SCA community has worked to define a Java variant of the SCA programming model, which is supported by the IBM WebSphere Application Server V7 Feature Pack for SCA. The programming model builds on existing Java annotations and APIs so that programmers can make services operate seamlessly within SCA business logic. Wherever possible, Java techniques like injection are used in place of APIs to simplify programming. An additional fundamental principle of the SCA programming model, realized in Java, is the notion of a separation of concerns: the developer of a service implementation need not be concerned with the details of the service implementations it references, nor with the implementation details of the consumers that use any of the provided services.

This article provides an overview of the SCA programming model for Java. This programming model is described in two specifications:

- The Java Component Implementation Specification extends the SCA assembly model (the core spec) by defining how a Java class provides an implementation of an SCA component and how that class is used in SCA as a component implementation. This specification defines a simple Java POJO model for creating service components.
- The SCA Java Common Annotations and APIs Specification defines Java APIs and annotations that enable service components and service clients to be built in the Java language.

There are closely related specifications describing how other Java-based frameworks and application environments can be used in the context of SCA, such as Spring and EJB components, which can also use the common annotations and APIs. These specs and behaviors are beyond the scope of the initial delivery of the Feature Pack for SCA.

Figure 1. The SCA component

As shown in Figure 1, a component is the basic element of a business function in an SCA application. In terms of implementation, this means that a component is basically a configured instance of a single implementation. The configurable aspects of a component are:

- A service is defined in terms of an interface, bindings, intents, and policy sets for quality of service. A good service interface is focused on providing business value. It should be loosely coupled to distributed technologies and loosely coupled to implementation technology. Following these best practices will lead to a more resilient and adaptable enterprise.
- A reference is a description of a service dependency. It's also defined in terms of an interface, bindings, intents, and policy sets for quality of service. References enable a business service to compose and aggregate other services to provide new business services.
- A property is also a configurable aspect of an implementation, but differs from a reference in that a property can hold complex or hierarchical data values. A property type is declared by an implementation, but its value is declared by a component. In general, properties should have a business meaning with sufficient generality as to enable the component implementation to be used in multiple applications. Properties are not meant to be used for infrastructural purposes.

Every SCA application is built from one or more components.
A group of components, assembled in a composite, can be interconnected to form an assembly that exposes one or more business services. In Figure 1, the interface is shown as a line going into the service and reference. An interface is a contract provided by a service or a contract required by a reference.

The component in Figure 1 is part of an SCA domain. This domain represents an administrative scope, potentially distributed over a series of interconnected runtime nodes. Components sit inside composites, which sit inside a domain. For SCA-enabled WebSphere Application Server configurations, the SCA domain is equivalent in scope to the Network Deployment cell.

Bindings describe the transport and protocol configuration of services and references when interacting with endpoints or consumers, both inside and outside the SCA domain. Examples include Web services, EJB RMI/IIOP, and so on. The bottom left and right of Figure 1 depict bindings as lines going into the service and reference. A binding is a description of an access mechanism offered by a service, or of an access mechanism needed by a reference. SCA also provides a default binding that can be used by remotely deployed components in the same SCA domain. This has the advantage of enabling the assembler to refer to services in the SCA domain by a logical name, requiring no specific binding configuration, and is therefore easier to use.

Be aware that binding configuration generally does not surface into a component implementation; this is intentional and desirable. By abstracting the binding specifics into metadata, business logic can be reused and recast even as the specific infrastructure and connectivity of an SOA matures and evolves. This separation of concerns divides development of an SCA application into two roles:

- business logic developer
- assembler
For simple development scenarios, this is usually the same person; in enterprise deployments, however, business logic can be reused in different ways, or even assembled using enterprise-specific conventions, such that the two roles are performed by different individuals.

Java component implementation

A Java class provides an implementation of an SCA component, including its various attributes (services, references, and properties). The services provided by a Java-based implementation might have an interface defined in one of these ways:

- a Java interface
- a Java class
- a Java interface generated from a Web Services Description Language (WSDL) portType

Java for SCA enables the creation of local or remotable services. Local services can only be consumed by clients that are in the same JVM. Local services are fine-grained, tightly-coupled services that enable you to easily segment business function into discrete components. Remote services are meant to be coarse-grained business services that individually provide a business-identifiable function. It is typical to implement a remote service by assembling several local services into an SCA composite.

Java implementation classes must implement all the operations defined by the service interface. If the service interface is defined by a Java interface, the Java-based component can either implement that Java interface, or implement all the operations of the interface. A service whose interface is defined by a Java class (as opposed to a Java interface) is not remotable, while Java interfaces generated from WSDL portTypes are remotable.

A Java implementation might specify the services it provides explicitly through the use of @Service. In certain cases, the use of @Service is not required and the services a Java implementation offers might be inferred from the implementation class itself. @Service can be used to specify that an implementation offers multiple services.
Figure 2 depicts a Java service interface and a Java implementation that provides a service using that interface. There is no infrastructure logic in the business component. Java 5.0 annotations make service declaration and implementation easy.

Figure 2. Java implementation example: Service in Java

A Java service contract defined by an interface or implementation class might use @Remotable to declare that the service follows the semantics of remotable services as defined by the SCA Assembly specification. If @Remotable is not used, then the service is assumed to be local. If an implementation class has implemented interfaces that are not decorated with an @Remotable annotation, the class is considered to implement a single local service whose interface is defined by the public methods defined in the class. The data exchange semantic for calls to local services is by-reference. This means that code must be written with the knowledge that changes made to parameters (other than simple types) by either the client or the provider of the service are visible to the calling service.

Java common annotations

Unlike Javadoc tags, Java annotations are reflective in that they are embedded in class files generated by the compiler and can be retained by the JVM to be made retrievable at run time. Annotations provide a crucial mechanism for specifying elements of the SCA assembly model. For example, annotations can specify the attributes of a service's scope, the nature of implementation, plus interaction attributes, constraints, and requirements. The use of Java annotations is a well-established method for specifying metadata, and this aspect is highlighted by the important role given to annotations in several Java EE specifications. The SCA assembly model can utilize introspection for Java implementations to determine the attributes and nature of SCA services. Annotations provide a way for this metadata to be represented as part of the implementation.
Listing 1 shows an example of a service interface for a PaymentService service, which is intended to be called only by clients that are operating in the same JVM and are able to directly receive and pass object references:

Listing 1

package session.pay;

import org.osoa.sca.annotations.AllowsPassByReference;

@AllowsPassByReference
public interface PaymentService {
    public PaymentOutput MakePayment(PaymentInput pI);
}

Figure 3. SCA scope annotation

The Java programming model enables Java implementations to indicate how the SCA runtime must manage their state. This is covered by the SCA concept of implementation scope. Scope controls the visibility and lifecycle aspects of the implementation. Implementation scope is specified through the @Scope, @Init, and @Destroy Java annotations. Invocations on a service offered by a component are dispatched by the SCA runtime to an implementation instance according to the semantics of its implementation scope. Scopes are specified using the @Scope annotation, which can be added to either the service interface definition or the implementation class itself. The Feature Pack for SCA supports stateless and composite scopes:

- The stateless scope is used when service requests have no implied correlation with prior or future requests. Each request operates independently of the others, and the SCA runtime makes no guarantees about retaining state information for that service.
- When an implementation is under composite scope, all service requests are routed to the same implementation instance for the lifetime of the containing composite (services deployed with the composite scope in a cluster result in one singleton per JVM).

The Feature Pack for SCA does not support request or conversational scopes.
An implementation can declare lifecycle methods that are called when a component instance is instantiated or when its scope ends. @Init denotes the method that must be called upon the first use of an instance during the lifetime of the scope. @Destroy specifies the method to be called when the scope ends. Only public methods that have no arguments can be annotated as lifecycle methods. An example of a service implementation that uses scope and lifecycle annotations is:

Listing 2

package session.shop;

@Scope("stateless")
public class ShoppingCartServiceImpl implements ShoppingCartService {

    public void addToCart(int quantity) {
        // Code to populate the shopping cart.
        ...
        return;
    }

    @Init
    public void beginCart() {
        // Code to initialize shopping cart resources.
    }

    @Destroy
    public void endCart() {
        // Code to free shopping cart resources.
    }
}

The SCA assembly model provides a way to specify attributes that you would typically find in Java annotations via a component type or composite file, without the need for an implementation language that supports annotations. This "side file" provides an alternative to annotations when the annotation style does not match the installation's needs.

Configuration properties

Properties are used to tailor implementations by enabling SCA component definitions to specify values which condition the behavior of the implementation. The @Property annotation is used to annotate the Java class field that holds an injected property, or to designate a setter method that is used by the runtime to set an SCA property value. The type of an injected property can be a simple or a complex Java type; it is determined by the type of the Java class field or the type of the setter method's input argument. A public setter method can be used to inject a property value even when there is no @Property annotation on a field.
In this case, the name of the field or setter method is used to determine the name of the property. Non-public fields must have an @Property annotation to be injected. The setter method is used when both a setter method and a field for a property are present in the Java implementation. The @Property annotation has the following attributes:

- name (optional): the name of the property; defaults to the name of the Java class field.
- required (optional): specifies whether injection is required; defaults to false.

Below are two examples of using an @Property annotation to define a property definition for a string. The first (Listing 3) does so by annotating the field, while the second (Listing 4) does so by annotating the setter method. Using a private field and annotating the setter method is considered a best practice.

Listing 3

@Property(name="currency", required=true)
protected String currency;

Listing 4

@Property(name="currency", required=true)
public void setCurrency(String theCurrency) {
    currency = theCurrency;
}

Properties can be used to change the behavior of a component at run time without making code changes. Assume there is a BankUpdate component in a bank that uses SCA. Using the example above, the balance "currency" can be set to different values based on the bank's policy. Listing 5 shows an example from the BankUpdate component implementation that uses an @Property annotation to identify a property called maxCurrency. Suppose the property has a default value of 1000.

Listing 5

private int maxCurrency = 1000;

// Property declaration using a setter method
@Property(name = "maxCurrency", override = "may")
public void setMaxCurrency(int maxCurrency) {
    this.maxCurrency = maxCurrency;
}

Listing 6 shows customization of the component by setting the maxCurrency property to 1500 in the composite SCDL file to override the default value (1000) defined in the component type.
Listing 6

<component name="BankUpdateComponent">
    <implementation.java ... />
    <property name="maxCurrency">1500</property>
    ...
</component>

Accessing services

Java SCA implementations interact with services through service references. Service references are sometimes called proxies, the common industry term used in distributed programming. An SCA component can obtain a service reference through injection, or programmatically through the ComponentContext API. Using reference injection is the recommended way to obtain a service reference for SCA Java implementations, since it results in code with minimal use of middleware APIs. The ComponentContext API should be used in cases where reference injection is not possible, such as within an EJB or servlet. This API returns references to services that are exposed in the SCA domain.

Figure 4. Reference injection

The @Reference annotation is used to annotate a Java class field or a setter method that is used to inject a service reference, which is then used to invoke the service. The interface of the injected service is defined by either the type of the Java class field or the type of the setter method's input argument.

Listing 7

private HelloService helloService;

@Reference(name="helloService", required=true)
public void setHelloService(HelloService service) {
    helloService = service;
}

public void clientMethod() {
    String result = helloService.hello("Hello World!");
}

Like @Property, the @Reference annotation has the following attributes:

- name (optional): the name of the reference; defaults to the name of the Java class field.
- required (optional): specifies whether injection is required; defaults to true.

CompositeContext API

The SCA V1.0 Java specification does not specify an API that enables a Java consumer to get a reference to a service unless it is statically wired to the service through the SCA assembly mechanisms.
The Feature Pack for SCA provides a WebSphere Application Server extension, called the CompositeContext API, that enables this capability (Figure 5).

Figure 5. CompositeContext API

Java programs that use the CompositeContext API must be deployed in the same topology (either a base server or a Network Deployment cell) as the target service, and the service must be deployed in the SCA domain. In this initial release of the Feature Pack for SCA, the service is only available over the default binding.

Asynchronous programming

The Java programming model for SCA provides for non-blocking and callback asynchronous patterns, as shown in Figure 6. A non-blocking one-way invocation of a service enables the client to continue executing without waiting for the service to run. The invoked service executes immediately or at some later time. A response from the service, if any, must be fed back to the client through a separate mechanism, since no response is available at the time of invocation.

Figure 6. Asynchronous patterns

Asynchronous model

The @OneWay annotation indicates the non-blocking interaction style. One-way invocations represent the simplest form of asynchronous programming: the client of the service invokes the service and continues processing immediately, without waiting for the service to execute. Any method that returns "void" and has no declared exceptions can be marked with an @OneWay annotation. SCA does not define a mechanism for making non-blocking calls to methods that return values or are declared to throw exceptions. Asynchronous responses can be modeled by having the consumer and provider create a pair of one-way requests, such that a one-way request from the consumer to the provider invokes the service, and a one-way request from the provider back to the consumer delivers the response.

Policy annotations

SCA provides facilities for the attachment of policy-related metadata to SCA assemblies.
Implementation policy influences how implementations, services, and references behave at run time, and interaction policy influences how services should behave when invoked. The SCA Policy Framework includes two main concepts:

- Intents express abstract, high-level policy requirements.
- Policy sets express low-level, detailed, concrete policies.

SCA Java defines the @Requires annotation to enable the attachment of intents to a Java class or interface, or to elements within classes and interfaces, such as methods and fields. Here is an example of the @Requires annotation that attaches two intents, confidentiality.message and integrity.message (from the Security domain):

@Requires({CONFIDENTIALITY, INTEGRITY})

Java for SCA also provides annotations that correspond to specific policy intents defined in the SCA Policy Framework. In general, the Java annotations have the same names as the intents defined by SCA. If the intent is a qualified intent, qualifiers are supplied as an attribute to the annotation in the form of a string or an array of strings. For example, the SCA confidentiality intent can be specified as:

@Requires(CONFIDENTIALITY)

or

@Confidentiality

An example of a Java annotation for a specific qualified intent is:

@Authentication({"message", "transport"})

The abstract intents are then realized by SCA either through the expression of concrete policy for the interaction policy on the bindings, or, for implementation policy, by having the SCA container condition the environment before the service gets control. The abstract intents enable the generic expression of needed behavior to be mapped to specific bindings, while maintaining the separation of concerns between what is important to the business logic provider and the infrastructural decisions that affect the access paths to and from the service.
Example: Business logic bound to metadata

The following SCDL example shows how an SCA service can be exposed as both a Web service and an EJB service without changing the business logic. Consider a HistoryService that stores an audit trail of payments made to an online payment system. The business logic, which might include storing a record in a database table, is implemented in the Java programming language using JDBC in a HistoryServiceImpl class, which is part of the history.sca.jdbc package. This service performs the audit trail as part of a larger global transaction. The service can be made available over different bindings based on the needs of the application, as shown in the three scenarios (Figures 7 through 9) specified in the SCDL:

Figure 7. A scenario using the default SCA binding, giving the SCA runtime the most flexibility to choose a wiring mechanism

Figure 8. A scenario that requires the same service to be accessible as a Web service

Figure 9. A scenario that requires interaction with EJB services and needs the capabilities of an EJB container

The implementation remains the same in all of the above cases. The choice of binding decides how this service is invoked by its client: as a Web service or as an EJB service. The service is developed using a technology familiar to the developer, with concentration on the business logic: in this case, how to create audit trails for payments. The wiring aspects are controlled by the assembler by choosing the binding appropriate for a deployment, freeing the developer from adding wiring-specific code while providing the flexibility to choose a binding that works best for each scenario.
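Figures 7 through 9 are images in the original article; the SCDL they show is along these lines. This is an illustrative sketch based on the SCA 1.0 assembly schema — the interface name, WSDL fragment, and EJB URI are assumptions, not copied from the article:

```xml
<!-- Figure 7 (sketch): default SCA binding; binding.sca may even be omitted. -->
<service name="HistoryService">
    <interface.java interface="history.sca.jdbc.HistoryService"/>
    <binding.sca/>
</service>

<!-- Figure 8 (sketch): the same service exposed as a Web service. -->
<service name="HistoryService">
    <interface.java interface="history.sca.jdbc.HistoryService"/>
    <binding.ws wsdlElement="http://history.sca#wsdl.port(HistoryService/HistoryServicePort)"/>
</service>

<!-- Figure 9 (sketch): the same service exposed over the EJB binding. -->
<service name="HistoryService">
    <interface.java interface="history.sca.jdbc.HistoryService"/>
    <binding.ejb uri="corbaname:iiop:host:2809#ejb/HistoryServiceHome"/>
</service>
```

In each case only the binding element changes; the interface and the Java implementation behind it are untouched, which is the point the example makes.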
Summary

The Java experience for SCA programmers should be one that leverages the ease-of-use characteristics of Java 5, which utilizes annotations and dependency injection, and demonstrates a real, tangible separation of concerns that frees business logic from the configuration and protocol-specific APIs that burden many applications today. When this Java experience is coupled with the visual composition metaphors that can be realized by tools that edit the SCA assembly, the focus of developing the application aligns directly with the principles of SOA -- loosely-coupled, reusable, coarse-grained services composed from a broad array of implementation technologies -- with the focus squarely on the business service itself, without the burden of any specific implementation.

Resources

- More in this series
- Service Component Architecture Specifications
- Open SOA
- SCA Java Common Annotations and APIs specifications
- Java Common Annotations and APIs
- Software components: Coarse-grained versus fine-grained
http://www.ibm.com/developerworks/websphere/library/techarticles/0902_beck2/0902_beck2.html
Defines the minimum and maximum values that can be displayed on an axis.

Namespace: DevExpress.UI.Xaml.Charts

Assembly: DevExpress.UI.Xaml.Charts.v19.2.dll

public class WholeAxisRange : AxisRangeBase

Public Class WholeAxisRange Inherits AxisRangeBase

Properties that return WholeAxisRange instances:

An instance of the WholeAxisRange class can be obtained via the AxisBase.WholeRange property, which defines the whole available range of axis values. The AxisBase.VisualRange property defines which part of the AxisBase.WholeRange should currently be visible on a chart. So, if the VisualRange is equal to the WholeRange, no scrolling or zooming is applied to the chart data; if the VisualRange is narrower than the WholeRange, the chart data is zoomed in and can be scrolled.
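The WholeRange/VisualRange relationship is easy to model outside the library. The sketch below is plain Python, not DevExpress API; it only illustrates the invariant that the visual range stays clamped inside the whole range, and that a narrower visual range means the data is zoomed:

```python
def clamp_visual_range(whole, visual):
    """Clamp a (min, max) visual range inside the whole axis range."""
    lo = max(whole[0], visual[0])
    hi = min(whole[1], visual[1])
    return (lo, hi)

def is_zoomed(whole, visual):
    """The chart is zoomed when the visual range covers less than the whole range."""
    return (visual[1] - visual[0]) < (whole[1] - whole[0])

whole = (0, 100)
print(clamp_visual_range(whole, (-10, 40)))  # (0, 40): clamped into the whole range
print(is_zoomed(whole, (0, 100)))            # False: no zooming or scrolling
print(is_zoomed(whole, (20, 60)))            # True: zoomed in, can be scrolled
```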
https://docs.devexpress.com/Win10Apps/DevExpress.UI.Xaml.Charts.WholeAxisRange
Content types¶

Description: Plone's content type subsystems and creating new content types programmatically.

Note: While using Archetypes is fully supported in the Plone 5.x series, the recommendation is to use Dexterity content types for any new development. Archetypes support will be removed from core Plone in version 6.

Introduction¶

Plone has two kinds of content type subsystems:

- Dexterity-based
- Archetypes-based

See also Plomino (later in this document). Flexible architecture allows other kinds of content type subsystems as well.

Type information registry¶

Plone maintains available content types in the portal_types tool. portal_types is a folderish object which stores type information as child objects, keyed by the portal_type property of the types. portal_factory is a tool responsible for creating the persistent object representing the content.

Listing available content types¶

Often you need to ask the user to choose specific Plone content types. Plone offers two Zope 3 vocabularies for this purpose:

- plone.app.vocabularies.PortalTypes: a list of types installed in portal_types
- plone.app.vocabularies.ReallyUserFriendlyTypes: a list of those types that are likely to mean something to users.

If you need to build a vocabulary of user-selectable content types in Python instead, here's how:

from Acquisition import aq_inner
from zope.app.component.hooks import getSite
from zope.schema.vocabulary import SimpleVocabulary, SimpleTerm
from Products.CMFCore.utils import getToolByName

def friendly_types(site):
    """ List user-selectable content types.

    We cannot use the method provided by the IPortalState utility view,
    because the vocabulary factory must be available in contexts where
    there is no HTTP request (e.g. when installing an add-on product).
    This code is copied from

    @return: List of (id, title) tuples
    """
    context = aq_inner(site)
    site_properties = getToolByName(context, "portal_properties").site_properties
    not_searched = site_properties.getProperty('types_not_searched', [])
    portal_types = getToolByName(context, "portal_types")
    types = portal_types.listContentTypes()

    # Get the list of content type ids which are not filtered out
    prepared_types = [t for t in types if t not in not_searched]

    # Return (id, title) pairs
    return [(id, portal_types[id].title) for id in prepared_types]

Creating a new content type¶

These instructions apply to Archetypes-based content types.

Install ZopeSkel¶

Add ZopeSkel to your buildout.cfg and run buildout:

[buildout]
...
parts = instance zopeskel
...

[zopeskel]
recipe = zc.recipe.egg
eggs = PasteScript ZopeSkel

Create an archetypes product¶

Run the following command and answer the questions, e.g. for the project name use my.product:

./bin/paster create -t archetype

Install the product¶

Adjust your buildout.cfg and run buildout again:

[buildout]
develop = my.product
...
parts = instance zopeskel
...

[instance]
eggs = my.product

Note: You need to install your new product using buildout before you can add a new content type in the next step. Otherwise paster complains with the following message: "Command 'addcontent' not known".

Create a new content type¶

Deprecated since version May 2015: Use bobtemplates.plone instead.

Change into the directory of the new product and then use paster to add a new content type:

cd my.product
../bin/paster addcontent contenttype

Related how-tos:

Note: Creating types by hand is not worth the trouble. Please use a code generator to create the skeleton for your new content type.

Warning: The content type name must not contain spaces. Neither the content type name nor the description may contain non-ASCII letters.
If you need to change these, please create a translation catalog which will translate the text to one with spaces or international letters.

Debugging new content type problems¶

Creating types by hand is not worth the trouble.

Creating new content types through-the-web¶

There are solutions for non-programmers and Plone novices to create their content types.

Dexterity¶

- Core feature
- Use the Dexterity control panel in site setup
- Use bobtemplates.plone

Plomino (Archetypes-based add-on)¶

With Plomino you can make an entire web application that can organize & manipulate data with very limited programming experience.

Implicitly allowed¶

Implicitly allowed is a flag specifying whether the content is globally addable or must be specifically enabled for certain folders. The following example allows creation of Large Plone Folder anywhere on the site (it is disabled by default). For available properties, see TypesTool._advanced_properties.

Example:

portal_types = self.context.portal_types
lpf = portal_types["Large Plone Folder"]
lpf.global_allow = True  # This is the "Globally allowed" property

Constraining the addable types per type instance¶

For the instances of some content types, the user may manually restrict which kinds of objects may be added inside. This is done by clicking the Add new… link on the green edit bar and then choosing Restrictions…. This can also be done programmatically on an instance of a content type that supports it. First, we need to know whether the instance supports this.

Example:

from Products.Archetypes.utils import shasattr  # To avoid acquisition

if shasattr(context, 'canSetConstrainTypes'):
    # constrain the types
    context.setConstrainTypesMode(1)
    context.setLocallyAllowedTypes(('News Item',))

If setConstrainTypesMode is 1, then only the types enabled by using setLocallyAllowedTypes will be allowed.
The types specified by setLocallyAllowedTypes must be a subset of the allowable types specified in the content type's FTI (Factory Type Information) in the portal_types tool. If you want the types to appear in the Add new… drop-down menu, then you must also set the immediately addable types. Otherwise, they will appear under the more submenu of Add new….

Example:

context.setImmediatelyAddableTypes(('News Item',))

The immediately addable types must be a subset of the locally allowed types. To retrieve information on the constrained types, you can just use the accessor equivalents of the above methods.

Example:

context.getConstrainTypesMode()
context.getLocallyAllowedTypes()
context.getImmediatelyAddableTypes()
context.getDefaultAddableTypes()
context.allowedContentTypes()

Be careful of acquisition: you might be acquiring these methods from the current instance's parent. It would be wise to first check whether the current object has this attribute, either by using shasattr or by using hasattr on the object's base (access the base object using aq_base). The default addable types are the types that are addable when constrainTypesMode is 0 (i.e. not enabled). For more information, see Products/CMFPlone/interfaces/constraints.py.
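The two subset rules above are easy to get wrong, so here is a Plone-independent sketch of the validation they imply. The function name and data structures are invented for illustration and are not part of the Plone API:

```python
def validate_constraints(fti_allowed, locally_allowed, immediately_addable):
    """Check the subset rules described above: the locally allowed types must
    be a subset of the FTI's allowed types, and the immediately addable types
    must be a subset of the locally allowed ones."""
    fti_allowed = set(fti_allowed)
    locally_allowed = set(locally_allowed)
    immediately_addable = set(immediately_addable)
    if not locally_allowed <= fti_allowed:
        raise ValueError("locally allowed types must be a subset of the FTI's types")
    if not immediately_addable <= locally_allowed:
        raise ValueError("immediately addable types must be a subset of the locally allowed types")
    # Types that end up under the "more" submenu of "Add new...":
    return locally_allowed - immediately_addable

print(validate_constraints(
    fti_allowed=("News Item", "Document", "Folder"),
    locally_allowed=("News Item", "Document"),
    immediately_addable=("News Item",),
))  # {'Document'}
```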
https://docs.plone.org/develop/plone/content/types.html
For production clusters, regular maintenance should include routine backup operations to ensure data integrity and reduce the risk of data loss due to unexpected events. Backup operations should include the cluster state, application state, and the running configuration of both stateless and stateful applications in the cluster.

Velero

As a production-ready solution, Konvoy provides the Velero addon by default to support backup and restore operations for your Kubernetes cluster and persistent volumes. For on-premise deployments, Konvoy deploys Velero integrated with Minio, operating inside the same cluster. For production use cases, it's advisable to provide an external storage volume for Minio to use.

Install the Velero command-line interface

Although installing the Velero command-line interface is optional and independent of deploying a Konvoy cluster, having access to the command-line interface provides several benefits. For example, you can use the Velero command-line interface to back up or restore a cluster on demand, or to modify certain settings without changing the Velero platform service configuration. By default, Konvoy sets up Velero to use Minio over TLS using a self-signed certificate. Currently, the Velero command-line interface does not handle self-signed certificates. Until an upstream fix is released, please use our patched 1.0.0 version of Velero, which adds an --insecureskipverify flag.

Enable or disable the backup addon

You can enable the Velero addon using the following settings in the ClusterConfiguration section of the cluster.yaml file:

addons:
- name: velero
  enabled: true
...

If you want to replace the Velero addon with a different backup service, you can disable the Velero addon by modifying the ClusterConfiguration section of the cluster.yaml file as follows:

addons:
- name: velero
  enabled: false
...
Before disabling the Velero platform service addon, however, be sure you have a recent backup that you can use to restore the cluster in the event that there is a problem converting to the new backup service. After making changes to your cluster.yaml, you must run konvoy up to apply them to the running cluster.

Regular backup operations

For production clusters, you should be familiar with the following basic administrative functions Velero provides:

Set a backup schedule

By default, Konvoy configures a regular, automatic backup of the cluster's state in Velero. The default settings do the following:

- create backups on a daily basis
- save the data from all namespaces

These default settings take effect after the cluster is created. If you install Konvoy with the default platform services deployed, the initial backup starts after the cluster is successfully provisioned and ready for use.

Alternate backup schedules

The Velero CLI provides an easy way to create alternate backup schedules. For example:

velero create schedule thrice-daily --schedule="@every 8h"

To change the default backup service settings:

Check the backup schedules currently configured for the cluster by running the following command:

velero get schedules

Delete the velero-kubeaddons-default schedule by running the following command:

velero delete schedule velero-kubeaddons-default

Replace the default schedule with your custom settings by running the following command:

velero create schedule velero-kubeaddons-default --schedule="@every 24h"

You can also create backup schedules for specific namespaces. Creating a backup for a specific namespace can be useful for clusters running multiple apps operated by multiple teams. For example:

velero create schedule system-critical --include-namespaces=kube-system,kube-public,kubeaddons --schedule="@every 24h"

The Velero command-line interface provides many more options worth exploring.
You can also find tutorials for disaster recovery and cluster migration on the Velero community site.

Fetching a backup archive

To list the available backup archives in your cluster, run the following command:

velero backup get

To download a selected archive to your current working directory on your local workstation, run a command similar to the following:

velero backup download BACKUP_NAME --insecureskipverify

Back up on demand

In some cases, you might find it necessary to create a backup outside of the regularly scheduled interval. For example, if you are preparing to upgrade any components or modify your cluster configuration, you should perform a backup immediately before taking that action. You can then create a backup by running a command similar to the following:

velero backup create BACKUP_NAME

Restore a cluster

Before attempting to restore the cluster state using the Velero command-line interface, you should verify the following requirements:

- The backend storage, Minio, is still operational.
- The Velero platform service in the cluster is still operational.
- The Velero platform service must be set to a restore-only mode to avoid having backups run while restoring.

To list the available backup archives for your cluster, run the following command:

velero backup get

To set Velero to a restore-only mode, modify the Velero addon in the ClusterConfiguration of the cluster.yaml file:

addons:
...
- name: velero
  enabled: true
  values: |-
    configuration:
      restoreOnlyMode: true
...
Then you can apply the configuration change by running:

konvoy deploy addons -y

Finally, check your deployment to verify that the configuration change was applied correctly:

helm get values velero-kubeaddons

To restore cluster data on demand from a selected backup snapshot available in the cluster, run a command similar to the following:

velero restore create --from-backup BACKUP_NAME

Backup service diagnostics

You can check whether the Velero service is currently running on your cluster through the operations portal, or by running the following kubectl command:

kubectl get all -n velero

If the Velero platform service addon is currently running, you can generate diagnostic information about Velero backup and restore operations. For example, you can run the following commands to retrieve backup and restore information that you can use to assess the overall health of Velero in your cluster:

velero get schedules
velero get backups
velero get restores
velero get backup-locations
velero get snapshot-locations
http://docs-staging.mesosphere.com/ksphere/konvoy/latest/backup/
For the Gatsby version of my website, currently in development, I am serving all my images from Imagekit.io - a global image CDN. The reason for doing this is so I will have ultimate flexibility in how images are used within my site, which didn't necessarily fit with what Gatsby has to offer, especially when it came to how I wanted to position images within blog post content served from markdown files.

As I understand it, Gatsby Image has two methods of responsively resizing images:

- Fixed: Images that have a fixed width and height.
- Fluid: Images that stretch across a fluid container.

In my blog posts, I like to align my images (just take a look at my post about my time in the Maldives) as it helps break the post up a bit. I won't be able to achieve that look with the options provided by Gatsby. It'll all look a little bit too stacked. The only option is to serve my images from Imagekit.io, which in the grand scheme of things isn't a bad idea. I get the benefit of being able to transform images on the fly, optimisation (that can be customised through the Imagekit.io dashboard) and fast delivery through its content-delivery network.

To meet my image requirements, I decided to develop a custom responsive image component that will perform the following:

- Lazyload the image when visible in the viewport.
- Parse an array of "srcset" sizes.
- Set a default image width.
- Render the image on page load in low resolution.

React Visibility Sensor

The component requires the "react-visibility-sensor" plugin to mimic the lazy loading functionality. The plugin notifies you when a component enters and exits the viewport. In our case, we only want the sensor to run once an image enters the viewport. By default, the sensor fires every time a block enters and exits the viewport, causing our image to constantly alternate between the small and large versions - something we don't want.
Thanks for a useful post by Mark Oskon, he provided a solution that extends upon the react-visibility-sensor plugin and allows us to turn off the sensor after the first reveal. I ported the code from Mark's solution in a newly created component housed in "/core/visibility-sensor.js", which I then reference into my LazyloadImage component: import React, { Component } from "react"; import PropTypes from "prop-types"; import VSensor from "react-visibility-sensor"; class VisibilitySensor extends Component { state = { active: true }; render() { const { active } = this.state; const { once, children, ...theRest } = this.props; return ( <VSensor active={active} onChange={isVisible => once && isVisible && this.setState({ active: false }) } {...theRest} > {({ isVisible }) => children({ isVisible })} </VSensor> ); } } VisibilitySensor.propTypes = { once: PropTypes.bool, children: PropTypes.func.isRequired }; VisibilitySensor.defaultProps = { once: false }; export default VisibilitySensor; LazyloadImage Component import PropTypes from "prop-types"; import React, { Component } from "react"; import VisibilitySensor from "../core/visibility-sensor" class LazyloadImage extends Component { render() { let srcSetAttributeValue = ""; let sanitiseImageSrc = this.props.src.replace(" ", "%20"); // Iterate through the array of values from the "srcsetSizes" array property. if (this.props.srcsetSizes !== undefined && this.props.srcsetSizes.length > 0) { for (let i = 0; i < this.props.srcsetSizes.length; i++) { srcSetAttributeValue += `${sanitiseImageSrc}?tr=w-${this.props.srcsetSizes[i].imageWidth} ${this.props.srcsetSizes[i].viewPortWidth}w`; if (this.props.srcsetSizes.length - 1 !== i) { srcSetAttributeValue += ", "; } } } return ( <VisibilitySensor key={sanitiseImageSrc} delayedCall={true} partialVisibility={true} once> {({isVisible}) => <> {isVisible ? 
              <img src={`${sanitiseImageSrc}?tr=w-${this.props.widthPx}`}
                   alt={this.props.alt}
                   sizes={this.props.sizes}
                   srcSet={srcSetAttributeValue} /> :
              <img src={`${sanitiseImageSrc}?tr=w-${this.props.defaultWidthPx}`}
                   alt={this.props.alt} />}
          </>
        }
      </VisibilitySensor>
    )
  }
}

LazyloadImage.propTypes = {
  alt: PropTypes.string,
  defaultWidthPx: PropTypes.number,
  sizes: PropTypes.string,
  src: PropTypes.string,
  srcsetSizes: PropTypes.arrayOf(
    PropTypes.shape({
      imageWidth: PropTypes.number,
      viewPortWidth: PropTypes.number
    })
  ),
  widthPx: PropTypes.number
}

LazyloadImage.defaultProps = {
  alt: ``,
  defaultWidthPx: 50,
  sizes: `50vw`,
  src: ``,
  widthPx: 50
}

export default LazyloadImage

Component In Use

The example below shows the LazyloadImage component used to serve a logo at three different widths - 400, 300 and 200 - depending on the viewport:

<LazyloadImage
  src=""
  widthPx={400}
  srcsetSizes={[
    { imageWidth: 400, viewPortWidth: 992 },
    { imageWidth: 300, viewPortWidth: 768 },
    { imageWidth: 200, viewPortWidth: 500 }
  ]} />

Useful Links
https://www.surinderbhomra.com/Blog/2020/02/07/Lazyload-Responsively-Serve-External-Images-Gatsby
CC-MAIN-2020-16
en
refinedweb
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
   /* sequential code */

   #pragma omp parallel
   {
      printf("I am a parallel region.");
   }

   /* sequential code */

   return 0;
}

OpenMP also gives developers control over execution: for example, they can set the number of threads manually. It also allows developers to identify whether data are shared between threads or are private to a thread. OpenMP is available on several open-source and commercial compilers for Linux, Windows, and Mac OS X systems.
https://www.tutorialspoint.com/what-is-openmp
CC-MAIN-2020-16
en
refinedweb
Pretty printing objects with multiline strings in terminal with colors

Pacharapol Withayasakpunt ・2 min read

If you have used JavaScript for some time, you will have noticed that pretty printing JSON in Node.js is as simple as JSON.stringify(obj, null, 2). (Also, if you need multiline strings, there is js-yaml.)

- But there is never coloring

An alternative is console.log which, in Node.js, is not as interactive as in web browsers with Chrome DevTools, and the depth is by default limited to 2.

- How do you maximize depth?
- Easy, use console.dir(obj, { depth: null })

BTW, in my test project, I got this. Even with proper options ({ depth: null, breakLength: Infinity, compact: false }), I still get this.

So, what's the solution? You can customize inspect by providing your own class.

import util from 'util'

class MultilineString {
  // eslint-disable-next-line no-useless-constructor
  constructor (public s: string) {}

  [util.inspect.custom] (depth: number, options: util.InspectOptionsStylized) {
    return [
      '',
      ...this.s.split('\n').map((line) => {
        return '\x1b[2m|\x1b[0m ' + options.stylize(line, 'string')
      })
    ].join('\n')
  }
}

(BTW, worried about \x1b[2m? See "How to change node.js's console font color?" on Stack Overflow.)

And, replace every instance of a multiline string with the class.

function cloneAndReplace (obj: any) {
  if (obj && typeof obj === 'object') {
    if (Array.isArray(obj) && obj.constructor === Array) {
      const o = [] as any[]
      obj.map((el, i) => { o[i] = cloneAndReplace(el) })
      return o
    } else if (obj.constructor === Object) {
      const o = {} as any
      Object.entries(obj).map(([k, v]) => { o[k] = cloneAndReplace(v) })
      return o
    }
  } else if (typeof obj === 'string') {
    if (obj.includes('\n')) {
      return new MultilineString(obj)
    }
  }
  return obj
}

export function pp (obj: any, options: util.InspectOptions = {}) {
  console.log(util.inspect(cloneAndReplace(obj), {
    colors: true,
    depth: null,
    ...options
  }))
}

Now the pretty printing function is ready to go.
If you only need the pretty printing function, I have provided it here:

patarapolw/prettyprint: prettyprint beyond `JSON.stringify(obj, null, 2)` -- multiline strings and colors

I also made it accessible via CLI, and possibly from other programming languages, such as Python (via JSON / safeEval, actually).

Thanks for sharing. That Stack Overflow post with all of the colors listed out is very handy. I don't do a ton of Node.js. I use a lot of Python on the back end and really like the colorama package.
https://dev.to/patarapolw/pretty-printing-objects-in-terminal-with-multiline-strings-with-colors-3jd7
CC-MAIN-2020-16
en
refinedweb
BaseMigrateController is the base class for migrate controllers.

public string $defaultAction = 'up'
    The default command action.

public array $migrationNamespaces = []

public string|array $migrationPath = ['@app/migrations']

public string $templateFile = null
    The template file for generating new migrations. This can be either a path alias (e.g. "@app/migrations/template.php") or a file path.

Creates a new migration. This command creates a new migration using the available migration template. After using this command, developers should modify the created migration skeleton by filling in the actual migration logic.

yii migrate/create create_user_table
yii migrate/create app\migrations\M101129185401CreateUser   # using full namespace

Redoes the last few migrations. This command will first revert the specified migrations, and then apply them again. For example,

yii migrate/redo       # redo the last applied migration
yii migrate/redo 3     # redo the last 3 applied migrations
yii migrate/redo all   # redo all migrations

Upgrades or downgrades till the specified version. Can also downgrade to a certain apply time in the past by providing a UNIX timestamp or a string parseable by the strtotime() function. This means that all the versions applied after the specified time would be reverted. For example,

yii migrate/to 101129_185401                      # using timestamp
yii migrate/to m101129_185401_create_user_table   # using full name
yii migrate/to 1392853618                         # using UNIX timestamp
yii migrate/to "2014-02-15 13:00:50"              # using strtotime() parseable string
https://docs.w3cub.com/yii~2.0/yii-console-controllers-basemigratecontroller/
CC-MAIN-2020-16
en
refinedweb
finally is used when we want something to be executed irrespective of whether the try block resulted in a value or ended up in a catch condition. A much-used scenario is to clean up resources in finally, such as file.close().

Now, in Scala, since a try expression yields a value, the recommendation is that you should not change the value which was computed as a part of try or catch in the finally block. What happens if you do it? To answer, look at the following code blocks:

def f: Int = try { return 1 } finally { return 2 }   //> f: => Int
println(f)                                           //> 2

def g: Int = try 1 finally 2                         //> g: => Int
println(g)                                           //> 1

Since the code was run in a Scala worksheet, you can see the corresponding output as well. The difference in the output is because Scala tries to keep the result compatible with what we see in Java, which is: if there is an explicit exception or a return specified in finally, then it overrides the value computed in the try or catch block. Surprising! That is the reason it is best to avoid changing values in finally and to keep it for ensuring just side effects like releasing resources.
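The Java-compatibility claim is easy to verify directly. This is my own minimal sketch (class and method names are mine): a return in finally wins, while a finally without a return leaves the try block's value alone:

```java
public class FinallyDemo {
    // The return in finally overrides the value from the try block.
    static int f() {
        try { return 1; } finally { return 2; }
    }

    // No return in finally: the try block's value is kept.
    static int g() {
        try { return 1; } finally { System.out.flush(); }
    }

    public static void main(String[] args) {
        System.out.println(f()); // prints 2
        System.out.println(g()); // prints 1
    }
}
```

This is exactly why both languages discourage returning (or throwing) from finally: it silently discards the try block's result.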
https://blog.knoldus.com/2012/11/14/scala-knol-why-returning-a-value-in-finally-is-a-bad-idea/
CC-MAIN-2017-43
en
refinedweb
This week I will mostly be posting about Overrules. [For those of you who haven’t seen The Fast Show (called Brilliant in the US), this is an obscure reference to a character named Jesse. :-)] Aside from this post, Stephen Preston has sent me the samples he’s put together for his AU class on this topic, so expect some serious plagiarism (although now that I’ve given him credit I suppose it’s not really plagiarism :-).

Here’s a question I received recently by email:

Is there some posibility to write something in some of your next blogs about how to get coordinate of grip when user move on screen in realtime. By using grip_stretch command... Example: we first draw polyline, then select it and move grip point...how we can know current position for grip coordinate before user pick point...in realtime...

Coincidentally I’ve been working on an internal project that uses Overrules to achieve this (or something quite similar). As I expect I’ve said before, AutoCAD 2010’s Overrule API is an incredibly powerful mechanism for hooking into and controlling object behaviour: it’s essentially our approach for providing the equivalent of custom object support to .NET programmers (which was the top item for a number of years on AutoCAD’s API wishlist).

To hook into object modification inside AutoCAD, we have a couple of options:

- A TransformOverrule allows us to hook into an object’s TransformBy(), which tells us when it’s rotated, scaled or moved. This is effective at trapping object-level transformations, whether via grips or commands such as MOVE, but won’t tell you when a specific vertex is modified (for instance).
- A GripOverrule allows us to hook into GetGripPointsAt() and MoveGripPointsAt(), which can tell us when a grip-stretch is performed on our object. This gives us more fine-grained information on a per-grip basis, but clearly won’t be called for commands such as MOVE, which bypass the use of grips.
In this particular instance it makes more sense to use a GripOverrule, as we need to know when a particular polyline vertex is edited. Here’s some C# code that implements a GripOverrule for AutoCAD entities (it works just as well for any entity with grips implemented via GetGripPointsAt() & MoveGripPointsAt(), so there’s no need to limit it just to Polylines, even if we could very easily).

namespace GripOverruleTest
{
  public class GripVectorOverrule : GripOverrule
  {
    // A static pointer to our overrule instance
    static public GripVectorOverrule theOverrule = new GripVectorOverrule();

    // A flag to indicate whether we're overruling
    static bool overruling = false;

    // A single set of grips would not have worked in
    // the case where multiple objects were selected.
    static Dictionary<string, Point3dCollection> _gripDict =
      new Dictionary<string, Point3dCollection>();

    public GripVectorOverrule() { }

    private string GetKey(Entity e)
    {
      // Generate a key based on the name of the object's type
      // and its geometric extents
      // (We cannot use the ObjectId, as this is null during
      // grip-stretch operations.)
      return e.GetType().Name + ":" + e.GeometricExtents.ToString();
    }

    // Save the locations of the grips for a particular entity
    private void StoreGripInfo(Entity e, Point3dCollection grips)
    {
      string key = GetKey(e);
      if (_gripDict.ContainsKey(key))
      {
        // Clear the grips if any already associated
        Point3dCollection grps = _gripDict[key];
        using (grps)
        {
          grps.Clear();
        }
        _gripDict.Remove(key);
      }

      // Now we add our grips
      Point3d[] pts = new Point3d[grips.Count];
      grips.CopyTo(pts, 0);
      Point3dCollection gps = new Point3dCollection(pts);
      _gripDict.Add(key, gps);
    }

    // Get the locations of the grips for an entity
    private Point3dCollection RetrieveGripInfo(Entity e)
    {
      Point3dCollection grips = null;
      string key = GetKey(e);
      if (_gripDict.ContainsKey(key))
      {
        grips = _gripDict[key];
      }
      return grips;
    }

    public override void GetGripPoints(
      Entity e, Point3dCollection grips,
      IntegerCollection snaps, IntegerCollection geomIds
    )
    {
      base.GetGripPoints(e, grips, snaps, geomIds);
      StoreGripInfo(e, grips);
    }

    public override void MoveGripPointsAt(
      Entity e, IntegerCollection indices, Vector3d offset
    )
    {
      Document doc = Application.DocumentManager.MdiActiveDocument;
      Editor ed = doc.Editor;

      Point3dCollection grips = RetrieveGripInfo(e);
      if (grips != null)
      {
        // Could get multiple points moved at once,
        // hence the integer collection
        foreach (int i in indices)
        {
          // Get the grip point from our internal state
          Point3d pt = grips[i];

          // Draw a vector from the grip point to the newly
          // offset location, using the index into the
          // grip array as the color (excluding colours 0 and 7).
          // These vectors don't get cleared, which makes
          // for a fun effect.
          ed.DrawVector(
            pt, pt + offset,
            (i >= 6 ?
              i + 2 : i + 1), // exclude colours 0 and 7
            false
          );
        }
      }
      base.MoveGripPointsAt(e, indices, offset);
    }

    [CommandMethod("GOO")]
    public void GripOverruleOnOff()
    {
      Document doc = Application.DocumentManager.MdiActiveDocument;
      Editor ed = doc.Editor;

      if (overruling)
      {
        ObjectOverrule.RemoveOverrule(
          RXClass.GetClass(typeof(Entity)),
          GripVectorOverrule.theOverrule
        );
      }
      else
      {
        ObjectOverrule.AddOverrule(
          RXClass.GetClass(typeof(Entity)),
          GripVectorOverrule.theOverrule,
          true
        );
      }
      overruling = !overruling;
      GripOverrule.Overruling = overruling;

      ed.WriteMessage(
        "\nGrip overruling turned {0}.",
        (overruling ? "on" : "off")
      );
    }
  }
}

One important thing to bear in mind about this sample: we have to store the grips for a particular entity during the GetGripPointsAt() call and then use them during the MoveGripPointsAt() call. This is complicated for a number of reasons... Firstly, we can’t just store the points in a single point collection, as there could be multiple calls to GetGripPointsAt() (while the grips are retrieved and displayed by AutoCAD) followed by multiple calls to MoveGripPointsAt() (while the grips are used to manipulate the objects). So we really need to map a set of points to a particular object. The really tricky thing is that we have an entity passed into both GetGripPointsAt() and MoveGripPointsAt(), but – and here’s the rub – the GetGripPointsAt() entity is typically the original, database-dependent entity, while the entities passed into MoveGripPointsAt() are temporary clones. The temporary clones do not have ObjectIds, which is the best way to identify objects and associate data with them in a map (for instance).
The default cloning behaviour can be overruled using a TransformOverrule – by returning false from the CloneMeForDragging() callback – but assuming we don’t do that (and we don’t really want to do that, as it opens another can of worms), we need to find another way to identify an object that is passed into GetGripPointsAt() with its clone passed in to MoveGripPointsAt(). The solution I ended up going for was to generate a key based on the class-name of the object followed by its geometric extents. This isn’t perfect, but it’s as good as I could find. There is a possibility that entities of the same type could exist at the same spot and still have different grips (which is the main problem – if they have the same grip locations then it doesn’t matter), but that will cause problems with this implementation. If people want to implement a more robust technique, they will probably need to do so for specific entity types, where they have access to more specific information that will help identify their objects: we are sticking to generic entity-level information, which makes it a little hard to be 100% certain we have the right object. One other point… as we’re using the geometric extents to identify an object, it’s hard to clear the grip information from our dictionary: by the time the OnGripStatusChanged() method is called, the entity’s geometry has typically changed, which means we can no longer find it in the dictionary. So for a cleaner solution it’s worth clearing the dictionary at an appropriate moment (probably using some kind of command- or document locking-event). OK, now let’s see what happens when we NETLOAD our application and run our GOO command, which toggles the use of the grip overrule: Command: GOO Grip overruling turned on. OK, so far so good. 
Let’s create a standard AutoCAD polyline containing both line and arc segments: When we select it, we see its grips along with the quick properties panel: And when we use its various grips, we see temporary vectors drawn during each MoveGripPointsAt() call (which will disappear at the next REGEN): Fun stuff! I’m looking forward to diving further into Overrules over the next week or two… :-)
http://through-the-interface.typepad.com/through_the_interface/2009/08/knowing-when-an-autocad-object-is-grip-edited-using-overrules-in-net.html
CC-MAIN-2017-43
en
refinedweb
INDENT
Section: Misc. Reference Manual Pages (1L)

NAME
indent - changes the appearance of a C program by inserting or deleting whitespace.

SYNOPSIS
indent [options] [input-files]
indent [options] [single-input-file] [-o output-file]

DESCRIPTION
indent first checks the environment variable INDENT_PROFILE. If that exists, its value is expected to name the file that is to be used. If the environment variable does not exist, indent looks for `.indent.pro' in the current directory and uses that if found. Finally, indent will search your home directory for `.indent.pro'.

BACKUP FILES
If the environment variable VERSION_CONTROL is set to `simple', only simple backups will be made; if it is `numbered', numbered backups will be made. If its value is `numbered-existing', then numbered backups will be made if there already exist numbered backups for the file being indented; otherwise, a simple backup is made. If VERSION_CONTROL is not set, then indent assumes the behaviour of `numbered-existing'. Other versions of indent use the suffix `.BAK' in naming backup files. This behaviour can be emulated by setting SIMPLE_BACKUP_SUFFIX to `.BAK'. Note also that other versions of indent make backups in the current directory, rather than in the directory of the source file as GNU indent now does.

COMMON STYLES
There are several common styles of C code, including the GNU style, the Kernighan & Ritchie style, and the original Berkeley style; a style may be selected with a single background option.

COMMENTS
indent formats both C and C++ comments. C comments are begun with `/*', terminated with `*/' and may contain newline characters. C++ comments begin with the delimiter `//' and run to the end of the line (not necessarily beginning in column 1). indent further distinguishes between comments found outside of procedures and those within. Comments are formatted when the `-fca' option is specified. To format those beginning in column one, specify `-fc1'. Such formatting is disabled by default. The right margin for comment formatting defaults to 78, but may be changed. The `-cdb' option places the comment delimiters on blank lines. Thus, a single line comment like /* Loving hug */ can be transformed into:

/*
   Loving hug
*/

Stars can be placed at the beginning of multi-line comments with the `-sc' option.

STATEMENTS
The `-saf' option forces a space between a for and the following parenthesis. This is the default.
The `-sai' option forces a space between an if and the following parenthesis. This is the default.

The `-saw' option forces a space between a while and the following parenthesis. This is the default.

The `-prs' option causes all parentheses to be separated with a space from what is between them.

DECLARATIONS
The arguments will appear at one indentation level deeper than the function declaration.

INDENTATION
When preprocessor conditional indentation is specified, nested conditionals become:

#if X
# if Y
# define Z 1
# else
# define Z 0
# endif
#endif

BREAKING LONG LINES
With the option `-ln', or `--line-lengthn', it is possible to specify the maximum length of a line of C code, not including possible comments that follow it. When lines become longer than the specified line length, GNU indent tries to break the line at a logical place. This is new as of version 2.1, however, and not very intelligent or flexible yet. Currently there are two options that allow one to interfere with the algorithm that determines where to break a line.

MISCELLANEOUS OPTIONS
To find out what version of indent you have, use the command indent -version. This will report the version number.

Options' Cross Key
--brace-indent                   -bli
--braces-after-struct-decl-line  -bls
--braces-on-if-line              -br
--level                          -in
--k-and-r-style                  -kr
--leave-optional-blank-lines     -nsob
--leave-preprocessor-space       -lps
--line-comments-indentation      -dn
--line-length                    -ln
--procnames-start-lines          -psl

FILES
$HOME/.indent.pro    holds default options for indent.

AUTHORS
Carlo Wood, Joseph Arceneaux, Jim Kingdon, David Ingamells

HISTORY
Derived from the UCB program "indent".

COPYING
Copyright.
http://www.thelinuxblog.com/linux-man-pages/1/indent
CC-MAIN-2017-43
en
refinedweb
Updated on Kisan Patel

In the .NET Framework, a new syntax named lambda expressions was introduced for anonymous methods. A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types. Lambda expressions are specified as a comma-delimited list of parameters followed by the lambda operator, followed by an expression or statement block. In C#, the lambda operator is =>. If there is more than one input parameter, enclose the input parameters in parentheses.

Syntax

(param1, param2, …paramN) => expr

For example,

x => x.Length > 0

This lambda expression could be read as "input x returns x.Length > 0". If multiple parameters are passed into the lambda expression, separate them with commas, and enclose them in parentheses like this:

(x, y) => x == y

C# can define an anonymous delegate in an easier way, using a lambda expression. Using a lambda expression, you can rewrite the filtering code more compactly. For example,

string[] names = { "Kisan", "Devang", "Ravi", "Ujas", "Karan" };
var filteredName = names.Where(n => n.Contains("K"));

Here, Where is an extension method. The Select statement is also an extension method provided by the Enumerable class. For example,

string[] names = { "Kisan", "Devang", "Ravi", "Ujas", "Karan" };
var filteredName = names
    .Where(n => n.Contains("K"))
    .Select(n => n.ToUpper());

using System;
using System.Collections.Generic;
using System.Linq;

namespace ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            string[] names = { "Kisan", "Devang", "Ravi", "Ujas", "Karan" };
            IEnumerable<string> query = names
                .Where(n => n.Contains("K"))
                .Select(n => n.ToUpper());

            foreach (var name in query)
                Console.WriteLine(name);
        }
    }
}

The output of the above C# program is:

KISAN
KARAN
http://csharpcode.org/blog/lambda-expressions/
CC-MAIN-2019-18
en
refinedweb
AI & Machine Learning

NVIDIA’s RAPIDS joins our set of Deep Learning VM images for faster data science

If you’re a data scientist, researcher, engineer, or developer, you may be familiar with Google Cloud’s set of Deep Learning Virtual Machine (VM) images, which enable one-click setup of machine-learning-focused development environments. But some data scientists still use a combination of pandas, Dask, scikit-learn, and Spark on traditional CPU-based instances. If you’d like to speed up your end-to-end pipeline through scale, Google Cloud’s Deep Learning VMs now include an experimental image with RAPIDS, NVIDIA’s open source and Python-based GPU-accelerated data processing and machine learning libraries that are a key part of NVIDIA’s larger collection of CUDA-X AI accelerated software. CUDA-X AI is the collection of NVIDIA's GPU acceleration libraries to accelerate deep learning, machine learning, and data analysis.

The Deep Learning VM Images comprise a set of Debian 9-based Compute Engine virtual machine disk images optimized for data science and machine learning tasks. All images include common machine learning frameworks (typically deep learning ones, specifically) and tools installed from first boot, and can be used out of the box on instances with GPUs to accelerate your data processing tasks. In this blog post you’ll learn to use a Deep Learning VM which includes GPU-accelerated RAPIDS libraries.

RAPIDS is an open-source suite of data processing and machine learning libraries developed by NVIDIA that enables GPU-acceleration for data science workflows. RAPIDS relies on NVIDIA’s CUDA language, allowing users to leverage GPU processing and high-bandwidth GPU memory through user-friendly Python interfaces. It includes cuDF, a DataFrame API based on Apache Arrow data structures, which will be familiar to users of pandas. It also includes cuML, a growing library of GPU-accelerated ML algorithms that will be familiar to users of scikit-learn.
Together, these libraries provide an accelerated solution for ML practitioners, requiring only minimal code changes and no new tools to learn. RAPIDS is available as a conda or pip package, in a Docker image, and as source code.

Using the RAPIDS Google Cloud Deep Learning VM image automatically initializes a Compute Engine instance with all the pre-installed packages required to run RAPIDS. No extra steps required!

Creating a new RAPIDS virtual machine instance

Compute Engine offers predefined machine types that you can use when you create an instance. Each predefined machine type includes a preset number of vCPUs and amount of memory, and bills you at a fixed rate, as described on the pricing page. If predefined machine types do not meet your needs, you can create an instance with a custom virtualized hardware configuration. Specifically, you can create an instance with a custom number of vCPUs and amount of memory, effectively using a custom machine type. In this case, we’ll create a custom Deep Learning VM image with 48 vCPUs, extended memory of 384 GB, 4 NVIDIA Tesla T4 GPUs and RAPIDS support.

export IMAGE_FAMILY="rapids-latest-gpu-experimental"
export ZONE="us-central1-b"
export INSTANCE_NAME="rapids-instance"
export INSTANCE_TYPE="custom-48-393216-ext"

gcloud compute instances create $INSTANCE_NAME \
        --zone=$ZONE \
        --image-family=$IMAGE_FAMILY \
        --image-project=deeplearning-platform-release \
        --maintenance-policy=TERMINATE \
        --accelerator='type=nvidia-tesla-t4,count=4' \
        --machine-type=$INSTANCE_TYPE \
        --boot-disk-size=1TB \
        --scopes= \
        --metadata='install-nvidia-driver=True,proxy-mode=project_editors'

Notes:

- You can create this instance in any available zone that supports T4 GPUs.
- The option install-nvidia-driver=True installs the NVIDIA GPU driver automatically.
- The option proxy-mode=project_editors makes the VM visible in the Notebook Instances section.
- To define extended memory, use 1024*X where X is the number of GB required for RAM.
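As a quick check on the custom machine type string above: the memory component is expressed in MB, so 384 GB of RAM with 48 vCPUs yields custom-48-393216-ext. This tiny helper (my own sketch, not part of the gcloud tooling) makes the arithmetic explicit:

```python
def custom_machine_type(vcpus: int, ram_gb: int, extended: bool = True) -> str:
    """Build a Compute Engine custom machine type string.

    Memory is specified in MB, i.e. 1024 * X where X is RAM in GB;
    the '-ext' suffix marks extended memory.
    """
    name = f"custom-{vcpus}-{ram_gb * 1024}"
    return name + "-ext" if extended else name

print(custom_machine_type(48, 384))  # custom-48-393216-ext
```

Keeping the conversion in one place avoids the easy mistake of passing GB where the API expects MB.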
Using RAPIDS

To put RAPIDS through its paces on Google Cloud Platform (GCP), we focused on a common HPC workload: a parallel sum reduction test. This test can operate on very large problems (the default size is 2TB) using distributed memory and parallel task processing.

There are several applications that require the computation of parallel sum reductions in high performance computing (HPC). Some examples include:

- Solving linear recurrences
- Evaluation of polynomials
- Random number generation
- Sequence alignment
- N-body simulation

It turns out that parallel sum reduction is useful for the data science community at large. To manage the deluge of big data, a parallel programming model called “MapReduce” is used for processing data using distributed clusters. The “Map” portion of this model supports sorting: for example, sorting products into queues. Once the model maps the data, it then summarizes the output with the “Reduce” algorithm—for example, count the number of products in each queue. A summation operation is the most compute-heavy step, and given the scale of data that the model is processing, these sum operations must be carried out using parallel distributed clusters in order to complete in a reasonable amount of time.

But certain reduction sum operations contain dependencies that inhibit parallelization. To illustrate such a dependency, suppose we want to add a series of numbers as shown in Figure 1. In Figure 1 on the left, we must first add 7 + 6 to obtain 13, before we can add 13 + 14 to obtain 27, and so on in a sequential fashion. These dependencies inhibit parallelization. However, since addition is associative, the summation can be expressed as a tree (Figure 2 on the right). The benefit of this tree representation is that the dependency chain is shallow, and since the root node summarizes its leaves, this calculation can be split into independent tasks.
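The tree-shaped reduction of Figure 2 can be sketched in a few lines of pure Python (my illustration, not RAPIDS code): each pass sums adjacent pairs, so the dependency depth shrinks from n to about log2(n), and every pair within a level is independent work that could go to a separate task:

```python
def tree_sum(values):
    """Sum a list by repeatedly combining adjacent pairs.

    Each level of the tree is independent work; the depth is
    roughly log2(n) instead of a length-n sequential chain.
    """
    xs = list(values)
    while len(xs) > 1:
        # Pair up neighbours; a leftover odd element passes through.
        xs = [xs[i] + xs[i + 1] if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0] if xs else 0

print(tree_sum([7, 6, 14, 2, 9, 1]))  # same result as sum(): 39
```

Dask's `.sum()` on a chunked array applies the same idea, with each chunk's partial sum computed by a worker before the partials are combined.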
Speaking of tasks, this brings us to the Python package Dask, a popular distributed computing framework. With Dask, data scientists and researchers can use Python to express their problems as tasks. Dask then distributes these tasks across processing elements within a single system, or across a cluster of systems. The RAPIDS team recently integrated GPU support into a package called dask-cuda. When you import both dask-cuda and another package called CuPy, which allows data to be allocated on GPUs using familiar NumPy constructs, you can really explore the full breadth of models you can build with your data set.

To illustrate, Figures 3 and 4 show side-by-side comparisons of the same test run. On the left, 48 cores of a single system are used to process 2 terabytes (TB) of randomly initialized data using 48 Dask workers. On the right, 4 Dask workers process the same 2 TB of data, but dask-cuda is used to automatically associate those workers with 4 Tesla T4 GPUs installed in the same system.

Running RAPIDS

To test parallel sum-reduction, perform the following steps:

1. SSH into the instance. See Connecting to Instances for more details.

2. Download the code required from this repository and upload it to your Deep Learning Virtual Machine Compute Engine instance. These two files are of particular importance as you profile performance:

You can find the sample code to run these tests, based on this blog, GPU Dask Arrays, below.

3. Run the tests:

Run the test on the instance's CPU complex, in this case specifying 48 vCPUs (indicated by the -c flag):

time ./run.sh -c 48

Using CPUs and Local Dask
Allocating and initializing arrays using CPU memory
Array size: 2.00 TB. Computing parallel sum . . .
Processing complete.
Wall time create data + computation time: 695.50063515 seconds

real 11m 45.523s
user 0m 52.720s
sys 0m 10.100s

Now, run the test using 4 (indicated by the -g flag) NVIDIA Tesla T4 GPUs:

time ./run.sh -g 4

Using GPUs and Local Dask
Allocating and initializing arrays using GPU memory with CuPY
Array size: 2.00 TB. Computing parallel sum . . .
Processing complete.

Wall time create data + computation time: 57.94356680 seconds

real 1m 13.680s
user 0m 9.856s
sys 0m 13.832s

Here are some initial conclusions we derived from these tests:

- Processing 2 TB of data on GPUs is much faster (an ~12x speed-up for this test)
- Using Dask's dashboard, you can visualize the performance of the reduction sum as it is executing
- CPU cores are fully occupied during processing on CPUs, but the GPUs are not fully utilized
- You can also run this test in a distributed environment

import argparse
import subprocess
import sys
import time

import cupy
import dask.array as da
from dask.distributed import Client, LocalCluster, wait
from dask_cuda import LocalCUDACluster


def create_data(rs, xdim, ydim, x_chunk_size, y_chunk_size):
    x = rs.normal(10, 1,
                  size=(xdim, ydim),
                  chunks=(x_chunk_size, y_chunk_size))
    return x


def run(data):
    (data + 1)[::2, ::2].sum().compute()
    return


def get_scheduler_info():
    scheduler_ip = subprocess.check_output(['hostname', '--all-ip-addresses'])
    scheduler_ip = scheduler_ip.decode('UTF-8').split()[0]
    scheduler_port = '8786'
    scheduler_uri = '{}:{}'.format(scheduler_ip, scheduler_port)
    return scheduler_ip, scheduler_uri


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--xdim', type=int, default=500000)
    parser.add_argument('--ydim', type=int, default=500000)
    parser.add_argument('--x_chunk_size', type=int, default=10000)
    parser.add_argument('--y_chunk_size', type=int, default=10000)
    parser.add_argument('--use_gpus_only', action="store_true")
    parser.add_argument('--n_gpus', type=int, default=1)
    parser.add_argument('--use_cpus_only', action="store_true")
    parser.add_argument('--n_sockets', type=int, default=1)
    parser.add_argument('--n_cores_per_socket', type=int, default=1)
    parser.add_argument('--use_dist_dask', action="store_true")
    args = parser.parse_args()

    sched_ip, sched_uri = get_scheduler_info()

    if args.use_dist_dask:
        print('Using Distributed Dask')
        client = Client(sched_uri)
    elif args.use_gpus_only:
        print('Using GPUs and Local Dask')
        cluster = LocalCUDACluster(ip=sched_ip, n_workers=args.n_gpus)
        client = Client(cluster)
    elif args.use_cpus_only:
        print('Using CPUs and Local Dask')
        cluster = LocalCluster(ip=sched_ip,
                               n_workers=args.n_sockets,
                               threads_per_worker=args.n_cores_per_socket)
        client = Client(cluster)

    start = time.time()

    if args.use_gpus_only:
        print('Allocating arrays using GPU memory with CuPY')
        rs = da.random.RandomState(RandomState=cupy.random.RandomState)
    elif args.use_cpus_only:
        print('Allocating arrays using CPU memory')
        rs = da.random.RandomState()

    x = create_data(rs, args.xdim, args.ydim,
                    args.x_chunk_size, args.y_chunk_size)
    print('Array size: {:.2f}TB. Computing...'.format(x.nbytes / 1e12))

    run(x)

    end = time.time()
    delta = (end - start)

    print('Processing complete.')
    print('Wall time: {:10.8f} seconds'.format(delta))

    del x


if __name__ == '__main__':
    main()

In this example, we allocate Python arrays using the double data type by default. Since this code allocates an array size of (500K x 500K) elements, this represents 2 TB (500K × 500K × 8 bytes / word). Dask initializes these array elements randomly via normal Gaussian distribution using the dask.array package.

Running RAPIDS on a distributed cluster

You can also run RAPIDS in a distributed environment using multiple Compute Engine instances. You can use the same code to run RAPIDS in a distributed way with minimal modification and still decrease the processing time. If you want to explore RAPIDS in a distributed environment please follow the complete guide here.
time ./run.sh -g 20 -d

Using Distributed Dask
Allocating and initializing arrays using GPU memory with CuPY
Array size: 2.00 TB. Computing parallel sum . . .
Processing complete.
Wall time create data + computation time: 11.63004732 seconds

real 0m 12.465s
user 0m 1.432s
sys  0m 0.324s

Conclusion

As you can see from the above example, the RAPIDS VM Image can dramatically speed up your ML workflows. Running RAPIDS with Dask lets you seamlessly integrate your data science environment with Python and its myriad libraries and wheels, HPC schedulers such as SLURM, PBS, SGE, and LSF, and open-source infrastructure orchestration projects such as Kubernetes and YARN. Dask also helps you develop your model once and flexibly run it on either a single system or scale it out across a cluster. You can then dynamically adjust your resource usage based on computational demands. Lastly, Dask helps you maximize uptime through the fault-tolerance capabilities intrinsic to failover-capable cluster computing. It's also easy to deploy on Google's Compute Engine distributed environment.

If you're eager to learn more, check out the RAPIDS project and open-source community website, or review the RAPIDS VM Image documentation.

Acknowledgements: Ty McKercher, NVIDIA, Principal Solution Architect; Vartika Singh, NVIDIA, Solution Architect; Gonzalo Gasca Meza, Google, Developer Programs Engineer; Viacheslav Kovalevskyi, Google, Software Engineer
https://cloud.google.com/blog/products/ai-machine-learning/nvidias-rapids-joins-our-set-of-deep-learning-vm-images-for-faster-data-science
CC-MAIN-2019-18
en
refinedweb
from cenpy import products
import matplotlib.pyplot as plt
%matplotlib inline

chicago = products.ACS(2017).from_place('Chicago, IL', level='tract', variables=['B00002*', 'B01002H_001E'])

Matched: Chicago, IL to Chicago city within layer Incorporated Places

Install the prerelease candidate using:

pip install --pre cenpy

I plan to make a full 1.0 release in July. File bugs, rough edges, things you want me to know about, and interesting behavior at! I'll also maintain a roadmap here.

Cenpy started as an interface to explore and query the US Census API and return Pandas DataFrames. This was mainly intended as a wrapper over the basic functionality provided by the Census Bureau. I was initially inspired by acs.R in its functionality and structure. In addition to cenpy, a few other census packages exist out there in the Python ecosystem, such as census and censusdata. And, I've also heard/seen folks use requests raw on the Census API to extract the data they want.

All of the packages I've seen (including cenpy itself) involved a very stilted/specific API query due to the way the census API worked. Basically, it's difficult to construct an efficient query against the census API without knowing the so-called "geographic hierarchy" in which your query fell: the main census API does not allow a user to leave middle levels of the hierarchy vague. For you to get a collection of census tracts in a state, you need to query for all the counties in that state, then express your query about tracts in terms of a query about all the tracts in those counties. Even tidycensus in R requires this in many common cases.

Say, to ask for all the blocks in Arizona, you'd need to send a few separate queries:

what are the counties in Arizona?
what are the tracts in all of these counties?
what are the blocks in all of these tracts in all of these counties?

This was necessary because of the way the hierarchy diagram (shown above) is structured.
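To make the cost of that query pattern concrete, here is a toy sketch (invented data and function names, not the census API) that counts how many requests the geo-in-geo pattern forces, one per intermediate level:

```python
# Toy model of the hierarchical query pattern described above.
# Every level of the hierarchy costs one extra round of queries.
hierarchy = {
    "Arizona": {
        "Maricopa County": {"tract 1": ["block a", "block b"]},
        "Pima County": {"tract 2": ["block c"]},
    }
}

queries = 0

def blocks_in_state(state):
    global queries
    queries += 1                      # query 1: counties in the state
    counties = hierarchy[state]
    all_blocks = []
    for tracts in counties.values():
        queries += 1                  # one query per county: its tracts
        for blocks in tracts.values():
            queries += 1              # one query per tract: its blocks
            all_blocks.extend(blocks)
    return all_blocks

result = blocks_in_state("Arizona")
print(result)   # ['block a', 'block b', 'block c']
print(queries)  # 5 queries for just 2 counties and 2 tracts
```

With real data (15 Arizona counties and thousands of tracts), the query count explodes, which is exactly the tree-search slowness the post describes; a single spatial "blocks within Arizona" query avoids it entirely.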
Blocks don't have a unique identifier outside of their own tract; if you ask for block 001010, there might be a bunch of blocks around the country that match that identifier. Sometimes, this meant conducting a very large number of repetitive queries, since the packages are trying to build out a correct search tree hierarchy. This style of tree search is relatively slow, especially when conducted over the internet...

So, if we focus on the geo-in-geo style queries using the hierarchy above, we're in a tough spot if we want to also make the API easy for humans to use. Fortunately for us, a geographic information system can figure out these kinds of nesting relationships without having to know each of the levels above or below. This lets us use very natural query types, like: what are the blocks *within* Arizona?

There is a geographic information system that cenpy had access to, called the TIGER Web Mapping Service. These are ESRI MapServices that allow for a fairly complex set of queries to extract information. But, in general, neither census nor censusdata used the TIGER Web Mapping Service API. Cenpy's cenpy.tiger was a fully-featured wrapper around the ESRI MapService, but was mainly not used by the package itself to solve this tricky problem of building many queries to answer a geo-in-geo question.

Instead, cenpy 1.0.0 uses the TIGER Web Mapping Service to intelligently get all the required geographies, and then queries for those geographies in a very parsimonious way. This means that, instead of tying our user interface to the census's data structures, we can have some much more natural place-based query styles. Let's grab all the tracts in Los Angeles. And, let's get the Race table, P004.

from cenpy import products
import matplotlib.pyplot as plt
%matplotlib inline

The new cenpy API revolves around products, which integrate the geographic and the data APIs together.
For starters, we'll use the 2010 Decennial API:

dectest = products.Decennial2010()

Now, since we don't need to worry about entering geo-in-geo structures for our queries, we can request Race data for all the tracts in Los Angeles County using the following method:

la = dectest.from_county('Los Angeles, CA', level='tract', variables=['^P004'])

Matched: Los Angeles, CA to Los Angeles, CA within layer Counties

And, making a pretty plot of the Hispanic population in LA:

How this works from a software perspective is a significant improvement on how the other packages, like cenpy itself, work: first, match the requested place name to a target within a level of the census geography (e.g. match Los Angeles, CA to Los Angeles County); then, query for the requested geographies that fall within that target. Since the Web Mapping Service provides us all the information needed to build a complete geo-in-geo query, we don't need to use repeated queries. Further, since we are using spatial querying to do the heavy lifting, there's no need for the user to specify a detailed geo-in-geo hierarchy: using the Census GIS, we can build the hierarchy for free.

Thus, this even works for grabbing block information over a very large area, such as the Austin, TX MSA:

aus = dectest.from_msa('Austin, TX', level='block', variables=['^P003', 'P001001'])

Matched: Austin, TX to Austin-Round Rock-San Marcos, TX within layer Metropolitan Statistical Areas
https://nbviewer.jupyter.org/gist/ljwolf/3481aeadf1b0fbb46b72553a08bfc4e6?flush_cache=true
Return to worksheet index.

For the code below, write a prediction on what it will output. Then run the code and state if your prediction was accurate or not. If your prediction is incorrect, make sure you understand why. If you don't know why the code runs the way it does, watch the video at the end of the assignment for an explanation. If you are looking at the text-only version of this worksheet, go on-line and find the HTML version of this worksheet for the video. This section is worth 10 points, a half point per problem rounded up.

def a(x):
    print("a", x)
    x = x + 1
    print("a", x)

x = 1
print("main", x)
a(x)
print("main", x)

def b(y):
    print("b", y[1])
    y[1] = y[1] + 1
    print("b", y[1])

y = [123, 5]
print("main", y[1])
b(y)
print("main", y[1])

def c(y):
    print("c", y[1])
    y = [101, 102]
    print("c", y[1])

y = [123, 5]
print("main", y[1])
c(y)
print("main", y[1])

This next section involves finding the mistakes in the code. If you can't find the mistake, check out the video at the end for the answer and an explanation of what is wrong. This section is worth 7 points.

def get_user_choice():
    while True:
        command = input("Command: ")
        if command = f or command = m or command = s or command = d or command = q:
            return command
        print("Hey, that's not a command. Here are your options:")
        print("f - Full speed ahead")
        print("m - Moderate speed")
        print("s - Status")
        print("d - Drink")
        print("q - Quit")

user_command = get_user_choice()
print("You entered:", user_command)

(13 pts) For this section, write code that satisfies the following items:
http://programarcadegames.com/worksheets/show_file.php?file=worksheet_09.php
Has anyone followed this Tutorial lately? I followed it and then downloaded his GitHub example of just the front end, and I got it almost fully working. I can post to the server with Postman. I can also manually add data to the table in VS through the backend project. But I cannot get the front-end Android app working. It just stores stuff locally; it never gets to the server, push or pull. But something that is weird is when I terminate the app and run it again, it does not load any of the previous data. So it may not even be storing it locally either; that, or it's wiping it every time.

This is the code for the AzureService. All of the other code is the same as the project cited above.

namespace PillTrackerApp.Services
{
    class AzureService
    {
        public MobileServiceClient Client { get; set; } = null;
        private IMobileServiceSyncTable<Pill> pillTable;

        public static bool UseAuth { get; set; } = false;

        public async Task Initialize()
        {
            if (Client?.SyncContext?.IsInitialized ?? false)
                return;

            var appUrl = "";
            Client = new MobileServiceClient(appUrl);

            var path = "syncstore.db";
            //path = Path.Combine(MobileServiceClient.DefaultDatabasePath, path);
            var store = new MobileServiceSQLiteStore(path);
            store.DefineTable<Pill>();

            await Client.SyncContext.InitializeAsync(store);

            pillTable = Client.GetSyncTable<Pill>();
        }

        public async Task SyncPill()
        {
            try
            {
                if (!CrossConnectivity.Current.IsConnected)
                    return;

                await pillTable.PullAsync("allPills", pillTable.CreateQuery());
                await Client.SyncContext.PushAsync();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("Unable to sync pills, that is alright as we have offline capabilities: " + ex);
            }
        }

        public async Task<IEnumerable<Pill>> GetPills()
        {
            await Initialize();
            await SyncPill();
            return await pillTable.OrderBy(c => c.DateUtc).ToEnumerableAsync();
        }

        public async Task<Pill> AddPill(bool atHome, string location)
        {
            await Initialize();
            var pill = new Pill()
            {
                DateUtc = DateTime.UtcNow,
                TakenAtHome = atHome,
                OS = Device.RuntimePlatform,
                Location = location ?? string.Empty
            };
            await pillTable.InsertAsync(pill);
            await SyncPill();
            return pill;
        }
    }
}

I have set breakpoints (F9) on these two lines:

await pillTable.PullAsync("allPills", pillTable.CreateQuery());
await Client.SyncContext.PushAsync();

And it never seems to run them; it just exits the try statement. I even ran them before the try statement and nothing, but no exception was thrown either... I would love any insight! I did type it all by hand and I didn't copy and paste, so I might have messed something up, but I can't find it if that was the issue.

Answers

Is the question I am asking unclear, confusing, or nonsensical? Sorry if it is. In the time since the post I was able to get a different project on Android to talk with an easy table. But I need it to work with the project above that uses a C# backend. Thanks!
https://forums.xamarin.com/discussion/comment/308021/
Dependency Static Class

[namespace: Serenity.Abstractions, assembly: Serenity.Core]

The Dependency class is the service locator of Serenity. All dependencies are queried through its methods:

public static class Dependency
{
    public static TType Resolve<TType>() where TType : class;
    public static TType Resolve<TType>(string name) where TType : class;
    public static TType TryResolve<TType>() where TType : class;
    public static TType TryResolve<TType>(string name) where TType : class;
    public static IDependencyResolver SetResolver(IDependencyResolver value);
    public static IDependencyResolver Resolver { get; }
    public static bool HasResolver { get; }
}

In your application's start method (e.g. in global.asax.cs) you should initialize the service locator by setting a dependency resolver (IDependencyResolver) implementation (an IoC container) with the SetResolver method.

Dependency.SetResolver Method

Configures the dependency resolver implementation to use. You can use the IoC container of your choice, but Serenity already includes one based on Munq:

var container = new MunqContainer();
Dependency.SetResolver(container);

SetResolver returns the previously configured IDependencyResolver implementation, or null if none was configured before.

Dependency.Resolver Property

Returns the currently configured IDependencyResolver implementation. Throws an InvalidProgramException if none is configured yet.

Dependency.HasResolver Property

Returns true if an IDependencyResolver implementation is configured through SetResolver. Returns false if not.

Dependency.Resolve Method

Returns the registered implementation for the requested type. If no implementation is registered, throws a KeyNotFoundException. If no dependency resolver is configured yet, throws an InvalidProgramException.

The second overload of the Resolve method accepts a name parameter. This should be used if different providers are registered for an interface depending on scope.
To retrieve an IConfigurationRepository provider for each of these scopes, you would call the method like:

var appConfig = Dependency.Resolve<IConfigurationRepository>("Application");
var srvConfig = Dependency.Resolve<IConfigurationRepository>("Server");

Dependency.TryResolve Method

This is functionally equivalent to the Resolve method, with one difference: if a provider is not registered for the requested type, or no dependency resolver is configured yet, TryResolve doesn't throw an exception but instead returns null.
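To illustrate the Resolve/TryResolve semantics described above outside of C#, here is a minimal service-locator sketch in Python (the class and registration scheme are invented for illustration; Serenity's actual implementation differs):

```python
# Minimal service-locator sketch mirroring the semantics above:
# resolve() raises when nothing is registered (like KeyNotFoundException),
# try_resolve() returns None instead of raising.
class Resolver:
    def __init__(self):
        self._registry = {}

    def register(self, key, provider):
        self._registry[key] = provider

    def resolve(self, key):
        if key not in self._registry:
            raise KeyError(key)          # like KeyNotFoundException
        return self._registry[key]

    def try_resolve(self, key):
        return self._registry.get(key)   # None instead of raising

resolver = Resolver()
# Named registrations model the (type, name) overloads of Resolve.
resolver.register(("IConfigurationRepository", "Application"), object())

app_cfg = resolver.resolve(("IConfigurationRepository", "Application"))
missing = resolver.try_resolve(("IConfigurationRepository", "Server"))
print(missing)  # None
```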
https://volkanceylan.gitbooks.io/serenity-guide/service_locator/dependency_static_class.html
I wanted to write a piece of code in C for my Stellaris LaunchPad just to turn on the onboard LED while keeping library usage to a minimum. To my surprise, the compiled code was around 800 bytes in size. So to check what was put into the compiled code by the compiler, I checked the assembly code using a disassembler. It had a lot of code which I didn't write the C code for. I would like to know what that code is for and how it got added by the compiler. I am trying to learn how a compiler behaves and what behind-the-scenes things the compiler is doing. Please help me.

This is my C program:

#include "inc/hw_memmap.h"
#include "inc/hw_types.h"
#include "driverlib/rom.h"
#include "driverlib/sysctl.h"

#define GPIOFBASE 0x40025000
#define GPIOCLK *((volatile unsigned long *)(0x400FE000 + 0x608))
#define FDIR *((volatile unsigned long *)(GPIOFBASE + 0x400))
#define FDEN *((volatile unsigned long *)(GPIOFBASE + 0x51C))
#define FDATA *((volatile unsigned long *)(GPIOFBASE + 0x3FF))

void main(void)
{
    ROM_SysCtlClockSet(SYSCTL_SYSDIV_4 | SYSCTL_USE_PLL | SYSCTL_XTAL_16MHZ | SYSCTL_OSC_MAIN);

    GPIOCLK |= (1<<5);
    FDIR |= 0xE;
    FDEN |= 0xE;
    FDATA |= 0xE;

    while (1);
}

The only API call I used was to set the clock using an on-chip ROM library. Please check the disassembly code at this pastebin: (The main: is at 0x190.)

The additional code will be CPU initialisation and C runtime initialisation. The source code for this start-up is probably provided with your compiler. In GCC, for example, it is normally called crt0.s.

Depending on your CPU and memory, it will probably require some initialisation to set the correct clock frequency, memory timing etc. On top of that, the C runtime requires static data initialisation and stack initialisation. If C++ is supported, additional code is necessary to call the constructors of any static objects.
Cortex-M devices like Stellaris are designed to run C code with minimum overhead, and it is possible to essentially start C code from reset. However, the default start-up state is often not what you want to run your application in, since, for example, it is likely to run at a lower and less accurate clock frequency.

Added 06Dec2012: Your start-up code is almost certainly provided by the CMSIS. The CMSIS folder will contain CoreSupport and DeviceSupport folders containing start-up code. You can copy this code (or the relevant parts of it) to your project, modify it as necessary, and link it in place of the provided code. The CMSIS is frequently updated, so there is an argument for doing that in any case. Your build log and/or map file are useful for determining which CMSIS components are linked.
http://m.dlxedu.com/m/askdetail/3/c6a0e68e4f6715d180571c30f6feeab9.html
#include <tlsdefault.h>

This is an abstraction of the various TLS implementations. Definition at line 30 of file tlsdefault.h.

Supported TLS types. Definition at line 37 of file tlsdefault.h.

Constructs a new TLS wrapper. Definition at line 41 of file tlsdefault.cpp.

Virtual Destructor. Definition at line 72 of file tlsdefault.cpp.

This function performs internal cleanup and will be called after a failed handshake attempt. Definition at line 108 of file tlsdefault.cpp.

Use this function to feed unencrypted data to the encryption implementation. The encrypted result will be pushed to the TLSHandler's handleEncryptedData() function. Definition at line 92 of file tlsdefault.cpp.

This function is used to retrieve certificate and connection info of an encrypted connection. Reimplemented from TLSBase. Definition at line 136 of file tlsdefault.cpp.

Returns the state of the encryption. Reimplemented from TLSBase. Definition at line 122 of file tlsdefault.cpp.

Use this function to set a number of trusted root CA certificates which shall be used to verify a server's certificate. Definition at line 130 of file tlsdefault.cpp.

Use this function to set the user's certificate and private key. The certificate will be presented to the server upon request and can be used for SASL EXTERNAL authentication. The user's certificate file should be a bundle of more than one certificate in PEM format. The first one in the file should be the user's certificate; each cert following that one should have signed the previous one. Definition at line 144 of file tlsdefault.cpp.

Returns an ORed list of supported TLS types. Definition at line 77 of file tlsdefault.cpp.
https://camaya.net/api/gloox-0.9.9.12/classgloox_1_1TLSDefault.html
I'm learning how to use the SDL libraries but I'm having a small problem playing the sound. The program appears to load the sound but either not play it or play it too low for me to hear. Here's my code so far:

#include "SDL/SDL.h"
#include "SDL/SDL_mixer.h"

Mix_Music *play_sound = NULL;

void cleanUp()
{
    Mix_FreeMusic(play_sound);
    Mix_CloseAudio();
    SDL_Quit();
}

int main(int argc, char* args[])
{
    SDL_Init(SDL_INIT_EVERYTHING);
    Mix_OpenAudio(22050, MIX_DEFAULT_FORMAT, 2, 4096);

    play_sound = Mix_LoadMUS("noise.mp3");
    Mix_PlayMusic(play_sound, -1);

    cleanUp();
    return 0;
}

I'm using Dev-Cpp with these linker arguments: -lmingw32 -lSDLmain -lSDL -lSDL_mixer, and I have the proper .dll's in the folder with my project along with the actual .mp3. Any suggestions?
https://www.daniweb.com/programming/software-development/threads/161557/sdl-playing-music
Actually, you should register it in the config/app.php file. Then, you can add:

In the Service Providers array:
'Menu\MenuServiceProvider',

In the aliases array:
'Menu' => 'Menu\Menu',

Finally, you need to run the following command:

php artisan dump-autoload

I assume that you already added this package in composer.json.

Sorry, I didn't understand your question before you edited it. I'm not sure why you cannot call the $menu variable in your view or your controller. Based on my experience, I would do this with injection.

In routes.php:

App::bind('MenuCreator', function($app) {
    return new yournamespace\YourMenuClass();
});

In YourMenuClass.php, let's say I have a function called createMenu:

public function createMenu()
{
    //TODO create Menu and styling Menu
}

If I want to call this injection method, I can access it this way:

$menuClass = App::make('MenuCreator');
$menuClass->createMenu();

Please note that you need to import App. And you also need to run php artisan dump-autoload.

Hope this helps. Sorry for my bad English.
http://databasefaq.com/index.php/answer/249/php-laravel-laravel-5-how-to-register-global-variable-for-my-laravel-application
HA. Nova-compute is down after destroying primary controller

Bug Description

ISO: {"build_id": "2014-03-

Steps:
- Create environment: CentOS, Neutron with GRE segmentation
- Add nodes: 3 controller, 2 compute
- Deploy changes
- Destroy primary controller
- Try to boot an instance. Check output of a "nova service-list" command

Result: Unable to boot instance: No valid host was found; nova-compute is down

needs to be reconfirmed with newer rabbitmq and haproxy

This is still a problem in 4.1.1 since the relevant patches are back-ported; I'm going to say that this is not fixed.

Andrew, is it OK that we move this bug to 4.1.1?

After removing the 1st controller, the RabbitMQ cluster takes a long time (about 180 sec) to rebuild itself:

[root@node-3 ~]# rabbitmqctl cluster_status
Cluster status of node 'rabbit@node-3' ...
[{nodes, {running_ {partitions,[]}]
...done.

Horizon temporarily feels sick; nova-compute goes out permanently. If we restart nova-conductor on ALL alive controllers _AND_ nova-compute, nova-compute returns from darkness.

If we have non-synchronized clocks, we can see this failure:

nova-scheduler node-2.domain.tld internal enabled :-) 2014-04-30 17:46:51
nova-compute node-4.domain.tld nova enabled XXX 2014-04-30 17:44:13
nova-cert node-3.domain.tld internal enabled :-) 2014-04-30 17:46:51

It is not really a failure: the timestamp of nova-compute changed and is interpreted by nova-manage as a failure because time is not synced.

@bogdando via https:/

Summary: Fuel should provide TCP KA (keepalives) for RabbitMQ sessions in HA mode. These TCP keepalives should be visible at the app layer as well as at the network stack layer.

related Oslo.messaging issue: https:/
related fuel-dev ML: https:/

Issues we have in Fuel:
1) In 5.0 we upgraded RabbitMQ to 3.x and moved its connection management out of the HAproxy scope for most of the OpenStack services (those that have synced rabbit_hosts support from Oslo.messaging).
( Was also backported for 4.1.1.) Hence, we still have to provide TCP keepalives for RabbitMQ sessions in order to make the Fuel HA architecture more reliable.

2) Anyway, HAproxy provides TCP keepalive only at the network layer; see the docs: "It is important to understand that keep-alive packets are neither emitted nor received at the application level. It is only the network stacks which sees them. For this reason, even if one side of the proxy already uses keep-alives to maintain its connection alive, those keep-alive packets will not be forwarded to the other side of the proxy."

3) We have it configured in the wrong way; see the HAproxy docs: "Using option "tcpka" enables the emission of TCP keep-alive probes on both the client and server sides of a connection. Note that this is meaningful only in "defaults" or "listen" sections.."

Suggested solution: apply all patches from #856764 for Nova in MOS packages and test the RabbitMQ connections thoroughly. If it looks OK, sync the patches for the other MOS packages.

Perhaps this issue should be fixed in 5.1, but backporting should be considered Critical for the 4.1.1 release (due to the increasing number of existing tickets in Zendesk) and High for 5.1. I hope the 5.0 backport is not needed, since the option to roll an upgrade 5.0 -> 5.1 should exist.

The support for Rabbit heartbeat was reverted: https:/
The kombu reconnect changes are here: https:/

As far as I can see from the related Oslo bug and the aforementioned comments (https:/
1) TCP KA for Rabbit cluster (comments/19)
2) address use running_
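For readers unfamiliar with what enabling TCP keepalive looks like in practice, here is a generic Python sketch (not Fuel or oslo.messaging code; the tuning values are arbitrary examples) of turning on and tuning kernel keepalive probes on a single socket:

```python
import socket

# Create a TCP socket and enable kernel keepalive probes on it.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# On Linux, the probe timing can be tuned per socket instead of relying
# on the system-wide defaults (7200s idle, 75s interval, 9 probes):
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle secs before probing
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # secs between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before drop

print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # non-zero when enabled
```

With the system defaults, a peer that vanishes without a FIN/RST is only detected after roughly two hours of idleness, which matches the zombie-connection behaviour described in this bug; per-socket tuning like the above brings detection down to a couple of minutes.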
Can anybody try to reproduce this case, when openstack services will connect to rabbitmq through haproxy ? Sergey, MySQL and RabbitMQ clusters are not comparable, if we are talking about localhost connections. It is normal for RabbitMQ with mirrored queues to get or publish messages via localhost, cuz it would auto-resend all messages to the main rabbit. But MySQL could use some benefits from localhost connections only for read only slaves as well as for configured multi muster writes, perhaps (but Fuel uses neither of them) from TCP/IP implementation in the linux kernel has no differences between mysql and rabbitmq connections. Basically, I was meaning that using the direct localhost & other rabit nodes connections in HA are not a problem for RabbitMQ and perhaps, unlikely to the MySQL+VIP case, doesn't require to be fixed. Keepalives for application level could handle app layer just fine, hence we don't have to go through the layers below. Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: master commit 549173c10173708. Change-Id: Ic9d491f4904a5e Closes-Bug: #1289200 Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: stable/4.1 commit 9a0e985b3a7936f. poke ci Change-Id: Ic9d491f4904a5e Closes-Bug: #1289200 Signed-off-by: Bogdan Dobrelya <email address hidden> Below is a Release Notes friendly description of the current state of this bug with all fixes merged to master so far and 3 additional fixes: - https:/ - https:/ - https:/ Controller failover may cause Nova to fail to start VM instances 2014-05-20 ------- If one of the Controller nodes abruptly goes offline, it is possible that some of the TCP connections from Nova services on Compute nodes to RabbitMQ on the failed Controller node will not be immediately terminated. When that happens, RPC communication between the nova-compute service and Nova services on Controller nodes stops, and Nova becomes unable to manage VM instances on the affected Compute nodes. 
Instances that were previously launched on these nodes continue running but cannot be stopped or modified, and new instances scheduled to the affected nodes will fail to launch. After 2 hours (sooner if the failed Controller node is brought back online), zombie TCP connections are terminated; after that, Nova services on affected Compute nodes reconnect to RabbitMQ on one of the operational Controller nodes and RPC communication is restored. Manual restart of the nova-compute service on affected nodes also results in immediate recovery. See `LP1289200 <https:/

Dmitry, please note, I was able to reproduce this (#21) with the default TCP keepalive sysctls in the host OS (7200, 75, 9), but it wasn't reproducible anymore with https:/

And this is how I was testing it:

0) Given
- node-1 192.168.0.3, primary controller hosting the VIP 192.168.0.2
- node-5 192.168.0.7, compute node under test (it should be connected via AMQP to the given node-1 as well)
- the rabbitmq accept rule for the controller has number #16
- spawned some instances at the compute under test

*CASE A.*
1) Add iptables block rules from node-5 to node-1:5673 (take care of conntrack as well!)
[root@node-1 ~]# iptables -I INPUT 16 -s 192.168.0.7/32 -p tcp --dport 5673 -j DROP
[root@node-1 ~]# iptables -I FORWARD 1 -s 192.168.0.7/32 -p tcp ! --syn -m state --state NEW -j DROP
2.a) Send a task to spawn some instances and delete some others at the compute under test
2.b) Watch for accumulated messages in the queue for the compute under test
[root@node-1 ~]# rabbitmqctl list_queues name messages | grep compute | egrep -v "0|^$"
3.a) Wait for AMQP reconnections from the compute under test (i.e. just grep its logs for "Reconnecting to AMQP")
3.b) Watch for established connections continuously:
lsof: [root@node-5 ~]# lsof -i -n -P -itcp -iudp | egrep 'nova-comp'
ss: [root@node-5 ~]# ss -untap | grep -E '5673\s+u.*compute'
4) Remove the iptables drop rules (teardown)

The expected result was:
a) Reconnection attempts from the compute under test within ~2 min after the traffic was blocked at the controller side with iptables
b) Hung requests for instances (non-consumed messages in queues) should stay alive after the compute under test has successfully reconnected to a new AMQP node.

*CASE B.* The same as CASE A, but instead of iptables rules, issue a kill -9 on the beam pid on node-1 which has an open connection with the compute under test, e.g. (for the given data in section 0):
[root@node-1 ~]# ss -untap | grep '\.2:5673.*\.6'
tcp ESTAB 0 0 ::ffff: ...
[root@node-1 ~]# kill -9 28198

So, Dmitry, could you please elaborate:
1) whether your test cases or results were different
2) what your expected results were - a) reconnection attempts from the compute, and/or b) absence of rmq-related operations on the dead sockets in strace outputs, c) some other rmq-traffic-related criteria as well?

Forgot to add, I'm running the 'stock' amqp==1.0.12 amqplib==1.0.0 kombu==2.5.12 provided with the current Fuel ISO.

Sorry, I have lost track of the history... https:/

Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master

commit b41ceb676b4441a
Author: Bogdan Dobrelya <email address hidden>
Date: Tue May 13 13:30:03 2014 +0300

Use TCPKA for Rabbit cluster

See https:/

poke ci

Related-Bug: 1289200
Change-Id: I39684e0a57d054
Signed-off-by: Bogdan Dobrelya <email address hidden>

A small update for #22:

[root@node-1 ~]# iptables -I FORWARD 1 -s 192.168.0.7/32 -p tcp ! --syn -m state --state NEW -j DROP

looks like a wrong one and won't block established/related sessions in conntrack as well

# iptables -I INPUT 3 -s 192.168.0.7/32 -p tcp --dport 5673 -m state --state ESTABLISHED,RELATED -j DROP

- here is the right one (or you could use 'conntrack -F' if you want).

Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: stable/4.1

commit 2bd2d3b87a06ad8
Author: Bogdan Dobrelya <email address hidden>
Date: Tue May 13 13:30:03 2014 +0300

Use TCPKA for Rabbit cluster

See https:/

Related-Bug: 1289200
Change-Id: I39684e0a57d054
Signed-off-by: Bogdan Dobrelya <email address hidden>

https:/

Issue was reproduced on release iso {"build_id": "2014-05-

Steps:
1. Create next cluster - Ubuntu, HA, KVM, Flat nova network, 3 controllers, 1 compute, 1 cinder
2. Deploy cluster
3. Destroy primary controller
4. Wait some time and run OSTF tests

Actual result: all tests failed with 'Keystone client is not available'. It seems there are some problems with rabbitmq:

2014-05- :00 err: ERROR: Error during FlatDHCPManager 0cd2e

Logs are attached. The primary controller was shut down at: 2014-05-23 12:23:39.228+0000: shutting down

Please note, there is another heartbeat WIP patch https:/ (they plan to get rid of the direct eventlet usage in order to improve it, but there are +1s from the OpenStack operators, though).

This issue could also be related to https://bugs.launchpad.net/fuel/+bug/1288831
https://bugs.launchpad.net/fuel/+bug/1289200
#include <uniquemucroom.h> This class implements a unique MUC room. A unique MUC room is a room with a non-human-readable name. It is primarily intended to be used when converting one-to-one chats to multi-user chats. XEP version: 1.21 Definition at line 33 of file uniquemucroom.h. Creates a new abstraction of a unique Multi-User Chat room. The room is not joined automatically. Use join() to join the room, use leave() to leave it. See MUCRoom for detailed info. Definition at line 23 of file uniquemucroom.cpp. Virtual Destructor. Definition at line 28 of file uniquemucroom.cpp. Join this room. Reimplemented from MUCRoom. Definition at line 33 of file uniquemucroom.cpp.
https://camaya.net/api/gloox-0.9.9.12/classgloox_1_1UniqueMUCRoom.html
Generic instances missing for Int32, Word64 etc. Some base types have Generic instances, like Integer, but the machine-specific ones like Int32 have none. These would be most useful when using generic Binary serialization. I think it makes sense to add Generic instances for *all* primitive types in base. I can define the instance myself, see import GHC.Generics (Generic(..)) import qualified GHC.Generics as Gen data D_Int32 data C_Int32 instance Gen.Datatype D_Int32 where datatypeName _ = "Int32" moduleName _ = "GHC.Int32" -- packageName _ = "base" instance Gen.Constructor C_Int32 where conName _ = "" -- JPM: I'm not sure this is the right implementation... instance Generic Int32 where type Rep Int32 = Gen.D1 D_Int32 (Gen.C1 C_Int32 (Gen.S1 Gen.NoSelector (Gen.Rec0 Int32))) from x = Gen.M1 (Gen.M1 (Gen.M1 (Gen.K1 x))) to (Gen.M1 (Gen.M1 (Gen.M1 (Gen.K1 x)))) = x However, as GHC.Generics is a moving target and changes across compiler versions, I'd rather not have to.
https://gitlab.haskell.org/ghc/ghc/issues/10512
Grabber for DepthSense devices. #include <pcl/io/depth_sense_grabber.h> Grabber for DepthSense devices (e.g. Creative Senz3D, SoftKinetic DS325). Requires the SoftKinetic DepthSense SDK. Definition at line 56 of file depth_sense_grabber.h. Definition at line 63 of file depth_sense_grabber.h. Definition at line 67 of file depth_sense_grabber.h. Definition at line 69 of file depth_sense_grabber.h. Definition at line 74 of file depth_sense_grabber.h. Create a grabber for a DepthSense device. The grabber "captures" the device, making it impossible for other grabbers to interact with it. The device is "released" when the grabber is destructed. This will throw pcl::IOException if there are no free devices that match the supplied device_id. Disable temporal filtering. Enable temporal filtering of the depth data received from the device. The window size parameter is not relevant for the DepthSense_None filtering type. Get the serial number of the device captured by the grabber. Returns the FPS; 0 if trigger-based. Implements pcl::Grabber. Returns the name of the concrete subclass. Implements pcl::Grabber. Definition at line 109 of file depth_sense_grabber.h. Indicates whether the grabber is streaming or not. This value is not defined for triggered devices. Implements pcl::Grabber. Set the confidence threshold for depth data. Each pixel in a depth image output by the device has an associated confidence value. The higher this value is, the more reliable the datum is. The depth pixels (and their associated 3D points) are filtered based on the confidence value; those below the threshold are discarded (i.e. their coordinates are set to NaN). Definition at line 147 of file depth_sense_grabber.h.
http://docs.pointclouds.org/trunk/classpcl_1_1_depth_sense_grabber.html
Comparing Closures in Java, Groovy and Scala. Why those languages? Because they are the three JVM languages I'm most interested in. I suppose I could also have compared the closure support in Jython, JRuby or... well, there are a few to choose from, but this blog is going to be plenty long enough with just three. Let's start with the Java example that was given, remembering that this is a proposed syntax that may or may not make it into Java 7 or later. As I understood the example it was this: imagine you want to add the ability to time a block of code, and you wanted to do it in a way that would look almost like a new keyword has been added to the language; and you wanted to pass in a parameter to name what you were timing; and the block you're timing returns a result, or might throw an exception. So, quite an involved case. BGGA Proposed Syntax Here's how the current Java proposal looks: // Here's a method that uses the time call: int f() throws MyException { time("opName", {=> // some statements that can throw MyException }); time("opName", {=> return ...compute result...; }); } So we're timing a couple of operations, and we're doing this inside a method, f, that returns an integer. The implementation would be: interface Block<R, throws X> { R execute() throws X; } public <R, throws X> R time( String opName, Block<R, X> block) throws X { long startTime = System.nanoTime(); boolean success = true; try { return block.execute(); } catch (final Throwable ex) { success = false; throw ex; } finally { recordTiming( "opName", System.nanoTime() - startTime, success); } } As you can see, time takes an arbitrary text label and a block of code, runs the block and tells you how long the block took to run and whether it succeeded. That's the example that was given at JavaOne. Now for the same thing in Groovy...
Same example in Groovy To make runnable code for Groovy (and for Scala), I had to decide on something to time. So I'm timing a block of code that randomly throws an exception or returns something. And then timing a block of code that just returns a number. In Groovy that would be: def time(opname, block) { long start_time = System.nanoTime() boolean success = true try { return block() } catch (Throwable ex) { success = false; throw ex } finally { diff = System.nanoTime() - start_time println "$opname $diff $success" } } int f() throws Exception { time("a") { Random r = new Random() if (r.nextInt(100) > 50) throw new IOException("Boom") else return 42 } time("b") { return 7 } } println f() An example of running the code, showing one run where no exception was thrown, and operations "a" and "b" were both successful and took a certain amount of time: $ groovy time.groovy a 37116000 true b 69000 true 7 ...and an example where an exception was thrown when timing operation "a": $ groovy time.groovy a 39998000 false Caught: java.io.IOException: Boom at time$_f_closure1.doCall(time.groovy:21) at time$_f_closure1.doCall(time.groovy) at time.time(time.groovy:6) at time.f(time.groovy:18) at time.run(time.groovy:31) at time.main(time.groovy) Note that the Java example is typed in that it uses a generic type, R, for the return value which gives you some compile-time checks. That is, when you run time() and use the result, the compiler will enforce that your declaration of the result has the same type as the return type of the block you're timing. Although Groovy does support generics, I've not used them here, and as a result the Groovy example doesn't have that type-safety. I think that's the way one would typically write Groovy code.
Same example in Scala Now a look at the same code in Scala: def time[R](opname: String)(block: => R) = { val start_time = System.nanoTime() var success = true try { block } catch { case ex: Throwable => { success = false; throw ex } } finally { val diff = System.nanoTime() - start_time println(opname + " " + diff + " "+success) } } def f():Integer = { val answer = time("a") { val r = new Random() if (r.nextInt(100) > 50) throw new java.io.IOException("Boom") else "42" } println ("the answer, "+answer+" is of type "+ answer.getClass()) val seven:Integer = time("b") { 7 } println ("seven is of type "+seven.getClass()) return seven } f() I'm still not using Scala day-to-day, so this might be a little awkward: thank you to the London Scala User Group for helping me clean up my syntax, but all the mistakes are mine. This code has the same properties as the Java code (type safety via the R generic type), but seems a little shorter and neater. Additionally, the thing I like about the Scala code (and the Groovy code) is that the languages return the value of the last statement in a block, and that the syntax allows a clean time("thing") { ... } format. One observation: I've used the Integer class, which is deprecated, in order to be able to print out the class of the return types in the function f(). Without the :Integer declaration I was getting weird compile errors. As I said, my understanding of Scala and type inference isn't there yet. The output from running the code: a 913000 true the answer, 42 is of type class java.lang.String b 13000 true seven is of type class java.lang.Integer ...and an example where the method threw an exception: a 936000 false java.io.IOException: Boom at Main$$anonfun$1.apply((virtual file):26) at Main$$anonfun$1.apply((virtual file):23) at Main$.time$1((virtual file):8) at Main$.f$1((virtual file):23) at Main$.main((virtual file):44) at Main.main((virtual file)) // rest of stack trace removed Conclusions There's no conclusions here. 
It's just an exercise in comparing closure code in three different languages. I've probably missed some of the nuances of the Java example, but hey... it's a starting point. Opinions expressed by DZone contributors are their own.
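As a historical footnote: the BGGA proposal was never adopted, but Java 8 lambdas now cover much of this use case. Here is a hedged sketch of my own (class and method names are mine, not from the article). One honest caveat: java.util.function.Supplier cannot declare checked exceptions, so this loses the `throws X` genericity of the BGGA Block interface.

```java
import java.util.function.Supplier;

public class TimeBlock {
    // Generic over the block's return type R, like the BGGA example;
    // unlike BGGA's Block<R, throws X>, a Supplier cannot throw checked exceptions.
    static <R> R time(String opName, Supplier<R> block) {
        long startTime = System.nanoTime();
        boolean success = true;
        try {
            return block.get();
        } catch (RuntimeException ex) {
            success = false;
            throw ex;
        } finally {
            System.out.println(opName + " " + (System.nanoTime() - startTime) + " " + success);
        }
    }

    public static void main(String[] args) {
        int seven = time("b", () -> 7);
        System.out.println(seven);
    }
}
```

The call site `time("b", () -> 7)` reads much like the Groovy and Scala versions in the article.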
https://dzone.com/articles/comparing-closures-java-groovy
The following list shows the function calls of the MySQL C API. Chapter 19 lists each of these methods with detailed prototype information, return values, and descriptions. You may notice that many of the function names do not seem directly related to accessing database data. In many cases, MySQL actually only provides an API interface into database administration functions. By reading the function names, you might have gathered that any database application you write might look something like this: Example 12-1 shows a simple select statement that retrieves data from a MySQL database using the MySQL C API. #include <stdio.h> #include <mysql.h> int main(char ...
https://www.oreilly.com/library/view/managing-using/0596002114/ch12s01.html
Introduction Humans do not reboot their understanding of language each time they hear a sentence. Given an article, we grasp the context based on our previous understanding of those words. One of the defining characteristics we possess is our memory (or retention power). Can an algorithm replicate this? The first technique that comes to mind is a neural network (NN). But traditional NNs unfortunately cannot do this. Take an example of wanting to predict what comes next in a video. A traditional neural network will struggle to generate accurate results. That's where the concept of recurrent neural networks (RNNs) comes into play. RNNs have become extremely popular in the deep learning space which makes learning them even more imperative. A few real-world applications of RNN include: - Speech recognition - Machine translation - Music composition - Handwriting recognition - Grammar learning In this article, we'll first quickly go through the core components of a typical RNN model. Then we'll set up the problem statement which we will finally solve by implementing an RNN model from scratch in Python. We can always leverage high-level Python libraries to code an RNN. So why code it from scratch? I firmly believe the best way to learn and truly ingrain a concept is to learn it from the ground up. And that's what I'll showcase in this tutorial. This article assumes a basic understanding of recurrent neural networks. In case you need a quick refresher or are looking to learn the basics of RNN, I recommend going through the below articles first: Table of Contents - Flashback: A Recap of Recurrent Neural Network Concepts - Sequence Prediction using RNN - Building an RNN Model using Python Flashback: A Recap of Recurrent Neural Network Concepts Let's quickly recap the core concepts behind recurrent neural networks. We'll do this using an example of sequence data, say the stocks of a particular firm.
A simple machine learning model, or an Artificial Neural Network, may learn to predict the stock price based on a number of features, such as the volume of the stock, the opening value, etc. Apart from these, the price also depends on how the stock fared in the previous days and weeks. For a trader, this historical data is actually a major deciding factor for making predictions. In conventional feed-forward neural networks, all test cases are considered to be independent. Can you see how that's a bad fit when predicting stock prices? The NN model would not consider the previous stock price values – not a great idea! There is another concept we can lean on when faced with time-sensitive data – Recurrent Neural Networks (RNN)! A typical RNN looks like this: This may seem intimidating at first. But once we unfold it, things start looking a lot simpler: It is now easier for us to visualize how these networks are considering the trend of stock prices. This helps us in predicting the prices for the day. Here, every prediction at time t (h_t) is dependent on all previous predictions and the information learned from them. Fairly straightforward, right? RNNs can solve our purpose of sequence handling to a great extent but not entirely. Text is another good example of sequence data. Being able to predict what word or phrase comes after a given text could be a very useful asset. We want our models to write Shakespearean sonnets! Now, RNNs are great when it comes to context that is short or small in nature. But in order to be able to build a story and remember it, our models should be able to understand the context behind the sequences, just like a human brain. Sequence Prediction using RNN In this article, we will work on a sequence prediction problem using RNN. One of the simplest tasks for this is sine wave prediction. The sequence contains a visible trend and is easy to solve using heuristics.
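To make the recurrence from the recap concrete before moving on: the unrolled network is just one update rule applied repeatedly, where each hidden state h_t mixes the current input with the previous hidden state. A minimal numpy sketch (dimensions and names here are illustrative, not the ones used later in this article):

```python
import numpy as np

def rnn_step(x_t, h_prev, U, W, b):
    """One recurrent step: the new hidden state mixes the current input
    (U @ x_t) with the previous hidden state (W @ h_prev)."""
    return np.tanh(U @ x_t + W @ h_prev + b)

rng = np.random.default_rng(0)
U = rng.standard_normal((4, 3))   # input-to-hidden weights (4 hidden units, 3 features)
W = rng.standard_normal((4, 4))   # hidden-to-hidden weights, shared across timesteps
b = np.zeros(4)

h = np.zeros(4)                          # h_0, the initial state
for x_t in rng.standard_normal((5, 3)):  # a toy sequence of 5 timesteps
    h = rnn_step(x_t, h, U, W, b)        # h_t carries information from all earlier steps
```

Because W is reused at every step, the final h depends on the whole sequence — exactly the property the stock example relies on.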
This is what a sine wave looks like: We will first devise a recurrent neural network from scratch to solve this problem. Our RNN model should also be able to generalize well so we can apply it to other sequence problems. We will formulate our problem like this – given a sequence of 50 numbers belonging to a sine wave, predict the 51st number in the series. Time to fire up your Jupyter notebook (or your IDE of choice)! Coding RNN using Python Step 0: Data Preparation Ah, the inevitable first step in any data science project – preparing the data before we do anything else. What does our network model expect the data to be like? It would accept a single sequence of length 50 as input. So the shape of the input data will be: (number_of_records x length_of_sequence x types_of_sequences) Here, types_of_sequences is 1, because we have only one type of sequence – the sine wave. On the other hand, the output would have only one value for each record. This will of course be the 51st value in the input sequence. So its shape would be: (number_of_records x types_of_sequences) #where types_of_sequences is 1 Let's dive into the code.
First, import the necessary libraries: %pylab inline import math To create sine-wave-like data, we will use the sine function from Python's math library: sin_wave = np.array([math.sin(x) for x in np.arange(200)]) Visualizing the sine wave we've just generated: plt.plot(sin_wave[:50]) X = [] Y = [] seq_len = 50 num_records = len(sin_wave) - seq_len for i in range(num_records - 50): X.append(sin_wave[i:i+seq_len]) Y.append(sin_wave[i+seq_len]) X = np.array(X) X = np.expand_dims(X, axis=2) Y = np.array(Y) Y = np.expand_dims(Y, axis=1) Print the shape of the data: X.shape, Y.shape ((100, 50, 1), (100, 1)) X_val = [] Y_val = [] for i in range(num_records - 50, num_records): X_val.append(sin_wave[i:i+seq_len]) Y_val.append(sin_wave[i+seq_len]) X_val = np.array(X_val) X_val = np.expand_dims(X_val, axis=2) Y_val = np.array(Y_val) Y_val = np.expand_dims(Y_val, axis=1) Step 1: Create the Architecture for our RNN model Our next task is defining all the necessary variables and functions we'll use in the RNN model. Our model will take in the input sequence, process it through a hidden layer of 100 units, and produce a single-valued output: learning_rate = 0.0001 nepoch = 25 T = 50 # length of sequence hidden_dim = 100 output_dim = 1 bptt_truncate = 5 min_clip_value = -10 max_clip_value = 10 We will then define the weights of the network: U = np.random.uniform(0, 1, (hidden_dim, T)) W = np.random.uniform(0, 1, (hidden_dim, hidden_dim)) V = np.random.uniform(0, 1, (output_dim, hidden_dim)) Here, - U is the weight matrix for weights between input and hidden layers - V is the weight matrix for weights between hidden and output layers - W is the weight matrix for shared weights in the RNN layer (hidden layer) Finally, we will define the activation function, sigmoid, to be used in the hidden layer: def sigmoid(x): return 1 / (1 + np.exp(-x)) Step 2: Train the Model Now that we have defined our model, we can finally move on with training it on our sequence data.
We can subdivide the training process into smaller steps, namely: Step 2.1 : Check the loss on training data Step 2.1.1 : Forward Pass Step 2.1.2 : Calculate Error Step 2.2 : Check the loss on validation data Step 2.2.1 : Forward Pass Step 2.2.2 : Calculate Error Step 2.3 : Start actual training Step 2.3.1 : Forward Pass Step 2.3.2 : Backpropagate Error Step 2.3.3 : Update weights We need to repeat these steps until convergence. If the model starts to overfit, stop! Or simply pre-define the number of epochs. Step 2.1: Check the loss on training data We will do a forward pass through our RNN model and calculate the squared error for the predictions for all records in order to get the loss value. for epoch in range(nepoch): # check loss on train loss = 0.0 # do a forward pass to get prediction for i in range(Y.shape[0]): x, y = X[i], Y[i] # get input, output values of each record prev_s = np.zeros((hidden_dim, 1)) # here, prev-s is the value of the previous activation of hidden layer; which is initialized as all zeroes for t in range(T): new_input = np.zeros(x.shape) # we then do a forward pass for every timestep in the sequence new_input[t] = x[t] # for this, we define a single input for that timestep mulu = np.dot(U, new_input) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) prev_s = s # calculate error loss_per_record = (y - mulv)**2 / 2 loss += loss_per_record loss = loss / float(y.shape[0]) Step 2.2: Check the loss on validation data We will do the same thing for calculating the loss on validation data (in the same loop): # check loss on val val_loss = 0.0 for i in range(Y_val.shape[0]): x, y = X_val[i], Y_val[i] prev_s = np.zeros((hidden_dim, 1)) for t in range(T): new_input = np.zeros(x.shape) new_input[t] = x[t] mulu = np.dot(U, new_input) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) prev_s = s loss_per_record = (y - mulv)**2 / 2 val_loss += loss_per_record val_loss = val_loss / 
float(y.shape[0]) print('Epoch: ', epoch + 1, ', Loss: ', loss, ', Val Loss: ', val_loss) You should get the below output: Epoch: 1 , Loss: [[101185.61756671]] , Val Loss: [[50591.0340148]] ... ... Step 2.3: Start actual training We will now start with the actual training of the network. In this, we will first do a forward pass to calculate the errors and a backward pass to calculate the gradients and update them. Let me show you these step-by-step so you can visualize how it works in your mind. Step 2.3.1: Forward Pass In the forward pass: - We first multiply the input with the weights between input and hidden layers - Add this with the multiplication of weights in the RNN layer. This is because we want to capture the knowledge of the previous timestep - Pass it through a sigmoid activation function - Multiply this with the weights between hidden and output layers - At the output layer, we have a linear activation of the values so we do not explicitly pass the value through an activation layer - Save the state at the current layer and also the state at the previous timestep in a dictionary Here is the code for doing a forward pass (note that it is in continuation of the above loop): # train model for i in range(Y.shape[0]): x, y = X[i], Y[i] layers = [] prev_s = np.zeros((hidden_dim, 1)) # forward pass for t in range(T): new_input = np.zeros(x.shape) new_input[t] = x[t] mulu = np.dot(U, new_input) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) layers.append({'s':s, 'prev_s':prev_s}) prev_s = s Step 2.3.2 : Backpropagate Error After the forward propagation step, we calculate the gradients at each layer, and backpropagate the errors. We will use truncated backpropagation through time (TBPTT), instead of vanilla backprop. It may sound complex but it's actually pretty straightforward. The core difference in BPTT versus backprop is that the backpropagation step is done for all the time steps in the RNN layer.
So if our sequence length is 50, we will backpropagate for all the timesteps previous to the current timestep. As you may have guessed, BPTT is very computationally expensive. So instead of backpropagating through all previous timesteps, we backpropagate only up to x timesteps back to save computational power. Consider this conceptually similar to stochastic gradient descent, where we include a batch of data points instead of all the data points. Here is the code for backpropagating the errors: # derivative of pred dmulv = (mulv - y) # gradient accumulators, initialized here so the += updates below are well-defined dU = np.zeros(U.shape) dV = np.zeros(V.shape) dW = np.zeros(W.shape) # backward pass for t in range(T): dU_t = np.zeros(U.shape) dW_t = np.zeros(W.shape) dV_t = np.dot(dmulv, np.transpose(layers[t]['s'])) dsv = np.dot(np.transpose(V), dmulv) ds = dsv dadd = add * (1 - add) * ds dmulw = dadd * np.ones_like(mulw) dprev_s = np.dot(np.transpose(W), dmulw) for i in range(t-1, max(-1, t-bptt_truncate-1), -1): ds = dsv + dprev_s dadd = add * (1 - add) * ds dmulw = dadd * np.ones_like(mulw) dmulu = dadd * np.ones_like(mulu) dW_i = np.dot(W, layers[t]['prev_s']) dprev_s = np.dot(np.transpose(W), dmulw) new_input = np.zeros(x.shape) new_input[t] = x[t] dU_i = np.dot(U, new_input) dx = np.dot(np.transpose(U), dmulu) dU_t += dU_i dW_t += dW_i dV += dV_t dU += dU_t dW += dW_t Step 2.3.3 : Update weights Lastly, we update the weights with the gradients calculated. One thing we have to keep in mind is that the gradients tend to explode if you don't keep them in check. This is a fundamental issue in training neural networks, called the exploding gradient problem. So we have to clamp them to a range so that they don't explode.
We can do it like this: if dU.max() > max_clip_value: dU[dU > max_clip_value] = max_clip_value if dV.max() > max_clip_value: dV[dV > max_clip_value] = max_clip_value if dW.max() > max_clip_value: dW[dW > max_clip_value] = max_clip_value if dU.min() < min_clip_value: dU[dU < min_clip_value] = min_clip_value if dV.min() < min_clip_value: dV[dV < min_clip_value] = min_clip_value if dW.min() < min_clip_value: dW[dW < min_clip_value] = min_clip_value # update U -= learning_rate * dU V -= learning_rate * dV W -= learning_rate * dW On training the above model, we get this output: Epoch: 1 , Loss: [[101185.61756671]] , Val Loss: [[50591.0340148]] Epoch: 2 , Loss: [[61205.46869629]] , Val Loss: [[30601.34535365]] Epoch: 3 , Loss: [[31225.3198258]] , Val Loss: [[15611.65669247]] Epoch: 4 , Loss: [[11245.17049551]] , Val Loss: [[5621.96780111]] Epoch: 5 , Loss: [[1264.5157739]] , Val Loss: [[632.02563908]] Epoch: 6 , Loss: [[20.15654115]] , Val Loss: [[10.05477285]] Epoch: 7 , Loss: [[17.13622839]] , Val Loss: [[8.55190426]] Epoch: 8 , Loss: [[17.38870495]] , Val Loss: [[8.68196484]] Epoch: 9 , Loss: [[17.181681]] , Val Loss: [[8.57837827]] Epoch: 10 , Loss: [[17.31275313]] , Val Loss: [[8.64199652]] Epoch: 11 , Loss: [[17.12960034]] , Val Loss: [[8.54768294]] Epoch: 12 , Loss: [[17.09020065]] , Val Loss: [[8.52993502]] Epoch: 13 , Loss: [[17.17370113]] , Val Loss: [[8.57517454]] Epoch: 14 , Loss: [[17.04906914]] , Val Loss: [[8.50658127]] Epoch: 15 , Loss: [[16.96420184]] , Val Loss: [[8.46794248]] Epoch: 16 , Loss: [[17.017519]] , Val Loss: [[8.49241316]] Epoch: 17 , Loss: [[16.94199493]] , Val Loss: [[8.45748739]] Epoch: 18 , Loss: [[16.99796892]] , Val Loss: [[8.48242177]] Epoch: 19 , Loss: [[17.24817035]] , Val Loss: [[8.6126231]] Epoch: 20 , Loss: [[17.00844599]] , Val Loss: [[8.48682234]] Epoch: 21 , Loss: [[17.03943262]] , Val Loss: [[8.50437328]] Epoch: 22 , Loss: [[17.01417255]] , Val Loss: [[8.49409597]] Epoch: 23 , Loss: [[17.20918888]] , Val Loss: [[8.5854792]]
Epoch: 24 , Loss: [[16.92068017]] , Val Loss: [[8.44794633]] Epoch: 25 , Loss: [[16.76856238]] , Val Loss: [[8.37295808]] Looking good! Time to get the predictions and plot them to get a visual sense of what we've designed. Step 3: Get predictions We will do a forward pass through the trained weights to get our predictions: preds = [] for i in range(Y.shape[0]): x, y = X[i], Y[i] prev_s = np.zeros((hidden_dim, 1)) # Forward pass for t in range(T): mulu = np.dot(U, x) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) prev_s = s preds.append(mulv) preds = np.array(preds) Plotting these predictions alongside the actual values: plt.plot(preds[:, 0, 0], 'g') plt.plot(Y[:, 0], 'r') plt.show() preds = [] for i in range(Y_val.shape[0]): x, y = X_val[i], Y_val[i] prev_s = np.zeros((hidden_dim, 1)) # For each time step... for t in range(T): mulu = np.dot(U, x) mulw = np.dot(W, prev_s) add = mulw + mulu s = sigmoid(add) mulv = np.dot(V, s) prev_s = s preds.append(mulv) preds = np.array(preds) plt.plot(preds[:, 0, 0], 'g') plt.plot(Y_val[:, 0], 'r') plt.show() from sklearn.metrics import mean_squared_error math.sqrt(mean_squared_error(Y_val[:, 0] * max_val, preds[:, 0, 0] * max_val)) 0.127191931509431 End Notes I cannot stress enough how useful RNNs are when working with sequence data. I implore you all to take this learning and apply it to a dataset. Take an NLP problem and see if you can find a solution for it. You can always reach out to me in the comments section below if you have any questions. In this article, we learned how to create a recurrent neural network model from scratch by using just the numpy library. You can of course use a high-level library like Keras or Caffe but it is essential to know the concept you're implementing. Do share your thoughts, questions and feedback regarding this article below. Happy learning! 3 Comments Great article!
How would the code need to be modified if more than one time series are used to make a prediction? For example: – predict next day temperature using the last 50-day temperature and last 50-day humidity level; or, – predict next day temperature and next day humidity level using the last 50-day temperature and last 50-day humidity level Thank you, Guy Aubin Hi, As the data for both the tasks, i.e. predicting temperature and predicting humidity, will be different, I would suggest to build two different models, one for temperature and one for humidity. @pulkit_sharma. Thank you Pulkit. But still, future temperature can be correlated to both past temperature and past humidity level, so incorporating both variables as inputs will improve the output; how would you incorporate two variables in the model. Also, is it possible to have a single model make the prediction for both the temperature and humidity since these variables are correlated?
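On the multi-feature question above: with the array layout used in this article, a second input series is commonly handled as a second channel, i.e. shape (number_of_records x length_of_sequence x 2) instead of ... x 1. A hedged sketch with made-up stand-ins for temperature and humidity (the weight matrix U would then also need a matching input dimension):

```python
import numpy as np

seq_len = 50
# Stand-in series; real temperature/humidity readings would go here.
temperature = np.sin(np.arange(200) / 10.0)
humidity = np.cos(np.arange(200) / 10.0)

X = []
for i in range(len(temperature) - seq_len):
    # Stack the two 50-step windows as two channels of one record.
    window = np.stack([temperature[i:i + seq_len], humidity[i:i + seq_len]], axis=-1)
    X.append(window)
X = np.array(X)
print(X.shape)  # (150, 50, 2)
```

Predicting both targets jointly would then mean making the output two-dimensional as well (output_dim = 2), though, as the reply notes, two separate models are also a reasonable choice.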
https://www.analyticsvidhya.com/blog/2019/01/fundamentals-deep-learning-recurrent-neural-networks-scratch-python/
In this example we are going to create a function which will count the number of occurrences of each character and return it as a list of tuples in order of appearance. For example, ordered_count("abracadabra") == [('a', 5), ('b', 2), ('r', 2), ('c', 1), ('d', 1)] The above is a 7 kyu question on CodeWars; this is the only question I could solve today after the first two failed attempts. I am supposed to start the Blender project today, but because I want to write a post for you people I have spent nearly an hour and a half working on those three Python questions on CodeWars. I hope you will really appreciate my effort and will share this post to help this website grow. def ordered_count(input): already = [] input_list = list(input) return_list = [] for word in input_list: if(word not in already): return_list.append((word, input_list.count(word))) already.append(word) return return_list The solution above is short and solid, hope you like it.
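As a side note, the loop above calls input_list.count inside the loop, which makes it quadratic in the length of the string. The same result can be computed in a single pass with collections.Counter, which (like all dicts since Python 3.7) preserves first-insertion order:

```python
from collections import Counter

def ordered_count(text):
    """Count each character, returning (char, count) tuples in order of first appearance."""
    return list(Counter(text).items())

print(ordered_count("abracadabra"))  # [('a', 5), ('b', 2), ('r', 2), ('c', 1), ('d', 1)]
```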
https://www.cebuscripts.com/2019/04/15/codingdirectional-count-the-number-of-occurrences-of-each-character-and-return-it-as-a-list-of-tuples-in-order-of-appearance/
I'm changing my database server provider from MySQL to SQL Server Express, so I'm changing all my .NET Framework 4.6.1 class libraries that I previously used for the MySQL database connection. Of course I re-created the migration, installed Microsoft.EntityFrameworkCore.SqlServer instead of the MySQL provider and set up the DbContext. I use these libraries as game server plugins, so I installed them there. Sadly it fails to load and throws this error: System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host. I tried running the same DbContext as a .NET Framework 4.6.1 console application and it worked. I also changed everything for my other class library which was using MySQL, but it also failed to load. So I believe this error happens only in a class library. My database context ( SQLKits is the name of my game plugin) public class SQLKitsContext : DbContext { public virtual DbSet<Kit> Kits { get; set; } public virtual DbSet<KitItem> KitItems { get; set; } protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) { optionsBuilder.UseSqlServer(@"Server=localhost\sqlexpress;Database=unturned;Trusted_Connection=True;"); } } and this is an image of my instance class where I define the DbContext: I also tried to use it without the migration, but it failed as well. And in SQL Server Profiler there's nothing about the connection attempt, so the plugin doesn't even connect to it. I've been debugging this issue for a long time and I really couldn't solve it myself, nor could anybody help me, so that's why I'm asking for a solution here. ~Thank you Full Error: System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host.
```
at Microsoft.EntityFrameworkCore.ExecutionStrategyExtensions.Execute[TState,TResult] (Microsoft.EntityFrameworkCore.Storage.IExecutionStrategy strategy, System.Func`2[T,TResult] operation, System.Func`2[T,TResult] verifySucceeded, TState state) [0x0001f] in :0
at Microsoft.EntityFrameworkCore.ExecutionStrategyExtensions.Execute[TState,TResult] (Microsoft.EntityFrameworkCore.Storage.IExecutionStrategy strategy, TState state, System.Func`2[T,TResult] operation) [0x00000] in :0
at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerDatabaseCreator.Exists (System.Boolean retryOnNotExists) [0x00034] in <253b972c331e4e9c86e8f6b8430dc9d0>:0
at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerDatabaseCreator.Exists () [0x00000] in <253b972c331e4e9c86e8f6b8430dc9d0>:0
at Microsoft.EntityFrameworkCore.Migrations.HistoryRepository.Exists () [0x0000b] in <69f795dffc844780bfcfff4ff8415a92>:0
at Microsoft.EntityFrameworkCore.Migrations.Internal.Migrator.Migrate (System.String targetMigration) [0x00012] in <69f795dffc844780bfcfff4ff8415a92>:0
at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.Migrate (Microsoft.EntityFrameworkCore.Infrastructure.DatabaseFacade databaseFacade) [0x00010] in <69f795dffc844780bfcfff4ff8415a92>:0
at RestoreMonarchy.SQLKits.SQLKitsPlugin+d__13.MoveNext () [0x00033] in <4589f5df884b435cab61ef028aadd6e8>:0
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in :0
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x0003e] in :0
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x00028] in :0
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x00008] in :0
at System.Runtime.CompilerServices.TaskAwaiter.GetResult () [0x00000] in :0
at Rocket.Core.Plugins.Plugin+d__32.MoveNext () [0x002c4] in <4bcff08a1274468caf2867ee950c3ee7>:0
```

Answer:

Change your connection string to

```csharp
optionsBuilder.UseSqlServer(@"Server=.\sqlexpress;Database=unturned;Trusted_Connection=True;");
```
https://entityframeworkcore.com/knowledge-base/55766322/how-to-fix--an-existing-connection-was-forcibly-closed-by-the-remote-host--error-in-ef-core
CC-MAIN-2021-10
en
refinedweb
Return a piece of a point cloud.

#include <vtkExtractPointCloudPiece.h>

This filter takes the output of a vtkHierarchicalBinningFilter and allows the pipeline to stream it. Pieces are determined from an offset integral array associated with the field data of the input.

Definition at line 33 of file vtkExtractPointCloudPiece.h.
Definition at line 41 of file vtkExtractPointCloudPiece.h.

Standard methods for instantiation, printing, and type.

Turn on or off modulo sampling of the points. By default this is on, and the points in a given piece will be reordered in an attempt to reduce spatial coherency.

This is called by the superclass. This is the method you should override. Reimplemented from vtkPolyDataAlgorithm.

This is called by the superclass. This is the method you should override. Reimplemented from vtkPolyDataAlgorithm.

Definition at line 63 of file vtkExtractPointCloudPiece.h.
https://vtk.org/doc/nightly/html/classvtkExtractPointCloudPiece.html
If you're working with the Jira REST API on a daily basis, then this article is for you. As you may have realised, you can do a lot with it by scripting certain functions, and taking the extra step to automate those features can be challenging at times, but fulfilling that aim makes it worthwhile. There are many programming languages that can be used to implement a solution, but choosing one with lots of support helps you get the answers you seek.

I'm here to share how I update certain Jira fields using Python, with a package I wrote called jiraone. The package provides a Field class which can update Jira's custom or system fields on one or more issues simply by specifying the field name alone. In the event the field cannot be found, it returns an error that the field value cannot be None.

Examples of field types that can be queried:

Adding values to a multiselect field

```python
from jiraone import field, echo, LOGIN

user = "email"
password = "token"
link = ""
LOGIN(user=user, password=password, url=link)

issue = "T6-75"
fields = "Multiple files"  # a multiselect custom field
case_value = ["COM Row 1", "Thanos"]
for value in case_value:
    c = field.update_field_data(data=value, find_field=fields,
                                key_or_id=issue, options="add", show=False)
    echo(c)

# output
# <Response [204]>
```

With the above, you can easily add or remove multiple values from a multiselect field on an issue or on any issue. You can look into the documentation here to get more details on its usage.
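For context, calls like the one above go through Jira's REST API. The standard request body for replacing a multiselect custom field's values is a JSON object keyed by the field id, with one `{"value": ...}` entry per option. The sketch below only builds that payload; the field id `customfield_10050` is a made-up example (real ids come from Jira's field listing), and this illustrates the generic Jira REST format, not jiraone's internals:

```python
import json

def multiselect_payload(field_id, values):
    """Build the standard Jira REST body for setting a
    multiselect custom field's values on an issue."""
    return {"fields": {field_id: [{"value": v} for v in values]}}

# Hypothetical field id, for illustration only.
body = multiselect_payload("customfield_10050", ["COM Row 1", "Thanos"])
print(json.dumps(body))
```

A library like jiraone saves you from assembling and sending bodies like this by hand.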
Posting values to a cascading select field

```python
from jiraone import field, echo, LOGIN

user = "email"
password = "token"
link = ""
LOGIN(user=user, password=password, url=link)

issue = "T6-75"
fields = "Test cascading"  # a cascading select custom field
value = ["Servers", "Redhat"]  # a list of first level & second level
c = field.update_field_data(data=value, find_field=fields,
                            key_or_id=issue, show=False)
echo(c)

# output
# <Response [204]>
```

You can specify the first level and second level of the cascading field. Look into the documentation here for more details.

Prince Nyeche, Community Leader, Technology Consultant
https://community.atlassian.com/t5/Jira-articles/Editing-Jira-fields-via-API/ba-p/1612063
This notebook presents code and exercises from Think Bayes, second edition. MIT License:

```python
from __future__ import print_function, division

%matplotlib inline
import warnings
warnings.filterwarnings('ignore')

import math
import numpy as np

from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot


def EvalWeibullPdf(x, lam, k):
    """Computes the Weibull PDF.

    x: value
    lam: parameter lambda in events per unit time
    k: parameter

    returns: float probability density
    """
    arg = (x / lam)
    return k / lam * arg**(k-1) * np.exp(-arg**k)

def EvalWeibullCdf(x, lam, k):
    """Evaluates CDF of the Weibull distribution."""
    arg = (x / lam)
    return 1 - np.exp(-arg**k)

def MakeWeibullPmf(lam, k, high, n=200):
    """Makes a PMF discrete approx to a Weibull distribution.

    lam: parameter lambda in events per unit time
    k: parameter
    high: upper bound
    n: number of values in the Pmf

    returns: normalized Pmf
    """
    xs = np.linspace(0, high, n)
    ps = EvalWeibullPdf(xs, lam, k)
    return Pmf(dict(zip(xs, ps)))
```

SciPy also provides functions to evaluate the Weibull distribution, which I'll use to check my implementation.

```python
from scipy.stats import weibull_min

lam = 2
k = 1.5
x = 0.5

weibull_min.pdf(x, k, scale=lam)   # 0.33093633846922332
EvalWeibullPdf(x, lam, k)          # 0.33093633846922332
weibull_min.cdf(x, k, scale=lam)   # 0.1175030974154046
EvalWeibullCdf(x, lam, k)          # 0.11750309741540454
```

And here's what the PDF looks like, for these parameters.

```python
pmf = MakeWeibullPmf(lam, k, high=10)
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Lifetime', ylabel='PMF')
```

We can use np.random.weibull to generate random values from a Weibull distribution with given parameters. To check that it is correct, I generate a large sample and compare its CDF to the analytic CDF.
```python
def SampleWeibull(lam, k, n=1):
    return np.random.weibull(k, size=n) * lam

data = SampleWeibull(lam, k, 10000)
cdf = Cdf(data)
model = pmf.MakeCdf()
thinkplot.Cdfs([cdf, model])
```

Exercise: Write a class called LightBulb that inherits from Suite and Joint and provides a Likelihood function that takes an observed lifespan as data and a tuple, (lam, k), as a hypothesis. It should return a likelihood proportional to the probability of the observed lifespan in a Weibull distribution with the given parameters.

Test your method by creating a LightBulb object with an appropriate prior and update it with a random sample from a Weibull distribution. Plot the posterior distributions of lam and k. As the sample size increases, does the posterior distribution converge on the values of lam and k used to generate the sample?

```python
# Solution

class LightBulb(Suite, Joint):

    def Likelihood(self, data, hypo):
        lam, k = hypo
        if lam == 0:
            return 0
        x = data
        like = EvalWeibullPdf(x, lam, k)
        return like
```

```python
# Solution

from itertools import product

lams = np.linspace(0, 5, 101)
ks = np.linspace(0, 5, 101)
suite = LightBulb(product(lams, ks))
```

```python
# Solution

lam = 2
k = 1.5
datum = SampleWeibull(lam, k, 10)
suite.UpdateSet(datum)   # 5.5677896093212706e-09
```

```python
# Solution

pmf_lam = suite.Marginal(0)
thinkplot.Pdf(pmf_lam)
pmf_lam.Mean()   # 2.3583976257450225
```

```python
# Solution

pmf_k = suite.Marginal(1)
thinkplot.Pdf(pmf_k)
pmf_k.Mean()   # 1.2969100506973927
```

```python
# Solution

thinkplot.Contour(suite)
```

Exercise: Now suppose that instead of observing a lifespan, you observe a lightbulb that has operated for 1 year and is still working. Write another version of LightBulb that takes data in this form and performs an update.
```python
# Solution

class LightBulb2(Suite, Joint):

    def Likelihood(self, data, hypo):
        lam, k = hypo
        if lam == 0:
            return 0
        x = data
        like = 1 - EvalWeibullCdf(x, lam, k)
        return like
```

```python
# Solution

from itertools import product

lams = np.linspace(0, 10, 101)
ks = np.linspace(0, 10, 101)
suite = LightBulb2(product(lams, ks))
```

```python
# Solution

suite.Update(1)   # 0.83566549505291599
```

```python
# Solution

pmf_lam = suite.Marginal(0)
thinkplot.Pdf(pmf_lam)
pmf_lam.Mean()   # 5.6166427208116056
```

```python
# Solution

pmf_k = suite.Marginal(1)
thinkplot.Pdf(pmf_k)
pmf_k.Mean()   # 5.2481865102083747
```

Exercise: Now let's put it all together. Suppose you have 15 lightbulbs installed at different times over a 10 year period. When you observe them, some have died and some are still working. Write a version of LightBulb that takes data in the form of a (flag, x) tuple, where:

- If flag is eq, it means that x is the actual lifespan of a bulb that has died.
- If flag is gt, it means that x is the current age of a bulb that is still working, so it is a lower bound on the lifespan.

To help you test, I will generate some fake data. First, I'll generate a Pandas DataFrame with random start times and lifespans. The columns are:

- start: time when the bulb was installed
- lifespan: lifespan of the bulb in years
- end: time when bulb died or will die
- age_t: age of the bulb at t=10

```python
import pandas as pd

lam = 2
k = 1.5
n = 15
t_end = 10

starts = np.random.uniform(0, t_end, n)
lifespans = SampleWeibull(lam, k, n)

df = pd.DataFrame({'start': starts, 'lifespan': lifespans})
df['end'] = df.start + df.lifespan
df['age_t'] = t_end - df.start
df.head()
```

Now I'll process the DataFrame to generate data in the form we want for the update.
```python
data = []
for i, row in df.iterrows():
    if row.end < t_end:
        data.append(('eq', row.lifespan))
    else:
        data.append(('gt', row.age_t))

for pair in data:
    print(pair)
```

```
('eq', 2.6693934023350288)
('eq', 4.4395922794570097)
('eq', 1.0134550944499816)
('gt', 1.1559931554468044)
('eq', 0.7748433201074556)
('eq', 0.58523684453682523)
('eq', 1.5050346559516292)
('eq', 0.0043263460781409529)
('eq', 3.1183238430930049)
('eq', 2.5497343435244648)
('eq', 1.426995199412439)
('eq', 2.8969617422395997)
('eq', 1.2143990874080592)
('gt', 0.28161850035970559)
('eq', 1.4219175345313229)
```

```python
# Solution

class LightBulb3(Suite, Joint):

    def Likelihood(self, data, hypo):
        lam, k = hypo
        if lam == 0:
            return 0
        flag, x = data
        if flag == 'eq':
            like = EvalWeibullPdf(x, lam, k)
        elif flag == 'gt':
            like = 1 - EvalWeibullCdf(x, lam, k)
        else:
            raise ValueError('Invalid data')
        return like
```

```python
# Solution

from itertools import product

lams = np.linspace(0, 10, 101)
ks = np.linspace(0, 10, 101)
suite = LightBulb3(product(lams, ks))
```

```python
# Solution

suite.UpdateSet(data)   # 5.7937515916258046e-12
```

```python
# Solution

pmf_lam = suite.Marginal(0)
thinkplot.Pdf(pmf_lam)
pmf_lam.Mean()   # 2.2717545445867739
```

```python
# Solution

pmf_k = suite.Marginal(1)
thinkplot.Pdf(pmf_k)
pmf_k.Mean()   # 1.2410798895283741
```

Exercise: Suppose you install a light bulb and then you don't check on it for a year, but when you come back, you find that it has burned out. Extend LightBulb to handle this kind of data, too.

```python
# Solution

class LightBulb4(Suite, Joint):

    def Likelihood(self, data, hypo):
        lam, k = hypo
        if lam == 0:
            return 0
        flag, x = data
        if flag == 'eq':
            like = EvalWeibullPdf(x, lam, k)
        elif flag == 'gt':
            like = 1 - EvalWeibullCdf(x, lam, k)
        elif flag == 'lt':
            like = EvalWeibullCdf(x, lam, k)
        else:
            raise ValueError('Invalid data')
        return like
```

Exercise: Suppose we know that, for a particular kind of lightbulb in a particular location, the distribution of lifespans is well modeled by a Weibull distribution with lam=2 and k=1.5.
If we install n=100 lightbulbs and come back one year later, what is the distribution of c, the number of lightbulbs that have burned out?

```python
# Solution

# The probability that any given bulb has burned out
# comes from the CDF of the distribution.
p = EvalWeibullCdf(1, lam, k)
p   # 0.29781149867344037
```

```python
# Solution

# The number of bulbs that have burned out is distributed Binom(n, p).
n = 100

from thinkbayes2 import MakeBinomialPmf

pmf_c = MakeBinomialPmf(n, p)
thinkplot.Pdf(pmf_c)
```

Exercise: Now suppose that lam and k are not known precisely, but we have a LightBulb object that represents the joint posterior distribution of the parameters after seeing some data. Compute the posterior predictive distribution for c, the number of bulbs burned out after one year.

```python
# Solution

n = 100
t_return = 1

metapmf = Pmf()
for (lam, k), prob in suite.Items():
    p = EvalWeibullCdf(t_return, lam, k)
    pmf = MakeBinomialPmf(n, p)
    metapmf[pmf] = prob
```

```python
# Solution

from thinkbayes2 import MakeMixture

mix = MakeMixture(metapmf)
thinkplot.Pdf(mix)
```
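The numbers above are easy to sanity-check by hand: with lam=2 and k=1.5, the closed-form Weibull CDF gives p = 1 - exp(-(1/2)^1.5) ≈ 0.2978 for a bulb dying within its first year, and the mean of Binom(n=100, p) is n*p ≈ 29.8 burned-out bulbs. A standalone check using only the standard library:

```python
import math

def weibull_cdf(x, lam, k):
    # Closed-form Weibull CDF, matching EvalWeibullCdf above.
    return 1 - math.exp(-((x / lam) ** k))

lam, k, n = 2, 1.5, 100
p = weibull_cdf(1, lam, k)   # probability a bulb is dead at t = 1
mean_c = n * p               # mean of Binomial(n, p)
print(p, mean_c)
```

This reproduces the 0.29781... value printed by the solution cell, and confirms that the peak of pmf_c should sit near 30.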
https://nbviewer.jupyter.org/github/AllenDowney/ThinkBayes2/blob/master/examples/survival_soln.ipynb
I have put audio data into the data folder of the following program (a .pde file), but I cannot play the audio; I get a NullPointerException. What should I change?

```java
import ddf.minim.*;   // Import the minim library

Minim minim;          // Declaration of minim type Minim
AudioPlayer player;   // Sound data storage variable

void setup() {
  size(100, 100);
  minim = new Minim(this);                 // initialization
  player = minim.loadFile("sample.mp3");   // Load sample.mp3
  player.play();                           // Play
}

void draw() {
  background(0);
}

void stop() {
  player.close();   // Close sound data
  minim.stop();
  super.stop();
}
```

Answer #1:

Tried in Processing 3.5.3; sample.mp3 played without any problems, both from the data folder and from where the pde file was. If you get an error like that, it means the file does not exist. Check the file name.
https://www.tutorialfor.com/questions-151823.htm
# Handling Failure

Follow along in the Terminal:

```
cd examples/tutorial
python 04_handle_failures.py
```

# If at first you don't succeed...

Now that we have a working ETL flow, let's take further steps to ensure its robustness. The extract_* tasks are making web requests to external APIs in order to fetch the data. What if the API is unavailable for a short period? Or if a single request times out for unknown reasons? Prefect Tasks can be retried on failure; let's add this to our extract_* tasks:

```python
from datetime import timedelta

import aircraftlib as aclib
from prefect import task, Flow, Parameter


@task(max_retries=3, retry_delay=timedelta(seconds=10))
def extract_reference_data():
    # same as before ...
    ...


@task(max_retries=3, retry_delay=timedelta(seconds=10))
def extract_live_data(airport, radius, ref_data):
    # same as before ...
    ...
```

This is a simple measure that helps our Flow gracefully handle transient errors in only the tasks we specify. Now if there are any failed web requests, a maximum of 3 attempts will be made, waiting 10 seconds between each attempt.

More Ways to Handle Failures

There are other mechanisms Prefect provides to enable specialized behavior around failures:

- Task Triggers: selectively execute Tasks based on the states from upstream Task runs.
- State Handlers: provide a Python function that is invoked whenever a Flow or Task changes state (see all the things!).
- Notifications: get Slack notifications upon state changes of interest, or use the EmailTask in combination with Task Triggers.

Up Next!

Schedule our Flow to run periodically or on a custom schedule.
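The retry behavior that `@task(max_retries=..., retry_delay=...)` adds can be sketched outside Prefect as a plain decorator. This is only an illustration of the mechanism, not Prefect's actual implementation:

```python
import functools
import time

def retry(max_retries=3, retry_delay=0.0):
    """Re-invoke the wrapped function when it raises, sleeping
    retry_delay seconds between attempts. Mirrors the *behavior*
    of Prefect's max_retries / retry_delay task options."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_retries:
                        raise  # out of retries: surface the failure
                    time.sleep(retry_delay)
        return wrapper
    return decorator

attempts = []

@retry(max_retries=3, retry_delay=0)  # zero delay just for this demo
def flaky_extract():
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError("transient failure")
    return "data"

result = flaky_extract()
print(result, len(attempts))  # the call succeeds on the third attempt
```

The point of delegating this to the framework is that Prefect also records each attempt as task state, which plain decorators cannot do.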
https://docs.prefect.io/core/tutorial/04-handling-failure.html
nexuslib is a library that provides robust reflection support, random number generation, cryptography, networking, and more. The reflection and serialization packages have among their features deterministic JSON de/serialization, deserializing directly to typed AS objects, a structured reflection class hierarchy, and more. It also has full support for Application Domains and namespaces. The crypto and security packages offer an HMAC class and some utilities, useful for protecting content. nexuslib also has packages for working with version control systems, such as git.

Sample

```actionscript
// Class nexus.math.Random
public static function get instance():Random;

public function Random(generator:IPRNG);

public function float(min:Number = NaN, max:Number = NaN):Number;
public function integer(min:uint = 0, max:int = int.MAX_VALUE):int;
public function unsignedInteger(min:uint = 0, max:uint = uint.MAX_VALUE):uint;
public function boolean():Boolean;
public function weightedRound(value:Number):int;
public function choice(... items):Object;
public function shuffle(container:Object):void;
public function toString(verbose:Boolean = false):String;
```
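The API above is ActionScript, so here is the most unusual method, weightedRound, illustrated in a language-neutral way. Note this is an assumed reading of the method based on its name (round up with probability equal to the fractional part, so results average out to the input); the listing above doesn't spell out nexuslib's actual semantics:

```python
import random

def weighted_round(value, rng=random):
    """Probabilistic rounding: 2.25 becomes 3 a quarter of the time
    and 2 otherwise, so the expected result equals `value`.
    (An assumed reading of nexuslib's Random.weightedRound.)"""
    base = int(value // 1)                      # floor of the value
    frac = value - base                         # fractional part in [0, 1)
    return base + (1 if rng.random() < frac else 0)

rng = random.Random(42)  # seeded for reproducibility
samples = [weighted_round(2.25, rng) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 2.25
```

Rounding this way avoids the systematic bias that always rounding down (or half-up) introduces when many rounded quantities are summed.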
https://www.as3gamegears.com/misc/nexuslib/
In this blog post, we'll highlight how all the basic commands you end up using in the first few minutes after installing PostgreSQL are identical in YugabyteDB. We'll cover connecting to the database, creating users, databases, schemas, and calling external files from the SQL shell. In the next blog post in this series we'll tackle querying data to demonstrate that if you know how to query data in PostgreSQL, you already know how to do it in YugabyteDB.

First things first, for those of you who might be new to either distributed SQL or YugabyteDB:

- Smart distributed query execution, so that query processing is pushed closer to the data as opposed to data being pushed over the network and thus slowing down query response times.
- …

Installing YugabyteDB

YugabyteDB is only slightly more involved than getting PostgreSQL up and running. At the end of the day it should only take a few minutes or less, depending on your environment. Let's look at a few scenarios:

Single Node Installation on Mac

```shell
$ wget
$ tar xvfz yugabyte-2.3.0.0-darwin.tar.gz && cd yugabyte-2.3.0.0/
$ ./bin/yugabyted start
```

Single Node Installation on Linux

```shell
$ wget
$ tar xvfz yugabyte-2.3.0.0-linux.tar.gz && cd yugabyte-2.3.0.0/
$ ./bin/post_install.sh
$ ./bin/yugabyted start
```

Note: If you want to run 3 local nodes instead of a single node for either the Mac or Linux setups, just tweak the last command so it reads: `./bin/yb-ctl --rf 3 create`

3 Node Installation on Google Kubernetes Engine

```shell
$ helm repo add yugabytedb
$ helm repo update
$ kubectl create namespace yb-demo
$ helm install yb-demo yugabytedb/yugabyte --namespace yb-demo --wait
```

For more information on other installation types and prerequisites, check out the Quickstart Docs.

Connecting to a YugabyteDB Cluster

Connect Locally

Assuming you are in the YugabyteDB install directory, simply execute the following to get to a YSQL shell:

```shell
$ ./bin/ysqlsh
ysqlsh (11.2-YB-2.3.0.0-b0)
Type "help" for help.
```
yugabyte=#

Connecting on GKE

Assuming you are connected to the Kubernetes cluster via the Google Cloud Console, execute the following:

```shell
$ kubectl exec -n yb-demo -it yb-tserver-0 -- ysqlsh -h yb-tserver-0.yb-tservers.yb-demo
ysqlsh (11.2-YB-2.3.0.0-b0)
Type "help" for help.

yugabyte=#
```

Check out the documentation for more information about YugabyteDB's PostgreSQL-compatible YSQL API.

Connecting via JDBC

Assuming we are using the PostgreSQL JDBC driver to connect to YugabyteDB, the construction of the connect string will be identical to PostgreSQL. For example, here's a snippet for setting up a connection to a database called "northwind" in YugabyteDB using the PostgreSQL driver in Spring:

```
spring.datasource.url=jdbc:postgresql://11.22.33.44:5433/northwind
```

Note: In the example above we assume YugabyteDB's YSQL API is being accessed at 11.22.33.44 on the default port 5433, using the default user "yugabyte" with the password "password". For more information about YugabyteDB connectivity options, check out the Drivers section of the documentation.

Setting Up Users in YugabyteDB

Creating roles/users and assigning them privileges and passwords is the same in YugabyteDB as it is in PostgreSQL.

Create a Role with Privileges

```sql
CREATE ROLE felix LOGIN;
```

Create a Role with a Password

```sql
CREATE USER felix2 WITH PASSWORD 'password';
```

Create a Role with a Password That Will Expire in the Future

```sql
CREATE ROLE felix3 WITH LOGIN PASSWORD 'password' VALID UNTIL '2020-09-30';
```

Change a User's Password

```sql
ALTER ROLE felix WITH PASSWORD 'newpassword';
```

List All the Users

```sql
\du
```

For more information about how YugabyteDB handles users, permissions, security, and encryption, check out the Secure section of the documentation.

Creating Databases and Schemas in YugabyteDB

Creating databases and schemas in YugabyteDB is identical to how it is done in PostgreSQL.
Create a Database

```sql
CREATE DATABASE northwind;
```

Switch to a Database

```sql
\c northwind;
```

Describe the Database

```sql
\dt
```

Create a Schema

```sql
CREATE SCHEMA nonpublic;
```

Create a Schema for a Specific User

```sql
CREATE SCHEMA AUTHORIZATION felix;
```

Create Objects and Load Data from External Files

If you have DDL or DML scripts that you want to call from within the YSQL shell, the process is the same in YugabyteDB as it is in PostgreSQL. You can find the scripts used in the examples below in the "~/yugabyte-2.3.x.x/share" directory. For information about the sample data sets that ship by default with YugabyteDB, check out the Sample Datasets documentation.

Call an External File to Create Objects

```sql
\i 'northwind_ddl.sql';
```

Call an External File to Load Data into the Objects

```sql
\i 'northwind_data.sql';
```

What's Next?

Stay tuned for part 2 in this series where we'll dive into querying data from a YugabyteDB cluster using familiar PostgreSQL syntax.
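Since YSQL speaks the PostgreSQL wire protocol, any PostgreSQL client library can connect much as the JDBC snippet earlier showed. The sketch below merely assembles a libpq-style connection string; the host and database name are placeholders, while the user, password, and port defaults come from the note above:

```python
def ysql_dsn(host, user="yugabyte", password="password",
             dbname="yugabyte", port=5433):
    """Build a libpq key/value connection string for YugabyteDB's
    PostgreSQL-compatible YSQL endpoint (note port 5433, not 5432)."""
    return (f"host={host} port={port} dbname={dbname} "
            f"user={user} password={password}")

dsn = ysql_dsn("11.22.33.44", dbname="northwind")
print(dsn)
```

A string like this can then be handed to any libpq-based driver, e.g. `psycopg2.connect(dsn)`, exactly as you would for vanilla PostgreSQL.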
https://dev.to/jguerreroyb/a-postgresql-compatible-distributed-sql-cheat-sheet-the-basics-4ep7
Subject: Re: [boost] first steps to submitting - boost :: observers
From: Giovanni Piero Deretta (gpderetta_at_[hidden])
Date: 2016-09-18 19:22:33

On 17 Sep 2016 7:18 pm, "Robert McInnis" <r_mcinnis_at_[hidden]> wrote:
>
> G'afternoon,
>
> This is my first time submitting to a public repo, please be gentle

Welcome to the meat grinder :)

> I'd like to submit a series of classes I have been using since '90. The
> initial set implements a thread safe subject/observer pattern. I have
> included a handful of example programs, two fairly trivial and the third
> more in-depth. All examples are single file examples to make compilation
> trivial.
>
> This is the first re-work of my original tools, making them more
> boost-friendly. I expect to add more default observer templates and
> observable objects, but that will come with time.

I do like the idea. It seems vaguely related to Functional Reactive
Programming (of which I have no experience). It would be nice to have a
discussion of similarities and differences. Possibly this is discussed in
the docs, but they are not immediately accessible via mobile.

I would like thread safety to be optional.

A few quick comments:

* Code conventions do not follow the boost style.
* Macros are injected outside the BOOST namespace.
* Lock free mutexes aren't.

-- gpd
https://lists.boost.org/Archives/boost/2016/09/230759.php
Data Binding in Xamarin DataGrid (SfDataGrid)

The SfDataGrid is bound to an external data source to display the data. It supports data sources such as List, ObservableCollection, and so on. The SfDataGrid.ItemsSource property helps to bind this control with a collection of objects.

In order to bind a data source to the SfDataGrid, set the SfDataGrid.ItemsSource property as follows, such that each row in the SfDataGrid binds to an object in the data source and each column binds to a property in the data model object.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns=""
             xmlns:x=""
             xmlns:local="clr-namespace:DataGridDemo;assembly=DataGridDemo"
             xmlns:syncfusion="clr-namespace:Syncfusion.SfDataGrid.XForms;assembly=Syncfusion.SfDataGrid.XForms"
             x:Class="DataGridDemo;
```

If the data source implements the ICollectionChanged interface, then the SfDataGrid will automatically refresh the view when an item is added, removed, or cleared. When you add or remove an item in an ObservableCollection, it automatically refreshes the view, as ObservableCollection implements INotifyCollectionChanged. But when you do the same in a List, it will not refresh the view automatically.

If the data model implements the INotifyPropertyChanged interface, then the SfDataGrid responds to the property change at runtime to update the view.

NOTE: The SfDataGrid does not support DataTable binding in Xamarin.Forms, since System.Data is inaccessible in Portable Class Library.

Binding with IEnumerable

The SfDataGrid control supports binding any collection that implements the IEnumerable interface. All the data operations such as sorting, grouping, and filtering are supported when binding a collection derived from IEnumerable.

Binding with DataTable

The SfDataGrid control supports binding a DataTable. The SfDataGrid control automatically refreshes the UI when binding a DataTable as ItemsSource and rows are added, removed, or cleared.
```xml
<sfGrid:SfDataGrid x:
</sfGrid:SfDataGrid>
```

```csharp
DataTable Table = this.GetDataTable();
this.sfDataGrid1.ItemsSource = Table;
```

Below are the limitations when binding a DataTable as ItemsSource to SfDataGrid:

- Custom sorting is not supported.
- SfDataGrid.View.Filter is not supported.
- Advanced Filtering does not support case-sensitive filtering.
- GridUnboundColumn.Expression is not supported. This can be achieved by using the DataColumn of the DataTable, by setting the DataColumn.Expression property.
- SfDataGrid.LiveDataUpdateMode is not supported.

Binding complex properties

The SfDataGrid control supports binding a complex property to its columns. To bind a complex property to a GridColumn, set the complex property path to the MappingName.

```xml
<sfGrid:SfDataGrid x:
  <sfGrid:SfDataGrid.Columns>
    <sfGrid:GridTextColumn
    <sfGrid:GridTextColumn
    <sfGrid:GridTextColumn
    <sfGrid:GridTextColumn
  </sfGrid:SfDataGrid.Columns>
</sfGrid:SfDataGrid>
```

```csharp
this.dataGrid.Columns.Add(new GridTextColumn() { MappingName = "OrderID.Order" });
```

View

The DataGrid has the View property of type ICollectionViewAdv interface, which implements the ICollectionView interface. The View is responsible for maintaining and manipulating data and other advanced operations, like sorting, grouping, etc. When you bind a collection to the ItemsSource property of the SfDataGrid, the View is created and maintains the operations on data such as grouping, sorting, insert, delete, and modification.

NOTE: The DataGrid creates different types of views derived from the ICollectionViewAdv interface based on the ItemsSource.

IMPORTANT: View-related properties can be used only after the SfDataGrid view is created. Hence changes related to the view can be done in the SfDataGrid.GridViewCreated or SfDataGrid.GridLoaded event, or at runtime only.

The following property is associated with the View:

LiveDataUpdateMode

The SfDataGrid supports updating the view during data manipulation operations and property changes using the LiveDataUpdateMode.
It allows updating the view based on the SfDataGrid.View.LiveDataUpdateMode property.

```csharp
dataGrid.GridViewCreated += DataGrid_GridViewCreated;

private void DataGrid_GridViewCreated(object sender, GridViewCreatedEventArgs e)
{
    dataGrid.View.LiveDataUpdateMode = LiveDataUpdateMode.Default;
}
```

The following events are associated with the View.

RecordPropertyChanged

The RecordPropertyChanged event is raised when a DataModel property value is changed, if the DataModel implements the INotifyPropertyChanged interface. The event receives two arguments, namely the sender that handles the DataModel, and PropertyChangedEventArgs, which has the following property:

- PropertyName: denotes the PropertyName of the changed value.

CollectionChanged

The CollectionChanged event is raised on changes in the Records/DisplayElements collection. The event receives two arguments, namely the sender that handles the View.

SourceCollectionChanged

The SourceCollectionChanged event is raised when making changes in the SourceCollection, for example adding or removing items. The event receives two arguments, namely the sender that handles GridQueryableCollectionViewWrapper.

The following methods are associated with the View and can be used to defer-refresh the view.

Data Virtualization

The data grid provides support to handle large amounts of data through its built-in virtualization feature. With data virtualization, record entries are created at runtime only upon scrolling to the vertical end, which improves the grid's loading time.

To set the SfDataGrid.EnableDataVirtualization property to true, follow the code example:

```xml
<syncfusion:SfDataGrid x:
```

```csharp
datagrid.EnableDataVirtualization = true;
```

NotificationSubscriptionMode

The data grid exposes the SfDataGrid.NotificationSubscriptionMode property, which allows you to set whether the underlying source collection items can listen to the INotifyCollectionChanged or INotifyPropertyChanging events.
You can handle the property change or collection change by setting the NotificationSubscriptionMode property. To set the NotificationSubscriptionMode property, follow the code example:

```xml
<syncfusion:SfDataGrid x:
```

```csharp
dataGrid.NotificationSubscriptionMode = NotificationSubscriptionMode.CollectionChange;
```

Binding the SfDataGrid.SelectedIndex property

You can bind any int value to the bindable property SfDataGrid.SelectedIndex, which gets or sets the last selected row's index in the SfDataGrid. Refer to the code below to bind the SfDataGrid.SelectedIndex from the ViewModel.

```xml
<sfgrid:SfDataGrid x:
</sfgrid:SfDataGrid>
```

```csharp
// ViewModel.cs
private int _selectedIndex;
public int SelectedIndex
{
    get { return _selectedIndex; }
    set { this._selectedIndex = value; RaisePropertyChanged("SelectedIndex"); }
}

public ViewModel()
{
    this.SelectedIndex = 5;
}
```

Binding the SfDataGrid.SelectedItem property

You can bind any object value to the bindable property SfDataGrid.SelectedItem, which gets or sets the selected item in the SfDataGrid. Refer to the code below to bind the SfDataGrid.SelectedItem from the ViewModel.
```xml
<sfgrid:SfDataGrid x:
</sfgrid:SfDataGrid>
```

```csharp
// ViewModel.cs
private object _selectedItem;
public object SelectedItem
{
    get { return _selectedItem; }
    set { this._selectedItem = value; RaisePropertyChanged("SelectedItem"); }
}

public ViewModel()
{
    this.SelectedItem = this.OrderInfoCollection[8];
}
```

Binding the SfDataGrid.SelectedItems property

You can bind any object-typed collection to the bindable property SfDataGrid.SelectedItems, which gets or sets the SelectedItems collection in the SfDataGrid. Refer to the code below to bind the SfDataGrid.SelectedItems from the ViewModel.
<sfgrid:SfDataGrid x:
<sfgrid:SfDataGrid.Columns>
<sfgrid:GridPickerColumn
</sfgrid:SfDataGrid.Columns>
</sfgrid:SfDataGrid>

// ViewModel.cs
private ObservableCollection<string> _customerNames;

public ObservableCollection<string> CustomerNames
{
    get { return _customerNames; }
    set
    {
        this._customerNames = value;
        RaisePropertyChanged("CustomerNames");
    }
}

public ViewModel()
{
    this.CustomerNames = customerNames.ToObservableCollection();
}

string[] customerNames = { "Thomas", "John", "Hanna", "Laura", "Gina" };

Binding the ItemsSource in ViewModel to the Picker loaded inside a template

The ItemsSource of a Picker loaded inside a GridTemplateColumn can also be assigned a value via binding, by passing the binding context as the Source to the ItemsSource property. Refer to the code below to bind the ItemsSource of the Picker.

" HeaderText="Picker">
<sfgrid:GridTemplateColumn.CellTemplate>
<DataTemplate>
<Picker ItemsSource="{Binding SelectedModels, Source={x:Reference viewModel}}" SelectedIndex="0"/>
</DataTemplate>
</sfgrid:GridTemplateColumn.CellTemplate>
</sfgrid:GridTemplateColumn>
</sfgrid:SfDataGrid.Columns>
</sfgrid:SfDataGrid>

// ViewModel.cs
private List<String> _vehicleModel;

public List<String> SelectedModels
{
    get { return _vehicleModel; }
    set
    {
        this._vehicleModel = value;
        RaisePropertyChanged("SelectedModels");
    }
}

public ViewModel()
{
    this.SelectedModels = selectedModels.ToList();
}

string[] selectedModels = { "Select Car", "Audi", "Bentley", "Mercedes Benz", "Porsche" };

Binding the button command in a template column to the ViewModel

You can also assign any value to the Command property of a Button loaded inside a GridTemplateColumn via binding.
Refer to the code below to bind the Command property of the Button.

">
<sfgrid:GridTemplateColumn.CellTemplate>
<DataTemplate>
<Button Text="Template" Command="{Binding ButtonCommand, Source={x:Reference viewModel}}"/>
</DataTemplate>
</sfgrid:GridTemplateColumn.CellTemplate>
</sfgrid:GridTemplateColumn>
</sfgrid:SfDataGrid.Columns>
</sfgrid:SfDataGrid>

// ViewModel.cs
private Command _buttonCommand;

public Command ButtonCommand
{
    get { return _buttonCommand; }
    set
    {
        this._buttonCommand = value;
        RaisePropertyChanged("ButtonCommand");
    }
}

public ViewModel()
{
    this.ButtonCommand = new Command(CustomMethod);
}

public void CustomMethod()
{
    // Customize your code here
}

You can download the source code of the sample for binding the SfDataGrid properties here.

See also

How to bind a column collection from view model in SfDataGrid Xamarin.Forms
How to resolve "Cannot resolve reference `Xamarin.Android.Support.Interpolate'" in Xamarin.Forms Android projects
How to resolve SfDataGrid not rendering issue in iOS and UWP
How to configure and install SfDataGrid NuGet package in Visual Studio
How to make Syncfusion.Xamarin.SfDataGrid work in release mode in UWP when the .NET Native tool chain is enabled
How to apply the custom assemblies when the project is configured with Syncfusion NuGet packages
How to bind button command to ViewModel from TemplateColumn of DataGrid
How to update the modified GridCell value for Dictionary
How to use SfDataGrid in Prism
How to commit the edited values when binding Dictionary in SfDataGrid
How to load SfDataGrid dynamically with JSON data without POCO classes
How to retain the SfDataGrid properties when changing the data source
How to bind a view model property to a header template
How to overcome the DisplayBinding converter not firing when the XamlCompilation attribute is set to XamlCompilationOptions.Compile
How to get the X and Y coordinates when interacting with SfDataGrid
How to resolve "Expecting class path separator ';' before" error in
Xamarin.Forms.Android
How to display an animation while loading the data in the SfDataGrid
https://help.syncfusion.com/xamarin/datagrid/data-binding
CC-MAIN-2020-45
en
refinedweb
The getContext() method incorrectly returns null when requesting the ROOT context ("/"). The following fix resolves the issue:

public ServletContext getContext(String uri) {

    // Validate the format of the specified argument
    if ((uri == null) || (!uri.startsWith("/")))
        return (null);

    Context child = null;
    try {
        // Look for an exact match
        Container host = context.getParent();
        child = (Context) host.findChild(uri);
        if ((child == null) && "/".equals(uri)) {  // fix
            child = (Context) host.findChild(""); // fix
        } // fix

The root context was created via:

Context context = embedded.addWebapp("", "/path/to/webapps/ROOT");

Changelog for the Tomcat 7 fix:

<fix>
Correct a regression in the fix for <bug>57190</bug> that incorrectly required the path passed to <code>ServletContext.getContext(String)</code> to be an exact match to a path to an existing context. (markt)
</fix>

This fix will be included in Tomcat 7.0.60 and later.

*** Bug 57733 has been marked as a duplicate of this bug. ***

(In reply to Christopher Schultz from comment #1)

For reference, the discussion about the ServletContext.getContext() API on the dev@ list of that time is in the thread "Two serious issues have been introduced in Tomcat 7 and Tomcat 8...", started 2015-02-23.

It looks like this bug was re-introduced on the Tomcat 7 branch in 7.0.61.

(In reply to Paul Taylor from comment #6)

Quick test: It does work in 7.0.78. Note that the ServletContext.getContext() method is disabled by default for security reasons. It must be explicitly enabled by setting crossContext="true" in the Context configuration of the calling web application.

Created attachment 35180 [details] test.war

Sample web application that demonstrates that the feature is working successfully. Steps:

1. Deploy it as webapps/test.war
2. Start Tomcat
3. Access
It prints:

getContext("/"): org.apache.catalina.core.ApplicationContextFacade@16b6c55

Tested with 7.0.78.
https://bz.apache.org/bugzilla/show_bug.cgi?id=57645
My Attribute Disappears

The GetCustomAttributes scenario (ICustomAttributeProvider.GetCustomAttributes or Attribute.GetCustomAttributes, referred to as GetCA in this post) involves 3 pieces:

- a custom attribute type
- an entity which is decorated with the custom attribute
- a code snippet calling GetCA on the decorated entity.

These pieces could be residing together in one assembly, or separately in 3 different assemblies. The following C# code shows each piece in separate files, and I will compile them into 3 assemblies: the attribute type assembly (attribute.dll), the decorated entity assembly (decorated.dll) and the querying assembly (getca.exe):

// file: attribute.cs
public class MyAttribute : System.Attribute { }

// file: decorated.cs
[My]
public class MyClass { }

// file: getca.cs
using System;
using System.Reflection;

class Demo {
    static void Main(string[] args) {
        Assembly asm = Assembly.LoadFrom(args[0]);
        object[] attrs = asm.GetType("MyClass").GetCustomAttributes(true);
        Console.WriteLine(attrs.Length);
    }
}

D:\> sn.exe -k sn.key
D:\> csc /t:library /keyfile:sn.key attribute.cs
D:\> csc /t:library /r:attribute.dll decorated.cs
D:\> gacutil -i attribute.dll
D:\> del attribute.dll
D:\> csc getca.cs
D:\> getca.exe decorated.dll
1
D:\> getca.exe \\machine\d$\decorated.dll
0

attribute.dll is installed in the GAC (no local copy, to avoid confusion); getca.exe checks whether the loaded type MyClass has MyAttribute. As you see from the output, MyAttribute disappeared when the decorated entity was loaded from a share (or as a partially trusted assembly, to be precise).

GetCA is supposed to return an array of attribute objects. In order to do so, it parses the custom attribute metadata, finds the right custom attribute constructor, and then invokes that .ctor with some parameters (if any). It is a late-bound call; reflection decides whether the querying assembly should invoke the attribute .ctor, or avoid calling it for security reasons.
Let me quote something from ShawnFa's security blog: "by default, strongly named, fully trusted assemblies are given an implicit LinkDemand for FullTrust on every public and protected method of every publicly visible class". This means that, in a scenario where a library is strongly named and fully trusted, partially trusted assemblies are unable to call into such a library.

The GetCA scenario is not exactly the same, but similar. The .ctor to be invoked is in attribute.dll (in the GAC, strongly named and fully trusted). The querying assembly (which runs locally and is fully trusted too) is the code that makes the invocation (if that were to happen). But to make this .ctor invocation, we need to pass in the parameters, which are provided by the decorated entity assembly. GetCA will take the decorated entity as the caller of the attribute type constructor. Based on what I just quoted, if the decorated entity assembly is partially trusted, we will filter out such attribute object creation, unless the attribute assembly is decorated with AllowPartiallyTrustedCallersAttribute. Note: please read Shawn's blog entry carefully about this attribute and its security implications before taking this approach.

What if the attribute and decorated entity are in the same assembly? In this case, it does not matter whether the assembly is loaded from a share or locally. GetCA will try to create and return the attribute object. If the loaded assembly is partially trusted, the runtime gives it a smaller set of permissions, and running the .ctor code is not going to do something terrible.

To close, GetCA will try to create the custom attribute object if any of the following 3 conditions is true:

- the decorated entity and the custom attribute type are in one assembly,
- the decorated entity is fully trusted,
- the assembly which defines the custom attribute type is decorated with APTCA.
By the way, the new class CustomAttributeData in .NET 2.0 is designed to access custom attributes in the reflection-only context, where no code will be executed (only metadata checking). If we use CustomAttributeData.GetCustomAttributes instead in the above example, it prints 1: one CustomAttributeData object, not one MyAttribute object.
https://docs.microsoft.com/en-us/archive/blogs/haibo_luo/my-attribute-disappears
I have recently completed the Orion Constellations path project for the "Visualize Data with Python" skill path, and I encourage any feedback on the project.

Project: Visualizing the Orion Constellation

In this project we will be visualizing the Orion constellation in 2D and 3D using the Matplotlib function .scatter(). The goal of the project is to understand spatial perspective. Once we visualize Orion in both 2D and 3D, we will be able to see the difference between the constellation shape humans see from earth and the actual position of the stars that make up this constellation.

1. Set-Up

We will add %matplotlib notebook in the cell below. This statement will allow us to rotate our visualization in this Jupyter notebook. We will be importing matplotlib.pyplot as usual. In order to see our 3D visualization, we also need to add this new line after we import Matplotlib: from mpl_toolkits.mplot3d import Axes3D

%matplotlib notebook
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

2. Get familiar with real data

The x, y, and z lists below are composed of the x, y, z coordinates for each star in the collection of stars that make up the Orion constellation, as documented in a paper by Nottingham Trent University on "The Orion constellation as an installation" found here.

#]

3. Create a 2D Visualization

Before we visualize the stars in 3D, let's get a sense of what they look like in 2D. We first create a figure for the 2D plot and save it to a variable named fig_2d. Then we add a subplot with .add_subplot() as the single subplot, with 1,1,1. We use the scatter function to visualize our x and y coordinates. We also set the background color to "black" and the marker on the data points to "star" so that they imitate the stars in the black sky. We finally give a title and render our visualization. The 2D visualization does not look like the Orion constellation we see in the night sky.
There is a curve to the sky, and this is a flat visualization, but we will visualize it in 3D in the next step to get a better sense of the actual star positions.

fig_2d = plt.figure()
ax = fig_2d.add_subplot(1,1,1)
plt.scatter(x, y, color='yellow', marker='*')
plt.title('2D Visualization of the Orion Constellation')
plt.xlabel('Orion x Coordinates')
plt.ylabel('Orion y Coordinates')
ax.set_facecolor('xkcd:black')
plt.show()

4. Create a 3D Visualization

We first create a figure for the 3D plot and save it to a variable named fig_3d. Since this will be a 3D projection, we want to tell Matplotlib this will be a 3D plot. To add a 3D projection, we must include the projection argument. It would look like this: projection="3d"

Then we add our subplot with .add_subplot() as the single subplot 1,1,1 and specify our projection as 3d: fig_3d.add_subplot(1,1,1, projection="3d")

Since this visualization will be in 3D, we will need our third dimension, in this case our z coordinate. We then create a new variable constellation3d and call the scatter function with our x, y and z coordinates. We also set the background color to "black" and the marker on the data points to "star" so that they imitate the stars in the black sky. We finally give a title and render our visualization.

fig_3d = plt.figure()
constellation3d = fig_3d.add_subplot(1,1,1, projection="3d")
constellation3d.scatter(x, y, z, color='yellow', marker='*', s=50)
plt.title('3D Visualization of the Orion Constellation')
constellation3d.set_xlabel('Orion x Coordinates')
constellation3d.set_ylabel('Orion y Coordinates')
constellation3d.set_zlabel('Orion z Coordinates')
plt.gca().patch.set_facecolor('white')
constellation3d.w_xaxis.set_pane_color((0, 0, 0, 1.0))
constellation3d.w_yaxis.set_pane_color((0, 0, 0, 1.0))
constellation3d.w_zaxis.set_pane_color((0, 0, 0, 1.0))
plt.show()

Any feedback is most welcome. Thanks a lot!
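The point about spatial perspective can also be checked numerically, without any plotting. The sketch below uses a couple of made-up star coordinates (not the paper's actual Orion data; the names "A" and "B" and all numbers are invented for illustration) to show that two stars that appear close together in the 2D sky projection can be far apart once the z coordinate (depth) is included:

```python
import math

# Hypothetical (x, y, z) coordinates; NOT the real Orion data from the paper.
stars = {
    "A": (0.1, 0.2, 1.0),
    "B": (0.2, 0.3, 9.0),  # close to A on the sky plane, but much deeper
}

a, b = stars["A"], stars["B"]

# Apparent (2D) separation ignores depth; true (3D) separation includes it.
sep_2d = math.dist(a[:2], b[:2])
sep_3d = math.dist(a, b)

print(round(sep_2d, 3))  # -> 0.141
print(round(sep_3d, 3))  # -> 8.001
```

The two stars are over fifty times farther apart in 3D than their projected 2D separation suggests, which is exactly why the flat scatter plot and the rotatable 3D plot look so different.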
https://discuss.codecademy.com/t/visualize-data-with-python-orion-constellation-project/516053
Double-sided Maxwell distribution.

Inherits From: Distribution, AutoCompositeTensor

tfp.distributions.DoublesidedMaxwell(
    loc, scale, validate_args=False, allow_nan_stats=True,
    name='doublesided_maxwell'
)

This distribution is useful to compute measure-valued derivatives for Gaussian distributions. See [Mohamed et al. 2019][1] for more details.

Mathematical details

The double-sided Maxwell distribution generalizes the Maxwell distribution to the entire real line.

pdf(x; mu, sigma) = 1/(sigma*sqrt(2*pi)) * ((x-mu)/sigma)^2 * exp(-0.5 * ((x-mu)/sigma)^2)

where loc = mu and scale = sigma. The DoublesidedMaxwell distribution is a member of the location-scale family, i.e., it can be constructed as,

X ~ DoublesidedMaxwell(loc=0, scale=1)
Y = loc + scale * X

The double-sided Maxwell is a symmetric distribution that extends the one-sided Maxwell from R+ to the entire real line. Their densities are therefore the same up to a factor of 0.5.

There are several methods for generating random variates from this distribution. The version here uses 3 Gaussian variates and a uniform variate to generate the samples. The sampling path is:

mu + sigma * sgn(U - 0.5) * sqrt(X^2 + Y^2 + Z^2),    U ~ Unif(0, 1);  X, Y, Z ~ N(0, 1)

In the sampling process above, the random variates generated by sqrt(X^2 + Y^2 + Z^2) are samples from the one-sided Maxwell (or Maxwell-Boltzmann) distribution.

Examples

import tensorflow_probability as tfp
tfd = tfp.distributions

# Define a single scalar DoublesidedMaxwell distribution.
dist = tfd.DoublesidedMaxwell(loc=0., scale=3.)

# Evaluate the cdf at 1, returning a scalar.
dist.cdf(1.)

# Define a batch of two scalar valued DoublesidedMaxwells.
# The first has loc 1 and scale 11, the second loc 2 and scale 22.
dist = tfd.DoublesidedMaxwell(loc=[1, 2.], scale=[11, 22.])

# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
# returning a length two tensor.
dist.prob([0, 1.5])

# Get 3 samples, returning a 3 x 2 tensor.
dist.sample([3])

References

[1]: Mohamed et al., "Monte Carlo Gradient Estimation in Machine Learning", 2019.
[2]: B. Heidergott et al., "Sensitivity estimation for Gaussian systems", 2008. European Journal of Operational Research, vol. 187, pp. 193-207.
[3]: G. Pflug, "Optimization of Stochastic Models: The Interface Between Simulation and Optimization", 2002. Chp. 4.2, pg. 247.

...

Distribution subclasses are not required to implement _parameter_properties, so this method may raise NotImplementedError. Providing a _parameter_properties implementation enables several advanced features, including:

- Distribution batch slicing (sliced_distribution = distribution[i:j]).
- Automatic inference of _batch_shape and _batch_shape_tensor, which must otherwise be computed explicitly.
- Automatic instantiation of the distribution within TFP's internal property tests.
- Automatic construction of 'trainable' instances of the distribution using appropriate bijectors to avoid violating parameter constraints. This enables the distribution family to be used easily as a surrogate posterior in variational inference.

In the future, parameter property annotations may enable additional functionality; for example, returning Distribution instances from tf.vectorized_map.

unnormalized_log_prob

unnormalized_log_prob(
    value, name='unnormalized_log_prob', **kwargs
)

Potentially unnormalized log probability density/mass function.

This function is similar to log_prob, but does not require that the return value be normalized. (Normalization here refers to the total integral of probability being one, as it should be by definition for any probability distribution.) This is useful, for example, for distributions where the normalization constant is difficult or expensive to compute. By default, this simply calls log_prob.
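As a sanity check on the density formula and the sampling path given above, here is a small pure-Python sketch that needs no TensorFlow. The integration range and step size are arbitrary choices, and the function names are mine, not part of the TFP API:

```python
import math
import random

def dsmaxwell_pdf(x, loc=0.0, scale=1.0):
    """Double-sided Maxwell density:
    1/(sigma*sqrt(2*pi)) * ((x-mu)/sigma)^2 * exp(-0.5*((x-mu)/sigma)^2)."""
    z = (x - loc) / scale
    return (1.0 / (scale * math.sqrt(2.0 * math.pi))) * z * z * math.exp(-0.5 * z * z)

def dsmaxwell_sample(loc=0.0, scale=1.0, rng=random):
    """Sampling path from the docs: mu + sigma*sgn(U-0.5)*sqrt(X^2+Y^2+Z^2),
    with U uniform and X, Y, Z standard normal."""
    u = rng.random()
    x, y, z = (rng.gauss(0.0, 1.0) for _ in range(3))
    sign = 1.0 if u >= 0.5 else -1.0
    return loc + scale * sign * math.sqrt(x * x + y * y + z * z)

# The density should integrate to 1 over the real line (Riemann sum over
# [-10, 10]; the tails beyond that are negligible).
step = 0.001
grid = [-10.0 + i * step for i in range(20001)]
total = sum(dsmaxwell_pdf(x) for x in grid) * step
print(round(total, 4))  # -> 1.0
```

The (x - mu)^2 factor makes the density vanish at x = mu, so unlike a Gaussian the distribution is bimodal, which is visible in any histogram of dsmaxwell_sample draws.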
https://tensorflow.google.cn/probability/api_docs/python/tfp/distributions/DoublesidedMaxwell?hl=vi
Inheritance is one of the main features of programming in Java. It is a mechanism in which one or more classes acquire the properties of another class. The class which inherits the properties of another class is called the subclass or the child class. The class whose properties are inherited is known as the base class, super class or the parent class. Controlling access to inherited variables and methods with Java access modifiers supports the reuse of existing code.

What are Java Access Modifiers?

Java access modifiers specify which classes can access a given class and its properties. There are three access modifier keywords in Java. These are:

Public Modifier

The public modifier is the least restrictive of the three. It is used to specify that a member is visible to all classes and can be accessed from everywhere. We should declare public members only if such access does not produce undesirable results. These act like secrets which can be known to anyone. (Note that public is not the default: when no modifier is written, Java applies package-private "default" access, which limits visibility to the member's own package.)

Protected Modifier

This specifies that the member can be accessed by the class's own members, by classes in the same package, and by the members of its subclasses. These act as family secrets which only the concerned family knows, but no outsiders know about.

Private Modifier

The private access modifier is the most restrictive access level. It specifies that the members can only be accessed by their own class, i.e. the class where they are defined. This is used with variables or methods containing information that need not be accessed by an outsider, as exposing it could make the program inconsistent. This modifier is the primary method by which an object encapsulates itself and hides its data from the outside world. Hence, these act as secrets which we do not tell anybody.

Using Java Access Modifiers

Now, let us get a better understanding of how these modifiers are used and inherited in various classes.
Private access modifier

If a method or variable in a class is declared as private, then only the code inside the same class can access the variable or make a call to that method. Subclasses will not be able to access these methods and variables, nor can an external class do so. This is so because a subclass does not inherit the private members of the parent class. Top-level classes cannot be marked as private, as making a class private would mean that no other class could access it, which would mean that you cannot really use that class.

Let us take up an example:

// Parent class
class pClass {
    private int num1;

    void setdata(int n) {
        num1 = n;
    }

    int getdata() {
        return num1;
    }
}

// Subclass
class sub extends pClass {
    int num2;

    void prod() {
        int num = getdata();
        System.out.println("Product = " + (num2 * num));
    }
}

class privateEg {
    public static void main(String args[]) {
        sub obj = new sub();
        obj.setdata(15);
        obj.num2 = 10;
        obj.prod();
    }
}

Output:

Product = 150

In the above example, num1 is a private member of the class pClass. Only the members setdata() and getdata() in the base class can access this field directly by its name. However, it is inaccessible to members of the derived class. So, in order to access the private member in the prod() method of the subclass, we call the getdata() method of the base class, which has default (package-private) access and is therefore directly accessible to the subclass here, both classes being in the same package.

Protected Access Modifier

A protected variable or method in a public class can be accessed by the class's own members, by classes in the same package, and by the members of its subclasses. For subclasses, this holds true even if the subclass is not located in the same package as the super class.
Let us take up an example:

// Parent class
class pClass {
    protected int num1, num2;

    pClass(int a, int b) {
        num1 = a;
        num2 = b;
    }

    void show() {
        System.out.println("Number 1 = " + num1);
        System.out.println("Number 2 = " + num2);
    }
}

// Subclass
class sub extends pClass {
    private int ans;

    sub(int a, int b) {
        super(a, b);
    }

    void prod() {
        ans = num1 * num2;
    }

    void showprod() {
        System.out.println("Product = " + ans);
    }
}

class proEg {
    public static void main(String args[]) {
        sub obj = new sub(10, 8);
        obj.show();
        obj.prod();
        obj.showprod();
    }
}

Output:

Number 1 = 10
Number 2 = 8
Product = 80

Public Access Modifier

A public member is accessible from any other class. We use public when we want the variable or method to be accessible by the entire application. (If we do not specify any access modifier at all, the member gets package-private "default" access, not public access.)

Summary

Hence, we can conclude that the derived class inherits all the members and methods that are declared as protected or public. But if the members of the super class are declared as private, then the derived class cannot directly use them. The private members can be accessed only by members of the same class.
https://csveda.com/java/java-access-modifiers-access-inherited-variables-and-methods/
How do I exit a thread? How do I kill thr() in this example? Does it have something to do with the id arg passed?

def thr(id):
    from time import sleep
    i = 0
    while True:
        print(i)
        i += 1
        sleep(delay)

import _thread
_thread.start_new_thread(thr, (1, 0))

Maybe should I do this?

mythr = _thread.start_new_thread(thr, (1, 0))
mythr.exit()

paulkapil08:

Thanks for the clarification. I think the concepts @larry-hems mentions apply to Micropython, but the function calls might differ. If I understand correctly, all threads in micropython are daemon threads, as they continue running after command is returned to the REPL. Only after a machine.reset() or pressing the reset button will the thread be closed automatically.

@larry-hems are you referring to regular Python or Pycom's implementation of MicroPython? Because I couldn't find a daemon flag, or multiprocessing as an option with Pycom's uPy.

No daemon flag:
No multiprocessing module:

larry hems:

@BetterAuto said in How do I exit a thread?:

How do I kill thr() in this example?

It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases:

- the thread is holding a critical resource that must be closed properly
- the thread has created several other threads that must be killed as well.

If you REALLY need to use a Thread, there is no way to kill a thread directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as daemon. If you do NOT really need to have a Thread, what you can do instead of using the threading package is to use the multiprocessing package. Here, to kill a process, you can simply call the method:

yourProcess.terminate()

Python will kill your process (on Unix through the SIGTERM signal, while on Windows through the TerminateProcess() call).
Pay attention when using it together with a Queue or a Pipe! (It may corrupt the data in the Queue/Pipe.)

This code based on @Eric24's suggestion does work. Ugly, but functional. I had trouble with the code in my opening post, so I went back to the example in the docs, and that works.

import _thread
import time

kill = dict()

def th_func(delay, id):
    global kill
    while True:
        if id in kill and kill[id]:
            break
        time.sleep(delay)
        print('Running thread %d' % id)

for i in range(2):
    kill[i] = False
    _thread.start_new_thread(th_func, (i + 1, i))

kill[0] = True
kill[1] = True

And this fails: _thread.exit() does not work from outside the thread.

import _thread
import time

threads = dict()

def th_func(delay, id):
    while True:
        time.sleep(delay)
        print('Running thread %d' % id)

for i in range(2):
    threads[i] = _thread.start_new_thread(th_func, (i + 1, i))

threads[0].exit()
threads[1].exit()

>>> Running thread 0
Running thread 1
Running thread 0
Running thread 0
Running thread 1
Running thread 0
threads[0].exit()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'exit'
>>> threads[1].exit()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'exit'
>>> Running thread 0
Running thread 1
Running thread 0
>>>

+1, is it possible to kill the thread from outside?

Thanks, I'll try it soon. (Don't have access to my WiPy at the moment.)

@BetterAuto Hmmm. The _thread.exit() function in the docs isn't exactly clear on how it should be used, but since it has no params, my best guess is that it would be executed from inside the thread you want to kill. You might also try just breaking out of the while True loop.
Now, if the thread itself doesn't know it's supposed to die, you might have to implement an appropriate communications channel from the main program to the thread (could be as simple as a global "kill" variable), as there does not appear to be a _thread function that can kill a particular thread from outside that thread. It's a bit strange that this doesn't exist, as it's a common thing found in many threading implementations.
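For standard CPython (as opposed to MicroPython's bare _thread module), the cooperative "kill flag" idea described above is usually expressed with threading.Event rather than a shared global; this is a sketch of the same pattern, not Pycom-specific code:

```python
import threading
import time

def worker(stop_event, results):
    # Run until the main program signals us to stop.
    while not stop_event.is_set():
        results.append("tick")
        # wait() doubles as an interruptible sleep: it returns early
        # as soon as the event is set, instead of sleeping blindly.
        stop_event.wait(0.01)

stop = threading.Event()
out = []
t = threading.Thread(target=worker, args=(stop, out))
t.start()

time.sleep(0.05)   # let the worker run briefly
stop.set()         # ask the thread to exit its loop
t.join(timeout=1)  # wait for it to finish

print(t.is_alive())  # -> False
```

The thread still exits voluntarily, exactly as in the kill-dict version, but the event gives you a clean signal object and an interruptible sleep for free.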
https://forum.pycom.io/topic/1393/how-do-i-exit-a-thread
Battery Time

Hi. I have a SiPy connected to the expansion board v2 and a 500mAh battery. When this battery is fully charged, I can only ever get at most a few hours of run time on the whole device. I can't see how the device, with Bluetooth and WiFi deactivated, can possibly consume even 100mA. Any suggestions? The code is:

from network import WLAN
from machine import UART
from network import Bluetooth
import os

uart = UART(0, 115200)
os.dupterm(uart)

wlan = WLAN()
bluetooth = Bluetooth()
wlan.deinit()
bluetooth.deinit()

import pycom
pycom.heartbeat(False)

Is the serial management consuming power waiting for input?

Ok, great. Thanks.

jmarcelino:

It's one of the features being worked on; for now .deinit() only disables the software device, it doesn't power it off. Should be coming in a future update, as well as the power saving modes (deep sleep etc.) you need to operate on battery power.

What do you mean by it "doesn't turn it off yet"? I saw an SSID broadcast from the device before I applied that command, and then I didn't when I applied that command. The device goes through a lot of power in my opinion. I can't see how you can power this with a battery alone. Kind of defeats the purpose of an IoT device, really.

jmarcelino:

@bradnz WLAN.deinit() doesn't actually turn the peripheral off yet; it still consumes roughly the same.
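The "few hours" figure is consistent with back-of-envelope arithmetic. The current draws below are assumptions (typical published figures for ESP32-class modules, not measurements from a SiPy): with the radio stack still powered, around a hundred mA is normal, so a 500 mAh battery drains quickly unless real deep sleep is available.

```python
def runtime_hours(capacity_mah, draw_ma):
    """Ideal runtime, ignoring regulator losses and battery derating."""
    return capacity_mah / draw_ma

capacity = 500  # mAh, as in the post

# Assumed (not measured) current draws for an ESP32-class module:
active_ma = 120  # awake with the radio peripheral still powered
idle_ma = 40     # awake, radios disabled in software only

print(round(runtime_hours(capacity, active_ma), 1))  # -> 4.2
print(round(runtime_hours(capacity, idle_ma), 1))    # -> 12.5
```

Under these assumptions, 500 mAh / ~120 mA gives roughly four hours, matching the observed behaviour; only a hardware power-off or deep-sleep mode (sub-mA draw) would stretch the runtime to days or weeks.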
https://forum.pycom.io/topic/647/battery-time
SkeletonView

Features

- [x] Easy to use
- [x] All UIViews are skeletonables
- [x] Fully customizable
- [x] Universal (iPhone & iPad)
- [x] Interface Builder friendly
- [x] Simple Swift syntax
- [x] Lightweight readable codebase

Supported OS & SDK Versions

- iOS 9.0+
- tvOS 9.0+
- Swift 4

Example

To run the example project, clone the repo and run the SkeletonViewExample target.

Installation

Using CocoaPods

Edit your Podfile and specify the dependency:

pod "SkeletonView"

Using Carthage

Edit your Cartfile and specify the dependency:

github "Juanpe/SkeletonView"

How to use

Only 3 steps are needed to use SkeletonView:

1. Import SkeletonView in the proper place.

import SkeletonView

2. Now, set which views will be skeletonable. You can achieve this in two ways:

Using code:

avatarImageView.isSkeletonable = true

Using IB/Storyboards:

IMPORTANT! SkeletonView is recursive, so if you want to show the skeleton in all skeletonable views, you only need to call the show method on the main container view. For example, with UIViewControllers.

Collections

Now,

func collectionSkeletonView(_ skeletonView: UITableView, numberOfRowsInSection section: Int) -> Int
func collectionSkeletonView(_ skeletonView: UITableView, cellIdenfierForRowAt indexPath: IndexPath) -> ReusableCellIdentifier
}

As you can see, this protocol inherits from UITableViewDataSource, so you can replace this protocol with the skeleton protocol. This protocol has a default implementation:

func numSections(in collectionSkeletonView: UITableView) -> Int
// Default: 1

func collectionSkeletonView(_ skeletonView: UITableView, numberOfRowsInSection section: Int) -> Int
// Default:
// It calculates how many cells need to populate the whole tableview

There is only one method you need to implement to let Skeleton know the cell identifier.
This method doesn't have a default implementation:

func collectionSkeletonView(_ skeletonView: UITableView, cellIdenfierForRowAt indexPath: IndexPath) -> ReusableCellIdentifier

Example:

func collectionSkeletonView(_ skeletonView: UITableView, cellIdenfierForRowAt indexPath: IndexPath) -> ReusableCellIdentifier {
    return "CellIdentifier"
}

IMPORTANT! If you are using resizable cells (tableView.rowHeight = UITableViewAutomaticDimension), it's mandatory to define the estimatedRowHeight.

UICollectionView

For UICollectionView, you need to conform to the SkeletonCollectionViewDataSource protocol.

public protocol SkeletonCollectionViewDataSource: UICollectionViewDataSource {
    func numSections(in collectionSkeletonView: UICollectionView) -> Int
    func collectionSkeletonView(_ skeletonView: UICollectionView, numberOfItemsInSection section: Int) -> Int
    func collectionSkeletonView(_ skeletonView: UICollectionView, cellIdentifierForItemAt indexPath: IndexPath) -> ReusableCellIdentifier
}

The rest of the process is the same as UITableView.

Multiline text
https://iosexample.com/place-a-loading-view-to-show-users-that-something-is-going-on/
CC-MAIN-2022-33
en
refinedweb
clearerr, feof, ferror, fileno - check and reset stream status

SYNOPSIS

    #include <stdio.h>

    void clearerr(FILE *stream);
    int feof(FILE *stream);
    int ferror(FILE *stream);
    int fileno(FILE *stream);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

    fileno(): _POSIX_C_SOURCE

ERRORS

These functions should not fail and do not set the external variable errno. (However, in case fileno() detects that its argument is not a valid stream, it must return -1 and set errno to EBADF.)

ATTRIBUTES

For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO

The functions clearerr(), feof(), and ferror() conform to C89, C99, POSIX.1-2001, and POSIX.1-2008. The function fileno() conforms to POSIX.1-2001 and POSIX.1-2008.

SEE ALSO

open(2), fdopen(3), stdio(3), unlocked_stdio(3)

COLOPHON

This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://www.zanteres.com/manpages/ferror.3.html
Handling multiple buttons in HTML Form in Struts

By: Charles

The <html:submit> tag is used to submit the HTML form. The usage of the tag is as follows:

    <html:submit><bean:message key="button.save"/></html:submit>

This will generate HTML as follows:

    <input type="submit" value="Save Me">

This usually works fine if there is only one button with a "real" Form submission (the other one may be a Cancel button). Hence it suffices to process the request straight away in CustomerAction. However, you will frequently face situations where more than one or two buttons submit the form, and you want to execute different code based on the button clicked. If you are thinking, "No problem, I will have a different ActionMapping (and hence a different Action) for each button", you are out of luck! Clicking any of the buttons in an HTML Form always submits the same Form, with the same URL. The Form submission URL is found in the action attribute of the form tag, as in:

    <form name="CustomForm" action="/App1/submitCustomerForm.do"/>

and is unique to the Form. You have to use a variation of <html:submit>, as shown below, to tackle this problem:

    <html:submit property="step">
        <bean:message key="button.save"/>
    </html:submit>

The generated HTML submit button has a name associated with it. You now have to add a JavaBeans property to your ActionForm whose name matches the submit button name. In other words, an instance variable with a getter and setter is required. If you were to make this change in the application just developed, you would have to add a variable named "step". The listing below shows the modified execute() method from CustomerAction. When the Save Me button is pressed, the custForm.getStep() method returns a value of "Save Me" and the corresponding code block is executed.
    // CustomerAction modified for multiple button Forms
    public class CustomerAction extends Action {
        public ActionForward execute(ActionMapping mapping, ActionForm form,
                HttpServletRequest request, HttpServletResponse response)
                throws Exception {
            if (isCancelled(request)) {
                System.out.println("Cancel Operation Performed");
                return mapping.findForward("mainpage");
            }
            CustomerForm custForm = (CustomerForm) form;
            ActionForward forward = null;
            if ("Save Me".equals(custForm.getStep())) {
                System.out.println("Save Me Button Clicked");
                String firstName = custForm.getFirstName();
                String lastName = custForm.getLastName();
                System.out.println("Customer First name is " + firstName);
                System.out.println("Customer Last name is " + lastName);
                forward = mapping.findForward("success");
            }
            return forward;
        }
    }

Suppose the Form has a second button labeled "Spike Me". The submit button can still have the name "step" (the same as the "Save Me" button). This means the CustomerForm class has a single JavaBeans property, "step", for both submit buttons. In CustomerAction you can then check whether custForm.getStep() is "Save Me" or "Spike Me". If each of the buttons had a different name, like button1, button2, etc., then CustomerAction would have to perform checks as follows:

    if ("Save Me".equals(custForm.getButton1())) {
        // Save Me Button pressed
    } else if ("Spike Me".equals(custForm.getButton2())) {
        // Spike Me Button pressed
    }

There is a catch with this approach: if the button labels are localized, the submitted value might be, for instance, "Excepto Mí" instead of "Save Me". However, the CustomerAction class is still looking for the hard-coded "Save Me". Consequently the code block meant for the "Save Me" button never gets executed.
https://java-samples.com/showtutorial.php?tutorialid=577
psa_drv_se_t Struct Reference A structure containing pointers to all the entry points of a secure element driver. #include <crypto_se_driver.h> A structure containing pointers to all the entry points of a secure element driver. Future versions of this specification may add extra substructures at the end of this structure. Member Function Documentation ◆ MBEDTLS_PRIVATE() [1/3] The version of the driver HAL that this driver implements. This is a protection against loading driver binaries built against a different version of this specification. Use PSA_DRV_SE_HAL_VERSION. ◆ MBEDTLS_PRIVATE() [2/3] The size of the driver's persistent data in bytes. This can be 0 if the driver does not need persistent data. See the documentation of psa_drv_se_context_t::persistent_data for more information about why and how a driver can use persistent data. ◆ MBEDTLS_PRIVATE() [3/3] The driver initialization function. This function is called once during the initialization of the PSA Cryptography subsystem, before any other function of the driver is called. If this function returns a failure status, the driver will be unusable, at least until the next system reset. If this field is NULL, it is equivalent to a function that does nothing and returns PSA_SUCCESS.
https://docs.silabs.com/gecko-platform/latest/service/api/structpsa-drv-se-t
A Dart wrapper for the Discord API. Feel free to contribute! Documentation is available here, and examples can be found in the example directory. Features - Speedy (built upon the latest versions of the Dart SDK and web APIs) - Small dependency tree (only one direct dependency - w_transport, for both WebSocket and REST // five development dependencies) - Predictable - robust and well-defined type checks included with all models - Multiplatform - runs anywhere Dart VM (Web support soon) does Example A quick example of the client in action: import 'package:circumstellar/circumstellar.dart'; import 'package:w_transport/vm.dart' show vmTransportPlatform; // Dart for Web: import 'package:w_transport/browser.dart' show browserTransportPlatform; void main() { var client = new DiscordClient(new AuthSet('MY_TOKEN', TokenType.Bot), vmTransportPlatform /* dart for web: browserTransportPlatform */); client.dispatcher.messageCreate.listen((Message msg) async { print('Received message with ID ${msg.id} (Channel ID ${msg.channel.id})'); print('Content: ${msg.content}'); if (msg.content == '!ping') { await msg.channel.createMessage('Pong!'); } }); client.dispatcher.ready.listen((empty) { print('Ready'); }); client.start(); } Contributing Contributing guidelines: - Please use dartfmtand follow Dart's standard formatting rules for all your contributions - Please follow the naming conventions already laid out in the project - The tests haven't been fully written yet, so you can run bin/dev.dart(check file for the environment variables to set) to run a dev instance/test client of Circumstellar. You can also use the Shell script run_bot.sh. Features and bugs Please file feature requests and bugs at the issue tracker. Libraries - circumstellar - A Dart wrapper for the Discord API
https://pub.dartlang.org/documentation/circumstellar/latest/
Once you've decided on names, creating the child domains is easy. But first, you've got to decide whether or not to delegate it. It doesn't make sense to delegate a subdomain to an entity that doesn't manage its own hosts or networks. For example, in a large corporation, the personnel department probably doesn't run its own computers: the IT (Information Technology) department manages them. So while you may want to create a subdomain for personnel, delegating management for that subdomain to them is probably wasted effort. You can create a subdomain without delegating it, however. How? By creating resource records that refer to the subdomain within the parent's zone. Say one day a group of students approaches us, asking for a DNS entry for a web server for student home pages. The name they'd like is. You might think that we'd need to create a new zone, students.movie.edu, and delegate to it from the movie.edu zone. Well, that's one way to do it, but it's easier to create an A record for in the movie.edu zone. We find that few people realize this is perfectly legal. You don't need a new zone for each new level in the namespace. A new zone would make sense if the students were going to run students.movie.edu by themselves and wanted to administer their own name servers. But they just want one A record, so creating a whole new zone is more work than necessary. It's easy to add this record with the DNS console. First create a students.movie.edu subdomain in the movie.edu zone, then add the A record. To create the subdomain, right-click on the zone in the left pane and select New Domain. You'll see a window like the one shown in Figure 10-1. Enter the name of the new subdomain. You don't need to append movie.edu; the DNS console knows what you mean. You'll then see a folder icon for the new domain in the DNS console, as shown in Figure 10-2. To enter the A record, just select the students folder and follow the procedures described previously to add a new host.
In fact, you can even skip the Add Domain step and use the Add Host (A) function to add the host's address record and implicitly create the students.movie.edu subdomain (if it hasn't already been created). Just specify in the Name field and voila! You've created an address for. Now users can access to get to the students' home pages. We could make this setup especially convenient for students by adding students.movie.edu to their PCs' or workstations' search lists; they'd need to type only www as the URL to get to the right host. Did you notice there's no SOA record for students.movie.edu? There's no need for one since the movie.edu SOA record indicates the start of authority for the entire movie.edu zone. Since there's no delegation to students.movie.edu, it's part of the movie.edu zone. If you decide to delegate your subdomains (to send your children out into the world, as it were), you'll need to do things a little differently. We're in the process of doing it now, so you can follow along with us. We need to create a new subdomain of movie.edu for our special-effects lab. We've chosen the name fx.movie.edu: short, recognizable, unambiguous. Because we're delegating fx.movie.edu to administrators in the lab, it'll be a separate zone. The hosts bladerunner and outland, both within the special-effects lab, will serve as the zone's name servers (bladerunner will serve as the primary master). We've chosen to run two name servers for the zone for redundancy; a single fx.movie.edu name server would be a single point of failure that could effectively isolate the entire special-effects lab. Since there aren't many hosts in the lab, though, two name servers should be enough. The special-effects lab is on movie.edu's new 192.253.254/24 network. Here are the partial contents of HOSTS. First, we make sure the Microsoft DNS Server is installed on the new server, bladerunner.
Then we create the new zone fx.movie.edu on bladerunner using the process described in the section "Creating a New Zone" in Chapter 4. We also create the corresponding in-addr.arpa zone, 254.253.192.in-addr.arpa. Next, we populate the zone with all the hosts from our snippet of HOSTS, making sure the DNS console automatically adds the PTR records that correspond to our A records. We then add MX records for all of our hosts, pointing to starwars.fx.movie.edu and wormhole.movie.edu, at preferences 10 and 100, respectively. The zone datafile we end up with, called fx.movie.edu.dns, looks like this:

;
;  Database file fx.movie.edu.dns for fx.movie.edu zone.
;      Zone version: 22
;

@   IN  SOA bladerunner.fx.movie.edu. administrator.fx.movie.edu. (
        22     ; serial number
        900    ; refresh
        600    ; retry
        86400  ; expire
        3600 ) ; default TTL

;
;  Zone NS records
;

@   NS  bladerunner.fx.movie.edu.
@   NS  outland.fx.movie.edu.

;
;  Zone records
;

@            MX  100 wormhole.movie.edu.
@            MX  10  starwars.fx.movie.edu.
bladerunner  A   192.253.254.2
             MX  100 wormhole.movie.edu.
             MX  10  starwars.fx.movie.edu.
empire       A   192.253.254.5
             MX  100 wormhole.movie.edu.
             MX  10  starwars.fx.movie.edu.
jedi         A   192.253.254.6
             MX  10  starwars.fx.movie.edu.
             MX  100 wormhole.movie.edu.
outland      A   192.253.254.3
             MX  100 wormhole.movie.edu.
             MX  10  starwars.fx.movie.edu.
starwars     A   192.253.254.4
             MX  100 wormhole.movie.edu.
             MX  10  starwars.fx.movie.edu.

Note that we added an NS record for outland.fx.movie.edu even though we didn't strictly need to: the DNS console would have added it for us when we added outland as a secondary. But adding the NS record lets us restrict zone transfers to name servers listed in NS records and still set up the secondary on outland. We'll do this for the reverse-mapping zone, too. The 254.253.192.in-addr.arpa.dns file ends up looking like this:

;
;  Database file 254.253.192.in-addr.arpa.dns for 254.253.192.in-addr.arpa zone.
;      Zone version: 14
;

@   IN  SOA bladerunner.fx.movie.edu. administrator.fx.movie.edu.
( 14 ; serial number 900 ; refresh 600 ; retry 86400 ; expire 3600 ) ; default TTL ; ; Zone NS records ; @ NS bladerunner.fx.movie.edu. bladerunner.fx.movie.edu. A 192.253.254.2 @ NS outland.fx.movie.edu. outland.fx.movie.edu. A 192.253.254.3 ; ; Zone records ; 1 PTR movie-gw.movie.edu. 2 PTR bladerunner.fx.movie.edu. 3 PTR outland.fx.movie.edu. 4 PTR starwars.fx.movie.edu. 5 PTR empire.fx.movie.edu. 6 PTR jedi.fx.movie.edu. Notice that the PTR record for 1.254.253.192.in-addr.arpa points to movie-gw.movie.edu. That's intentional. The router connects to the other movie.edu networks, so it really doesn't belong in fx.movie.edu. There's no requirement that all the PTR records in 254.253.192.in-addr.arpa map into a single zone, although they should correspond to the canonical names for those hosts. Now we need to configure bladerunner's resolver. Following the directions in Chapter 6, we configure bladerunner to send queries to its own IP address. Then we set bladerunner's domain to fx.movie.edu. Now we'll use nslookup to look up a few hosts in fx.movie.edu and in 254.253.192.in-addr.arpa: C:\> nslookup Default Server: bladerunner.fx.movie.edu Address: 192.253.254.2 > jedi Server: bladerunner.fx.movie.edu Address: 192.253.254.2 Name: jedi.fx.movie.edu Address: 192.253.254.6 > set type=mx > empire Server: bladerunner.fx.movie.edu Address: 192.253.254.2 empire.fx.movie.edu MX preference = 10, mail exchanger = starwars.fx.movie.edu empire.fx.movie.edu MX preference = 100, mail exchanger = wormhole.movie.edu starwars.fx.movie.edu internet address = 192.253.254.4 > ls fx.movie.edu [bladerunner.fx.movie.edu] fx.movie.edu. NS server = bladerunner.fx.movie.edu fx.movie.edu. 
NS server = outland.fx.movie.edu bladerunner A 192.253.254.2 empire A 192.253.254.5 jedi A 192.253.254.6 outland A 192.253.254.3 starwars A 192.253.254.4 > set type=ptr > 192.253.254.3 Server: bladerunner.fx.movie.edu Address: 192.253.254.2 3.254.253.192.in-addr.arpa name = outland.fx.movie.edu > ls 254.253.192.in-addr.arpa [bladerunner.fx.movie.edu] 254.253.192.in-addr.arpa. NS server = bladerunner.fx.movie.edu 254.253.192.in-addr.arpa. NS server = outland.fx.movie.edu 1 PTR host = movie-gw.movie.edu 2 PTR host = bladerunner.fx.movie.edu 3 PTR host = outland.fx.movie.edu 4 PTR host = starwars.fx.movie.edu 5 PTR host = empire.fx.movie.edu 6 PTR host = jedi.fx.movie.edu > exit The output looks reasonable, so it's safe to set up a secondary name server for fx.movie.edu and then delegate fx.movie.edu from movie.edu. Setting up the secondary name server for fx.movie.edu is simple: use the DNS console to add outland as a new server, then add two secondary zones, according to the instructions in Chapter 4. Like bladerunner, outland's resolver will point to the local name server, and we'll configure the local domain to be fx.movie.edu. All that's left now is to delegate the fx.movie.edu subdomain to the new fx.movie.edu name servers on bladerunner and outland. Right-click on the parent domain, movie.edu, in the left pane and choose New Delegation, which starts the New Delegation Wizard. Click Next in the welcome screen to display a screen like the one shown in Figure 10-3. The first step is entering the name of the delegated subdomain, which we've done. Click Next and you'll be presented with a window to choose the name servers to host (i.e., be authoritative for) the delegated zone. Our two servers are bladerunner.fx.movie.edu and outland.fx.movie.edu, so we enter the appropriate information by clicking Add (we have to run through the add process twice, once for each name server), resulting in a window like Figure 10-4. 
The final window of the wizard is just for confirmation, so we won't bother to show it. Click Finish and you've delegated a zone. The DNS console adds a special gray icon for delegated zones; if you select this icon, you'll see the NS records added by the wizard. These records perform the actual delegation. A sample DNS console view showing the fx.movie.edu delegation appears in Figure 10-5. According to RFC 1034, the domain names in the resource record-specific portion (the "right side") of the bladerunner.fx.movie.edu and outland.fx.movie.edu NS records must be the canonical domain names for the name servers. A remote name server following delegation expects to find one or more address records attached to that domain name, not an alias (CNAME) record. Actually, the RFC extends this restriction to any type of resource record that includes a domain name as its value: all must specify the canonical domain name. These two records alone aren't enough, though. Do you see the problem? How can a name server outside of fx.movie.edu look up information within fx.movie.edu? Well, a movie.edu name server would refer it to the name servers authoritative for fx.movie.edu, right? That's true, but the NS records in movie.edu give only the names of the fx.movie.edu name servers. The foreign name server needs the IP addresses of the fx.movie.edu name servers in order to send queries to them. Who can give it those addresses? Only the fx.movie.edu name servers. A real chicken-and-egg problem! The solution is to include the addresses of the fx.movie.edu name servers in movie.edu. Although these aren't strictly part of the movie.edu zone, delegation to fx.movie.edu won't work without them. Of course, if the name servers for fx.movie.edu weren't within fx.movie.edu, these addresses (called glue records) wouldn't be necessary. A foreign name server would be able to find the address it needed by querying other name servers.
We don't have to worry about adding these records, though: the New Delegation Wizard takes care of it for us. Also, remember to keep the glue up-to-date. If bladerunner gets a new network interface, and hence another IP address, you'll need to update the glue data. The DNS console doesn't let you edit the glue records directly, though. You have to use the name server modification window. With the DNS console showing a view like Figure 10-5, double-click on an NS record in the right pane to produce a window like the one shown in Figure 10-6. If fx.movie.edu's delegation changes (i.e., a name server gets added or deleted, or a name server's IP address changes), use the Add, Edit, or Remove buttons to make the appropriate changes. We might also want to include aliases for any hosts moving into fx.movie.edu from movie.edu. For example, if we move plan9.movie.edu, a server with an important library of public-domain special-effects algorithms, into fx.movie.edu, we should create an alias under movie.edu pointing the old domain name to the new one. In the zone datafile, the record would look like this:

    plan9  IN  CNAME  plan9.fx.movie.edu.

This will allow people outside of movie.edu to reach plan9 even though they're using its old domain name, plan9.movie.edu. Don't get confused about the zone in which this alias belongs. The plan9 alias record is actually in the movie.edu zone, so it belongs in the file movie.edu.dns. An alias pointing p9.fx.movie.edu to plan9.fx.movie.edu, on the other hand, is in the fx.movie.edu zone and belongs in fx.movie.edu.dns. We almost forgot to delegate the 254.253.192.in-addr.arpa zone! This is a little trickier than delegating fx.movie.edu because we don't manage the parent zone. First, we need to figure out what 254.253.192.in-addr.arpa's parent zone is and who runs it. Figuring this out may take some sleuthing; we covered how to do this in Chapter 3. As it turns out, the 192.in-addr.arpa zone is 254.253.192.in-addr.arpa's parent.
And, if you think about it, that makes sense. There's no reason for the administrators of 192.in-addr.arpa to delegate 253.192.in-addr.arpa to a separate authority because, unless 192.253/16 is all one big CIDR block, networks like 192.253.253/24 and 192.253.254/24 don't have anything in common with each other. They may be managed by totally unrelated organizations. To find out who runs 192.in-addr.arpa, we can use nslookup or whois, as we demonstrated in Chapter 3. Here's how we'd use nslookup to find the administrator:

C:\> nslookup
Default Server:  bladerunner.fx.movie.edu
Address:  192.253.254.2

> set type=soa
> set norecurse
> 253.192.in-addr.arpa.
Server:  bladerunner.fx.movie.edu
Address:  192.253.254.2

192.in-addr.arpa
        primary name server = arrowroot.arin.net
        responsible mail addr = bind.arin.net
        serial  = 2003070219
        refresh = 1800 (30 mins)
        retry   = 900 (15 mins)
        expire  = 691200 (8 days)
        default TTL = 10800 (3 hours)
>

So ARIN is responsible for 192.in-addr.arpa. (Remember them from Chapter 3?) All that's left is for us to submit the form at to request registration of our reverse-mapping zone. If the special-effects lab gets big enough, it may make sense to put a movie.edu secondary somewhere on the 192.253.254/24 network. That way, a larger proportion of DNS queries from fx.movie.edu hosts can be answered locally. It seems logical to make one of the existing fx.movie.edu name servers into a movie.edu secondary, too; that way, we can make better use of an existing name server instead of setting up a brand-new name server. We've decided to make bladerunner a secondary for movie.edu. This won't interfere with bladerunner's primary mission as the primary master name server for fx.movie.edu. A single name server, given enough memory, can be authoritative for literally thousands of zones.
One name server can load some zones as a primary master and others as a secondary.[1] The configuration change is simple: we use the DNS console to add a secondary zone to bladerunner and tell bladerunner to get the movie.edu zone data from terminator's IP address, per the instructions in Chapter 4.

[1] Clearly, though, a name server can't be both the primary master and a secondary for a single zone. The name server gets the data for a given zone either from a local zone datafile (and is a primary master for the zone) or from another name server (and is a secondary for the zone).
http://etutorials.org/Server+Administration/dns+windows+server/Chapter+10.+Parenting/10.4+How+to+Become+a+Parent+Creating+Subdomains/
Hey guys... I had a little problem while trying to make this little program that would find the mean of 4 integers. Here's what I have so far:

#include <iomanip>
#include <iostream>
#include <cmath>
using namespace std;

int main(void)
{
    int X1;
    int X2;
    int X3;
    int X4;
    int X;
    cout << "Enter The First Integer Value:";
    cin >> X1;
    cout << "Enter The Second Integer Value:";
    cin >> X2;
    cout << "Enter The Third Integer Value:";
    cin >> X3;
    cout << "Enter The Fourth Integer Value:";
    cin >> X4;
    X = (X1 + X2 + X3 + X4) / 4.0;
    cin.get(); cin.get();
    return 0;
}

I'm asked to enter the 4 integers, but then the screen terminates with no results. Can anyone help me find what I'm doing wrong? Thanks.
https://www.daniweb.com/programming/software-development/threads/385991/help-with-with-some-coding
Title: Indian River citrus packinghouses and the southward movement of production
Series Title: Economic information report 180
Physical Description: iii leaves, 15 p. : map ; 28 cm.
Language: English
Creators: Kilmer, Richard L.; Spreen, Thomas H.; University of Florida, Food and Resource Economics Dept.
Publisher: Food and Resource Economics Dept., Agricultural Experiment Stations, Institute of Food and Agricultural Sciences, University of Florida
Place of Publication: Gainesville, Fla.
Publication and Copyright Date: 1983 ("April 1983"; cover title)
Subjects: Citrus fruit industry -- Florida -- Indian River County; Citrus fruits -- Florida -- Indian River County; Orange industry -- Florida -- Indian River County
Bibliography: p. 15
Statement of Responsibility: Richard L. Kilmer, Thomas H. Spreen
Record Information: Bibliographic ID UF00026491; Volume ID VID00001; notis AHF9024; alephbibnum 001545504; oclc 2078401; Source Institution: University of Florida; Rights: All rights reserved by the source institution and holding location.

Richard L. Kilmer and Thomas H. Spreen
Economic Information Report 180

Indian River Citrus Packinghouses and the Southward Movement of Production

University of Florida
Food and Resource Economics Department
Agricultural Experiment Stations
Institute of Food and Agricultural Sciences
University of Florida, Gainesville 32611
April 1983

ABSTRACT

Key words: Grapefruit, Indian River, oranges, packinghouses, plant location.

ACKNOWLEDGEMENTS

We wish to express our appreciation to Mr. George Kmetz, formerly with the Indian River County Appraiser's office, for his assistance in determining the building costs for Indian River packinghouses, and to the Florida Department of Citrus for financial assistance.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
INTRODUCTION
PROBLEM STATEMENT
    Overview of the Study
DATA FOR MODEL
    Supply and Demand
    Assembly and Distribution Costs
    Packing Costs
    Other Assumptions
RESULTS
SUMMARY AND CONCLUSIONS
REFERENCES

LIST OF TABLES

1. Projected production of oranges and grapefruit in the Indian River marketing district, 1979-80 and 1983-84 seasons
2. Projected disposition of Indian River fresh citrus shipments, 1979-80 season
3. Estimated fresh citrus truck hauling costs per 1-3/5 bushel, 1979-80 season
4. Estimated variable and fixed costs per 1-3/5 bushel box, 1979-80 season
5. Estimated land, packinghouse, equipment, and working capital cost in the Indian River marketing district, 1980 dollars
6. Static and dynamic solutions to the packinghouse location problem, 1979-80 through 1983-84
7. Packinghouse size configuration for the best dynamic solution

LIST OF FIGURES

1. Indian River district grapefruit and orange production, packinghouses and ports for fresh fruit
2. Indian River marketing district projected production and packing of oranges and grapefruit, 1979-80 through 1983-84

INDIAN RIVER CITRUS PACKINGHOUSES AND THE SOUTHWARD MOVEMENT OF PRODUCTION

Richard L. Kilmer and Thomas H. Spreen

INTRODUCTION

The Indian River area is a marketing order district on the east coast of Florida (Figure 1). Nearly two-thirds of its western border is separated from the Interior marketing district by swampland that contains little or no citrus. In the past 15 years, new plantings have been concentrated in the southern half of the district. The projected growth in grapefruit and orange production from 1979-80 to 1983-84 is 12.3 percentage points greater in the southern production area (Figure 1) than in the northern area (26.9 and 14.6 percent, estimated from Florida Crop and Livestock Reporting Service, 1980, and Fairchild). Existing packinghouses are located near older groves. As more citrus is grown farther south, transportation cost increases will occur unless new packinghouses open near the new production areas.
PROBLEM STATEMENT

The problem examined in this study is the impact of the southward movement of citrus production (Figure 2) in the Indian River marketing district on the size, number, and location of citrus packinghouses.

RICHARD L. KILMER and THOMAS H. SPREEN are assistant professor and associate professor of food and resource economics.

[Figure 1. Indian River district grapefruit and orange production, packinghouses and ports for fresh fruit. The map divides the district into North and South subdistricts and marks existing packinghouses (E), potential locations of new packinghouses (N), and ports (P) at Jacksonville, Titusville, Port Canaveral, Cocoa, Melbourne, Vero Beach, Ft. Pierce, Stuart, Jupiter, Port Everglades, and Tampa.]

[Figure 2. Indian River marketing district projected production and packing of oranges and grapefruit, 1979-80 through 1983-84. Curves show South district production, North district production, South district packing, and North district packing, in thousands of 1-3/5 bushel boxes.]

Overview of the Study

The analytical approach of this study is to identify a number of supply points representing groups of groves and a number of demand points or "destinations". In this study, demand points are regions of the U.S. and Canada, and five possible ports of export (see Figure 1). In 1979, there were 35 existing plants in four locations. These plants are divided into two groups designated as small (under 500,000 1-3/5 bushel boxes) and large (over 500,000 boxes). Only large new plants are considered, and they are allowed to open at the four existing locations and at three new locations (see Figure 1).
Using estimates of the cost of shipping fruit from the supply points to the packing plants (the assembly problem), the cost of shipping fruit from the plants to the demand points (the distribution problem), and the cost of packing the fruit at the packing plants, the best configuration (size, number, and location) of the plants is determined as that configuration which allows assembly, packing, and distribution of the fruit at least cost.

The optimal configuration for a particular crop year can be determined via a mixed integer programming model. By using the computer to evaluate the total assembly, packing, and distribution cost associated with each feasible configuration,¹ the least cost configuration is determined. The mixed integer programming model gives the optimal configuration for a given crop year, but does not indicate how the industry can best adjust from the existing configuration to another one. This problem is not trivial, since there are costs associated with opening new plants and closing old plants, called transition costs. To find the optimal path of adjustment from the existing configuration to a new configuration, a dynamic programming model is used.

A mixed integer programming model determines the best plant configuration for a particular crop year. This solution is then excluded and the model is run again to find the second best configuration. The process is repeated until several solutions are found. In this study, the crop years 1979-80 through 1983-84 are each analyzed in this manner. Using dynamic programming, the optimal path is found, beginning with the existing configuration and running through the 1983-84 crop year, which minimizes the sum of assembly, packing, and distribution costs over these years plus the transition costs incurred as new plants open and old plants close.

¹A feasible configuration is one in which the plants have sufficient capacity to pack all of the fruit available.
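The two-stage procedure described above (rank candidate static configurations for each season, then search for the least-cost path through them, charging a transition cost whenever the configuration changes) can be sketched as a small dynamic program. All costs below are made-up illustrative numbers, not figures from this report.

```python
# Sketch of the adjustment-path search: each season offers candidate plant
# configurations with an operating cost (assembly + packing + distribution),
# plus a one-time transition cost when the configuration changes.

def best_path(start, seasons, transition):
    """seasons: list of {config: operating_cost} dicts, one per crop year.
    Returns (total_cost, [config chosen each year])."""
    dp = {start: (0.0, [])}   # config held entering the year -> best (cost, path)
    for costs in seasons:
        ndp = {}
        for cfg, oc in costs.items():
            ndp[cfg] = min(
                (tot + transition(prev, cfg) + oc, path + [cfg])
                for prev, (tot, path) in dp.items()
            )
        dp = ndp
    return min(dp.values())

# Two hypothetical configurations over two seasons: consolidating costs a
# one-time transition of 4 but saves 3 per season in operating cost.
seasons = [{"keep": 10, "consolidate": 7},
           {"keep": 10, "consolidate": 7}]
trans = lambda a, b: 0 if a == b else 4

total, path = best_path("keep", seasons, trans)
print(total, path)   # 18.0 ['consolidate', 'consolidate']
```

With a long enough horizon, the one-time transition cost is repaid by the per-season operating savings, which is exactly the trade-off the dynamic model weighs.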
For a technical description and justification of the particular methodology used, see Kilmer, Spreen, and Tilley. The remainder of this report documents the data used in the analysis and reports the results.

DATA FOR MODEL

Supply and Demand

Oranges and grapefruit represented 97 percent of the citrus packed in the Indian River marketing district during the 1979-80 marketing season (Florida Department of Agriculture and Consumer Services, 1980, p. 37). In order to project the future production of oranges and grapefruit by supply area, tree data by age and variety (Florida Crop and Livestock Reporting Service, 1980) are combined with yield information by tree age and variety (Fairchild, 1977, pp. 24-32) (Table 1). The varieties are early and midseason oranges, 'Valencia' oranges, 'Temple' oranges, seedy grapefruit, white seedless grapefruit, and pink seedless grapefruit. The Indian River marketing district shipped 6.8 and 67.1 percent of the oranges and grapefruit harvested to packinghouses in 1979-80 (calculated from the Florida Crop and Livestock Reporting Service, 1981, p. 28, and Florida Department of Agriculture and Consumer Services, 1980, p. 37). Even though oranges and grapefruit are brought to a packinghouse, only 65.6 and 76.1 percent of the deliveries were actually packed during the 1979-80 season (Hooks and Kilmer, 1981a, p. 4). The remainder was shipped to processing plants.

Table 1.--Projected production of oranges and grapefruit in the Indian River marketing district, 1979-80 and 1983-84 seasons

                        1979-80                    1983-84
  Location      Oranges      Grapefruit    Oranges      Grapefruit
                ----------------- 1-3/5 bushel box -----------------
  North(a)      8,699,047     3,529,207     9,957,905     4,055,227
  South        24,022,998    19,756,592    29,739,736    25,824,685

  (a) See Figure 1 for location.
Total 1-3/5 bushel boxes packed in the Indian River marketing district are projected for the 1979-80 through the 1983-84 marketing seasons (Figure 2), after considering tree age, variety, yield, and the percentage of citrus taken to the packinghouse which was actually packed. The projected oranges and grapefruit packed are either exported (1.7 and 40 percent) or shipped intra- and interstate (98.3 and 60 percent -- Florida Department of Agriculture and Consumer Services, 1980, pp. 33-34). North America is divided into five demand areas with central points for distribution at New York City, Atlanta, Chicago, Los Angeles, and Toronto, Canada (Table 2). Each region is assumed to maintain its 1979-80 market share for oranges and grapefruit through 1983-84 (Florida Department of Citrus, 1980) (Table 2). Fresh citrus is exported through Ft. Pierce, Jacksonville, Port Canaveral, Port Everglades, and Tampa, all in Florida (Table 2). The 1979-80 market share (Table 2) for each port is assumed to remain unchanged through the 1983-84 marketing season (Florida Department of Agriculture and Consumer Services, 1980, p. 35).

Assembly and Distribution Costs

The distribution costs (Table 3) from packinghouses in the Indian River district to the five North American cities identified above are determined by averaging actual quoted rates for oranges and grapefruit from November 1979 through May 1980 (U.S. Federal-State Market News Service). The distribution cost per 1-3/5 bushel from the packinghouses to the ports is equal to $.2049 plus $.0041 times one-way distance in miles (Machado, 1978, p. 100, updated to 1979-80 dollars). The cost of hauling the oranges and grapefruit from the citrus groves to the packinghouses, and the cost of hauling eliminations from the packinghouse to a processing plant, is $.00727 per 1-3/5 bushel mile (calculated from Hooks and Kilmer, 1981b, p. 7).
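The packinghouse-to-port cost function above is linear in one-way distance. As a quick illustration (the distances here are hypothetical, not from the report):

```python
# Distribution cost per 1-3/5 bushel box from packinghouse to port,
# in 1979-80 dollars: $0.2049 fixed plus $0.0041 per one-way mile.
def port_haul_cost(miles):
    return 0.2049 + 0.0041 * miles

# Illustrative one-way distances only.
for miles in (25, 50, 100):
    print(miles, round(port_haul_cost(miles), 4))   # 0.3074, 0.4099, 0.6149
```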
Table 2.--Projected disposition of Indian River fresh citrus shipments, 1979-80 season

  Location             Oranges             Grapefruit
                   ------------ 1-3/5 bushel box ------------
  Domestic regions
    Atlanta           428,255 ( 30%)       891,838 ( 13%)
    Chicago           265,607 ( 19%)     1,716,664 ( 24%)
    Los Angeles       139,943 ( 10%)       655,156 (  9%)
    New York          474,522 ( 33%)     3,011,292 ( 42%)
    Toronto           119,666 (  8%)       854,054 ( 12%)
    Subtotal        1,427,993 (100%)     7,129,004 (100%)
  Port of exit
    Ft. Pierce          7,049 ( 28%)     1,334,315 ( 28%)
    Jacksonville        1,664 (  7%)       315,020 (  7%)
    Port Canaveral      3,535 ( 14%)       669,061 ( 14%)
    Port Everglades     3,575 ( 14%)       676,675 ( 14%)
    Tampa               9,317 ( 37%)     1,763,542 ( 37%)
    Subtotal           25,140 (100%)     4,758,613 (100%)
  TOTAL             1,453,133           11,887,617

Table 3.--Estimated fresh citrus truck hauling costs per 1-3/5 bushel, 1979-80 season

  Destination     Oranges    Grapefruit
  Atlanta          $1.09       $1.01
  Chicago           2.72        2.67
  Los Angeles       4.88        4.58
  New York          2.72        2.67
  Toronto(a)        3.24        3.18

  (a) Toronto was estimated by taking the rate to New York times 1.19 to account for the extra distance to Toronto.
  Source: U.S. Federal-State Market News Service.

Packing Costs

Existing packinghouse capacities over time are assumed to be the 1979-80 volume packed plus 20 percent² (Florida Department of Agriculture and Consumer Services, 1980, pp. 18-24). Existing plants were categorized as small (100,000 to 500,000 1-3/5 bushel boxes annually) or large (500,001 to 850,000). All new plants are assumed to be large plants. The variable costs for existing and new packinghouses include labor (less the 30 percent of foreman labor that is assumed fixed), direct operating expenses less repairs and maintenance, 30 percent of the administration expense, and 50 percent of the sales expense (Table 4). Fixed costs for existing plants are composed of overhead and

²Packinghouse capacity figures are not available; therefore annual volume packed was used. Kilmer and Tilley found that Florida packinghouses operate at an 11-month average of 50 percent of capacity.
Capacity utilization for some individual plants will be greater than 50 percent. Thus, the potential individual packinghouse capacity is assumed to be 20 percent greater than the volume packed by each packinghouse in 1979-80.

investment servicing cost (debt servicing plus net return on investment). Overhead includes repairs and maintenance, insurance, taxes and licenses, 30 percent of foreman labor, 70 percent of administrative expense, and 50 percent of sales expense (Table 4). Investment servicing cost is $.125 per 1-3/5 bushel (calculated from Hooks and Kilmer, 1981a, and Florida Department of Agriculture and Consumer Services, 1980, p. 37).³

Table 4.--Estimated variable and fixed costs per 1-3/5 bushel box, 1979-80 season

                                   Packinghouse
  Cost                         Small(a)     Large(a)
  Variable
    Materials                  $1.068       $ .975
    Labor (.70)                  .900         .743
    Direct operating             .104         .120
    Administrative (.30)         .074         .052
    Sales (.50)                  .081         .118
    Total variable cost        $2.227       $2.008
  Fixed
    Labor (.30)                  .078         .062
    Repairs and maintenance      .251         .112
    Insurance                    .054         .028
    Taxes and licenses           .019         .024
    Administrative (.70)         .172         .123
    Sales (.50)                  .081         .118
    Total fixed cost           $ .655       $ .467

  (a) Small is 100,000 to 500,000 1-3/5 bushel box annual volume; large is 500,001 to 850,000 1-3/5 bushel box annual volume.
  Source: Packinghouse records.

³The $.125 figure is taken from accounting records and is labelled as depreciation and rent. Data on actual debt servicing and net return on investment are not available. Ideally, this information is needed from each packinghouse.

The same estimate of overhead for existing plants is used for new plants. Using data provided by Kmetz (1982), total estimated facility costs for a new large plant in 1980, including land, building, offices, and equipment, were $1.7 million (Table 5). It is assumed that a 20 percent downpayment of $340,000 would be required, with the remainder financed at 16 percent for 20 years. The annual debt servicing costs are $229,387.
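The $229,387 figure is consistent with the standard annuity formula for a fully amortizing loan; the short check below (plain Python, not part of the report) recomputes it from the stated loan terms.

```python
# Recompute the annual debt service on the financed portion of a new large
# plant: $1.7 million total cost, 20 percent ($340,000) down, with the
# remaining $1,360,000 amortized at 16 percent over 20 years.
principal = 1_700_000 - 340_000          # amount financed
r, n = 0.16, 20                          # annual rate, years

# Standard annuity payment: P * r / (1 - (1 + r)**-n)
payment = principal * r / (1 - (1 + r) ** -n)
print(round(payment))   # 229387 -- matches the report's debt servicing cost
```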
The downpayment, $340,000, represents net investment. Since all costs are in constant 1979 dollars, a real rate of return (nominal interest rate minus the inflation rate) on net investment of 3 percent is assumed. The downpayment is a fixed cost but can also be viewed as a transition cost, since it is a cost which is incurred only in the year the plant opens.

Table 5.--Estimated land, packinghouse, equipment, and working capital cost in the Indian River marketing district, 1980 dollars(a)

                                        Packinghouse
  Item                             Small               Large
  Land                          $   63,000          $  105,000
                                (6 acres)           (10 acres)
  Packinghouse building,
    metal, dock height          $  346,892          $  607,000
                                (28,571 sq.ft.)     (50,000 sq.ft.)
  Packinghouse equipment        $  230,053          $  314,053
  Fork lifts                    $   48,000          $   72,000
  Office building               $   63,839          $   85,120
                                (3,000 sq.ft.)      (4,000 sq.ft.)
  Office equipment              $   44,889          $   44,889
  Operating capital             $  210,500          $  421,000
  Total                         $1,007,173          $1,649,062

  (a) Each packinghouse has a central sizer, packer aids, no mechanical palletization, and no cold storage.
  Source: Kmetz, 1982; industry sources; packinghouse records.

Other Assumptions

Once a new plant is opened, it is not allowed to close. An existing plant which covers cash costs but not all investment servicing costs is closed after three years. If an existing plant is closed for less than three years, it can re-open at zero start-up cost. A look at past industry adjustments in the number of packinghouses actually in operation from one season to another reveals an industry able to make short-term adjustments in numbers. From the 1964-65 season to 1965-66, packinghouse numbers increased from 160 to 225 (State of Florida total -- Florida Department of Agriculture and Consumer Services). By the 1968-69 season, the number of packinghouses declined to 169. A similar decrease occurred from the 1969-70 season until 1971-72, when the number of packinghouses declined from 211 to 164.
RESULTS

The model includes oranges and grapefruit produced in 13 locations in the Indian River district of Florida, 35 existing packinghouses at four locations, potential opening of new packinghouses at three locations where no packinghouses currently exist (Figure 1), five consumption regions in the U.S. and Canada (Table 2), and five export points (see Figure 1 for the Florida locations).

The static mixed integer solutions for the 1979-80 through 1983-84 seasons are obtained from a mixed integer plant location model which contained small and large existing packinghouses and large new packinghouses. The costs associated with the best solutions are shown in Table 6. The costs have been discounted to 1979 using a 3 percent real discount rate (without inflation). The costs in 1983-84 are adjusted to reflect the present value of the cost of packing citrus from 1983-84 on indefinitely, assuming that configuration and supply and demand levels remain unchanged. Using estimated discounted transition costs (Kilmer, Spreen, and Tilley) and the static solutions from the mixed integer programming model, dynamic solutions to the packinghouse location problem are obtained; two such solutions are shown in Table 6. The solid underlined elements represent the least cost path over time. The dashed underlined elements represent the fourth least cost path over time.

Table 6.--Static and dynamic solutions to the packinghouse location problem, 1979-80 through 1983-84(a)

  Rank      Dynamic program             Static solutions for seasons
  ordered   solution            1979-80   1980-81   1981-82   1982-83   1983-84 (through infinity)(b)
            --------------------------------- thousand $ ---------------------------------
  1         2,548,660 (Best)     59,083    60,762    62,799    64,829    66,901 (2,296,922)
  2         2,568,282            59,083    60,782    62,807    64,832    66,911 (2,297,276)
            (Fourth best)
  3                              59,094    60,838    62,821    64,863    66,914 (2,297,396)
  4                              59,101    60,852    62,826    64,865    66,925 (2,297,750)
  5                              59,109    60,862    62,900    64,882    67,001 (2,300,387)
  6                              59,118    60,877    62,907    64,884    67,012 (2,300,741)
  7                              59,139    60,884    62,919    64,904    67,015 (2,300,831)
  8                              59,146    60,922    62,920    64,924    67,025 (2,301,186)
  9                              59,152    60,939    62,941    64,926    67,025 (2,301,186)
  10                             59,176    60,943              64,977    67,046 (2,301,896)
  Initial
  configuration 2,605,366        62,350    63,687    65,167    66,779    68,371 (2,347,383)
  Transition cost (Best)          2,903(c)    365       320       329       302
  Transition cost (Fourth best)   2,903(c)    365       320       329       302
  Transition cost (Initial conf.)     0(c)      0         0         0         0

  (a) All costs are in 1979 dollars.
  (b) Present value of collection, packing, and distribution cost from 1983-84 to infinity, assuming plant configuration, supply, and demand remain unchanged.
  (c) Transition cost to initial configuration.

The best solution in 1979-80 calls for the immediate closing of 24 existing plants (11 remain open) and building six large plants, for a total of 17 plants (Table 7). By the 1983-84 season, nine existing houses are still operating. One of the new packinghouses is located at Jupiter in the southern part of the region (Figure 1), where no existing packinghouses are located. By employing the dynamic solution for packinghouses instead of allowing the initial plant location and relative sizes to persist over time, the packinghouses in the Indian River marketing district could save $56,706,000 (1979 dollars), or 2.2 percent of the best dynamic solution.

Table 7.--Packinghouse size configuration for the best dynamic solution

                  Capacity          Initial              Packinghouse number for seasons
  Location        (1-3/5 bu. box,   configuration   1979-80   1980-81   1981-82   1982-83   1983-84
                  1,000s)
  North
    Titusville    100-500           2*
                  501-850                            1         1         1         1         1
    Cocoa         100-500           1*               1*        1*        1*        1*
                  501-850
    Melbourne(a)  501-850
  South
    Vero Beach    100-500           11*              1*
                  501-850           7*               7*,1(b)   7*,2      7*,3      7*,3      7*,4
    Ft. Pierce    100-500           12*
                  501-850           2*               2*,3      2*,3      2*,3      2*,4      2*,4
    Stuart(a)     501-850
    Jupiter(a)    501-850                            1         1         1         1         1

  (a) New location.
  (b) "7*,1" means seven existing plants operating and one new plant operating in that year.
During 1983-84 alone, total assembly, packing, and distribution costs could be reduced by $1,470,000, or $.086 per 1-3/5 bushel box (1979 dollars). Finally, most of the existing packinghouses close in the first season, 1979-80. This is not unusual and is entirely feasible (see Other Assumptions).

SUMMARY AND CONCLUSIONS

The southern movement of citrus production does suggest the need for construction of a new packinghouse in Jupiter, Florida, which is located in the southern part of the Indian River marketing district. Existing capacity could be reconfigured into larger packinghouses. Instead of building new plants in the same cities where old (existing) plants are located, the old packinghouses could be enlarged to take advantage of economies of size. In general, however, the Indian River packinghouse capacity is located where the production is located. Total collection, packing, and distribution costs could be reduced by only 2.2 percent if the industry closed all small packinghouses and maintained and built new packinghouses. Only the cost side of the packinghouse industry, however, is explored in this study. Small packinghouses that pack for a select market may be quite profitable. Also, a small packinghouse may have management that is just as cost efficient as a large packinghouse. Thus the southerly shift in citrus production will have a small effect on existing packinghouse size and location over the next decade; however, a new packinghouse is needed in the southern portion of the district.

REFERENCES

Fairchild, Gary F. 1977. Estimated Florida Orange, Temple, and Grapefruit Production, 1976-77 through 1981-82. Econ. Res. Dept., Fla. Dept. of Citrus, CIR No. 77-1.

Florida Crop and Livestock Reporting Service. 1981. Florida Agricultural Statistics: Citrus Summary.

Florida Crop and Livestock Reporting Service. 1980. Florida Agricultural Statistics: Commercial Citrus Inventory.

Florida Department of Agriculture and Consumer Services, Division of Fruit and Vegetable Inspection.
Various dates. Season Annual Report.

Florida Department of Citrus, Market Research Department. 1980. Annual Fresh Fruits Shipments Report.

Hooks, R. Clegg, and Richard L. Kilmer. 1981a. Estimated Costs of Packing and Selling Fresh Florida Citrus, 1979-80 Season. IFAS Econ. Info. Rpt. No. 145.

Hooks, R. Clegg, and Richard L. Kilmer. 1981b. Estimated Costs of Picking and Hauling Fresh Florida Citrus, 1979-80 Season. IFAS Econ. Info. Rpt. No. 151.

Kilmer, Richard L., and Daniel S. Tilley. 1979. "A Variance Component Approach to Industry Cost Analysis," Southern Journal of Agricultural Economics 11:35-40.

Kilmer, Richard L., Thomas H. Spreen, and Daniel S. Tilley. 1982. "A Dynamic Plant Location Model of the East Florida Fresh Citrus Packing Industry." Unpublished report, Food and Resource Economics Dept., Univ. of Fla.

Kmetz, George P. 1982. "The 1980 Cost to Build a New Packinghouse." Unpublished report, Indian River County Appraisers Office, Vero Beach, FL.

Machado, Virgilio A.P. 1978. "A Dynamic Mixed Integer Location Model Applied to Florida Citrus Packinghouses." Unpublished Ph.D. dissertation, Univ. of Fla.

Sweeney, Dennis J., and Ronald T. Tatham. 1976. "An Improved Long-Run Model for Multiple Warehouse Location," Management Science 22:748-758.

U.S. Federal-State Market News Service. Various dates. Fruit and Vegetable Truck Rate Report.
http://ufdc.ufl.edu/UF00026491/00001
2016-09-02 04:53 AM - edited 2016-09-02 04:54 AM

Hello,

I have OCUM 6.4P1. When I go into the details of a virtual machine and want to list the CIFS shares, the list is not complete: it only lists the root shares.

For example: SVM "svm1" has a volume named data_svm1, mounted in the namespace as /data_svm1. I make a directory svm1\data_svm1\test and create two shares: share1 pointing at svm1\data_svm1 and share2 pointing at svm1\data_svm1\test. Both shares are accessible, but only share1 is listed in OCUM. If I delete share1, share2 is still accessible but not listed in OCUM.

Is that normal? Is there a possibility to list all shares at level 1, level 2, and so on?

Thanks.

2016-09-02 07:12 AM

This is because you have mounted test under svm1\data_svm1. You already created a share on svm1\data_svm1, and you can access both svm1\data_svm1 and svm1\data_svm1\test from share1 itself. That is the reason it is showing a single share in your case... :-)

I am able to see all the shares in my OCUM, and I am also using 6.4P1. I have mounted all my volumes under root (/). Please find the attached screenshot for your reference.

2016-09-05 01:49 AM

Hi,

In your screenshot we see only first-level shares. If you create a folder under /vol_infra like /vol_infra/testlv1, share it as "testlv1", and delete the /vol_infra share so the user goes directly to /vol_infra/testlv1, then the share "testlv1" does not appear in OCUM.

Look at my screenshots: the Windows and System Manager sides show share "testlv1" with path /data_svm3/testlv1, while the OCUM side shows only the C$ share.
https://community.netapp.com/t5/OnCommand-Storage-Management-Software-Discussions/ocum-6-4P1-cifs-shares-list/td-p/122834
Guys, I don't want to get into religious wars re the benefits of encapsulation and the use of 'private', 'const' or whatever. I know that I can use 'name-mangling' via '__foo' from Python 1.5 to give me a limited form of privacy and I have seen allusions to extensions that enforce data hiding : There is little direct support for data hiding within Python itself, but extensions and embeddings of Python can provide rock solid interfaces that expose only permitted foreign operations to the Python interpreter. Python's restricted execution mode may also provide some (usually extreme) protection within the interpreter itself. I need to know if anyone has written said extensions, as I am unable to find anything at the Vaults of Parnassus archive. I suspect that the above quote may indicate that we need to write our own extensions to enforce data-hiding on a class-by-class basis, but I'm open to suggestions. The issue is *not* coding to prevent breaking modules by trampling one anothers namespaces, it is convincing management that Python is the right tool for the bulk of the project. Thanks, Arthur Arthur Watts Systems Integrator GBST Pty Ltd 'Good programmers know what to use, great programmers know what to re-use' : Old Jungle Saying
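For reference, the name mangling Arthur mentions is easy to demonstrate; the snippet below (an editor's illustration, not part of the original post) shows both the limited protection it provides and how trivially it can be bypassed, which is why it is privacy by convention rather than enforcement:

```python
# Name mangling: inside a class body, an identifier of the form __name
# (two leading underscores, at most one trailing underscore) is rewritten
# by the compiler to _ClassName__name.
class Account:
    def __init__(self, opening):
        self.__balance = opening      # stored as _Account__balance

    def balance(self):
        return self.__balance         # mangled the same way, so this works

acct = Account(100)
print(acct.balance())                 # 100

# The attribute is hidden under its plain name...
print(hasattr(acct, "__balance"))     # False
# ...but the mangled name is still reachable, so nothing is truly private:
print(acct._Account__balance)         # 100
```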
https://mail.python.org/pipermail/tutor/2000-July/001748.html
CExtLineEdit: input editor with extended functionality. More...

#include <CExtLineEdit.h>

Input editor with extended functionality. It is possible to add an icon to the edit field or to insert additional widgets (e.g. control buttons) into the view.

Definition at line 31 of file CExtLineEdit.h.

Definition at line 36 of file CExtLineEdit.h.

Construct a line edit with the given properties.

Add a widget to the line edit area according to alignmentFlags. Only Qt::AlignLeft and Qt::AlignRight are possible. Parameter: widget object.

Get the startup text.

Get the editor text.

Set the icon that will appear on the left side of the line edit.

Set the text that will be shown the first time the editor becomes visible.

© 2007-2017 Witold Gantzke and Kirill Lepskiy
http://ilena.org/TechnicalDocs/Acf/classiwidgets_1_1_c_ext_line_edit.html
Parametric polymorphism

Task: Write a small example for a type declaration that is parametric over another type, together with a short bit of code (and its type signature) that uses it. A good example is a container type, let's say a binary tree, together with some function that traverses the tree, say, a map-function that operates on every element of the tree.

This language feature only applies to statically-typed languages.

Ada

generic
   type Element_Type is private;
package Container is
   type Tree is tagged private;
   procedure Replace_All(The_Tree : in out Tree; New_Value : Element_Type);
private
   type Node;
   type Node_Access is access Node;
   type Tree is tagged record
      Value : Element_Type;
      Left  : Node_Access := null;
      Right : Node_Access := null;
   end record;
end Container;

package body Container is
   procedure Replace_All(The_Tree : in out Tree; New_Value : Element_Type) is
   begin
      The_Tree.Value := New_Value;
      if The_Tree.Left /= null then
         The_Tree.Left.all.Replace_All(New_Value);
      end if;
      if The_Tree.Right /= null then
         The_Tree.Right.all.Replace_All(New_Value);
      end if;
   end Replace_All;
end Container;

C

If the goal is to separate algorithms from types at compile time, C may do it by macros.
Here's sample code implementing binary tree with node creation and insertion: #include <stdio.h> #include <stdlib.h> #define decl_tree_type(T) \ typedef struct node_##T##_t node_##T##_t, *node_##T; \ struct node_##T##_t { node_##T left, right; T value; }; \ \ node_##T node_##T##_new(T v) { \ node_##T node = malloc(sizeof(node_##T##_t)); \ node->value = v; \ node->left = node->right = 0; \ return node; \ } \ node_##T node_##T##_insert(node_##T root, T v) { \ node_##T n = node_##T##_new(v); \ while (root) { \ if (root->value < n->value) \ if (!root->left) return root->left = n; \ else root = root->left; \ else \ if (!root->right) return root->right = n; \ else root = root->right; \ } \ return 0; \ } #define tree_node(T) node_##T #define node_insert(T, r, x) node_##T##_insert(r, x) #define node_new(T, x) node_##T##_new(x) decl_tree_type(double); decl_tree_type(int); int main() { int i; tree_node(double) root_d = node_new(double, (double)rand() / RAND_MAX); for (i = 0; i < 10000; i++) node_insert(double, root_d, (double)rand() / RAND_MAX); tree_node(int) root_i = node_new(int, rand()); for (i = 0; i < 10000; i++) node_insert(int, root_i, rand()); return 0; } Comments: It's ugly looking, but it gets the job done. It has the drawback that all methods need to be re-created for each tree data type used, but hey, C++ template does that, too. Arguably more interesting is run time polymorphism, which can't be trivially done; if you are confident in your coding skill, you could keep track of data types and method dispatch at run time yourself -- but then, you are probably too confident to not realize you might be better off using some higher level languages. 
C++

template<class T>
class tree
{
  T value;
  tree *left;
  tree *right;
public:
  void replace_all (T new_value);
};

For simplicity, we replace all values in the tree with a new value:

template<class T>
void tree<T>::replace_all (T new_value)
{
  value = new_value;
  if (left != NULL)
    left->replace_all (new_value);
  if (right != NULL)
    right->replace_all (new_value);
}

C#

namespace RosettaCode
{
    class BinaryTree<T>
    {
        public T value;
        public BinaryTree<T> left;
        public BinaryTree<T> right;

        public BinaryTree(T value)
        {
            this.value = value;
        }

        public BinaryTree<U> Map<U>(Func<T, U> f)
        {
            BinaryTree<U> tree = new BinaryTree<U>(f(this.value));
            if (left != null)
            {
                tree.left = left.Map(f);
            }
            if (right != null)
            {
                tree.right = right.Map(f);
            }
            return tree;
        }
    }
}

Sample that creates a tree to hold int values:

namespace RosettaCode
{
    class Program
    {
        static void Main(string[] args)
        {
            BinaryTree<int> b = new BinaryTree<int>(6);
            b.left = new BinaryTree<int>(5);
            b.right = new BinaryTree<int>(7);
            BinaryTree<double> b2 = b.Map(x => x * 10.0);
        }
    }
}

Ceylon

class BinaryTree<Data>(shared Data data,
        shared BinaryTree<Data>? left = null,
        shared BinaryTree<Data>? right = null) {

    shared BinaryTree<NewData> myMap<NewData>(NewData f(Data d)) =>
            BinaryTree {
                data = f(data);
                left = left?.myMap(f);
                right = right?.myMap(f);
            };
}

shared void run() {

    value tree1 = BinaryTree {
        data = 3;
        left = BinaryTree {
            data = 4;
        };
        right = BinaryTree {
            data = 5;
            left = BinaryTree {
                data = 6;
            };
        };
    };

    tree1.myMap(print);
    print("");
    value tree2 = tree1.myMap((x) => x * 333.33);
    tree2.myMap(print);
}

Clean

add1Everywhere :: (f a) -> (f a) | Functor f & Num a
add1Everywhere nums = fmap (\x = x + 1) nums

If we have a tree of integers, i.e. f is Tree and a is Integer, then the type of add1Everywhere is Tree Integer -> Tree Integer.

Common Lisp

Common Lisp is not statically typed, but types can be defined which are parameterized over other types.
In the following piece of code, a type pair is defined which accepts two (optional) type specifiers. An object is of type (pair :car car-type :cdr cdr-type) if an only if it is a cons whose car is of type car-type and whose cdr is of type cdr-type. (deftype pair (&key (car 't) (cdr 't)) `(cons ,car ,cdr)) Example > (typep (cons 1 2) '(pair :car number :cdr number)) T > (typep (cons 1 2) '(pair :car number :cdr character)) NIL D[edit] class ArrayTree(T, uint N) { T[N] data; typeof(this) left, right; this(T initValue) { this.data[] = initValue; } void tmap(const void delegate(ref typeof(data)) dg) { dg(this.data); if (left) left.tmap(dg); if (right) right.tmap(dg); } } void main() { // Demo code. import std.stdio; // Instantiate the template ArrayTree of three doubles. alias AT3 = ArrayTree!(double, 3); // Allocate the tree root. auto root = new AT3(1.00); // Add some nodes. root.left = new AT3(1.10); root.left.left = new AT3(1.11); root.left.right = new AT3(1.12); root.right = new AT3(1.20); root.right.left = new AT3(1.21); root.right.right = new AT3(1.22); // Now the tree has seven nodes. // Show the arrays of the whole tree. //root.tmap(x => writefln("%(%.2f %)", x)); root.tmap((ref x) => writefln("%(%.2f %)", x)); // Modify the arrays of the whole tree. //root.tmap((x){ x[] += 10; }); root.tmap((ref x){ x[] += 10; }); // Show the arrays of the whole tree again. 
writeln(); //root.tmap(x => writefln("%(%.2f %)", x)); root.tmap((ref x) => writefln("%(%.2f %)", x)); } - Output: 1.00 1.00 1.00 1.10 1.10 1.10 1.11 1.11 1.11 1.12 1.12 1.12 1.20 1.20 1.20 1.21 1.21 1.21 1.22 1.22 1.22 11.00 11.00 11.00 11.10 11.10 11.10 11.11 11.11 11.11 11.12 11.12 11.12 11.20 11.20 11.20 11.21 11.21 11.21 11.22 11.22 11.22 Dart[edit] class TreeNode<T> { T value; TreeNode<T> left; TreeNode<T> right; TreeNode(this.value); TreeNode map(T f(T t)) { var node = new TreeNode(f(value)); if(left != null) { node.left = left.map(f); } if(right != null) { node.right = right.map(f); } return node; } void forEach(void f(T t)) { f(value); if(left != null) { left.forEach(f); } if(right != null) { right.forEach(f); } } } void main() { TreeNode root = new TreeNode(1); root.left = new TreeNode(2); root.right = new TreeNode(3); root.left.right = new TreeNode(4); print('first tree'); root.forEach(print); var newRoot = root.map((t) => t * 222); print('second tree'); newRoot.forEach(print); } - Output: first tree 1 2 4 3 second tree 222 444 888 666 E[edit] While E itself does not do static (before evaluation) type checking, E does have guards which form a runtime type system, and has typed collections in the standard library. Here, we implement a typed tree, and a guard which accepts trees of a specific type. (Note: Like some other examples here, this is an incomplete program in that the tree provides no way to insert or delete nodes.) (Note: The guard definition is arguably messy boilerplate; future versions of E may provide a scheme where the interface expression can itself be used to describe parametricity, and message signatures using the type parameter, but this has not been implemented or fully designed yet. Currently, this example is more of “you can do it if you need to” than something worth doing for every data structure in your program.) 
interface TreeAny guards TreeStamp {} def Tree { to get(Value) { def Tree1 { to coerce(specimen, ejector) { def tree := TreeAny.coerce(specimen, ejector) if (tree.valueType() != Value) { throw.eject(ejector, "Tree value type mismatch") } return tree } } return Tree1 } } def makeTree(T, var value :T, left :nullOk[Tree[T]], right :nullOk[Tree[T]]) { def tree implements TreeStamp { to valueType() { return T } to map(f) { value := f(value) # the declaration of value causes this to be checked if (left != null) { left.map(f) } if (right != null) { right.map(f) } } } return tree } ? def t := makeTree(int, 0, null, null) # value: <tree> ? t :Tree[String] # problem: Tree value type mismatch ? t :Tree[Int] # problem: Failed: Undefined variable: Int ? t :Tree[int] # value: <tree> F#[edit] namespace RosettaCode type BinaryTree<'T> = | Element of 'T | Tree of 'T * BinaryTree<'T> * BinaryTree<'T> member this.Map(f) = match this with | Element(x) -> Element(f x) | Tree(x,left,right) -> Tree((f x), left.Map(f), right.Map(f)) We can test this binary tree like so: let t1 = Tree(2, Element(1), Tree(4,Element(3),Element(5)) ) let t2 = t1.Map(fun x -> x * 10) Fortran[edit] Fortran does not offer polymorphism by parameter type, which is to say, enables the same source code to be declared applicable for parameters of different types, so that a contained statement such as X = A + B*C would work for any combination of integer or floating-point or complex variables as actual parameters, since exactly that (source) code would be workable in every case. Further, there is no standardised pre-processor protocol whereby one could replicate such code to produce a separate subroutine or function specific to every combination. MODULE SORTSEARCH !Genuflect towards Prof. D. Knuth. INTERFACE FIND !Binary chop search, not indexed. MODULE PROCEDURE 1 FINDI4, !I: of integers. 2 FINDF4,FINDF8, !F: of numbers. 3 FINDTTI2,FINDTTI4 !T: of texts. 
END INTERFACE FIND CONTAINS INTEGER FUNCTION FINDI4(THIS,NUMB,N) !Binary chopper. Find i such that THIS = NUMB(i) USE ASSISTANCE !Only for the trace stuff. INTENT(IN) THIS,NUMB,N !Imply read-only, but definitely no need for any "copy-back". INTEGER*4 THIS,NUMB(1:*) !Where is THIS in array NUMB(1:N)? INTEGER N !The count. In other versions, it is supplied by the index. INTEGER L,R,P !Fingers. Chop away. L = 0 !Establish outer bounds. R = N + 1 !One before, and one after, the first and last. 1 P = (R - L)/2 !Probe point offset. Beware integer overflow with (L + R)/2. IF (P.LE.0) THEN !Aha! Nowhere! And THIS follows NUMB(L). FINDI4 = -L !Having -L rather than 0 (or other code) might be of interest. RETURN !Finished. END IF !So much for exhaustion. P = P + L !Convert from offset to probe point. IF (THIS - NUMB(P)) 3,4,2 !Compare to the probe point. 2 L = P !Shift the left bound up: THIS follows NUMB(P). GO TO 1 !Another chop. 3 R = P !Shift the right bound down: THIS precedes NUMB(P). GO TO 1 !Try again. Caught it! THIS = NUMB(P) 4 FINDI4 = P !So, THIS is found, here! END FUNCTION FINDI4 !On success, THIS = NUMB(FINDI4); no fancy index here... END MODULE SORTSEARCH There would be a function (with a unique name) for each of the contemplated variations in parameter types, and when the compiler reached an invocation of FIND(...) it would select by matching amongst the combinations that had been defined in the routines named in the INTERFACE statement. The various actual functions could have different code, and in this case, only the INTEGER*4 THIS,NUMB(1:*) need be changed, say to REAL*4 THIS,NUMB(1:*) for FINDF4, which is why both variables are named in the one statement. However, for searching CHARACTER arrays, because the character comparison operations differ from those for numbers (and, no three-way IF-test either), additional changes are required. 
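The overload selection performed by the INTERFACE block can be approximated in other languages by explicit dispatch on the argument's type. The sketch below is not Fortran and not part of the original entry: it is a rough Python illustration using functools.singledispatch, with the function name find and the FINDI4-style return convention (1-based index on a hit, -L on a miss) assumed for this example only.

```python
from functools import singledispatch
import bisect

@singledispatch
def find(this, numb):
    # Fallback when no variant is registered for type(this),
    # loosely analogous to the compiler finding no match in the INTERFACE.
    raise NotImplementedError(f"no find variant for {type(this).__name__}")

@find.register(int)
@find.register(float)
def _(this, numb):
    # Binary search over a sorted list of numbers (cf. FINDI4/FINDF4/FINDF8).
    i = bisect.bisect_left(numb, this)
    return i + 1 if i < len(numb) and numb[i] == this else -i

@find.register(str)
def _(this, numb):
    # A separately registered body for texts, as FINDTTI2/FINDTTI4 are
    # separate routines in the Fortran; here it happens to share the logic.
    i = bisect.bisect_left(numb, this)
    return i + 1 if i < len(numb) and numb[i] == this else -i

print(find(7, [2, 3, 5, 7, 11]))  # -> 4 (1-based position of the hit)
```

Unlike the Fortran INTERFACE, this selection happens at run time rather than at compile time, but the programmer-visible effect is similar: one name, several type-specific bodies.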
Thus, function FIND would appear to be a polymorphic function that accepts and returns a variety of types, but it is not, and indeed, there is actually no function called FIND anywhere in the compiled code. That said, some systems had polymorphic variables, such as the B6700 whereby integers were represented as floating-point numbers and so exactly the same function could be presented with an integer or a floating-point variable (provided the compiler didn't check for parameter type matching - but this was routine) and it would work - so long as no divisions were involved since addition, subtraction, and multiplication are the same for both, but integer division discards any remainders. More recent computers following the Intel 8087 floating-point processor and similar add novel states to the scheme for floating-point arithmetic: not just zero and "gradual underflow" but "Infinity" and "Not a Number", which last violates even more of the axioms of mathematics in that NaN does not equal NaN. In turn, this forces a modicum of polymorphism into the language so as to contend with the additional features, such as the special function IsNaN(x). More generally, using the same code for different types of variable can be problematical. A scheme that works in single precision may not work in double precision (or vice-versa) or may not give corresponding levels of accuracy, or not converge at all, etc. While F90 also standardised special functions that give information about the precision of variables and the like, and in principle, a method could be coded that, guided by such information, would work for different precisions, this sort of scheme is beset by all manner of difficulties in problems more complex than the simple examples of text books. Polymorphism just exacerbates the difficulties, thus, on page 219 of 16-Bit Modern Microcomputers by G. M.
Corsline appears the remark "At least some of the generalized numerical solutions to common mathematical procedures have coding that is so involved and tricky in order to take care of all possible roundoff contingencies that they have been termed 'pornographic algorithms'.". And "Mathematical software is easy for the uninitiated to write but notoriously hard for the expert. This paradox exists because the beginner is satisfied if his code usually works in his own machine while the expert attempts, against overwhelming obstacles, to produce programs that always work on a large number of computers. The problem is that while standard formulas of mathematics are fairly easy to translate into FORTRAN they often are subject to instabilities due to roundoff error." - quoting John Palmer, 1980, Intel Corporation. But sometimes it is not so troublesome, as in Pathological_floating_point_problems#The_Chaotic_Bank_Society whereby the special EPSILON(x) function that reports on the precision of a nominated variable of type x is used to determine the point beyond which further calculation (in that precision, for that formula) will make no difference. Having flexible facilities available may lead one astray. Consider the following data aggregate, as became available with F90: TYPE STUFF INTEGER CODE !A key number. CHARACTER*6 NAME !Associated data. INTEGER THIS !etc. END TYPE STUFF TYPE(STUFF) TABLE(600) !An array of such entries. Suppose the array was in sorted order by each entry's value of CODE so that TABLE(1).CODE <= TABLE(2).CODE, etc. and one wished to find the index of an entry with a specific value, x, of CODE. It is pleasing to be able to write FIND(x,TABLE.CODE,N) and have it accepted by the compiler. Rather less pleasing is that it runs very slowly. This is because consecutive elements in an array are expected to occupy consecutive locations in storage, but the CODE elements do not, being separated by the other elements of the aggregate.
So, the compiler generates code to copy the required elements to a work area, presents that as the actual parameter, and copies from the work area back on return from the function, thereby vitiating the speed advantages of the binary search. This is why the INTENT(IN) might help in such situations, as will writing FIND(x,TABLE(1:N).CODE,N) should N be often less than the full size of the table. But really, in-line code for each such usage is the only answer, despite the lack of a pre-processor to generate it. Other options are to remain with the older-style of Fortran, using separately-defined arrays having a naming convention such as TABLECODE(600), TABLENAME(600), etc. thus not gaining the unity of declaring a TYPE, or, declaring the size within the type as in INTEGER CODE(600) except that this means that the size is a part of the type and different-sized tables would require different types, or, perhaps the compiler will handle this problem by passing a "stride" value for every array dimension so that subroutines and functions can index such parameters properly - at the cost of yet more overhead for parameter passing, and more complex indexing calculations. In short, the available polymorphism whereby a parameter can be a normal array, or, an array-like "selection" of a component from an array of compound entities enables appealing syntax, but disastrous performance. Go[edit] The parametric function in this example is the function average. Its type parameter is the interface type intCollection, and its logic uses the polymorphic function mapElements. In Go terminology, average is an ordinary function whose parameter happens to be of interface type. Code inside of average is ordinary code that just happens to call the mapElements method of its parameter. This code accesses the underlying static type only through the interface and so has no knowledge of the details of the static type or even which static type it is dealing with.
Function main creates objects t1 and t2 of two different static types, binaryTree and bTree. Both types implement the interface intCollection. t1 and t2 have different static types, but when they are passed to average, they are bound to parameter c, of interface type, and their static types are not visible within average. Implementation of binaryTree and bTree is dummied, but you can see that implementation of average of binaryTree contains code specific to its representation (left, right) and that implementation of bTree contains code specific to its representation (buckets). package main import "fmt" func average(c intCollection) float64 { var sum, count int c.mapElements(func(n int) { sum += n count++ }) return float64(sum) / float64(count) } func main() { t1 := new(binaryTree) t2 := new(bTree) a1 := average(t1) a2 := average(t2) fmt.Println("binary tree average:", a1) fmt.Println("b-tree average:", a2) } type intCollection interface { mapElements(func(int)) } type binaryTree struct { // dummy representation details left, right bool } func (t *binaryTree) mapElements(visit func(int)) { // dummy implementation if t.left == t.right { visit(3) visit(1) visit(4) } } type bTree struct { // dummy representation details buckets int } func (t *bTree) mapElements(visit func(int)) { // dummy implementation if t.buckets >= 0 { visit(1) visit(5) visit(9) } } Output: binary tree average: 2.6666666666666665 b-tree average: 5 Groovy[edit] (more or less) Solution: class Tree<T> { T value Tree<T> left Tree<T> right Tree(T value = null, Tree<T> left = null, Tree<T> right = null) { this.value = value this.left = left this.right = right } void replaceAll(T value) { this.value = value left?.replaceAll(value) right?.replaceAll(value) } } Haskell[edit] add1Everywhere :: (Functor f, Num a) => f a -> f a add1Everywhere nums = fmap (\x -> x + 1) nums If we have a tree of integers, i.e. f is Tree and a is Integer, then the type of add1Everywhere is Tree Integer -> Tree Integer.
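The Haskell entry's point — that one structure-preserving map covers every element type — can be restated in a dynamically checked language. The following is a side illustration, not one of the page's task entries: a rough Python equivalent, where Node and tree_map are names invented for this sketch.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Node:
    value: Any
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def tree_map(f: Callable, t: Optional[Node]) -> Optional[Node]:
    # Structure-preserving map over the tree, mirroring fmap for the
    # Haskell tree functor: the shape is kept, only values change.
    if t is None:
        return None
    return Node(f(t.value), tree_map(f, t.left), tree_map(f, t.right))

t = Node(1, Node(2), Node(3))
t2 = tree_map(lambda x: x + 1, t)   # a new tree; t is left untouched
```

Unlike the Haskell version, nothing here statically guarantees that f is applicable to every node; the check happens at run time, node by node.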
Inform 7[edit] Phrases (the equivalent of global functions) can be defined with type parameters: Polymorphism is a room. To find (V - K) in (L - list of values of kind K): repeat with N running from 1 to the number of entries in L: if entry N in L is V: say "Found [V] at entry [N] in [L]."; stop; say "Did not find [V] in [L]." When play begins: find "needle" in {"parrot", "needle", "rutabaga"}; find 6 in {2, 3, 4}; end the story. Inform 7 does not allow user-defined parametric types. Some built-in types can be parameterized, though: list of numbers relation of texts to rooms object based rulebook producing a number description of things activity on things number valued property text valued table column phrase (text, text) -> number Icon and Unicon[edit] Like PicoLisp, Icon and Unicon are dynamically typed and hence inherently polymorphic. Here's an example that can apply a function to the nodes in an n-tree regardless of the type of each node. It is up to the function to decide what to do with a given type of node. Note that the nodes do not even have to be of the same type. procedure main() bTree := [1, [2, [4, [7]], [5]], [3, [6, [8], [9]]]] mapTree(bTree, write) bTree := [1, ["two", ["four", [7]], [5]], [3, ["six", ["eight"], [9]]]] mapTree(bTree, write) end procedure mapTree(tree, f) every f(\tree[1]) | mapTree(!tree[2:0], f) end J[edit] In J, all functions are generic over other types. Alternatively, J is statically typed in the sense that it supports only one data type (the array), though of course inspecting a value can reveal additional details (such as: is it an array of numbers?) (That said, note that J also supports some types which are not, strictly speaking, data. These are the verb, adverb and conjunction types. To fit this nomenclature, data is of type "noun". Also, nouns have some additional taxonomy which is beyond the scope of this task.)
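For comparison, the Icon/Unicon behaviour above — walking an n-tree whose nodes may have mixed types — looks much the same in any dynamically typed language. Here is a Python sketch (map_tree and the list-based tree layout are assumptions of this illustration):

```python
def map_tree(tree, f):
    # tree is [value, subtree, subtree, ...]; apply f to every node value,
    # pre-order, with no constraint that the values share a type.
    f(tree[0])
    for child in tree[1:]:
        map_tree(child, f)

seen = []
map_tree([1, [2, [4, [7]], [5]], [3, [6, [8], [9]]]], seen.append)
# Nodes need not be of the same type, as in the second Icon example:
map_tree([1, ["two", [3.0]]], seen.append)
```

As in Icon, it is entirely up to the passed-in function to decide what to do with each kind of node.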
Java[edit] Following the C++ example: public class Tree<T>{ private T value; private Tree<T> left; private Tree<T> right; public void replaceAll(T value){ this.value = value; if(left != null) left.replaceAll(value); if(right != null) right.replaceAll(value); } } Julia[edit] mutable struct Tree{T} value::T lchild::Nullable{Tree{T}} rchild::Nullable{Tree{T}} end function replaceall!(t::Tree{T}, v::T) where T t.value = v isnull(t.lchild) || replaceall!(get(t.lchild), v) isnull(t.rchild) || replaceall!(get(t.rchild), v) return t end Kotlin[edit] // version 1.0.6 class BinaryTree<T>(var value: T) { var left : BinaryTree<T>? = null var right: BinaryTree<T>? = null fun <U> map(f: (T) -> U): BinaryTree<U> { val tree = BinaryTree<U>(f(value)) if (left != null) tree.left = left?.map(f) if (right != null) tree.right = right?.map(f) return tree } fun showTopThree() = "(${left?.value}, $value, ${right?.value})" } fun main(args: Array<String>) { val b = BinaryTree(6) b.left = BinaryTree(5) b.right = BinaryTree(7) println(b.showTopThree()) val b2 = b.map { it * 10.0 } println(b2.showTopThree()) } - Output: (5, 6, 7) (50.0, 60.0, 70.0) Mercury[edit] :- type tree(A) ---> empty ; node(A, tree(A), tree(A)). :- func map(func(A) = B, tree(A)) = tree(B). map(_, empty) = empty. map(F, node(A, Left, Right)) = node(F(A), map(F, Left), map(F, Right)). Nim[edit] type Tree[T] = ref object value: T left, right: Tree[T] Objective-C[edit] @interface Tree<T> : NSObject { T value; Tree<T> *left; Tree<T> *right; } - (void)replaceAll:(T)v; @end @implementation Tree - (void)replaceAll:(id)v { value = v; [left replaceAll:v]; [right replaceAll:v]; } @end Note that the generic type variable is only used in the declaration, but not in the implementation.
OCaml[edit] type 'a tree = Empty | Node of 'a * 'a tree * 'a tree (** val map_tree : ('a -> 'b) -> 'a tree -> 'b tree *) let rec map_tree f = function | Empty -> Empty | Node (x,l,r) -> Node (f x, map_tree f l, map_tree f r) Perl 6[edit] role BinaryTree[::T] { has T $.value; has BinaryTree[T] $.left; has BinaryTree[T] $.right; method replace-all(T $value) { $!value = $value; $!left.replace-all($value) if $!left.defined; $!right.replace-all($value) if $!right.defined; } } class IntTree does BinaryTree[Int] { } my IntTree $it .= new(value => 1, left => IntTree.new(value => 2), right => IntTree.new(value => 3)); $it.replace-all(42); say $it.perl; - Output: IntTree.new(value => 42, left => IntTree.new(value => 42, left => BinaryTree[T], right => BinaryTree[T]), right => IntTree.new(value => 42, left => BinaryTree[T], right => BinaryTree[T])) Phix[edit] Phix is naturally polymorphic, with optional static typing. The standard builtin type hierarchy is trivial: <-------- object ---------> | | +-atom +-sequence | | +-integer +-string User defined types are subclasses of those. If you declare a parameter as type integer then obviously it is optimised for that, and crashes when given something else (with a clear human-readable message and file name/line number). If you declare a parameter as type object then it can handle anything you can throw at it - integers, floats, strings, or (deeply) nested sequences. Of course many builtin routines are naturally generic, such as sort and print. Most programming languages would throw a hissy fit if you tried to sort (or print) a mixed collection of strings and integers, but not Phix: ?sort(shuffle({5,"oranges",6,"apples",7})) - Output: {5,6,7,"apples","oranges"} For comparison purposes (and because this entry looked a bit sparse without it) this is the D example from this page translated to Phix.
Note that tmap has to be a function rather than a procedure with a reference parameter, but this still achieves pass-by-reference/in-situ updates, mainly because root is a local rather than global/static, and is the target of (aka assigned to/overwritten on return from) the top-level tmap() call, and yet also manages the C#/Dart/Kotlin thing (by which I am referring to those specific examples on this page) of creating a whole new tree, simply because lhs assignee!=rhs reference (aka root2!=root) in "root2 = tmap(root,rid)", not that such a "deep clone" would (barring a few dirty low-level tricks) behave any differently to "root2=root", which is "a straightforward shared reference with cow semantics". enum data, left, right function tmap(sequence tree, integer rid) tree[data] = call_func(rid,{tree[data]}) if tree[left]!=null then tree[left] = tmap(tree[left],rid) end if if tree[right]!=null then tree[right] = tmap(tree[right],rid) end if return tree end function function newnode(object v) return {v,null,null} end function function add10(atom x) return x+10 end function procedure main() object root = newnode(1.00) -- Add some nodes. root[left] = newnode(1.10) root[left][left] = newnode(1.11) root[left][right] = newnode(1.12) root[right] = newnode(1.20) root[right][left] = newnode(1.21) root[right][right] = newnode(1.22) -- Now the tree has seven nodes. -- Show the whole tree. ppOpt({pp_Nest,2}) pp(root) -- Modify the whole tree. root = tmap(root,routine_id("add10")) -- Create a whole new tree. object root2 = tmap(root,rid) -- Show the whole tree again. pp(root) end procedure main() - Output: {1, {1.1, {1.11,0,0}, {1.12,0,0}}, {1.2, {1.21,0,0}, {1.22,0,0}}} {11, {11.1, {11.11,0,0}, {11.12,0,0}}, {11.2, {11.21,0,0}, {11.22,0,0}}} PicoLisp[edit] PicoLisp is dynamically-typed, so in principle every function is polymorphic over its arguments. It is up to the function to decide what to do with them.
A function traversing a tree, modifying the nodes in-place (no matter what the type of the node is): (de mapTree (Tree Fun) (set Tree (Fun (car Tree))) (and (cadr Tree) (mapTree @ Fun)) (and (cddr Tree) (mapTree @ Fun)) ) Test: (balance 'MyTree (range 1 7)) # Create a tree of numbers -> NIL : (view MyTree T) # Display it 7 6 5 4 3 2 1 -> NIL : (mapTree MyTree inc) # Increment all nodes -> NIL : (view MyTree T) # Display the tree 8 7 6 5 4 3 2 -> NIL : (balance 'MyTree '("a" "b" "c" "d" "e" "f" "g")) # Create a tree of strings -> NIL : (view MyTree T) # Display it "g" "f" "e" "d" "c" "b" "a" -> NIL : (mapTree MyTree uppc) # Convert all nodes to upper case -> NIL : (view MyTree T) # Display the tree "G" "F" "E" "D" "C" "B" "A" -> NIL Racket[edit] Typed Racket has parametric polymorphism: #lang typed/racket (define-type (Tree A) (U False (Node A))) (struct: (A) Node ([val : A] [left : (Tree A)] [right : (Tree A)]) #:transparent) (: tree-map (All (A B) (A -> B) (Tree A) -> (Tree B))) (define (tree-map f tree) (match tree [#f #f] [(Node val left right) (Node (f val) (tree-map f left) (tree-map f right))])) ;; unit tests (require typed/rackunit) (check-equal? (tree-map add1 (Node 5 (Node 3 #f #f) #f)) (Node 6 (Node 4 #f #f) #f)) REXX[edit] This REXX programming example is modeled after the D example. /*REXX program demonstrates (with displays) a method of parametric polymorphism. */ call newRoot 1.00, 3 /*new root, and also indicate 3 stems.*/ /* [↓] no need to label the stems. */ call addStem 1.10 /*a new stem and its initial value. */ call addStem 1.11 /*" " " " " " " */ call addStem 1.12 /*" " " " " " " */ call addStem 1.20 /*" " " " " " " */ call addStem 1.21 /*" " " " " " " */ call addStem 1.22 /*" " " " " " " */ call sayNodes /*display some nicely formatted values.*/ call modRoot 50 /*modRoot will add fifty to all stems. */ call sayNodes /*display some nicely formatted values.*/ exit /*stick a fork in it, we're all done. 
*/ /*──────────────────────────────────────────────────────────────────────────────────────*/ addStem: nodes=nodes + 1; do j=1 for stems; root.nodes.j=arg(1); end; return newRoot: parse arg @,stems; nodes=-1; call addStem copies('═',9); call addStem @; return /*──────────────────────────────────────────────────────────────────────────────────────*/ modRoot: arg #; do j=1 for nodes /*traipse through all the defined nodes*/ do k=1 for stems if datatype(root.j.k,'N') then root.j.k=root.j.k + # /*add bias.*/ end /*k*/ /* [↑] only add if numeric stem value.*/ end /*j*/ return /*──────────────────────────────────────────────────────────────────────────────────────*/ sayNodes: w=9; do j=0 to nodes; _= /*ensure each of the nodes gets shown. */ do k=1 for stems; _=_ center(root.j.k, w) /*concatenate a node*/ end /*k*/ $=word('node='j, 1 + (j<1) ) /*define a label for this line's output*/ say center($, w) substr(_, 2) /*ignore 1st (leading) blank which was */ end /*j*/ /* [↑] caused by concatenation.*/ say /*show a blank line to separate outputs*/ return /* [↑] extreme indentation to terminal*/ - output when using the default input: ═════════ ═════════ ═════════ node=1 1.00 1.00 1.00 node=2 1.10 1.10 1.10 node=3 1.11 1.11 1.11 node=4 1.12 1.12 1.12 node=5 1.20 1.20 1.20 node=6 1.21 1.21 1.21 node=7 1.22 1.22 1.22 ═════════ ═════════ ═════════ node=1 51.00 51.00 51.00 node=2 51.10 51.10 51.10 node=3 51.11 51.11 51.11 node=4 51.12 51.12 51.12 node=5 51.20 51.20 51.20 node=6 51.21 51.21 51.21 node=7 51.22 51.22 51.22 Rust[edit] struct TreeNode<T> { value: T, left: Option<Box<TreeNode<T>>>, right: Option<Box<TreeNode<T>>>, } impl <T> TreeNode<T> { fn my_map<U,F>(&self, f: &F) -> TreeNode<U> where F: Fn(&T) -> U { TreeNode { value: f(&self.value), left: match self.left { None => None, Some(ref n) => Some(Box::new(n.my_map(f))), }, right: match self.right { None => None, Some(ref n) => Some(Box::new(n.my_map(f))), }, } } } fn main() { let root = TreeNode { value: 3, left: 
Some(Box::new(TreeNode { value: 55, left: None, right: None, })), right: Some(Box::new(TreeNode { value: 234, left: Some(Box::new(TreeNode { value: 0, left: None, right: None, })), right: None, })), }; root.my_map(&|x| { println!("{}" , x)}); println!("---------------"); let new_root = root.my_map(&|x| *x as f64 * 333.333f64); new_root.my_map(&|x| { println!("{}" , x) }); } Scala[edit] There's much to be said about parametric polymorphism in Scala. Let's first see the example in question: case class Tree[+A](value: A, left: Option[Tree[A]], right: Option[Tree[A]]) { def map[B](f: A => B): Tree[B] = Tree(f(value), left map (_.map(f)), right map (_.map(f))) } Note the type parameter of the class Tree, [+A]. The plus sign indicates that Tree is co-variant on A. That means Tree[X] will be a subtype of Tree[Y] if X is a subtype of Y. For example: class Employee(val name: String) class Manager(name: String) extends Employee(name) val t = Tree(new Manager("PHB"), None, None) val t2: Tree[Employee] = t The second assignment is legal because t is of type Tree[Manager], and since Manager is a subclass of Employee, then Tree[Manager] is a subtype of Tree[Employee]. Another possible variance is the contra-variance. For instance, consider the following example: def toName(e: Employee) = e.name val treeOfNames = t.map(toName) This works, even though map is expecting a function from Manager into something, but toName is a function of Employee into String, and Employee is a supertype, not a subtype, of Manager. It works because functions have the following definition in Scala: trait Function1[-T1, +R] The minus sign indicates that this trait is contra-variant in T1, which happens to be the type of the argument of the function. In other words, it tells us that Employee => String is a subtype of Manager => String, because Employee is a supertype of Manager.
While the concept of contra-variance is not intuitive, it should be clear to anyone that toName can handle arguments of type Manager, but, were it not for the contra-variance, it would not be usable with a Tree[Manager]. Let's add another method to Tree to see another concept: case class Tree[+A](value: A, left: Option[Tree[A]], right: Option[Tree[A]]) { def map[B](f: A => B): Tree[B] = Tree(f(value), left map (_.map(f)), right map (_.map(f))) def find[B >: A](what: B): Boolean = (value == what) || left.map(_.find(what)).getOrElse(false) || right.map(_.find(what)).getOrElse(false) } The type parameter of find is [B >: A]. That means the type is some B, as long as that B is a supertype of A. If I tried to declare what: A, Scala would not accept it. To understand why, let's consider the following code: if (t2.find(new Employee("Dilbert"))) println("Call Catbert!") Here we have find receiving an argument of type Employee, even though the tree it was defined on is of type Manager. The co-variance of Tree means a situation such as this is possible. There is also an operator <:, with the opposite meaning of >:. Finally, Scala also allows abstract types. Abstract types are similar to abstract methods: they have to be defined when a class is inherited. One simple example would be: trait DFA { type Element val map = new collection.mutable.HashMap[Element, DFA]() } A concrete class wishing to inherit from DFA would need to define Element. Abstract types aren't all that different from type parameters. Mainly, they ensure that the type will be selected in the definition site (the declaration of the concrete class), and not at the usage site (instantiation of the concrete class). The difference is mainly one of style, though. Seed7[edit] In Seed7 types like array and struct are not built-in, but are defined with parametric polymorphism. In the Seed7 documentation the terms "template" and "function with type parameters and type result" are used instead of "parametric polymorphism".
E.g.: array is actually a function, which takes an element type as parameter and returns a type. To concentrate on the essentials, the example below defines the type container as array. Note that the map function has three parameters: aContainer, aVariable, and aFunc. When map is called aVariable is used also in the actual parameter of aFunc: map(container1, num, num + 1) $ include "seed7_05.s7i"; const func type: container (in type: elemType) is func result var type: container is void; begin container := array elemType; global const func container: map (in container: aContainer, inout elemType: aVariable, ref func elemType: aFunc) is func result var container: mapResult is container.value; begin for aVariable range aContainer do mapResult &:= aFunc; end for; end func; end global; end func; const type: intContainer is container(integer); var intContainer: container1 is [] (1, 2, 4, 6, 10, 12, 16, 18, 22); var intContainer: container2 is 0 times 0; const proc: main is func local var integer: num is 0; begin container2 := map(container1, num, num + 1); for num range container2 do write(num <& " "); end for; writeln; end func; Output: 2 3 5 7 11 13 17 19 23 Standard ML[edit] datatype 'a tree = Empty | Node of 'a * 'a tree * 'a tree (** val map_tree = fn : ('a -> 'b) -> 'a tree -> 'b tree *) fun map_tree f Empty = Empty | map_tree f (Node (x,l,r)) = Node (f x, map_tree f l, map_tree f r) Swift[edit] class Tree<T> { var value: T? var left: Tree<T>? var right: Tree<T>? func replaceAll(value: T?) { self.value = value left?.replaceAll(value) right?.replaceAll(value) } } Another version based on Algebraic Data Types: enum Tree<T> { case Empty indirect case Node(T, Tree<T>, Tree<T>) func map<U>(f : T -> U) -> Tree<U> { switch(self) { case .Empty : return .Empty case let .Node(x, l, r): return .Node(f(x), l.map(f), r.map(f)) } } } Ursala[edit] Types are first class entities and functions to construct or operate on them may be defined routinely. 
A parameterized binary tree type can be defined using a syntax for anonymous recursion in type expressions as in this example, binary_tree_of "node-type" = "node-type"%hhhhWZAZ or by way of a recurrence solved using a fixed point combinator imported from a library as shown below. #import tag #fix general_type_fixer 1 binary_tree_of "node-type" = ("node-type",(binary_tree_of "node-type")%Z)%drWZwlwAZ (The %Z type operator constructs a "maybe" type, i.e., the free union of its operand type with the null value. Others shown above are standard stack manipulation primitives, e.g. d (dup) and w (swap), used to build the type expression tree.) At the other extreme, one may construct an equivalent parameterized type in point-free form. binary_tree_of = %-hhhhWZAZ A mapping combinator over this type can be defined with pattern matching like this binary_tree_map "f" = ~&a^& ^A/"f"@an ~&amPfamPWB or in point free form like this. binary_tree_map = ~&a^&+ ^A\~&amPfamPWB+ @an Here is a test program defining a type of binary trees of strings, and a function that concatenates each node with itself. string_tree = binary_tree_of %s x = 'foo': ('bar': (),'baz': ()) #cast string_tree example = (binary_tree_map "s". "s"--"s") x Type signatures are not necessarily associated with function declarations, but have uses in the other contexts such as assertions and compiler directives (e.g., #cast). Here is the output. 'foofoo': ('barbar': (),'bazbaz': ()) Visual Prolog[edit] domains tree{Type} = branch(tree{Type} Left, tree{Type} Right); leaf(Type Value). class predicates treewalk : (tree{X},function{X,Y}) -> tree{Y} procedure (i,i). clauses treewalk(branch(Left,Right),Func) = branch(NewLeft,NewRight) :- NewLeft = treewalk(Left,Func), NewRight = treewalk(Right,Func). treewalk(leaf(Value),Func) = leaf(X) :- X = Func(Value). run():- init(), X = branch(leaf(2), branch(leaf(3),leaf(4))), Y = treewalk(X,addone), write(Y), succeed(). 
http://rosettacode.org/wiki/Parametric_polymorphism
#include <wx/propgrid/editors.h> Instantiates editor controls. Reimplemented from wxPGChoiceEditor. Returns pointer to the name of the editor. For example, wxPGEditor_TextCtrl has name "TextCtrl". If you don't need to access your custom editor by string name, then you do not need to implement this function. Reimplemented from wxPGChoiceEditor. Returns value from control, via parameter 'variant'. Usually ends up calling property's StringToValue() or IntToValue(). Returns true if value was different. Reimplemented from wxPGChoiceEditor. Extra processing when control gains focus. For example, wxTextCtrl based controls should select all text. Reimplemented from wxPGEditor. Loads value from property to the control. Reimplemented from wxPGChoiceEditor.
https://docs.wxwidgets.org/3.0/classwx_p_g_combo_box_editor.html
din

An unofficial Dart library for Discord.

Install

The current pre-release of din does not require other runtime dependencies.

dependencies:
  din: ^0.1.0-alpha+5

To get started, it's recommended to read the Discord documentation and create an application for API access.

Usage

It's recommended to import din prefixed whenever possible:

import 'package:din/din.dart' as din;

Platforms

Currently this library is only supported on the standalone VM and Flutter, but can be easily expanded by implementing a custom HttpClient. See the VM implementation at lib/platform/vm.dart for an example. You can then pass the client in:

void main() {
  final client = const din.ApiClient(
    rest: const din.RestClient(
      auth: const din.AuthScheme.asBot('YOUR_TOKEN_HERE'),
      http: const CustomHttpClient(),
    ),
  );
}

class CustomHttpClient implements din.HttpClient { /* TODO: Implement. */ }

Contributing

Discord API Changes

As explained below, the REST API endpoints for Discord are manually specified, but almost all of the boilerplate code around JSON encoding and generating URLs is handled by offline code generation, and published as part of this package. Implementation: tool/codegen/resource.dart, tool/codegen/structure.dart.

The current implementation is fairly ad-hoc, and does not yet support all the different use-cases in the Discord API. That is also OK; the API is flexible in that some of the methods can be implemented by hand if needed, and the others are generated. If you make a change to anything in lib/src/schema/** or the code generators, re-run the build script at tool/build.dart to update <file>.g.dart files:

$ dart tool/build.dart

Or, leave a watcher open to automatically rebuild:

$ dart tool/build.dart --watch

Testing

The easiest way to run all the unit tests with the precise configuration needed is to run tool/test.dart. This in turn runs a series of web servers (as needed) for local testing, and pub run test to execute the test cases:

$ dart tool/test.dart

For manual testing (i.e.
running/debugging a specific test):

- Run tool/servers/static.dart before running any HTTP client tests:

$ dart tool/servers/static.dart
...
$ pub run test test/clients/http_client_vm_test.dart

End-to-end testing

Sometimes when changing the API, or releasing a new version of this library, it is important to verify that the entire end-to-end story still works connected to the real Discord API server. Tests in the e2e/** folder do this, but are not part of tool/test.dart, in order to avoid overloading Discord's servers from continuous integration systems like Travis. In order to run manually, or on something like a cron job, run:

$ DISCORD_API_TOKEN='1234' DISCORD_CHANNEL_ID='1234' DISCORD_USER_NAME='...' pub run test e2e

Note that all variables above must be set as environment variables to use these tests. If you do not have one, log in to Discord and create an application. Do not share this token with others; it should remain private. Make sure to add access for your bot to connect and interact with the channel specified by DISCORD_CHANNEL_ID. You may optionally add a config.yaml file at e2e/config.yaml instead; it is ignored by .gitignore and is for convenience while developing locally.

api_token: "..."
channel_id: "..."
user_name: "..."

Design

The din package is built to be layered, customizable, and easily hackable, both for direct contributors to the package and for packages built on top of din. There are three "tiers" of APIs available, each with high-level functionality:

HTTP

Din provides a minimal, platform-independent HTTP interface, or HttpClient:

abstract class HttpClient {
  /// Sends an HTTP request to [path] using [method].
  ///
  /// May optionally define a [payload] and HTTP [headers].
  ///
  /// Unlike a standard [Future], a [CancelableOperation] may be cancelled if
  /// the in-flight request is no longer valid or wanted. In that event, the
  /// [CancelableOperation.value] may never complete.
  CancelableOperation<Map<String, Object>> requestJson(
    String path, {
    String method: 'GET',
    Map<String, Object> payload,
    Map<String, String> headers,
  });
}

While this itself is not that useful for building a bot or application, it does provide a very simple way to create custom HTTP implementations, such as those that do caching, offline support, or work on a variety of platforms. The built-in/default implementation works on the standalone Dart VM and Flutter.

REST

The official Discord API is REST-based, and requires a series of HTTP headers in order to communicate. This is encapsulated as RestClient, which in turn can make an HTTP request to any given REST endpoint. It does not have knowledge of the precise endpoints of the Discord API, though, and only communicates in raw/untyped JSON. Creating one is required to use ApiClient, the highest-level API provided:

final rest = const din.RestClient(
  auth: const din.AuthScheme.asBot('YOUR_TOKEN_HERE'),
);

For clients or libraries that do not want to use ApiClient, they can use RestClient in order to remove much of the boilerplate around how to connect to and utilize the REST API.

API

A semi-automatically updated high-level API for communicating with precise REST endpoints in the Discord API, using strongly-typed methods and returning strongly-typed Dart objects. There is no official schema provided by the Discord team, so instead the schema is hand-maintained as metadata annotations, and the exact API is generated from that. See tool/build.dart and lib/src/schema/** for more information.
https://pub.dartlang.org/documentation/din/latest/
Getting Started with Auth

Auth (vapor/auth) is a framework for adding authentication to your application. It builds on top of Fluent by using models as the basis of authentication.

Tip: There is a Vapor API template with Auth pre-configured available. See Getting Started → Toolbox → Templates.

Let's take a look at how you can get started using Auth.

Package

The first step to using Auth is adding it as a dependency to your project in your SPM package manifest file.

// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        /// Any other dependencies ...

        // 👤 Authentication and Authorization framework for Fluent.
        .package(url: "", from: "2.0.0"),
    ],
    targets: [
        .target(name: "App", dependencies: ["Authentication", ...]),
        .target(name: "Run", dependencies: ["App"]),
        .testTarget(name: "AppTests", dependencies: ["App"]),
    ]
)

Auth currently provides one module, Authentication. In the future, there will be a separate module named Authorization for performing more advanced auth.

Provider

Once you have successfully added the Auth package to your project, the next step is to configure it in your application. This is usually done in configure.swift.

import Authentication

// register Authentication provider
try services.register(AuthenticationProvider())

That's it for basic setup. The next step is to create an authenticatable model.
http://docs.vapor.codes/3.0/auth/getting-started/
Word Sets
March 17, 2017

Today.

Something (not so memory efficient) in Python3. Outputs ['stew', 'west', 'wets', 'tews']. Added bonus: learned a new word tew :)

A Haskell version. Letters can be used more than once. It's case-sensitive. Here's the output of running it with an English dictionary and the Finnish translation of Homer's Odyssey.

[…] completed the solution to the previous exercise, which you can look at if you wish. We'll have a new exercise on […]

Here's some C++ that uses a binary trie to store the character sets of the words given on standard input, represented as bitsets. Given a word and the derived bitset, we can efficiently find all words that can be made up with those characters: Of course, this works just as well:

C# Implementation:

static List<string> GetMatchingWordsSet(List<string> wordsSet, List<char> charSet)
{
    List<string> lstMatchingWordsSet = new List<string>();
    foreach (var word in wordsSet)
    {
        int wordFlag = 0;
        int wordlength = word.Length;
        for (int i = 0; i < wordlength; i++)
        {
            if (charSet.Contains(Convert.ToChar(word[i].ToString().ToLower())))
            {
                wordFlag++;
            }
        }
        if (wordFlag == word.Length)
        {
            lstMatchingWordsSet.Add(word);
        }
        wordFlag = 0;
    }
    return lstMatchingWordsSet;
}

static void Main(string[] args)
{
    List<string> lstWords = new List<string> { "One", "Two", "Three", "Four", "Five", "Six", "Cat", "Tin" };
    List<char> lstChars = new List<char> { 'a', 'e', 'i', 'o', 'u', 'n', 's', 'x', 't' };
    var MatchingWordsSet = GetMatchingWordsSet(lstWords, lstChars);
    Console.WriteLine("Matching Words:");
    foreach (var w in MatchingWordsSet)
    {
        Console.WriteLine(w);
    }
}

To avoid duplicates the following changes have to be done in my code (my above comment):

string distinctWord = new String(word.ToLower().Distinct().ToArray());
if (wordFlag == word.Length && distinctWord.Length == word.Length)
{
    lstMatchingWordsSet.Add(word);
}

Mumps Implementation: MCL> type fword fword (word,file) ; Find matching words in file ; n char,fword,i,newword,str i $g(word)=”” w !,”No word was supplied” q i $g(file)=”” w
!,*7,”No file was included” q o 1:(file:”R”):0 i ‘$t w !,*7,”Unable to open “,file q w !!,”file: “,file w !,”word: “,word u 1 s newword=””,str=”” f i=1:1:$l(word) s char=$e(word,i) s:newword'[char newword=newword_char f i=1:1 r fword q:fword=”” d . i $$matches(newword,fword) s str=str_fword_”,” c 1 w !,”Matching words: “,$s($l(str):$e(str,1,$l(str)-1),1:”None in “_file _) q ; matches (word,fword); See if fword can be built with chars in word ; n flg,i s flg=1 f i=1:1:$l(fword) d q:’flg . s:word'[$e(fword,i) flg=0 q flg — Mumps Implementation: w !,”Matching words: “,$s($l(str):$e(str,1,$l(str)-1),1:”None in “_file _) should be: w !,”Matching words: “,$s($l(str):$e(str,1,$l(str)-1),1:”None in “_file) @bookofstevegraham: We don’t often see mumps here. Perhaps you could give a brief explanation of your program, for those unfamiliar with mumps. I’ve been asked to give a brief explanation on my implementation. Here goes. I will be including the complete name of commands and functions, instead of the abbreviations. Mumps Implementation: () MCL> type fword fword (word,file) ; Find matching words in file # Program name, accepting 2 arguments ; new char,fword,i,newword,str # Create new versions of variables if $get(word)=”” write !,”No word was supplied” quit # Quit program if the word was not supplied, or it was = “” if $get(file)=”” w !,*7,”No file was included” quit # Quit program if the filename was not supplied, or it was = “” open 1:(file:”R”):0 # Open device 1 in read mode and assign the file to it if ‘$test write !,*7,”Unable to open “,file quit # When you (try to) open a device with a trailing “:0”, and it is successful, $test is set to 1. If unable it is set to 0. # If 0, then quit program write !!,”file: “,file # Write filename with 2 lf/cr (!) 
write !,”word: “,word # Write supplied word with lf/cr use 1 # Begin using the device/file for input set newword=””,str=”” # Initialize variables for i=1:1:$length(word) set char=$extract(word,i) set:newword'[char newword=newword_char # Take each unique character in the supplied word and put it in variable newword # Mumps allows post-conditionals for most commands. set:newword'[char = if newword does not contain char, set … # [ is the operator for contains and ‘ is the operator for negate/not for i=1:1 read fword quit:fword=”” do # For each word in file . if $$matches(newword,fword) set str=str_fword_”,” # If the file word matches the characters in the supplied word, append it to a comma-delimited list of matched words # $$matches indicates a user-created function named matches. _ is the operator for concatenation. close 1 # Close device/file and revert to std in/out write !,”Matching words: “,$select($length(str):$extract(str,1,$length(str)-1),1:”None in “_file) # Write out list of matched words with lf/cr or a message that there were no matches quit # End program ; matches(word,fword); See if fword can be built with chars in word # Subroutine for matching supplied word with list word ; new flg,i # Create new versions of variables set flg=1 # Initialize flg variable (Assuming the supplied word matches list word) for i=1:1:$l(fword) do quit:’flg # For each character in list word, do next line, quit loop if flg = 0 . set:word'[$e(fword,i) flg=0 # If supplied word does not contain this character in list word, set flg = 0. This will cause the loop to quit upon returning to it quit flg # Quit subroutine returning value of flg (1 = it matched, 0 = it did not match) — @Bookofstevegraham: Fascinating language. Thank you! You’re welcome. Thanks for the interest. Looking at the solutions so far it looks like there are several interpretations of the problem: (a) look for words that contain exactly the letters in the set (i.e. 
if there’s only one ‘s’, then words can only contain one ‘s’), (b) look for words that only use letters from the set with no repeats, but not necessarily all of them, and (c) words that only letters from the set, but with repeats allowed. The following Unicon [works in Icon as well] solution assumes (a): ————————————————————- procedure main(a) w := a[1] | “gomiz” every write(matched(w, !&input)) end procedure matched(w, pword) every ((*pword = *w), (t := pword), t[bound(upto,!w,t)] := “”) return (0 = *\t, pword) end procedure bound(a[]) return a[1]!a[2:0] end ————————————————————- Sample run: ———————————————————- ->anagram west ———————————————————— Hmmm, sample run didn’t come through for Unicon solution: Steve Wampler: Could you comment on how your solution works? I would guess that most are not familiar with the language. an icon program meeting sub-challenge (c). 8 lines of code procedure main(a) # given a set of characters in parameter 1 and a wordlist as a file as parameter 2 # respond with the words in the wordlist that meet the criteria # solution type (c) # # what criteria? # # The letters in parameter 1 may be used in the target word # 1 or more times or not at all. in any order. # for simplicity, # I have made no special provision for punctuation # so backslash and quotes, for example may behave unexpectedly. # # This is partly also because different shells will have different behaviour. # backslash escaping may help some situations. # ############################### # get the characters thechars := cset(a[1]) | stop(“No characters found; the word \”\” is made of no characters”) #get the wordlist put(a,”C:\\Users\\…\\webster\\WORDS.TXT”) # local fix puts a dictionary in position3 if user supplied a dictionary, else at position 2 thewords := open(a[2],r) | stop(“No words to test. 
Is the file valid?”) ######################### every aword := !thewords do { charsofaword := cset(aword) # thediff := charsofaword — thechars # cset difference, any letters in diff are those in the word, but not in the # characters supplied by the user as parameter 1. # if (*() = 0) then write(aword) if (*(charsofaword — thechars) = 0) then write(aword) } end run using “cat” on a 20C websters list I had. a aa acca act acta at atta c ca cat t ta taa tact tat tatta Sorry, non-fatal error; “thewords := open(a[2],r)” should read “thewords := open(a[2],’rt’)” The variable r is initially null, so the 2nd parameter to “open” defaults to ‘rt’. The compiled program executes the same. I was asked to provide a commented version of my Unicon solution, since both Unicon and Icon might not be familiar to some folks. A little background – Unicon is a successor to Icon. In both languages expressions are capable of generating sequences of results. A result sequence might have anywhere from 0 to an infinity of results. Normally expressions only so far as needed to produce a single result. If no result is produced or if there are no more results that can be produced then expression evaluation is said to fail, otherwise the evaluation succeeds. If an expression fails, subexpressions that can produce further results are automatically back-tracked into to produce their next result in an attempt to produce successful evaluation of the entire expression. This process is called goal-directed evaluation. The every control structure essentially tells a successful evaluation of its expression that it really failed – forcing goal-directed evaluation to exhaust all possibilities. Control flow is governed by this success or failure of the evaluation, not by the values computed during evaluation. (I apologize in advance for the length of this explanation. Unicon and Icon behave differently from most common procedural programming languages even though they don’t look all that different.) 
Implemented in the klong programming language (), an array language patterned after the k programming language. $ kg ./wordsets4.kg mississippi < /usr/share/dict/words [i imp imps is ism isms m mi miss ms p pi pimp pimps pip pips pis piss s sip sips sis] Oops, forgot the program. Try again. $ cat wordsets5.kg main::{[a fword list word]; word::?.a@0; list::[]; .mi{fword::x; :[~#({word?x}’fword)?[]; list::list,fword; a::””]; .rl()}:~.rl(); .p(list)} main() $ kg ./wordsets5.kg mississippi < /usr/share/dict/words [i imp imps is ism isms m mi miss ms p pi pimp pimps pip pips pis piss s sip sips sis] $
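The Python 3 solution mentioned at the top of these comments was not preserved on this page. As an illustrative reconstruction of interpretation (a) above (words built from exactly the given letters, i.e. anagrams), here is a minimal sketch; the class name and the tiny inline word list are invented for the example:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Anagrams {
    // Sorting a word's characters gives a canonical key: anagrams share a key.
    static String key(String w) {
        char[] c = w.toCharArray();
        Arrays.sort(c);
        return new String(c);
    }

    // Interpretation (a): dictionary words using exactly the letters of `word`.
    static List<String> anagramsOf(String word, List<String> dictionary) {
        List<String> result = new ArrayList<>();
        String k = key(word);
        for (String w : dictionary) {
            if (key(w).equals(k)) result.add(w);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> dict = Arrays.asList("stew", "west", "wets", "tews", "stow", "nest");
        System.out.println(anagramsOf("west", dict)); // [stew, west, wets, tews]
    }
}
```

With a real dictionary file this reproduces the ['stew', 'west', 'wets', 'tews'] output quoted above; sorting each word is O(L log L) per word, which is cheap for natural-language word lengths.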
https://programmingpraxis.com/2017/03/17/word-sets/
$ flutter packages get

Alternatively, your editor might support flutter packages get. Check the docs for your editor to learn more.

Now in your Dart code, you can use:

import 'package:call_number/call_number.dart';

We analyzed this package on Dec 5, 2018, and provided a score, details, and suggestions below. Detected platforms: Flutter.
https://pub.dartlang.org/packages/call_number
Plug v1.7.1

Plug behaviour

The plug specification. There are two kinds of plugs: function plugs and module plugs.

Function plugs

A function plug is any function that receives a connection and a set of options and returns a connection. Its type signature must be:

(Plug.Conn.t, Plug.opts) :: Plug.Conn.t

Module plugs

A module plug is an extension of the function plug. It is a module that must export:

- a call/2 function with the signature defined above
- an init/1 function which takes a set of options and initializes it.

The result returned by init/1 is passed as second argument to call/2. Note that init/1 may be called during compilation and as such it must not return pids, ports or values that are not specific to the runtime.

The API expected by a module plug is defined as a behaviour by the Plug module (this module).

Examples

Here's an example of a function plug:

def json_header_plug(conn, opts) do
  Plug.Conn.put_resp_content_type(conn, "application/json")
end

Here's an example of a module plug:

defmodule JSONHeaderPlug do
  import Plug.Conn

  def init(opts) do
    opts
  end

  def call(conn, _opts) do
    put_resp_content_type(conn, "application/json")
  end
end

The Plug pipeline

The Plug.Builder module provides conveniences for building plug pipelines.

Callbacks

call(Plug.Conn.t(), opts()) :: Plug.Conn.t()
https://hexdocs.pm/plug/Plug.html
import scala.reflect.runtime.universe._ import scala.reflect.runtime.{currentMirror => cm} import scala.tools.reflect.ToolBox object Test extends App { val tb = cm.mkToolBox() val expr = tb.parse("math.sqrt(4.0)") println(tb.eval(expr)) } C:\Projects\Kepler\sandbox @ ticket/5943>myke run . scala.tools.reflect.ToolBoxError: reflective compilation has failed: object sqrt is not a member of package math at scala.tools.reflect.ToolBoxFactory$ToolBoxImpl$ToolBoxGlobal.throwIfErrors(ToolBoxFactory.scala:294) at scala.tools.reflect.ToolBoxFactory$ToolBoxImpl$ToolBoxGlobal.compile(ToolBoxFactory.scala:227) at scala.tools.reflect.ToolBoxFactory$ToolBoxImpl.compile(ToolBoxFactory.scala:391) at scala.tools.reflect.ToolBoxFactory$ToolBoxImpl.eval(ToolBoxFactory.scala:394) at Test$delayedInit$body.apply(t5943b2.scala:8) at scala.Function0$class.apply$mcV$sp(Function0.scala:40) at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12) at scala.App$$anonfun$main$1.apply(App.scala:61) at scala.collection.immutable.List.foreach(List.scala:309) at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32) at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45) at scala.App$class.main(App.scala:61) at Test$.main(t5943b2.scala:5) at Test.main(t5943b2.sc.util.ScalaClassLoader$$anonfun$run$1.apply(ScalaClassLoader.scala:71) at scala.tools.nsc.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31) at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.asContext(ScalaClassLoader.scala:139) at scala.tools.nsc.util.ScalaClassLoader$class.run(ScalaClassLoader.scala:71) at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.run(ScalaClassLoader.scala:139) at scala.tools.nsc.CommonRunner$class.run(ObjectRunner.scala:28) at scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:45) at scala.tools.nsc.CommonRunner$class.runAndCatch(ObjectRunner.scala:35) at scala.tools.nsc.ObjectRunner$.runAndCatch(ObjectRunner.scala:45) at 
scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:74) at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:96) at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:105) at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala) It looks like we're doomed w.r.t this one. The problem is that Scala reflection and reflective compiler (which is underlying toolboxes) use a different model of classfile loading than vanilla scalac does. Vanilla compiler has its classpath as a list of directories/jars on the filesystem, so it can exhaustively enumerate the packages on the classpath. Reflective compiler works with arbitrary classloaders, and classloaders don't have a concept of enumerating packages. As a result, when a reflective compiler sees "math" having "import scala.; import java.lang." imports in the lexical context, it doesn't know whether that "math" stands for root.math, scala.math or java.lang.math. So it has to speculate and provisionally creates a package for root.math, which ends up being a wrong choice. We could probably support a notion of "overloaded" packages, so that the compiler doesn't have to speculate and can store all the possible options, but that'd require redesign of reflection and probably of the typer as well. Ok, shall we make this major and include in 2.10.0's list of "known issues"? It seems so. I hoped someone will magically save the day before the release, but well... "We could probably support a notion of "overloaded" packages, so that the compiler doesn't have to speculate and can store all the possible options, but that'd require redesign of reflection and probably of the typer as well." This is important for other reasons, independent of reflection. SI-6039. I have gone to a lot of trouble to work around it only a small ways, because I am plagued by compilation errors whenever I try to recompile a subset of compiler sources. 
But that was only the beginning, and deterioration began immediately. Yet another reason it is important is because scanning the entire classpath up front is a very expensive piece of startup time. By request, a link to an old branch of mine with a rewrite of classpaths. Can we provide special treatment for URLClassLoaders? Eugene, care to elaborate? That was my suggestion from our meeting today. I don't think we should consider other ToolBox bugs without having a clear plan for this one, otherwise ToolBoxes are not really useful. Rather than deal with the lowest common denominator (the ClassLoader API), we could try to do a better job for the more powerful, and very common, URLClassLoader, which would let us get at the underlying folders and JAR files to enumerate the classpath, as is done in the compiler proper. This might be an opt-in feature, or even a feature implemented in an external library, that takes advantage of extra hooks we expose. When Jason and I were triaging reflection bugs today, we came to the conclusion that this one is a blocker for using toolboxes seriously, e.g. as a replacement for scala.tools.nsc.interactive. Well, it indeed is, because relative imports are very common in Scala programs. As to the comment itself, a possible solution would be to distinguish URLClassLoaders, which can exhaustively enumerate their contents, and treat them specially in the reflective compiler. That would bring conceptual feature parity to toolboxes w.r.t the regular compiler in 99% of the use cases. Will that work with sbt, ide and Maven? Let's start with sbt and IDE. I don't know how sbt is handling classloaders so I added Mark to watchers and let him comment on that. Same for Iulian. The IDE doesn't call ToolBox compilers. SBT already has a means to communicate the classpath to the embedded interpreter, see "Use the Scala REPL from project code".
But it is likely that some environments (maybe OSGi) will be fundamentally incompatible with classpath enumeration, and will not support ToolBox compilation. We might have to live with that (or provide hooks for someone else to build a bridge). But we shouldn't punish the mainstream. I believe that Paul's branch a few comments above has a little code for URLClassLoader enumeration. The IDE might want to call it for features like conditional breakpoints where you want to compile a little piece of code, no? If we are designing an API we should think about all future clients, not only current ones. I agree that we should provide convenience for the common case but we need to ensure that we have a way to support more exotic environments. In order to assure that, could we start with designing an API that allows one to pass a classpath enumerator to the Reflection API, try that with a URLClassLoader-based enumerator, and only then make it a default choice? Yeah, that's what I meant by "provide hooks". We would just provide a standard implementation of that hook for URLClassLoader, SBT.
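The special treatment discussed in this thread hinges on the fact that a URLClassLoader, unlike an arbitrary ClassLoader, can enumerate its classpath entries. A minimal Java sketch of that distinction (illustrative only; class and method names here are invented, not the actual compiler code):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class EnumerateClasspath {
    // Given an arbitrary ClassLoader, we can only enumerate its contents
    // when it happens to be a URLClassLoader (or another enumerable type).
    static URL[] enumerableEntries(ClassLoader cl) {
        if (cl instanceof URLClassLoader) {
            // The jars/directories backing the loader, which a compiler
            // could scan to list packages exhaustively.
            return ((URLClassLoader) cl).getURLs();
        }
        return null; // opaque loader: packages cannot be listed exhaustively
    }

    public static void main(String[] args) throws Exception {
        URLClassLoader urlLoader = new URLClassLoader(
                new URL[] { new URL("file:///tmp/example.jar") }, null);
        System.out.println(enumerableEntries(urlLoader).length); // 1
        // An anonymous ClassLoader subclass exposes nothing to enumerate.
        System.out.println(enumerableEntries(new ClassLoader(null) {}) == null); // true
    }
}
```

This is exactly why OSGi-style loaders mentioned above are problematic: they fall into the `null` branch, and a ToolBox would have no way to discover which packages exist.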
https://issues.scala-lang.org/si/jira.issueviews:issue-word/SI-6393/SI-6393.doc
Data logging functions for debugging purpose (CoAP) More... #include "core/net.h" #include "coap/coap_common.h" Go to the source code of this file. Data logging functions for debugging purpose (CoAP) coap_debug.h. Dump CoAP message for debugging purpose. Definition at line 121 of file coap_debug.c. Dump CoAP message header. Definition at line 181 of file coap_debug.c. Dump CoAP option. Definition at line 287 of file coap_debug.c. Dump the list of CoAP options. Definition at line 233 of file coap_debug.c. Convert a parameter to string representation. Definition at line 399 of file coap_debug.c.
https://oryx-embedded.com/doc/coap__debug_8h.html
A2.1 MATLAB BASICS MATLAB provides several standard functions. In addition, there are several toolboxes or collections of functions and procedures available as part of the MATLAB package. MATLAB offers the option of developing customized toolboxes. Many such tools are available worldwide. MATLAB is case sensitive, so the commands and programming variables must be used very carefully. It means A + B is not the same as a + b. The MATLAB prompt (>>) will be used to indicate where the commands are entered. Anything written after this prompt denotes user input (i.e., a command) followed by a carriage return (i.e., the "enter" key). A2.1.1 Help Command MATLAB provides help on any command or topic by typing the help command ...
https://www.oreilly.com/library/view/dynamics-of-structures/9789332579101/xhtml/Appendix2.xhtml
A prime palindrome integer is a positive integer (without leading zeros) which is prime as well as a palindrome. Given two positive integers m and n, where m < n, write a program to determine how many prime-palindrome integers are there in the range between m and n (both inclusive) and output them. The input contains two positive integers m and n where m < 3000 and n < 3000. Display the number of prime palindrome integers in the specified range along with their values in the format specified below:

Test your program with the sample data and some random data:

Example 1
INPUT: M = 100 N = 1000
OUTPUT:
The prime palindrome integers are:
101,131,151,181,191,313,353,373,383,727,757,787,797,919,929
Frequency of prime palindrome integers: 15

Example 2
INPUT: M = 100 N = 5000
OUTPUT: Out of range

import java.util.*;

class ISC2012q01 {
    public static void main(String args[]) throws InputMismatchException {
        Scanner scan = new Scanner(System.in);
        int m, n, i, j, t, c, r, a, freq;
        System.out.println("Enter two positive integers m and n, where m < 3000 and n < 3000: ");
        m = scan.nextInt();
        n = scan.nextInt();
        if (m < 3000 && n < 3000) {
            // To count the frequency of prime-palindrome numbers
            freq = 0;
            System.out.println("The prime palindrome integers are:");
            for (i = m; i <= n; i++) {
                t = i;
                // Check for prime
                c = 0;
                for (j = 1; j <= t; j++) {
                    if (t % j == 0) c++;
                }
                if (c == 2) {
                    // Check for palindrome
                    r = 0;
                    while (t > 0) {
                        a = t % 10;
                        r = r * 10 + a;
                        t = t / 10;
                    }
                    if (r == i) {
                        System.out.print(i + " ");
                        freq++;
                    }
                }
            }
            System.out.println("\nFrequency of prime palindrome integers:" + freq);
        } else {
            System.out.println("OUT OF RANGE");
        }
    } // end of main
} // end of class
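The divisor-counting primality test in the solution above does O(n) work per candidate. As an illustrative alternative (not the original exam answer; the class name is invented), trial division up to √n combined with the same digit-reversal palindrome check:

```java
public class PrimePalindrome {
    // Trial division up to sqrt(n): more than sufficient for n < 3000.
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int j = 2; j * j <= n; j++) {
            if (n % j == 0) return false;
        }
        return true;
    }

    // Reverse the digits and compare with the original number.
    static boolean isPalindrome(int n) {
        int r = 0;
        for (int t = n; t > 0; t /= 10) r = r * 10 + t % 10;
        return r == n;
    }

    public static void main(String[] args) {
        int count = 0;
        for (int i = 100; i <= 1000; i++) {
            if (isPrime(i) && isPalindrome(i)) {
                System.out.print(i + " ");
                count++;
            }
        }
        System.out.println("\nFrequency of prime palindrome integers: " + count); // 15
    }
}
```

Running this for M = 100, N = 1000 reproduces the 15 values listed in Example 1.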
https://www.wethementors.com/isc/computer-science/practical/isc-computer-science-practical-2012-question-1
A little bit of thinking and compromise can remove unnecessary complexity from the routes in an MVC application. For example: Morgan's web site has authenticated users, and Morgan decides the URLs for managing users should look like /morgan/detail. Morgan adds the following code to register the route:

routes.MapRoute(
    "User",
    "{username}/{action}",
    new { controller = "User", action = "Index" }
);

Morgan runs some tests and it looks like there is a problem. The User controller is getting requests for /home/index. Oops, that request should have gone to the Home controller. Fortunately, Morgan knows there are all sorts of whizz-bang features you can add to a route, so Morgan creates a route constraint:

public class UserRouteConstraint : IRouteConstraint
{
    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        var username = values["username"] as string;
        return IsCurrentUser(username);
    }

    private bool IsCurrentUser(string username)
    {
        // ... look up the user
    }
}

The constraint prevents the User route from gobbling requests to other controllers. It only allows requests to reach the User controller when the username exists in the application. Morgan tweaks the route registration code to include the constraint:

routes.MapRoute(
    "User",
    "{username}/{action}",
    new { controller = "User", action = "Index" },
    new { user = new UserRouteConstraint() }
);

Morgan runs some tests and everything works! Well … Everything works until a user registers with the name "home". Morgan goes off to tweak the code again...

As a rule of thumb I try to:

Sometimes you have to go break those rules, but let's look at Morgan's case. If Morgan was happy with a URL like /user/index/morgan, then Morgan wouldn't need to register any additional routes. The default routing entry in a new MVC application would happily invoke the index action of the User controller and pass along the username as the id parameter.
Morgan could also go with /user/morgan and use a simple route entry like this: routes.MapRoute( "User", "User/{username}/{action}", new { controller ="User", action="Index", username="*"} ); No constraints required! The KISS principle (keep it simple stupid) is a good design principle for routes. You’ll have fewer moving parts, consistent URL structures, and make the code easier to maintain. I agree with your KISS principle about routes. I have never used constraints as yet in my MVC apps. In fact what I am doing is building generic routes with {param1}, {param2}, type of structure. So I have only 9 routes configured and I manage to run my entire website on these 9 routes without any hassles. Cheers! Cyril Gupta codebix.com Resources: jarrettmeyer.com/... Nested Resources: jarrettmeyer.com/... Example: [Route("product/{name}")] [RouteDefault("name", "")] [RouteConstraint("name", "^[0-9]+_")] public ActionResult Product(string name) { return View(); } I personally don't like the attribute approach. I think it scatters what should be simple declarations far and wide across the code base and only increases the chances for collisions. Consider that 1) Controllers are a reflection of software design. They are related to the internal organization of your system, and are NOT created for the sake of "pretty urls". 2) On the other side, your final URLs should be friendly. URLs are created for users, and for SEO optimization, NOT for the sake of developers. So there is a abyss here, right? Using ASP.NET MVC (and, in fact, many other MVC frameworks, like Castle Monorail or Cake PHP) my urls, by default, are a reflection of the way my controllers and actions were created, and are not necessarily the most "friendly", thinking at the user level. I correct this impedance through the use of routes. So no, i do not agree that you should avoid using routes. 
Routes should be kept as simple as possible, but NOT at the cost of not-so-friendly URLs (which means bad usability), and definitely NOT at the cost of creating controllers just for the sake of nice URLs (which means bad software design). Sorry 'bout my English; I hope I made myself clear.

Although obviously this does inhibit future functionality if you want to create a new controller that happens to share the name of a user. :)

It is important to make URLs as memory- and bookmark-friendly for users as possible. This can be seen on Twitter: it's so easy to remember someone's Twitter URL. Great job, congrats.
http://odetocode.com/blogs/scott/archive/2010/01/25/kiss-your-asp-net-mvc-routes.aspx
CC-MAIN-2017-09
en
refinedweb
A file is simply a store of data. In the Java language, the File class belongs to the java.io package and represents the name and path of a file or directory. This example stores a file name and creates the corresponding file on disk. The File.createNewFile() method is used to create a file in Java. It returns the boolean value true if the file was created successfully, and false if the file already exists or the operation failed.

Example of how to create a file in Java:

    package FileHandling;

    import java.io.File;
    import java.io.IOException;

    public class FileCreate {
        public static void main(String[] args) throws IOException {
            File file = new File("c:/Test.txt");
            if (file.createNewFile()) {
                System.out.println("File is created");
            } else {
                System.out.println("File is already created");
            }
        }
    }

Output: File is created
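As a small extension of the tutorial's example (a sketch of my own, not from the original: the temp-directory path and the class name are assumptions), the boolean returned by createNewFile() can be used to tell a first creation apart from a repeated call:

```java
import java.io.File;
import java.io.IOException;

public class CreateTwice {
    public static void main(String[] args) throws IOException {
        // Use the system temp directory so the example runs on any
        // platform (the c:/ path above is Windows-only).
        File file = new File(System.getProperty("java.io.tmpdir"), "Test.txt");
        file.delete(); // start from a clean state for the demo

        // First call: the file does not exist yet, so it is created.
        System.out.println(file.createNewFile()); // prints "true"

        // Second call: the file already exists, so nothing is created.
        System.out.println(file.createNewFile()); // prints "false"

        file.delete(); // clean up after the demo
    }
}
```

Note that createNewFile() can also return false when the operation fails (for example, due to permissions), so in real code the false case should not automatically be read as "file already exists".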
http://www.roseindia.net/java/javatutorial/create-file-in-java.shtml
CC-MAIN-2017-09
en
refinedweb
kingjj: VDSL's great and all if you can get it, but if you are happy with the speeds provided by ADSL2+ it may not be worth upgrading (we have both VDSL and UFB available at home, happily staying with ADSL2 at present). #include <std_disclaimer> Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.

l43a2: Telecom has a pretty good network for streaming etc., and at the moment their VDSL has a free install which includes a master filter etc. [coverage map image] This is an image of your general area; the reddish-orange areas are VDSL coverage, with the blue squares being cabinets.

sbowness: Thanks to everyone for the responses so far. We are in VDSL-land, so it is very definitely an option. The painful thing will be unwinding the various discounts I get with VF for Sky, home and mobile. Would love to move away from Sky but I need it for the sport (same old story).

Michael Murphy: What do you get with VF and how much do you pay?

sbowness: [quoting Michael Murphy] What do you get with VF and how much do you pay?
I've just moved to the Entertain package with Sky Sport and SoHo and multiroom, and my mobile on their $39 plan. I got a loyalty discount off a new phone but didn't have to sign up to a term. I'm still waiting for the first bill with the whole lot on it, but I'm expecting the first one to be around the $215 mark. I should also mention that my mother's email address hangs off my account as well, so I would need to move us both to a more generic one. I posted a comment on the Vodafone Community forums (adding to the long-running "Where's the VDSL?" thread) and apparently VDSL was the subject of their most recent Vodafone Voice survey. As usual, VF won't confirm anything at this stage "but a little more patience might just pay off." I'll give them until the end of January and then make a call.
Snap seems a good alternative, but the cost of their modem is a bit up there. Telecom seems to have a great package, but their recent results in the TrueNet testing came off second best to Snap's (they were the only two VDSL providers that met the criteria). I'm not sure why I don't get good streaming here. My Speedtest results are pretty good [speed test screenshot], but I know there is more to it than that. Any tips?
http://www.geekzone.co.nz/forums.asp?forumid=151&topicid=138264
CC-MAIN-2017-09
en
refinedweb
1. INTRODUCTION

Nestlé UK is the British operating business of Nestlé SA, the world's largest food company. In the UK, we manufacture and distribute, via retail and catering outlets, and import and export, products in virtually every sector of the food and drink industry. Our brands include such household names as Nescafé, Rowntree, Crosse & Blackwell, Buitoni, Findus, Lyons Maid, SunPat, Gales, Perrier and many others; we also supply a range of major retailer private label products. In the UK, we employ some 15,000 people, in over 20 factory and head office establishments, with an annual turnover of £1.7 billion. Worldwide, Nestlé employs approximately 220,000 people, operates some 500 factories and has an annual turnover of approximately 70 billion Swiss Francs. Our own internal structure and the increasingly global nature of world food trade dictate that we source our raw materials and finished products on a truly international basis. Our European factories operate, similarly, on an international basis, and our production within the UK may equally be destined for European consumption as for the domestic market. Likewise, products sold in the UK may well have been produced elsewhere within Europe. Increasingly, therefore, we perceive Europe as a single trading entity and, in the area of regulation in particular, the need for a single framework of equitable, enforceable rules is paramount. We therefore welcome the opportunity to contribute our comments to this inquiry.

2. SUMMARY

2.1 Nestlé is committed to the responsible use of foods and food ingredients derived from genetic modification. It is also fully committed to openness and transparency in this use and in dialogue with other parties.

2.2 Current regulations impose significant restrictions on research, particularly when viewed on a global basis.
2.3 The UK model (ACRE/ACNFP) for controlling release into the environment has worked well and should be used as the basis for international harmonisation in order to remove current confusion and facilitate global trade.

2.4 The current EU framework for labelling GMOs and their derivatives, despite recent developments, remains ambiguous and incapable of uniform, meaningful application. Further consolidation of existing requirements is now urgently required, whereby principles applicable to current and future approvals will be established.

2.5 Codex Alimentarius should be the focus for internationally agreed safety and labelling procedures/mechanisms and requirements.

3. DETAILED COMMENT

3.1 Nestlé Position on Genetic Modification

New and creative solutions will be required to feed an ever-growing world population with affordable and wholesome foods in an environmentally sustainable way. As one of the world's major users of agricultural produce, Nestlé has been a pioneer in encouraging more efficient and sustainable farming methods, especially in the developing world, where we operate more than 100 factories. We fully recognise that biotechnology, including genetic modification, will be one of the principal tools available to meet these challenges. Traditional biotechnology such as plant and animal breeding has a long history, and the use of fermentation to produce preserved products such as cheese, pickles, bread, beer, salami, etc., is well established. Genetic modification has evolved from these traditional processes and allows improvements to be made rapidly, precisely and safely. Although Nestlé does not directly produce its own raw materials, we are firmly convinced that the responsible control and use of this technology guarantees safe products which will bring substantial benefits to farmers, industry and consumers alike.
Nestlé has therefore decided that it will use genetically modified crops and their derivatives, taking fully into consideration local legislation, consumer demand and concerns, and the global supply situation. As a responsible and responsive company, Nestlé encourages transparency and welcomes open dialogue with consumers. We are actively co-operating with suppliers, other food manufacturers, retailers, authorities and consumers in activities aimed at informing the public about developments arising from genetic modification. Furthermore, although we see no safety or scientific justification for specific labelling, we have recognised the legitimate consumer interest in this information and have commenced a programme (in addition to any legal requirements) to indicate the use of ingredients produced with the aid of genetic modification on the label wherever practicable. Our business is based on offering products tailored to meet the diverse needs and preferences of consumers in all parts of the world. Whatever the technology or raw material, Nestlé only uses ingredients which meet the highest international standards and which comply with all legal requirements. It is therefore essential that these standards and legal requirements are maintained by the authorities and perceived by consumers to be adequate, increasingly on a global rather than a national basis.

3.2 The Appropriateness and Efficacy of Current Regulation

(a) Research

Nestlé UK is not directly involved in research on genetic modification. However, Nestlé operates a number of research establishments around the world and, in particular, has an establishment dedicated to plant breeding in France. Gene technology is one of the tools available to this team. We believe that the application of Directive 90/219 on the contained use of genetically modified organisms has imposed undue restrictions on plant breeding at the research level.
We would therefore welcome a review of this legislation, recognising that at the present time such a review could well introduce political, in addition to scientific, considerations. As major users of agricultural raw materials, we are adamant that any agricultural research base within Europe must be at the forefront of science and technology, whilst retaining the principles of safety and good environmental practice as fundamental criteria.

(b) Release into the Environment

Nestlé UK follows closely the work and reports of the UK Advisory Committee on Releases into the Environment (ACRE); we are impressed by the professionalism of the committee and the presentation and information contained in its reports. The previous voluntary scheme within the UK has formed the basis of a European system for approval of release of GMOs. However, obvious and confusing overlaps have recently developed between Directive 90/220, the Novel Foods Regulation 258/97 and, more recently, Regulation 1813/97. The very recently agreed regulation referring specifically to Monsanto Soya and Novartis Maize, whilst clarifying to a certain extent provisions relating only to these two products, does not apply to further products approved in the interim period. There is a very clear need for the requirements of the various regulations to be consolidated and harmonised. Furthermore, there is a clear and urgent need for European mechanisms to be speeded up in order that the present confusion in trade, arising from varying numbers of products approved for use within the USA, Europe and other parts of the world, be reduced to a minimum. This issue is severely compounded in the case of commodity crops such as maize, soya and rapeseed, where there are already varieties approved in the United States and Canada which are not fully approved within the European Union.
As users of derivatives of these crops, we find that the legal status of these derivatives remains unclear; certainly the labelling provisions (see later paragraph) are confused.

(c) Novel Foods and their labelling

We do not believe that genetic modification in itself presents any new food safety risk, or that foods and food ingredients produced with the aid of genetically modified organisms represent a special class of new foods. They should be subject to the same type of risk assessment as any other new food product and its intended use, whatever the method of production which has been used. We therefore believe that the scope of the Novel Foods Regulation 258/97 is appropriate as a means of achieving a harmonised approach to the approval of all novel foods and as a basis for ensuring both consumer confidence and fair trade. However, the labelling requirements defined under Article 8 of this regulation were defined in extremely subjective terms and left open to potentially very wide interpretation. This interpretation is further confused by the reference in Article 5 to "substantially equivalent", whereas Article 8 refers to "no longer equivalent"; there is a difference of meaning between these phrases, but the extent and significance of this difference is totally unclear. In order to clarify the status and labelling requirements of Monsanto Roundup Ready Soya and Novartis Bt-maize, a further regulation has recently (26 May) been agreed. In our opinion, this latest regulation should now be consolidated with the requirements under Directive 90/220, Regulation 258/97 and Regulation 1813/97, and the underlying principles converted into a single global regulation which can be, and will be, applicable to all future approvals of genetically modified crops and their derivatives. Several crops previously approved in the United States have recently been notified to the EU authorities and fall within a legal lacuna.
We believe the aspect of "equivalence" to be fundamental to the whole question of labelling of Novel Foods. It is of some concern, therefore, that the EU regulators appear to have applied a far stricter interpretation to this term than is generally recognised internationally. Notwithstanding the current regulatory requirements for labelling of Novel Foods and derivatives, we remain concerned as to how these regulations will be enforced in practice. We have frequently stated that a general principle of labelling is that it must be accurate, truthful and meaningful; the legislation must be capable of uniform interpretation and it must be uniformly enforced. We do not believe that the current regulation will meet these criteria until further detailed requirements have been elucidated. In particular, as indicated in the Council Minutes, the question of thresholds and agreed methodology will be paramount. We would be pleased to have the opportunity to comment further to your committee on this, should the committee so wish.

3.3 Appropriateness and Efficacy of Current Regulation at the level of the UK and other Member States

We have indicated previously our belief that genetic modification will, in the longer term, offer potentially enormous benefits at all stages throughout the food chain, from primary agriculture through food processing to final product improvements, whether nutritional or quality related. However, it is equally clear that in these early days of the technology, there are widely differing views as to the need for regulation and/or information about the products. It is inevitable that the early introductions of genetically modified crops will carry improved agronomic traits. The benefits to the consumer will not, therefore, be immediately apparent. Equally, the interpretation of "equivalence" differs widely between interested parties.
This has led to the wide divergence of approach between the EU and the USA/Canada, with the consequential difficulties relating to the supply of commodity crops such as soya and maize. The EU cannot isolate itself from world commodity trade, and the more EU legislation diverges from that of the USA, Canada and the rest of the world, the greater will become the difficulties in sourcing commodity on a global basis. This will place additional financial burdens on our industry, and consequently consumers, without generating any tangible benefits. The initial clamouring for segregation of crops by some parties in Europe has now been modified (at least in words) to the "holy grail" of traceability. This is equally an overly bureaucratic requirement which, at the end of the day, does not meet consumer requirements but adds unnecessary costs to the food chain. In the longer term it will be essential for authorities, industry and all interested parties to re-establish credibility in approval mechanisms, the safety of the products and the integrity of the food chain. In this way, specific labelling requirements may be progressively relaxed and greater emphasis and reliance placed upon alternative means of supplying relevant information to specific interested consumers via modern technology such as carelines, bar-codes, etc.

3.4 Appropriate Jurisdictions for Decisions on GMOs

Many of the plants (and, no doubt, in the future animals) which have been or will be modified by gene technology form the basis of international trade, whether as commodities themselves or as components of final foodstuffs. It is therefore imperative that the regulation of these products, particularly from a safety viewpoint, should be controlled at as high an international level as possible. This might best be done via the FAO/WHO Codex Alimentarius mechanisms.
If global agreement to the principles can be achieved, these should then form the basis of local regulation and thus equivalent treatment of genetically modified organisms around the world. We are confident that appropriate and adequate mechanisms for safety evaluation exist in the Western world but are not aware of similarly thorough controls existing in China and the Far East, where considerable developments in this area are being made. We believe it would be an essential step towards ensuring consumer confidence in the technology, and hence its global acceptability, if the Oriental developments fell to be treated in an equivalent manner.

3.5 The Effect of Regulation on Competition

Any legislative controls/regulations must be capable of uniform interpretation and be equitably enforced across their range of application. Providing this is done, there should be no undue imbalance of impact on any sector of the industry. Problems will arise at an international level when local legislation (albeit European based) is out of line with other geographic areas. This increasingly appears to be the case with regard to genetic modification. With increased internationalisation/globalisation of food manufacture and trading, it will be inevitable that business will move from one location to another if undue cost pressures are imposed upon it. This will apply both to research and development and to food manufacture itself. Future regulation in this area must remain based on scientific considerations, albeit tempered by a political recognition of the sensitivity of this technology, and must be applicable to all relevant stages of the food chain, regardless of the size of the enterprise. Derogations from the legislation should be minimal, if any, and, if granted, must in no way prejudice consumer confidence in the totality of the regulatory control over genetic modification.

4 June 1998
https://www.publications.parliament.uk/pa/ld199899/ldselect/ldeucom/11/11we37.htm
CC-MAIN-2017-09
en
refinedweb
All - O'Reilly Media

Four short links: 17 February 2017 (Nat Torkington, 2017-02-17)
Robot Governance, Emotional Labour, Predicting Personality, and Music History
1. Who Should Own the Robots? (Tyler Cowen) - Designing governance for The Robot Future is definitely a Two Beer Problem.
2. Emotional Labour for Gmail - Automate emotional labor in Gmail messages.
3. Beyond the Words: Predicting User Personality from Heterogeneous Information (via Adrian Colyer)
4. Theft: A History of Music - a graphic novel laying out a 2,000-year-long history of music, from Plato to rap. The comic is by James Boyle, Jennifer Jenkins, and the late Keith Aoki. You can buy print, or download for free.

Amir Shevat on workplace communication (Jon Bruner, 2017-02-16)
The O'Reilly Bots Podcast: Slack's head of developer relations talks about what bots can bring to Slack channels. In this episode of the O'Reilly Bots Podcast, Pete Skomoroch and I speak with Amir Shevat, head of developer relations at Slack and the author of the forthcoming O'Reilly book Designing Bots: Creating Conversational Experiences.

Simon Endres on designing in an arms race of high-tech materials (Mary Treseler, 2017-02-16)
The O'Reilly Design Podcast: The guiding light of strategy, designing Allbirds, and what makes the magic of a brand identity. In this week's Design Podcast, I sit down with Simon Endres, creative director and partner at Red Antler. We talk about working from a single idea, how Red Antler is helping transform product categories, and the importance of having a point of view.

Four short links: 16 February 2017 (Nat Torkington, 2017-02-16)
Memory-Busting JavaScript, Taobao Villages, Drone Simulation, and Bio Bots
1. ASLR-Busting JavaScript (Ars Technica) - modern chips randomize where your programs actually live in memory, to make it harder for someone to overwrite your code. This clever hack (in JavaScript!) makes the CPU cache reveal (through faster returns) where your code is. I'm in awe.
2. China's Taobao Villages (Quartz) - Today, the township and its surrounding area are China's domestic capital for one rather specific category of products: acting and dance costumes.
Half of the township's 45,000 residents produce or sell costumes, ranging from movie-villain attire to cute versions of snakes, alligators, and monkeys, that are sold on Alibaba-owned Taobao, the nation's largest e-commerce platform.
3. Aerial Informatics and Robotics Platform (Microsoft) - open source drone simulator.
4. How to Build Your Own Bio Bot (Ray Kurzweil) - researchers are sharing a protocol with engineering details for their current generation of millimeter-scale soft robotic bio-bots.

How Python syntax works beneath the surface (Aaron Maxwell, 2017-02-16)
Use Python's magic methods to amplify your code.

Four short links: 15 Feb 2017 (Nat Torkington, 2017-02-15)
Docker Data, Smart Broadcasting, Open Source, and Cellphone Spy Tools
1. Docker Data Kit - Connect processes into powerful data pipelines with a simple git-like filesystem interface.
2. RedQueen: An online algorithm for smart broadcasting in social networks (Adrian Colyer)
3. Open Source Guides - GitHub's guide to making and contributing to open source. GitHub's is nicely packaged into visual and consumable chunks, but I still prefer the (newly updated) Producing Open Source Software. The more people know how to do open source, the better.
4. Cellphone Spy Tools Flood Local Police Departments

Doug Barth and Evan Gilman on Zero Trust networks (Courtney Nash, 2017-02-15)
The O'Reilly Security Podcast: The problem with perimeter security, rethinking trust in a networked world, and automation as an enabler. In this episode, I talk with Doug Barth, site reliability engineer at Stripe, and Evan Gilman, Doug's former colleague from PagerDuty who is now working independently on Zero Trust networking. They are also co-authoring a book for O'Reilly on Zero Trust networks. They discuss the problems with traditional perimeter security models, rethinking trust in a networked world, and automation as an enabler.

Why you need a data strategy, and what happens if you don't have one (Jerry Overton, 2017-02-14)
How to map out a plan for finding value in data.

Four short links: 14 Feb 2017 (Nat Torkington, 2017-02-14)
Rapping Neural Network, H-1B Research, Quantifying Controversy, Social Media Research Tools
1. Rapping Neural Network - It's a neural network that has been trained on rap songs, and can use any lyrics you feed it and write a new song (it now writes word by word as opposed to line by line) that rhymes and has a flow (to an extent). With examples.
2. H-1B Research - H-1B holders are paid less and are often weaker in skills compared to their American counterparts.
3. Amazon Chime - interesting to see a business service from Amazon, not an operations service. This is better (they claim) meeting software: move between devices, with screen-sharing, video, chat, and file-sharing.
4. Quantifying Controversy in Social Media
5. Social Media Research Toolkit - a list of 50+ social media research tools curated by researchers at the Social Media Lab at the Ted Rogers School of Management, Ryerson University. The kit features tools that have been used in peer-reviewed academic studies. Many tools are free to use and require little or no programming. Some are simple data collectors, such as tweepy, a Python library for collecting Tweets, and others are a bit more robust, such as Netlytic, a multi-platform (Twitter, Facebook, and Instagram) data collector and analyzer, developed by our lab.
All of the tools are confirmed available and operational.</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 14 Feb 2017.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington The dirty secret of machine learning 2017-02-13T12:00:00Z tag: <p><img src=''/></p><p><em>David Beyer talks about AI adoption challenges, who stands to benefit most from the technology, and what's missing from the conversation.</em></p><p>Continue reading <a href=''>The dirty secret of machine learning.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jenn Webb Four short links: 13 Feb 2017 2017-02-13T12:00:00Z tag: <p><em>Urban Attractors, Millimetre-Scale Computing, Ship Small Code, and C++ Big Data</em></p><ol> <li> <a href="">Urban Attractors: Discovering Patterns in Regions of Attraction in Cities</a> -- <i>We use a hierarchical clustering algorithm to classify all places in the city by their features of attraction. We detect three types of Urban Attractors in Riyadh during the morning period: Global, which are significant places in the city, and Downtown, which are the central business district and Residential attractors. In addition, we uncover what makes these places different in terms of attraction patterns. We used a statistical significance testing approach to rigorously quantify the relationship between Points of Interests (POIs) types (services) and the three patterns of Urban Attractors we detected.</i> </li> <li> <a href="">Millimetre-Scale Deep Learning</a> -- <i>Another micro mote they presented at the ISSCC incorporates a deep-learning processor that can operate a neural network while using just 288 microwatts. </i> </li> <li> <a href="">Ship Small Diffs</a> (Dan McKinley) -- <i>your deploys should be measured in dozens of lines of code rather than hundreds. [...] In online systems, you have to ship code to prove that it works. [...] 
Your real problem is releasing frequently.</i> So quotable, so good.</li> <li> <a href="">Thrill</a> -- <i>distributed big data batch computations on a cluster of machines</i> ... in C++. (via <a href="">Harris Brakmic</a>)</li> </ol> <p>Continue reading <a href=''>Four short links: 13 Feb 2017.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington Four short links: 10 Feb 2017 2017-02-10T12:00:00Z tag: <p><em>Microsoft Graph Engine, Data Exploration, Godel Escher Bach, and Docker Secrets</em></p><ol> <li> <a href="">Microsoft Graph Engine</a> -- open source (Windows now, Unix coming) graph data engine. It's the open source implementation of <a href="">Trinity: A Distributed Graph Engine on a Memory Cloud</a>.</li> <li> <a href="">Superset</a> -- AirBnB's <i>data exploration platform designed to be visual, intuitive, and interactive</i> now with a better SQL IDE.</li> <li> <a href="">MIT Godel Escher Bach Lectures</a> -- not Hofstadter himself, but a thorough walkthrough of the premises and ideas in the book.</li> <li> <a href="">Docker Secrets Management</a> -- interesting to see etcd getting some competition here.</li> </ol> <p>Continue reading <a href=''>Four short links: 10 Feb 2017.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington Hacker quantified security 2017-02-10T11:00:00Z tag: <p><img src=''/></p><p><em>Alex Rice on the importance of inviting hackers to find vulnerabilities in your system, and how to measure the results of incorporating their feedback.</em></p><p>Continue reading <a href=''>Hacker quantified security.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Alex Rice Tom Davenport on mitigating AI's impact on jobs and business 2017-02-09T12:20:00Z tag: <p><img src=''/></p><p><em>The O'Reilly Radar Podcast: The value humans bring to AI, guaranteed job programs, and the lack of AI 
productivity.</em></p><p>This week, I sit down with <a href="">Tom Davenport</a>., <em><a href="">Competing on Analytics: The New Science of Winning</a></em>; his new book <em><a href="">Only Humans Need Apply: Winners and Losers in the Age of Smart Machines</a></em>, which looks at how AI is impacting businesses; and we talk more broadly about how AI is impacting society and what we need to do to keep ourselves on a utopian path.</p><p>Continue reading <a href=''>Tom Davenport on mitigating AI's impact on jobs and business.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jenn Webb How to build—and grow—a strong design practice 2017-02-09T12:00:00Z tag: <p><img src=''/></p><p><em>5 questions for Aarron Walter: Shaping products, growing teams, and managing through change.</em></p><p>I recently asked <a href="">Aarron Walter</a>, VP of design education at InVision and author of <a href=""><em>Designing for Emotion</em></a>, to discuss what he has learned through his years of building and managing design teams. At the <a href="">O’Reilly Design Conference</a>, Aaron will be presenting a session, <a href=""><em>Hard-learned lessons in leading design</em></a>.</p> <h3>Your talk at the upcoming O'Reilly Design Conference is titled <em>Hard-learned lessons in leading design</em>. Tell me what attendees should expect.</h3> <p>I had the unique opportunity of watching a company grow from just a handful of people to more than 550 over the course of eight years at MailChimp. When I started we had a few thousand customers, but when I left in February of 2016, there were more than 10 million worldwide. We saw tremendous growth, and I learned so much in my time there. 
In my talk, I'll be sharing the most salient lessons I learned along the way—how to shape a product, grow a team, how a company changes and how it changes people's careers, and a lot more.</p> <h3>What are some of the challenges that come along with building and leading a design team in a strong growth period?</h3> <p>As a company grows, the people who run it have to grow, too. There's a steep learning curve. When you're a small team it's easy to make decisions and get things done. But when a company grows, clear processes are needed, more people need to be brought into the planning process, and rapport has to be developed between teams and key individuals.</p> <p>The trick is you never really know what stage the company is in, so there's always uncertainty about whether you're doing the right thing. Everyone has to adapt and change with each new stage, and that can be hard for some people.</p> <h3>What are some of the more memorable lessons you learned along the way?</h3> <p>Early on as the director of UX, I thought my most important job was designing a great product. That was true but only until we needed to start building teams. Then my most important job was hiring great people. That remained my top priority for years to come, and I see it as my lasting legacy within the company. There are so many smart, talented people at MailChimp. I'm proud to have played a part in hiring and mentoring a number of people who've gone on to lead their own teams.</p> <p>In the early years of the product, we were focused on the future, toward new features and new ideas. But as the product and company matured, we had to master the art of refinement. Feature production is a treadmill: there will always be something else you can build. But if those features are half-baked or unrefined, you can end up with a robust product that is too complicated or too broken to use. 
Phil Libin <a href="">said it best</a>, "The best product companies in the world have figured out how to make constant quality improvements part of their essential DNA."</p> <h3>You will be speaking about the importance of building a strong design practice. Can you explain what this looks like?</h3> <p>A strong design practice has these things going for it:</p> <ul> <li>A product vision that makes it clear to everyone how the product fits into the lives of the audience.</li> <li>A rigorous process for understanding the problem through research, customer interaction, and debate.</li> <li>A culture of feedback where designers can continue to grow and the work gets pushed to its potential.</li> <li>Strong relationships with other teams. Design is a continuum, not just a step in the process. You have to work with everyone in the process to produce great products.</li> </ul> <h3>You're speaking at the O'Reilly Design Conference in March. What sessions are you interested in attending?</h3> <p>I'm excited to hear <a href="">Alan Cooper</a> speak. His book <a href=""><em>The Inmates are Running the Asylum</em></a> made a big impression on me. I'm also looking forward to hearing <a href="">Dan Hill</a> talk about <a href="">the UX of buildings, cities, and infrastructure.</a> Architecture and city planning are rife with lessons for the UX practice.</p> <p>It's an incredible lineup of speakers!</p> <p>Continue reading <a href=''>How to build—and grow—a strong design practice.</a></p> Mary Treseler Deep learning for Apache Spark 2017-02-09T12:00:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Data Show Podcast: Jason Dai on BigDL, a library for deep learning on existing data frameworks.</em></p><p>In this episode of the <a href="">Data Show</a>, I spoke with <a href="">Jason Dai</a>, CTO of big data technologies at Intel, and co-chair of <a href="">Strata + Hadoop World Beijing</a>.
Dai and his team are prolific and longstanding contributors to the Apache Spark project. Their early contributions to Spark tended to be on the systems side and included Netty-based shuffle, a fair-scheduler, and the “yarn-client” mode. Recently, they have been contributing tools for advanced analytics. In partnership with major cloud providers in China, they’ve written implementations of algorithmic building blocks and <a href="">machine learning models</a> that let Apache Spark users scale to extremely high-dimensional models and large data sets. They achieve scalability by taking advantage of things like <a href="">data sparsity</a> and Intel’s <a href="">MKL software</a>. Along the way, they’ve gained valuable experience and insight into how companies deploy machine learning models in real-world applications.</p><p>Continue reading <a href=''>Deep learning for Apache Spark.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Ben Lorica Four short links: 9 February 2017 2017-02-09T11:50:00Z tag: <p><em>In-Memory Malware, Machine Ethics, Open Source Maintainer's Dashboard, and Cards Against Silicon Valley</em></p><ol> <li> <a href="">In-Memory Malware Infesting Banks</a> (Ars Technica) -- <i>According to research Kaspersky Lab plans to publish Wednesday, networks belonging to at least 140 banks and other enterprises have been infected by malware that relies on the same in-memory design [as Stuxnet] to remain nearly invisible</i>. (via <a href="">Boing Boing</a>)</li> <li> <a href="">Technical Challenges in Machine Ethics</a> (Robohub) -- interesting interview with a researcher who is attempting to implement ethics in software. 
Fascinating to read about the approach and challenges.</li> <li> <a href="">Scope</a> -- nifty tool to help busy open source maintainers stay on top of their GitHub-hosted projects...dashboard for critical issues, PRs, etc.</li> <li> <a href="">Cards Against Silicon Valley</a> -- spot-on tragicomedy.</li> </ol> <p>Continue reading <a href=''>Four short links: 9 February 2017.</a></p> Nat Torkington Four short links: 8 February 2017 2017-02-08T11:40:00Z tag: <p><em>Becoming a Troll, Magic Paper, HTTPS Interception, and Deep NLP</em></p><ol> <li> <a href="">Anyone Can Become a Troll</a> (PDF) -- <i>A predictive model of trolling behavior shows that mood and discussion context together can explain trolling behavior better than an individual’s history of trolling. These results combine to suggest that ordinary people can, under the right circumstances, behave like trolls.</i> (via <a href="">Marginal Revolution</a>)</li> <li> <a href="">Magic Paper</a> -- printed with light, erased with heat, and reusable up to 80 times. (via <a href="">Slashdot</a>)</li> <li> <a href="">The Security Implication of HTTPS Interception</a> (PDF)</li> <li> <a href="">Deep Natural Language Processing Course</a> -- <i>This repository contains the lecture slides and course description for the Deep Natural Language Processing course offered in Hilary Term 2017 at the University of Oxford.</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 8 February 2017.</a></p> Nat Torkington What is the new reduce algorithm in C++17?
2017-02-08T10:00:00Z tag: <p><img src=''/></p><p><em>Learn how to allow for parallelization using the reduce algorithm, new in C++17.</em></p><p>Continue reading <a href=''>What is the new reduce algorithm in C++17?.</a></p> Jason Turner Four short links: 7 February 2017 2017-02-07T12:10:00Z tag: <p><em>Game Theory, Algorithms and Robotics, High School Not Enough, and RethinkDB Rises</em></p><ol> <li> <a href="">Game Theory in Practice</a> (The Economist) -- various firms around the world offering simulations/models of scenarios like negotiations, auctions, regulation, to figure out strategies and likely courses of action from other players.</li> <li> <a href="">Videos from the 12th Workshop on Algorithmic Foundations of Robotics</a> -- there are plenty with titles like "non-Gaussian belief spaces" (possibly a description of modern America) but also keynotes with titles like <a href="">Replicators, Transformers, and Robot Swarms</a>. </li> <li> <a href="">No Jobs for High School Grads</a> (NYT)</li> <li> <a href="">The Liberation of RethinkDB</a></li> </ol> <p>Continue reading <a href=''>Four short links: 7 February 2017.</a></p> Nat Torkington Staying out of trouble with big data 2017-02-07T12:00:00Z tag: <p><img src=''/></p><p><em>Understanding the FTC’s role in policing analytics.</em></p><p>Continue reading <a href=''>Staying out of trouble with big data.</a></p> Kristi Wolff, Crystal Skelton How do I use the slice notation in Python?
2017-02-07T10:00:00Z tag: <p><img src=''/></p><p><em>Learn how to extract data from a structure correctly and efficiently using Python's slice notation.</em></p> <h2>How do I use the slice notation in Python?</h2> <p id="id-JeIgsO">In this tutorial, we will review the Python slice notation, and you will learn how to use it effectively. Slicing is used to retrieve a subset of values.</p> <p id="id-45IGuk">The basic slicing technique is to define the starting point, the stopping point, and the step size, also known as the stride.</p><p>Continue reading <a href=''>How do I use the slice notation in Python?.</a></p> Eszter D. Schoell Four short links: 6 February 2017 2017-02-06T12:30:00Z tag: <p><em>NPC AI, Deep Learning Math Proofs, Amazon Antitrust, and Code is Law</em></p><ol> <li> <a href="">Building Character AI Through Machine Learning</a> -- NPCs that learn from/imitate humans. (via <a href="">Greg Borenstein</a>)</li> <li> <a href="">Network-Guided Proof Search</a> -- <i>We give experimental evidence that with a hybrid, two-phase approach, deep-learning-based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved.</i> </li> <li> <a href="">Amazon's Antitrust Paradox</a> -- Fascinating overview of the American conception of antitrust.</li> <li> <a href="">FBI's RAP-BACK Program</a> -- software encodes "guilty before trial." <i>employers enrolled in federal and state Rap Back programs receive ongoing, real-time notifications and updates about their employees’ run-ins with law enforcement, including arrests at protests and charges that do not end up in convictions.</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 6 February 2017.</a></p> Nat Torkington How do you customize packages in a Kickstart installation?
2017-02-06T09:00:00Z tag: <p><img src=''/></p><p><em>Learn how to set up your configuration file to indicate the types of packages you want to install by using the “yum” command.</em></p><p>Continue reading <a href=''>How do you customize packages in a Kickstart installation?.</a></p> Ric Messier Four short links: 3 February 2017 2017-02-03T11:40:00Z tag: <p><em>Stream Alerting, Probabilistic Cognition, Migrations at Scale, and Interactive Machine Learning</em></p><ol> <li> <a href="">StreamAlert</a> -- <i>a serverless, real-time data analysis framework that empowers you to ingest, analyze, and alert on data from any environment, using data sources and alerting logic you define.</i> Open source from AirBnB.</li> <li> <a href="">Probabilistic Models of Cognition</a></li> <li> <a href="">Online Migrations at Scale</a> -- <i>In this post, we’ll explain how we safely did one large migration of our hundreds of millions of Subscriptions objects.</i> This is a solid process.</li> <li> <a href="">Interactive Machine Learning</a> (Greg Borenstein) -- intro to, and overview of, the field of Interactive Machine Learning, elucidating <i>the principles for designing systems that let humans use these learning systems to do things they care about.</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 3 February 2017.</a></p> Nat Torkington Personalization's big question: Why am I seeing this? 2017-02-03T11:00:00Z tag: <p><img src=''/></p><p><em>Sara M.
Watson from Digital Asia Hub discusses the state of personalization and how it can become more useful for consumers.</em></p><p>Continue reading <a href=''>Personalization's big question: Why am I seeing this?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Mac Slocum How do you create a Kickstart file? 2017-02-03T10:00:00Z tag: <p><img src=''/></p><p><em>Learn how to create and make changes to a Kickstart configuration file using the anaconda-ks.cfg.</em></p><p>Continue reading <a href=''>How do you create a Kickstart file?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Ric Messier How do I use the set_intersection algorithm in C++? 2017-02-03T09:00:00Z tag: <p><img src=''/></p><p><em>Learn how to handle array comparisons using the set_intersection algorithm in C++.</em></p><p>Continue reading <a href=''>How do I use the set_intersection algorithm in C++?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jason Turner Mike Vladimer on IoT connectivity 2017-02-02T12:50:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Hardware Podcast: Powering connected devices with low-power networks.</em></p><p>In this episode of the <a href="">O’Reilly Hardware Podcast</a>, Brian Jepson and I speak with <a href="">Mike Vladimer</a>, co-founder of the <a href="">Orange IoT Studio</a> at Orange Silicon Valley. 
Vladimer discusses how Internet of Things devices could benefit from connectivity options other than those provided by well-known technologies (including cellular, WiFi, and Bluetooth), and explains the LoRa wireless protocol, which supports long-range and lower-power applications.</p><p>Continue reading <a href=''>Mike Vladimer on IoT connectivity.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jeff Bleiel Extend Spark ML for your own model/transformer types 2017-02-02T12:00:00Z tag: <p><img src=''/></p><p><em>How to use the wordcount example as a starting point (and you thought you’d escape the wordcount example).</em></p><p>While Spark ML pipelines have a wide variety of algorithms, you may find yourself wanting additional functionality without having to leave the pipeline model. In Spark MLlib, this isn't much of a problem—you can manually implement your algorithm with RDD transformations and keep going from there. For Spark ML pipelines, the same approach can work, but we lose some of the nicely integrated properties of the pipeline, including the ability to automatically run meta-algorithms, such as cross-validation parameter search. In this article, you will learn how to extend the Spark ML pipeline model using the standard wordcount example as a starting point (one can never really escape the intro to big data wordcount example).</p><p>Continue reading <a href=''>Extend Spark ML for your own model/transformer types.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Holden Karau Kat Holmes on Microsoft’s human-led approach to tackling society’s challenges 2017-02-02T12:00:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Design Podcast: Building bridges across disciplines, universal vs. 
inclusive design, and what playground design can teach us about inclusion.</em></p><p>In this week’s Design Podcast, I sit down with <a href="">Kat Holmes</a>, principal design director, inclusive design at Microsoft. We talk about what she looks for in designers, working on the right problems to solve, and why both inclusive and universal design are important but not the same.</p><p>Continue reading <a href=''>Kat Holmes on Microsoft’s human-led approach to tackling society’s challenges.</a></p> Mary Treseler Four short links: 2 February 2017 2017-02-02T11:50:00Z tag: <p><em>Physical Authentication, Crappy Robots, Immigration Game, and NN Flashcards</em></p><ol> <li> <a href="">Pervasive, Dynamic Authentication of Physical Items</a> (ACM Queue) -- <i>Silicon PUF circuits generate output response bits based on a silicon device's manufacturing variation</i>. This is cute!</li> <li> <a href="">Hebocon</a> -- crappy robot competition.</li> <li> <a href="">Martian Immigration Nightmare</a> -- a game can make a point.</li> <li> <a href="">TraiNNing Cards</a> -- flashcards for neural networks. Hilarious!</li> </ol> <p>Continue reading <a href=''>Four short links: 2 February 2017.</a></p> Nat Torkington Should you containerize your Go code? 2017-02-02T11:00:00Z tag: <p><img src=''/></p><p><em>Containers help you distribute, deploy, run, and test your Golang projects.</em></p><p>I’m a huge fan of Go, and I’m also really interested in containers, and how they make it easier to deploy code, especially at scale. But not all Go programmers use containers.
In this article I’ll explore some reasons why you really should consider them for your Go code — and then we’ll look at some cases where containers wouldn’t add any benefit at all.</p> <p>First, let’s just make sure we’re all on the same page about what we mean by “containers.”</p> <h2>What is a container?</h2> <p>There are probably about as many different definitions of what a container is as there are people using them. For many, the word “container” is synonymous with Docker, although <a href="">containers have been around a lot longer</a> than either the <a href="">Docker open-source project</a> or <a href="">Docker the company</a>. If you're new to containers, Docker is probably your best starting point, with its developer-friendly command line support, but there are other implementations available:</p> <ul> <li> <a href="">Linux Containers</a> - container implementations including LXC and LXD</li> <li> <a href="">rkt</a> - pod-native container engine from CoreOS</li> <li> <a href="">runc</a> - running containers per the <a href="">OCI</a> specification</li> <li> <a href="">Windows Containers</a> - Windows Server containers and Hyper-V containers</li> </ul> <p>Containers are a virtualization technology — they let you isolate an application so that it’s under the impression it’s running in its own physical machine. In that sense a container is similar to a virtual machine, except that it uses the operating system on the host rather than having its own operating system.</p> <p>You start a container from a container image, which bundles up everything that the application needs to run, including all its runtime dependencies. These images make for a convenient distribution package.</p> <h2>Containers make it easy to distribute your code</h2> <p>Because the dependencies are part of the container image, you’ll get exactly the same versions of all the dependencies when you run the image on your development machine, in test, or in production. 
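The packaging idea above can be sketched as a minimal image definition. This is a hypothetical example, not from the article: the `webapp` binary name and the `static/` directory are made up.

```shell
# Hypothetical sketch: "webapp" and static/ are made-up names, not from the article.
# A minimal Dockerfile that packages a statically linked Go binary plus its assets,
# so every environment (dev, test, production) runs identical bits.
cat > Dockerfile <<'EOF'
FROM scratch
COPY webapp /webapp
COPY static/ /static/
ENTRYPOINT ["/webapp"]
EOF

# Build steps (commented out so the sketch runs without Go or Docker installed):
# CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o webapp .
# docker build -t webapp:latest .
```

Because the image starts `FROM scratch`, everything the program needs ships inside it; any machine that can pull the image gets exactly the same dependencies.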
No more bugs that turn out to be caused by a library version mismatch between your laptop and the machines in the data center.</p> <p>But one of the joys of Go is that it compiles into a single binary executable. You have to <a href="">deal with dependencies at build time</a>, but there are no runtime dependencies and no libraries to manage. If you’ve ever worked in, say, Python, JavaScript, Ruby, or Java, you’ll find this aspect of Go to be a breath of fresh air: you can get a single executable file out of the Go compilation process, and it’s literally all you need to move to any machine where you want it to run. You don’t need to worry about making sure the target machine has the right version libraries or execution environment installed alongside your program.</p> <p>Err, so, if you have a single binary, what’s the point of packaging up that binary inside a container?</p> <p>The answer is that there might be other things you want to package up alongside your binary. If you’re building a web site, or if you have configuration files that accompany your program, you may very well have your static files separate. You could build them into the executable with <a href="">go-bindata</a> or similar if you prefer. Or you can build a container image that includes the binary file and its static resources together in one neat package. Wherever you put that container image, it has everything it needs for your program to run.</p> <h2>Containers help you deploy your code</h2> <p>To keep things simple, let’s assume you don’t have any static resources, just a single binary. You build that executable and then move it to the machine where it needs to run — you simply need to move that one file. Go makes cross-compilation easy, so it’s no big deal even if the target machine where you want to run the code differs from the one you’re building on. 
All you need to do is <a href="">specify the target machine’s architecture and operating system in environment variables</a> when you run go build.</p> <p>In many traditional deployments, you know exactly which (virtual) machine is going to run each executable. You might have multiple hosts (e.g. for high availability), but now that we know how easy it is to build for the target machine, it’s not exactly rocket science to ship that lovely Go binary to where it needs to run.</p> <p>But the modern approach to deploying code is to run a cluster of machines, and use an orchestrator like <a href="">Kubernetes</a>, <a href="">ECS</a>, or <a href="">Docker Swarm</a> to place containers somewhere in the cluster.</p> <p>Containers are great for this, because an image acts as a standard “unit of deployment” for the orchestrator to act on. The orchestrator tells the machine what code to run by giving it an identifier for a container image; if the machine doesn’t already have a copy of that image it can pull it from a container registry.</p> <p>It’s certainly possible to run an orchestrator to <a href="">deploy code that isn’t packaged up in container images</a>. But by using containers you’re taking advantage of a broadly common, language-agnostic deployment methodology that’s being used increasingly across industry. Even if your company is a pure Go shop today, that might not be the case forever. By using containers you’ll have a common mechanism for deploying different code components whatever language they might be written in, so you’re avoiding language lock-in.</p> <p>When I said “modern approach to deploying code,” you might quite rightly have thought “serverless?” Serverless implementations are running each executable function inside a container. 
Deployment to serverless looks different today, but I wouldn’t be at all surprised to see a blurring of the terms - in some environments you can already <a href="">ship your serverless function in the form of a Docker container image</a> (not least so that it has all the dependencies it needs).</p> <h2>Containers help you restrict the resources your code can access</h2> <p>When you run a Go (or any other) executable on a Linux machine you’re starting a process. If you execute code inside a Linux container you’re also starting a process in almost exactly the same way — it’s just that the process has such a restricted view of the resources available on the machine that it practically thinks it has a machine to itself.</p> <p>Restricting the process’s view of the world inside a container has many of the same advantages of a virtual machine for running multiple different applications on the same hardware. For example, a containerized process has no way to access files or devices outside its container unless you explicitly allow it, so it can’t affect those files or devices (either maliciously or simply due to a bug). It might think it’s thrashing the CPU to perform an intensive operation, but the system may have limited the amount of processing power it can use so that other applications and services can continue to operate.</p> <p>That restricted view of resources is created using <a href="">namespaces</a> and <a href="">cgroups</a>. Exactly what those terms mean is a topic for another time, but people tell me they’ve found <a href="">this talk I did at Golang UK</a> to be helpful.</p> <p>If you want to restrict an executable so that it only has access to a limited set of resources, containers give you a neat, friendly, and repeatable way of doing that.</p> <p>It’s possible to create the same restrictions for an executable in other ways, but containers make it easy. 
For example, traditionally sysadmins have done lots of careful and potentially fiddly work to set up the right permissions for things like files, devices and network ports. In the world of containers it’s very easy for <a href="">developers to convey their intent</a> that the code should be able to use, say, certain ports or volumes (and no others), and that by default everything inside the container is private to the container. You’ll want to make sure your <a href="">Dockerfiles follow security best practices</a>, but there’s no need for bespoke operations work to get the permissions set up every time a development team deploys a new application or service.</p> <h2>Containers help you test locally with other components</h2> <p>Many applications need to access other components, like a database or a queuing service (or limitless other things). When you want to run your program locally for testing, you’ll need those components installed too.</p> <p>But what if you need different versions of components for different applications? Or what if the configuration is different for different projects? For example, if you’re a contract developer you could easily have two clients running with different versions of, say, Postgres. It’s possible to run multiple copies on your laptop, but it can be painful (and you have to make sure you’re using the right version).</p> <p>Life can be much simpler if you use the containerized versions of the services you need. 
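As a sketch of that per-project setup (the Postgres versions and port numbers here are invented for illustration), each client can get its own compose file pinning its own service versions:

```shell
# Hypothetical example: the service versions and ports are illustrative only.
# One compose file per project, each pinning the exact Postgres version it needs.
cat > docker-compose.client-a.yml <<'EOF'
services:
  db:
    image: postgres:9.5   # client A pins 9.5
    ports:
      - "5432:5432"
EOF

cat > docker-compose.client-b.yml <<'EOF'
services:
  db:
    image: postgres:9.6   # client B pins 9.6
    ports:
      - "5433:5432"       # different host port, so both stacks can run at once
EOF

# Start whichever stack you need (commented out; requires Docker):
# docker compose -f docker-compose.client-a.yml up -d
```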
You can set up a <a href="">docker-compose file</a> for each project to bring up the right set of components with all the correct configuration.</p> <h2>What can containers do for you?</h2> <p>In summary, containers make it easy to:</p> <ul> <li>distribute your code, in a package that can run anywhere</li> <li>deploy your software under an orchestrator</li> <li>constrain the resources your (or someone else’s) code can use on the host machine</li> <li>run and test your software locally along with all the services it needs</li> </ul> <p>If you’re a Go developer working on “back-end” or systems software that will be deployed in the cloud, these can be compelling reasons to use containers. But if those don’t apply to you, should you be using containers for your Go code?</p> <h2>When not to use containers with Go</h2> <p>Docker have a catchphrase: “Build, Ship and Run Any App, Anywhere.” Go already has some of those attributes built in. As I mentioned earlier, among Go’s strong suits are its cross-compilation and the production of single executable files without dependencies. Unless you’re packaging up other files (or perhaps the new <a href="">plugins</a>) with your executable, or unless “run” means “deploy through an orchestrator,” containers are not going to make it any easier to build, ship or run anywhere new.</p> <p>Perhaps you’re a Go developer who doesn’t have to worry about deploying code to a cluster of machines. If you build, say, a standalone desktop or mobile app that you distribute as a download, then containers don’t add any benefit that I’m (yet) aware of, and would just add unnecessary complexity to your workflow and build process.</p> <p>Similarly I wouldn’t use a container if I were writing a standalone program that I’m only planning to run locally, perhaps for an experiment or demo, or a small utility that doesn’t need to interact with other components. 
As always, use the right tool for the job and don’t use containers if they won’t add value for you!</p> <p>Got another reason for using containers for your Go code? I’d love to <a href="">hear about</a> your experiences.</p> <p>Continue reading <a href=''>Should you containerize your Go code?.</a></p> Liz Rice What is a Kickstart installation and why would you use it? 2017-02-02T10:00:00Z tag: <p><img src=''/></p><p><em>Learn to use Kickstart to get the same look on multiple Red Hat Enterprise Linux system installations.</em></p><p>Continue reading <a href=''>What is a Kickstart installation and why would you use it?.</a></p> Ric Messier How do you use standard algorithms with object methods in C++? 2017-02-02T09:00:00Z tag: <p><img src=''/></p><p><em>Learn how to write shorter, better performing, and easier to read code using standard algorithms with object methods in C++.</em></p><p>Continue reading <a href=''>How do you use standard algorithms with object methods in C++?.</a></p> Jason Turner Four short links: 1 February 2017 2017-02-01T18:25:00Z tag: <p><em>Unhappy Developers, Incident Report, Compliance as Code, AI Ethics</em></p><ol> <li> <a href="">Unhappy Developers</a> -- paper authors surveyed 181 developers and built a framework of consequences: Internal Consequences, such as low cognitive performance, mental unease or disorder, low motivation; External Consequences, which might be Process-related (low productivity, delayed code, variation from the process) or Artefact-related (low-quality code, rage rm-ing the codebase). Hoping to set the ground for future research into how developer happiness affects software production.</li> <li> <a href="">GitLab Database Incident Report</a>.
YP died for your sins.</li> <li> <a href="">Compliance as Code</a></li> <li> <a href="">Ethical Considerations in AI Courses</a> -- <i> In this article, we provide practical case studies and links to resources for use by AI educators. We also provide concrete suggestions on how to integrate AI ethics into a general artificial intelligence course and how to teach a stand-alone artificial intelligence ethics course.</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 1 February 2017.</a></p> Nat Torkington Susan Sons on maintaining and securing the internet’s infrastructure 2017-02-01T12:15:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Security Podcast: Saving the Network Time Protocol, recruiting and building future open source maintainers, and how speed and security aren’t at odds with each other.</em></p><p>In this episode, O’Reilly’s <a href="">Mac Slocum</a> talks with <a href="">Susan Sons</a>, senior systems analyst for the Center for Applied Cybersecurity Research (CACR), about maintaining and securing the internet’s infrastructure.</p> Courtney Nash Build a super fast deep learning machine for under $1,000 2017-02-01T12:00:00Z tag: <p><img src=''/></p><p><em>The adventures in deep learning and cheap hardware continue!</em></p><p>Yes, you can run TensorFlow on a <a href="">$39 Raspberry Pi</a>, and yes, you can run TensorFlow on a <a href="">GPU-powered EC2 node</a> for about $1 per hour. And yes, those options probably make more practical sense than building your own computer.
But if you’re like me, you’re dying to build your own fast deep learning machine.</p> <p>For well under the price of a <a href="">$2,800 MacBook Pro</a>, you can build a machine that beats it on every metric other than power consumption and, because it’s easily upgraded, will stay ahead of it for a few years to come.</p> <p>Here’s what you need to buy, with some specific recommendations:</p> <h3>Motherboard</h3> <p>Motherboards come in <a href="">different sizes</a>. I bought the <a href="">ASUS Mini ITX DDR4 LGA 1151 B150I PRO GAMING/WIFI/AURA</a> motherboard for $125 on Amazon. It comes with a WiFi antenna, which is actually super useful in my basement.</p> <h3>Case</h3> <p>I bought the <a href="">Thermaltake Core V1 Mini ITX Cube</a> on Amazon for $50.</p> <h3>RAM</h3> <p>I can’t believe how cheap RAM has gotten! You need to buy DDR4 RAM to match the motherboard (that’s most of what you will find online), and the prices are all about the same. I bought two <a href="">8GB Corsair Vengeance</a> sticks for $129.</p> <h3>CPU</h3> <p>I bought the <a href="">Intel i5-6600</a> for $214.</p> <h3>Hard drive</h3> <p>I also can’t believe how cheap hard drives have gotten. I bought a <a href="">1TB SATA drive</a> for $50, plus a <a href="">Samsung 850 EVO 250GB 2.5-Inch SATA III Internal SSD</a> for $98.</p> <p>All these drives make you realize what a rip-off it is when Apple charges you $200 more for an extra 250GB in your MacBook Pro.</p> <h3>Graphics card/GPU</h3> <p>Choosing a graphics card is the most important and toughest question.
For pretty much all machine learning applications, you want an NVIDIA card, because only NVIDIA makes the essential <a href="">CUDA framework</a> and the <a href="">CuDNN library</a> that all of the machine learning frameworks, including TensorFlow, rely on.</p> <p>Not being a GPU expert, I found the terminology incredibly confusing, but here’s a very basic primer on selecting one.</p> <p>For a comparison of current cards, see <a href="">this benchmark</a>.</p> <p>I went with the <a href="">GeForce GTX 1060</a> 3GB for $195, and it runs models about 20 times faster than my MacBook, but it occasionally runs out of memory for some applications, so I probably should have gotten the GeForce GTX 1060 6GB for an additional $60.</p> <h3>Power supply</h3> <p>I was talked into a <a href="">650W power supply</a> for $85. It’s so annoying and hard to debug when electronics have power issues that it doesn’t seem worth trying to save money on this. On the other hand, I haven’t seen my setup draw more than 250W at peak load.</p> <h3>Heat sink</h3> <p>I bought the $35 <a href="">Cooler Master Hyper 212 EVO</a> heat sink. It keeps the CPU cool and runs super quietly.</p> <h2>Overview</h2> <table> <tbody> <tr> <td> <p>Component</p> </td> <td> <p>Price</p> </td> </tr> <tr> <td> <p>Graphics Card</p> </td> <td> <p>$195</p> </td> </tr> <tr> <td> <p>Hard Drive</p> </td> <td> <p>$50</p> </td> </tr> <tr> <td> <p>CPU</p> </td> <td> <p>$214</p> </td> </tr> <tr> <td> <p>Case</p> </td> <td> <p>$50</p> </td> </tr> <tr> <td> <p>Power Supply</p> </td> <td> <p>$85</p> </td> </tr> <tr> <td> <p>Heat Sink</p> </td> <td> <p>$35</p> </td> </tr> <tr> <td> <p>RAM</p> </td> <td> <p>$129</p> </td> </tr> <tr> <td> <p>Motherboard</p> </td> <td> <p>$125</p> </td> </tr> <tr> <td> <p><strong>Total</strong></p> </td> <td> <p><strong>$883</strong></p> </td> </tr> </tbody> </table> <p>To actually use this thing, you will need a monitor, mouse, and keyboard. Those things are easy (I had them lying around).
The total so far is $883, so there’s plenty of room to get a sweet setup for around $1,000.</p> <h2>Putting the computer together</h2> <p>The second time around, I put everything together on a cardboard box first to check that it worked.</p> <p>Basically, if you plug everything into the places it looks like it probably fits, everything seems to work out OK.</p> <figure id="id-ROJiW"><img alt="My computer, half assembled" class="iimages01imagejpg" src=""> <figcaption><span class="label">Figure 1. </span><em>My computer, half assembled, on my desk with the minimum components for testing.</em></figcaption> </figure> <figure id="id-3XPiE"><img alt="The computer with the giant heat sink" class="iimages02imagejpg" src=""> <figcaption><span class="label">Figure 2. </span><em>The computer with the giant heat sink attached looks much more formidable.</em></figcaption> </figure> <figure id="id-5jLin"><img alt="computer from above with the hard drive plugged in" class="iimages03imagejpg" src=""> <figcaption><span class="label">Figure 3. </span><em>Here it is from above with the hard drive plugged in.</em></figcaption> </figure> <h2>Booting the machine</h2> <p>You will make your life easier by installing the latest version of <a href="">Ubuntu</a>, as that will support almost all the deep learning software you’ll install. You can put an image on a USB stick and install it using this <a href="">simple step-by-step guide</a>. The Linux desktop install process has changed a lot since my days of fighting with drivers in the 90s—it went incredibly smoothly.</p> <p>The new Ubuntu desktop is also great. I’ve been using this machine as a personal computer quite frequently ever since I built it.
With a ton of RAM, a reasonably fast CPU, and a lightweight OS, it’s by far the fastest machine in the house.</p> <h2>Installing CUDA, OpenCV and TensorFlow</h2> <p>In order to <em>use</em> the GPU for deep learning, you need to install CUDA from the <a href="">NVIDIA website</a>.</p> <p>Next, build <a href="">OpenCV</a> from source:</p> <pre> git clone \ && cd opencv \ && mkdir build \ && cd build \ && cmake .. \ && make -j3 \ && make install </pre> <p>Finally, TensorFlow turns out to be pretty easy to install these days—just check the directions on <a href="">this website</a>.</p> <p>To see if GPU support is enabled, you can run <a href="">TensorFlow’s test program</a>, or you can execute this from the command line:</p> <pre> python -m tensorflow.models.image.mnist.convolutional</pre> <p>This should start training a model without errors.</p> <h2>The fun part!</h2> <h3>Real-time object recognition on the neighbors</h3> <p>Mount a cheap USB camera or a Raspberry Pi with a camera outside your house. You can make a Pi stream video pretty easily with the <a href="">RPi Camera Module</a> that I talked about in my <a href="">previous article on the $100 TensorFlow robot</a>.</p> <h3>YOLO</h3> <p>It’s easy to take the YOLO model and run it on TensorFlow with the <a href="">YOLO_tensorflow</a> project. It’s also fun to install <a href="">“Darknet,”</a> a different deep learning framework that YOLO was originally designed to work with:</p> <pre> git clone
cd darknet
make</pre> <p>Once Darknet is installed, you can run it on images with:</p> <pre> ./darknet detect cfg/yolo.cfg yolo.weights data/dog.jpg</pre> <p>Since the Pi camera just puts a file on a web server, you can link directly to that and do real-time object recognition on a stream.
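</p>
<p>To make that concrete, here is a minimal, hypothetical sketch of such a polling loop: it repeatedly downloads the camera’s latest frame and hands it to the same darknet invocation shown above. The frame URL, file names, and helper names are assumptions for illustration, not part of the original article.</p>

```python
# Hypothetical sketch: poll a Pi camera's latest-frame URL and run YOLO
# on each frame via the darknet CLI. The URL and paths are assumptions.
import subprocess
import urllib.request

FRAME_URL = "http://raspberrypi.local/still.jpg"  # assumed camera endpoint

def darknet_cmd(image_path):
    # Same invocation as the single-image example above, parameterized.
    return ["./darknet", "detect", "cfg/yolo.cfg", "yolo.weights", image_path]

def detect_stream(n_frames=10):
    # Grab the camera's latest frame and run detection on it, n_frames times.
    for i in range(n_frames):
        path = "frame_%d.jpg" % i
        urllib.request.urlretrieve(FRAME_URL, path)
        subprocess.run(darknet_cmd(path), check=True)
```

<p>Run from the darknet directory, this approximates “real-time” recognition at whatever rate the camera serves frames.</p>
<p>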
Here I am in my garage doing object recognition on a traffic jam happening outside:</p> <div class="responsive-video"><iframe allowfullscreen="true" frameborder="0" src=""></iframe></div> <h2>Give your Raspberry Pi robots an augmented brain</h2> <p>In writing my <a href="">previous article</a> on a <a href="">$100 TensorFlow robot</a>, I wanted to give the robot a much more powerful brain than the Pi could offer!</p> <p>If you follow <a href="">my instructions on GitHub</a>, you can build a robot that streams everything it’s seeing from its camera in a format that’s easy to parse and fast.</p> <p>Here’s a friend running real-time recognition on the robots’ streams:</p> <div class="responsive-video"><iframe allowfullscreen="true" frameborder="0" src=""></iframe></div> <p>If you look at his computer, the machine is actually doing real-time object recognition on two robot video feeds at once on his GeForce 980. He claims that he can handle four video feeds at once before he runs out of memory.</p> <h2>Make art!</h2> <p>One of the most fun things you can do with neural nets is make art, at a speed that wouldn’t be <em>possible</em> without a fast GPU.</p> <p>One great tutorial that worked out of the box for me is the <a href="">Deep Dream code published by Google</a>.</p> <p>You need to install the <a href="">Jupyter notebook server</a> (which you should do anyway!) and <a href="">Caffe</a>.</p> <p>Then plug your friends’ faces into Google’s tutorial. With your new machine, these images should take minutes rather than hours to generate, and they’re really fun to explore.</p> <figure id="id-zxikA"><img alt="deep dream neighbors" class="iimages04imagejpg" src=""> <figcaption><span class="label">Figure 4. </span><em>My neighbors Chris Van Dyke and </em><a href=""><em>Shruti Gandhi</em></a><em> in my garage as interpreted by my implementation of Deep Dream.</em></figcaption> </figure> <figure id="id-ywiyV"><img alt="deep dream birthday" class="iimages05imagejpg" src=""> <figcaption><span class="label">Figure 5.
</span><em>And my friend </em><a href=""><em>Barney Pell</em></a><em> with his chess birthday cake.</em></figcaption> </figure> <figure id="id-VwipN"><img alt="deep dream picture of itself" class="iimages06imagejpg" src=""> <figcaption><span class="label">Figure 6. </span><em>And here’s my machine running Deep Dream on a picture of itself! It sees dogs everywhere (probably because the model’s training data, ImageNet, was full of dogs: the creators included example images for 120 separate breeds).</em></figcaption> </figure> <p>If you want to get crazier, there’s a <a href="">TensorFlow implementation of Neural Style</a>, built on the Deep Dream work, that can do even more amazing things, some of them outlined in this <a href="">mind-blowing blog post</a>.</p> <p><strong>Conclusion</strong></p> <p>Continue reading <a href=''>Build a super fast deep learning machine for under $1,000.</a></p> Lukas Biewald What’s a CDO to do?
2017-01-31T12:00:00Z <p><em>Data governance is straightforward; data strategy is not.</em></p><p>Continue reading <a href=''>What’s a CDO to do?</a></p> Julie Steele, Scott Kurth Four short links: 31 January 2017 2017-01-31T11:50:00Z <p><em>Historic Language, Activist Security, Microcode Assembler, and PDP-10 ITS Source</em></p><ol> <li> <a href="">Computer Language We Get From the Mark I</a> -- loop, patch, library, bug...all illustrated.</li> <li> <a href="">Twitter Activist Security</a> (The Grugq) -- <i>This guide hopes to help reduce the personal risks to individuals while empowering their ability to act safely.</i> </li> <li> <a href="">mcasm</a> -- a microcode assembler.</li> <li> <a href="">PDP-10 ITS</a> -- <i>This repository contains source code, tools, and scripts to build an ITS system from scratch.</i> ITS is the Incompatible Timesharing System. Trivia: it's the OS that the original EMACS was written for, and that the original Jargon File was written on.</li> </ol> <p>Continue reading <a href=''>Four short links: 31 January 2017.</a></p> Nat Torkington Prototyping and deploying IoT in the enterprise 2017-01-31T11:00:00Z <p><em>Toward a virtuous cycle between people, devices, and cloud.</em></p><p>You have a lot of options available when you’re building a smart, connected device. For example, in recent years, your hardware options have multiplied massively. Even the humble Raspberry Pi, originally designed as an educational tool for youth, is getting into the game with NEC’s announcement of <a href="">Raspberry Pi Compute Module support</a> in their commercial/industrial display panels.</p> <p>And there have long been plenty of choices for those who want to roll their own devices from scratch.
Every embedded hardware platform has some kind of evaluation board available that works as a starting point for your own designs. For example, you can prototype with an inexpensive reference module like <a href="">MediaTek’s LinkIt ONE</a>, and then design your own module that has only the parts you need.</p><p>Continue reading <a href=''>Prototyping and deploying IoT in the enterprise.</a></p> Brian Jepson Playing with processes in Elixir 2017-01-31T11:00:00Z <p><em>Elixir’s key organizational concept, the process, is an independent component built from functions that sends and receives messages.</em></p> <p>Elixir is a functional language, but Elixir programs are rarely structured around simple functions. Instead, Elixir’s key organizational concept is the <em>process</em>, an independent component (built from functions) that sends and receives messages. Programs are deployed as sets of processes that communicate with each other. This approach makes it much easier to distribute work across multiple processors or computers, and also makes it possible to do things like upgrade programs in place without shutting down the whole system.</p> <p>Taking advantage of those features, though, means learning how to create (and end) processes, how to send messages among them, and how to apply the power of pattern matching to incoming messages.</p><p>Continue reading <a href=''>Playing with processes in Elixir.</a></p> Simon St. Laurent, J.
Eisenberg Four short links: 30 January 2017 2017-01-30T14:10:00Z <p><em>Liquid Lenses, SRE Book, MEGA Source, and Founder Game</em></p><ol> <li> <a href="">Liquid Lens Glasses</a></li> <li> <a href="">Site Reliability Engineering</a> -- Google's SRE book, CC-licensed, free for download or <a href="">purchase from O'Reilly in convenient dead-tree form</a>.</li> <li> <a href="">MEGA Source Code</a> -- on GitHub from Mega itself.</li> <li> <a href="">The Founder</a> -- <i>a dystopian business simulator.</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 30 January 2017.</a></p> Nat Torkington A new visualization to beautifully explore correlations 2017-01-30T12:00:00Z <p><em>Introducing the solar correlation map, and how to easily create your own.</em></p><p>An ancient curse haunts data analysis: the more variables we use to improve our model, the exponentially more data we need. By focusing on the variables that matter, however, we can avoid overfitting and the need to collect a huge pile of data points. One way of narrowing input variables is to identify their influence on the output variable. Here correlation helps—if the correlation is strong, then a significant change in the input variable results in an equally strong change in the output variable. Rather than using all available variables, we want to pick input variables strongly correlated to the output variable for our model.</p> <p>There's a catch, though, and it arises when the input variables have a strong correlation among themselves. As an example, suppose we want to predict parental education, and we find a strong correlation with country club membership, the number of household cars, and the cost of vacations in our data set. All of these luxuries grow from the same root: the family is rich.
The true underlying correlation is that highly educated parents usually have a higher income. We can either use the household income to predict parental education, or use the array of variables above. We call this type of correlation “intercorrelation.”</p> <p>Intercorrelation is the correlation between explanatory variables. Adding many variables where one suffices conjures up the <a href="">curse of dimensionality</a> and requires large amounts of data. It is sometimes beneficial, therefore, to elect just one representative for a group of intercorrelated input variables. In this article, we’ll explore both correlation and intercorrelation with a “solar correlation map”—a new type of visualization created for this purpose—and we’ll show you how to simply create a solar correlation for yourself.</p> <h2>Using the solar correlation map on housing price data</h2> <p>We can use covariance and correlation matrices to apply the solar correlation map to housing price data. As efficient as these tools are, however, they are hard to read. Thankfully, there are visualizations that can beautifully and succinctly represent the matrices to explore the correlations.</p> <p>The solar correlation map is designed for a dual purpose—it addresses:</p> <ul> <li>the visual representation of the correlation of each input variable to the output variable</li> <li>the intercorrelation of the input variables</li> </ul> <p>Let's generate the solar correlation map for a standard data set and explore it. Carnegie Mellon University has collected data on <a href="">Boston Housing prices</a> in the 1990s; it is one of the freely accessible data sets from the UCI (University of California Irvine) Machine Learning repository.
Our goal in this data set is to predict the output variable—the value of a home (<code>MEDV</code>)—with several input variables in the data set.</p> <p>Let’s first generate a correlation matrix:</p> <figure id="id-5dEi7"><img alt="correlation matrix" class="iimagesimage01png" src=""> <figcaption><span class="label">Figure 1. </span><em>Credit: Stefan Zapf and Christopher Kraushaar</em></figcaption> </figure> <p>You can find the correlation between the output variable, the value of a home, and an input variable (like tax) by searching for the <code>MEDV</code> row, finding the <code>TAX</code> column, and reading the cell where the row meets the column. To explore intercorrelation, you’ll need to find all cells with absolute values higher than, e.g., 0.8. In complex data sets, the sheer number of columns and rows takes a long time to digest. The solar correlation map can help; we’ll begin by looking at correlation to the output variable first. Below is a summary of the information, represented as a solar correlation:</p> <figure id="id-RMkiq"><img alt="solar correlation" class="iimagesimage03png" src=""> <figcaption><span class="label">Figure 2. </span><em>Credit: Stefan Zapf and Christopher Kraushaar</em></figcaption> </figure> <p>The output variable <code>MEDV</code>, the value of the Boston home, is the sun at the center of the solar system. Each circle around the sun is an orbit. Planets are input variables, and moons are input variables that are inter-correlated with the planet they orbit. The closer the orbit, the stronger the correlation. For example, on the second orbit is a planet that describes lower-income neighborhoods (<code>LSTAT</code>), the third orbit holds the number of rooms in the home (<code>RM</code>), and the fourth orbit the town’s pupil-teacher ratio (<code>PTRATIO</code>). The purchasing power of the inhabitants, the number of rooms, and the quality of local schools strongly determine the value of the homes.
We did not pick this example to surprise you, but for the opposite reason—common-sense analysis of the variables helps us recognize the validity of the solar correlation map.</p> <p>The strength of a correlation is determined by the absolute value of the <a href="">Pearson correlation coefficient</a>. Planets on the first orbit have an absolute value of 0.9-1.0, planets on the second orbit a coefficient of 0.8-0.9, and so forth. Size and color give another cue: the sun is a large circle, planets are middle-sized circles, and moons are small circles.</p> <h2>Exploring intercorrelated input variables</h2> <p>You probably noticed there aren't that many moons in the solar system. By default, our threshold for calling multiple variables inter-correlated is a <a href="">Pearson correlation coefficient</a> greater than 0.8. Usually, a strong correlation is anything above a Pearson coefficient of 0.5, so the default is very cautious, but you can adjust that number in your correlation analysis. If we have inter-correlated variables, the variable with the strongest correlation to the output variable becomes the planet, and the others its moons. This ensures that the planets are the ones that best explain the output variable.</p> <p>In our example, there are only two variables so strongly correlated as to be almost identical. Not every solar system has so few moons: in a big data context, there are usually many more variables (and, incidentally, many moons) in solar correlation maps. As the number of variables grows, the solar correlation map becomes even more important.</p> <p>Let us now turn to the inter-correlation of input variables. On the 6th (green) orbit, we have a planet with one moon. The planet variable is <code>TAX</code>, the full value of the property tax rate, and the moon is <code>RAD</code>, the accessibility to highways.
As the tax rate differs for residential and commercial estates, the planet variable may be an indicator that separates commercial from residential areas. Businesses often want quick access to the highway, while private homeowners often wish to avoid the noise and air pollution of highly frequented roads. The mostly commercial or residential nature of a neighborhood might be the underlying reason for the inter-correlation of these variables. If this is the case, then including one of the two is sufficient to explain their effect on house prices.</p> <p>A word of caution is in order. Data analysis is not a mechanical or deterministic process. Even a wealthy family may not buy a sports car because they care about the impact on the environment, for example. We may thus see the sports car in a distant orbit when trying to predict the wealth of families, indicating that sports cars are not a good indicator of wealth. Still, we know that owning a sports car is a good indication of wealth. Not picking the sports car as an indicator of wealth because it is a distant planet is almost certainly the wrong move, as a complex model can condition its effect on the environmental attitudes of the family. Correlation is a useful tool, but always weigh the results against your common sense, your gut feeling, a vast array of hypothesis tests, and Bayesian analysis.</p> <p>Used in exploratory data analysis (EDA), and with caution in modeling, the solar correlation map may help us understand correlation in a visual manner. This understanding can serve as a basis to prioritize our model building: planets in a low orbit are promising candidates, next in line are the moons, and lastly the outermost planets.</p> <h2>Positive and negative labels</h2> <p>We have so far explained the strength of correlations and the importance of a correlation. However, we also want to know whether a correlation is positive or negative; a positive one is the “the more, the better” correlation.
A positive correlation means that an increase in one variable also increases the other. Let’s explore the variable <code>RM</code> first, which is the average number of rooms. The more rooms in a house, the higher the price, as it both indicates that the house is larger and that the space can be more easily divided. When we have 10 rooms instead of two, we will probably have a higher price. This is the essence of a positive correlation. You can see that the correlation between <code>MEDV</code> and <code>RM</code> is positive, as the label <code>RM</code> is green.</p> <p>A negative correlation means that an increase in one variable decreases the other: it’s the “sometimes less is more” correlation. The less crime, the higher the price we can get for our house, so we suspect the label for crime is red. Our suspicion holds true in the solar correlation map.</p> <p>Through the solar correlation map, we can discover the strength, the inter-correlation, and the type of correlation at one glance.</p> <h2>How to simply create a solar correlation</h2> <p>Creating a solar correlation is as simple as baking frozen cookie dough. It is a Python module you can install with pip: <code>pip install solar-correlation-map</code>. Then, try downloading the jedi.csv file from our <a href="">GitHub repo</a>. The file itself is a standard CSV with a header:</p> <figure id="id-34xiw"><img alt="csv file with a header" class="iimagesimage02png" src=""> <figcaption><span class="label">Figure 3.
</span><em>Credit: Stefan Zapf and Christopher Kraushaar</em></figcaption> </figure> <p>The data set is about variables relating to being a Jedi:</p> <ol> <li>JEDI: the larger the variable, the closer the Jedi is to the light side</li> <li>GRAMMAR: higher values indicate better grammar</li> <li>GREENESS: the greener the skin, the higher the variable</li> <li>IMPLANTS: the number of implants in the body</li> <li>ELEGEN: the megajoules of electrical energy the force wielder can channel</li> <li>MIDI-CHLORIANS: the midi-chlorian count in the bloodstream</li> <li>FRIENDS: the number of friends</li> </ol> <p>Please observe that the midi-chlorian count is the same for all the persons in this list. It seems that we picked really strong force-users.</p> <p>Then use the following command to run the solar correlation map in the directory where you downloaded the jedi.csv file:</p> <pre> winterfell:solar-correlation-map daebwae$ python -m solar_correlation_map jedi.csv JEDI </pre> <p>Now a window opens and you will find the solar correlation map on your screen:</p> <figure id="id-5jLin"><img alt="solar correlation map" class="iimagesimage04png" src=""> <figcaption><span class="label">Figure 4. </span><em>Credit: Stefan Zapf and Christopher Kraushaar</em></figcaption> </figure> <p>Grammar is in a close orbit and its label is red, so there’s a strong negative correlation between grammar and being a Jedi: the better the grammar, the less likely the person is a Jedi. In addition, “greenness” is inter-correlated with bad grammar, so both might refer to the same underlying factor. Remember that all people had the very same midi-chlorian count? It therefore cannot possibly tell us anything about the Jedi-ness of a force-wielder. That’s why the midi-chlorians are in the outermost orbit.</p> <p>There are many tweaks we could do to improve the solar correlation map.
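</p>
<p>To make the orbit metaphor concrete, here is a minimal, self-contained sketch (plain Python, not the solar-correlation-map package itself) of the two computations the map relies on: the Pearson correlation coefficient, and the mapping from its absolute value to an orbit (0.9-1.0 on the first orbit, 0.8-0.9 on the second, and so on). Function names are ours, for illustration only.</p>

```python
# Pearson correlation and orbit assignment, as described in the article.
# A simplified sketch; the real package also handles moons and plotting.

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5)

def orbit(r):
    # |r| in [0.9, 1.0] -> orbit 1, [0.8, 0.9) -> orbit 2, and so on.
    return max(1, 10 - int(abs(r) * 10))

def label_color(r):
    # Green label for a positive correlation, red for a negative one.
    return "green" if r >= 0 else "red"
```

<p>A perfectly correlated variable (r = 1.0) lands on orbit 1 with a green label; a strong negative correlation, like bad grammar vs. Jedi-ness, sits on a close orbit with a red label.</p>
<p>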
This is the introduction of a new tool, and we are happy to hear your ideas on how to improve the initial version—please feel free to contact us at <a href="mailto:stefan@zapf-consulting.com">stefan@zapf-consulting.com</a>.</p> <h2>Three steps to a new visualization</h2> <p>Having introduced the solar correlation map, let’s step back and look at the bigger picture. We started out with a data analysis problem: finding the input variables with the greatest impact on the output variable. We found the tool of the correlation matrix to analyze that problem. Visually summarizing the problem helps to both find intercorrelation and identify the most influential input variables. As visualization is all about communication, we chose the metaphor of the solar system because it is familiar to many readers.</p> <p>So, here are our three steps to a new visualization:</p> <ol> <li>Identify a problem in data analysis</li> <li>Find an analytical tool that solves this problem</li> <li>Use a visual metaphor to explore and communicate your results</li> </ol> <p>Storytellers throughout the ages were creative and daring, and data analysis is often likened to storytelling. In a similar vein, a data scientist can follow in the footsteps of ancient storytellers to boldly explore new ways to communicate the story of data to the reader.</p> <p>In exploratory data analysis, our toolbox of visualizations plays a significant role in communication and persuasion. This article presented the solar correlation map, and also served as a high-altitude map of the process of creating new types of visualizations that solve real exploratory data analysis problems. As you are telling the stories of your data, explore strange new worlds of visualization that no reader has seen before.
Let your creativity roam to enthrall your reader and help to extend the visual metaphors of your fellow data scientists.</p> <p>Continue reading <a href=''>A new visualization to beautifully explore correlations.</a></p> Stefan Zapf, Christopher Kraushaar Four short links: 27 January 2017 2017-01-27T11:55:00Z <p><em>Ethics of AI, Vertically Integrated Internet, Assessing Empirical Observations, Battery Teardown</em></p><ol> <li> <a href="">Virginia Dignum: Ethics of AI</a> -- see <a href="">these notes</a> for the type of material she covers.</li> <li> <a href="">Google's Vertically Integrated Internet</a> -- a Hacker News comment worth reading.</li> <li> <a href="">A Guide to Assessing Empirical Evaluations</a> (Adrian Colyer) -- <i>here are five questions to help you avoid [traps].</i> </li> <li> <a href="">Inside the Tesla 100kWh Battery Pack</a> -- <i>516 cells per module. That's 8,256 cells per pack, a ~16% increase vs the 85/90 packs. [...] As for real capacity, the BMS reports usable capacity at a whopping 98.4 kWh. It also reports a 4 kWh unusable bottom charge, so that's 102.4 kWh total pack capacity!</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 27 January 2017.</a></p> Nat Torkington Pitfalls of HTTP/2 2017-01-27T11:00:00Z <p><em>HTTP/2 is still new and, although deploying it is relatively easy, there are a few things to be on the lookout for when enabling it.</em></p> <p>So what makes h2 different from h1, and what should you watch out for when enabling h2 support for a site?
Here are five things to look out for along the way.</p><p>Continue reading <a href=''>Pitfalls of HTTP/2.</a></p> Andy Davies Compliance as code 2017-01-27T11:00:00Z <p><em>Build regulatory compliance into development and operations, and write compliance checks and auditing into continuous delivery, so it becomes an integral part of how your DevOps team works.</em></p><section> <aside> <h6>Chef Compliance</h6> <p><a href="">Chef Compliance</a> is a tool from Chef that scans infrastructure and reports on compliance issues, security risks, and outdated software. It provides a centrally managed way to continuously and automatically check and enforce security and compliance policies.</p> <p>Compliance profiles are defined in code to validate that systems are configured correctly, using InSpec, an open source testing framework for specifying compliance, security, and policy requirements.</p> <p>You can use InSpec to write high-level, documented tests/assertions to check things such as password complexity rules, database configuration, whether packages are installed, and so on. Chef Compliance comes with a set of predefined profiles for Linux and Windows environments as well as common packages like Apache, MySQL, and Postgres.</p> <p>When variances are detected, they are reported to a central dashboard and can be automatically remediated using Chef.</p> </aside> <p>A way to achieve Compliance as Code is described in the <a href="">“DevOps Audit Defense Toolkit”</a>, a free, community-built process framework written by James DeLuccia IV, Jeff Gallimore, Gene Kim, and Byron Miller.<sup>1</sup> The Toolkit builds on real-life examples of how DevOps is being followed successfully in regulated environments, on the Security as Code practices that we’ve just looked at, and on disciplined Continuous Delivery.
It’s written in case-study format, describing compliance at a fictional organization, laying out common operational risks and control strategies, and showing how to automate the required controls.</p> <section> <h2>Defining Policies Upfront</h2> <p>Compliance as Code brings management, compliance, internal audit, the PMO, and infosec to the table, together with development and operations. Compliance policies, rules, and control workflows need to be defined upfront by all of these stakeholders working together. Management needs to understand how operational risks and other risks will be controlled and managed through the pipeline. Any changes to these policies, rules, or workflows need to be formally approved and documented; for example, in a Change Advisory Board (CAB) meeting.</p> </section> <section> <h2>Automated Gates and Checks</h2> <p>The first approval gate is mostly manual. Every change to code and configuration must be reviewed precommit. This helps to catch mistakes and ensure that no changes are made without at least one other person checking to verify that it was done correctly. High-risk code (defined by the team, management, compliance, and infosec) must also have an SME review; for example, security-sensitive code must be reviewed by a security expert. Periodic checks are done by management to ensure that reviews are being done consistently and responsibly, and that no “rubber stamping” is going on. The results of all reviews are recorded in the ticket. Any follow-up actions that aren’t immediately addressed are added to the team’s backlog as another ticket.</p> <p>In addition to manual reviews, automated static analysis checking is also done to catch common security bugs and coding mistakes (in the IDE and in the Continuous Integration/Continuous Delivery pipeline). Any serious problems found will fail the build.</p> <p>After it is checked in, all code is run through the automated test pipeline.
The Audit Defense Toolkit assumes that the team follows Test-Driven Development (TDD), and outlines an example set of tests that should be executed.</p> <p>Infrastructure changes are done using an automated configuration management tool like Puppet or Chef, following the same set of controls:</p> <ul> <li><p>Changes are code reviewed precommit</p></li> <li><p>High-risk changes (again, as defined by the team) must go through a second review by an SME</p></li> <li><p>Static analysis/lint checks are done automatically in the pipeline</p></li> <li><p>Automated tests are performed using a test framework like rspec-puppet or Chef Test Kitchen or ServerSpec</p></li> <li><p>Changes are deployed to test and staging in sequence with automated smoke testing and integration testing</p></li> </ul> <p>And, again, every change is tracked through a ticket and logged.</p> </section> <section data- <h2>Managing Changes in Continuous Delivery</h2> <p>Because DevOps is about making small changes, the Audit Defense Toolkit assumes that most changes can be treated as standard or routine changes that are essentially preapproved by management and therefore do not require CAB approval.</p> <p>It also assumes that bigger changes will be made “dark.” In other words, that they will be made in small, safe, and incremental changes, protected behind runtime feature switches that are turned off by default. 
The feature will only be rolled out with coordination between development, ops, compliance, and other stakeholders.</p> <p>Any problems found in production are reviewed through post-mortems, and tests added back into the pipeline to catch the problems (following TDD principles).</p> </section> <section data- <h2>Separation of Duties in the DevOps Audit Toolkit</h2> <p>In the DevOps Audit Toolkit, a number of controls enforce or support Separation of Duties:</p> <ul> <li><p>Mandatory independent peer reviews ensure that no engineer (dev or ops) can make a change without someone else being aware and approving it. Reviewers are assigned randomly where possible to prevent collusion.</p></li> <li><p>Developers are granted read-only access to production systems to assist with troubleshooting. Any fixes need to be made through the Continuous Delivery pipeline (fixing forward) or by automatically rolling changes back (again, through the Continuous Delivery pipeline/automated deployment processes) which are fully auditable and traceable.</p></li> <li><p>All changes made through the pipeline are transparent, published to dashboards, IRC, and so on.</p></li> <li><p>Production access logs are reviewed by IT operations management weekly.</p></li> <li><p>Access credentials are reviewed regularly.</p></li> <li><p>Automated detective change control tools (for example, Tripwire, OSSEC, UpGuard) are used to check for unauthorized changes.</p></li> </ul> <p>These controls minimize the risk of developers being able to make unauthorized, and undetected, changes to production.</p> </section> <section data- <h2 class="pagebreak-before">Using the Audit Defense Toolkit</h2> <p>The DevOps Audit Defense Toolkit provides a roadmap to how you can take advantage of DevOps workflows and automated tools, and some of the security controls and checks that we’ve already looked at, to support your compliance and governance requirements.</p> <p>It requires a lot of discipline and maturity and might be 
too much for some organizations to take on—at least at first. You should examine the controls and decide where to begin.</p> <p>Although it assumes Continuous Deployment of changes directly to production, the ideas and practices can easily be adapted for Continuous Delivery by adding a manual review gate before changes are pushed to production.</p> </section> <section data- <h2>Code Instead of Paperwork</h2> <p>Compliance as Code tries to minimize paperwork and overhead. You still need clearly documented policies that define how changes are approved and managed, and checklists for procedures that cannot be automated. But most of the procedures and the approval gates are enforced through automated rules in the Continuous Integration/Continuous Delivery pipeline, leaning on the automated pipeline and tooling to ensure that all of the steps are followed consistently and taking advantage of the detailed audit trail that is automatically created.</p> <p>In the same way that frequently exercising build and deployment steps reduces operational risks, exercising compliance on every change, following the same standardized process and automated steps, reduces the risks of compliance violations. You—and your auditors—can be confident that all changes are made the same way, that all code is run through the same tests and checks, and that everything is tracked the same way: consistent, complete, repeatable, and auditable.</p> <p>Standardization makes auditors happy. Auditing makes auditors happy (obviously). Compliance as Code provides a beautiful audit trail for every change, from when the change was requested and why, to who made the change and what that person changed, who reviewed the change and what was found in the review, how and when the change was tested, to when it was deployed. 
Except for the discipline of setting up a ticket for every change and tagging changes with a ticket number, compliance becomes automatic and seamless to the people who are doing the work.</p> <p>Just as beauty is in the eye of the beholder, compliance is in the opinion of the auditor. Auditors might not understand or agree with this approach at first. You will need to walk them through it and prove that the controls work. But that shouldn’t be too difficult, as Dave Farley, one of the original authors of <em>Continuous Delivery</em> explains:</p> <blockquote> <p>I have had experience in several finance firms converting to Continuous Delivery. The regulators are often wary at first, because Continuous Delivery is outside of their experience, but once they understand it, they are extremely enthusiastic. So regulation is not really a barrier, though it helps to have someone that understands the theory and practice of Continuous Delivery to explain it to them at first.</p> <p>If you look at the implementation of a deployment pipeline, a core idea in Continuous Delivery, it is hard to imagine how you could implement such a thing without great traceability. With very little additional effort the deployment pipeline provides a mechanism for a perfect audit trail. The deployment pipeline is the route to production. It is an automated channel through which all changes are released. This means that we can automate the enforcement of compliance regulations—“No release if a test fails,” “No release if a trading algorithm wasn’t tested,” “No release without sign-off by an authorised individual,” and so on. Further, you can build in mechanisms that audit each step, and any variations. 
Once regulators see this, they rarely wish to return to the bad old days of paper-based processes.<sup><a data-2</a></sup></p> </blockquote> </section> <div data- <p data-<sup><a href="#fn39-marker">1</a></sup><a href=""><em class="hyperlink"></em></a></p> <p data-<sup><a href="#fn40-marker">2</a></sup>Dave Farley (<a href=""><em class="hyperlink"></em></a>), Interview July 24, 2015</p> </div></section> <p>Continue reading <a href=''>Compliance as code.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jim Bird The state of Jupyter 2017-01-26T17:05:00Z tag: <p><img src=''/></p><p><em>How Project Jupyter got here and where we are headed.</em></p><p>In this post, we’ll look at Project Jupyter and answer three questions:</p> <ol> <li>Why does the project exist? That is, what are our motivations, goals, and vision?</li> <li>How did we get here?</li> <li>Where are things headed next, in terms of both Jupyter itself and the context of data and computation it exists in?</li> </ol> <p>Project Jupyter aims to create an ecosystem of open source tools for interactive computation and data analysis, where the direct participation of humans in the computational loop—executing code to understand a problem and iteratively refine their approach—is the primary consideration.</p> <p:</p> <ol> <li> <strong>Explore ideas and develop open standards that try to capture the essence of what humans do when using the computer as a companion to reasoning about data, models, or algorithms</strong>. 
This is what the <a href="">Jupyter messaging protocol</a> or the <a href="">Notebook format</a> provide for their respective problems, for example.</li> <li> <strong>Build libraries that support the development of an ecosystem, where tools interoperate cleanly without everyone having to reinvent the most basic building blocks.</strong> Examples of this include tools for <a href="">creating new Jupyter kernels</a> (the components that execute the user’s code) or <a href="">converting Jupyter notebooks to a variety of formats</a>.</li> <li> <strong>Develop end-user applications that apply these ideas to common workflows that recur in research, education, and industry. </strong>This includes tools ranging from the now-venerable <a href="">IPython command-line shell</a> (which continues to evolve and improve) and our widely used <a href="">Jupyter Notebook</a> to new tools like <a href="">JupyterHub for organizations</a> and our next-generation <a href="">JupyterLab modular and extensible interface</a>. We strive to build highly usable, very high-quality applications, but we focus on specific usage patterns: for example, the architecture of JupyterLab is optimized for a web-first approach, while other projects in our ecosystem target desktop usage, like the open source <a href="">nteract client</a> or the support for Jupyter Notebooks in the commercial <a href="">PyCharm IDE</a>.</li> <li> <strong>Host a few services that facilitate the adoption and usage of Jupyter tools</strong>. Examples include <a href="">NBViewer</a>, our online notebook sharing system, or the free demonstration service <a href="">try.jupyter.org</a>. 
These services are themselves fully open source, enabling others to either deploy them in custom environments or build new technology based on them, such as the <a href="">mybinder.org</a> system that provides single-click hosted deployment of GitHub repositories with custom code, data, and notebooks, or the <a href="">native rendering of Jupyter Notebooks on GitHub</a>.</li> </ol> <h2>Some cairns along the trail</h2> <p>This is not a detailed historical retrospective; instead, we’ll highlight a few milestones along the way that signal the arrival of important ideas that continue to be relevant today.</p> <p><strong>Interactive Python and the SciPy ecosystem</strong>. Jupyter evolved from the <a href="">IPython</a>.</p> <p><strong>Open protocols and formats for the IPython Notebook</strong>. <a href="">Qt Console</a>) and the first iteration of today’s Jupyter Notebook (then named the IPython Notebook), released in summer 2011 (more details about this process can be found in <a href="">this blog post</a>).</p> <p><strong>From IPython to Jupyter</strong>. <a href="">more</a>) were created; we had a hand in some, but most were independently developed by users of those languages. This cross-language usage forced us to carefully validate our architecture to remove any accidental dependencies on IPython, and in 2014, led us to <a href="">rename most of the project as Jupyter</a>. The name is inspired by Julia, Python, and R (the three open languages of data science) but represents the general ideas that go beyond any specific language: computation, data, and the human activities of understanding, sharing, and collaborating.</p> <h2>The view from today’s vantage point</h2> <p>The ideas that have taken Jupyter this far are woven into a larger fabric of computation and data science that we expect to have significant impact in the future.
The following are six trends we are seeing in the Jupyter ecosystem:</p> <ol> <li> <strong>Interactive computing as a real thing.</strong> <em>interactive computing environments</em>;.</li> <li> <strong>Widespread creation of computational narratives. </strong></li> <li> <strong>Programming for specific insight rather than generalization.</strong> <a href="">Literate Computing</a>.</li> <li> <strong>Individuals and organizations embracing multiple languages. </strong></li> <li> <strong>Open standards for interactive computing. </strong></li> <li> <strong>Sharing data with meaning. </strong></li> </ol> <p>The question is then: do all of these trends sketch a larger pattern? We think they all point to code, data, and UIs for computing being optimized for human interaction and comprehension.</p> <p.</p> <p.</p> <p.</p> <hr> <aside data- <h3>Acknowledgments</h3> <p.</p> <p.</p> <p>We would like to thank Jamie Whitacre and Lisa Mann for valuable contributions to this post.</p> </aside> <p>Continue reading <a href=''>The state of Jupyter.</a></p> Fernando Pérez, Brian Granger Genevieve Bell on moving from human-computer interactions to human-computer relationships 2017-01-26T13:15:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Radar Podcast: AI on the hype curve, imagining nurturing technology, and gaps in the AI conversation.</em></p><p>This week, I sit down with anthropologist, futurist, Intel Fellow, and director of interaction and experience research at Intel, <a href="">Genevieve Bell</a>.
We talk about what she’s learning from current AI research, why the resurgence of AI is different this time, and five things that are missing from the AI conversation.</p><p>Continue reading <a href=''>Genevieve Bell on moving from human-computer interactions to human-computer relationships.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jenn Webb The key to building deep learning solutions for large enterprises 2017-01-26T13:12:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Data Show Podcast: Adam Gibson on the importance of ROI, integration, and the JVM.</em></p><p>As data scientists add deep learning to their arsenals, they need tools that integrate with existing platforms and frameworks. This is particularly important for those who work in large enterprises. In this episode of the <a href="">Data Show</a>, I spoke with <a href="">Adam Gibson</a>, co-founder and CTO of <a href="">Skymind</a>, and co-creator of <a href="">Deeplearning4J</a> (DL4J). Gibson has spent the last few years developing the DL4J library and community, while simultaneously building deep learning solutions and products for large enterprises.</p><p>Continue reading <a href=''>The key to building deep learning solutions for large enterprises.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Ben Lorica Chris Messina on conversational commerce 2017-01-26T13:10:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Bots Podcast: The 2017 bot outlook with one of the field’s early adopters.</em></p><p>In this episode of the<a href=""> O’Reilly Bots Podcast</a>, Pete Skomoroch and I speak with <a href="">Chris Messina</a>, bot evangelist, creator of the hashtag, and, until recently, developer experience lead at Uber. 
We talk about the origins of <a href="">MessinaBot</a>, ruminate on the need for bots that truly exploit their medium rather than imitating older apps, and take a look at what’s ahead for bots in 2017.</p><p>Continue reading <a href=''>Chris Messina on conversational commerce.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jon Bruner AI building blocks: The eggs, the chicken, and the bacon 2017-01-26T12:05:00Z tag: <p><img src=''/></p><p><em>Data, algorithms, and better business results are key to developing AI.</em></p><p>As I read this post from the World Economic Forum, <a href="">This is why China has the edge in Artificial Intelligence</a>, what struck me wasn't whether China has an edge in AI, or even if I care. What struck me is the proposed five building blocks required for AI development:</p> <ul> <li>Massive data</li> <li>Automatic data tagging systems</li> <li>Top scientists</li> <li>Defined industry requirements</li> <li>Highly efficient computing power</li> </ul> <p>It made me wonder, are these factors essential to building a solid foundation for AI? Does high performance in these areas give an edge to AI projects? And, overall, my answer was: somewhat, but misleading. Let me explain, by block:</p> <ul> <li> <strong>Massive data</strong>.. <em>Bottom line: big data is a building block—check; massive data—misleading.</em> </li> <li> <strong>Automatic data tagging systems</strong>. The automated tagging systems <em>are</em> AI, so we get caught in an infinite loop if we take this as a building block. <em>Bottom line: automatic data tagging systems are sub-assemblies, </em>not<em> building blocks.</em> </li> <li> <strong>Top scientists</strong>. First, <em>none</em> of this is possible without research. None. HT to Bengio(s), LeCun, Ng, Hinton, <em>et al</em>.. 
<em>Bottom line: top scientists and/or experienced engineers create the building blocks, but are not building blocks themselves.</em> </li> <li> <strong>Defined industry requirements</strong>. Requirements is where we are failing AI. I was recently invited to attend <a href="">Intel's AI Day</a> <a href="">drunk under the streetlight</a> with our technology. I would argue against industry requirements, in favor of <em>business requirements</em>. While there will be overlap in industries, what is more critical is focusing AI on your business, your customers, your operations. <em>Bottom line: defined business requirements—check; defined industry requirements—misleading.</em> </li> <li> <strong>Highly efficient compute power</strong>.. <em>Bottom line: highly efficient compute power—substrate, not building block, thus misleading.</em> </li> </ul> <p>I propose the following three key building blocks to AI development, what I call the eggs, the chicken, and the bacon:</p> <ul> <li> <strong>The eggs</strong>. Data are the eggs. We have not found <em>one</em> customer who does not already have enough data to start with AI and do a better job for themselves or their customers. The two biggest challenges we see in customers’ mindsets regarding data are: <ul> <li>Data silos. Organizations draw insane and inane lines around data, and departments act like feudal lords over "their" data.</li> <li>Unstructured data. Gartner estimates that 80% of an enterprise's data is unstructured, and in our experience, it is an untapped resource, which can provide valuable variety in data.</li> </ul> </li> </ul> <p. <em>Bottom line: focus on quality of the data for your unique business problem, not quantity.</em></p> <ul> <li> <strong>The chicken</strong>. The algorithm(s) are the chicken. I often show executives examples in the <a href="">TensorFlow Playground</a> of the interplay between algorithms and data. 
Depending on your objective and the data you have available, you will need to choose different algorithms. Depending on the algorithms available, you might need to find different data. Thus, the appropriateness of the chicken and egg reference. <em>Bottom line: you cannot separate the algorithm from the data; they depend on each other.</em> </li> </ul> <ul> <li> <strong>The bacon</strong>. What is the bacon of business? Better business results. This must come first and last. Define your project around the results you need, then measure to make sure you are getting them. Rinse, repeat. I gave a talk at <a href="">Strata + Hadoop World</a> in Singapore about <a href="">how to "hire" AI</a>. The first step is to write the job description. What are the job requirements? Then, you need to evaluate the job done by the requirements you defined. <em>Bottom line: do not forget the bacon!</em> </li> </ul> <p.”</p> <p>Continue reading <a href=''>AI building blocks: The eggs, the chicken, and the bacon.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jana Eggers Guaranteed successful design 2017-01-26T12:00:00Z tag: <p><img src=''/></p><p><em>5 questions for Noah Iliinsky: Solving real problems, measuring success, and adopting holistic thinking.</em></p><p>I recently asked <a href="">Noah Iliinsky</a>, senior UX architect at Amazon Web Services, and co-editor of <a href=""><em>Beautiful Visualization</em></a> and co-author of <a href=""><em>Designing Data Visualizations</em></a>, to discuss the principles for successful design, common missteps designers make, and why holistic thinking is an important skill for all designers. At the O’Reilly Design Conference, Noah will be presenting a session, <a href="">Guaranteed successful design</a>.</p> <h3>You're presenting a talk at the O'Reilly Design Conference called <a href="">Guaranteed successful design</a>. 
Tell me more about what folks can expect.</h3> <p>It's a survey of design techniques, approaches, and tenets that are either not-well-known-enough (<a href="">Wardley mapping</a>, design for human inaction), or are understood but not sufficiently practiced (draw the map or diagram). This talk originated as a lightning talk, where each topic was mostly just a headline and a single line of description. I'll be walking through them in the same order as before, but giving more depth and background for each technique.</p> <h3>You are covering 17 principles that can improve odds of success. How do you measure success?</h3> <p>Great question. Success can be measured by subjective user experience (frustrating, easy, delightful, confusing, etc.) as well as by metrics around task completion rate, number of errors, etc. There's also the greater question of solving the right problem in the first place. Even if your design is perfect, it can't be a success if it isn't solving a real problem. Each of these topics is designed to guide the right sort of inquiry to increase the likelihood of solving the right problem in a satisfying manner.</p> <h3>Conversely, what are some of the major missteps designers make when approaching their work?</h3> <p>There are two major classes of design process error I see frequently.
The first is people providing solutions for problems that don't actually exist, or only exist for a small subset of people (who are often similar types of people to the solution-makers).</p> <p>The second class of error is problem solvers falling in love with a particular implementation of a solution, rather than understanding that each implementation is one of many that can satisfy a particular requirement, and each has different strengths and weaknesses.</p> <p>Not coincidentally, these topics are both heavily addressed in my talk.</p> <h3>Why do you think it's so difficult for designers to think more holistically about the design process when designing?</h3> <p>Looking at things holistically doesn't seem to be a natural human skill. It can be learned and taught, but most folks don't fall into it easily. Even when it's endorsed, suggested, shown to be more successful, etc., there are barriers to doing it reflexively. Holistic thinking requires awareness of habits, self, and team; that introspection takes training and intention. Holistic thinking requires more up-front investment in research and ideation, which can be hard to justify when time, budget, and technology are being dictated elsewhere. And it can lead to analysis paralysis, where the lack of constraints or guides leads to too many choices with not enough information to select one. In many of these cases, it's easier to go with an option that sounds good or is easy or worked last time or that the boss likes, and call it good.</p> <h3>You're speaking at the O'Reilly Design Conference in March. What sessions are you interested in attending?</h3> <p>I'm definitely looking forward to seeing <a href="">Alan Cooper</a>; I saw a talk of his 10 years ago that changed how I thought about how design fits into organizations.
Other topics that I'm excited for are the assorted sessions on design leadership and team diversity, and, of course, the visualization sessions.</p> <p>Continue reading <a href=''>Guaranteed successful design.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Mary Treseler A guide to improving data integrity 2017-01-26T12:00:00Z tag: <p><img src=''/></p><p><em>Validating your data requires asking the right questions and using the right data.</em></p><p>Almost.</p> <p.</p> <p>However, after working in data sets, this becomes more obvious and easy. To help answer this question, it's helpful to focus on boundaries and hard expectations within the data, specifically on format and validity of the values being observed. Ask yourself these questions:</p> <ul> <li>Do I have all the data I started with?</li> <li>Are there nulls in the data that should have values?</li> <li>Are there duplicates in the data?</li> </ul> <p>Other key things to look at when evaluating data’s validity include trends and how different components of data relate. For example, if you’re testing a set of data that represents a shopping experience with users, products, purchases, and carts, some key questions to answer may include:</p> <ul> <li>Do all purchases relate to valid products?</li> <li>Does every cart and purchase have a valid user?</li> <li>Is the total number of carts less than total users? (Assuming each user should only have at most one cart.)</li> </ul> <p.</p> <p. 
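Checks like the ones listed above can be expressed directly in code. The sketch below is illustrative only (the field names and record shape are hypothetical, not from the article); it runs the completeness, null, and duplicate checks over a list of records:

```python
def check_integrity(records, expected_count, required_fields, key_field):
    """Run three basic data-integrity checks: row count, nulls, duplicates.

    Returns a dict of issue lists; empty lists mean the check passed.
    """
    issues = {"count": [], "nulls": [], "duplicates": []}
    if len(records) != expected_count:
        issues["count"].append(f"expected {expected_count} rows, got {len(records)}")
    seen = set()
    for i, row in enumerate(records):
        # Nulls: required fields must carry a value.
        for field in required_fields:
            if row.get(field) is None:
                issues["nulls"].append((i, field))
        # Duplicates: the key field must be unique across rows.
        key = row.get(key_field)
        if key in seen:
            issues["duplicates"].append((i, key))
        seen.add(key)
    return issues
```

The relational checks (every purchase points at a valid product, every cart at a valid user) would follow the same pattern, comparing foreign-key values against the set of known IDs.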
<a href="">Check it out here</a>.</p> <p>Continue reading <a href=''>A guide to improving data integrity.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jessica Roper Four short links: 26 January 2017 2017-01-26T11:30:00Z tag: <p><em>Soda Locker, Building Fabricator, Familied Traveler Advice, and Technically Competent Bosses</em></p><ol> <li> <a href="">The Soda Locker Vending Machine</a> (Instructables) -- genius creation from a high schooler!</li> <li> <a href="">Robotic Fabricator for Buildings</a> (MIT TR) -- <i.</i> </li> <li> <a href="">Lessons Learned From a Million Miles and 5 Kids</a> (Bryce Roberts) -- golden advice for travelers with families at home.</li> <li> <a href="">You're More Likely to be Happy at Work if Your Boss is Technically Competent</a> (HBR) -- technical competence is <i>Whether the supervisor could, if necessary, do the employee’s job; whether the supervisor worked his or her way up inside the company; the supervisor’s level of technical competence as assessed by a worker.</i> 35,000 randomly sampled employees at different workplaces. <i> </li> </ol> <p>Continue reading <a href=''>Four short links: 26 January 2017.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington Inside the Washington Post’s popularity prediction experiment 2017-01-25T13:00:00Z tag: <p><img src=''/></p><p><em>A peek into the clickstream analysis and production pipeline for processing tens of millions of daily clicks, for thousands of articles.</em></p><p>In the distributed age, news organizations are likely to see their stories shared more widely, potentially reaching thousands of readers in a short amount of time. At the <em>Washington Post</em>, we asked ourselves if it was possible to predict which stories will become popular. 
For the <em>Post</em>.</p> <p>Here’s a behind-the-scenes look at how we approached article popularity prediction.</p> <h2>Data science application: Article popularity prediction</h2> <p>There has not been much formal work in article popularity prediction in the news domain, which made this an open challenge. For our first approach to this task, <em>Washington Post</em> data scientists identified the most-viewed articles on five randomly selected dates, and then monitored the number of clicks they received within 30 minutes after being published. These clicks were used to predict how popular these articles would be in 24 hours.</p> <p>Using the clicks 30 minutes after publishing yielded poor results. As an example, here are five very popular articles:</p> <figure id="id-6nGie"><img alt="popular article 1" class="iimagesexample1jpg" src=""> <figcaption><span class="label">Figure 1. </span>Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <figure id="id-RN4iK"><img alt="popular article 2" class="iimagesexample2jpg" src=""> <figcaption><span class="label">Figure 2. </span>Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <figure id="id-68jin"><img alt="popular article 3" class="iimagesexample3jpg" src=""> <figcaption><span class="label">Figure 3. </span>Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <figure id="id-5meiz"><img alt="popular article 4" class="iimagesexample4jpg" src=""> <figcaption><span class="label">Figure 4. </span>Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <figure id="id-5dEi7"><img alt="popular article 5" class="iimagesexample5jpg" src=""> <figcaption><span class="label">Figure 5. 
</span>Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <p>Table 1 lists the actual number of clicks these five articles received 30 minutes and 24 hours after being published. The takeaway: looking at how many clicks a story gets in the first 30 minutes is not an accurate way to measure its potential for popularity:</p> <table> <caption> <span class="label">Table 1. </span>Five popular articles.</caption> <tbody> <tr> <td> <p><strong>Articles</strong></p> </td> <td> <p><strong># clicks @ 30mins</strong></p> </td> <td> <p><strong># clicks @ 24hours</strong></p> </td> </tr> <tr> <td> <p>9/11 Flag</p> </td> <td> <p>6,245</p> </td> <td> <p>67,028</p> </td> </tr> <tr> <td> <p>Trump Policy</p> </td> <td> <p>2,015</p> </td> <td> <p>128,217</p> </td> </tr> <tr> <td> <p>North Carolina</p> </td> <td> <p>1,952</p> </td> <td> <p>11,406</p> </td> </tr> <tr> <td> <p>Hillary & Trump</p> </td> <td> <p>1,733</p> </td> <td> <p>310,702</p> </td> </tr> <tr> <td> <p>Gary Johnson</p> </td> <td> <p>1,318</p> </td> <td> <p>196,798</p> </td> </tr> </tbody> </table> <h3>Prediction features</h3> <p>In this prediction task, <em>Washington Post</em>.</p> <p, "<a href="">Predicting the Popularity of News Articles</a>.")</p> <figure id="id-5bMiM"><img alt="List of features used" class="iimagesfigure1_sizedjpg" src=""> <figcaption><span class="label">Figure 6. </span>List of features used. Credit: Yaser Keneshloo, Shuguang Wang, Eui-Hong Han, Naren Ramakrishnan, used with permission.</figcaption> </figure> <h3>Regression task</h3> <p>Figure 7 illustrates the process that we used to build regression models. In the training phase, we built several regression models using 41,000 news articles published by the <em>Post</em>. 
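The regression step can be illustrated in miniature. Using only the 30-minute click count as a single feature (which, as shown above, is a weak predictor on its own; the numbers here are synthetic, not the Post's data), an ordinary least-squares fit looks like this:

```python
def ols_fit(xs, ys):
    """Fit y ≈ slope * x + intercept by ordinary least squares (one feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, clicks_at_30min):
    """Predict clicks at 24 hours from the 30-minute count."""
    slope, intercept = model
    return slope * clicks_at_30min + intercept
```

The production models add the temporal, social, context, and metadata features listed in Figure 6, but the train-then-predict shape is the same.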
To predict the popularity of an article, we first collected all features within 30 minutes after its publication, and then used pre-trained models to predict its popularity in 24 hours.</p> <figure id="id-30yix"><img alt="Statistical Modeling" class="iimagesfigure2jpg" src=""> <figcaption><span class="label">Figure 7. </span>Statistical Modeling. Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <h3>Evaluation</h3> <p>To measure the performance of the prediction task, we conducted two evaluations. First, we conducted a 10-fold cross validation experiment on the training articles. Table 2 enumerates the results of this evaluation. On average, the adjusted R<sup>2</sup> is 79.4 (out of 100) with all features. At the same time, we realized that metadata information is the most useful feature aside from the temporal clickstream feature.</p> <table> <caption> <span class="label">Table 2. </span>10-fold cross validation results.</caption> <tbody> <tr> <td> <p><strong>Features</strong></p> </td> <td> <p><strong>Predicted R<sup>2</sup></strong></p> </td> </tr> <tr> <td> <p>Baseline</p> </td> <td> <p>69.4</p> </td> </tr> <tr> <td> <p>Baseline + Temporal</p> </td> <td> <p>70.4</p> </td> </tr> <tr> <td> <p>Baseline + Social</p> </td> <td> <p>72.5</p> </td> </tr> <tr> <td> <p>Baseline + Context</p> </td> <td> <p>71.1</p> </td> </tr> <tr> <td> <p>Baseline + Metadata</p> </td> <td> <p>77.2</p> </td> </tr> <tr> <td> <p>All</p> </td> <td> <p>79.4</p> </td> </tr> </tbody> </table> <p>The second evaluation was done in production after we deployed the prediction system at the <em>Post</em>. Using all articles published in May 2016, we got an adjusted R<sup>2</sup> of 81.3% (out of 100).</p> <p.</p> <figure id="id-RDmiA"><img alt="Scatter plot of prediction results" class="iimagesfigure3jpg" src=""> <figcaption><span class="label">Figure 8. </span>Scatter plot of prediction results (May 2016). 
Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <h2>Production deployment</h2> <p>We built a very effective regression model to predict the popularity of news articles. The next step was to deploy it to production at the <em>Post</em>.</p> <p>A streaming infrastructure facilitates the fast prediction task with minimal delays.</p> <h3>Architecture</h3> <p>Figure 9 illustrates the overall architecture of the prediction service in the production environment at the <em>Post</em>. The <em>Post</em> newsroom is also alerted of popular articles via Slack and email.</p> <figure id="id-RzxiA"><img alt="System Architecture" class="iimagesfigure4jpg" src=""> <figcaption><span class="label">Figure 9. </span>System Architecture. Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <h3>Spark Streaming in clickstream collection</h3> <p>The Spark Streaming framework is used in several components in our prediction service. Figure 10 illustrates one process we use Spark Streaming for: to collect and transform the clickstream (page view) data into prediction features.</p> <figure id="id-5wmiG"><img alt="Clickstream processing" class="iimagesfigure5jpg" src=""> <figcaption><span class="label">Figure 10. </span>Clickstream processing. Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <h3>System in the real world</h3> <p><em>Washington Post</em> journalists monitor predictions using real-time Slack and email notifications. The predictions can be used to drive promotional decisions on the <em>Post</em> home page and social media channels.</p> <p>We created a Slack bot to notify the newsroom if, 30 minutes after being published, an article is predicted to be extremely popular. Figure 11 shows Slack notifications with the current number and forecasted number of clicks in 24 hours.</p> <figure id="id-3XPiE"><img alt="Slack bot" class="iimagesfigure6jpg" src=""> <figcaption><span class="label">Figure 11. </span>Slack bot. Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <figure id="id-32ri8"><img alt="Popularity summary email" class="iimagesfigure7jpg" src=""> <figcaption><span class="label">Figure 12. </span>Popularity summary email. Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <p>In addition to this being a tool for our newsroom, we are also integrating it into PostPulse at the <em>Washington Post</em>.</p> <figure id="id-5oDip"><img alt="" class="iimagesfigure8-updatedjpg" src=""> <figcaption><span class="label">Figure 13. </span>PostPulse example. Credit: Shuguang Wang and Eui-Hong (Sam) Han, used with permission.</figcaption> </figure> <h3>Practical challenges</h3> <h2>Continuous experiments and future work</h2> <p>Moving forward, we will explore a few directions. First, we want to identify the time frame in which an article is expected to reach peak traffic (after running some initial experiments, the results are promising). Second, we want to extend our prediction to articles <em>not</em> published by the <em>Washington Post</em>. Last but not least, we want to address distribution biases in the prediction process.
Articles can get much more attention when they are in a prominent position on our home page or spread through large channels on social media.</p> <p>Continue reading <a href=''>Inside the Washington Post’s popularity prediction experiment.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Shuguang WangEui-Hong (Sam) Han Drinking from the industrial IoT data fire hose 2017-01-25T12:50:00Z tag: <p><img src=''s_off_the_hanger_deck_during_a_damage_control_demonstration_held_for_embarked_guests_aboard_uss_ronald_reagan_(cvn_76)_crop-003c851ce2b2cb0a40e6055d4808148d.jpg'/></p><p><em>Careful choices in data collection and architecture can reduce burdens.</em></p><p>Behind the immense promise offered by the Internet of Things (IoT) lies serious challenges for system and network administrators who have to transmit and store the equally immense quantities of data that will be generated by edge devices and for programmers who have to process that data. These challenges are unprecedented, even in the context of the enormous growth of data during the digital age.</p> <p>Many people felt the stress of "data overload" in the 1990s when they had to read 50 email messages a day. By the year 2000, we were talking about the <a href="">overwhelming size of the Web</a>, when it offered an estimated seven million sites. During that time, new words were invented to refer to the exploding data sizes, and I suggested (tongue in cheek) that <a href="">the size of corporate data is outgrowing the availability of Greek prefixes</a>. In 2013, Cisco estimated <a href="">"the number of connected objects to reach ~50 billion in 2020</a>." A typical example of modern IoT volume involves <a href="">2.5 terabytes of data per day from 6,000 sensors on a single machine</a>. You can check a <a href="">ZDNet article</a> for more statistics inducing data vertigo.</p> <p>It’s far from hopeless, though. 
I’ve written up <a href="">a primer</a> on ways you can sop up the output from these data fire hoses to gain value and actionable insight from the data. Let’s look at some basic considerations that lie behind your use of IoT data.</p> <h2>Look for value</h2> <p>Managers and financial planners will want to know what value will be derived from sensors and big data. Ample uses can be reported from numerous fields:</p> <ul> <li>Shell Oil uses <a href="">seismic sensors and artificial intelligence to find new sources of fuel</a>, while renewable energy companies combine <a href="">on-site data with weather forecasts to improve decisions</a> about locating and building facilities.</li> <li>The <a href="">OnFarm</a> company <a href="">integrates data from multiple sensors</a>, such as crops, winds, and moisture through ThingWorx and combines the results into agricultural applications.</li> <li>The <a href="">All Traffic Solutions</a> company crunches <a href="">data about traffic and creates visualizations</a> for safety and other applications using ThingWorx.</li> </ul> <p>These examples could be multiplied over many fields and industries. The question is how <em>your</em> organization can use data. What markets would you like to enter? Where are your current products or operations inefficient? What information would help you make such decisions? And are you ready to invest the human resources and money to create major changes in your organization based on the incoming information? If you have satisfactory answers to these questions, you can move to the next step.</p> <h2>Determine what you need to learn from data</h2> <p>Through machine learning, data scientists have turned up fascinating insights that traditional techniques would never have yielded. Dazzled by such results, some web sites collect everything that comes their way, storing huge amounts of raw historical data. 
Some of the decisions driven by this data have to be made instantly (notably, which ads to present to a visitor), whereas other analytics can be processed at the sites' leisure.</p> <p>The Internet of Things is different; although, here too one has decisions that must be made quickly (such as whether to shut down an overheating engine), and others can use historical data to draw long-range conclusions. One difference, as we have seen, is the great quantity of data that edge devices can generate. Another difference is that the IoT has a potentially infinite amount of information to offer. It all depends on which sensors you install, and how many.</p> <p>So, your organization needs to decide what it has to learn from its environment, and choose sensors wisely. One key decision is granularity. Can you get by with one sensor at one end of a pipe, or do you need sensors at regular intervals? Can a farmer draw useful conclusions from one sensor on a field, or does she need a sensor for each row of plants? These are engineering questions.</p> <h2>Determine which proxies to use</h2> <p>Rarely can you extract the exact answer to your questions from your environment. For instance, few planets from other solar systems can be detected through direct observation, so NASA uses <a href="">four other methods</a> for finding planets, such as watching for wobble in the stars themselves or in the light that comes from them. Similarly, a common question in manufacturing would be, "How long will this enclosure last?" But you can't get a timetable directly from the enclosure. You need instead to monitor for cracks, a thinning of the shell, or other proxy measures.</p> <h2>Determine where to process data</h2> <p>The simplest architecture for data processing is to slurp everything into a central data store, probably on a cloud facility, and run large-scale analytics there. 
But, burdens on your storage and networking can be reduced by processing some data close to where it is gathered.</p> <figure id="id-5bMiM"><img alt="burdens on your storage and networking can be reduced by processing some data close to where it is gathered" height="70%" class="iimagesptc-article-1png" src=""> </figure> <p>A typical example of processing at the edge is monitoring the values generated by a device to look for anomalies. For instance, an electrocardiogram could check for spikes or drops in the heartbeat and send an alert to the central service only for irregularities. Another option is to send the average of values collected once a second or once a minute, instead of all the raw values.</p> <figure id="id-3aBiB"><img alt="send the average of values collected once a second or once a minute" height="70%" class="iimagesptc-article-2png" src=""> </figure> <p>Another common IoT architecture is for devices at each geographical site to send data to a local hub, generally over a local wireless network. The local hub can partially process the data and send results over the internet to a centralized store that combines data from many sites.</p> <figure id="id-6LDiA"><img alt="The local hub can partially process the data and send results over the internet to a centralized store that combines data from many sites" height="70%" class="iimagesptc-article-3png" src=""> </figure> <p>The decisions about where to process data depend on several factors:</p> <ul> <li>Can useful insights be generated from the data at a single device or a single site, or do analytics require combined data from a large number of devices?</li> <li>Do the edge devices have sufficient CPU power, data storage, and energy sources to do the processing?</li> <li>Does your network have the bandwidth to transmit the data?</li> <li>Do you want to preserve the data for future processing?</li> <li>How quickly do you need results? 
Urgent changes to data that require real-time action are more likely to take place in a timely manner if they are processed close to the edge.</li> <li>How sensitive are your analytics to missing data? Edge devices are liable to fail or to stop reporting data for other reasons.</li> </ul> <p>These are some of the considerations for data handling in IoT applications. Read my report, <a href="">Scaling Data Science for the Industrial Internet: Advanced Analytics in Real Time</a>, for more insights and guidance.</p> <p><em>This post is a collaboration between ThingWorx and O’Reilly. See our <a href="">statement of editorial independence</a>.</em></p> <p>Continue reading <a href=''>Drinking from the industrial IoT data fire hose.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Andy Oram Four short links: 25 January 2017 2017-01-25T11:45:00Z tag: <p><em>Robot Fist Bump, Emotion Visualization, p-Values, and food2vec</em></p><ol> <li> <a href="">The Backstory to Obama's Fistbump</a> (BoingBoing) -- <i>It's actually a prosthetic robotic arm belonging to Nathan Copeland, who can control it with his mind and sense touch with it.</i> </li> <li> <a href="">Plexus: Interactive Emotion Visualization based on Social Media</a> -- emotional analysis of things (e.g., cities) based on Tweets about them. (I'm picturing one large rectangle coloured "outrage.")</li> <li> <a href="">Toward Sustainable Insights</a> (Adrian Colyer) -- fantastic intro to p-values and null hypotheses at the start.</li> <li> <a href="">food2vec: Augmented cooking with machine intelligence</a> -- first analogies: <i>Egg is to bacon as orange juice is to coffee. South Asian is to rice as Southern European is to thyme.</i> Then recommendations: <i>We can use our model of food as a recommendation system for cooks. 
By taking the average embedding for a set of foods, we can look up foods with the closest embeddings.</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 25 January 2017.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington How do I colorize an image in Photoshop? 2017-01-25T09:00:00Z tag: <p><img src=''/></p><p><em>Learn how to colorize black and white photos with layers and blending modes in Photoshop.</em></p><p>Continue reading <a href=''>How do I colorize an image in Photoshop?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Andy Anderson Four short links: 24 January 2017 2017-01-24T11:55:00Z tag: <p><em>Doomsday Prep, Printing Skin, Autonomous Skepticism, and Slow Wifi</em></p><ol> <li> <a href="">Doomsday Prep for the Super Rich</a> -- <i>“I think, in the back of people’s minds, frankly, is that, if the world really goes to shit, New Zealand is a First World country, completely self-sufficient, if necessary—energy, water, food. Life would deteriorate, but it would not collapse.”</i> </li> <li> <a href="">3D Printing Skin</a> -- that's a Yoda figurine I don't want.</li> <li> <a href="">Toyota's Gill Pratt on Autonomous Vehicles</a> </li> <li> <a href="">Why Does It Take So Long to Connect to a Wifi Access Point?</a> -- <i>studies on five million mobile users from four representative cities associating with seven million APs in 0.4 billion WiFi sessions</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 24 January 2017.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington
http://feeds.feedburner.com/oreilly/radar/rss10?m=204
CC-MAIN-2017-09
en
refinedweb
Previous Chapter: Sets and Frozen Sets Next Chapter: Functions Shallow and Deep Copy Introduction As we have seen in the chapter "Data Types and Variables", Python has a strange behaviour - in comparison with other programming languages - when assigning and copying simple data types like integers and strings. The difference between shallow and deep copying is only relevant for compound objects, which are objects containing other objects, like lists or class instances. In the following code snippet, y points to the same memory location as x. This changes when we assign a different value to y. In this case y will receive a separate memory location, as we have seen in the chapter "Data Types and Variables". >>> x = 3 >>> y = x But even if this internal behaviour appears strange compared to programming languages like C, C++ and Perl, the observable results of the assignments meet our expectations. It can be problematic, however, if we copy mutable objects like lists and dictionaries. Python creates real copies only if it has to, i.e. if the user, the programmer, explicitly demands it. We will introduce you to the most crucial problems that can occur when copying mutable objects, i.e. when copying lists and dictionaries. Copying a list >>> colours1 = ["red", "green"] >>> colours2 = colours1 >>> colours2 = ["rouge", "vert"] >>> print colours1 ['red', 'green'] As we expected, the values of colours1 remained unchanged. As in our example in the chapter "Data Types and Variables", a new memory location had been allocated for colours2, because we assigned a completely new list to this variable. >>> colours1 = ["red", "green"] >>> colours2 = colours1 >>> colours2[1] = "blue" >>> colours1 ['red', 'blue'] In the example above, we assign a new value to the second element of colours2. Many beginners will be astonished that the list colours1 has been "automatically" changed as well. The explanation is that there has been no new assignment to colours2, only to one of its elements. Copy with the Slice Operator It's possible to completely copy shallow list structures with the slice operator without any of the side effects described above: >>> list1 = ['a','b','c','d'] >>> list2 = list1[:] >>> list2[1] = 'x' >>> print list2 ['a', 'x', 'c', 'd'] >>> print list1 ['a', 'b', 'c', 'd'] >>> But as soon as a list contains sublists, we have the same difficulty, i.e. we get just pointers to the sublists. >>> lst1 = ['a','b',['ab','ba']] >>> lst2 = lst1[:] This behaviour is depicted in the following diagram: If you assign a new value to the 0th element of one of the two lists, there will be no side effect. Problems arise if you change one of the elements of the sublist. >>> lst1 = ['a','b',['ab','ba']] >>> lst2 = lst1[:] >>> lst2[0] = 'c' >>> lst2[2][1] = 'd' >>> print(lst1) ['a', 'b', ['ab', 'd']] The following diagram depicts what happens if one of the elements of a sublist is changed: The contents of both lst1 and lst2 are changed. Using the Method deepcopy from the Module copy A solution to the problems described above is to use the module "copy". This module provides the method "deepcopy", which allows a complete copy of an arbitrary list, i.e. of shallow and nested lists alike. The following script uses our example above and this method: from copy import deepcopy lst1 = ['a','b',['ab','ba']] lst2 = deepcopy(lst1) lst2[2][1] = "d" lst2[0] = "c" print lst2 print lst1 If we save this script under the name deep_copy.py and call it with "python deep_copy.py", we will receive the following output: $ python deep_copy.py ['c', 'b', ['ab', 'd']] ['a', 'b', ['ab', 'ba']] Previous Chapter: Sets and Frozen Sets Next Chapter: Functions
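To make the contrast explicit, here is a small side-by-side comparison of a shallow copy (copy.copy, equivalent to the slice operator above) and a deep copy. Note it is written in Python 3 syntax (print as a function), unlike the Python 2 examples in this chapter:

```python
from copy import copy, deepcopy

lst1 = ['a', 'b', ['ab', 'ba']]

shallow = copy(lst1)    # like lst1[:]: a new outer list, but the sublist is shared
deep = deepcopy(lst1)   # the sublist is copied recursively as well

shallow[2][1] = 'd'     # mutates the shared sublist
print(lst1)             # ['a', 'b', ['ab', 'd']]  <- lst1 is affected

deep[2][1] = 'x'        # mutates the deep copy's own, private sublist
print(lst1)             # ['a', 'b', ['ab', 'd']]  <- lst1 is NOT affected
```

The rule of thumb: use deepcopy whenever the object you copy contains nested mutable objects that you intend to modify independently.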
http://python-course.eu/deep_copy.php
CC-MAIN-2017-09
en
refinedweb
ok i have a byte buffer byte buf[16]; in which i have stored longs. now how do i add the longs in it and get the total in another long? or in other words how would i read a long out of it? ok i have a byte buffer byte buf[16]; in which i have stored longs. now how do i add the longs in it and get the total in another long? or in other words how would i read a long out of it? You can't have longs in a byte buffer unless you use 4 byte slots for every long. In your case, you could store 4 longs in the array. Add them together using: long total=0; for(long i = 0; i < 4; i++) { total += (long*)(buf)[i]; } I think I got it right. // Gliptic i came across this but i wasn't sure. becuase lets say i have a long thats something like this 00010010 01011010 10010010 10010010 when i save it to a byte buf[4] it then splites into four parts. if i add each byte that is buf[0] +buf[1] +buf[2] +buf[3] would not the answer be different? like: buf[0] = 00010010 buf[0] = 01011010 + buf[0] = 10010010 + buf[0] = 10010010 + ================= 11001 0000 ??? not the same ??? Of course it would be different. That's because you are adding all the bytes in the long together. If you want to read the long you just type: *(long*)buf . // Gliptic sorry could you give me an example? lets say i have a byte buf[16]; with four longs in it. to add should i use long total=0; for(long i = 0; i < 4; i+=4 ) { total += * (long*)(buf)[i]; } No, this one: long total=0; for(long i = 0; i < 4; i++) { total += (long*)(buf)[i]; } Think of the byte array as what it is, a byte array! There are 16 bytes in it. In this code, we are treating the array just as it was a long array with four elements. Sorry, I don't have time to write more now... // Gliptic thanks for your help, but i get this error '+=' : illegal, right operand has type 'long *' when using above, and if i remove the * the results still wrong? 
Here's how you'd do it with a byte array holding two longs - Code:#include <iostream> using namespace std; typedef unsigned char byte; int main () { byte buf[8]; long a=10,b=25; //Put two longs in byte array *(long*)buf = a; *(long*)&buf[4]=b; long total=0; for(int i = 0; i < 8; i+=4) { total += *(long*)&buf[i]; } cout << total; return 0; } zen How about Code:byte buf[16]; long *lbuff = (long *)buff; for ( i = 0 ; i < 4 ; i++ ) { sum += lbuff[i]; }
https://cboard.cprogramming.com/cplusplus-programming/3645-byte-long.html
CC-MAIN-2017-09
en
refinedweb
Hello everyone, why does pb->foo(); in the code below output Final rather than Base? I have tested that the output is Final. My reasoning is: 1. pb points to a Final instance, and foo in Final is not declared virtual; 2. if a function is virtual, the call should be dispatched based on the type of the instance pointed to; if it is not virtual, it should be dispatched based on the type of the pointer; 3. since foo in Final is not declared virtual, shouldn't the call be dispatched based on the pointer's type (which is Base), so that the output is Base? Code:#include <iostream> using namespace std; class Base { public: virtual int foo() {cout << "Base" << endl; return 0;} }; class Derived: public Base { public: int foo() {cout << "Derived" << endl;return 0;} }; class Final: public Derived{ public: int foo() {cout << "Final" << endl;return 0;} }; int main() { Final f; Base* pb = reinterpret_cast<Base*> (&f); pb->foo(); return 0; } regards, George
https://cboard.cprogramming.com/cplusplus-programming/98821-strange-virtual-function-output.html
CC-MAIN-2017-09
en
refinedweb
I am trying to create a program that uses classes and methods to make restaurants. By making a restaurant I mean it states the name, the type of food served, and when it opens. I have done that successfully, but now I am trying to create an ice cream stand that inherits from its parent class (Restaurant), and I want the IceCreamStand to print the items in its flavor_options list. #!/usr/bin/python class Restaurant(object): def __init__(self, restaurant_name, cuisine_type, rest_time): self.restaurant_name = restaurant_name self.cuisine_type = cuisine_type self.rest_time = rest_time self.number_served = 0 def describe_restaurant(self): long_name = "The restaurant," + self.restaurant_name + ", " + "serves " + self.cuisine_type + " food"+ ". It opens at " + str(self.rest_time) + "am." return long_name def read_served(self): print("There have been " + str(self.number_served) + " customers served here.") def update_served(self, ppls): if ppls >= self.number_served: self.number_served = ppls # if the value of number_served either stays the same or increases, then set that value to ppls. else: print("You cannot change the record of the amount of people served.") # if someone tries decreasing the amount of people that have been at the restaurant, then reject them. 
def increment_served(self, customers): self.number_served += customers class IceCreamStand(Restaurant): def __init__(self, restaurant_name, cuisine_type, rest_time): super(IceCreamStand, self).__init__(restaurant_name, cuisine_type, rest_time) self.flavors = Flavors() class Flavors(): def __init__(self, flavor_options = ["coconut", "strawberry", "chocolate", "vanilla", "mint chip"]): self.flavor_options = flavor_options def list_of_flavors(self): print("The icecream flavors are: " + str(self.flavor_options)) icecreamstand = IceCreamStand(' Wutang CREAM', 'ice cream', 11) print(icecreamstand.describe_restaurant()) icecreamstand.flavors.list_of_flavors() restaurant = Restaurant(' Dingos', 'Australian', 10) print(restaurant.describe_restaurant()) restaurant.update_served(200) restaurant.read_served() restaurant.increment_served(1) restaurant.read_served() You would want to use .join() to combine the list into a single string. I.E. flavor_options = ['Chocolate','Vanilla','Strawberry'] ", ".join(flavor_options) This would output: "Chocolate, Vanilla, Strawberry"
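To build on the accepted suggestion, here is a sketch of the Flavors class with list_of_flavors() using join(). It also avoids the mutable-default-argument pitfall in the original __init__ (a default list in the signature is created once and shared between all instances):

```python
class Flavors:
    def __init__(self, flavor_options=None):
        # Build the default list inside the body so each instance
        # gets its own list rather than a shared module-level one.
        if flavor_options is None:
            flavor_options = ["coconut", "strawberry", "chocolate",
                              "vanilla", "mint chip"]
        self.flavor_options = flavor_options

    def list_of_flavors(self):
        # join() turns the list into one comma-separated string.
        print("The icecream flavors are: " + ", ".join(self.flavor_options))

Flavors().list_of_flavors()
# The icecream flavors are: coconut, strawberry, chocolate, vanilla, mint chip
```

Dropping this Flavors class into the original program leaves the IceCreamStand code unchanged; only the printed output becomes a readable sentence instead of a raw list.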
https://codedump.io/share/c6Hl2gXGBJbl/1/how-can-i-print-items-from-a-list
CC-MAIN-2017-09
en
refinedweb
I am working on a homework assignment from school and need some help. I am stuck on the code to set the private fields in my Box class with three constructors. It's kind of confusing when dealing with more than one constructor. The following is the code for the Box class program. Can anyone assist me? Please follow the comments. Thanks! /** The Box class with three constructors */ public class Box{ // three class variables private int length; private int width; private int height; //one arg constructor should take one parameter and set the length equal to that. Width and height should be set to zero. public Box( int l) { length = l; width =0; height=0; } //two arg constructor that sets length and width public Box(int l, int w) { // write code to set the private fields } //three arg constructor that sets length, width and height public Box(int l, int w, int h) { // write code to set the private fields } // get and set methods for all the private data i.e., length, width and height. I have written one method. You need to write for all three public int getLength() { return length; } public void setLength(int l){ length=l; } // Write get and set methods for width public int getWidth() { return width; } public void setWidth(int w) { width = w; } // Write get and set methods for height public int getHeight() { return height; } public void setHeight(int h) { height = h; } }
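In case it helps to see the shape of one possible solution (it does give the exercise away, so use it to check your own work): the usual idiom is to chain the constructors with this(...), so the field assignments live in a single place.

```java
public class Box {
    // three class variables
    private int length;
    private int width;
    private int height;

    // One-arg constructor: width and height default to zero.
    public Box(int l) {
        this(l, 0, 0);
    }

    // Two-arg constructor: height defaults to zero.
    public Box(int l, int w) {
        this(l, w, 0);
    }

    // Three-arg constructor: the only place the fields are actually assigned.
    public Box(int l, int w, int h) {
        length = l;
        width = w;
        height = h;
    }

    public int getLength() { return length; }
    public void setLength(int l) { length = l; }

    public int getWidth() { return width; }
    public void setWidth(int w) { width = w; }

    public int getHeight() { return height; }
    public void setHeight(int h) { height = h; }

    public static void main(String[] args) {
        Box b = new Box(2, 3);  // two-arg constructor: height stays 0
        System.out.println(b.getLength() + " x " + b.getWidth() + " x " + b.getHeight());
        // prints: 2 x 3 x 0
    }
}
```

Chaining matters because if each constructor assigned the fields itself, a later change (say, validating that dimensions are non-negative) would have to be repeated three times.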
https://www.daniweb.com/programming/software-development/threads/271983/box-program
CC-MAIN-2017-09
en
refinedweb
Blogs 2017-02-18T01:13:19.3030000Z EPiServer World How to get the current page's category from a block and use Find to search for all pages that match the category. 2017-02-18T01:13:19.3030000Z <p>In my application I created a block that showed all related pages based on the parent pages category. <br />To achieve this I created a new category field called <strong>Location</strong>, the idea behind this field is to only select one location per page hence the call below to get the <strong>First</strong>() location. You could use .<strong>In</strong>() if you required to match against multiple.<br />In the block controllers <strong>ActionResult()</strong> method I get the parent page object by using the <strong>ServiceLocator.Current.GetInstance<IPageRouteHelper>() .</strong> <strong> </strong><br />In the search query you can see that I first <strong>Filter()</strong> on the <strong>Location</strong> category field and call <strong>Match()</strong> to get the pages that match the parent page's <strong>Location</strong>.<br />I then use <strong>Filter() </strong>and<strong> Match() </strong>to exclude the parent page from the results based on the <strong>ContentGUID.</strong> <br /><strong><br /></strong>Note: The code below has no exception handling for blog readability so this would need to be added in a real world application.</p> <pre class="brush:csharp;auto-links:false;toolbar:false" contenteditable="false"> public class RelatedHomesBlockController : BlockController<RelatedHomesBlock> { private readonly IPageRouteHelper _pageRouteHelper; public IClient FindServiceClient { get; private set; } public RelatedHomesBlockController(IClient client) { FindServiceClient = client; _pageRouteHelper = ServiceLocator.Current.GetInstance<IPageRouteHelper>(); } public override ActionResult Index(RelatedHomesBlock currentBlock) { var currentPage = (_pageRouteHelper.Page as AccommodationPage); var currentPageCategory = currentPage.Location.First(); var contentResult = 
FindServiceClient.Search<AccommodationPage>() .Filter(h => h.Location.Match(currentPageCategory)) .Filter(h => !h.ContentGuid.Match(currentPage.ContentGuid)) .GetContentResult(); var model = new RelatedHomesBlockModel { Heading = currentBlock.Heading, ContentResult = contentResult }; return PartialView(model); } }</pre> Setting up Episerver Find on a development machine using Autofac 2017-02-17T20:29:39.8100000Z <p>Setting up Episerver Find is straightforward, but I did run into a few steps that could stump new players.</p> <p>The first two steps below are well documented so I will not go into them.</p> <p><strong>Step 1:</strong> Create an Episerver Find account online.</p> <p><strong>Step 2:</strong> Add the web config details to your project.</p> <p><strong>Step 3:</strong> Next we add the Autofac dependency injection code into our <strong>Global.asax.cs</strong> file; most of the code goes into the <strong>Application_Start()</strong> method that already exists. <br />Note: Below I am registering all Controllers in the application using <strong>builder.RegisterControllers(typeof(EpiServerApplication).Assembly) </strong>so just change the <strong>EpiServerApplication </strong>class to match your application. <br />We then register our search client class and set the dependency resolver using Autofac.<br />This means we only <span>instantiate the <strong>Client </strong>class once and then inject it into our controllers, which improves the application's performance. 
</span></p> <pre class="brush:csharp;auto-links:false;toolbar:false" contenteditable="false"> public class EPiServerApplication : EPiServer.Global { protected void Application_Start() { var builder = new ContainerBuilder(); builder.RegisterControllers(typeof(EPiServerApplication).Assembly); builder.Register(x => CreateSearchClient()).As<IClient>().SingleInstance(); var container = builder.Build(); DependencyResolver.SetResolver( new AutofacDependencyResolver(container)); //Standard MVC stuff AreaRegistration.RegisterAllAreas(); } private static IClient CreateSearchClient() { var client = Client.CreateFromConfig(); //Any modifications required go here return client; } }</pre> <p> </p> <p><strong>Step 4:</strong> On the Episerver Alloy template the code above didn't compile because it was missing some important DLLs; to fix this we now need to add the <strong>Autofac.dll </strong>and<strong> Autofac.Integration.Mvc.dll</strong> to our project.</p> <p>In Visual Studio go to<strong> Tools > NuGet Package Manager > Package Manager Console</strong></p> <p>In the <strong>Package Manager Console </strong>enter the following:</p> <p><strong>PM > Install-Package Autofac</strong></p> <p>Then run</p> <p><strong>PM > Install-Package <span>Autofac.Mvc5</span> </strong> </p> <p>The code we entered above will now resolve and you should be able to build your project without errors.</p> <p><strong>Step 5:</strong> Now in a Controller we need to add an IClient property, which as you can see below I called <strong>FindServiceClient.</strong> <br />Then add a constructor with the IClient parameter that we are injecting into and set it to the <strong>FindServiceClient</strong><strong> </strong>property. 
<br />Now we can write queries against the Find service as follows; this code is from a block controller that returns pages:</p> <pre class="brush:csharp;auto-links:false;toolbar:false" contenteditable="false"> public class TopRatedHomesBlockController : BlockController<TopRatedHomesBlock> { public IClient FindServiceClient { get; private set; } public TopRatedHomesBlockController(IClient client) { FindServiceClient = client; } public override ActionResult Index(TopRatedHomesBlock currentBlock) { var contentResult = FindServiceClient.Search<AccommodationPage>() .Filter(h => h.Rating.GreaterThan(4)) .GetContentResult(); var model = new TopRatedHomesBlockModel { Heading = currentBlock.Heading, ContentResult = contentResult }; return PartialView(model); } }</pre> <p><strong>Step 6: </strong>In the code above I am using an extension method to get <strong>PageData</strong>; the <strong>AccommodationPage </strong>class inherits <strong>StandardPage</strong>, which inherits <strong>SitePageData</strong>, which inherits <strong>PageData</strong>. The extension method used is called <strong>GetContentResult()</strong>.</p> <p>To use this extension method we must add the following <strong>using</strong> to our class:</p> <p><strong>using EPiServer.Find.Cms; </strong></p> <p><strong>Completed: </strong>That wraps up how to set up the powerful Episerver Find!</p> 4 Reasons Episerver Is a Leader in the CMS World 2017-02-17T16:12:36.0000000Z Forrester has named Episerver a leader in the web content management space. We look at why website owners should be excited about Episerver's capabilities. Extend your content types in Episerver 2017-02-17T07:29:04.0000000Z. 
Breaking changes with inRiver PIM 2017-02-16T19:11:44.0000000Z If you're using inRiver PIM with the Episerver connector, and upgrading the PIM system from 6.2 to 6.3, there's a breaking change you should be aware of that could potentially make products on your page invisible. MenuPin v10 for Episerver 10 released 2017-02-14T16:02:35.0000000Z Episerver Commerce performance optimization – part 1 2017-02-13T14:00:10.0000000Z <p>This is the first part of a long series (with no planned number of parts) covering the lessons I learned while troubleshooting customers’ performance problems. I’m quite addicted to the support cases reported by customers, especially the ones with performance problems. Implementations differ from case to case, but there are some … <a href="" class="more-link">Continue reading <span class="screen-reader-text">Episerver Commerce performance optimization – part 1</span></a></p> <p>The post <a rel="nofollow" href="">Episerver Commerce performance optimization – part 1</a> appeared first on <a rel="nofollow" href="">Quan Mai's blog</a>.</p> Utilizing Inversion of Control to Build a Swappable Search Service in Episerver 2017-02-13T00:00:00.0000000Z Presenting: Find My Content 2017-02-12T17:06:28.0000000Z My […] Better event handling in Episerver 2017-02-12T00:00:00.0000000Z <p>When thinking about <em>Episerver</em> content events as an outer border of your application according to <a href="">the ports and adapters architecture</a> (also known as <a href="">a pizza architecture</a> :)), we have to implement an adapter for these events. This adapter should translate <em>Episerver</em> events into our application's events.</p> <p>There is a good solution for this purpose. Some time ago <a href="">Valdis Iļjučonoks</a> wrote an <a href="">article</a> on how the <a href="">Mediator pattern</a> can help with this.
I am going to use the <a href="">MediatR</a> library for this purpose.</p> <p>First of all, install <em>MediatR</em> in your project.</p> <pre><code>Install-Package MediatR </code></pre><p>Then we need an initialization module where the events will be handled. Create one; in the <em>Initialize</em> method, load <em>IContentEvents</em> and attach an event handler to the events you care about. In this example, I am attaching to the <em>SavedContent</em> event. Do not forget to detach the events in the <em>Uninitialize</em> method.</p> <pre><code>[InitializableModule]
[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
public class EventInitialization : IInitializableModule
{
    private static bool _initialized;

    private Injected<IMediator> InjectedMediator { get; set; }

    private IMediator Mediator => InjectedMediator.Service;

    public void Initialize(InitializationEngine context)
    {
        if (_initialized)
        {
            return;
        }
        var contentEvents = context.Locate.ContentEvents();
        contentEvents.SavedContent += OnSavedContent;
        _initialized = true;
    }

    private void OnSavedContent(object sender, ContentEventArgs contentEventArgs)
    {
    }

    public void Uninitialize(InitializationEngine context)
    {
        var contentEvents = context.Locate.ContentEvents();
        contentEvents.SavedContent -= OnSavedContent;
        _initialized = false;
    }
}
</code></pre><p>So now there is an event handler, and we should somehow call the mediator. To start, we have to create our own <em>event</em> types. Here is an example of the <em>SavedContentEvent</em>.</p> <pre><code>public class SavedContentEvent : INotification
{
    public SavedContentEvent(ContentReference contentLink, IContent content)
    {
        ContentLink = contentLink;
        Content = content;
    }

    public ContentReference ContentLink { get; set; }

    public IContent Content { get; set; }
}
</code></pre><p>This event contains only those properties which are important for this event and no more.</p> <p>Now we are ready to publish our first event.
Locate the mediator instance, create our event from the <em>ContentEventArgs</em>, and call the mediator's <em>Publish</em> method with our event as a parameter.</p> <pre><code>private Injected<IMediator> InjectedMediator { get; set; }

private IMediator Mediator => InjectedMediator.Service;

private void OnSavedContent(object sender, ContentEventArgs contentEventArgs)
{
    var ev = new SavedContentEvent(contentEventArgs.ContentLink, contentEventArgs.Content);
    Mediator.Publish(ev);
}
</code></pre><p>The last step for the mediator to be able to publish events is its configuration. You can find configuration examples for different IoC containers in the documentation. Here is an example of the configuration required for <em>StructureMap</em>, added in the configurable initialization module.</p> <pre><code>container.Scan(
    scanner =>
    {
        scanner.TheCallingAssembly();
        scanner.AssemblyContainingType<IMediator>();
        scanner.WithDefaultConventions();
        scanner.ConnectImplementationsToTypesClosing(typeof(IRequestHandler<,>));
        scanner.ConnectImplementationsToTypesClosing(typeof(INotificationHandler<>));
    });
container.For<SingleInstanceFactory>().Use<SingleInstanceFactory>(ctx => t => ctx.GetInstance(t));
container.For<MultiInstanceFactory>().Use<MultiInstanceFactory>(ctx => t => ctx.GetAllInstances(t));
</code></pre><p>Now that everything is set up, how do we use these published events? You have to create handlers for the events. The mediator will send the event to all matching handlers, so you can create as many event handlers as you need for a single event. Event handlers support <a href="">Dependency Injection</a>, so you can inject whatever services you need in the constructor.
Here is an example of how handlers for the <em>SavedContentEvent</em> could look.</p> <pre><code>public class SendAdminEmailOnSavedContent : INotificationHandler<SavedContentEvent>
{
    private readonly IEmailService _emailService;

    public SendAdminEmailOnSavedContent(IEmailService emailService)
    {
        _emailService = emailService;
    }

    public void Handle(SavedContentEvent notification)
    {
        // Handle event.
    }
}

public class LogOnSavedContent : INotificationHandler<SavedContentEvent>
{
    public void Handle(SavedContentEvent notification)
    {
        // Handle event.
    }
}
</code></pre><h1 id="summary">Summary</h1> <p>This solution might look too complex for handling some simple events, but those simple events usually become quite complex in our applications, and then the event handling for all their cases is baked into a single method of the initialization module. The code becomes hard to maintain.</p> <p>With a mediator, events have separate handlers for each case you need. So it is much easier to change the code when requirements change, it is much easier to add new event handling for new requirements, and in general the code becomes much easier to reason about.</p> Customize Summary in emails from #Episerver Forms 2017-02-10T08:08:16.0000000Z Episerver Forms is getting stronger. Here is an example of how to customize the summary text with the PlaceHolderProvider that is available in version 4.4. Surrounding EpiServer lines with a div in content area 2017-02-09T19:12:00.0000000Z <br /><h2>Surrounding EpiServer lines with a div in content area</h2>Some days ago I had a discussion with our frontenders regarding the rows in a ContentArea. They wanted a div surrounding each row of items so they could apply their styles the correct way. In this small post I will describe what I have done:<br />I will use the Alloy demo site to describe the issue and my solution<br /><br /><h3><b>The problem:</b></h3><div>In a ContentArea, if you add some items, it will render as one big div with multiple rows inside.
</div><div>As you can see in the picture, item 1 takes a full row, items 2 and 3 are on the second row, and items 4, 5 and 6 are on the third row. The size of each block depends on its display option.</div><div><br /></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="296" src="" width="640" /></a></div><div><br /></div><div><br /></div><div><br /></div><h3>The request</h3><div>Surrounding each row with a div:<a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="38" src="" width="640" /></a></div><div><br /></div><div><br /></div><div>However, to add the container around each row, you need to override ContentAreaRenderer. In the Alloy demo site it has already been overridden to add suitable classes based on the tags, and it is called AlloyContentAreaRenderer. Just override the RenderContentAreaItems method and add this code.
Then you are good to go :)</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="432" src="" width="640" /></a></div><div><br /></div><pre class="brush: csharp">protected override void RenderContentAreaItems(HtmlHelper htmlHelper, IEnumerable<ContentAreaItem> contentAreaItems)
{
    float totalSize = 0;
    if (contentAreaItems.Any())
        htmlHelper.ViewContext.Writer.Write(@"<div class=""row"">");

    foreach (ContentAreaItem contentAreaItem in contentAreaItems)
    {
        var tag = this.GetContentAreaItemTemplateTag(htmlHelper, contentAreaItem);
        totalSize += GetSize(tag);
        if (totalSize > 1)
        {
            htmlHelper.ViewContext.Writer.Write(@"</div><div class=""row"">");
            totalSize = GetSize(tag);
        }
        this.RenderContentAreaItem(htmlHelper, contentAreaItem,
            tag,
            this.GetContentAreaItemHtmlTag(htmlHelper, contentAreaItem),
            this.GetContentAreaItemCssClass(htmlHelper, contentAreaItem));
    }
    if (contentAreaItems.Any())
        htmlHelper.ViewContext.Writer.Write(@"</div>");
}

float GetSize(string tagName)
{
    if (string.IsNullOrEmpty(tagName))
    {
        return 1f;
    }
    switch (tagName.ToLower())
    {
        case "span12":
            return 1f;
        case "span8":
            return 0.666666f;
        case "span6":
            return 0.5f;
        case "span4":
            return 0.333333f;
        default:
            return 1f;
    }
}
</pre><h3>The result</h3><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="288" src="" width="640" /></a></div><div><br /></div><div><br /></div><div><br /></div><div><br /></div><div><br
/></div> Exporting form data to Excel in EPiServer 10 2017-02-07T17:01:09.0000000Z EPiServer allows you to export form data to text-based files. This is how you can export form data to binary-based files such as XLSX. [No-workflow] Customizing ITaxCalculator and combining it with workflow 2017-02-07T04:02:57.4100000Z <p>You might know that Episerver Commerce introduced ITaxCalculator beginning with version 8.x; the <b>Tax calculator</b> section in this <a href="/link/ca0fc3fa4c6b45fbbe0293a23c94a91a.aspx" target="_blank">document</a> explains how to modify it.</p> <p>However, in this post, we walk through it and figure out when it was used, how to customize it, and what ITaxCalculator missed.</p> <p>I have a list of subtopics here, so you can skip subjects that you already know:</p> <ul> <li><a href="#sodo_when">When the Tax Calculator was used</a></li> <li><a href="#sodo_how">How to customize ITaxCalculator</a></li> <li><a href="#sodo_what">What ITaxCalculator missed</a></li> <li><a href="#sodo_combine">Combine customized ITaxCalculator with workflow</a></li> </ul> <h3 id="sodo_when">When the Tax Calculator was used</h3> <ul> <li>In workflow, tax was calculated in the Calculate Tax activity. We could modify the tax formula there. <i>We could download the workflow <a href="/link/474a5a959bee4a17bb1dddb73e174ac1.aspx" target="_blank">sample code</a> to modify, build, and deploy our business logic to the site.</i></li> <li>In a no-workflow way, tax was calculated via ITaxCalculator after calling <b>IOrderGroupCalculator.GetTotal</b>. <br /> <b>IOrderGroupCalculator.GetTotal</b> calculates everything related to an order group: line items total, tax, subtotal, shipping cost, shipping tax, order total...</li> </ul> <h3 id="sodo_how">How to customize ITaxCalculator</h3> <p>This <a href="/link/55fe410f4f65482cab72fb59d47c7561.aspx?documentId=commerce/10/277D3EB8" target="_blank">sample code</a> has an example of how to modify tax.
We will use this code and do some modification.</p> <p><span class="text-info small">Note that </span> we renamed TaxCalculatorSample to CustomizedTaxCalculator for our customization.</p> <div class="clearfix"> </div> <div id="codeB1" class="panel-group"> <div class="panel panel-success"> <div id="samplecodeB1" class="panel-heading"><span class="panel-title small"><a href="#samplecodeB1a">CustomizedTaxCalculator.cs</a></span></div> <div id="samplecodeB1a" class="panel-collapse collapse"> <div class="panel-body"> <pre>public class CustomizedTaxCalculator : ITaxCalculator
{
    private readonly IContentRepository _contentRepository;
    private readonly ReferenceConverter _referenceConverter;
    private readonly IShippingCalculator _shippingCalculator;

    public CustomizedTaxCalculator(IContentRepository contentRepository, ReferenceConverter referenceConverter, IShippingCalculator shippingCalculator)
    {
        _contentRepository = contentRepository;
        _referenceConverter = referenceConverter;
        _shippingCalculator = shippingCalculator;
    }

    public Money GetShippingTaxTotal(IShipment shipment, IMarket market, Currency currency)
    {
        if (shipment.ShippingAddress == null)
        {
            return new Money(0m, currency);
        }
        var shippingTaxes = new List<ITaxValue>();
        // ... (loading the applicable 'taxes' collection per line item is omitted here)
        shippingTaxes.AddRange(
            taxes.Where(x => x.TaxType == TaxType.ShippingTax && !shippingTaxes.Any(y => y.Name.Equals(x.Name))));
        return new Money(CalculateShippingTax(shippingTaxes, shipment, market, currency), currency);
    }

    public Money GetTaxTotal(IOrderGroup orderGroup, IMarket market, Currency currency)
    {
        //iterate over all order forms in this order group and sum the results.
        return new Money(orderGroup.Forms.Sum(form => GetTaxTotal(form, market, currency).Amount), currency);
    }

    public Money GetTaxTotal(IOrderForm orderForm, IMarket market, Currency currency)
    {
        var formTaxes = 0m;
        //calculate tax for each shipment
        foreach (var shipment in orderForm.Shipments.Where(x => x.ShippingAddress != null))
        {
            var shippingTaxes = new List<ITaxValue>();
            foreach (var item in shipment.LineItems)
            {
                // ... (loading the applicable 'taxes' collection for the line item is omitted here)
                //calculate the line item price, excluding tax
                var lineItem = (LineItem)item;
                var lineItemPricesExcludingTax = item.PlacedPrice - (lineItem.OrderLevelDiscountAmount + lineItem.LineItemDiscountAmount) / item.Quantity;
                var quantity = 0m;
                if (orderForm.Name.Equals(OrderForm.ReturnName))
                {
                    quantity = item.ReturnQuantity;
                }
                else
                {
                    quantity = item.Quantity;
                }
                formTaxes += taxes.Where(x => x.TaxType == TaxType.SalesTax).Sum(x => lineItemPricesExcludingTax * (decimal)x.Percentage * 0.01m) * quantity;
                //add shipping taxes for later tax calculation
                shippingTaxes.AddRange(
                    taxes.Where(x => x.TaxType == TaxType.ShippingTax && !shippingTaxes.Any(y => y.Name.Equals(x.Name))));
            }
            //sum the calculated tax for each shipment
            formTaxes += CalculateShippingTax(shippingTaxes, shipment, market, currency);
        }
        return new Money(currency.Round(formTaxes), currency);
    }

    private static IEnumerable<ITaxValue> GetTaxValues(string taxCategory, string languageCode, IOrderAddress orderAddress)
    {
        return OrderContext.Current.GetTaxes(Guid.Empty, taxCategory, languageCode, orderAddress);
    }

    private decimal CalculateShippingTax(IEnumerable<ITaxValue> taxes, IShipment shipment, IMarket market, Currency currency)
    {
        //calculate shipping cost for the shipment, for specified market and currency.
        var shippingCost = _shippingCalculator.GetShippingCost(shipment, market, currency).Amount;
        return taxes.Where(x => x.TaxType == TaxType.ShippingTax).Sum(x => shippingCost * (decimal)x.Percentage * 0.01m);
    }
}</pre> </div> </div> </div> </div> <p>In your site initialization, remember to register the new tax calculator in your container:</p> <pre>services.AddTransient<ITaxCalculator, CustomizedTaxCalculator>();</pre> <h3 id="sodo_what">What ITaxCalculator missed</h3> <p>ITaxCalculator was used for calculating tax <span class="text-info small">of course ^_^</span> on the front end without workflow (or, as we call it, no-workflow).</p> <p>However, if we have a custom tax implementation, it won't work properly in Commerce Manager. Commerce Manager still uses workflow, so we need to combine ITaxCalculator with workflow. If you haven't customized workflow before, now is the right time to do it.</p> <h3 id="sodo_combine">Combine customized ITaxCalculator with workflow</h3> <h4 id="sodo_Resources">Resources preparation</h4> <ul> <li>Visual Studio.</li> <li>Download the code sample from the <a href="/link/60513f074cf049bfae0d279d92f99ddf.aspx" target="_blank">download page</a> or <a href="/link/474a5a959bee4a17bb1dddb73e174ac1.aspx" target="_blank">here</a>.
Unzip it, and we will use the <span class="text-danger">Mediachase.Commerce.Workflow</span> project.</li> <li>For customizing activity flow, refer to this <a href="/link/75cac3110f034225b68171b7b9c972c9.aspx" target="_blank">document</a>.</li> </ul> <h4 id="sodo_ITaxCalculatorworkflow">Using ITaxCalculator in workflow</h4> <p>In this scope, we play with the CalculateTaxActivity.</p> <p>Open <span class="text-danger">CalculateTaxActivity.cs</span> and add the line below to the top of this class:</p> <pre>private readonly Injected<ITaxCalculator> _taxCalculator;</pre> <p>This will call our customized <span class="text-danger">CustomizedTaxCalculator</span>.</p> <p>In the <span class="text-danger">CalculateTaxes()</span> method, we use <span class="text-danger">_taxCalculator</span> to set <b>ShippingTax</b> and <b>TaxTotal</b>.</p> <p><img class="img-thumbnail" title="CalculateTaxActivity" alt="CalculateTaxActivity" src="" /></p> <p> </p> <p>Those changes are also available for preview on my <a href="" target="_blank">GitHub</a>.</p> <p> </p> <p>Hope this helps you.</p> <p>/Son Do</p> Moving Commerce content in Vulcan 2017-02-06T15:02:00.0000000Z <p>I’ve just released a minor update of Vulcan that addresses two quite significant issues.</p> <ol> <li>Reindexing commerce content if a product/variant is moved between categories or, more to the point, the categories change by being modified, added to or removed.</li> <li>Indexing all the categories of a product/variant if it belongs to multiple, rather than just the first one.</li></ol> <p>Note this has also introduced a breaking change if you created custom <strong>IVulcanIndexingModifier</strong> classes.</p> <p>The first ‘bug’ occurred because although we were listening to the move of content on <strong>IContentEvents</strong> and updating the index appropriately, moving commerce content doesn’t actually fire a move event of the content itself on <strong>IContentEvents</strong>.
This is correct as, technically, it’s not being moved. It’s simply changing the relations, and we weren’t picking that event up. To replicate the issue, cut-paste a variant/product to a different category and you’ll see the ‘ancestors’ in ElasticSearch don’t change for an item. I’ve updated the Vulcan Commerce package to listen for these relation changes and index appropriately.</p> <p>The second ‘bug’ occurred because of a limitation of the <strong>GetAncestors</strong> extension API call for <strong>IContentLoader</strong>. It only understands one ‘parent’, but with Commerce content you can have multiple ‘parents’ as an item can belong to many categories. I’ve had to rewrite a commerce-specific version of it to work nicely with variants/products. This implementation can be found in the Vulcan Commerce library codebase inside <strong>VulcanCommerceIndexingModifier</strong> for the curious. To replicate the issue, add another category to a variant/product and you’ll see the ‘ancestors’ in ElasticSearch don’t change for an item. In fixing this, I’ve taken the opportunity to create the ability for customising the ‘ancestors’ that are picked up when an item is indexed. The <strong>IVulcanIndexingModifier</strong> interface now has a <strong>GetAncestors</strong> method. In your custom indexer classes you can leave it throwing a not implemented exception; that’s OK. However, the OOTB CMS and Commerce indexers will now return ancestors as appropriate, which will get indexed with the item. (This is the breaking change, as your custom classes WILL need to implement the method, even if it just throws a not implemented exception.)</p> <p>Side note: in the bugged version, manually firing a re-index is a workaround for issue (1) but it still only picks up the <em>first</em> category that something belongs to as it won’t resolve issue (2).
If your products/variants only ever belong to one category then manual reindexing would be a workaround until you can update to the latest Vulcan packages.</p> <p><strong>DISCLAIMER:</strong> This project is in no way connected with or endorsed by Episerver. It is being created under the auspices of a South African company and is entirely separate to what I do as an Episerver employee.</p> Fixing “tax total” differences 2017-02-05T15:33:50.0000000Z Sometimes you will need or want to display the price including tax on the line items of your cart. When you do this, you may see differences in the totals, using the calculators in Commerce, between the sum of the prices in your cart as they are displayed and the sum from the calculator. This … <a href="" class="more-link">Continue reading <span class="screen-reader-text">Fixing “tax total” differences</span></a> Getting Started Using Vulcan Search in Episerver 2017-02-03T17:29:45.0000000Z We offer detailed instructions for how to set up Vulcan search in Episerver, which will index Episerver CMS content to Elasticsearch. EMVP Program Updates and new EMVPs 2017-02-03T14:40:35.4270000Z <p><img src="/link/c43dc3610f8d4a51b11146a2f9382cc3.aspx" alt="Image emvplogo.jpg" /></p> <p>The Episerver Most Valuable Professional program has existed for quite a few years and is still going strong! But as with many other things, it’s about time we give it an upgrade! When the EMVP program was started, the main intended audience was developers, mainly since the main intent was to motivate and promote those developers who share code, answer forum questions and help the developer community. It worked excellently, and we are thrilled to have such an awesome developer community!</p> <p>But over the last few years it’s become increasingly obvious that not only developers are sharing their knowledge and contributing to improving the online experience for all Episerver users.
There are also more business-minded people who are sharing ideas, strategic visions, marketing and editorial knowledge on many channels, ranging from blogs to Twitter and other media. Some of them are cross-overs from the technical side; some are purely business focused.</p> <p>So, today I’m happy to announce that the EMVP program will in the future officially be for both Developers and Digital Strategists who are experts in their field, actively share knowledge with our community of customers and partners, and inspire great digital experiences. To begin with, we would like to acknowledge those existing EMVPs who are crossovers from the technical to the business side and remain greatly appreciated EMVPs: Alexander Haneng, Arild Henrichsen, Chris Osterhout and Deane Barker. In the future, we are happy to accept nominations for other digital strategists who qualify for EMVP status.</p> <p><img src="/link/fed50b551d06420a8c0063b3af15276e.aspx" width="620" alt="Image emvps.jpg" height="349" /></p> <p>Some of the EMVP gang posing for a wooden selfie-stick photo along with Epi staff at the last EMVP Summit.</p> <h2>EMVP Changes</h2> <p>I’m very happy to welcome the following people to the rank of EMVP:</p> <p><img src="/link/650373c8729c422c9018d513ad69ad47.aspx" alt="Image eric.jpg" /></p> <p><strong>Eric Herlitz, sQills, Sweden</strong></p> <p><strong><img src="/link/349f4ae486c34f1f9bc45e42b1765378.aspx" alt="Image daniel.jpg" /></strong></p> <p><strong>Daniel Ovaska, Mogul, Sweden</strong></p> <p><strong><img src="/link/661cff09d52644dab91c30b4f8120cb7.aspx" alt="Image Luc.jpg" /></strong></p> <p><strong>Luc Gosso, Gosso Systems, Sweden</strong></p> <p> </p> <p>Sadly, we also lost 5 EMVPs: some because they are now Episerver staff (and as such disqualified from remaining EMVPs), and some due to low community activity.
We thank them for their service through the years!</p> <p>If you want to learn more about the EMVP Program or see a list of the current EMVPs, you can find it on <a href="/link/0e4c738fd29f4c0ebe73a97178bc995a.aspx">world.episerver.com/emvp</a>.</p> Razor view engine for feature folders 2017-02-03T00:00:00.0000000Z <p>An <em>MVC</em> application uses <em>RazorViewEngine</em> to find views. The default implementation looks for views in the <em>Views</em> folder in the root of the project or within an <em>MVC</em> <em>Area</em>. But there is a way to create your own <em>RazorViewEngine</em> implementation.</p> <p>The easiest way is to inherit your custom view engine from <em>RazorViewEngine</em> and set the <em>ViewLocationFormats</em>, <em>MasterLocationFormats</em> and <em>PartialViewLocationFormats</em> properties (there are also other properties for <em>Areas</em>) in the constructor with your view location formats. A location format has two format items: {0} - action name (also content type name in Episerver) and {1} - controller name.</p> <pre><code>public class CustomViewEngine : RazorViewEngine
{
    public CustomViewEngine()
    {
        // set ViewLocationFormats, MasterLocationFormats and
        // PartialViewLocationFormats with your location formats here
    }
}
</code></pre><h1 id="basic-feature-folder-support">Basic feature folder support</h1> <p>Basic support for feature folders is simple. In this example, I registered several different ways views can be resolved under the <em>Features</em> folder. <em>ViewLocationFormats</em>, <em>MasterLocationFormats</em> and <em>PartialViewLocationFormats</em> share the same view location formats.
I also append these to the default ones.</p> <p>This configuration supports these view locations:</p> <ul> <li><em>"~/Features/{0}.cshtml"</em> - looks for views which match the action name or content type name in the root of the <em>Features</em> folder.</li> <li><em>"~/Features/{1}/{0}.cshtml"</em> - looks for views in the folder which matches the controller name, where the view matches the action name.</li> <li><em>"~/Features/{1}/{1}.cshtml"</em> - looks for views in the folder which matches the controller name, where the view matches the controller name.</li> <li><em>"~/Features/{1}/Views/{0}.cshtml"</em> - looks for views in the folder which matches the controller name, where the view matches the action name in the <em>Views</em> folder.</li> <li><em>"~/Features/{1}/Views/{1}.cshtml"</em> - looks for views in the folder which matches the controller name, where the view matches the controller name in the <em>Views</em> folder.</li> <li><em>"~/Features/Shared/{0}.cshtml"</em> - looks for views in the <em>Shared</em> folder, where the view matches the action name or content type name in the root of the <em>Shared</em> folder.</li> <li><em>"~/Features/Shared/Views/{0}.cshtml"</em> - looks for views in the <em>Shared</em> folder, where the view matches the action name or content type name in the <em>Views</em> folder.</li> </ul> <blockquote> <p>NOTE: There is one "bug" in <em>Episerver</em> (at least I perceive it like that): if you name your view the same as a content type, it will not pick the content type's controller but will try to render the view directly by matching the content type name to the view name. Initially this applied only to blocks, but it also caused pages to render incorrectly and throw exceptions.</p> </blockquote> <p>Now it looks quite good. But there are some issues.</p> <p>First of all, when working with <em>Episerver</em> it is common to name the controller after the content type plus a <em>Controller</em> postfix.
For example, an <em>ArticlePage</em> content type with an <em>ArticlePageController</em>. This way the feature folder has to be called <em>ArticlePage</em>, but it would be nicer to call it <em>Articles</em>. Also, a namespace with the same name as the content type will cause naming conflicts.</p> <p>This configuration also doesn't support sub-features.</p> <h1 id="advanced-feature-folder-support">Advanced feature folder support</h1> <p>To be able to add feature folders with any name, a view engine should scan all folders in the <em>Features</em> folder and register view location formats for each of them, including all possible view location formats you might need for a single folder, and then append these folders to the other view location formats.</p> <p>This approach still has several issues.</p> <p>The first one is that views for <em>Episerver</em> content should not be called <em>Index</em> or have the same name as the content type. That is the reason why there is a location format where the view name consists of the controller and action names.</p> <p>Another issue is related to sub-feature folder naming. A sub-feature folder still has to have the same name as its controller.</p> <p>View names also should be unique. That's why <em>Index</em> can't be used as a view name.</p> <p><em>Visual Studio</em> will show you warnings that it is unable to resolve views.</p> <p>As there are a lot of view location formats registered, there might be some performance issues when looking for the right view. I haven't measured that, but so far I haven't had any issues.</p> <h1 id="summary">Summary</h1> <p>Even with all the disadvantages, organizing views in feature folders has one big benefit: maintainability. Views are now close to the code which uses them.</p> <p>Here is a final version of the view engine:</p>
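To tie the pieces above together, here is a minimal sketch of what such a feature-folder view engine could look like. The class name FeatureViewEngine and the exact set of location formats are assumptions drawn from the locations listed earlier, not the author's final code:

```csharp
// Assumed sketch: appends the feature-folder view locations described above
// to the default Razor locations. Targets ASP.NET MVC (System.Web.Mvc).
using System.Linq;
using System.Web.Mvc;

public class FeatureViewEngine : RazorViewEngine
{
    public FeatureViewEngine()
    {
        // {0} = action (or content type) name, {1} = controller name.
        var featureLocations = new[]
        {
            "~/Features/{0}.cshtml",
            "~/Features/{1}/{0}.cshtml",
            "~/Features/{1}/{1}.cshtml",
            "~/Features/{1}/Views/{0}.cshtml",
            "~/Features/{1}/Views/{1}.cshtml",
            "~/Features/Shared/{0}.cshtml",
            "~/Features/Shared/Views/{0}.cshtml"
        };

        // Append to the defaults set by the base constructor rather than replacing them.
        ViewLocationFormats = ViewLocationFormats.Union(featureLocations).ToArray();
        MasterLocationFormats = MasterLocationFormats.Union(featureLocations).ToArray();
        PartialViewLocationFormats = PartialViewLocationFormats.Union(featureLocations).ToArray();
    }
}
```

The engine would then typically be registered at application start, e.g. `ViewEngines.Engines.Add(new FeatureViewEngine());`.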
With the addition of Visual Studio Update 1 we introduced a cool new feature called Code Map. You can use Code Map to visualize relationships in code. In this article we will explore this new feature and show you how to use it.

Requirements

Before we get started you need to be aware of the requirements for Code Map. You need Visual Studio 2012.1 and one of these editions:

- Visual Studio 2012 Ultimate to create code maps from the code editor or from Solution Explorer. Note: Before you share maps with others who use Premium or Professional, make sure that all the items on the map are visible, such as hidden items, expanded groups, and cross-group links.
- Visual Studio 2012 Premium or Visual Studio 2012 Professional to open code maps, make limited edits, and navigate code.

You also need a solution with Visual C# .NET or Visual Basic .NET code.

With that said, I often see larger organizations that have a few extra Ultimate licenses, so you might ask around at your company to see if any are available if you need the higher-level edition.

What are Code Maps?

We will get to examples of maps in a minute but I wanted to help clarify their origin. Code Maps are a more user-friendly approach to something we have had in Ultimate for a while known as Dependency Graphs. With this new feature we make creating and manipulating visualizations easier. I bring this up because you may want to explore creating Dependency Graphs to learn more about how to work with these visualizations. You can find out more here:

Creating Code Maps

The need for maps will usually manifest itself when you are writing or debugging code and need to understand code relationships. Let's take, for example, TailSpin Toys from the Brian Keller virtual machine found at: Let's say I happen to be looking at the AddItem method and want to get a handle on what is calling this method.
I can right-click the method and choose Show on Code Map (note the shortcut key as well): NOTE: I'm showing this path for demo purposes. As you get more advanced you may want to choose an option from the Show Related Items on Code Map menu: Once you choose to show an item on the map, Visual Studio will build the solution and index it to generate the initial map image: The first image may not look like much: I want to see anything that calls this method. I can go to Show Related Items on the toolbar or simply right-click the AddItem node and choose Find All References: Now we have a map! We can visualize our method and calls to it: I'm not a fan of the default orientation (Top to Bottom) in this case, so go to the toolbar and select the Left to Right Layout, which will give a little better perspective on what is happening: If you need to fit the diagram to your viewing area you can use the Zoom To Fit button on the Code Map Toolbar: NOTE: Whenever you add nodes the most recently added ones will be in green. If this annoys you, you can clear the green color by going to Layout on the Code Map Toolbar and selecting Clear Result Highlighting or by pressing CTRL + G: You can see the legend by going to the Code Map Toolbar: You can hover your mouse over any node to get more detail and/or you can double-click any node to see the code associated with it: Note the green arrow beside the node you are currently viewing. This is just like the map in the mall that says, "You are here." It is meant to clearly show which part of the codebase you are examining. The arrow only shows up on nodes when the editor cursor is in the code underlying them: You can also flag nodes using a variety of colors to indicate some type of action needs to be taken: If you need more detail, you can add comments to any node by right-clicking the node and selecting New Comment:

Sharing Code Maps

At some point you will want to share the Code Map with others.
If you want the diagram to travel with source control you can move the diagram to an existing project by going to Share on the Code Map Toolbar and moving the map file to an existing project. You will see it show up in your project after you move it. Attaching the diagram to your source code is the optimal option, but you can also choose one of the other methods from the Share button on the toolbar to share with others.

Finally

As you can see there is quite a bit to using Code Map and it is a great way to get a handle on complex code bases. I hope you like this feature as much as I do.

Can it help find:
– Methods, properties, constants used only in the current class but are public
– Methods, class, properties in one class only used by one other class (i.e., is the class a candidate to be nested in another class instead of being global)
– Methods used only one time (i.e., candidate for in-lining in the caller method and removing the original method)
– Classes in one namespace used only by something in another namespace (i.e., move to the namespace that uses it)
– Actions used only one time (i.e., could be in-lined in the calling code)

Hey Tom 🙂 Just off the top of my head, here are the answers referring specifically to the functionality of Code Map in Update 1:
YES – Methods, properties, constants used only in the current class but are public
YES – Methods, class, properties in one class only used by one other class (i.e., is the class a candidate to be nested in another class instead of being global)
NO – Methods used only one time (i.e., candidate for in-lining in the caller method and removing the original method)
YES – Classes in one namespace used only by something in another namespace (i.e., move to the namespace that uses it)
NO – Actions used only one time (i.e., could be in-lined in the calling code)
This feature is constantly being updated so some of this may change but, today, the ones marked YES are possible.
Z

Can you link code maps together?
I know the Visual Studio code mapping tool doesn't work with our solution due to 1) its enormous size, and 2) our use of interfaces.
https://blogs.msdn.microsoft.com/zainnab/2013/02/07/visual-studio-2012-update-1-understanding-code-map/
Here we go again. As you can see, I have gotten much further. There are some elements, however, that I am unsure how to apply (i.e. bool tooMany). I haven't the slightest idea how to apply that. That is one snag that I have. Another, and the main one, is this: the following code does work. It calls a file called "studentData.txt". Said file contains the ID#s and scores on their own lines:

(id) 101 (score) 100
102 95
103 90
...
...
121 0

Now, if I comment that out and just have it read from the arrays that I have hardcoded, it works great. I can't quite figure out how to read the .txt items into the individual arrays so that it uses those as opposed to the hardcoded arrays. One of my main issues with reading from a .txt file is that the only way I know how is using the getline feature. Is there anything better? Currently I have the code calling the .txt file.

#include <iostream>
#include <fstream>
#include <iomanip>
#include <string>
using namespace std;

void printTable(int score[], int id[], int count);
void printGrade(int oneScore, float average);

void readStudentData(ifstream &rss, int scores[], int id[], int &count, bool &tooMany)
{
    const int MAX_SIZE = 21;
    rss.open("studentData.txt");
    string line;
    id[MAX_SIZE];
    int score[MAX_SIZE];
    count = 0;
    int oneScore = 0;
    float average = 0;
    string grade;
    for (count = 0; count < MAX_SIZE; count++)
    {
        getline(rss, line);
        cout << line;
        getline(rss, line);
        cout << " " << line;
        cout << " " << grade << endl;
    }
    // printTable(score, id, count);
}

float computeAverage(int scores[], int count[])
{
    const int MAX_SIZE = 21;
    return 0;
}

void printTable(int score[], int id[], int count)
{
    void printGrade(int oneScore, float average);
    const int MAX_SIZE = 21;
    int oneScore = 0;
    float average = 0;
    string grade;
    id[MAX_SIZE];
    score[MAX_SIZE];
    cout << left << setw(9) << "ID#s" << setw(9) << "Scores" << setw(9) << "Grades" << endl << endl;
    //for (count = 0; count < MAX_SIZE; count++)
    //{
        printGrade(oneScore, average);
    //}
}

void printGrade(int oneScore, float average)
{
    const int MAX_SIZE = 21;
    int id[MAX_SIZE] = {101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121};
    int scores[MAX_SIZE] = {100,95,90,85,80,75,70,65,60,55,50,45,40,35,30,25,20,15,10,5,0};
    oneScore = 0;
    average = 0;
    string grade;
    int sum = 0;
    for (int i = 0; i < MAX_SIZE; i++)
        sum += scores[i];
    average = sum / MAX_SIZE;
    for (int i = 0; i < MAX_SIZE; i++)
    {
        if (scores[i] > average + 10) {
            grade = "outstanding";
        } else if (scores[i] < average - 10) {
            grade = "unsatisfactory";
        } else {
            grade = "satisfactory";
        }
        // cout << left << setw(9) << id[i] << setw(9) << scores[i] << setw(9) << grade << endl;
    }
}

int main()
{
    ifstream rss;
    string line;
    const int MAX_SIZE = 21;
    int scores[MAX_SIZE];
    int id[MAX_SIZE];
    int count;
    bool tooMany;
    readStudentData(rss, scores, id, count, tooMany);
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/201597/snagged-yet-again
CodePlex Project Hosting for Open Source Software

Hi, I was wondering if it was possible to configure the recent posts widget summary length. I noticed that the default blog posts list only displays X number of characters; is it possible to control this summary limit? Also, is it possible to include images in this summary? A future enhancement would be a preview of the summary below where the post was being authored. Many thanks, Ian

Seems like a good idea. Could be an interesting patch ;)

Indeed. But in the short term I just need to extend the character limit; I've not done any Orchard development yet. I'm assuming the summary will get set in the core rather than the view template. Is that right?

I have the same requirement. I'll cook up something.

OK, it's actually fairly simple. Add a file named Parts.Common.Body.Summary.cshtml into the Views directory of your theme, with these contents:

@{
    Orchard.ContentManagement.ContentItem contentItem = Model.ContentPart.ContentItem;
    string bodyHtml = Model.Html.ToString();
    var body = new HtmlString(Html.Excerpt(bodyHtml, 400).ToString()
        .Replace(Environment.NewLine, "</p>" + Environment.NewLine + "<p>"));
}
<p>@body</p>
<p>@Html.ItemDisplayLink(T("Read more...").ToString(), contentItem)</p>

Thanks Bertrand. It's starting to make sense, but what I don't quite understand is how you worked out the file must be called Parts.Common.Body.Summary. I can't find any documentation about the common parts. Also, if I override this part to match my recent blog posts design, hasn't that then overridden other content types' body summaries as well? Is there a way to target just a specific module, in this case the blog? I may have totally missed where this is documented. Cheers,

You are absolutely right that documentation is somewhat lacking in that area, as is good tooling to make this more discoverable. I'll explain how I figured it out, and some of it will be a little technical, but I hope I can keep it usable.
I know that the body of a content item is handled by the body part that is in the Core.Common module. I open the Views directory of that module and hunt for template files in there. So you determine what part is responsible for what you want to customize and try to find the right template in the Views folder of that module. Once you've done that, you can also look for alternates. You do that by looking for a class implementing IShapeTableProvider and seeing if anything is handling the shape you're looking for and adding alternates. You can see examples of that by searching the code for AddAlternates. You may not find an existing alternate that would enable you to specialize the template the way you want; that's where the technique I exposed in my post enters into play and enables you to add your own alternates. Between all those techniques (plus placement.info) you should be able to customize pretty much anything in any way you can think of. Does this help?

Ok. Say I want to customize the metadata information displayed on a recent blog post item. I know that the recent blog post item is displayed by Content.Summary.cshtml, which in turn calls:

<div class="metadata">
    @Display(Model.Meta)
</div>

I can see that in Shapes.cs there is an alternate for blog posts:

if (contentItem != null) {
    // Content-BlogPost
    displaying.ShapeMetadata.Alternates.Add("Content__" + contentItem.ContentType);

So in order to target the metadata on a blog post only, I tried creating a cshtml template in my theme named:

Content.BlogPost.Summary.Metadata.cshtml
Content.BlogPost.Metadata.cshtml

Not working. What am I missing... I'm sure this will all click soon. Thanks for all your help.
Uh. I was pretty sure I had answered this, but it appears my answer got lost in the intertubes... Oh well. Yeah, you have two shapes at play here: the content item's shape (that does have an alternate with the content type) and the metadata part that is provided by the Orchard.Common module. That one is called Parts_Common_MetaData and has no alternates that I know of. This is where the technique I showed in my post comes in handy, as it enables you to add your own alternates, for example based on the content type name. Makes sense? Oh, and the metadata shape is just one of the shapes that will get rendered inside of the more global content item shape. It's a hierarchy, in case that wasn't clear.

Still struggling with this. I've added the below, but a breakpoint on the ContentItem doesn't get hit. I've found this shape mentioned by name in the src so I know it's correct.

public class MetaDataShapeProvider : IShapeTableProvider {
    private readonly IWorkContextAccessor workContextAccessor;

    public MetaDataShapeProvider(IWorkContextAccessor workContextAccessor) {
        this.workContextAccessor = workContextAccessor;
    }

    public void Discover(ShapeTableBuilder builder) {
        builder
            .Describe("Parts_Common_Metadata")
            .OnDisplaying(displaying => {
                ContentItem contentItem = displaying.Shape.ContentItem;
                if (contentItem != null)
                    displaying.ShapeMetadata.Alternates.Add("Metadata__" + contentItem.ContentType);
            });
    }
}

Any ideas? If I'm understanding this concept correctly, this should create an alternate with the name "Metadata__BlogPost" that I can use to override the display of that part. I'm working off the below post for anyone that's interested.
http://orchard.codeplex.com/discussions/242866
this is a 2nd question related to my first question at: Flex 4: mx|tree - how can i disable items selection?

I want to disable the hover and selection colors so when a user selects an item its background won't change color. How is that possible?

update: I do not want to choose the selection and hover colors. The background contains an image so it won't be useful. I need to completely disable the colors.

another update: I tried to override the Tree class but with no luck. This is the class that overrides the Tree class:

package components.popups.WelcomeBack {
    import mx.controls.listClasses.IListItemRenderer;
    import mx.controls.Tree;

    /**
     * @author ufk
     */
    public class TreeNoSelection extends Tree {
        protected override function drawItem(item:IListItemRenderer,
                                             selected:Boolean = false,
                                             highlighted:Boolean = false,
                                             caret:Boolean = false,
                                             transition:Boolean = false):void {
            super.drawItem(item, false, false, false, transition);
        }
    }
}

and this is my actual tree component:

<?xml version="1.0" encoding="utf-8"?>
<WelcomeBack:TreeNoSelection xmlns:
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <fx:Script>
        <![CDATA[
            import ItemRenderer.WelcomeBackTreeItemRenderer;

            private var _themeLibrary:Object;

            public function get themeLibrary():Object {
                return this._themeLibrary;
            }

            public function set themeLibrary(tl:Object):void {
                this._themeLibrary = tl;
                var cf:ClassFactory = new ClassFactory();
                cf.generator = ItemRenderer.WelcomeBackTreeItemRenderer;
                cf.properties = { _themeLibrary: this._themeLibrary };
                this.itemRenderer = cf;
            }
        ]]>
    </fx:Script>
</WelcomeBack:TreeNoSelection>

thanks

You can use the rollOverColor and selectionColor styles on the Tree. There are different ways to set the styles, but here I'm setting them inline to white; change the color to whatever your background color is.

<mx:Tree rollOverColor="#FFFFFF" selectionColor="#FFFFFF"

I've got good news and bad news. The good news is that it's really easy.
The bad news is that you need to subclass the Tree.

package custom {
    import mx.controls.Tree;
    import mx.controls.listClasses.IListItemRenderer;

    public class CustomTree extends Tree {
        protected override function drawItem(item:IListItemRenderer,
                                             selected:Boolean = false,
                                             highlighted:Boolean = false,
                                             caret:Boolean = false,
                                             transition:Boolean = false):void {
            super.drawItem(item, false, false, false, transition);
        }
    }
}

So what happens here is that we intercept the drawItem method and call the method on the superclass, fooling it into thinking there's nothing selected, highlighted or "careted". The caret is for when you change selection by keyboard. Not sure what the transition parameter is for; you can send it as always false if there are still some effects bothering you.

Edit

After taking a look at the related question, I found out that the root of the problem is the item renderer using the new Spark architecture, which means the renderers are responsible for reacting to special states (selected, highlighted, show caret).
So when using the Spark item renderer there are three other functions that also need overriding:

public class CustomTree extends Tree {
    public override function isItemShowingCaret(data:Object):Boolean {
        return false;
    }

    public override function isItemHighlighted(data:Object):Boolean {
        return false;
    }

    public override function isItemSelected(data:Object):Boolean {
        return false;
    }

    protected override function drawItem(item:IListItemRenderer,
                                         selected:Boolean = false,
                                         highlighted:Boolean = false,
                                         caret:Boolean = false,
                                         transition:Boolean = false):void {
        super.drawItem(item, false, false, false, transition);
    }
}

Bonus - override isItemSelectable to prevent selection when clicking on an item (you can still select items via keyboard, although there will be no visual hint of that):

public override function isItemSelectable(data:Object):Boolean {
    return false;
}

You might be able to use 4-channel colors with jss's approach:

<mx:Tree rollOverColor="#00FFFFFF" selectionColor="#00FFFFFF"

...including the alpha channel (at full transparency) in the color choice.
http://ebanshi.cc/questions/845614/flex-4-mxtree-how-can-i-disable-hover-and-selection-colors
/*
 * Copyright 2004
 */
package org.apache.cocoon.portal.coplets.basket.events;

import org.apache.cocoon.portal.coplets.basket.Briefcase;

/**
 * Clean all briefcases or a single one
 *
 * @version CVS $Id: CleanBasketEvent.java 30941 2004-07-29 19:56:58Z vgritsenko $
 */
public class CleanBriefcaseEvent extends ContentStoreEvent {

    /**
     * Constructor.
     * If this constructor is used, all briefcases will be cleaned.
     */
    public CleanBriefcaseEvent() {
        // nothing to do here
    }

    /**
     * Constructor.
     * One briefcase will be cleaned.
     * @param briefcase The briefcase
     */
    public CleanBriefcaseEvent(Briefcase briefcase) {
        super(briefcase);
    }
}
http://kickjava.com/src/org/apache/cocoon/portal/coplets/basket/events/CleanBriefcaseEvent.java.htm
#include <xti.h>

int t_error(const char *errmsg);

The t_error() function produces a message on the standard error output which describes the last error encountered during a call to a transport function. The argument string errmsg is a user-supplied error message that gives context to the error. The error message is written as follows: first (if errmsg is not a null pointer and the character pointed to by errmsg is not the null character) the string pointed to by errmsg followed by a colon and a space; then a standard error message string for the current error defined in t_errno. If t_errno has a value different from TSYSERR, the standard error message string is followed by a newline character. If, however, t_errno is equal to TSYSERR, the t_errno string is followed by the standard error message string for the current error defined in errno, followed by a newline.

The language for error message strings written by t_error() is that of the current locale. If it is English, the error message string describing the value in t_errno may be derived from the comments following the t_errno codes defined in xti.h. The contents of the error message strings describing the value in errno are the same as those returned by the strerror(3C) function with an argument of errno.

The error number, t_errno, is only set when an error occurs and it is not cleared on successful calls.

If a t_connect(3NSL) call fails on transport endpoint fd2 because a bad address was given, the following call might follow the failure:

    t_error("t_connect failed on fd2");

The diagnostic message to be printed would look like:

    t_connect failed on fd2: incorrect addr format

where "incorrect addr format" identifies the specific error that occurred, and "t_connect failed on fd2" tells the user which function failed on which transport endpoint.

Upon completion, a value of 0 is returned.
All - apart from T_UNINIT.

No errors are defined for the t_error() function.

An error that can be set by the XTI interface and cannot be set by the TLI interface is: TPROTO.

See attributes(5) for descriptions of the following attributes:

t_errno(3NSL), strerror(3C), attributes(5)
http://backdrift.org/man/SunOS-5.10/man3nsl/t_error.3nsl.html
Dec 16 – Dec 25 Hot-Fix Weekly Release – Windows Server

Chris Mu, December 25, 2007

Windows Server 2003

945778 Error message when you install a network adapter driver on a Windows Server 2003 64-bit-version-based computer that has more than 32 processors: "Stop 0x000000D1"
945463 Windows Server 2003 does not issue security event 523 even though the WarningLevel registry entry is configured
945449 Output of remote shell (rsh) commands is not redirected to a local file when you run the commands against a remote shell server that is hosted in SUA on a Windows Server 2003 R2-based computer
945410 A Windows Server 2003-based computer may stop responding when many connections are created and then disconnected if IPsec is configured
945344 The "Serial number" attribute of a self-signed certificate has a negative value when you call the CertCreateSelfSignCertificate function to create the certificate in Windows Server 2003
945330 An application that uses the CDOSYS library or the System.Web.Mail namespace in a non-English version of Windows Server 2003 Service Pack 2 may receive a corrupted error message
945119 Stop error that is related to the Storport.sys driver on a Windows Server 2003-based computer: "0x000000D1 (parameter1, parameter2, parameter3, parameter4) DRIVER_IRQL_NOT_LESS_OR_EQUAL"
945050 The private bytes that the DFS service consumes continue to increase on a Windows Server 2003-based domain controller that hosts the PDC emulator role
944971 A Cluster node may lose an MNS quorum, and the Cluster service may stop on a Windows Server 2003, Enterprise Edition-based computer
944195 Error message when you use a mobile device to access an ASP.NET Web site that is hosted in IIS 6.0: "HTTP 400 – Bad Request"
943875 Authorization Manager in Windows Server 2003 cannot add roles from other domains in the forest after security update 926122 or Windows Server 2003 Service Pack 2 is installed
943831 An application stops responding in Windows Server 2003 R2 when the application calls the getpriority function or the setpriority function by using a PRIO_PGRP value for the first parameter
943830 SUA uses the default primary domain name instead of the cached primary domain name in Windows Server 2003 R2
943809 Event 1164 is logged when a disk or a mount point is mounted in a Windows Server 2003 cluster that uses a Majority Node Set (MNS) quorum
942004 A Windows Server 2003 x64 Edition-based computer does not automatically restart as expected after the .crash command is executed to generate a dump file
938863 Name resolution may fail on a Windows Server 2003 DNS server if conditional forwarding is configured and if records have different TTL values
937278 Error message when multiple processes access a file that is in an NFS shared folder on a Windows Server 2003 R2-based computer that has Client for NFS installed: "STOP 0x00000027"

Services for UNIX 3.5

944412 FIX: Error message on a system that has Windows Services for UNIX 3.5 installed: "Segmentation fault"

Systems Management Server 2003

942212 The SMS_Executive service process for Systems Management Server 2003 stops repeatedly
https://blogs.technet.microsoft.com/hot/2007/12/25/dec-16-dec-25-hot-fix-weekly-release-windows-server/
Working with secure ArcGIS services

In this topic

Layers and tasks in ArcGIS Runtime SDK for Android communicate with ArcGIS for Server web services. These services can be secured to permit access to only authorized users. An ArcGIS for Server instance can use one of two authentication modes: token-based authentication or HTTP (including Windows) authentication. Both modes are supported by the runtime. If you're the administrator of an ArcGIS for Server system and want to restrict access to your ArcGIS web services, information is available in Securing your ArcGIS Server site (Windows) and Securing your ArcGIS Server site (Linux) in the ArcGIS for Server online help.

Token-based authentication

Services secured with token-based authentication require that a token be included in each request for a map, query, and so on. To use a secure service through the API, you need to know the credentials (username and password) or tokens to access the service. The server administrator can provide this information. Once you know the credentials or tokens to use, you need to pass them to the layer or the task through a UserCredentials object in the Java code, or hardcode the tokens in the layout XML by including a long-term token directly in the URL. The following code examples demonstrate several different ways to access secure services.

Instantiate the layer or task with credentials through the UserCredentials object in Java code as shown in the following example:

UserCredentials creds = new UserCredentials();
creds.setUserAccount("username", "password");
// The following line can be omitted if the default token service is used.
creds.setTokenServiceUrl("");
ArcGISDynamicMapServiceLayer layer = new ArcGISDynamicMapServiceLayer("", null, creds);

Alternatively, instead of passing in the username and password, you can pass in a token obtained from the corresponding token service as shown in the following example:

UserCredentials creds = new UserCredentials();
creds.setUserToken("token", "referer");
Locator al = new Locator("", creds);

The API will automatically try to discover the URL of the token service where tokens can be acquired. If you know this information in advance, you can provide it to the UserCredentials object as shown in the previous example so that it does not make any unnecessary network requests to discover the same information.

Using self-signed certificates

To safeguard content exchanged over the network, HTTPS should be used whenever supported by the service. HTTPS connections use Secure Sockets Layer (SSL) to encrypt information that is exchanged over the network and digital certificates to verify the identities of the parties involved. The ArcGIS API for Android supports certificates issued by a trusted certificate authority as well as self-signed certificates. If you want to trust a server that is using a self-signed certificate, you can choose one of two workflows supported by ArcGIS API for Android based on your business requirement.

Workflow 1

You can trust a certain server using self-signed certificates by importing the server's certificate and pre-loading the trusted certificate on the client side. This workflow enables server authentication when supplied with a truststore file containing one or several trusted certificates.
See the following code example:

// Load self-signed certificate
KeyStore keyStore = KeyStore.getInstance("BKS");
InputStream is = this.getResources().openRawResource(R.raw.xxx);
keyStore.load(is, "xxx".toCharArray());

// Populate security info
UserCredentials creds = new UserCredentials();
creds.setUserAccount("username", "password");
UserCredentials.setTrustStore(keyStore);
ArcGISFeatureLayer layer = new ArcGISFeatureLayer("", MODE.ONDEMAND, creds);

Workflow 2

This workflow allows users to trust a server using self-signed certificates without pre-loading the server certificate. To do so, create an instance of OnSelfSignedCertificateListener and register it to SelfSignedCertificateHandler, which is a final class. OnSelfSignedCertificateListener will be fired when the server is using a self-signed certificate. Return true from OnSelfSignedCertificateListener if you want to trust the server. The following code snippet shows this workflow:

import com.esri.core.io.SelfSignedCertificateHandler;
……

public boolean getUserPermission() {
    …
}

// Set listener to handle self-signed certificate
SelfSignedCertificateHandler.setOnSelfSignedCertificateListener(
    new OnSelfSignedCertificateListener() {
        public boolean checkServerTrusted(X509Certificate[] chain, String authType) {
            try {
                chain[0].checkValidity();
            } catch (Exception e) {
                return getUserPermission();
            }
            return true;
        }
    });

HTTP/Windows authentication

When a request is made to a service secured with HTTP authentication (HTTP basic or HTTP digest), user credentials must be supplied to the service through the UserCredentials object. See the following code example:

UserCredentials creds = new UserCredentials();
creds.setUserAccount("xxx", "xxx");
Geoprocessor gp = new Geoprocessor("", creds);

Exception handling

When a layer fails to load to a map due to security, both the layer and the map will send out an EsriSecurityException error. Users can listen to the status changed events on MapView or Layer to handle the error.
You can listen to the LAYER_LOADING_FAILED on MapView or INITIALIZATION_FAILED on Layer and get the error detail through the getError() method on STATUS. A layer can be re-initialized with a given credential. If the layer is re-initialized successfully, it will be reloaded into the map automatically. The following is an example of exception handling through MapView:

MapView map;
ArcGISFeatureLayer fl;

public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    map = new MapView(this);
    String url = ...;
    fl = new ArcGISFeatureLayer(url, MODE.ONDEMAND);
    map.addLayer(fl);
    setContentView(map);
    ...
    // Handle status change event on MapView
    map.setOnStatusChangedListener(new OnStatusChangedListener() {
        private static final long serialVersionUID = 1L;

        public void onStatusChanged(Object source, STATUS status) {
            // Check if a layer failed to load due to security
            if (status == STATUS.LAYER_LOADING_FAILED) {
                if ((status.getError()) instanceof EsriSecurityException) {
                    EsriSecurityException securityEx = (EsriSecurityException) status.getError();
                    if (securityEx.getCode() == EsriSecurityException.AUTHENTICATION_FAILED)
                        Toast.makeText(map.getContext(), "Authentication Failed!", Toast.LENGTH_SHORT).show();
                    else if (securityEx.getCode() == EsriSecurityException.TOKEN_INVALID)
                        Toast.makeText(map.getContext(), "Invalid Token!", Toast.LENGTH_SHORT).show();
                    else if (securityEx.getCode() == EsriSecurityException.TOKEN_SERVICE_NOT_FOUND)
                        Toast.makeText(map.getContext(), "Token Service Not Found!", Toast.LENGTH_SHORT).show();
                    else if (securityEx.getCode() == EsriSecurityException.UNTRUSTED_SERVER_CERTIFICATE)
                        Toast.makeText(map.getContext(), "Untrusted Host!", Toast.LENGTH_SHORT).show();
                    if (source instanceof ArcGISFeatureLayer) {
                        // Set user credential through username and password
                        UserCredentials creds = new UserCredentials();
                        creds.setUserAccount("username", "password");
                        fl.reinitializeLayer(creds);
                    }
                }
            }
        }
    });
    ...
When an operational layer of a web map fails to load due to security, the map will send out an EsriSecurityException error. Users can listen to the WebMap loading events to handle the error as shown in the following code example:

MapView map;

public void onCreate(Bundle savedInstanceState) {
    ...
    OnWebMapLoadListener listener = new OnWebMapLoadListener() {
        public MapLoadAction<UserCredentials> onWebMapLoadError(MapView source, WebMap webmap,
                WebMapLayer wmlayer, Layer layer, Throwable error, UserCredentials credentials) {
            if (error instanceof EsriSecurityException) {
                UserCredentials creds;
                creds = new UserCredentials();
                creds.setUserAccount("username", "password");
                if (credentials != null && (credentials.getUserName() == creds.getUserName()
                        && credentials.getPassword() == creds.getPassword()))
                    return null;
                return new MapLoadAction<UserCredentials>(Action.CONTINUE_OPEN_WITH_THE_PARAMETER, creds);
            }
            return null;
        }

        public void onWebMapLayerAdd(MapView source, WebMap webmap, WebMapLayer wmlayer,
                Layer layer, UserCredentials credentials) {
            ...
        }
    };
    String url = ...;
    map = new MapView(this, url, "", "", "", listener);
    ...
}

Public key infrastructure

A public key infrastructure (PKI) is a set of policies and services used to.

Using ArcGIS for Server in a PKI

ArcGIS for Server will leverage the PKI solution in COTS web servers (IIS, WebLogic, WebSphere, and so on) through the use of the ArcGIS Web Adaptor. When a request is made for a resource on ArcGIS for Server, the web server will authenticate the user by validating the client certificate provided. The request (along with the user name) is then forwarded to ArcGIS for Server via the Web Adaptor. ArcGIS for Server verifies that the specified user has access to the requested resource before sending back the appropriate response.

Learn how to enable SSL using a new CA-signed certificate

Accessing PKI services

Accessing PKI services from an Android device is OS dependent.
You can install certificates on your device from Settings > Security > Install certificates from storage. At issue is the fact that you may not be able to access certificates installed in this manner from devices with v3.x (Honeycomb) or earlier. Version 4.0.x Ice Cream Sandwich (ICS) introduces a new KeyChain class to access these certificates. If you're using a pre-ICS device, you can access some of the installed certificates in a keystore located in the following locations:

- Pre ICS > /system/etc/security/cacerts.bks
- ICS or later > /system/etc/security/cacerts

You can read the certificate there; however, some devices don't list the one you just installed. In this case, you can access your certificate from anywhere on the device that is accessible by the API. To support all devices with API 2.2 and later, the KeyStore class is used, which maintains keys and their owners. See the following code example:

// Access PKI from an sdcard location
KeyStore keystore = KeyStore.getInstance("BKS");
InputStream certInputStream = new FileInputStream("/sdcard/cert/example.bks");
keystore.load(certInputStream, "web.adf".toCharArray());
UserCredentials.setTrustStore(keystore, "web.adf", keystore);
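Outside the ArcGIS API, the pre-loaded trust-store idea used throughout this topic can be sketched in plain JDK terms: a KeyStore of trusted certificates feeds a TrustManagerFactory, and the resulting SSLContext makes its trust decisions from that store instead of the system defaults. This is a minimal sketch under stated assumptions (the class name TrustStoreSketch and the empty in-memory store are illustrative, not part of the SDK):

```java
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TrustStoreSketch {
    // Build an SSLContext whose server-trust decisions come from the
    // supplied KeyStore rather than the platform's default CA set.
    public static SSLContext contextFrom(KeyStore trustStore) throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);                 // trust only what is in the store
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        // Empty in-memory store for the demo; a real app would load a
        // BKS/JKS/PKCS12 file holding the server's self-signed certificate.
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        SSLContext ctx = contextFrom(ks);
        System.out.println(ctx.getProtocol());
    }
}
```

Sockets or HTTPS connections created from this context will reject any server whose certificate chain is not anchored in the supplied store, which is exactly the behavior Workflow 1 describes.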
https://developers.arcgis.com/android/10-2/guide/working-with-secure-arcgis-services.htm
CC-MAIN-2017-09
en
refinedweb
java.lang.Object
  groovy.mock.interceptor.StubFor

class StubFor

Properties:
Class clazz
Demand demand
def expect
Ignore ignore
Map instanceExpectations
MockProxyMetaClass proxy

Constructors:
StubFor(Class clazz, boolean interceptConstruction = false)

Methods:
def ignore(Object filter, Closure filterBehavior = null)
  The filter object is matched using the normal Groovy isCase() semantics.
GroovyObject makeProxyInstance(def args, boolean isDelegate)
GroovyObject proxyDelegateInstance(def args = null)
GroovyObject proxyInstance(def args = null)
void use(Closure closure)
void use(GroovyObject obj, Closure closure)
void verify(GroovyObject obj)
void verify()
http://groovy.codehaus.org/gapi/groovy/mock/interceptor/StubFor.html
Formatted Performance Data Provider

[The Formatted Performance Data Provider, also known as the "Cooked Counter Provider," is no longer available for use as of Windows Vista. Instead, use the WMIPerfInst provider.]

The high-performance Formatted Performance Data provider supplies calculated ("cooked") performance counter data, such as the percentage of time a disk spends writing data. This provider supplies dynamic data to the WMI classes derived from Win32_PerfFormattedData. The difference between this provider and the Performance Counter provider is that the Performance Counter provider supplies raw data, while the Cooked Counter provider supplies performance data that appears exactly as it does in System Monitor.

The __Win32Provider instance name is "HiPerfCooker_v1". The WMI formatted class name for a counter object is of the form "Win32_PerfFormattedData_<service_name>_<object_name>". For example, the WMI class name that contains the logical disk counters is Win32_PerfFormattedData_PerfDisk_LogicalDisk. These classes are located in the root\cimv2 namespace.

The Win32_PerfFormattedData classes use the CookingType qualifier in WMI Performance Counter Types to specify the formula for calculating performance data. This qualifier is the same as the CounterType qualifier in the Win32_PerfRawData classes. As a high-performance provider, the Cooked Counter provider is accessed through a refresher object, which samples the raw counters over an interval to produce the formatted values.
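For illustration (this query is not part of the original page), the formatted disk class named above can be queried with a WQL statement in the root\cimv2 namespace; PercentDiskWriteTime is one of that class's standard properties:

```sql
SELECT Name, PercentDiskWriteTime
FROM Win32_PerfFormattedData_PerfDisk_LogicalDisk
```

Note that because this is a high-performance class, a one-shot query like this may return stale or zeroed values; a refresher that samples the class repeatedly is the intended access path.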
http://msdn.microsoft.com/en-us/library/aa390431(v=vs.85).aspx
Hi lists, (I hope my cross-posting is okay, but somehow this post seems to apply to all of you, so here goes...) I've recently noticed that folks at GHC HQ are working on a way to resolve the problem of importing two modules with the same name from different packages. There is a proposal[1] on the GHC wiki calling for a syntax extension for 'import' statements in Haskell modules so that the package (and version) to import from can be specified explicitly. There is a second ("extended") proposal[2], which calls for the ability to import (subtrees of) the module hierarchy exposed by a package and attach it to the global module namespace at an arbitrary point, analogous to mounting a filesystem in Unix. This proposal was apparently inspired by a post to the libraries mailing list by Frederik Eaton[3]. I agree with Frederik that it would be very nice to remove the burden of writing out long package or hierarchy prefixes in modules, and just work in some previously defined context. I'd like to propose yet another alternative to the existing two proposals that follows [2] in trying to satisfy [3] but differs from it in the following ways: - It doesn't require an extension of the existing Haskell syntax. - It can be implemented by extending Cabal alone (AFAICT). While I'm at it, I had to evaluate how this proposal would interact with the "ECT" module versioning scheme I've proposed earlier[4]. I'd like to rework that scheme to fit this proposal. In ECT, a library author guarantees to his users that their imports will never break by providing different (numbered) variants of the library modules whenever their interface changes. By keeping the old variants as re-exports of updated versions, the author can record compatibilities. This carries the burden for users of annotating each import with a version number. If we lift the principles of ECT to the package level in accord with this proposal, that burden largely evaporates. 
Keeping the promise of "eternal backwards-compatibility", however, requires an (obvious) extension to the way Cabal deals with version numbers... First of all, package "mounts" must be able to say "this version, or any compatible one". Obviously, that's actually always what one wants, so it can be taken to be the only meaning of specifying "mount foo-1.3 at Foo.Bar". Then the question is just when to consider another version of foo compatible to 1.3. Obviously, only later versions should be considered, but when does compatibility end? Only the author of foo knows! So either (1) the author records compatibility explicitly in the package description, or (2) a fixed version-numbering convention encodes it, e.g. the major version number changes exactly when backwards compatibility is broken. I'd personally favour 2, as it would be easier to maintain for the author of foo (no compatibility field in the package description to update all the time) and also to implement in the build system (just a simple version comparison instead of a relation traversal to check a dependency). As a (more conservative?) variant of alternative 2, the third (instead of second) version component could be declared as "stays compatible", i.e. if only the third (or later) version component changes, the new version is assumed backwards-compatible; otherwise it's assumed incompatible. This would allow frequent incompatible updates without quickly getting large major version numbers... As for wasting space by keeping old versions around: of course, if the new one is backwards-compatible, any old version can be removed. Otherwise, if it's "half-way" compatible, the author of foo can release a new backwards-compatible revision of the old version that reimplements (part of) the functionality in terms of the new operations, so the old version can be replaced by this leaner revision. Comments? Best regards, Sven [1]: [2]: [3]: [4]:
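To make the two numbering conventions concrete, here is a small Haskell sketch (mine, not from the post or the linked proposals; the function name and list representation of versions are made up):

```haskell
-- Version components as a list, e.g. foo-1.3.2 is [1,3,2].
type Version = [Int]

-- compatLen is how many leading version components must match for two
-- versions to count as compatible: 1 models alternative 2 above (only a
-- major-number change breaks compatibility), 2 models the more
-- conservative variant (a change in the second component breaks it too).
-- An installed version satisfies a requested "mount" iff the compatible
-- prefix matches and the installed version is not older; list comparison
-- in Haskell is lexicographic, which matches version ordering here.
satisfiesMount :: Int -> Version -> Version -> Bool
satisfiesMount compatLen requested installed =
     take compatLen requested == take compatLen installed
  && installed >= requested

main :: IO ()
main = do
  -- "mount foo-1.3" under the conservative variant:
  print (satisfiesMount 2 [1,3] [1,3,2])  -- foo-1.3.2 is accepted
  print (satisfiesMount 2 [1,3] [1,4])    -- foo-1.4 is rejected
  -- the same request under plain alternative 2:
  print (satisfiesMount 1 [1,3] [1,4])    -- foo-1.4 is accepted
```

The build-system check the post mentions is then indeed "just a simple version comparison", with no compatibility relation to traverse.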
http://www.haskell.org/pipermail/libraries/2006-July/005546.html