There's a couple of things wrong with this and I have no clue where to start. Like the title says, it has to be a really simple mortgage calculator, no GUI or any other inputs, not yet at least. My prof messaged me back saying I need:

1. import goes before public class
2. DecimalFormat num = new DecimalFormat("$,###.00"); is a declaration, so it goes with the other declarations at the top of your program
3. When you want to print out your monthly payment, you use num.format(mp) to get it to be in the right format

Other than those three things, nothing else should need to be added, only corrected, to get the right answer, which is 665.30 BTW. Any help would be greatly appreciated.

```java
//lab 2
//File: MortgagePayment.java
//Programmer: Edward MacConnell

import java.text.*; //for DecimalFormat class

public class MortgagePayment {
    public static void main(String[] args) {
        double mp,  //mortgage payment per month
               p;   //total amount owed
        double I;   //interest
        int T;      //time for the mortgage

        p = 100000; //assign value to total amount owed
        I = 0.07;   //assign value to interest
        T = 30;     //assign value to time of the mortgage

        mp = p * ((I / 12.0) / (1 - (1 / (1 + I)) * (T * 12))); //compute mortgage payment

        System.out.println("mp is " + mp); //output monthly mortgage payment (mp) using num format
    }
}
```
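One thing the professor's three points don't mention is the formula itself: the posted denominator `1-(1/(1+I))*(T*12)` doesn't match the standard fixed-rate amortization formula, which divides the annual rate by 12 and raises to the power of the number of payments. As a quick sanity check (in Python, since it's easy to run anywhere — the function name here is just for illustration), the standard formula does reproduce the expected 665.30:

```python
# Standard fixed-rate amortization formula:
#   payment = p * r / (1 - (1 + r)^-n)
# where r is the *monthly* rate and n the number of monthly payments.
def monthly_payment(p, annual_rate, years):
    r = annual_rate / 12.0  # monthly interest rate
    n = years * 12          # total number of monthly payments
    return p * r / (1 - (1 + r) ** -n)

print(round(monthly_payment(100000, 0.07, 30), 2))  # prints 665.3
```

Translating that denominator back into the Java expression, `1 - Math.pow(1 + I / 12.0, -(T * 12))`, should give the expected answer.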
https://www.daniweb.com/programming/software-development/threads/491731/simple-mortgage-payment-calculator
Hi, greetings!! I was working on offsetting the CAD profile below. I need a solution that is fully automated in Python. @Michael_Pryor gave a solution using the food4rhino Clipper library that works very well for me, but it's a Rhino command, and I wanted it as Python code. I coded it with rs.Command, but there is much complication — please help me get an answer. This is the code I have used:

```python
import rhinoscriptsyntax as rs

def offset(curve, length = 5, direction = "Inside"):
    ##ProjectTo = FitToCurve
    rs.Command("_OffsetPolyline " + "_selid " + str(curve) + " _Enter" +
               " Distance" + " _Enter" + str(length) +
               " Side" + " _Enter" + str(direction) +
               " _Enter" + " _Enter")

curve = rs.GetObject("select curve", rs.filter.curve)
print offset(curve)
```

Please find the attached CAD file below: find_corner_points.stp (9.9 KB)
Previous forum post: Offset Problem
Thanks in advance
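One source of complication with `rs.Command` is that the whole macro is assembled inline with string concatenation, which makes the spacing and `_Enter` placement hard to inspect. A small helper (the function name is mine, and the option tokens below are simply copied from the post — verify them against your Rhino version's `OffsetPolyline` prompts) keeps the macro-building logic testable without Rhino:

```python
def build_offset_macro(curve_id, length=5, direction="Inside"):
    """Assemble the _OffsetPolyline command macro as a single string.

    The token sequence mirrors the snippet in the post; building it
    separately lets you print and check the macro before passing it
    to rs.Command().
    """
    parts = ["_OffsetPolyline", "_selid", str(curve_id), "_Enter",
             "Distance", "_Enter", str(length),
             "Side", "_Enter", str(direction), "_Enter", "_Enter"]
    return " ".join(parts)

# In Rhino you would then call: rs.Command(build_offset_macro(curve))
```

Printing the macro and pasting it into the Rhino command line by hand is a quick way to see which prompt each token actually answers.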
https://discourse.mcneel.com/t/rs-command-for-offsetting-polyline/78957
Support for market quote data is indispensable when researching, designing, and backtesting trading strategies. It is not realistic to collect all the data from every market; the amount of data is simply too large. For the digital currency market, the FMZ platform supports backtest data for a limited set of exchanges and trading pairs. If you want to backtest an exchange or trading pair that isn't yet supported by the FMZ platform, you can use a custom data source for the backtest, but that presupposes that you have the data. Therefore, there is a real need for a market quote collection program whose data can be persisted and, ideally, obtained in real time. This solves several needs at once, and many more besides.

We plan to use Python to achieve this. Why? Because it's very convenient.

Python's pymongo library

Because we need a database for persistent storage, and the database of choice is MongoDB, the collection program written in Python requires this database's driver library. Just install pymongo for Python.

Install MongoDB on the hosting device

Installing MongoDB on macOS is much the same as installing it on Windows, and there are many tutorials online. Taking a macOS installation as an example:

Download
Download link: (see the MongoDB download page)

Unzip
After downloading, unzip it to the directory /usr/local.

Configure environment variables
In a terminal, run open -e .bash_profile and, once the file is open, add:

export PATH=${PATH}:/usr/local/MongoDB/bin

After saving, run source .bash_profile in the terminal to make the change take effect.

Manually configure the database file directory and log directory
Create the corresponding folder at /usr/local/data/db.
Create the corresponding folder at /usr/local/data/logs.
Edit the configuration file mongo.conf:

```
#bind_ip_all = true          # any computer can connect
bind_ip = 127.0.0.1          # only the local computer can access
port = 27017                 # the instance runs on port 27017 (default)
dbpath = /usr/local/data/db  # data folder storage address (db must be created in advance)
logpath = /usr/local/data/logs/mongodb.log  # log file address
logappend = false            # whether to append to or rewrite the log file at startup
fork = false                 # whether to run in the background
auth = false                 # enable user verification
```

Start the service with the command ./mongod -f mongo.conf; to stop it cleanly, run:

```
use admin;
db.shutdownServer();
```

The collector operates as a Python robot strategy on the FMZ platform. I have implemented only a simple example to show the ideas of this article. Collector program code:

```python
import pymongo
import json

def main():
    Log("Test data collection")
    # (the database-connection and huobi_DB setup lines were garbled in this
    #  extract; dropName holds the name of the table to drop)
    tab = huobi_DB[dropName]
    Log("dropName:", dropName, "delete:", dropName)
    ret = tab.drop()
    collist = huobi_DB.list_collection_names()
    if dropName in collist:
        Log(dropName, "failed to delete")
    else:
        Log(dropName, "successfully deleted")

    # Create the records table
    huobi_DB_Records = huobi_DB["records"]

    # Request data
    preBarTime = 0
    index = 1
    while True:
        r = _C(exchange.GetRecords)
        if len(r) < 2:
            Sleep(1000)
            continue
        if preBarTime == 0:
            # Write all BAR data for the first time
            for i in range(len(r) - 1):
                # Write one by one
                bar = r[i]
                huobi_DB_Records.insert_one({"index": index, "High": bar["High"],
                    "Low": bar["Low"], "Open": bar["Open"], "Close": bar["Close"],
                    "Time": bar["Time"], "Volume": bar["Volume"]})
                index += 1
            preBarTime = r[-1]["Time"]
        elif preBarTime != r[-1]["Time"]:
            # A new bar has opened, so the previous one is finished
            bar = r[-2]
            huobi_DB_Records.insert_one({"index": index, "High": bar["High"],
                "Low": bar["Low"], "Open": bar["Open"], "Close": bar["Close"],
                "Time": bar["Time"], "Volume": bar["Volume"]})
            index += 1
            preBarTime = r[-1]["Time"]
        LogStatus(_D(), "preBarTime:", preBarTime, "_D(preBarTime):", _D(preBarTime/1000), "index:", index)
        Sleep(10000)
```

Full strategy address: (link omitted in this extract)

Create a strategy robot that uses data.
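The heart of the collector above is the `preBarTime` bookkeeping: the last element of `GetRecords()` is the still-forming bar, so only closed bars are persisted. That decision logic can be restated as a pure function (the function name is mine; this is just a sketch of the same rule, using plain dicts so it runs without the FMZ runtime):

```python
def closed_bars_to_write(records, pre_bar_time):
    """Return (bars_to_persist, new_pre_bar_time) for one poll of GetRecords().

    records is a list of bar dicts sorted by "Time"; the last element is
    the bar that is still forming, so it is never written.
    """
    if len(records) < 2:
        return [], pre_bar_time          # not enough data yet
    latest = records[-1]["Time"]
    if pre_bar_time == 0:
        return records[:-1], latest      # first run: persist every closed bar
    if pre_bar_time != latest:
        return [records[-2]], latest     # a new bar opened; the previous one closed
    return [], pre_bar_time              # nothing new since the last poll
```

Factoring the rule out this way makes it easy to unit-test the dedup behavior before wiring it to `insert_one`.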
Note: you need to check the "python PlotLine Template"; if you don't have it, you can copy one from the Strategy Square into your strategy library. Here is the code:

```python
import pymongo
import json

def main():
    Log("Test using database data")
    # (the database-connection and huobi_DB setup lines were garbled in this extract)

    # Query data and print it
    huobi_DB_Records = huobi_DB["records"]
    while True:
        arrRecords = []
        for x in huobi_DB_Records.find():
            bar = {
                "High": x["High"],
                "Low": x["Low"],
                "Close": x["Close"],
                "Open": x["Open"],
                "Time": x["Time"],
                "Volume": x["Volume"]
            }
            arrRecords.append(bar)
        # Use the line-drawing library to draw the obtained K-line data
        ext.PlotRecords(arrRecords, "K")
        LogStatus(_D(), "records length:", len(arrRecords))
        Sleep(10000)
```

As you can see, the strategy robot that uses the data does not access any exchange interface; the data is obtained entirely from the database. Note also that the market collector program does not record the current BAR: it only collects K-line BARs in the completed state. If real-time data for the current BAR is needed, the collector can be modified slightly; the current example code is just for demonstration. Finally, when this robot reads the records table, it fetches everything, so as the collection time grows, more and more data accumulates and full-table queries will increasingly affect performance. A better design queries only data that is newer than the data the robot already holds and appends it to the current data.

Running the docker program: on the device where the docker is located, run the MongoDB database service, then run the collector to collect market quotes for the BTC_USDT trading pair of the FMZ platform's WexApp simulation exchange.

WexApp address: (link omitted)
Robot A using database data, Robot B using database data, WexApp page: as you can see in the figure, robots with different IDs share K-line data from one data source. Relying on the powerful functions of the FMZ platform, we can easily collect K-line data at any cycle.
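The "query only newer data" idea mentioned above can be sketched as a small merge function (names are mine; plain dicts stand in for pymongo documents — with a real collection you would pass the result of `find({"Time": {"$gt": last_time}})` as `fetched`):

```python
def merge_new_bars(current, fetched):
    """Append to `current` only the bars whose Time is newer than the
    newest bar already held, avoiding a full re-read on every poll.

    Both arguments are lists of bar dicts sorted ascending by "Time".
    """
    last_time = current[-1]["Time"] if current else 0
    current.extend(b for b in fetched if b["Time"] > last_time)
    return current
```

The robot then keeps `arrRecords` across loop iterations and only merges in the tail, so query cost stays proportional to new data rather than total history.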
For example, I want to collect a 3-minute K-line. What if the exchange does not offer a 3-minute K-line? It does not matter; it can be achieved easily. We modify the configuration of the collector robot, setting the K-line period to 3 minutes, and the FMZ platform will automatically synthesize 3-minute K-lines for the collector program. We use the table-deletion parameter, set to ["records"], to delete the 1-minute K-line data table collected before, and prepare to collect 3-minute K-line data. Start the collector program, then restart the strategy robot that uses the data. On the K-line chart it draws, you can see that the interval between BARs is 3 minutes and each BAR is a K-line bar with a 3-minute period. In the next issue, we will try to implement the requirements of custom data sources. Thanks for reading!
https://www.fmz.com/digest-topic/5692
When Google first released their mysterious Fuchsia OS in 2016, it was claimed that it could run on virtually anything -- from a smartwatch to your car's dashboard entertainment system -- setting the stage for complete integration of Google products into any home IoT environment. Fast forward a couple of years, and Fuchsia hasn't yet achieved ubiquity among smart devices. But Flutter, Fuchsia's open source mobile app SDK, has certainly gained popularity as a way to build iOS and Android apps that look the same. Now Google claims thousands of apps on the Google Play and Apple App stores have been built using Flutter. And it's not hard to see why developers prefer Flutter. Its well-written documentation, near-native rendering performance via the Skia 2D engine, and support for hot reload make cross-platform development delightful. Since PDFTron's SDKs are cross-platform, we knew we had to release Flutter support. In this blog, we'll walk you through how to add a PDF and MS Office document viewer to a Flutter app via the PDFTron SDK. All of the code is available on our GitHub.

Adding a Document Viewer with PDFTron

You can open any PDF file, Office file, or image with a simple API call to PdftronFlutter.openDocument. First, add the PDFTron Flutter dependency to your project's dependency list:

```yaml
pdftron_flutter:
  git:
    url: git://github.com/PDFTron/pdftron-flutter.git
```

Then use the PDFTron openDocument API to view any document:

```dart
import 'package:flutter/material.dart';
import 'dart:async';
import 'package:flutter/services.dart';
import 'package:pdftron_flutter/pdftron_flutter.dart';

void main() => runApp(MyApp());

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  String _version = 'Unknown';

  @override
  void initState() {
    super.initState();
    initPlatformState();
    PdftronFlutter.openDocument("");
  }

  // Platform messages are asynchronous, so we initialize via an async method.
  Future<void> initPlatformState() async {
    String version;
    // Platform messages may fail, so we use a try/catch PlatformException.
    try {
      PdftronFlutter.initialize();
      version = await PdftronFlutter.version;
    } on PlatformException {
      version = 'Failed to get platform version.';
    }

    // If the widget was removed from the tree while the asynchronous platform
    // message was in flight, we want to discard the reply rather than calling
    // setState to update our non-existent appearance.
    if (!mounted) return;

    setState(() {
      _version = version;
    });
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('PDFTron flutter app'),
        ),
        body: Center(
          child: Text('Running on: $_version\n'),
        ),
      ),
    );
  }
}
```

That's it! You now have a fully featured document viewer for your cross-platform Flutter application. PDFTron SDK for mobile comes with many out-of-the-box controls and tools that can be wrapped in Flutter. If you are interested in seeing its full capabilities, you can visit our Flutter PDF library documentation section to get started. If you have any questions, please feel free to contact us, or if you are interested in any other features, you can submit a feature request. You are also very welcome to improve our open source Flutter wrapper; please submit a pull request if you are interested. Stay tuned for future improvements to our PDFTron Flutter wrapper! To get started, you can clone PDFTron's open source Flutter wrapper.

Conclusion

Flutter is an exciting new platform and definitely worth looking into for your next cross-platform app. I am looking forward to seeing what apps you come up with. Please feel free to message me and share your creations! You can find me on Medium. If you have any questions about PDFTron's PDF SDK, please feel free to get in touch! You can find the source code for this blog post on GitHub.
https://www.pdftron.com/blog/flutter/build-a-document-viewer-in-flutter/
Today we’ll be talking about microservices in Java. While it’s true that Java EE has a robust platform for writing, deploying, and managing enterprise-level microservices, in this article I will create a RESTful microservice that is as slim as possible. Don’t worry – we won’t be reinventing the wheel by marshaling our own data or anything. We’ll be using JBoss’ RESTEasy to take care of that! The objective in keeping things lightweight is to show how truly simple it can be to establish a RESTful interface in front of a new or existing Java microservice. At the same time, I’ll illustrate the flexibility of such a service by supporting multiple media types, JSON and XML, and deploying it on Apache Tomcat rather than JBoss Enterprise Application Platform (EAP). Every tool has its place, but I think it’s helpful to explore technologies through the lens of the KISS principle first, then decide what kind of additional architectural features should be pursued depending on the long-term objectives and requirements of the software. The code example in this article is available on GitHub, with “starter” and “final” branches.

The following describes my environment, although your mileage may vary:

- Java Development Kit (JDK) 1.8.0_131 (amd64)
- Apache Tomcat 9
- Apache Maven 3.5.0
- Eclipse Java EE IDE 4.7.0 (Oxygen)
- Linux Mint 18.2 (Sonya) 64-bit

Technically speaking…

A microservice is a small, concise service whose objective is to “do one thing well”. It’s quite common to interact with microservices via some kind of interface. If that interface is accessible via the web (using HTTP) then it is a web service. Some web services are RESTful and others are not. It’s worth noting that not all microservices are web services, not all web services are RESTful, and not all RESTful web services are microservices!

REST and XML… together?
If you’ve never encountered a RESTful web service that delivers content using one of the many media types other than JSON, you might think that these two things don’t belong together. But recall that REST is an architectural style for defining APIs, and that the popularity of REST and JSON happened to grow in parallel (not coincidentally, mind you). RESTful web services that accept and provide XML can be extremely useful for organizations who already have interconnected systems relying on that type of content, or for consumers who simply have more experience with XML. Of course, JSON would normally be the first choice because the message bodies are smaller, but sometimes XML is just an easier “sell”. Having a RESTful microservice that can do both is even better; not only is it concise and scalable from a deployment standpoint, but it’s also flexible enough to support different kinds of content to applications who wish to consume it. Why RESTEasy? RESTEasy is a framework by JBoss to help you build RESTful web services. 
With RESTEasy, it’s possible to build a RESTful web service that serves up both XML and JSON by depending on just four libraries:

- resteasy-jaxrs, which implements JAX-RS 2.0 (Java API for RESTful Web Services)
- resteasy-jaxb-provider, whose JAXB binding helps us support XML
- resteasy-jettison-provider, which uses Jettison to convert XML to JSON
- resteasy-servlet-initializer, for deploying to a Servlet 3.0 container (on Tomcat)

To start, we create a web service project with a pom.xml that looks something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.lyndseypadget</groupId>
  <artifactId>resteasy</artifactId>
  <packaging>war</packaging>
  <version>0.0.1-SNAPSHOT</version>
  <name>resteasy</name>
  <repositories>
    <repository>
      <id>org.jboss.resteasy</id>
      <url></url>
    </repository>
  </repositories>
  <dependencies>
    <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jaxrs</artifactId>
      <version>3.1.4.Final</version>
    </dependency>
    <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jaxb-provider</artifactId>
      <version>3.1.4.Final</version>
    </dependency>
    <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-jettison-provider</artifactId>
      <version>3.1.4.Final</version>
    </dependency>
    <dependency>
      <groupId>org.jboss.resteasy</groupId>
      <artifactId>resteasy-servlet-initializer</artifactId>
      <version>3.1.4.Final</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0.2</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
    </plugins>
    <finalName>resteasy</finalName>
  </build>
</project>
```

All together, these libraries come in at ~830 KB. Of course, these are our direct dependencies, and building the project with Maven will bring in a handful of transitive dependencies as well.
Going forward, I’ll be building this project in the “Maven way” (i.e. classes under src/main/java, using Maven build commands, etc), but you can also download the RESTEasy jars directly from the download page if you prefer not to use Maven. If you go this route, don’t be alarmed by this popup on the RESTEasy site: JBoss is simply trying to steer you down a more “enterprise” path. You can click “Continue Download” and be on your way. The project layout This service is going to be extremely simple to illustrate some basic concepts. You’ll need five classes, organized like this: FruitApplication is the entry point for the microservice. FruitService provides the main endpoint (/fruits), and it also serves as the router. Apple and Fruit are the models; Fruit has some abstract functionality and Apple will concretely extend it. As you can imagine, FruitComparator helps us compare fruits. If you’re unfamiliar with Java comparators, you can learn about object equality and comparison in this article, where I’m using Strings instead. While FruitComparator isn’t a model, I prefer to keep comparators close to the type of object it is intended to compare. 
The models

Let’s start with the Fruit class:

```java
package com.lyndseypadget.resteasy.model;

import javax.xml.bind.annotation.XmlElement;

public abstract class Fruit {
    private String id;
    private String variety;

    @XmlElement
    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    @XmlElement
    public String getVariety() {
        return variety;
    }

    public void setVariety(String variety) {
        this.variety = variety;
    }
}
```

And the Apple class that extends it:

```java
package com.lyndseypadget.resteasy.model;

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "apple")
public class Apple extends Fruit {
    private String color;

    @XmlElement
    public String getColor() {
        return color;
    }

    public void setColor(String color) {
        this.color = color;
    }
}
```

This isn’t particularly earth-shattering code here – it’s a simple example of Java inheritance. However, the important parts are the annotations @XmlElement and @XmlRootElement, which define what the XML apple structure will look like:

```xml
<apple>
  <id>1</id>
  <variety>Golden delicious</variety>
  <color>yellow</color>
</apple>
```

There’s also something else going on here that’s more subtle, since no constructor is explicitly provided: Java uses an implicit, no-arg default constructor. This no-arg constructor is actually necessary for the JAXB magic to work (this article explains why that is, and how you can work around it with XMLAdapter if necessary). Now we have our object, an apple, defined. It has three properties: id, variety and color.

The service

The FruitService class serves as the primary endpoint (/fruits) we’ll use to interact with the microservice. In this case, I’ve defined the first route, /fruits/apples, directly in this class using the @Path annotation. As your RESTful microservice grows, you’ll likely want to define each final endpoint (i.e. /apples, /bananas, /oranges) in its own class.
```java
package com.lyndseypadget.resteasy;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import com.lyndseypadget.resteasy.model.Apple;
import com.lyndseypadget.resteasy.model.FruitComparator;

@Path("/fruits")
public class FruitService {
    private static Map<String, Apple> apples = new TreeMap<String, Apple>();
    private static Comparator<Apple> comparator = new FruitComparator<Apple>();

    @GET
    @Path("/apples")
    @Produces({ MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
    public List<Apple> getApples() {
        List<Apple> retVal = new ArrayList<Apple>(apples.values());
        Collections.sort(retVal, comparator);
        return retVal;
    }
}
```

The apples map helps us keep track of our apples by id, thus simulating some kind of persistence layer. The getApples method returns the values of that map. The GET /apples route is defined with the @GET and @Path annotations, and it can produce content of media type XML or JSON. This method needs to return a List<Apple> object, and we use the comparator to sort that list by the variety property. The FruitComparator looks like this:

```java
package com.lyndseypadget.resteasy.model;

import java.util.Comparator;

public class FruitComparator<F extends Fruit> implements Comparator<F> {
    public int compare(F f1, F f2) {
        return f1.getVariety().compareTo(f2.getVariety());
    }
}
```

Note that if we wanted to sort by a property that is Apple-specific, such as color, we’d have to create a different implementation of Comparator instead, and name it something like AppleComparator.

The application

As of RESTEasy version 3.1.x, you’ll need to define a class that extends Application.
RESTEasy example documentation suggests making this a singleton registry, like so:

```java
package com.lyndseypadget.resteasy;

import javax.ws.rs.core.Application;
import java.util.HashSet;
import java.util.Set;

public class FruitApplication extends Application {
    HashSet<Object> singletons = new HashSet<Object>();

    public FruitApplication() {
        singletons.add(new FruitService());
    }

    @Override
    public Set<Class<?>> getClasses() {
        HashSet<Class<?>> set = new HashSet<Class<?>>();
        return set;
    }

    @Override
    public Set<Object> getSingletons() {
        return singletons;
    }
}
```

We won’t need to do much with this class for the purpose of this example, but we will need to wire it up in our web.xml file, described in the “A bit of web service wiring” section later.

Structuring collections of objects

As written, the GET /apples call will return data like this:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<collection>
  <apple>
    <id>1</id>
    <variety>Golden delicious</variety>
    <color>yellow</color>
  </apple>
</collection>
```

```json
[
  {
    "apple": {
      "id": 1,
      "variety": "Golden delicious",
      "color": "yellow"
    }
  }
]
```

However, it is possible to change the data to look a bit different, like this:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<apples>
  <apple>
    <id>1</id>
    <variety>Golden delicious</variety>
    <color>yellow</color>
  </apple>
</apples>
```

```json
{
  "apples": {
    "apple": {
      "id": 1,
      "variety": "Golden delicious",
      "color": "yellow"
    }
  }
}
```

The second option looks a bit nicer in XML, but affects the JSON in a potentially undesirable way.
If you prefer this structure, you can wrap the List<Apple> in its own type and modify the FruitService.getApples method to return this type:

```java
package com.lyndseypadget.resteasy.model;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlAccessorType(XmlAccessType.FIELD)
@XmlRootElement(name = "apples")
public class Apples {
    private static Comparator<Apple> comparator = new FruitComparator<Apple>();

    @XmlElement(name = "apple", type = Apple.class)
    private List<Apple> apples;

    public List<Apple> getApples() {
        Collections.sort(apples, comparator);
        return apples;
    }

    public void setApples(Collection<Apple> apples) {
        this.apples = new ArrayList<Apple>(apples);
    }
}
```

These annotations effectively “relabel” the root element, which is the collection/list. You can experiment with this and different XML Schema mapping annotations by reading the javadocs for javax.xml.bind.annotation. Of course, it is possible to write different methods – one for XML and one for JSON – if you can’t settle on a common method signature.

A bit of web service wiring

Since I’m deploying this service to Tomcat, I’ll need a web application deployment descriptor file at src/main/webapp/WEB-INF/web.xml. Its contents will look like the following:

```xml
<?xml version="1.0"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
  <display-name>resteasy</display-name>
  <context-param>
    <param-name>javax.ws.rs.core.Application</param-name>
    <param-value>com.lyndseypadget.resteasy.FruitApplication</param-value>
  </context-param>
  <context-param>
    <param-name>resteasy.servlet.mapping.prefix</param-name>
    <param-value>/v1</param-value>
  </context-param>
  <servlet>
    <servlet-name>Resteasy</servlet-name>
    <servlet-class>org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>Resteasy</servlet-name>
    <url-pattern>/v1/*</url-pattern>
  </servlet-mapping>
</web-app>
```

The servlet-name value indicates (you guessed it) the servlet (aka service) name: Resteasy. The servlet-mapping url-pattern (/v1/*) tells Tomcat to route incoming requests matching that pattern to our Resteasy service.
For more information about how to construct this file, as well as the different options available, check out Tomcat’s Application Developer documentation.

Build and deploy

From your project’s root directory, you can run the following to build the WAR (web application resource) file:

```
mvn clean install
```

This will create a new folder in that directory called target, containing the WAR file. While you can use Maven or other deployment-specific tools to deploy this file, I just use a simple copy command. As a reminder, each time you redeploy a WAR to Tomcat, you should first stop Tomcat and delete the service application folder (in this case, <tomcatDirectory>/webapps/resteasy) and the old WAR file (<tomcatDirectory>/webapps/resteasy.war).

```
[sudo] cp target/resteasy.war <tomcatDirectory>/webapps/resteasy.war
```

If Tomcat is already running, it will deploy the web service immediately. If it’s not, the service will be deployed the next time you start it. Then you’ll be able to reach the web service at http://<tomcatHost>:<tomcatPort>/resteasy/v1/fruits/apples.

Leveraging content negotiation to test the service

Content negotiation is the mechanism that makes it possible to serve different representations of a resource (a URI). At a basic level, this means that you can:

- specify the Accept header to indicate what kind of content you are willing to accept from the service, and/or
- specify the Content-Type header to indicate what kind of content you are sending to the service

For further information about what you can do with content negotiation and headers, see sections 12 and 14 of RFC 2616.
For the purpose of this example, all you really need to know is:

- the @Produces annotation indicates what kind of content the method is able to produce (this will attempt to match the Accept header on the request), and
- the @Consumes annotation indicates what kind of content the method is able to consume (this will attempt to match the Content-Type header of the request)

If you attempt to make an HTTP call to a valid endpoint but the content can’t be negotiated, you’ll get an error status: when no @Produces matches the Accept header, HTTP 406 (Not Acceptable); when no @Consumes matches the Content-Type, HTTP 415 (Unsupported Media Type).

GET calls that return common media types can actually be entered directly into the browser. In the case of GET /apples, you’ll get XML by default. It’s more helpful, though, to use a tool like Postman, explicitly specifying the Accept header as application/xml. Both of these return some valid yet underwhelming XML – namely, an empty list of apples. But here’s something cool: change the Accept header to application/json, and voilà! JSON just works.

Beyond the read operation

You’ll tend to find many examples of RESTful web services that are read-only, but some may not go further to show you how to handle create, update, and delete operations, too. While we have the skeleton of our web service in place right now, an empty list that we can’t change isn’t particularly useful. Let’s add some other methods so we can add and remove apples from the list.
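The matching rule between an @Produces list and an Accept header can be illustrated with a deliberately simplified sketch (in Python, to keep it short and runnable; real JAX-RS negotiation also weighs q-values and subtype wildcards like application/*, which this toy version ignores):

```python
def negotiate(offered, accept_header):
    """Pick the first offered media type that the Accept header allows.

    `offered` plays the role of an @Produces list; returning None is the
    situation where the server would answer with an error status instead
    of a body.
    """
    # Strip parameters such as ";q=0.9" and whitespace from each entry.
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in offered:
        if media_type in accepted or "*/*" in accepted:
            return media_type
    return None
```

For example, with `offered = ["application/xml", "application/json"]`, an Accept header of `application/json` selects JSON, `*/*` falls back to the first offered type, and `text/html` matches nothing.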
```java
package com.lyndseypadget.resteasy;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import com.lyndseypadget.resteasy.model.Apple;
import com.lyndseypadget.resteasy.model.FruitComparator;

@Path("/fruits")
public class FruitService {
    private static Comparator<Apple> comparator = new FruitComparator<Apple>();
    private static Map<String, Apple> apples = new TreeMap<String, Apple>();
    private static int appleCount = 0;

    @GET
    @Path("/apples")
    @Produces({ MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
    public List<Apple> getApples() {
        List<Apple> retVal = new ArrayList<Apple>(apples.values());
        Collections.sort(retVal, comparator);
        return retVal;
    }

    @GET
    @Path("/apples/{id}")
    @Produces({ MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
    public Response getApple(@PathParam("id") String id) {
        Apple found = apples.get(id);
        if (found == null) {
            return Response.status(404).build();
        }
        return Response.ok(found).build();
    }

    @DELETE
    @Path("/apples/{id}")
    public Response deleteApple(@PathParam("id") String id) {
        apples.remove(id);
        return Response.status(200).build();
    }

    @POST
    @Path("/apples")
    @Consumes({ MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
    public Response createApple(Apple apple) {
        String newId = Integer.toString(++appleCount);
        apple.setId(newId);
        apples.put(newId, apple);
        return Response.status(201).header("Location", newId).build();
    }
}
```

We’ve added the ability to:

- retrieve an apple by its id (returning 404 if it is not found in the map)
- delete an apple by its id
- create a new apple (returning 201 if successful)

These methods provide enough functionality to ensure that the service is working as intended. Implementing the ability to update an apple (using @PUT and/or @PATCH) – as well as more endpoints, logic, and persistence – is left as an exercise for the reader.
If we build and deploy again (see “Build and deploy” above if using the assumed Maven/Tomcat setup), we’ll see that we can now create, retrieve, and delete apples from our service. Calls can alternate between XML and JSON without any reconfiguration on the server:

- creating an apple with a Content-Type of application/json and a JSON body
- creating an apple with a Content-Type of application/xml and an XML body
- retrieving all apples in XML
- retrieving apple 2 by id, in JSON
- deleting apple 1 by id
- retrieving all apples in JSON

Conclusion

We’ve explored how RESTEasy can help you seamlessly support both XML and JSON in a Java web service. I also explained the technical differences between REST, media types, web services, and microservices, as there tends to be a lot of grey area between these terms. The example we built here is a bit contrived; I’ve never really needed to work with fruit data, but then again, I’ve never worked in the grocery industry! That said, I think it helps illustrate the right “size” for a microservice, as you can imagine how other microservices such as vegetables, canned goods, or seafood could collectively comprise a food distribution system. Food distribution in the real world is actually extremely complicated; a system attempting to model it would have to account for concepts such as sales, coupons, expiration dates, nutrition information, and so on. Of course, there are different ways you could slice this, but RESTEasy is a handy tool to have in your toolbox when you need to support multiple media types in a quick and lightweight fashion. Don’t forget to continually improve your Java application by writing better code with Stackify Prefix, the free dynamic code profiler, and Stackify Retrace, the only full lifecycle APM.
https://stackify.com/multiple-media-types-java-microservices-resteasy/
CC-MAIN-2020-40
en
refinedweb
Introduction to the Raspberry Pi
Lesson 18: Read a button with GPIOZERO

You now know how to "do" output. It's time to learn "input". A momentary button is an input device. It is essentially a sensor with only two possible states: "pressed" and "not pressed". In your circuit, you have connected the output of the button to GPIO14. Your Python program will check the voltage at GPIO14 to determine whether or not the button is pressed.

Unlike with the LED example, this time we'll go straight to writing a regular Python program. This is because we want Python to read the state of GPIO14 many times per second, so that as soon as we press the button, Python can print a message on the screen to let us know it has detected the press. We could also do this on the CLI, but it would be a clunky implementation which I prefer to avoid.

Use Vim to create a new Python program:

$ vim button_gpiozero.py

Copy this code into the Vim buffer:

from gpiozero import Button
import time

button = Button(14, None, True)

while True:
    if button.is_pressed:
        print("Button is pressed")
    else:
        print("Button is not pressed")
    time.sleep(0.1)

Save the buffer to disk, and quit Vim (":wq").

There are a few new elements in this code, so let's take a moment to learn. In the first line, you are importing the Button module, another member of the gpiozero library. In the second line you import the "time" module, so that you can insert a tiny delay later in your program. This is important because without this delay, your program would sample the button GPIO as fast as your Raspberry Pi can, leaving little time for anything else. We only want to read the state of a button, not totally dominate the CPU.

Line three is a little challenging because it contains three parameters. You can find detailed information about these parameters in the gpiozero documentation for the Button object. The first one is the GPIO number. Since the button is connected to GPIO14, you'll enter "14" in the first parameter.
The second parameter controls the type of pull-up/down resistor we are using. The Raspberry Pi can use internal pull-up resistors, but in our circuit we have provided an external pull-down resistor. This resistor ensures that the voltage at GPIO14 is equal to GND when the button is not pressed. For more on pull-up/down resistors, please have a look at this article. Because of the presence of the external pull-down, we use "None" as the value of the second parameter.

In the third parameter, we include the active state value. Because the voltage at GPIO14 is "HIGH" when the button is pressed and "LOW" when it is not pressed, we use the "True" value for this parameter. If the voltages were reversed (i.e. a pressed button created a "LOW" voltage, and a released button created "HIGH"), then for the third parameter we would write "False".

You have saved the configured Button object in the "button" variable, and then get into an infinite loop. In the loop, your program simply reads the state of the button and if it is pressed, it prints "Button is pressed" to the console. If it isn't pressed, it prints "Button is not pressed".

Run the program like this:

$ python3 button_gpiozero.py

Now press the button and see how the message on the console changes accordingly. It should look like this:

A button was just pressed.

It works!
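The polling loop above separates cleanly from the hardware, which makes the logic easy to test on any machine. Below is a sketch of the same read-act-sleep pattern against a stand-in object; the FakeButton class is invented for this example and is not part of gpiozero:

```python
import time

class FakeButton:
    """Hypothetical stand-in for gpiozero.Button, useful away from the Pi."""
    def __init__(self, presses):
        self._presses = iter(presses)

    @property
    def is_pressed(self):
        # Each read consumes one scripted state; default to "not pressed".
        return next(self._presses, False)

def poll(button, samples, delay=0.0):
    """Sample the button `samples` times and return the observed states."""
    states = []
    for _ in range(samples):
        states.append(bool(button.is_pressed))
        time.sleep(delay)  # same idea as time.sleep(0.1) in the real loop
    return states

print(poll(FakeButton([False, True, True]), 4))  # -> [False, True, True, False]
```

The same poll function would work unchanged with a real gpiozero Button, since it only relies on the is_pressed property.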
https://techexplorations.com/guides/rpi/begin/read-a-button-with-gpiozero/
CC-MAIN-2020-40
en
refinedweb
Provides Mail support to a running Grails application

44% of Grails users
Dependency: compile "org.grails.plugins:mail:1.0.7"

Installation

The Grails 3 Mail plugin is located here -- to install the mail plug-in just run the following command:

grails install-plugin mail

Description

Mail Plug-in

The mail plug-in provides e-mail sending capabilities to a Grails application by configuring a Spring MailSender based on sensible defaults. There is a screencast showing how to use the basic features of the plugin (v0.4).

Usage

The mail plug-in provides a MailService that can be used anywhere in your Grails application. The MailService provides a single method called sendMail that takes a closure. In addition the sendMail method is injected into all controllers to simplify access. An example of the sendMail method can be seen below:

sendMail {
    to "[email protected]"
    subject "Hello Fred"
    body 'How are you?'
}

Or if you're using the service directly you can use:

mailService.sendMail {
    to "[email protected]", "[email protected]"
    from "[email protected]"
    cc "[email protected]", "[email protected]"
    bcc "[email protected]"
    subject "Hello John"
    body 'this is some text'
}

To send HTML mail you can use the html method instead of the body method:
The plugin parameter is optional - if you need to render a template that may exist in a plugin installed in your application, you must include the name here. The model parameter is a map representing the model the GSP will see for rendering data.In this case the content type will be auto-sensed - use the GSP page contentType directive to set the content-type to use in the e-mail. The default is text/plain so you must include this at the top of the GSP for a HTML email: sendMail { to "[email protected]" subject "Hello John" body( view:"/emailconfirmation/mail/confirmationRequest", plugin:"email-confirmation", model:[fromAddress:'[email protected]']) } Note however that due to a limitation of the underlying Spring APIs used, XHTML content type text/xhtml will not result in a correct XHTML email.This plugin also accepts emails in the form "Bill Gates<[email protected]>". This allows you to specify the sender display name in mail clients. <%@ page contentType="text/html"%> Multiple recipientsYou can send mail to multiple recipients (in either of 'to', 'cc' or 'bcc') at once. There is a pitfall when using a sendMail { to "[email protected]","[email protected]" subject "Hello to mutliple recipients" body "Hello Fred! Hello Ginger!" } Listfor storing the recipients. You'll have to invoke toArraywhen providing it to the builder, like this: If you forget the call to sendMail { to issue.watchers.email.toArray() subject "The issue you watch has been updated" body "Hello Watcher!" } toArray, Groovy will convert the list (even a list with a single entry) to a String(the same way it does on the interactive console). The result will be something that is not a valid email address and you'll face javax.mail.internet.AddressException. AttachmentsSince version 0.9 attachment support has been improved. It is possible to have both, email body and multiple attachments. 
In order to activate multipart support, the 'multipart true' must be the first element in the closure passed to the sendMail method, e.g.: See also GRAILSPLUGINS-1175. sendMail { multipart true to issue.watchers.email.toArray() subject "The issue you watch has been updated" body "Hello Watcher!" attachBytes "Some-File-Name.xml", "text/xml", contentOrder.getBytes("UTF-8") //To get started quickly, try the following //attachBytes './web-app/images/grails_logo.jpg','image/jpg', new File('./web-app/images/grails_logo.jpg').readBytes() } AsynchronousThe plugin can send mail asynchronously (the mail is sent on a different thread, and the sendMail message returns instantly, not waiting for the mail to be actually sent). In order to send asynchronously, 'async true' must be in the closure passed to the sendMail method, e.g.: sendMail { async true to "[email protected]" subject "Hello John" html g.render(template:"myMailTemplate") } ConfigurationBy default the plugin assumes an unsecured mail server configured at localhoston port 25. However you can change this via the grails-app/Config.groovyfile. For example here is how you would configure the default sender to send with a Gmail account: And the configuration for sending via a Hotmail/Live account: grails { mail { host = "smtp.gmail.com" port = 465 username = "[email protected]" Yahoo account: grails { mail { host = "smtp.live.com" port = 587 username = "[email protected]" password = "yourpassword" props = ["mail.smtp.starttls.enable":"true", "mail.smtp.port":"587"] } } If your mail session is provided via JNDI you can use the grails {" ] } } jndiNamesetting: You can also set the default "from" address to use for messages in Config using: grails.mail.jndiName = "myMailSession" This will be used if no "from" is supplied in a mail. 
You can also disable mail delivery completely in certain environments by [email protected]@: // Since 0.2-SNAPSHOT in SVN grails.mail.default.from="[email protected]" This is useful for certain environments where you don't want mails to be delivered such as during testing. Another useful setting often set per-enviroment is @[email protected] lets you override the email address mails are sent to and from: grails.mail.disabled=true You can also adjust the size of the asynchronous mail sending thread pool. This size is the maximum number of threads that can send mail concurently. By default the value is 5 - so there could be 5 emails being sent asynchronously at the same time (if any more are sent, they will be queued). Note that is only used when sending mail asynchronously. grails.mail.overrideAddress="[email protected]" grails.mail.poolSize=5 CloudFoundry configuration using SendgridYou must bind the sendgrid service to the application and run the Sendgrid initial configuration wizard by going to the Sendgrid Dashboard via CloudFoundry.Add these dependencies to BuildConfig.groovy Configuration to Config.groovy compile "org.springframework.cloud:spring-cloud-cloudfoundry-connector:1.1.1.RELEASE" compile "org.springframework.cloud:spring-cloud-spring-service-connector:1.1.1.RELEASE" def cloud try { cloud = Class.forName("org.springframework.cloud.CloudFactory").newInstance().cloud } catch(e) { } if(cloud) { // configure sendgrid SMTP sender grails { mail { def smtpServiceInfo = cloud.getServiceInfos().find { it.class.name == "org.springframework.cloud.service.common.SmtpServiceInfo" } host = "smtp.sendgrid.net" port = 587 username = smtpServiceInfo.userName password = smtpServiceInfo.password props = ["mail.smtp.starttls.enable":"true", "mail.smtp.port":"587] } } } TODO - Attachment support - Support for multiple parts - Support for MailSender artefact and sendMail(with: 'defaultSMTP') {..} for multiple smtp servers - Inline images - etc. 
Troubleshooting

- If an installation failed because of unresolved dependencies (like "javax.activation#activation;1.1: not found"), check if mavenCentral() is uncommented in your BuildConfig.groovy.
- If you face javax.mail.internet.AddressException errors referring to "Illegal address in string" or "Missing ']' in string", it could be that you tried to send to a List of recipients, which is not supported by the plugin. See 'Multiple recipients' above for information on how to deal with that.
- If you call sendMail in a service class rendering a GSP template, you may realize that the plugin renders the template as text/plain instead of html. To solve this issue you have to put <%@ page contentType="text/html" %> or <%@ page contentType="text/xhtml" %> in your GSP, without setting the encoding.
http://grails.org/plugin/mail?skipRedirect=true
CC-MAIN-2020-40
en
refinedweb
I have been searching for a solution to this problem for a few days without any joy, so any help would be fantastic! I have the DragRigidBody script attached to my Player, which is a child of my camera and is being moved forward, around curves etc. I would like to be able to drag the Player along the X and Y axes in his local space (e.g. so if he is facing a certain direction, his X axis will change). What's happening now is that he is moving along the X-axis in World Space, so basically left and right in World Space, regardless of his direction.

Some more about the game setup: it is a 3D shooter, where the camera is on 'rails' and moves forward around different shaped tubes using markers along the way, facing in the correct forward direction. The game is in third person, so the Player is in front of the camera and can move along the X and Y axes (Up/Down/Left/Right). He is a Rigidbody and so has his Z-position constrained, as well as his X and Z rotations. He moves around the 'track' the same way as the camera, but instead of doing the exact same path-following using the markers, I have made him a child of the camera to minimise the processing time, etc (the game is for iOS). To take care of his rotation (so that he's always facing the correct direction), I have a 'lookAt target' object ahead of him, which is also a child of the camera (so the Player is in-between the Camera and the lookAt target).

Please see the attached sketch (at bottom) to show the Player's movement along the X-axis as he moves around a curved section of the track. The current movement (at top-left) shows that he moves from left to right (basically horizontally) regardless of where along the curve the camera (and so he) is. The correct movement is shown at top-right, where his X-axis changes at different points along the curve. The lower two boxes just show the different perspectives and the path markers.
The slightly modified DragRigidBody script I am using (converted to C#) is as follows (I have left the commenting in for anyone that doesn't know how the script works - feel free to correct any mistakes); Thanks

using UnityEngine;
using System.Collections;

/* Note, this script is attached to the Player object in the scene.
 * It is used to allow the user to drag the Player around the screen using the mouse / finger on mobile devices.
 * It also clamps the values/positions of the X and Y where the Player can be dragged (the edges of the walls, etc),
 * as well as some other useful things.
 */
public class MovementController : MonoBehaviour
{
    public float Spring = 50.0F;
    public float Damper = 5.0F;
    public float Drag = 10.0F;
    public float AngularDrag = 5.0F;
    public float Distance = 0.2F;
    public bool AttachToCenterOfMass = false;

    private SpringJoint springJoint;

    //-----------------------------------------------------
    // Use this for initialization
    void Start()
    {
    }

    //-----------------------------------------------------
    // Update is called once per frame
    void Update()
    {
        // Make sure the user pressed the mouse down
        if (!Input.GetMouseButtonDown(0))
        {
            return;
        }

        var mainCamera = FindCamera();

        // Create a RaycastHit object called hit
        RaycastHit hit;

        // Cast a ray from the mouse point in Screen Space, storing the information in the hit variable
        // If we don't hit anything..
        if (!Physics.Raycast(mainCamera.ScreenPointToRay(Input.mousePosition), out hit, 100))
            // ignore
            return;

        // if we hit a non-rigidbody OR a kinematic object
        if (!hit.rigidbody || hit.rigidbody.isKinematic)
        {
            // ignore
            return;
        }
        // if a springJoint object does not exist..
        if (!springJoint)
        {
            // Create a Rigidbody dragger object
            GameObject go = new GameObject("Rigidbody dragger");

            // Create a Rigidbody and add the component to the dragger
            Rigidbody body = go.AddComponent("Rigidbody") as Rigidbody;

            // Add a SpringJoint component to the dragger
            springJoint = go.AddComponent("SpringJoint") as SpringJoint;

            // Set the Rigidbody to be kinematic (i.e. moved directly, not by the physics engine)
            body.isKinematic = true;
        }

        // Set the position of the springJoint object to be the position that was hit by the mouse
        springJoint.transform.position = hit.point;

        // If we checked this option (AttachToCenterOfMass)..
        if (AttachToCenterOfMass)
        {
            //** var anchor = transform.TransformDirection(hit.rigidbody.centerOfMass) + hit.rigidbody.transform.position;

            // Set the anchor point of the object that the script is attached to.
            // This is the centre of mass + its position
            var anchor = hit.rigidbody.centerOfMass + hit.rigidbody.transform.position;

            //** anchor = springJoint.transform.InverseTransformPoint(anchor);
            springJoint.anchor = anchor;
        }

        // if the collider that the ray hit is part of the 'Player' layer..
        if (hit.rigidbody.gameObject.layer == LayerMask.NameToLayer("Player"))
        {
            // Reset the anchor point
            springJoint.anchor = Vector3.zero;
        }
        // otherwise, do nothing
        else
            return;

        // Update the attributes of the springJoint object
        springJoint.spring = Spring;
        springJoint.damper = Damper;
        springJoint.maxDistance = Distance;
        springJoint.connectedBody = hit.rigidbody;

        // StartCoroutine ("DragObject", hit.distance);
        // Call the DragObject Coroutine, passing the distance that the hit variable has stored.
        StartCoroutine(DragObject(hit.distance));
    }

    //-----------------------------------------------------
    IEnumerator DragObject(float distance)
    {
        float oldDrag = springJoint.connectedBody.drag;
        float oldAngularDrag = springJoint.connectedBody.angularDrag;
        springJoint.connectedBody.drag = Drag;
        springJoint.connectedBody.angularDrag = AngularDrag;

        Camera mainCamera = FindCamera();

        // While the mouse button is pressed down..
        while (Input.GetMouseButton(0))
        {
            // Assign the position in World Space to the 'ray' variable.
            // Note, ScreenPointToRay takes a screen-space position in pixels (from (0,0) up to (pixelWidth, pixelHeight)).
            var ray = mainCamera.ScreenPointToRay(Input.mousePosition);

            // Assign the distance from the ray that was touched to the position of the springJoint.
            // Note, 'GetPoint' returns a point at distance units along the ray.
            springJoint.transform.position = ray.GetPoint(distance);

            // Wait for the end of that frame
            yield return new WaitForEndOfFrame();
        }

        // if the springJoint connectedBody object exists..
        if (springJoint.connectedBody)
        {
            // Update the attributes of the connectedBody object
            springJoint.connectedBody.drag = oldDrag;
            springJoint.connectedBody.angularDrag = oldAngularDrag;
            springJoint.connectedBody = null;
        }
    }

    //-----------------------------------------------------
    Camera FindCamera()
    {
        if (camera)
            return camera;
        else
            return Camera.main;
    }
}
Also if you use a perspective camera it doesn't move the object on a flat plane. The movement will happen on a curved (deformed) surface around the camera origin. That's because you use a ray that always have the same length and always go through the camera's origin. Because the ray starts at the near plane it's not a perfect arc, but almost. Maybe post a screenshot or paint a scribble of your situation. How the player can move / rotate how the camera is in relation to that and on which axis you want to move. edit I would say your problem is: The player is a rigidbody and is a child of another object The players movement constraint on z axis Rigidbodies are always moved in world space. You can't have it simulated in "local physics". The rigidbodies velocity angular velocity and position are always in worldspace. When you lock the movement on the z axis (the world z-axis of course) your player can't be move along the z axis by forces. It can still be moved on the z axis by "overriding" the physics and use the transform component. A Rigidbody that is a child of another moving object will behave strange. The Rigidbody itself calculates it's movement velocity based in FixedUpdate (the physics loop). Those calculations are, as already mentioned, in worldspace. Now when the parent object moves it transfers it's motion to it's child transforms. So the naturally physics based movement is overlayed with a non physics based offset. Since you don't actually use physics in your game (since you use a fix path), i wouldn't use a Rigidbody at all. I would suggest to use a Plane which position should be the center point on the path where the player should be, and the normal vector should be set to the opposite of the look direction of the player. So the Plane is the plane in which the player can move. Now just use Plane.Raycast to get the mouseposition on this plane. Do either a smoothed lerping to this point of just set the player to this point. 
Thanks, I'll add some more information shortly.

Any ideas guys, any suggestions at all? :)

Sorry, we have around 500 new questions and about 300 new users a day, so you can really get lost here ;) I'll edit my answer...

No problem Bunny, thanks for taking the time to explain that - appreciate it! :) Your suggestion sounds like it'll work really well for this purpose. I'll give it a try and let you know how it goes. Also, correct me if I'm wrong but using this method will also keep the Z-position of the Player relative to the camera (offset) constant too, right? Since ScreenToWorldPoint ignores the z-position (makes it 0). As I'm removing the Rigidbody from the Player am I right to assume that it can now be kept as a child of the Camera? Thanks

Sure, it can be the child of the camera. Well, it depends on how you use it ;) ScreenToWorldPoint doesn't ignore the z position. The z position you pass into the function is taken as the distance from the camera's origin in world units. I'm still not sure if your player "looks" in a different direction than the camera. If so, ScreenToWorldPoint would move the player relative to the camera and not relative to the player. That's why I suggested the Plane. It does quite the same thing, but projects the mouse position onto an arbitrary plane that isn't bound to the camera. The plane can be moved / rotated with the player object. Here's an example:

public Transform Player;

void Update()
{
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    Plane plane = new Plane(-Player.forward, Player.position);
    float dist;
    if (plane.Raycast(ray, out dist))
    {
        Vector3 pos = ray.GetPoint(dist);
        Player.position = pos;
    }
}

Since we rebuild the plane each time from the player's position, it might slowly drift. It might be better to use a fixed position relative to the camera, ideally the center point on the path where the player would be when not moved manually:
Transform cam = Camera.main.transform;
Plane plane = new Plane(-Player.forward, cam.position + cam.forward * DesiredPlayerDistanceFrom.
https://answers.unity.com/questions/359502/dragrigidbody-move-object-in-local-space-not-world.html?sort=oldest
CC-MAIN-2020-40
en
refinedweb
The goto statement is rarely used because it makes the program confusing, less readable and more complex. Also, when goto is used, the control flow of the program becomes hard to trace, which makes testing and debugging difficult.

C - goto statement

When a goto statement is encountered in a C program, the control jumps directly to the label mentioned in the goto statement.

Syntax of goto statement in C

goto label_name;
..
..
label_name: C-statements

Flow Diagram of goto

Example of goto statement

#include <stdio.h>

int main() {
    int sum = 0;
    for (int i = 0; i <= 10; i++) {
        sum = sum + i;
        if (i == 5) {
            goto addition;
        }
    }

addition:
    printf("%d", sum);
    return 0;
}

Output:

15

Explanation: In this example, we have a label addition, and when the value of i (inside the loop) is equal to 5 we jump to this label using goto. This is the reason the sum displayed is the sum of the numbers up to 5 only, even though the loop is set to run from 0 to 10.

my name is sudheer from starting stage learn c this beginners book is very helpfull. it is very easy to understand all thank you sir.

Before reading this i was so confused in goto statement. Now there's. Not a single doubt.. thanks

I have discovered (the hard way) that goto works only going forward in the code; it can't be used to loop back to previous code.
https://beginnersbook.com/2014/01/c-goto-statement/
CC-MAIN-2020-40
en
refinedweb
hi UI gurus, we have a simple requirement to display certain links in a dashboard. All is good until there are invalid (un-encoded) characters involved. Then if I use [[CDATA]], Splunk Simple XML takes the values as literal. Below is the dashboard we want to achieve:

<dashboard>
  <label>test HREF CDATA</label>
  <row>
    <html>
      <h1> HREF test with un-encoded characters </h1>
      <li>Lookup Links
        <ul>
          <li>My Lookup => <a href="../lookup_editor/owner=nobody&namespace=search&lookup=xyz.csv&type=csv">Link</a></li>
        </ul>
      </li>
    </html>
  </row>
</dashboard>

Since the href is "un-encoded", this complains. When we change the <a> link as below:

<![CDATA[<a href="../lookup_editor/owner=nobody&namespace=search&lookup=xyz.csv&type=csv">Link</a>]]>

then Splunk XML does not render the href properly. What's the best way to use "href" links properly in Simple XML?

PS: Even <link> doesn't like & in the URL; the example below has the same problem:

<link>
  ../lookup_editor/owner=nobody&namespace=search&lookup=xyz.csv&type=csv
</link>

how about $url|u$?

<drilldown>
  <set token="a_link">$click.value2$</set>
  <eval token="url">trim(mvindex(split($a_link$,"("),1),")")</eval>
  <link target="_blank"><![CDATA[ ]]></link>
</drilldown>

this works

Got the gist of the logic. Thanks again. If you want to put it as an answer, I will mark it as "answer". But in the above, where are we passing the "token" to the link URL?

If you write statically, you only have to convert this problem when writing it, right?
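As an aside that is not from the thread: Simple XML is ordinary XML, so instead of CDATA each ampersand in the attribute can be written as the entity &amp;amp;. The XML parser decodes the entities back to plain & when the link is rendered, and the document stays well-formed:

```xml
<a href="../lookup_editor/owner=nobody&amp;namespace=search&amp;lookup=xyz.csv&amp;type=csv">Link</a>
```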
https://community.splunk.com/t5/Dashboards-Visualizations/Splunk-Simple-XML-Invalid-character-entry-in-XML-help-with-CDATA/td-p/480283?sort=votes
CC-MAIN-2020-40
en
refinedweb
You can communicate with the service process from your app using e.g. osc or (a heavier option) twisted.

Service creation

There are two ways to have services included in your APK.

Service folder

This basic method works with both the new SDL2 and old Pygame bootstraps. The service is started from your app using the android module (included automatically with the Pygame bootstrap; you must add it to the requirements manually with SDL2 if you wish to use this method):

import android
android.start_service(title='service name',
                      description='service description',
                      arg='argument to service')

Arbitrary service scripts

Note: This service method is not supported by the Pygame bootstrap.

A service declared this way can be started from your app using pyjnius:

from jnius import autoclass

service = autoclass('your.package.name.ServiceMyservice')
mActivity = autoclass('org.kivy.android.PythonActivity').mActivity
argument = ''
service.start(mActivity, argument)

Here, your.package.name refers to the package identifier of your APK as set by the --package argument to python-for-android, and the name of the service is ServiceYourservicename, in which Yourservicename is the identifier passed to the --service argument with the first letter upper case. You must also pass the argument parameter even if (as here) it is an empty string. If you do pass it, the service can make use of this argument.
https://python-for-android.readthedocs.io/en/stable/services/
CC-MAIN-2022-27
en
refinedweb
In this week's lab, we will mostly ignore statistics and instead focus on some practical issues that you will encounter on Homework 4. Section 4 of that homework includes new python techniques (classes, inheritance), an unfamiliar approach to breaking up large computing problems (MapReduce), code that has to be run outside the friendly confines of an ipython notebook, and then you are asked to put it all to use on Amazon's Elastic Compute Cloud (EC2). This sounds very complicated, but the end result is a simpler algorithm for that problem of calculating similarity scores, as well as the ability to expand to arbitrarily large data sets.

On previous homeworks, nearly all of the coding has been done by writing python functions plus a small amount of code that calls the functions you have written. Included below is the code for the mrjob word_count example that was covered in lecture (the canonical MapReduce example). There are a lot of new features here!

Below is the code for a simple MapReduce algorithm to count the number of words in a text file. This is one of the simplest examples of a problem that can be solved using MapReduce (I even took it from the Section "Writing your first job" in the mrjob documentation). If you try to run the cell in this notebook, it will not work! We will get to running programs with mrjob soon, but for now it will just serve as reference for some topics we want to cover.

from mrjob.job import MRJob

class MRWordFrequencyCount(MRJob):

    def mapper(self, _, line):
        yield "chars", len(line)
        yield "words", len(line.split())
        yield "lines", 1

    def reducer(self, key, values):
        yield key, sum(values)

if __name__ == '__main__':
    MRWordFrequencyCount.run()
Instead of a list of arguments, the item in parentheses (MRJob) is a base class that our newly defined class will inherit most of its features from. Even though there is very little code written above for MRWordFrequencyCount, it knows how to do many complex operations (running a mapper and a reducer, submitting jobs to EC2, etc.) because it inherited these abilities from the base class. There are two methods, mapper and reducer, that have been written specifically for MRWordFrequencyCount. These methods are also defined for the MRJob base class, but the methods defined here supercede the inherted ones. A class method is similar to a function (as you might guess, since it is also defined with a def statement), but the first argument to a class method will always be self, a reference back to the object to which the method belongs. The always-present self argument allows the method to access other members of the same object (both data and methods). However, when you actually call a class method, you don't have to supply anything for the self argument -- it is implicit. For example, to call the reducer method defined above, you would use: # Call reducer method of MRWordFrequencyCount object using some key and values. MRWordFrequencyCount.reducer(my_key, my_values) # Did not specify 'self' argument The next mrjob example -- Writing your second job -- processes text to find the most commonly used word. That algorithm involves two MapReduce steps, so it is necessary to write a MRMostUsedWord.steps method to override the inherited method. Notice that the self is used repeatedly to specify the function references inside the list returned by the steps method. 
import re

from mrjob.job import MRJob

WORD_RE = re.compile(r"[\w']+")

class MRMostUsedWord(MRJob):

    def mapper_get_words(self, _, line):
        # yield each word in the line
        for word in WORD_RE.findall(line):
            yield (word.lower(), 1)

    def combiner_count_words(self, word, counts):
        # optimization: sum the words we've seen so far
        yield (word, sum(counts))

    def reducer_count_words(self, word, counts):
        # send all (num_occurrences, word) pairs to the same reducer
        yield None, (sum(counts), word)

    def reducer_find_max_word(self, _, word_count_pairs):
        # each item of word_count_pairs is (count, word),
        # so yielding the max results in the most common word
        yield max(word_count_pairs)

    def steps(self):
        return [
            self.mr(mapper=self.mapper_get_words,
                    combiner=self.combiner_count_words,
                    reducer=self.reducer_count_words),
            self.mr(reducer=self.reducer_find_max_word)
        ]

if __name__ == '__main__':
    MRMostUsedWord.run()

Generators are necessary to understand all of those yield statements popping up in the mapper and reducer methods. The main issue, in the case of industrial-strength MapReduce, is that you don't have enough memory to store all of your data at once. This is true even after you have split your data between many compute nodes. So instead of getting an enormous list of data, the mapper and reducer functions both receive and emit generators.

When you run a function, it chugs along until it hits a return statement, at which point it returns some results and then it is done. A generator does its specified calculations until it hits a yield statement. It passes along whatever values it was supposed to yield and then it pauses and waits for someone to tell it to continue. It continues until it reaches another yield, and so on.

Not only are mapper and reducer generators, their (key, value) inputs are also generators. This means that for each step of the mapper, it pulls in one (key, value) pair, does some processing, and then emits one or more (key, value) pairs, which move along to a combiner or a shuffler or whatever. This is how MapReduce avoids ever having to load huge datasets into limited memory.

A common stumbling block with generators is the fact that once you have iterated through an entire generator, it is done. You can see an example of this mistake by trying to run the code block below.

# This function converts a list into a generator.
def example_generator(list):
    for item in list:
        yield item

# Create a generator.
my_generator = example_generator([0, 1, 2, 3, 4])

# Iterating over the generator works great the first time.
print "generator iteration 1"
print "---------------------"
for value in my_generator:
    print value

# ...but it doesn't work the second time.
print "\n"
print "generator iteration 2"
print "---------------------"
for value in my_generator:
    print value

Python is really into namespaces (see, for example, The Zen of Python). The __name__ keyword tells you what namespace it is in. For example, if we import numpy, then all of the numpy features are in the numpy namespace.

import numpy as np
print np.__name__
import matplotlib.pyplot as plt
print plt.__name__

If you try to import the above file containing the definition for MRMostUsedWord, then python will interpret the file all the way down until it hits that last if statement. __name__ will evaluate to MRMostUsedWord (or whatever the name was of the file we imported) and the line inside the if statement will be ignored. On the other hand, if you run this code from the command line, python will interpret it without importing it, and __name__ will be the python top level namespace, which is '__main__', so MRMostUsedWord.run() gets called. In (many) fewer words: it tells you to run the job only when invoked from the command line.

Try copying the code for MRMostUsedWord to a file, named MRMostUsedWord.py, and then running it on any old text file you might have lying around. The invocation will be something like this (modify based on your particular python installation):

python MRMostUsedWord.py some_file.txt > most_used_word.out

There is quite a bit of overhead involved in setting up an AWS account and keeping an eye on the jobs that you end up running. In lab, we will run through an example account activation. These documents (also linked from HW4) are very useful: Instructions for Amazon Setup notebook, Elastic MapReduce Quickstart. Once you have this all set up and working, then mrjob makes it very easy to run a MapReduce job with EMR.
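The generator-based dataflow described above can also be simulated locally with nothing but the standard library. This is an illustrative sketch of the map/shuffle/reduce pipeline under simplifying assumptions, not how mrjob itself is implemented:

```python
# Pure-Python sketch of the MapReduce dataflow: a mapper generator emits
# (key, value) pairs one at a time, a shuffle step groups them by key,
# and a reducer generator consumes one group at a time.
from itertools import groupby

def mapper(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Group the sorted pairs by key; each group is itself a generator.
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield (key, (value for _, value in group))

def reducer(grouped):
    for key, values in grouped:
        yield (key, sum(values))

lines = ["the quick brown fox", "the lazy dog"]
counts = dict(reducer(shuffle(mapper(lines))))
print(counts)
```

No stage here holds the full dataset as an explicit list except the sort inside shuffle, which real frameworks perform externally; everything else streams one item at a time.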
Using the same MRMostUsedWord example as above, the command line invocation to run with EMR is:

python MRMostUsedWord.py -r emr some_file.txt > most_used_word.out

Below are two practice problems to get the hang of writing MapReduce algorithms. Remember, you will be writing these programs in separate files that you run from the command line. You are welcome to try out EC2, but these are small datasets and it will generally be much faster to run locally.

First, grab the file word_list.txt. This contains a list of six-letter words that I dumped from my spellchecker. To keep things simple, all of the words consist of lower-case letters only.

word_list = [word.strip() for word in open("word_list.txt").readlines()]
print "{0} words in list".format(len(word_list))
print "First ten words: {0}".format(", ".join(word_list[0:10]))

Use mrjob to write a class that finds all anagrams in word_list.txt.

UPDATE: My solution to exercise 3.1

For the next problem, download the file baseball_friends.csv. Each row of this csv file contains a person's name, that person's favorite team (either "Red Sox" or "Cardinals"), and the names of that person's friends.

Let's take a look at one line:

friends = open("baseball_friends.csv").readlines()
print friends[0].strip()
print len(friends[0].split(",")) - 2

Use mrjob to write a job class that lists each person's name, their favorite team, the number of Red Sox fans they are friends with, and the number of Cardinals fans they are friends with. After running that program, we can look at the results to get an idea of the absurdly simple model that I used to generate the input csv file. You might need to modify the code below if the format of your output file doesn't quite match mine.

import pandas as pd
import json

# Read results.
result_file = "baseball_friends.out"
result = [[json.loads(field) for field in line.strip().split('\t')]
          for line in open(result_file)]

# Break out columns.
names = [x[0] for x in result] teams = [x[1][0] for x in result] redsox_count = [x[1][1] for x in result] cardinals_count = [x[1][2] for x in result] # Combine in data frame. result = pd.DataFrame(index=names, data={'teams': teams, 'redsox_count': redsox_count, 'cardinals_count': cardinals_count}) %matplotlib inline import matplotlib.pyplot as plt from matplotlib import rcParams rcParams['figure.figsize'] = (10, 6) rcParams['font.size'] = 14 # Average number of friends by affiliation. print result.groupby('teams').mean() # Histogram the affiliations of people who are friends of Red Sox fans. plt.hist(result.redsox_count[result.teams == "Red Sox"], label="Red Sox friend Red Sox") plt.hist(result.cardinals_count[result.teams == "Red Sox"], label="Red Sox friend Cardinals") plt.xlabel('number of friends') plt.ylabel('count') plt.legend(loc=0)
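For Exercise 3.1, the key observation is that anagrams share exactly the same letters, so a word's sorted letters make a natural grouping key. Here is a plain-Python sketch of that idea (a local stand-in, not the mrjob solution itself):

```python
# Sketch of the anagram key idea: the "mapper" keys each word by its
# sorted letters, so all anagrams meet in the same group; the "reducer"
# keeps any group containing more than one word.
from collections import defaultdict

def find_anagrams(words):
    groups = defaultdict(list)
    for word in words:
        groups["".join(sorted(word))].append(word)
    return [sorted(group) for group in groups.values() if len(group) > 1]

print(find_anagrams(["listen", "silent", "stones", "onsets", "python"]))
```

In the mrjob version, the sorted-letters string becomes the emitted key and the reducer receives each group of words sharing that key.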
https://nbviewer.ipython.org/github/cs109/content/blob/master/labs/lab8/lab8_mapreduce.ipynb
Hi, I would like to draw a spline the same way it's drawn in the editor, with a gradient from white to blue, and the points. I'd also like the points to be selectable. I've come to this for now:

import c4d

class SplineTest(c4d.plugins.ObjectData):

    def Init(self, op):
        self.LINE_DRAW = None
        return True

    def GetVirtualObjects(self, op, hh):
        hierarchyCloneRes = op.GetAndCheckHierarchyClone(
            hh, op.GetDown(), c4d.HIERARCHYCLONEFLAGS_ASSPLINE, False) if op.GetDown() else None
        spline = c4d.SplineObject(0, c4d.SPLINETYPE_LINEAR)
        if hierarchyCloneRes:
            if not hierarchyCloneRes["dirty"]:
                return hierarchyCloneRes["clone"]
            if hierarchyCloneRes["clone"].IsInstanceOf(c4d.Ospline) or hierarchyCloneRes["clone"].IsInstanceOf(c4d.Oline):
                spline = hierarchyCloneRes["clone"]
        sh = c4d.utils.SplineHelp()
        if sh.InitSplineWith(spline, c4d.SPLINEHELPFLAGS_RETAINLINEOBJECT):
            self.LINE_DRAW = sh.GetLineObject().GetClone()
        sh.FreeSpline()
        return spline

    def Draw(self, op, drawpass, bd, bh):
        doc = bh.GetDocument()
        if drawpass != c4d.DRAWPASS_OBJECT or self.LINE_DRAW is None or self.LINE_DRAW.GetPointCount() == 0 or not bh.IsActive() or not doc.IsEditMode():
            return c4d.DRAWRESULT_SKIP  # this draw is ignored
        bd.DrawPolygonObject(bh, self.LINE_DRAW, c4d.DRAWOBJECT_USE_CUSTOM_COLOR, None,
                             c4d.GetViewColor(c4d.VIEWCOLOR_SPLINESTART))
        if doc.GetMode() == c4d.Mpoints:
            opPoints = self.LINE_DRAW.GetAllPoints()
            pSelection = self.LINE_DRAW.GetPointS().GetAll(len(opPoints))
            selected = [opPoints[index] for index, isSelected in enumerate(pSelection) if isSelected]
            nonselected = [opPoints[index] for index, isSelected in enumerate(pSelection) if not isSelected]
            bd.SetMatrix_Matrix(op, bh.GetMg())
            bd.SetPointSize(6)
            bd.LineZOffset(2)
            if len(nonselected):
                bd.SetPen(c4d.GetViewColor(c4d.VIEWCOLOR_INACTIVEPOINT))
                bd.DrawPoints(nonselected)
            if len(selected):
                bd.SetPen(c4d.GetViewColor(c4d.VIEWCOLOR_ACTIVEPOINT))
                bd.DrawPoints(selected)
        return c4d.DRAWRESULT_OK

if __name__ == "__main__":
    c4d.plugins.RegisterObjectPlugin(id=1000001,
                                     str='SplineTest',
                                     g=SplineTest,
                                     description="osplinetest",
                                     icon=None,
                                     info=c4d.OBJECT_GENERATOR | c4d.OBJECT_INPUT | c4d.OBJECT_ISSPLINE)

The issue is that the object's lines are not being drawn with the custom color. Instead the object seems to be drawn with c4d.VIEWCOLOR_ACTIVEBOX (visually it's drawn as orange in R25).

Hello @baca,

Thank you for reaching out to us. I will answer in bullet points, touching on ObjectData and the OBJECT_ISSPLINE flag, GetContour vs. GetVirtualObjects, the OBJECT_POINTOBJECT and OBJECT_POLYGONOBJECT flags, and PointObject::DisplayPoints.

So, your code is 'correct' for the most part, at least for a polygon object. You return something in GVO to let Cinema 4D handle the polygon shading and then draw vertices and edges on top of that (which is what you are supposed to do). However, what at least I would not do, is storing the point data in self.LINE_DRAW. I would suggest accessing the cache of the node instead.

In the end it is also not quite clear what you want to do. I understand that you want to draw the vertices and 'gradient' for the line segments of a spline. What is not quite clear to me is if you actually need an object (what you describe could also be a tool) and if the user should be able to select and manipulate vertices. It would be best if you could clarify this. Especially regarding the point of wanting to draw a gradient of line segments, I would strongly suggest moving on to C++, as you will have much better chances to implement such a custom thing in that language.

Cheers,
Ferdinand

@ferdinand huge thanks for the detailed reply, as always.
Hey @baca,

I didn't share the whole code here, but in GVO I'm also caching the spline as a self._spline property, and in the GetSpline method I'm returning self._spline.GetClone(). When dealing with a child input object, GetAndCheckHierarchyClone has no alternatives except a weird combination of CSTO + Touch().

Hm, that is sometimes true, but at least from your example code this was not obvious. And I would check if this is really the only way to go. There come penalties with implementing a spline in this manner, the major one being that you will always have another layer of caches, which in turn will impact other things. A node which implements a spline via GetVirtualObjects will return a SplineObject as its cache (or part of its cache). This SplineObject will then have a LineObject cache. A node which implements a spline via GetContour will return a LineObject directly.

So, the solution might be to draw tiny polygons with a gradient in vertex colors? Any suggestion for the draw pass in that case - DRAWPASS_OBJECT or DRAWPASS_HANDLER? The current issue is that my object is drawn automatically, and I have no idea if I can prevent the automatic drawing. I can render a custom color only in the case my object is not selected.

Yes, sort of. I was thinking more of line segments than polygons, but that is the general idea. I would also not draw in object space, but in screen space, to be as efficient and smooth as possible. When you draw in object space and choose the spacing too wide, the gradient will be visibly segmented when projected to pixel screen space. When you choose the spacing too tightly in object space, you will end up with many unnecessary drawing operations which are not going to be visible in screen space anyway. The drawpass in which you should be drawing depends on what you want to do, but usually DRAWPASS_OBJECT is the one to go with.
The more important information here is the z-depth for drawing the line segments, which should be 4 or greater, so that the line segments are drawn over shaded polygons and other stuff.

But I would point out again that Python is not a good match for this task due to performance restrictions. If your splines are simple and drawing a gradient for them just means <= 10,000 draw calls in screen space, you will be fine in Python. But when your splines are complex, and drawing a gradient for them means drawing 100,000s of pixels (i.e., draw calls), Python will bottleneck you hard. The old viewport API is not the fastest even in C++, so such complex tasks should really be done there. I would suggest drawing only the vertices in Python and expressing the direction of the spline by colouring the vertices.

PointObject::DisplayPoints is only available under C++? I didn't find any references in either the Python or C++ documentation.

As I said, this method is non-public. Some of the public interfaces/types have non-public methods. I simply pointed out that we use this method internally, and that it does a lot of things regarding displaying vertices in the viewport. So, when you implement an OBJECT_POINTOBJECT ObjectData plugin, it is somewhat intended from the internal perspective that you then use ::DisplayPoints. I was simply informing you that the inaccessibility of DisplayPoints is one of the obstacles you must overcome.

Yeah, thanks for the suggestion not to store the source line, as it does not reflect further deformer changes. But I found that Draw is called so frequently, and converting the spline to a line takes significant time, so it becomes uncomfortable to work with.

When you implement your spline in GetContour this will be more straightforward, as the cache will then directly be the line object, but you can do more or less the same with GetVirtualObjects, you only must deal with a slightly more complex cache then.
But it should beat storing and retrieving the data manually in any case.

I wanted to render vertices and spline direction not for manipulation, but for a visual check. And I thought about maybe implementing point selection for creating selection tags.

I have provided a very simple pattern for that at the end. I did not implement the selecting, creating, and moving vertices stuff, as this would be quite a bit of work. I also went the route of shading the vertices and not the line segments, as this is much simpler to do.

The result:

The code:

import c4d

class EditableSplineData(c4d.plugins.ObjectData):

    def Draw(self, op, drawpass, bd, bh):
        """
        """
        if drawpass != c4d.DRAWPASS_OBJECT:
            return super().Draw(op, drawpass, bd, bh)

        # Only display the vertices when the object is selected.
        if (not op.GetBit(c4d.BIT_ACTIVE)):
            return c4d.DRAWRESULT_OK

        cache = op.GetCache()
        if not isinstance(cache, c4d.LineObject):
            return c4d.DRAWRESULT_OK

        # Draw in object space with a zoffset of 4 over most things Cinema 4D will draw
        # as polygons, edges, and vertices.
        bd.SetMatrix_Matrix(op, op.GetMg(), zoffset=4)

        # Now we simply will draw a dot for each vertex in the LineObject, with a gradient
        # going from the first vertex to the last vertex. Note that we are drawing the
        # vertices of the LineObject, not the ones of the SplineObject. I.e., we are not
        # drawing the control vertices of the spline, but the actual vertices which have
        # been interpolated between the control vertices. In this case we will draw 3 * 64
        # vertices, since we define 4 control points of a non-closed spline (i.e., 3
        # segments between them) and a uniform interpolation of 64.
        count = cache.GetPointCount()
        cStart = c4d.Vector(1, 0, 0)
        cEnd = c4d.Vector(0, 0, 1)
        colors = []

        # We are going to use a single draw call for doing this, DrawPoints, which also
        # allows for drawing each point in a different color. But it expects the colors in
        # a bit annoying format as an array of floats: [r, g, b, r, g, b, r, g, b, ...].
        for i in range(count):
            ci = c4d.utils.MixVec(cStart, cEnd, i / count)
            colors += [ci.x, ci.y, ci.z]

        # Draw all points in one go, this is very performant.
        bd.SetPointSize(4)
        bd.DrawPoints(vp=cache.GetAllPoints(), vc=colors, colcnt=3)

        return super().Draw(op, drawpass, bd, bh)

    def GetContour(self, op, doc, lod, bt):
        """
        """
        # I am not going to comment the geometry construction I am doing here. For details
        # on polygon and spline generation, see
        node = c4d.SplineObject(4, c4d.SPLINETYPE_LINEAR)
        size = 100.0
        node.SetAllPoints([c4d.Vector(0, 0, 0),
                           c4d.Vector(size, 0, 0),
                           c4d.Vector(size, size, 0),
                           c4d.Vector(0, size, 0)])
        node.Message(c4d.MSG_UPDATE)

        # I am just going to be lazy and not tie all the interpolation parameters in and
        # instead set them manually here.
        node[c4d.SPLINEOBJECT_INTERPOLATION] = c4d.SPLINEOBJECT_INTERPOLATION_UNIFORM
        node[c4d.SPLINEOBJECT_SUB] = 64
        return node

if __name__ == "__main__":
    c4d.plugins.RegisterObjectPlugin(
        id=1000001,
        str='Editable Spline',
        g=EditableSplineData,
        description="oeditspline",
        icon=None,
        info=c4d.OBJECT_GENERATOR | c4d.OBJECT_ISSPLINE)

@ferdinand thanks for the explanations and example.
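The flat [r, g, b, ...] color-list format that DrawPoints expects, as described in the code comments above, can be sketched in plain Python; the mix helper below is a hypothetical stand-in for c4d.utils.MixVec, so this runs without the c4d module:

```python
# Plain-Python sketch of building the flat [r, g, b, r, g, b, ...] color
# list for a white-to-blue gradient over a row of vertices.
def mix(a, b, t):
    # Linear interpolation between two RGB triples (stand-in for MixVec).
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def gradient_colors(count, start=(1.0, 1.0, 1.0), end=(0.0, 0.0, 1.0)):
    colors = []
    for i in range(count):
        colors.extend(mix(start, end, i / count))
    return colors

cols = gradient_colors(4)
print(len(cols))  # three floats per vertex
```

The same per-vertex loop appears in the Draw method above, with c4d.Vector values instead of tuples.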
https://plugincafe.maxon.net/topic/14025/draw-editable-spline-in-the-viewport
Add policy engine class to API¶ This blueprint makes it possible to access policy-engines (other than the domain-agnostic policy engine) via the API. Problem description¶ Currently there is no way to access policy engines other than the current domain-agnostic policy engine through the API. Now that we are adding other policy engines, we need to give the user a way to interact with them. In addition to being useful for domain-specific policy engines, enabling multiple policy engines should make it easier to handle upgrades for the domain-agnostic policy engine: bring up the new one alongside the old one, then swap the old one with the new one atomically. Proposed change¶ Here we discuss two proposals. We list tradeoffs in the next section. Proposal 1: Type-based¶ Here we simply add a top-level ‘policy-engines’ API endpoint, giving us … /v1/policy-engines/<engine-id>/… /v1/data-sources/<datasource-id>/… /v1/system/… For example: /data-sources/nova/… /data-sources/neutron/… /policy-engines/agnostic/… Data-sources would support: * schema: available tables * actions: available actions * status: status Policy-engines would support: schema: available tables (e.g. classification:connected_to_internet) actions: available actions (e.g. scripts built into a policy-engine for carrying out some task) status: status policies: available policies That is, the only difference between a policy-engine and a data-source is that the datasource doesn’t support ‘policies’. If we were to think of ‘policies’ as simply ‘modules’ then we could imagine a datasource-driver exposing hierarchical tables, just like a policy-engine does. And in that case, the data-source would have something analogous to the policy-engine ‘policies’. Proposal 2: Service-based¶ Here we do away with the types policy-engine and data-sources and give clients the ability to access both policy-engines and datasources from the same endpoint. 
This is possible because you cannot give a policy-engine and a datasource the same name (a restriction in place to ensure references to other services in policy are unambiguous).

/v1/services/<service-id>/…
/v1/system/…

For example:

/v1/services/nova/…
/v1/services/neutron/…
/v1/services/agnostic/…

All services would support:

schema: available tables (e.g. classification:connected_to_internet)
actions: available actions (e.g. scripts built into a policy-engine for carrying out some task)
status: status
policies: available policies

In short, this proposal treats everything as if it is a policy-engine. The datasource policy engines prohibit users from making changes directly (and even if they did, they would only accept a very restricted form of policy statements: ground facts).

This approach can be backwards compatible as well. We can still support

/v1/datasources/<datasource-id>/…

which gets routed to /v1/services/<datasource-id>. And we can support

/v1/policies
/v1/actions
/v1/tables
/v1/status

as syntactic sugar for

/v1/services/<default-service>/policies
/v1/services/<default-service>/actions
/v1/services/<default-service>/tables
/v1/services/<default-service>/status

This would enable the user to choose a single touchpoint for managing policy in the datacenter, while at the same time enabling them direct access to all the services in the datacenter.

To set the <default-service>, we'd want a parameter the user can set dynamically (to help with upgrade). Right now we would set that to /services/agnostic. And maybe we can have arbitrary aliases for services as well, so that we can upgrade any service without changing policy.

One worry with providing the /v1/policies, etc. endpoints is that it may seem to mask Congress's overall status, policies, actions, and tables. That is, people might expect those endpoints to aggregate all the potential policies, actions, tables, and statuses.
But if such functionality ever becomes necessary, we can attach those endpoints to /v1/services, giving us the following endpoints.

/services/policies
/services/actions
/services/tables
/services/status

Here is an example of the entry points if we had Nova, GBP, and our policy engine. The name of the service is whatever name DSE expects.

/policies
/actions
/tables
/services/policies
/services/actions
/services/tables
/services/nova/policies -> empty
/services/nova/actions -> createVM, deleteVM, migrateVM, etc.
/services/nova/tables -> servers, hosts, etc.
/services/gbp/policies
/services/gbp/actions
/services/gbp/tables
/services/engine/policies
/services/engine/actions
/services/engine/tables
/system/drivers/
/system/engine-drivers/nova-uber-driver
/system/datasource-drivers/nova-uber-driver
/system/action-drivers/nova-uber-driver
/users
/stats

One benefit to enabling people to modify domain-specific policy engines through the Congress API is that we provide a single policy language for managing all the policy engines running in the datacenter. For delegation, we already need adapters that translate Datalog into the native language of each policy engine, so here we expose that functionality directly to the user as well.

Tradeoffs¶

Pros for the type-based approach:

* Easy for users to understand
* Simple extension of the current API

Cons for the type-based approach:

* Awkward that data-sources and policy-engines implement almost exactly the same interface and have separate namespaces, but are represented as distinct classes in the API.
* Enables us to build datasources with a significantly different programmatic interface than policy-engines. If at the API layer the two classes of objects were almost indistinguishable, it would lead to better abstraction and interfaces in the underlying implementation.

Pros for the service-based approach:

* All services running on the DSE are accessed identically from the API.
This more naturally reflects the nature of those services.

Cons for the service-based approach:

* Bigger change
* May be more difficult for users to understand initially.
* Eventually the policy-engine class will include functionality that the datasource class does not. Executing that functionality on a datasource will cause a 404, and we cannot predict which will occur based on just the URL.
* Doing something like listing all the datasources will require an API like /v1/services?action=list&type=datasource instead of the more obvious /v1/data-sources/.

Overall, the types (datasource vs. policyengine) will be present in both proposals, but they will be emphasized much less in the service-based approach. The service-based approach is closer to Python in that the system isn't able to look at the code you've written (the URL) and check if the method you asked for exists. The type-based approach is closer to C/Java in that the system IS able to tell you if the method exists by just looking at the code (URL). Typically policy systems are quite dynamic in nature (you can change the policy/code at runtime), and hence are closer to dynamic programming languages like Python than to static languages like C/Java. We therefore typically bias our decisions toward dynamism, which in this case would mean the service-based approach.
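The backwards-compatible routing from the service-based proposal can be sketched with a tiny path rewriter; the dispatcher below is purely illustrative, only the paths themselves come from this spec:

```python
# Hypothetical sketch of the routing rules: legacy /v1/datasources/<id>/...
# paths are rewritten onto /v1/services/..., and the bare sugar endpoints
# (/v1/policies etc.) are expanded against the default service.
DEFAULT_SERVICE = "agnostic"  # dynamically configurable in the proposal

SUGAR = ("policies", "actions", "tables", "status")

def canonicalize(path):
    parts = path.strip("/").split("/")
    if len(parts) >= 2 and parts[1] == "datasources":
        parts[1] = "services"
    elif len(parts) == 2 and parts[1] in SUGAR:
        parts = [parts[0], "services", DEFAULT_SERVICE, parts[1]]
    return "/" + "/".join(parts)

print(canonicalize("/v1/datasources/nova/tables"))
print(canonicalize("/v1/policies"))
```

Swapping the default service (e.g. during an engine upgrade) then only changes DEFAULT_SERVICE, not any client-facing URL.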
Testing¶ Change unit tests in congress/tests/test_congress.py Change congress_pythonclient (which will handle tempest tests)
https://specs.openstack.org/openstack/congress-specs/specs/liberty/policy-engine-api.html
Inset map showing a rectangular region

The pygmt.Figure.inset method adds an inset figure inside a larger figure. The function is called using a with statement, and its position, box, offset, and margin can be customized. Plotting methods called within the with statement plot into the inset figure.

import pygmt

# Set the region of the main figure
region = [137.5, 141, 34, 37]

fig = pygmt.Figure()

# Plot the base map of the main figure. Universal Transverse Mercator (UTM)
# projection is used and the UTM zone is set to be "54S".
fig.basemap(region=region, projection="U54S/12c", frame=["WSne", "af"])

# Set the land color to "lightbrown", the water color to "azure1", the
# shoreline width to "2p", and the area threshold to 1000 km^2 for the main
# figure
fig.coast(land="lightbrown", water="azure1", shorelines="2p", area_thresh=1000)

# Create an inset map, setting the position to bottom right, the width to
# 3 cm, the height to 3.6 cm, and the x- and y-offsets to
# 0.1 cm, respectively. Draws a rectangular box around the inset with a fill
# color of "white" and a pen of "1p".
with fig.inset(position="jBR+w3c/3.6c+o0.1c", box="+gwhite+p1p"):
    # Plot the Japan main land in the inset using coast. "U54S/?" means UTM
    # projection with map width automatically determined from the inset width.
    # Highlight the Japan area in "lightbrown"
    # and draw its outline with a pen of "0.2p".
    fig.coast(
        region=[129, 146, 30, 46],
        projection="U54S/?",
        dcw="JP+glightbrown+p0.2p",
        area_thresh=10000,
    )
    # Plot a rectangle ("r") in the inset map to show the area of the main
    # figure. "+s" means that the first two columns are the longitude and
    # latitude of the bottom left corner of the rectangle, and the last two
    # columns the longitude and latitude of the upper right corner.
    rectangle = [[region[0], region[2], region[1], region[3]]]
    fig.plot(data=rectangle, style="r+s", pen="2p,blue")

fig.show()

Total running time of the script: ( 0 minutes 1.562 seconds)

Gallery generated by Sphinx-Gallery
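The rectangle row in this example reorders the GMT-style region list [xmin, xmax, ymin, ymax] into the [xmin, ymin, xmax, ymax] layout that the "r+s" style expects. A tiny helper (hypothetical, not part of PyGMT) makes that reordering explicit:

```python
# Convert a GMT region list [xmin, xmax, ymin, ymax] into the single
# data row [[xmin, ymin, xmax, ymax]] used by style="r+s".
def region_to_rectangle(region):
    xmin, xmax, ymin, ymax = region
    return [[xmin, ymin, xmax, ymax]]

print(region_to_rectangle([137.5, 141, 34, 37]))
```

With such a helper, the call in the example would read fig.plot(data=region_to_rectangle(region), style="r+s", pen="2p,blue").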
https://www.pygmt.org/dev/gallery/embellishments/inset_rectangle_region.html
This is the mail archive of the cygwin-apps@cygwin.com mailing list for the Cygwin project. Igor Pechtchanski wrote: > On Sat, 12 Jul 2003, Max Bowsher wrote: >>> 2003-07-11 Igor Pechtchanski <pechtcha@cs.nyu.edu> >>> >>> * String++.h (TOSTRING): New macro. >>> [snip] >> >> Do we need __TOSTRING__ and TOSTRING? Since they are defined in the same >> file, it isn't really making the namespace cleaner. > > Yes, we do need two macros. The helper macro (__TOSTRING__) can be named > something else, but it's needed to force parameter expansion. Otherwise, > TOSTRING(__LINE__) would have produced "__LINE__", not the stringified > value of __LINE__. This is straight from the K&R book... OK, I didn't know that. Would you add a comment on this subtlety? > However, I just looked, and this kind of macro seems to be defined already > in /usr/include/symcat.h (XSTRING). I'm not sure whether it's better to > use the pre-existing macro, or to to define our own (with a more intuitive > name, IMO). The macro is simple enough. Opinions? IMO, define our own - more obvious name, plus symcat.h would really be a Cygwin header - My self-built gcc-mingw 3.3 doesn't search /usr/include. I don't know whether 3.2-3 does or not. Max.
https://sourceware.org/legacy-ml/cygwin-apps/2003-07/msg00151.html
Perform an FFT transform by calling EMData::do_fft() and EMData::do_ift() More...

#include <processor.h>

Perform an FFT transform by calling EMData::do_fft() and EMData::do_ift()

Definition at line 9105 of file processor.h.

Get the description of this specific processor. This function must be overwritten by a subclass.

Implements EMAN::Processor.

Definition at line 9127 of file processor.h.

Get the processor's name. Each processor is identified by a unique name.

Implements EMAN::Processor.

Definition at line 9110 of file processor.h.

Get processor parameter information in a dictionary. Each parameter has one record in the dictionary. Each record contains its name, data-type, and description.

Reimplemented from EMAN::Processor.

Definition at line 9120 of file processor.h.

References EMAN::EMObject::INT, and EMAN::TypeDict::put().

Definition at line 9115 of file processor.h.

To process an image in-place. For those processors which can only be processed out-of-place, override this function to just print out some error message to remind the user to call the out-of-place version.

Implements EMAN::Processor.

Definition at line 10437 of file processor.cpp. References EMAN::Dict::has_key(), and EMAN::Processor::params.

Definition at line 9132 of file processor.h. Referenced by get_name().
https://blake.bcm.edu/doxygen/classEMAN_1_1FFTProcessor.html
Molot - lightweight build tool for software projects.

Requirements

Molot requires Python 3.6 or above (3.5 should work too, but that's untested).

Usage

To create a build script, just make a new file build.py in the root of your project. You can make it executable with chmod +x build.py to make it easier to run. Here's how to start your build script.

#!/usr/bin/env python3
from molot.builder import * #pylint: disable=unused-wildcard-import

Note that #pylint: disable=unused-wildcard-import is optional but will help keep your editor quiet about unused imports. Once you've defined your targets and all, do this at the end to compile them:

build()

Now you're ready to run the build script to see the help message:

./build.py

If you only want to see the list of targets and environment arguments, you run the built-in target list as follows:

./build.py list

The output will be something like this:

→ Executing target: list
available targets:
<builtin>
  list - lists all available targets

environment arguments:

Not very exciting. Now let's learn how to add targets and environment arguments.

Targets

Any piece of work that your build script needs to perform is defined as a target. Here's a trivial example of a target that just runs ls.
@target(
    name='ls',
    description="lists current directory items",
    group='greetings',
    depends=['target1', 'target2']
)
def ls():
    shell("ls")

Parameters are as follows:

- name - unique name to use when requesting the target (optional; by default the function name will be used)
- description - short description about what the target does, to be displayed in the help message (optional)
- group - group name to list the target under alphabetically (optional; by default, it will be listed under ungrouped)
- depends - list of other targets that need to be executed first (optional)

Since all the parameters are optional, the shortest definition of the same target can be as follows:

@target()
def ls():
    shell("ls")

There is a basic dependency resolution routine that checks for circular dependencies and finds the first targets to execute before running the one that you requested. Anyway, here's how you run your new target:

./build.py ls

Environment Arguments

Environment arguments are intended as a cross between environment variables and arguments. Values can be passed as the former and then overridden as the latter. Here's how you define one:

ENV = envarg('ENV', default='dev', description="build environment")

Parameters are as follows:

- name - unique name for the argument
- default - default value if none is supplied (optional; by default None)
- description - short description about what the argument is, to be displayed in the help message (optional)

The argument is evaluated right there (rather than inside of targets), so you can use that variable anywhere once it's set. It can be set as a regular environment variable. For example:

ENV=dev ./build.py sometarget

Alternatively, it can be passed as an argument:

./build.py sometarget --arg ENV=prod

Finally, you can pass a .env file to load:

./build.py sometarget --dotenv ~/somewhere/.env

If both are passed simultaneously (not recommended), then the argument takes precedence over the environment variable.
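The precedence rule described above (argument beats environment variable beats default) can be sketched in a few lines; this is an illustrative stand-in, not Molot's actual implementation:

```python
# Resolve an environment argument: --arg values win over environment
# variables, which win over the declared default.
import os

def resolve_envarg(name, cli_args, default=None):
    if name in cli_args:                  # --arg NAME=value wins
        return cli_args[name]
    return os.environ.get(name, default)  # then the environment, then default

os.environ["ENV"] = "dev"
print(resolve_envarg("ENV", {}, default="local"))
print(resolve_envarg("ENV", {"ENV": "prod"}))
```

The same three-level lookup is what makes `ENV=dev ./build.py sometarget --arg ENV=prod` end up with "prod".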
Configuration

Molot provides an optional configuration parsing facility. If you want to specify a configuration YAML file, create a file build.yaml in your project root, same location as your build.py, and fill it with any valid YAML. For example, something like this:

Environments:
  dev:
    Name: development
  prod:
    Name: production

Now you can access these configurations by calling config() from anywhere. The first call will do the initial parsing, subsequent ones will just return a cached dictionary with your configurations. Therefore, if you want to parse a YAML file with a different name, pass the path to the first call:

config(path=os.path.join(PROJECT_PATH, 'somethingelse.yaml'))

You can either get the whole configuration dictionary or pass a specific path of keys to extract. For example, if you want to get the name for the prod environment:

name = config(['Environments', 'prod', 'Name'])

If the desired key is optional and you don't want to fail the execution if it's not there, you can do the following:

name = config(['Environments', 'qa', 'Name'], required=False)

Bootstrap

The build script above assumes Molot is already installed. If not, there are some tricks that you can use to pre-install it before the script runs. For example, you can create a separate file build_boot.py as follows:

from subprocess import run
from importlib.util import find_spec as spec
from pkg_resources import get_distribution as dist

# Preloads Molot build tool.
def preload_molot(ver):
    mod, pkg = 'molot', 'molot'
    spec(mod) and dist(pkg).version == ver or run(['pip3', 'install', f"{pkg}=={ver}"])

Then at the top of your script, you'll be able to do the following:

#!/usr/bin/env python3
__import__('build_boot').preload_molot('X.Y.Z')
from molot.builder import * #pylint: disable=unused-wildcard-import

This downloads a specific version X.Y.Z if it's not already installed.

Installer

There is an installer for external packages that you can use to install dependencies only when they're needed.
from molot.installer import install install([ 'package1', ('module2', 'package2>=1.2.3') ]) Notice that you can pass a list of packages to install in two formats: - When the module name (import statement) matches the install package name, you can just pass it as a string, i.e. like 'package1' in the example - When they differ or you want to provide a specific version of a package, pass a tuple with the module name first and the install statement second, i.e. like ('module2', 'package2>=1.2.3') in the example The install() expression checks if the module can be imported (meaning that it’s already installed) and installs it otherwise. By default, the installer uses pip3 install but if you want to use a different expression (e.g. different version of pip or conda), you can pass it using the INSTALLER environment argument. INSTALLER="conda install" ./build.py Contexts Although you can do all the work within each target, you can also abstract it into “contexts”. While you can use this concept however you like, the intended use was creating an object that extends Context that sets up the arguments, paths and anything else your target needs, and then calling a method on it. Here’s an example: PATH = './' ENV = 'dev' @target() def create_foo(): FooContext(PATH, ENV).create() @target() def delete_foo(): FooContext(PATH, ENV).delete() from molot.context import Context class FooContext(Context): def __init__(self, path, env): self.path = path self.env = env def create(self): self.ensure_dir(self.path) # Do something with self.env def delete(self): self.ensure_dir(self.path) # Do something with self.env It might be a good idea to then extract your contexts into a separate file build_contexts.py and import them in your build.py. That way, your build script is nice and clean with only the targets, meanwhile all your under-the-hood implementation is hidden away in a separate file. Examples See examples directory for sample build scripts that demonstrate some features. 
https://pypi.org/project/molot/
CC-MAIN-2022-27
en
refinedweb
README TinySDF is a tiny and fast JavaScript library for generating SDF (signed distance field) from system fonts on the browser using Canvas 2D and the Felzenszwalb/Huttenlocher distance transform. This is very useful for rendering text with WebGL. Demo Usage Create a TinySDF for drawing glyph SDFs based on font parameters: const tinySdf = new TinySDF({ fontSize: 24, // Font size in pixels fontFamily: 'sans-serif', // CSS font-family fontWeight: 'normal', // CSS font-weight fontStyle: 'normal', // CSS font-style buffer: 3, // Whitespace buffer around a glyph in pixels radius: 8, // How many pixels around the glyph shape to use for encoding distance cutoff: 0.25 // How much of the radius (relative) is used for the inside part of the glyph }); const glyph = tinySdf.draw('泽'); // draw a single character Returns an object with the following properties: data: a Uint8ClampedArray of alpha values (0–255) for a width x height grid. width: Width of the returned bitmap. height: Height of the returned bitmap. glyphTop: Maximum ascent of the glyph from the alphabetic baseline. glyphLeft: Currently hardwired to 0 (actual glyph differences are encoded in the rasterization). glyphWidth: Width of the rasterized portion of the glyph. glyphHeight: Height of the rasterized portion of the glyph. glyphAdvance: Layout advance. TinySDF is provided as an ES module, so it's only supported on modern browsers, excluding IE. <script type="module"> import TinySDF from ''; ... </script> In Node, you can't use require — only import in ESM-capable versions (v12.15+): import TinySDF from '@mapbox/tiny-sdf'; Development npm test # run tests npm start # start server for the demo page License This implementation is licensed under the BSD 2-Clause license. It's based directly on the algorithm published in the Felzenszwalb/Huttenlocher paper, and is not a port of the existing C++ implementation provided by the paper's authors.
https://www.skypack.dev/view/@mapbox/tiny-sdf
CC-MAIN-2022-27
en
refinedweb
The document Using Calculations to replace and enhance AutoClose (or automatically move any process on to any status with any action) gives full instructions on setting up a calculation to automatically perform a process action such as Close after a set period of time. However it assumes that the status you are moving from is not configured to be Read Only. The calculation that is performed each night to check if the process is ready to move on can't run if the status is Read Only so an alternative method is needed. The following are the variations to the steps on the document linked above. Follow the steps as normal but refer to this document for the changes. It is assumed you are using Incidents and checking against the Resolutions collection as in the example in the original document. Stage 1 - The Scheduled Calculation - Step 1 instructs you to create a Boolean attribute on the Incident however in this case you should instead create it on the Resolution object. This is because the Resolution object is what is being checked against in the calculation, and while the Incident itself is Read Only the collection object is not. - Step 4 has an example calculation. 
Instead use the following: import System static def GetAttributeValue(Resolution): // initially set the flag to False Flag = 'False' // check this is the latest resolution and otherwise return False if Resolution.SerialNumber != Resolution.Incident.Resolutions.Count: return Flag // check the incident is resolved if Resolution.Incident.Status.Name == 'Resolved': // check if this resolution was over 7 days ago TimeSinceResolved = Resolution.Incident.GetBusinessTime(Resolution.CreationDate, DateTime.UtcNow) if TimeSinceResolved.TotalHours > 40: // if today isn't Saturday or Sunday enable the flag if DateTime.Today.DayOfWeek != DayOfWeek.Saturday and DateTime.Today.DayOfWeek != DayOfWeek.Sunday: Flag = 'True' // return the True or False flag return Flag The main difference is that before checking if the resolution was created over a week ago it also checks that this is the latest resolution. This is because you could have re-opened and re-resolved an incident multiple times and when the action is performed on the Incidents with the flag enabled we need to know that the time elapsed since it was resolved is from the most recent resolution. Stage 2 - Scheduling the calculation - Step 2 instructs you to create a query on the Incident object. Instead create it on the Resolution object as this is where the calculation attribute sits. - For adding the criteria in step 3 you need to expand the relationship to Incident in the attributes list to access the Status. - In step 4 in creating the schedule set the object to Resolution rather than Incident. Stage 3 - Schedule the Close action - For adding the criteria in step 2 you need to expand the Resolutions collection in the attributes list to access the AutoClose Flag attribute created in the first stage. Because the criteria is based on a collection this would return incidents where any of the resolutions has the flag set. 
However this is not an issue because the calculation only sets the flag if the resolution is the latest for that Incident and clears the flag on previous resolutions.
https://community.ivanti.com/docs/DOC-24282
CC-MAIN-2018-05
en
refinedweb
Hi, during compilation of LAME I found out that the following gcc -------------------------- Using built-in specs. Target: i686-pc-linux-gnu Configured with: ../../../gcc-CVS-20050512/gcc-CVS-20050512/configure --host=i686-pc-linux-gnu --prefix=/usr/local/opt/gcc-4.1 --exec-prefix=/usr/local/opt/gcc-4.1 --sysconfdir=/etc --libdir=/usr/local/opt/gcc-4.1/lib --libexecdir=/usr/local/opt/gcc-4.1/libexec --sharedstatedir=/var --localstatedir=/var --program-suffix=-4.1 --with-x-includes=/usr/X11R6/include --with-x-libraries=/usr/X11R6/lib --disable-shared --enable-static --with-gnu-as --with-gnu-ld --with-stabs --enable-threads=posix --enable-version-specific-runtime-libs --disable-coverage --disable-libgcj --disable-checking --enable-multilib --with-x --enable-cmath --enable-libstdcxx-debug --enable-fast-character --enable-hash-synchronization --with-system-zlib --with-libbanshee --with-demangler-in-ld --with-arch=athlon-xp --disable-libada --enable-languages=c,c++,f95,objc Thread model: posix gcc version 4.1.0 20050512 (experimental) -------------------------- miscompiles the code when compiled with (at least) following flags: -------------------------- gcc -O1 -fno-strict-aliasing -finline-functions -o lame lame.c -------------------------- But when compiling with only the following: -------------------------- gcc -O1 -fno-strict-aliasing -finline-functions -o lame lame.c -------------------------- it works OK. I extracted as small part of it as possible, so that you can test it, but still it is too big to just list here (cca 17KB), so I'll send it as an attachment after commiting this bugreport template (because there is no way to do in directly from here, before the bug is created :(). The miscompilation occurs on line 302 of the attached file. It looks something like this: -------------------------- if (gfp->out_samplerate == 0) gfp->out_samplerate = optimum_samplefreq( (int)gfp->lowpassfreq, gfp->in_samplerate); /* <----- Here's the problem! 
*/ -------------------------- The problem is that though before this 'gfp->out_samplerate' IS zero, the value of 'gfp->out_samplerate' keeps its previous value (which is 0 in this case) while it should have obtained a new value from calling the 'optimum_samplefreq( (int)gfp->lowpassfreq, gfp->in_samplerate)', which should return (and returns) a value of 8000 in this case. I tried to strip the surrounding code a bit more, but it seems not to trigger the bug then. When you compile and run it, it may SEGFAULT afterwards (after the bugged part), because I didn't put there some necessary initializations and some other things in order to make the example as small as possible. But that is OK and should be of no concern. When you compile the whole LAME application (library), everything is running without SEGFAULTs and the bug is still there. I consider this a fatal bug, since it doesn't reveal itself during compilation. Everything compiled just fine. Except for that it didn't work as expected then. I hope there is nothing wrong with the test code. But even if it is, gcc certainly shouldn't miscompile like this. Created attachment 8887 [details] This is the testcode that triggers the bug (stripped from latest CVS LAME). (In reply to comment #0) > > But when compiling with only the following: > > -------------------------- > gcc -O1 -fno-strict-aliasing -finline-functions -o lame lame.c > -------------------------- Sorry this should have been: -------------------------- gcc -O0 -fno-strict-aliasing -finline-functions -o lame lame.c -------------------------- I have no idea what is causing the problem. I tried the following options and it is still messed up: " -O1 -finline-functions -fno-tree-dominator-opts -fno-tree-fre -fno-tree-ccp -fno-tree-store-ccp -fno-tree-salias -fno-tree-sink -fno-tree-dse -fno-tree-sra " some how the store is becoming dead.; Strict aliasing does not matter in this case as it is not enabled at -O1 anyways. 
Oh and -fno-tree-saliasing does not fix it, this is just for Dan. (In reply to comment #7) > Oh and -fno-tree-saliasing does not fix it, this is just for Dan. I mean "-fno-tree-salias". (In reply to comment #5) >; Yes, you are absolutely correct. Just a result of too much stripping and 3:00 AM. ;-) But the problem isn't affected by that. (In reply to comment #6) > Strict aliasing does not matter in this case as it is not enabled at -O1 anyways. It does! Although not with -O1. But I just wanted to point out (which I forgot before) that with '-fstrict-aliasing' it works, i.e.: ---------------------- gcc -O1 -fstrict-aliasing -finline-functions -I. -o lame lame.c ----------------------? (In reply to comment #11) >? Well, actually it was CVS-20050613 that I checked (sorry a misspell). But anyway. Fixed.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=21564
CC-MAIN-2018-05
en
refinedweb
The problem is a typical Knapsack problem. The idea is that each coin can be used infinitely often, so the inner loop of the dp runs ascending. If each coin could only be used once, the inner loop would run descending. import java.util.Arrays; public class Solution { public int coinChange(int[] coins, int amount) { int[] dp = new int[amount + 1]; Arrays.fill(dp, Integer.MAX_VALUE); dp[0] = 0; for (int coin : coins) { for (int j = coin; j <= amount; j++) { if (dp[j - coin] != Integer.MAX_VALUE) { dp[j] = Math.min(dp[j], dp[j - coin] + 1); } } } return dp[amount] == Integer.MAX_VALUE ? -1 : dp[amount]; } } I can't understand the relationship between this question and the Knapsack problem, can you give more details about this? This is what I get from your code: after each iteration, dp[] holds the minimum number of coins needed to make each amount. Let's take 11 and [5,2,1] for example. After the first iteration with coin 5, we know whether each amount can be made with coin 5 alone and what the minimum number is. The second iteration uses the previous results for coin 5 to determine which amounts can be made with combinations of 5 and 2, and what the minimum number is, via Math.min(dp[j], dp[j - coin] + 1). And so on.
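For comparison, here is a sketch of the 0/1 variant the post alludes to, where each coin may be used at most once and the inner loop therefore runs downward. This is a hypothetical companion method, not code from the original thread:

```java
import java.util.Arrays;

public class CoinChangeOnce {
    // Minimum coins to make `amount` when each coin may be used at most once.
    public static int coinChangeOnce(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, Integer.MAX_VALUE);
        dp[0] = 0;
        for (int coin : coins) {
            // Descending j: dp[j - coin] still holds the value from *before*
            // this coin was considered, so the coin cannot be reused.
            for (int j = amount; j >= coin; j--) {
                if (dp[j - coin] != Integer.MAX_VALUE) {
                    dp[j] = Math.min(dp[j], dp[j - coin] + 1);
                }
            }
        }
        return dp[amount] == Integer.MAX_VALUE ? -1 : dp[amount];
    }

    public static void main(String[] args) {
        System.out.println(coinChangeOnce(new int[]{5, 3, 2, 1}, 11)); // prints 4
    }
}
```

With [5,3,2,1] and amount 11 the answer is 4 (5+3+2+1); with [5,2,1] the amount 11 is unreachable once each coin can only appear once, so the method returns -1.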
https://discuss.leetcode.com/topic/57101/simplest-java-solution-beat-89
CC-MAIN-2018-05
en
refinedweb
So I am getting into the swing of python in my class, and am currently trying to take a program from an example, and making it much more condensed. The issue I am having, is there are many classes for a tkinter guessing game. def question5(): if entry5.get() in ["Samus", "samus", "Metroid", "metroid", "Prime", "prime", "Metroid Prime", "metroid prime"]: answerlabel5['text']= "Its the amazing Metroid game heroine Samus, good call!" else: answerlabel5['text']= "Sigh.. try again." def question(butnum): print "calling proc" #shows space in output to see what is going in print "you submitted", butnum #trying to find what output is happening. """if (ent[i]).get() in ans[i]: (a[i])['text'] = acor[i] else: (a[i])['text'] = ahint[i]""" #" was an attempt to get it to work. for i in range(0, 5): images[i] = Label(image=images[i]).pack() ent[i] = Entry().pack() sub[i] = Button(text="Submit Answer",command=(lambda: question('i')), fg="white",bg="grey50", font=('impact',10)).pack() ans[i] = Label(text="", bg="white", font=('impact',10)).pack() The class question 5 is one of 5(obviously) that checks for a right answer. I was trying to make a class that would dynamically check based on what entry box I was entering into. How would I get that to work with the class, or is it even possible?
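Two things worth checking in the loop above: `command=(lambda: question('i'))` passes the literal string 'i' to every button rather than the loop index, and `ent[i] = Entry().pack()` stores None, because `pack()` returns None (create the widget first, then pack it on a separate line). The lambda problem is the classic late-binding capture, which can be shown without tkinter at all:

```python
# Each callback should remember its own index, not the loop variable's final value.
wrong = [lambda: i for i in range(3)]      # late binding: every lambda sees i == 2
right = [lambda i=i: i for i in range(3)]  # default argument freezes each i

print([f() for f in wrong])   # [2, 2, 2]
print([f() for f in right])   # [0, 1, 2]
```

So the button line would become something like `command=(lambda i=i: question(i))`, and `question(butnum)` can then index into the entry list once `ent[i]` actually holds the Entry widgets.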
https://www.daniweb.com/programming/software-development/threads/153411/class-issue
CC-MAIN-2018-05
en
refinedweb
Hello everyone, I'm obviously having trouble with a program that wants me to create a house with just five mouse clicks. My program is simple create a house using only 5 mouse clicks. This problem is from the book Python Programming by John Zelle Its on page 162 This is what the book says on the problem: You are to write a program that allows the user to draw a simple house using five mouse clicks. The first two clicks will be the opposite corners of the rectangle frame of the house. The third click will indicate the center of teh top edge of a rectangular door. The door should have a total width that is 1/5th of the width of the house frame. The sides of the door should extend from the corners of the top down to the bottom of the frame. The fourth click will indicate the center of a square window. The window is half as wide as the door. The last click will indicate the peak of the roof. The edges of the roof will extend from the point at the peak to the corners of the top edge of the house frame. Thanks for the help in advance. PS I'm using python 2.2.3 version So far this is what I have come up with: from graphics22 import * def main(): win = GraphWin("house.py", 500, 500) win.setCoords(0,0, 4,4) p1 = win.getMouse() p2 = win.getMouse() p3 = win.getMouse() p4 = win.getMouse() p5 = win.getMouse() house = Rectangle(p1, p2) house.setFill("Red") house.draw(win) roof = Polygon(p1, p3, p4) roof.setFill("Black") roof.draw(win) #door = Rectangle () #door.setFill("Brown") #door.draw(win) #window = Rectangle() #window.setFill("White") #window.draw(win) win.getMouse() win.close() main() I can create the rectangle for the base of the house and the roof but then I'm already at 4 mouse clicks.
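The door and window geometry in the assignment can be worked out before touching the graphics library. Here is a sketch using plain (x, y) tuples; the helper names are invented, and p1/p2 are the frame corners, p3 the door-top center, p4 the window center, following the click order in the problem statement:

```python
def door_corners(p1, p2, p3):
    """Opposite corners of the door: width is 1/5 of the house frame,
    top edge centered at p3, bottom edge at the bottom of the frame."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = p3
    width = abs(x2 - x1) / 5.0
    bottom = min(y1, y2)
    return (dx - width / 2, dy), (dx + width / 2, bottom)

def window_corners(p1, p2, p4):
    """Square window centered at p4, half as wide as the door (1/10 of the frame)."""
    half = abs(p2[0] - p1[0]) / 10.0 / 2
    wx, wy = p4
    return (wx - half, wy - half), (wx + half, wy + half)
```

With Zelle's graphics module each pair of corners then becomes `Rectangle(Point(*c1), Point(*c2))` and is drawn the same way as the frame and roof, keeping the total at five clicks.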
https://www.daniweb.com/programming/software-development/threads/94059/need-help-with-the-5-mouse-click-house-program
CC-MAIN-2018-05
en
refinedweb
Upper-casing conventions as SQL likes it in Entity Framework 6 Before Entity Framework 6 was finalized I wrote posts (here and here) showing how with the help of conventions you can save yourself some tedious typing for databases following strictly SQL standard in respect to upper case and (not)quoting (see previous posts for details). But that was pre-EF6 era and some API changed. In fact it’s now even way easier to do that. Let’s do just simple “upper-casing” convention. Given I need to handle all columns and tables, even the ones generated, I need to use so-called store model conventions. These operate on the model in S-Space. The interface I’m going to use is IStoreModelConvention. This interface needs type parameter to describe on what element we’re going to operate. I’ll start with EdmProperty. This class represents column in S-Space. (Also I believe there’s a way from EntityType. But why to make it harder.) Whoever implements IStoreModelConvention interface must implement single method void Apply(T item, DbModel model). No problem. public void Apply(EdmProperty item, DbModel model) { item.Name = MakeUpperCase(item.Name); } For tables I need to dig into EntitySet type aka IStoreModelConvention<EntitySet>. Not a problem either. public void Apply(EntitySet item, DbModel model) { item.Table = MakeUpperCase(item.Table); } And that’s it. Either I can make it as two conventions or single one. I feel that this is single logical package so I made it one. public class UpperCaseConvention : IStoreModelConvention<EntitySet>, IStoreModelConvention<EdmProperty> { public void Apply(EntitySet item, DbModel model) { item.Table = MakeUpperCase(item.Table); } public void Apply(EdmProperty item, DbModel model) { item.Name = MakeUpperCase(item.Name); } protected virtual string MakeUpperCase(string s) { return s.ToUpperInvariant(); } } I also made MakeUpperCase method virtual in case somebody would like to make slightly different implementation, simple subclassing it is. 
With this it shouldn’t take long to create a bunch of custom conventions (and combine these) to match naming conventions – like T_<tablename>, F_<columnname> or PropertyName -> PROPERTY_NAME.
https://www.tabsoverspaces.com/233488-upper-casing-convention-as-sql-likes-it-in-entity-framework-6/
CC-MAIN-2018-05
en
refinedweb
On Mon, Apr 4, 2016 at 7:14 PM, Peter Maydell <address@hidden> wrote: > On 4 April 2016 at 14:39, <address@hidden> wrote: >> From: Vijay <address@hidden> >> >> Set target page size to minimum 4K for aarch64. >> This helps to reduce live migration downtime significantly. >> >> Signed-off-by: Vijaya Kumar K <address@hidden> >> --- >> target-arm/cpu.h | 7 +++++++ >> 1 file changed, 7 insertions(+) >> >> diff --git a/target-arm/cpu.h b/target-arm/cpu.h >> index 066ff67..2e4b48f 100644 >> --- a/target-arm/cpu.h >> +++ b/target-arm/cpu.h >> @@ -1562,11 +1562,18 @@ bool write_cpustate_to_list(ARMCPU *cpu); >> #if defined(CONFIG_USER_ONLY) >> #define TARGET_PAGE_BITS 12 >> #else >> +/* >> + * Aarch64 support minimum 4K page size >> + */ >> +#if defined(TARGET_AARCH64) >> +#define TARGET_PAGE_BITS 12 > > I agree that this would definitely improve performance (both for > migration and for emulated guests), but I'm afraid this breaks > running 32-bit ARMv5 and ARMv7M guests with this QEMU binary, > so we can't do this. If we want to allow the minimum page size to > be bigger than 1K for AArch64 CPUs then we need to make it a > runtime settable thing rather than compile-time (which is not > an entirely trivial thing). Do you mean to say that based on -cpu type qemu option choose the page size at runtime? > >> +#else >> /* The ARM MMU allows 1k pages. */ >> /* ??? Linux doesn't actually use these, and they're deprecated in recent >> architecture revisions. Maybe a configure option to disable them. */ >> #define TARGET_PAGE_BITS 10 >> #endif >> +#endif >> >> #if defined(TARGET_AARCH64) >> # define TARGET_PHYS_ADDR_SPACE_BITS 48 > > thanks > -- PMM
https://lists.gnu.org/archive/html/qemu-devel/2016-04/msg00504.html
CC-MAIN-2018-05
en
refinedweb
Select your preferred scripting language. All code snippets will be displayed in this language. The examples can be viewed in either C# or JavaScript using the menu at the top of each page. Note that the API is the same regardless of which language is used, so the choice of language is purely down to preference. APIs are grouped by the namespaces they belong to, and can be selected from the sidebar to the left. For most users, the UnityEngine section will be the main port of call.
https://docs.unity3d.com/2017.4/Documentation/ScriptReference/
CC-MAIN-2018-39
en
refinedweb
[Solved] QPrinter crash on Qt5 Hi, i need help... i write small program to print receipt. but having difficulties with Qt5. It always crash. in Qt4 it working just fine. So i decide to test it, here is the code: @#include <QCoreApplication> #include <QPrinter> #include <QPainter> int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); QPrinter printer; printer.setOutputFormat(QPrinter::PdfFormat); printer.setOutputFileName("test.pdf"); QPainter p; if (p.begin(&printer)) { p.drawText(0, 0, "test"); p.end(); } return 0; }@ in Qt5 if i replace drawText with drawLine, crash didn't happen. Tested on Qt 5.3.1 MSVC 2013 and Qt 5.2.1 MSVC 2010, Qt 4.8.6 MSVC 2010. What is wrong with my code? Can someone give me a hint about the proper way to use QPrinter on Qt5. I'm just newbie on programming... Thank you. Sorry for my bad english. Perhaps you just need a font. thank you for replying.. added @ QFont font("Arial"); p.setFont(font);@ but no luck I tested it: my debugger complained about a missing font and it wanted a QGuiApplication instead of a QCoreApplication. Then it worked on my machine. - SGaist Lifetime Qt Champion Hi and welcome to devnet, IIRC you must use a QApplication thank you msue and SGaist. solved...
https://forum.qt.io/topic/44550/solved-qprinter-crash-on-qt5
CC-MAIN-2018-39
en
refinedweb
An example: // A grenade // - instantiates an explosion Prefab when hitting a surface // - then destroys itself using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public Transform explosionPrefab; void OnCollisionEnter(Collision collision) { ContactPoint contact = collision.contacts[0]; Quaternion rot = Quaternion.FromToRotation(Vector3.up, contact.normal); Vector3 pos = contact.point; Instantiate(explosionPrefab, pos, rot); Destroy(gameObject); } }
https://docs.unity3d.com/ScriptReference/Collider.OnCollisionEnter.html
CC-MAIN-2018-39
en
refinedweb
To scope this article, we will not be adding fancy methods that require async stuff on the server. That's something for later on.. Its just to illustrate the problem and solution with Meteor's EJSON library as a replacement for JSON. TL;DR.. Here's the resulting boilerplate code How to push data to your clientside React components We can utilize the window object and add a JSON stringified object to it. This will be sent along side with the serverside rendered HTML in script tags. On our clientside it will then automatically become available as a javascript object. See example below: A logical place to start would be the boilerplate that I've created for a previous article about How to set up Meteor & React with SSR In our startup/server.js file there's the following code:} /> )); }); The above code uses Meteor's onPageLoad which gets the Sink library as its parameter. Sink contains the request and response object and a couple of methods described in the Meteor docs about server-render. Adding initial state to your server side Lets extend the above code a bit with a bit of dummy state. Normally this state will be gathered from a bunch of different locations like Redux actions or a simple Meteor Method. In this example we are going to push some site meta data into the app. As you can see, on the serverside part its just a matter of adding an additional parameter to our App component. import React from 'react'; import { renderToNodeStream } from 'react-dom/server'; import { onPageLoad } from 'meteor/server-render';} /> )); }); Let's change our App component to simply do a console log for now. import React from 'react'; export default ({ initialState }) => { console.log(initialState); return ( <h1>Its working!</h1> ) }; Start your Meteor app and open it in the browser. You will notice that on our command line output all the information is given. However, if you open your developers console on the browser, it says 'undefined'.. 
That is because our clientside app doesn't have the initial state yet. We need a way to pass it to our bundle and ideally without doing an extra request. Hydrating React Components with state React's recommended way of hydrating the clientside is nicely explained in this article about React Client Side hydration. This method works nicely and is similar in Meteor, except, without the Express stuff. Let's first naively dive in and hydrate our client bundle 'the SSR Meteor way'. Add the following line to the bottom of your onPageLoad function in startup/server.js sink.appendToBody(` <script id="preloaded-state"> window.__PRELOADED_STATE__ = ${JSON.stringify(initialState)} </script> `) The above snippet uses sink to push a script tag into the bottom of our Server rendered HTML. It assigns a PRELOADED_STATE variable to the browser's window object. The initial state must be a string. This is why we stringify the initialState variable. If you do a 'view source' in your app, you should see the stringified JSON string as part of the HTML body Now in startup/client.js we need to grab this preloaded state window property and push it into our App component: // Browser automatically parses the JSON string into a javascript object. const initialState = window.__PRELOADED_STATE__; delete window.__PRELOADED_STATE__; // Remove what we don't need anymore If you now go to your app in the browser and open your console again, it should show you the state like on the serverside. There is a problem however! If you inspect the values of your state, you'll notice that the publishedAt field contains a date formatted as a string.. On the server however this is a Date object... Normally in Express based applications, you would have to 'pull the data trough a schema' - Meaning that all fields need to be parsed and transformed back into their rich equivalents. This process is error prone and leads to a lot of potential bugs. Meteor however solved this problem for us by providing us with EJSON. 
(Extended JSON). It allows us to push rich content types to the client in the form of JSON and parses it back automatically for us! You can even define your own custom types if needed, but that's a different topic for now. Lets start using it. Lets tweak our startup/server.js by adding the EJSON dependency. Also where we push the JSON data, we need to replace JSON with EJSON. Below should be the result. import React from 'react'; import { renderToNodeStream } from 'react-dom/server'; import { onPageLoad } from 'meteor/server-render'; // Add EJSON dependency import { EJSON } from 'meteor/ejson';}/> )); // Push EJSON stringified state to the client sink.appendToBody(` <script id="preloaded-state"> window.__PRELOADED_STATE__ = ${EJSON.stringify(initialState)} </script> `) }); Now lets open our browser console again and you will see a new value for the publishedAt field. publishedAt: {$date: 1525910400000} Its still not a Date object, but at least we now have an indicator that it should. We need to change how the EJSON string is parsed on our client side. Right now it just parses it as JSON. The only thing that we need to do is to tweak the startup/client.js file: import React from 'react'; import ReactDOM from 'react-dom'; import { onPageLoad } from 'meteor/server-render'; import { EJSON } from 'meteor/ejson'; // Stringify back the preloaded state into its original EJSON string form. // Then use the EJSON parser to parse rich content types const initialState = EJSON.parse(JSON.stringify(window.__PRELOADED_STATE__)); delete window.__PRELOADED_STATE__; // Remove what we don't need anymore onPageLoad(async sink => { const App = (await import('../ui/App.jsx')).default; ReactDOM.hydrate( <App initialState={initialState}/>, document.getElementById('app') ); }); In the above code we are doing one seemingly silly thing. We first stringify the preloaded state only to parse it back again.. What's the deal with that? 
Well your browser already uses JSON.parse by default to create a javascript object from our state. We don't want that. We need EJSON to parse our string. So we simply undo what the browser did and parse it using EJSON. Now lets open the console again and your publishedAt field should have a Date value instead of a string value.
https://www.chrisvisser.io/meteor/meteor-react-with-ssr-hydrating-initial-state-with-ejson
CC-MAIN-2018-39
en
refinedweb
#include <string> using namespace std; void f() { const char text[] = "hello world"; string s1 = text; // initialization of string object with a C-style string string s2(s1); // initialize with another string object string s3(&text[0], &text[5]); // initialize by part of a C-string ("hello") string s4(10, 0); // initialize by a sequence of 10 characters string s5(s2.begin(), s2.begin() + s2.find(' ')); // by part of another string object; s5 = "hello" }
http://www.devx.com/tips/Tip/5583
CC-MAIN-2018-39
en
refinedweb
Introduction: Killer Candy Robot 3000 I am an 11-year-old boy who loves to make stuff with electronics and programming. This year for Halloween, I decided to make a robot costume. This robot costume that took me about 4 weeks to make, most of the time was soldering and programming/testing the microcontroller code. This costume has 4 microcontrollers inside (2 in the head, and 2 in the body). In the head, an Arduino Nano is controlling the voice-activated LEDs, and an mbed Nucleo is controlling the eyes which are made from two MAX7219s. I programmed both the nano and the mbed using C++. In the body, an Arduino Nano is controlling the 14 LEDs that randomly blink (two on each side of the body, and 12 in an array on the front to look like computer read-outs from old movies), and a LinkIt One from Mediatek Labs is using a Grove shield to control the LEDs and LED Bar next to the candy drawer, the servo that opens and closes the candy drawer, the touch-sensitive button for controlling the servo, and it plays sound from a speaker on the side of the body. I had to wear special gloves that are used for texting on touch screen phones in order to be able to activate the candy drawer using the touch-sensitive button. The eyes look left, right, and forward, and they blink. The arms are made from aluminum heating ducts as well as plastic paint buckets with the bottoms cut out and painted silver. The legs are made from aluminum heating ducts and two cardboard boxes. The head and body are made from three cardboard boxes painted silver. 
The mouth of the robot is covered with a nylon so that I can see through it, yet it looks black (when I am not talking). I wore this costume to my local Halloween festival and came in second place, and then I wore it during Trick-or-Treating without the legs (because it was impossible to go up and down stairs wearing them).

Here is the full list of materials I used to create this costume:

1 14x10x8 box for head
1 14x20x18 box for body (top)
1 14x20x10 box for body (bottom)
1 custom sized box for candy drawer (used scraps from other boxes to make)
2 8x14x4 boxes for feet (these were boxes from Samsung Series 7 Slates, very heavy cardboard)
21 8mm Red LEDs
1 8mm Green LED
1 Nylon Stocking (for mouth)
14 Images for "stickers" printed
1 baseball cap with the brim removed
1 aluminum air duct 4"x25' (cut into two parts for arms) - had some left over
1 aluminum air duct 8"x50' (cut into two parts for legs) - had some left over
2 Paint buckets (small from Lowes) with the bottoms removed
1 LinkIt One from Mediatek (SeeedStudio)
1 Grove Shield for LinkIt One from Mediatek
2 custom made Grove Connectors for the LED near the candy drawer
1 Grove LED bar for LinkIt One
1 Grove Servo for the candy drawer
1 Grove touch-sensitive control
1 pair texting gloves
2 Arduino Nanos
1 mbed NucleoF411RE
1 light switch and metal cover
4 lithium ion batteries
1 SD card for storing sounds
1 rechargeable speaker with audio cable plugged into LinkIt One
many wires soldered and heat-shrunk together

The programming for the mbed was done using their online IDE/compiler. The programming for the Arduinos was done using the Arduino IDE. The LinkIt One used the Grove libraries for LinkIt One as well as SD card and sound libraries.

Step 1: Making the Head

I started by making the head. I measured and cut out the eye slots (for the MAX7219s) and the mouth (which is actually where I looked out of the head).
I used hot glue to attach a baseball cap (with the brim removed) in the center of the top so that I could wear the head.

I programmed the sound-activated LEDs for the Arduino Nano:

int sensorPin = 4;
int sensorValue = 0;
int LED_1 = 2;
int LED_2 = 3;
int LED_3 = 4;
int LED_4 = 5;
int LED_5 = 6;
int LED_6 = 7;

void setup() {
    pinMode(LED_1, OUTPUT);
    pinMode(LED_2, OUTPUT);
    pinMode(LED_3, OUTPUT);
    pinMode(LED_4, OUTPUT);
    pinMode(LED_5, OUTPUT);
    pinMode(LED_6, OUTPUT);
    Serial.begin(9600);
}

void loop() {
    // read the value from the sensor:
    sensorValue = analogRead(sensorPin);
    Serial.println(sensorValue);
    if (sensorValue < 100) { // the 'silence' sensor value is 509-511
        digitalWrite(LED_1, HIGH);
        digitalWrite(LED_2, HIGH);
        digitalWrite(LED_3, HIGH);
        digitalWrite(LED_4, HIGH);
        digitalWrite(LED_5, HIGH);
        digitalWrite(LED_6, HIGH);
        delay(5); // hold the red LEDs on briefly
    } else {
        digitalWrite(LED_1, LOW);
        digitalWrite(LED_2, LOW);
        digitalWrite(LED_3, LOW);
        digitalWrite(LED_4, LOW);
        digitalWrite(LED_5, LOW);
        digitalWrite(LED_6, LOW);
    }
}

Then I programmed the eyes for the mbed:

#include "mbed.h"
#include <string>
#include <stdio.h>   /* printf, scanf, puts, NULL */
#include <stdlib.h>  /* srand, rand */
#include <time.h>    /* time */

using std::string;

// p5: DIN, p7: CLK, p8: LOAD/CS
SPI max72_spi(SPI_MOSI, NC, SPI_SCK);
DigitalOut load(D5);
Serial pc(SERIAL_TX, SERIAL_RX);
InterruptIn mybutton(USER_BUTTON);

int maxInUse = 2; // change this variable to set how many MAX7219's you'll use
int lastMode = -1;
int currMode = 0;

// define max7219 registers
#define max7219_reg_noop        0x00
#define max7219_reg_digit0      0x01
#define max7219_reg_digit1      0x02
#define max7219_reg_digit2      0x03
#define max7219_reg_digit3      0x04
#define max7219_reg_digit4      0x05
#define max7219_reg_digit5      0x06
#define max7219_reg_digit6      0x07
#define max7219_reg_digit7      0x08
#define max7219_reg_decodeMode  0x09
#define max7219_reg_intensity   0x0a
#define max7219_reg_scanLimit   0x0b
#define max7219_reg_shutdown    0x0c
#define max7219_reg_displayTest 0x0f

#define LOW  0
#define HIGH 1
#define MHZ  1000000

void maxSingle(int reg, int col) {
    load = LOW;           // begin
    max72_spi.write(reg); // specify register
    max72_spi.write(col); // put data
    load = HIGH;          // make sure data is loaded (on rising edge of LOAD/CS)
}

void maxAll(int reg, int col) { // initialize all MAX7219's in the system
    load = LOW; // begin
    for (int c = 1; c <= maxInUse; c++) {
        max72_spi.write(reg); // specify register
        max72_spi.write(col); // put data
    }
    load = HIGH;
}

void maxOne(int maxNr, int reg, int col) {
    int c = 0;
    load = LOW;
    for (c = maxInUse; c > maxNr; c--) {
        max72_spi.write(0); // no-op
        max72_spi.write(0); // no-op
    }
    max72_spi.write(reg); // specify register
    max72_spi.write(col); // put data
    for (c = maxNr - 1; c >= 1; c--) {
        max72_spi.write(0); // no-op
        max72_spi.write(0); // no-op
    }
    load = HIGH;
}

void setup() { // initiation of the max 7219
    // SPI setup: 8 bits, mode 0
    max72_spi.format(8, 0);
    // going by the datasheet, min clk is 100ns so theoretically 10MHz should work...
    // max72_spi.frequency(10*MHZ);
    for (int e = 1; e <= 8; e++) { // empty registers, turn all LEDs off
        maxAll(e, 0);
    }
    maxAll(max7219_reg_intensity, 0x0f & 0x0f); // the first 0x0f is the value you can set
                                                // range: 0x00 to 0x0f
}

int getBitValue(int bit) {
    pc.printf("bit = %d\n\r", bit);
    switch (bit) {
        case 0: return 1;
        case 1: return 2;
        case 2: return 4;
        case 3: return 8;
        case 4: return 16;
        case 5: return 32;
        case 6: return 64;
        case 7: return 128;
    }
    return 0;
}

void OpenEyes() {
    maxAll(7, 60);
    maxAll(6, 126);
    maxAll(5, 102);
    maxAll(4, 102);
    maxAll(3, 126);
    maxAll(2, 60);
}

void Blink() {
    maxAll(3, 0);
    maxAll(4, 0);
    maxAll(5, 0);
    maxAll(6, 0);
    maxOne(1, 7, 0);
    maxOne(2, 2, 0);
}

void LookAhead() {
    maxAll(4, 102);
    maxAll(5, 102);
}

void LookLeft() {
    maxOne(1, 4, 78);
    maxOne(1, 5, 78);
    maxOne(2, 4, 114);
    maxOne(2, 5, 114);
}

void LookRight() {
    maxOne(2, 4, 78);
    maxOne(2, 5, 78);
    maxOne(1, 4, 114);
    maxOne(1, 5, 114);
}

int looky = 0;
int looking = 10;
int lookMode = 0;

int main() {
    srand(time(NULL));
    setup();
    OpenEyes();
    while (true) {
        if (looky > looking) {
            looky = 0;
            switch (lookMode) {
                case 0: currMode = 1; break;
                case 1: currMode = 0; break;
                case 2: currMode = 1; break;
                case 3: currMode = 2; break;
                case 4: currMode = 1; break;
                case 5: currMode = 0; break;
                case 6: currMode = 1; break;
                case 7: currMode = 3; break;
                case 8: currMode = 1; break; // ahead
                case 9: currMode = 0; break;
            }
            lookMode++;
            if (lookMode > 9) lookMode = 0;
            if (lastMode != currMode) {
                lastMode = currMode;
                switch (currMode) {
                    case 0: // blink
                        Blink();
                        wait(.25f);
                        OpenEyes();
                        break;
                    case 1: // look ahead
                        LookAhead();
                        break;
                    case 2:
                        LookLeft();
                        break;
                    case 3:
                        LookRight();
                        break;
                }
            }
        } else {
            looky++;
        }
        wait(.25f);
    }
}

Step 2: Body Functions

I put two boxes together in order to make a body that was big enough for me to wear, house the electronics, and have a candy drawer. I initially taped it together with duct tape, painted it silver, then added stickers and some metallic tape as well. The Grove bar and candy drawer indicators (LEDs) had to be taped onto the body using metallic tape so that it looked better. The Grove touch sensor I put on the top of the body so that I could reach it with my finger. The speaker had to be mounted using tape inside the body so that it could play sounds when I opened and closed the drawer.

I used an Arduino Nano to drive 14 LEDs arranged on the body to resemble computer displays seen in old movies my dad and I watch and riff on (like MST3K does).

int demoMode = 0;

void setup() {
    for (int l = 0; l < 15; l++) {
        pinMode(l, OUTPUT);
    }
    randomSeed(analogRead(0));
}

// the loop routine runs over and over again forever:
void loop() {
    for (int LedIndex = 0; LedIndex < 15; LedIndex++) {
        if (demoMode == 1) {
            digitalWrite(LedIndex, HIGH);
            delay(1000);
        } else {
            int onOff = random(10);
            if (onOff % 2 == 0) { // on
                digitalWrite(LedIndex, HIGH);
            } else { // off
                digitalWrite(LedIndex, LOW);
            }
        }
    }
    delay(1000);
}

The LinkIt One provided the majority of the robot's functions inside the body.
This took a while to figure everything out, especially how to attach the servo to the candy drawer so that it opened and closed when I pressed and released the touch-sensitive control. Here is the code for the LinkIt One.

#include "Suli.h"
#include <LAudio.h>
#include <LSD.h>
#include <LStorage.h>
#include "Seeed_LED_Bar_Arduino.h"
#include <Servo.h>

const int ROBOT_START = 1;
const int ROBOT_ON    = 2;
const int ROBOT_OFF   = 3;
const int TRICK_TREAT = 4;
const int THANK_YOU   = 5;
const int OPEN_TRAY   = 6; // drawer sound IDs (PlaySound() below matches on these; values are arbitrary)
const int CLOSE_TRAY  = 7;

const int pinTouch = 4;
const int pinLed   = 8;
const int REDLED   = 8;
const int GREENLED = 7;

int lastState = LOW;
int barLevel = 1;
int maxOpenCount = 5;
int openCount = 0;
int tray;
Servo myservo;
int maxTray = 90;
int minTray = 10;
SeeedLedBar bar(6, 5); // CLK, DTA

void PlaySound(int soundId) {
    AudioStatus status;
    switch (soundId) {
        case ROBOT_START:
            LAudio.playFile(storageSD, (char*)"RobotStart.mp3");
            break;
        case ROBOT_ON:
            LAudio.playFile(storageSD, (char*)"RobotOn.mp3");
            break;
        case ROBOT_OFF:
            LAudio.playFile(storageSD, (char*)"RobotOff.mp3");
            break;
        case OPEN_TRAY:
            LAudio.playFile(storageSD, (char*)"RobotCandyDrawerOpen.wav");
            break;
        case CLOSE_TRAY:
            LAudio.playFile(storageSD, (char*)"RobotCandyDrawerClose.wav");
            break;
    }
}

void setup() {
    tray = maxTray;
    LAudio.begin();
    LSD.begin(); // init SD card
    bar.begin(6, 5);
    pinMode(pinTouch, INPUT);
    pinMode(pinLed, OUTPUT);
    LAudio.setVolume(3);
    bar.setLevel(1);
    myservo.attach(3);
    myservo.write(tray);
    pinMode(REDLED, OUTPUT);
    pinMode(GREENLED, OUTPUT);
    // PlaySound(ROBOT_START);
}

void OpenTray() {
    PlaySound(OPEN_TRAY);
    tray = minTray;
    myservo.write(tray);
    digitalWrite(REDLED, LOW);
    digitalWrite(GREENLED, HIGH);
    openCount++;
    if (openCount > maxOpenCount) {
        openCount = 0;
        barLevel++;
        if (barLevel > 10) barLevel = 1;
        bar.setLevel(barLevel);
    }
}

void CloseTray() {
    PlaySound(CLOSE_TRAY);
    tray = maxTray;
    myservo.write(tray);
    digitalWrite(REDLED, HIGH);
    digitalWrite(GREENLED, LOW);
}

void toggleTray() {
    if (tray == minTray) CloseTray();
    else OpenTray();
}

void checkButton() {
    int state = digitalRead(pinTouch);
    if (state != lastState) {
        lastState = state;
        toggleTray();
    }
}

void loop() {
    checkButton();
}

Step 3: Finishing Touches

I found a bunch of funny and cool images on the Internet and printed them on my dad's laser printer, then attached them all over the robot's body using rubber cement. We got numbers and letters from Lowes as well as the light switch (which provides no functionality other than letting others flip it up and down while I stood in line at houses to get candy).

I am happy with how this project turned out; however, there are some things that would have improved the costume.

1) Shoulder padding inside - my arms got a bit numb and sore from wearing this all night.
2) Additional support in the head - the cap worked well, but the head wobbled about, and eventually my microphone for the voice sensor broke off from being rubbed by the neck foam.
3) Usable feet - although they looked really cool, I couldn't really wear my legs and feet during Trick-or-Treating because they were clumsy and it was difficult for me to walk up and down stairs while wearing them.

Thanks for looking at my costume.

2 Discussions

Great robot!

Thanks! :)
https://www.instructables.com/id/Killer-Candy-Robot-3000/
CC-MAIN-2018-39
en
refinedweb
Sent when a collider on another object stops touching this object's collider (2D physics only). Further information about the objects involved is reported in the Collision2D parameter passed during the call.

Notes: Collision events will be sent to disabled MonoBehaviours, to allow enabling Behaviours in response to collisions.

See Also: Collision2D class, OnCollisionEnter2D, OnCollisionStay2D.

using UnityEngine;

public class Example : MonoBehaviour
{
    int bumpCount;

    void OnCollisionExit2D(Collision2D collision)
    {
        if (collision.gameObject.tag == "DodgemCar")
        {
            bumpCount++;
        }
    }
}
https://docs.unity3d.com/ScriptReference/Collider2D.OnCollisionExit2D.html
CC-MAIN-2018-39
en
refinedweb
It appears this program does not support export/import declarations in classes. Anytime I put an export/import declaration before my class name, it thinks the entire class name is the name of the export/import macro.

Example:

#ifdef MYEXPORTS
#define MYDECL __declspec(dllexport)
#else
#define MYDECL __declspec(dllimport)
#endif

class MYDECL ISomeInterface
{
public:
    virtual ~ISomeInterface() { }
    virtual void Foo() = 0;
};

StarUML reverse engineers this as a class called MYDECL with a function called Foo(), and this isn't even close to correct.

Anyone else use this free tool and know how to get around this? Ideally I do not want to be forced to remove the import/export declarations from all my classes, because I'm building a DLL which requires them, or clients of my DLL will get all kinds of unresolved externals. Note that when I remove the declaration, the reverse engineering works as expected.
https://cboard.cprogramming.com/general-discussions/131779-staruml-reverse-engineering.html
CC-MAIN-2017-09
en
refinedweb
Sometimes to grok something, you just can't beat a good book. If you got a new Kinect for Windows device this holiday season and are looking for a single starting resource, Abhijit's new book might be just the thing you've been waiting for...

Project Information URL: "Kinect for Windows SDK Programming Guide".

Can't wait to get my hands on it. There is a lot of content that I would like to read about on Kinect in the book. Congratulations Abhijit!

I have seen some chapters this week; I must say this is really very good for beginners.

Hi! I'm looking for the other chapters of this book... please can you help me with this? Do you know if this book is online?

@Isamar Zarazua: The book is available for purchase as print or ebook. Click through to "Kinect for Windows SDK Programming Guide" and the purchase links are there...

This book adds some info on Kinect, helping you a little bit to go further for a lot of money, compared to what you get... I have managed to build the Kinect Info Box sample on page 50 and following. However, it does NOT give the complete picture on some issues, e.g. tracking the SkeletonJoint.RightHand x,y,z coordinates; why? It repeats what we already know about Kinect, and does NOT spell out what to code to make the code complete, like on page 73. Code samples are not listed on the download PACKT site; you can register, but PACKT wants your e-mail address (and you already paid for the book). Code samples are not complete (see p. 73), and code samples fail (try page 169 and following). Main disappointment: it does not build with Kinect V 1.6 SDK's SensorChooserUI or KinectSensorManager. Tested with Kinect for Windows with Kinect V 1.6 on Windows 8 with Visual Studio Express 2012.

Dear anonymous: Please don't hide behind the "anonymous", because I want to understand the frustrations that I fail to understand, and I would like to know who I am engaging.
“You talked about the example Infobox in the book and you asked why the author does not give a complete picture about skeleton tracking.”

The author created the example to show you how you can get certain info about your sensor, not to track the skeleton; this is an Infobox that tells you things about your sensor. In the book there are many parts where he explains skeleton tracking, and in the examples that you downloaded from PACKT everything is there.

“Repeats what we already know about Kinect.”

Not everyone knows what you already know, so when books are written it is advisable that everyone be presented with a basic knowledge of the technology; I have read books on advanced topics that contain basics about the technology. Do you want to crucify the author because he wrote the book to cater for everyone? The simple thing to do, if you already know something in a chapter, is to skip it and look at other chapters.

“Code samples are not listed on the download PACKT site.”

That is definitely not true. Go to the support tab on the PACKT website, or follow this (that is where I got them): enter your email address (I see nothing wrong in doing that, and it is not the author's fault but a process used by PACKT), enter the CAPTCHA, click on Go, and PACKT will send a download link to your email.

“Is not building on Kinect V 1.6 SDK's SensorChooserUI or KinectSensorManager.”

I am using V1.6 of the SDK and all the examples are working fine, so of all the people who commented on this post, you are the only one. If you have problems compiling your example project, why don't you post your errors here, and we might help resolve them.

Hope these replies help you resolve your problems and your understanding.
I have bought this book and I am gradually learning things that are new to me. I don't know why you are fustrated. I used to get fustrated when I cannot understand the code or cannot run the code. But this book has made extra-ordinary work by pulling things that are rarely found over internet. Believe me, I would recommend this book always for a newbie. I think probably you might not be very competent on technologies used on the book. I dont know if you are aware of WPF or other ongoing technologies which are widely used on the book. Let me go a little on your findings and let me try out them myself. 1. You seem to have managed to build InfoBox sample, that means you have a real device with you and it is connected. 2. It does not give complete picture of Skeleton Tracking. Well, it is true. I think you need to read the book further to get the entire idea. This chapter overlooked on it intentionally I think, because there is a separate section for the same. 3. Hmm, yes, there are sections which are intentionally cut down to decrease the page count. I know it might be hard for a beginner like you to code it completely. But probably this book is not intended to give insights on actual programming, and I think you need another book to learn WPF and/or windows programming before you can go with this book. This book is intended only for Kinect SDK. 4. Page 169. I dont know why the code sample Failed for you. Did you added proper namespaces? As I see the code uses Linq, and probably you need to add that namespace in addition to the SDK ones. Can you specify your error message so that we could get an idea what you are missing (probably if you cannot fix it yourself). 5. SensorChooserUI is not a part of SDK, it is a part of toolkit. I think the book already states how to use Kinect developer toolkit in Page 205. I think you should read the book again before you comment again. 
Comments have been closed since this content was published more than 30 days ago, but if you'd like to continue the conversation, please create a new thread in our Forums, or Contact Us and let us know.
https://channel9.msdn.com/coding4fun/kinect/Theres-a-new-book-in-town-Kinect-for-Windows-SDK-Programming-Guide
CC-MAIN-2017-09
en
refinedweb
//My objective of the program is to use the Scanner class to get the input from the user and use a dialog box (JOptionPane) for output. When I compile the file I was able to enter the input from the Scanner class, but I did not see the dialog box for the output using JOptionPane? PLEASE HELP!!!

import javax.swing.*;
import java.util.Scanner;
import static java.lang.Math.*;

public class Volume {
    public static void main(String[] args) {
        //Declaring Variables
        double radius;
        double height;
        //public static double length;
        double sphere;
        double cylinder = 0;

        //Initialize Scanner to read from user
        Scanner input = new Scanner(System.in);

        //Read the radius from the user
        System.out.print("Enter a radius in meters : ");
        radius = input.nextDouble();

        //Compute the volume of sphere
        sphere = 4/3*PI*Math.pow(radius, 3);

        //Compute the height of a cylinder
        height = cylinder/(PI*radius*radius);

        //Display the results in the Dialog Box
        JOptionPane.showMessageDialog(null, "The volume of the sphere is: " + sphere
                + "\n The height of a cylinder: " + height,
                "Calculations", JOptionPane.INFORMATION_MESSAGE);
    }
}
https://www.daniweb.com/programming/software-development/threads/311999/using-scanner-and-joptionpane
CC-MAIN-2017-09
en
refinedweb
>>." What did we expect? (Score:5, Insightful) I mean, really? Re:What did we expect? (Score:5, Funny) I might be confusing Microsoft with a wife beater, but the mentality is roughly the same it seems. Re:What did we expect? (Score:4, Funny) Say what you will about Microsoft, but I'll start using Linux on my production machines when I want to start losing money. Get the facts [getthefacts.com], people. Re:What did we expect? (Score:4, Insightful) As opposed to LD_LIBRARY_PATH hell and no codecs at all? Comparing Windows and Linux feature by feature is always going to be futile. The two are different, and if trying to make Linux a direct replacement for Windows, you'll necessarily have to chop down the things that make Linux great (like the toolbox approach and not being designed from the "one user, one application, one machine" philosophy). And comparing Linux with Windows is like wrestling a pig. You'll just get dirty, and the pig enjoys it. Re:What did we expect? (Score:5, Funny) I might be confusing Microsoft with a wife beater, but the mentality is roughly the same it seems. What do you tell a user with two black eyes? (I propose that the answer is "Did you really think Apple was different from Microsoft?" but that might not win me too many points around here. The converse would work almost as well, but nobody would have believed that Microsoft was the good guys.) Re:What did we expect? (Score:5, Insightful) That's unfair. Apple have never made an iWorks product intentionally produce a broken ODF document! *cough* Re:What did we expect? (Score:5, Informative) nobody would have believed that Microsoft was the good guys. Actually there was a time when Microsoft was hailed as the white knight in the shiny armor freeing us from the evil IBM empire. Re:What did we expect? (Score:5, Insightful) Actually there was a time when Microsoft was hailed as the white knight in the shiny armor freeing us from the evil IBM empire. 
Yeah but that was ~twenty years ago, which is like two hundred in do^H^H computer years. Since then Lancelot has screwed the king's wife and is off in the wilderness slowly going insane. Re:What did we expect? (Score:5, Insightful) |, ... Re:What did we expect? (Score:5, Informative), ... It was pretty obvious to many techies by the early 90s that Microsoft software was crap. The printed press was one of its tools and perpetuated the myth that companies would be better off with Microsoft. By 1995 it was getting out to a more general crowd how bad Microsoft was but these people still required having their eyes and minds open. Considering where they are today, it's obvious many are still pretty ignorant to their business practices and technology in general. By 1995, even the author, Douglas Adams saw this: Microsoft Here's a quote from the end of that short article: ." Over $200 million in marketing spent on Window 95 and about the same amount the following year pushing NT as _the_ server OS suckered in enough to seal their position in the market. That seal is leaking now but unfortunately, the general population of computer users and IT execs are mostly just as naive as they were in the early 1990s. It's the OEM's who are driving the market now because of very low margins and the high relative cost of Microsoft software. LoB Re:What did we expect? (Score:5, Insightful) Yah. The real heros bringing us the PC revolution was the guys reverse engineering the hardware/BIOS, and made cheap clones. The OS was just what became the de facto standard. As we all know, DOS won over CP/M. CP/M was technically superior at the time, but lost for political and/or contract reasons, whatever. Digital Research then went on to create a better DOS to compete. MS fought it with all means it could, and it went into oblivition. At early stages, MS Windows was just a graphical shell on top of DOS. It wasn't particulary good either. 
There were competing graphical shells, for example Digital Research' GEM. Digital Research lost the patent lawsuit that MS essentially won, and GEM was limited to have only two windows simultaneously...who knows what it could have been. MS has not had the technical best/superior solutions at any time. It was just better at legal and marketing stuff than anyone else. The PC revolution would have come with or without MS. We'll never know how much innovation MS have killed on its way where it is, so to hail it as a savior is just plain stupid. Re:What did we expect? (Score:5, Insightful) While I certainly remember thinking of IBM as the evil monopolistic overlords in the '80s, I thought of Microsoft as more of the black knight working with IBM, then stabbing them in the back as soon as they got a chance in order to become the new evil overlords. Re:What did we expect? (Score:5, Funny) Godwin'd! That wasn't MS (Score:4, Insightful) Actually the early 80s. You see, before MSFT started the clone market by selling Compaq MS DOS and thus creating the IBM PC compatible market, things were VERY different. It was 'welcome to proprietary land" where my VIC wouldn't talk to your TRS80 [...] Actually, it was Digital Research's CP/M (and AT&T's UNIX) that were leading the charge against "proprietary land". Bill Gates just got lucky when DR's Gary Kildall was out the day IBM came calling, and managed to steal DR's thunder with a hastily-purchased CP/M clone and IBM's marketing power. BG doesn't deserve credit for anything except dumb luck and being in the right place at the right time. The market was already headed in the direction of platform-independent OSes as fast as it could go. Re:What did we expect? (Score:5, Funny) Nothing. He's already been told twice. Re:What did we expect? (Score:5, Informative) "...1 second or so that it took to open the "Save file as" dialog..." It takes 2 seconds for a menu to appear on my work XP laptop when I click the Start button. 
It takes forever to open a Word document. Virus scanning is now part of the Office experience and can't be disregarded. And this is on a more modern computer. What is your point. Re:What did we expect? (Score:4, Funny) Hey, it is different. Hence not compatible. Re:What did we expect? (Score:4, Informative) I use Windows for compatibility, but open-source for everything else: VLC, WinAmp, OpenOffice, Utorrent, et cetera. I don't think you understand what open source is. Winamp and uTorrent are not open source. Re: (Score:3, Funny) I dont see why you're comparing MS to congress. Why not compare it to being eaten by a shark (with frikkin laser beams if thats your thing) or abducted by aliens. Congress doesn't have a history of lying to people... oh hang on Congress doesn't have a history of screwing the public for money/business interests... wait a minute.. Congress... errr.. never mind Also... uTorrent isnt open source. Re:What did we expect? (Score:4, Insightful) Re: (Score:3, Interesting) Not likely that they'll embrace a competing standard antytime soon. Re:What did we expect? (Score:5, Interesting) Well, that depends on who you talk to. Here in the US, that's probably true. Pretty much it's up to Europe to send the lawyers back in. But, there is a comment at the end of the article to check for an obvious abuse:. Since I don't have access to Office 2007 until I get home tonight, I can't try this out. But if someone feels compelled in the meantime, I'd love to see the results. If the document "magically" works after changing the header, then Microsoft did *not* do enough to keep the lawyers at bay. Re: (Score:3, Insightful) Tried it. Not the case. Re:I tried as well (Score:5, Insightful) No, that's not it. (Score:4, Informative) The problem isn't that you can't open a Word 2007 ODF document in another ODF compliant program, it's that it refuses to open to other program's ODF documents. 
If you actually read the article, you'll find that Google, KSpread, Symphony, OpenOffice, and the Sun plugin are all unable to open documents created in Excel 2007. The issue here is not that it's one way, it's that the MS interpretation is different from what everyone else uses (though the actual specification leaves it open). And it's also about spreadsheets (Excel), not word-processor documents (Word). Excel, not Word (Score:4, Informative) Re:What did we expect? (Score:4, Insightful) You really think so? The EU will probably slap them with a hefty fine yet again. This is just another example of Microsoft being deliberately anti-competitive. Re:What did we expect? (Score:5, Interesting) You really think so? The EU will probably slap them with a hefty fine yet again. This is just another example of Microsoft being deliberately anti-competitive. Except if you look a little closer, the EU doesn't just fine them. The fine is trivial, and does nothing but make the news in the computer press. Just money. A fine is like a parking ticket. And if you are rich enough, you can theoretically see a parking ticket as a parking fee. Forcing them to correct the problem to the satisfaction of a neutral third party acting as a technical "expert witness" however, is a worthwhile activity. And this can really sting. This is more like taking away their car, or revoking their license. Way more than a slap on the wrist and a stern look. Re:What did we expect? (Score:4, Funny) Doh, how do Governments force people to do stuff? Just jail the people at the top of MS in the relevant countries. If they refuse to go to jail, send people authorized to inflict force and violence to drag them off to jail. That's well within the authority of any country which MS operates in. You don't even have to fine at all. Once you start jailing top executives, they'll start taking things really seriously. After all, if you're a CEO, the fines don't really come out of your pocket[1]. 
But time in prison comes out of your lifespan. [1] They might in theory affect your bonus etc, but in practice just look at the AIGs of the world. Re:What did we expect? (Score:5, Interesting) If it achieves 100% technical compliance with the standard, but zero interoperability, this is certainly a problem with the standard itself.. Did the author of the article test with anything else than a spreadsheet with formulas? Formula breakage was expected and mentioned in the comments to the previous article. The interesting part is are there other flaws with ODF 1.1, are they addressed by 1.2? Re:What did we expect? (Score:5, Informative). That is, curiously, not quite true. ODF 1.1 doesn't fully specify formulas, but it does specify the general syntax that should be used for them, and Microsoft seems to have ignored this. (Also, in practice, the major spreadsheets are quite similar in terms of what expressions they accept in formulas. This makes it relatively simple to convert between MS Office formulas and OpenOffice.org ones, which are what most ODF-based apps use.) Re:What did we expect? (Score:5, Informative) Re:What did we expect? (Score:5, Insightful) Software Engineers. It is what we do for a living. Re:What did we expect? (Score:5, Informative) From the article: inexplicably thing is why that code never made it into Excel 2007 SP2. Re:This is a REQUIREMENT so that Excel can be read (Score:5, Informative) On the contrary, it does make sense to alter them because there is something wrong with Microsoft's formulas. For example, consider the MAX() function in Excel: Now consider the OO.o (and forthcoming ODF 1.2 standard) equivalent: OO.o uses semicolons instead of commas to separate parameters; so what? Well, let's what would happen if you were European, and tried to do the same thing in Excel: Uh-oh! Now, since Europeans use commas instead of periods to indicate decimals, Excel suddenly thinks that there are 8 integer parameters instead of 4 decimal ones! 
Excel is wrong! In contrast, here's how it looks in OO.o: Hey, whaddya know: still four decimal numbers! It works! But that's just the tip of the iceberg. If you read previous posts in the linked blog, the guy points out how (for example) most of Excel's date and financial functions are wrong (not just because of syntax, but because they implement the wrong algorithms). Actually, it does -- 300-odd pages worth of one, in fact. But Excel doesn't follow that either! In fact, those date and financial functions tend to give answers different from both the OOXML standard and the original financial standards they are supposed to be based on! Agreed ... interoperability harms Microsoft (Score:5, Insightful) Clearly Microsoft's best interests are served by denying their customers interoperability. That's what drives Microsoft's policy: cash. Everything else is PR. Which is duly borne out by their actions. Re:Agreed ... interoperability harms Microsoft (Score:5, Interesting) Of course. This has always been true with Microsoft, where in the late 80s/early 90s they advertised they could read WordPerfect files from Amigas or Macs, but all it did was strip all the formatting to leave behind plain text. Yuck. Even later when Word was released for early PowerMacs, I found that Windows Word could not read the Word documents from my Macintosh. Microsoft does not want interchanging of information. They want everybody using MS Word on an MS operating system. The end. Re:Agreed ... interoperability harms Microsoft (Score:5, Insightful) Microsoft does not want interchanging of information. They want everybody using MS Word on an MS operating system. The end. Every major vendor would probably like their own product to dominate. The difference is not the motivation, but the methods. Some vendors honestly try to make the best product and win customers by so doing.
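The inline formula examples in the MAX() discussion above were lost in extraction. Reconstructed illustratively (the numeric values here are placeholders, not the poster's originals), the contrast being described looks like this:

```
Excel (US locale):       =MAX(1.5, 2.5, 3.5, 4.5)    -> four decimal arguments
OO.o / ODF 1.2:          =MAX(1.5; 2.5; 3.5; 4.5)    -> four decimal arguments
Excel (European locale): =MAX(1,5, 2,5, 3,5, 4,5)    -> parsed as eight integer arguments
OO.o (European locale):  =MAX(1,5; 2,5; 3,5; 4,5)    -> still four decimal arguments
```

The semicolon separator keeps the argument boundaries unambiguous regardless of whether the locale uses a comma or a period as the decimal mark.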
MS prefers to leverage monopolies to artificially break competing products and prevent users from being able to choose based upon the individual merits of the products in question. I have no problem with MS wanting their OS and office suite to dominate. I have a problem with their breaking the law and hurting the industry, innovation, and end users to make that happen. Re: (Score:3, Insightful) But being able to correctly read ODF files would just be a big plus in an already great product like Excel. Why break the reading part? Re:Agreed ... interoperability harms Microsoft (Score:4, Insightful) But being able to correctly read ODF files would just be a big plus in an already great product like Excel. Why break the reading part? Because they don't just want to discourage other products that use ODF, they want to slow and discourage adoption of ODF as a format. Anything that makes more users stick with MS proprietary formats longer makes MS money. Every user who sends an ODF file from Google docs to an Excel user, then finds it doesn't work, is discouraged from using Google docs and encouraged to buy a license for MSOffice so they can interoperate easily with that other person. Re:Agreed ... interoperability harms Microsoft (Score:5, Funny) To really add flavor to the discussion, let us further assume that planet Earth is spherical, and space is pretty big. Because you look ridiculous claiming you were able to follow the standard for reading documents, but unable to do so when writing them? Re:Agreed ... interoperability harms Microsoft (Score:5, Informative) Åpne dokumentstandarder blir obligatoriske i staten ("Open document standards become mandatory for the State"). [regjeringen.no] My rough translation from Norwegian: - Norway has so far lacked a policy regarding the area of software. This has now changed. This Cabinet has decided that IT-development in the public sector shall be based upon Open Standards. In the future we will not accept that State activities lock users of public information into Locked Formats.
- Heidi Grande Røys [wikipedia.org] (Minister of Government Administration and Reform). Microsoft might play their games to hinder development as much as they can, but at least in this country the turn towards Open Standards seems inevitable. Re:Never ascribe to malice... (Score:5, Interesting) Never ascribe to malice that which is adequately explained by incompetence. from the referenced article.... [robweir.com] Re:Never ascribe to malice... (Score:4, Funny) Re:Never ascribe to malice... (Score:4, Funny) Am I the only person who reads that extension aloud as "Ex-Lax"? Actually, the first two times I read it, it parsed as "Excel sucks." Counter-adage (Score:5, Insightful) There's another saying, and one that I think better applies here: "Once is an accident, twice is a coincidence, three times is a conspiracy." And with Microsoft we're way past three times. They also claim Windows supports Posix (Score:5, Insightful) As they also claim Microsoft Windows is Posix compliant! It is simply to be able to tick a "mandated" requirement in some government procurement, not as something one would actually use or deploy. Re:They also claim Windows supports Posix (Score:4, Insightful) Well, Windows is at least somewhat POSIX compliant. A few semesters ago I took an Operating Systems class; our labs were simple programs involving forking processes, named pipes, sockets, and file I/O, which we were to develop on an old Solaris box. Not much of a Pico fan, I developed my programs on Vista using Visual Studio 2005. They all compiled and ran on Vista, and then also compiled and ran on gcc and Solaris. These were simple programs, mind you, but it worked. Now ODF... TFA only looks at spreadsheet compatibility, and evidently there is no way documented in the ODF standard to store spreadsheet formulas.
Article claims that they should have reverse engineered it or reused code from some other plug-in, but really I'm surprised they included any ODF support at all - "new markets" be damned. But, if no-one's satisfied, they also introduced a whole new API for writing file format converters. Go write your own plug-in! Re:They also claim Windows supports Posix (Score:5, Funny) As they also claim Microsoft Windows is Posix compliant! It is simply to be able to tick a "mandated" requirement in some government procurement, not as something one would actually use or deploy. Ah, I think you might have misread that one. The latest version of Windows is fully compliant with the ISO's 'Piece of Shit v9' standard. POS IX, not POSIX. Re:They also claim Windows supports Posix (Score:5, Informative) Microsoft Windows is POSIX.1 compliant, which will not help anyone today but which is nonetheless true. Re:They also claim Windows supports Posix (Score:5, Informative) Well, if you just go for the basic level of posix support, then yes it does support it. So do 100 other OSes, including weird embedded OSes that can't even run executables. Everything has to be compiled in, but they are "POSIX" too. To be fair, UNIX Services for Windows is pretty decent and gives you a very complete POSIX environment on Windows. Problem with the Spec (Score:5, Insightful) So, this is either a problem with the specification or a problem with other implementations. If MS has made a compliant program, who are we to complain? Good point! (Score:5, Interesting) I was thinking exactly the same thing. If MS have made a compliant implementation but it isn't compatible with anyone else's, doesn't that mean that ODF is broken?
Isn't this exactly the sort of complaint certain people around here have made against Microsoft's own formats in the past: just because there's a standard that officially states what the document format is, it's no use if other people can't realistically implement it and then trust that interoperability will work? Re: (Score:3, Interesting) Not necessarily broken, but certainly incomplete. Re:Good point! (Score:5, Interesting) Sure, it might be "incomplete" rather than "incorrect", but if we're talking about a standard for interoperability, doesn't "incomplete" pretty much imply "broken"? That sort of standard only has one job, and it isn't going to do it... Which means it won't get used.... (Score:5, Insightful) ...which is probably the point of this. The only reason to use ODF instead of MS native formats is for interoperability. When people don't use it, MS can point and say "see people don't want or need it and didn't care when we put it in". Useful at all manner of legal proceeding (antitrust anyone) to show that it's not important. I'm shocked! (Score:5, Funny) The article speaks about spreadsheets. (Score:5, Insightful) The article speaks about spreadsheets, which the slashdot blurb neglected to mention. Re:The article speaks about spreadsheets. (Score:5, Interesting) Unfinished sayings (Score:5, Insightful) This is the trouble with people saying the first half of a saying and then trailing off. The people who know the saying get the point, and the people who don't remember a fragment and repeat it even though it makes no sense on its own. To the people tagging this "embraceandextend". Embracing and extending is not a particularly bad thing to do. Many formats, including XML (upon which ODF is based), are built with this in mind. The complete saying that is referred to with "embrace and extend" is embrace, extend and extinguish [wikipedia.org]. The extinguishing is the goal here, the former two are merely tools to help them achieve this. 
Next stop - customer support (Score:3, Funny) Now that'll be good for some fun calls to customer support. Still no OOXML!! (Score:4, Insightful) Surprisingly MS has decided to implement ODF in their own strange way, but OOXML is still not available.... why?? Everybody pile on Microsoft... (Score:5, Insightful) In the meantime, how the HELL is it possible the spec is so bad that you can be technically-compliant with it, and yet not be read by (almost) any existing implementation? Re:Everybody pile on Microsoft... (Score:5, Informative) this is how [wikipedia.org] Kind of looks like the whole thing was a farce to begin with given how they created a bad spec and then went on to support a worse one before imploding. Re: (Score:3, Interesting) And from the article, the format version 1.1 doesn't even define how spreadsheet formulas should be stored! Which is why Microsoft's implementation, which doesn't bother to store the formulas at all, is compliant with the standard. This is a joke. Gee, I wonder why Microsoft fought a bunch of non-technical government offices from forcing them to use a file format that's woefully insufficient for their (both Microsoft's and the government offices') needs? Re: (Score:3, Informative) Re:Everybody pile on Microsoft... (Score:4, Insightful) The reason for this is that it's a hard problem I don't think you can really use that blog post for that citation, because it's the same source as TFA [robweir.com] which is both more relevant, and substantially newer... and which says: Editorial [brackets] and note mine. In summary: Your source, the same person who wrote the article which explains why it isn't hard also says Ecma dropped the ball. (in your link.) Another particular gem, this time from the current FA again: Everyone knows what TODAY() means. Everyone knows what =A1+A2 means. To get this wrong requires more effort than getting it right. 
So to say "The trouble is, it doesn't standardise them - in particular, there's no standard list of spreadsheet functions and what they should do." is just crazy talk which actually apologizes for Microsoft. In fact, there is such a list; the list documents what Excel does, since there was nothing else available; Microsoft itself had this functionality in a previous version, and now it is gone. Therefore the trouble is that Microsoft has deliberately broken spreadsheet compatibility in Office 2007 SP2. There is really no other way to look at it. It might not have been the goal (an alternate excuse might be to take advantage of another, newer codebase in order to eliminate some old code which is otherwise unnecessary) but it was trivially testable and therefore is inexcusable. Re:Everybody pile on Microsoft... (Score:5, Insightful) Except...Microsoft already have a perfectly good plugin that can read & write ODF documents. It appears they've gone out of their way to break that existing code and do things differently to how everyone else (including themselves) are already doing things. As the author of the blog says "If your business model requires only conformance and not actually achieving interoperability, then I wish you well.". If Microsoft have put all that effort into adding ODF support without actually achieving interoperability then it's a thinly veiled paper exercise on their part. Re:Everybody pile on Microsoft... (Score:5, Insightful) The current spec doesn't cover spreadsheet formulas: it has a big hole and basically says "Do what OpenOffice.org does for now". The problem with MS's specs saying "Do what Word 97 does" is that no one other than MS knows what Word 97 does. But OpenOffice's source code is... open. Anyone can know what OpenOffice does, and if MS is afraid of GPL, they're big enough for a proper cleanroom approach. Re:Everybody pile on Microsoft...
(Score:5, Insightful) Because specifications are written by people and then read and interpreted by others. While specification creators try to be as complete and thorough as possible, there are still gaps. In something as complex as a document format like spreadsheets, I'd imagine it's an impossible task. Bake-offs where all the stakeholders get into a room, try to get this shit to interoperate, and then decided the proper interpretation, is where the interoperation work gets done. All of the Internet protocols went through a similar cycle. Then, when there is consensus on the interpretations, guidance and reference implementations can be written. Re:Everybody pile on Microsoft... (Score:5, Insightful) Re:Everybody pile on Microsoft... (Score:5, Insightful). Re:Everybody pile on Microsoft... (Score:4, Interesting). Conversely, if the files produced by MS Office are valid standards-compliant ODF files (which they may be according to the letter of the standard) we should also blame the other apps if they fail to use them, isn't so? They will also fail to open standards-compliant ODF files. Well, interoperability wasn't the goal. (Score:5, Interesting) Even if MS fails all interoperability (which I would bet they do), at least someone could use ODF with office 2007 and 10-20 years later be able to use the spec to develop an app to recover the documents. EXCELLENT article (Score:3, Interesting) This is one of the best-written articles submitted to slashdot in a long time. Not only is it well-written (at least, it didn't make my brain hurt) but it gives you the technical background AND it tells you in advance how to debunk the stupid arguments which will certainly by coming from M$ trolls and astroturfers. Scrapbook this one, kids. You're going to be referring back to it for months, if not years. Hypocrisy (Score:4, Insightful) I'd say that it had a bad smell of Hypocrisy. 
If the standard doesn't cover important(I dare say) areas such as the friggin formula language, what good is the standard? No, the author is trying to preempt the obvious and very valid argument that if the standard didn't cover this and implementers need to reverse engineer a specific implementation (OpenOffice), maybe the standard wasn't good enough? The author is making silly analogies with someone willfully going through hoops (investing time) to sabotage interoperability with an implementation in which the implementor has chosen not to invest time and effort reverse engineering and testing functionality which is clearly outside the specification. Sun ODF plugin for Microsoft Office (Score:5, Informative) holes in the standard (Score:5, Interesting) Spreadsheets, people, spreadsheets (Score:5, Insightful) Bullshit (Score:3, Insightful) This article focuses very specifically on formula support in OpenDocument Spreadsheet. The problem with that is that ODF 1.x does not provide ANY specification for formulas whatsoever. This article claims that the standard be damned and that Microsoft should go and reverse engineer the implementation by OpenOffice. This is only demonstrative of how incomplete and irrelevant the ODF specification really is. There are massive gaping holes in it that implementers are filling on their own which will invariably lead to incompatibilities. The ISO OOXML specification may be absolutely massive, but that's because it's complete, and very specific (I'm referring specifically to the one that did pass ISO, not the first few iterations). This is like bitching that Internet Explorer can't be CSS compliant because it doesn't implement the moz-* CSS extensions. Either fix the spec, or get used to this. Re:Bullshit (Score:4, Interesting) Re:Bullshit (Score:4, Insightful) Microsoft should go and reverse engineer the implementation by OpenOffice Reverse-engineer it? You have the source code. You don't have to reverse-engineer anything. 
The problem is formulas. (Score:5, Informative) ODF does not specify a language for formulas. Everybody but MS uses one language, MS uses another. Of course there are incompatibilities. Why did ODF not specify a spreadsheet formula language? Re:The problem is formulas. (Score:4, Informative) Because it's bloody hard to do. Microsoft's spreadsheet formula language in OOXML is actually a copy-and-paste job from the Excel help files. It doesn't provide nearly enough information to re-implement. It was only added as an afterthought, when Microsoft started complaining that ODF didn't have a spec for spreadsheet formulas, made a big deal about it, and then realised that OOXML didn't either. ODF does have a formula language specification. It specifies something like 400 functions in precise detail, loosely based on what OOo, Gnumeric, and others (including Excel) already do. This has been a work-in-progress since 2005 (before Microsoft started complaining about ODF), and is basically finished (for now). It's to be included in OpenDocument 1.2 (the next version), but most other OpenDocument-capable spreadsheet apps already use these formula specifications on OpenDocument 1.1 documents. Microsoft just chose to ignore it, and roll their own. As usual. Badly Specified Standard (Score:4, Insightful) Pigs (Score:4, Funny) Now, ODF == .doc and .xls (Score:4, Insightful) Microsoft plays dirty. All the time. This was totally expected, of course. It's ok though; we're still in better shape than we were just a few years ago. A Microsoft ODF document, or even a Microsoft OOXML document, is still at least roughly following a standard that has some documentation somewhere. The free world can develop Microsoft Office compatibility in this space a lot easier than in the Self-defense, perhaps? (Score:5, Insightful) It looks like Microsoft has learned from its IE experience.
Instead of chasing an "anything but Microsoft" standard put together by a community that's actively hostile to Microsoft, they've decided to wait them out. Microsoft is refusing to give them a target and telling them to get off the pot. What Microsoft has done should speed up the ODF standards process. We should thank them for that. The Microsoft formulas aren't actually conformant (Score:5, Informative) Microsoft's supposed ODF 1.1 spreadsheet output is not compliant with the ODF 1.1 specification. From 8.1.3 (emphasis mine): From 8.3.1 Referencing Table Cells (emphasis mine): Now look at a Microsoft formula in their ODF 1.1 spreadsheets. You'll see a formula attribute value of "msoxl:=B4-B3". For that to be correct per the ODF 1.1 specification, that should be "msoxl:=[.B4]-[.B3]". Compare this to the OpenOffice.org and OpenFormula syntax: msoxl:=[.B4]-[.B3] oooc:=[.B4]-[.B3] of:=[.B4]-[.B3] Ignoring the prefix, they're identical. Furthermore, the formula functions used by OpenOffice.org are generally based on the functions in Excel to begin with (such as "TODAY", for example), so I can only conclude that Microsoft is intentionally sabotaging interoperability to keep people from using ODF while still claiming conformance. Re: (Score:3, Funny) Bloody hell. I wonder why they would ever want to ship a software product that did that. You must be new here. (grin) Re:Really? (Score:5, Insightful) Well, from the article: "First, we might hear that ODF 1.1 does not define spreadsheet formulas and therefore it is not necessary for one vendor to use the same formula language that other vendors use." Seems like a rather large hole in the spec itself. ODF 1.1 doesn't define spreadsheet formulas? So, what version will? I wouldn't put any effort into guessing, nor into making my application read various other vendor formats... when I may well have to recode again when 1.2 comes out. If anyone's to blame here, it's the ODF people for not having a COMPLETE spec.
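For reference, the formula attribute discussed above lives on the spreadsheet cell element in the ODF markup. A sketch of what such a cell looks like (abbreviated and with illustrative values; element and attribute names per the OpenDocument schema):

```
<table:table-cell office:value-type="float" office:value="1"
                  table:formula="of:=[.B4]-[.B3]"/>

<!-- Microsoft's SP2 output instead carries table:formula="msoxl:=B4-B3":
     a private namespace prefix, and plain A1-style references without
     the [.Cell] bracket syntax the specification describes. -->
```

Consumers that do not recognize the namespace prefix are expected to fall back to the stored office:value, which is why the formulas silently degrade to constants in other applications.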
If formulas are so important to spreadsheets (and they are), why the hell would your spec not include how to store said formulas? Re:Really? (Score:5, Insightful) Because apparently it's really difficult: [robweir.com] Oasis and ODF committees would rather get it right than have something busted and broken like competing suites. Re:Really? (Score:4, Insightful) If Microsoft is ODF 1.1 compliant, and other ODF 1.1 compliant software can't use the software, then it looks like the ODF committee didn't get it right and has something busted and broken. I think the ODF committee was more concerned about getting their standard approved quickly than having a complete specification. Re:Really? (Score:5, Insightful) Microsoft put all their Excel formulas into a private namespace. This is almost as bad as, say, writing a compiler that claims to be a C compiler, but really, all it does is validate the syntax of the C program and then look for C comments containing Pascal code, then compiling the Pascal code instead. /* BEGIN writeln("Microsoft rules!"); END */ int main(int argc, char *argv[]) { printf("This is standard C code.\n"); } Is it a problem with the C standard that I can embed Pascal in a C comment? Re:Really? (Score:5, Informative) Interesting. According to the article [linux.com] referenced in the Wikipedia [wikipedia.org] even OpenOffice and KOffice don't get along. Re:Really? (Score:5, Insightful) Interesting. According to the article referenced in the Wikipedia even OpenOffice and KOffice don't get along. The difference is OpenOffice reads everything fine. KOffice fails to read the latest OpenOffice docs perfectly because OpenOffice uses the new draft version of the spec as the default... and it is perfectly appropriate for KOffice to fall back to reading those formulas as the last value until they release a new version of KOffice that supports the new spec. That is why there is a failback mode in the spec.
MSOffice, however, fails back even when reading the old version of the spec, because they seem to have decided understanding Excel style formulas in Excel was too hard, despite the existence of several open source implementations and the spec being the formulas they already use. The difference is huge. Koffice is doing the right thing and being reasonable. MS is going out of their way to be as poor at interoperability as the spec allows by feigning extreme incompetence. I mean, did you look at the chart in the article? Why is it that even small, unfunded projects seem to work interoperably pretty well, while MS can't manage to work with anyone else's implementation? Do you truly believe they are that incompetent? Re:Really? (Score:5, Insightful) Re:Really? (Score:5, Insightful) In order to claim (in a legalistic sense) technical compliance with the spec in order to be able to sell Office to companies/governments who have adopted policies requiring this, while at the same time making it virtually impossible for those organizations to actually USE a competing office product. Re:I just hope (Score:5, Insightful) Re:Chickens are coming home to roost (Score:4, Insightful) On the one hand we require Microsoft to follow specs to the letter, and now we somehow fault them for doing so? No, we're faulting them for following the specs to the letter and at the same time going out of their way to make sure their technically compliant implementation still doesn't work with all the other, existing implementations. What is wrong about asking OpenOffice to follow the specs? ODF does, for the most part, follow the specs.
The problems between OpenOffice and MSOffice's implementations are that OpenOffice implements a newer version of the spec and MS hasn't caught up to that, and MS decided the suggested (but not required) formulas, which use the same syntax as Excel and for which there is already BSD licensed code that works in MS Office as a plug-in, were "too hard to understand" so they just strip all the formulas out. MS may, technically, be minimally compliant with the spec, but it is clear they went out of their way to be as minimally compliant as possible to make their version as incompatible and unfriendly as they could manage while still being within the spec. This was not an honest attempt at being compatible, despite MS's claims that they were making an honest attempt. What goes around comes around. ODF was initially just a clever assault launched by Sun and IBM. Yeah, but it was an attempt to level the playing field and let products win based upon merits instead of criminal leveraging of monopolies. I don't understand why people have such a hard time understanding antitrust laws and how they work and why we have them. OpenOffice and derivatives, Sun and IBM just have to eat their own dogfood. Admit that the "perfect" ODF was at least partly hype. No one claimed ODF was perfect and the early spec MS is using left room for ambiguity... which is why they also provided several open source reference implementations which everyone else has had no real problem implementing. Aside from MS, the only real problems are bugs between the stable and draft versions of the spec. MS is just playing dumb. "Oh they say if we can't understand Excel formulas, we can fail back to just reading the value the formula would produce. We're so stupid we can't understand formulas identical to the one we already use, har har, and we're too stupid to use the free BSD licensed implementation that already works with MSOffice, har har."
The "problem" with the ODF spec in this case is that they wrote it as a spec assuming it would be used to make interoperable implementations, instead of as an ironclad legal contract with no loopholes for dishonest companies that wanted to try to be compliant but as non-interoperable as possible. After all, only one company had motivation to do that, and for them to attempt it would be criminal. That doesn't seem to have stopped MS though, as usual. The chickens are coming home to roost. Suck it up. Fix it instead of point fingers. Please. They already have a draft that removes the ambiguity and it is already implemented by several companies. If MS were interested in being honest or even obeying the law, there would be no issue. There is room for more than finger pointing, MS should be prosecuted for one more criminal antitrust violation. Why do you hate free market competition so much?
https://slashdot.org/story/09/05/04/1246249/office-2007sp2-odf-interoperability-very-bad
CC-MAIN-2017-09
en
refinedweb
Hi everybody, I think this question will be kind of stupid for you ... but I have been fighting with it since yesterday evening. I have a class Test defined as such in the hpp file (Test.hpp):

#include <iostream>
#include <vector>
using namespace std;

class Test
{
public:
    Test();
    static const int SOMEVALUE = 1200;
    void testFunction();
private:
    vector<int> vectorOfValues;
};

The file Test.cpp is as such:

#include <vector>
#include <iostream>
#include "Test.hpp"
using namespace std;

Test::Test()
{
}

void Test::testFunction()
{
    vectorOfValues.push_back(SOMEVALUE);
}

The main file is very simple:

#include <iostream>
#include "Test.hpp"
using namespace std;

int main()
{
    Test t;
    t.testFunction();
}

When trying to compile, the compiler returns the following error:

/tmp/ccA63Hmn.o: In function `Test::testFunction()':
Test.cpp:(.text+0x45): undefined reference to `Test::SOMEVALUE'
collect2: ld returned 1 exit status

However, if I modify the testFunction function (see below) I don't have this problem, meaning that there is no access problem to my SOMEVALUE constant.

void Test::testFunction()
{
    cout << SOMEVALUE << endl;
}

What do you think of it? I thank you a lot for your help!
https://www.daniweb.com/programming/software-development/threads/305581/undefined-reference-issue
CC-MAIN-2017-09
en
refinedweb
It is sometimes important that the cache is preloaded with some base data before a user starts using it. NCache's cache startup loader feature allows clients to implement an interface that is called/invoked as soon as the cache starts up, so you can pre-populate the cache with the desired data. Users can load important domain objects, framework and application configurations, resource files, etc. Here are the details of the cache startup loader implementation: To enable the Cache Startup Loader, your program needs to reference the following namespaces. using Alachisoft.NCache.Runtime.CacheLoader; using Alachisoft.NCache.Runtime.Caching; A class implementing the interface Alachisoft.NCache.Runtime.CacheLoader.ICacheLoader allows the framework to load data from the master datasource into the cache at cache startup. Here are the details of the interface implementation. public void Init(System.Collections.IDictionary parameters); public bool LoadNext(ref System.Collections.Specialized.OrderedDictionary data, ref object index); public void Dispose(); Following is the list of methods that need to be defined by the implementing class, with a brief description of their purpose. void Init(IDictionary parameters); This method performs tasks like allocating resources, acquiring connections, etc. When the cache is initialized and the Cache Startup Loader is enabled, the framework calls the Init method to notify the client that the cache has initialized and that you may initialize your datasource too. The parameters passed as an argument contain all the parameters (if any) that were specified using the NCache Manager cache/cluster views. These parameters can be utilized in many ways. For example, the connection string of a datasource can be specified as a parameter through NCache Manager. Whenever the cache is initialized, the connection to the datasource can be made using this connection string passed as an argument. Thus, it provides a flexible way to change the datasource dynamically without requiring code changes.
void Dispose(); This method performs tasks like releasing resources, etc. When the cache is stopped, or when LoadNext returns false, the framework calls the Dispose() method to notify the client that the task has completed or the cache has stopped, so that you can free the resources related to your datasource. public bool LoadNext(ref System.Collections.Specialized.OrderedDictionary data, ref object index); This method should contain the logic to load object(s) from the master datasource. After the Init call, the framework calls the LoadNext method in a loop until LoadNext returns false. This method receives an OrderedDictionary reference that the user populates with the data to be cached. Another ref variable is index; the user must set it to the starting index for the next call. This function allows the user to add data in chunks, or to add it all in a single operation. To add data in chunks, return true to the calling framework, indicating that this method should be called again. Return false to end startup loading of data by the framework.
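Putting the three callbacks together, the framework-side call sequence described above can be summarized in pseudocode (an illustration of the stated contract, not NCache's actual internals):

```
loader.Init(parametersFromCacheConfig)      // once, at cache startup
index = null
loop:
    data = empty OrderedDictionary
    more = loader.LoadNext(ref data, ref index)
    add every (key, ProviderCacheItem) pair in data to the cache
    if more == false: break                 // false ends startup loading
loader.Dispose()                            // release datasource resources
```

The index object is opaque to the framework: it is simply handed back on the next LoadNext call, which is what lets the loader resume from where the previous chunk ended.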
Here is how your class implementing ICacheLoader might look:

namespace CacheLoaderApp
{
    class CacheLoader : ICacheLoader
    {
        private string _connString = @"Server=stagingServer2; initial catalog=Northwind; User ID=sa; password=xxxxxx;";
        private SqlConnection _connection;

        #region ICacheLoader Members

        public void Init(System.Collections.IDictionary parameters)
        {
            _connection = new SqlConnection(_connString);
            _connection.Open();
        }

        public bool LoadNext(ref System.Collections.Specialized.OrderedDictionary data, ref object index)
        {
            int nextIndex = 0;
            if (index != null)
            {
                nextIndex = (int)index;
            }

            // The command must be bound to the open connection before ExecuteReader is called.
            SqlCommand command = new SqlCommand("SELECT * FROM Customers WHERE CustomerID > " + nextIndex.ToString() + " AND CustomerID < " + (nextIndex + 10).ToString(), _connection);
            SqlDataReader reader = command.ExecuteReader();

            if (!reader.HasRows)
            {
                // No more rows: return false to tell the framework that loading is complete.
                reader.Close();
                return false;
            }

            while (reader.Read())
            {
                Customer customer = new Customer();
                customer.CustomerID = reader["CustomerID"].ToString();
                customer.ContactName = reader["ContactName"].ToString();
                customer.Country = reader["Country"].ToString();

                ProviderCacheItem provideritem = new ProviderCacheItem(customer);
                provideritem.Dependency = new KeyDependency("customer:1");
                provideritem.AbsoluteExpiration = System.DateTime.Now.AddSeconds(30);

                data.Add("Customer:" + customer.CustomerID, provideritem);
            }

            index = nextIndex + 10;
            reader.Close();

            // A chunk was loaded: return true so the framework calls LoadNext again.
            return true;
        }

        public void Dispose()
        {
            if (_connection != null)
                _connection.Close();
        }

        #endregion
    }
}

Note: NCache logs warnings in the Application event log in case of an exception while loading the assemblies. To enable the Cache Startup Loader, you need to implement ICacheLoader and specify it in the cache properties. NCache Manager provides an interface for specifying it under the cache properties. For the Cache Startup Loader, follow these steps: What to Do Next?
http://www.alachisoft.com/resources/tips/how-to-use-cache-startup-loader.html
CC-MAIN-2017-09
en
refinedweb
We would like to know how to get the seconds and minutes between two Instants.

import java.time.Duration;
import java.time.Instant;

public class Main {
  public static void main(String[] args) {
    Instant firstInstant = Instant.ofEpochSecond(1294881180);
    Instant secondInstant = Instant.ofEpochSecond(1294708260);
    Duration between = Duration.between(firstInstant, secondInstant);
    System.out.println(between);
    long seconds = between.getSeconds();
    long absoluteResult = between.abs().toMinutes();
    System.out.println(seconds);
    System.out.println(absoluteResult);
  }
}

The code above generates the following result.
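The same arithmetic can be checked outside Java. The sketch below redoes the computation in Python, with datetime standing in for Instant and timedelta for Duration; the epoch seconds are the same two values used in the Java example.

```python
from datetime import datetime, timezone

# The same two epoch seconds as the Java example above.
first = datetime.fromtimestamp(1294881180, tz=timezone.utc)
second = datetime.fromtimestamp(1294708260, tz=timezone.utc)

between = second - first                  # negative: 'second' is earlier
seconds = int(between.total_seconds())    # like Duration.getSeconds()
minutes = int(abs(between).total_seconds() // 60)  # like Duration.abs().toMinutes()

print(seconds)   # -172920
print(minutes)   # 2882
```

The difference is 172,920 seconds, i.e. 2,882 minutes (48 hours and 2 minutes), which is what the Java code's getSeconds() and abs().toMinutes() calls produce up to sign.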
http://www.java2s.com/Tutorials/Java/Data_Type_How_to/Date/Get_the_seconds_Minutes_between_two_Instant.htm
CC-MAIN-2017-09
en
refinedweb
This C program generates N passwords, each of length M; that is, it produces N random strings of length M. Here is the source code of the C program to generate random passwords of equal length. The C program is successfully compiled and run on a Linux system. The program output is also shown below.

#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* for getpid() */

int main(void)
{
    /* Length of each password */
    int length;
    int num;
    int temp;

    printf("Enter the length of the password: ");
    scanf("%d", &length);
    printf("\nEnter the number of passwords you want: ");
    scanf("%d", &num);

    /* Seed number for rand() */
    srand((unsigned int) time(0) + getpid());

    while (num--) {
        temp = length;
        printf("\n");
        while (temp--) {
            /* ASCII codes 65..120: 'A'..'x', including some symbols */
            putchar(rand() % 56 + 65);
            srand(rand());
        }
        temp = length;
    }
    printf("\n");
    return EXIT_SUCCESS;
}

$ gcc password.c -o password
$ ./password
Enter the length of the password: 8

Enter the number of passwords you want: 5

Yfqdpshp
GZJqGuiB
^jFUTLOo
WbNK]Teu
]wrQSBNY

Sanfoundry Global Education & Learning Series – 1000 C Programs. Here is the list of Best Reference Books in C Programming, Data Structures and Algorithms.
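The core of the program is the expression rand() % 56 + 65, which picks ASCII codes 65 through 120 ('A' through 'x', including symbols such as '[', ']', and '^' — visible in the sample output). A Python sketch of the same idea, with the hypothetical helper name make_password, looks like this:

```python
import random

def make_password(length, rng):
    # Mirrors the C expression rand() % 56 + 65: one of 56 characters
    # with ASCII codes 65..120 per position.
    return "".join(chr(rng.randrange(56) + 65) for _ in range(length))

# Seed once from system entropy, analogous to srand(time(0) + getpid()).
rng = random.Random()
passwords = [make_password(8, rng) for _ in range(5)]
for pw in passwords:
    print(pw)
```

One design note: the Python version seeds the generator once up front, whereas the C program re-seeds with srand(rand()) after every character, which adds nothing to the randomness.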
http://www.sanfoundry.com/c-program-generate-n-number-passwords-length-m/
CC-MAIN-2017-09
en
refinedweb
I want to compile MEX files without installing Xcode, using only the Command Line Tools (from the Apple developer center). The Apple Command Line Tools install the compiler and add standard libraries and headers to the system in a package much smaller than Xcode (which is several GBs). Running mex on Linux is possible, so I see no reason why MATLAB's mex should require the huge SDKs that come with macOS. A long evening of trial and error and of hacking configuration files hasn't helped. Does anyone have a minimal working example of how to compile a MEX file outside MATLAB, or a simple way to use mex without having Xcode installed?

Best Regards,
Magnus

After spending more time on this, I wound up learning more and answering my own question. I'll post my solution here in case anyone else needs it in the future.

Make sure MATLAB is installed, and install the Command Line Tools from Apple. Then compile arrayProduct.c (which ships with MATLAB) from the terminal by calling the following makefile as follows:

make mex=arrayProduct

Put this makefile code in the same folder in a file called makefile (edit it to your own needs if you have to):

all:
	clang -c \
	-DMX_COMPAT_32 \
	-DMATLAB_MEX_FILE \
	-I"/Applications/MATLAB_R2016b.app/extern/include" \
	-I"/Applications/MATLAB_R2016b.app/simulink/include" \
	-fno-common \
	-arch x86_64 \
	-fexceptions \
	-O2 \
	-fwrapv \
	-DNDEBUG \
	"/Applications/MATLAB_R2016b.app/extern/version/c_mexapi_version.c" \
	$(mex).c
	clang \
	-Wl,-twolevel_namespace \
	-undefined error \
	-arch x86_64 \
	-bundle \
	-Wl,-exported_symbols_list,"/Applications/MATLAB_R2016b.app/extern/lib/maci64/mexFunction.map" \
	$(mex).o \
	c_mexapi_version.o \
	-O \
	-Wl,-exported_symbols_list,"/Applications/MATLAB_R2016b.app/extern/lib/maci64/c_exportsmexfileversion.map" \
	-L"/Applications/MATLAB_R2016b.app/bin/maci64" \
	-lmx \
	-lmex \
	-lmat \
	-lc++ \
	-o $(mex).mexmaci64

The makefile above is a bare-minimum working example; you should edit it to comply with your requirements.

Edit: Option 2

You can make MATLAB understand how to use the Command Line Tools by editing the XML file containing the compiler options instead. Open the file located at

/Users/username/Library/Application Support/MathWorks/MATLAB/R2016b/mex_C_maci64.xml

Remove all compiler and linker options related to ISYSROOT. This will make the compiler search for header files in /usr/include etc. instead of in the SDK folder inside Xcode.
https://codedump.io/share/sDkBdOGhr5pp/1/matlab-mex-without-xcode-but-with-standalone-command-line-tools
CC-MAIN-2017-09
en
refinedweb
Directory Listing use proper name for partitioning screen gtkfe is "finished" removed old rsync-copying code. fix webrsync fallback bunch of string changes among others change the way screens are loaded pass GRP flag when installing bootloader define tmpbootloaders higher up actually put cron daemons in the list for CronDaemon.py allow 'None' for cron move logfiles before unmounting missing self comment out code that pulls network info from CC change operation order in 'if' statement to actually make sense bah, wrong step name networkless fixes new method for getting arch indentation malfunction missing comma I'm too lazy to type up what actually changed here :P split Daemons.py into Logger.py and CronDaemon.py and add both to gtkfe.py bye bye gendialog.pot updating the two .pot language templates with 2007.0 text. gli-dialog: fix failure to properly split in kernel modules loading fix stage tarball second-try manual type in bug fix extra space in MAKEOPTS fix ordering in if for adding xorg-x11 to package list same thing for adding xdm. add RootPass screen and rearrange some screens rearrange screen order a bit fix up Timezone and Networking couple fixes and it looks like this works again! :) majorly overhauling the clife to bring it around to the new structure. not yet functional but almos there. stupid typo multiple gli-dialog changes: removed a bunch of commented-out partitioning code. add xorg-x11 to package list if not already there if the user selects a windowmanager. kde => kde-meta reworked save_install_profile to save to /tmp/installprofile.xml this is used by finishing_cleanup. add the potential use of do_recommended_partitioning to the load-profile steps. moved the finishing gauge step to its own function. fix progress update in kernel building code copy compile_output.log instead of moving adding a do_recommended_partitioning function only for use in fully-automated installs. 
removed some commented out partitioning lines adding two parameters that will hopefully handle running do_recommended for automated installs. These will not be known to frontends to set. asthetic changes I hate typos this kernel stuff just hates me group local partitions/net mounts and stage/portage tree pass grp_install option to get_deps() small fixes. forgot 2 arguments to set_portage_snapshot_uri() tossed simulating question. put load_profile question first. fixed advanced_mode question. added a system beep after steps complete. added extra warnings to clear/set_recommended partition layouts fixed Partitioning call to pass IP. Reimplemented the set_mounts function to use Partitioning devices auto-sets swap. change kernel_source_pkg default to gentoo-sources forgot the quotes auto-populate localmounts with do_recommended() add kernelsources and kernelconfig to gtkfe screen list show step description instead of function name change padding to line up options reduce spacing on URI fields get rid of INSTALL_DONE Bootloader screen implement menuconfig in KernelConfig.py commenting out the go-back-and-change-values option for now. adding a call to re-grab verbose value in the first function call. attempting to add in retry code to gli-dialog haven't yet figured out how to know which functions to rerun in order to change settings. perhaps the menu will be needed again. add KernelConfig.py add checks for install mode to Kernel and KernelSources add KernelSources.py set genkernel buildmode in Kernel.py first kernel screen more little fixes. ask for a stage tarball uri until you get a valid one. progress height works now. move install done to end of main. oops, forgot to remove the height line from the rest of the gauges added debugging to hopefully expose the self._mounted_devices bug gli-d: don't run phase5 when not in profile mode. updating TODO fixed dhcp iface bug. need extra [0] adding height to gauge dialog, will see if it works. 
adding a progress status to the kernel compilation. add a little more descriptive debugging to kernel sources change order of params for emerge command. grp_install parameter passed to emerge() and get_deps() adding in two experimental changes. 1. fix up the list of devices in the bootloader screen (no more dupes) 2. don't show the route menu for only one interface. makes no sense. helps to install the bootloader before you use it add else line to restart the gauge to make the screen not look like crap lets hope this will actually work this time. remove INSTALL_DONE constant and unnecessary imports AT: uncomment the update_config_files link add steps into the phases. trying to reorder things so that steps are done by the frontend. add subprogress to install_failed_cleanup() remove some of the (now) useless debugging code use dict to store install steps go to Install Failed screen in default exception handling disable cancel button and ignore callback if no callback function set add install failure screen modified for calling random steps modify step structure for random steps call exit() through controller kill RunInstall.py change way GLIScreen is instantiated change way GLIScreen is imported profile default progress_callback in GLISCreen and call it if not handled in individual screens missing _tree fixed up PortageTree.py uncomment Kernel screen run prepare_chroot step as well comment out initial update_config_files and configure_make_conf steps since they don't make sense there anymore cosmetic changes Stage screen fixups rearrange gtkfe code kill off FutureBar and HelpDialog since they're no longer used remove description at top and put color codes in a table cosmetic changes update help text and add check for mount at / remove 'configuration' parameter from amd64AT handle non-existing version-stamp file properly comment out installprofile serialization in CC finish up Welcome -> InstallMode change move Welcome to InstallMode blah remove all references to 
exit, help, and finish buttons in screens continue of partition is swap...we don't want to try to mount it in the loop below disable previous button on NetworkMounts.py look for 'swap' or 'linux-swap' take extra optional argument for exception data in progress_callback() use 'devnode' instead of 'device' pass instantiated IP object to CC's __init__() it helps if we actually start the secondary thread general cleanups sub-progress notification for localmounts actually call the first step getting ready for a test run initial commit of ProgressDialog remove GLIClientConfiguration.py and references to it on demand loading of screens install progress bar at top instead of FutureBar initial commit of LocalMounts.py remove Help button rearrange Stage screen hide properties button on partitioning screen add entry field for mkfsopts add support for specifying mkfsopts to add_partition() a few fixes to the gli-dialog partitioning code fix up gli-dialog partitioning code for new real-time partitioning-fu structure in GLIAT just has function name instead of function pointer CC uses getattr() to get function pointer from name some small bugfixes make wait_for_device_node() module function instead of class instance function and fix usage in Partitioning remove duplicate wait_for_devnode() and change function call in GLIAT more updates. mountpoints screen mostly works. 
reviving wait_for_device_node from AT in here used in AT by mount_local_partitions() more general cleanup adding more of the mountpoints screen force part_size_mb to int actually fix get_max_size() remove get_partition_position() and fix get_max_size() fix display of device name for new partition fix up unallocated space handling fix get_extended_partition() to return idx general cleanups add yes/no prompt for recommended layout finish cleanup from slider removal fix up do_recommended() kill all traces of the resize display thing fix deprecation message fix deprecation message remove all mount point/opts stuff from PartProperties more cleanups fixing [] to {} starting in on the mounts screen. redid the list_partitions function. use idx instead of minor for get_partition() remove mountpoint call and switch to idx isntead of minor kill all partitioning code in GLIAT adding a small function to list the partitions in /proc/partitions kill import of GLISD from GLIArchTemplate more gtkfe mods for overhaul rename Partitioning to Partition to avoid conflict gtkfe partititioning mods for new partitioning module fixing indenting and two small string-integer issues found and corrected another function that used partitions to mounts forgot to define two vars. trying to fix up clientcontroller a bit. it's skipping the first step another round of bugsquashing in progress. this time i can actually get the install to try to start. WARNING: pretend mode turned off. program is now active fixing the usual typos and mistakes found when you finally test the code you write fixed code to use mounts instead of partitioning. install_profile() to self._install_profile fix reference to arch instaed of self._arch updating changelog reordered functions. removed references to the client-config fixed fstab and other locations that used mountpoints to use the new mounts. removed unused rc.conf function. other smaller touchups. Took out partitioning. Added mounts. 
Moved root mount point over from CF Removed RP-PPPoE. was never used. took out the preinstall steps. these will now be handled by the frontends. The actual code for the steps can be put in GLIUtility if necessary lots of improvements. tons of things still left to fix. moving set_verbose from CF to IP fixed some dialog mistakes. more general updates in an effort to get the code functional taking out hte client configuration fixing all instances of WINDOWSKEYS to WINDOWKEYS added space in AT debug line. no need for the net stuff in the overhaul branch re-add PNG files fix PNG images few minor changes. trying to get rid of instances of GLIStorageDevice few typo fixes add formatting code set partition type correctly get rid of start position constraints move flag checking inside 'if not free' block gli-dialog: more rearranging of code. Tossed the review menu and not calling show_settings not sure how those are going to turn out just yet. fix a few typos, syntax errors, and wrong var names more reorg more reorg and cleanup more code reorg implement add_partition() and delete_partition() remove unnecessary set_foo() functions more partitioning reorganization. src/Partitioning.py: first round of refactoring for Partitioning.py (was GLISD) move GLISD to Partitioning) it's alive\! mwahahahaha wrong var name s/"/'/ src/templates/x86ArchitectureTemplate.py: continue breakup/cleanup of partitioning code skip blank group 17th try's the charm iters suck (more) iters suck list vs. dict fix reorder functions because python sucks missing comma call runtimedeps.py with python switch dynamic-stage3 to new method lots of bug fixes from adding debug code src/GLIUtility.py: added parse_vdb_contents() src/GLIArchitectureTemplate.py: added copy_pkg_to_chroot() added debug code to dynamic-stage3 small fix typos DEBUGGING ADDED TO AT. 
verbose field added to C comment out hack run _get_packages_to_emerge() again with -pk remove extra = quickpkg hokey pokey use = when emerging also uncomment logger line tracked down and fixed missing = causing empty install extra packages debug code Use best_visible instead of best_version another damn typo missing colon add _portage_best_version() function and rework install_packages() to allow for future X of Y tracking.() Use URI instead of local path for file, allows easy remote specification of configs. fixing bootloader code for no initrd and initrd->initramfs naming change. Proper comments and parameters to GLIException in install_mta() Starting detailed changelog for glid. Changed height and width of stable/unstable yesno. Modified Files: src/GLIGenDialog.py src/fe/dialog/Changelog. added comments to new functions. complete rewrite of GenDialog networking among other updates.. 24 Jun 2005; Christopher Hotchkiss <chotchki@gentoo.org> Fixed the password checks when setting the root password. Fixed Changelog formatting. trying to get GenDialog to compile. Add dhcp_options to the CC, CConfig, and GenDialog. More overall changes to GenDialog as it gets closer to completion. Add --debug flag for gtkfe.py. Add fields to Advanced screen of Client Config screen.. Refactor kernel_compile common stuff, fix 2.4 build process. Logging system. add notes about 2.4 support.. Fixed set_kernel for new kernel_build_method parameter. Users.py: bugfix to not check if uid is an int if its not set. Turn off a line of debug code. Add stub note in build_kernel, to remind myself about it for tommorow.. use /var/tmp instead of /root for tempfiles. Clean up commenting used by _edit_config(), and fix bug where wrong comment was repeatedly appended to make.conf. cvsignore goodness. Put some Linux-2.6 specific code under a if statement. Store list of successfully mounted swap device for using swapoff. Add proper error checking to install_packages and install_filesystem_tools. 
Fixed namespace conflicts on 'file'. Clean up some redundant code (thanks to pychecker). Use proper main() block. avoid namespace collision on "file". Remove '/usr/share/zoneinfo/' part from timezone. Change _emerge('sync') call to direct call to spawn to avoid 'emerge -k sync' Added list_mirrors() and list_stage_tarballs_from_mirror() functions to GLIUtility Add check for blank stage tarball URI and ask user if they want to continue Fix typos: get_extended() instead of get_extended_partition() GLIInstallProfile.py: add missing set for dhcp_options if a tuple is passed in add_network_interface. Networking.py: complete overhaul. added in gui support ( no backend support yet ) for wireless, added in hardware identification for ethernet devices, added a new tab that will hold proxy and other networking information. Timezone.py: changed error if timezone is bad. RcDotConf.py: stopped printing KeyError in loading. Add null type to network config, for cases where the interface is already up and should not be touched (netboot). Add CLI frontend. Put the scripts used for calling frontends into the CVS tree. Try to use binary packages if available by default. pcmcia is not a variable name for a call to _add_to_runlevel, it should be a string!. remove a line of debug code I commited by mistake.. forgot a make_conf item. Forgot get_mta in one spot. Add MTA install code, and include MTA install phase. Ensure PORT_LOGDIR/PORTDIR_OVERLAY are created in _emerge if needed. Add support for 'none' kernel config for build_kernel phase. Put kernel_script in /var/tmp instead of /root for build_kernel phase. Factor out MEGABYTE and make sure its used. Change pass to return for null bootloader. 10 June 2005; Christopher Hotchkiss <chotchki@gentoo.org Redid the file naming convention for the netbe. Files are now searched for via the pxe standard. This allows for more flexible server setups. 
Example: 005070E808D0 005070E808 005070E8 005070 0050 00 default Changed the error checking that set_architecture_template uses. Fixed set_architecture_template(). Added documentation for trim_ip.. oops patch for mkfsopts for dialogfe and GenDialog from bug 95541) missed a bit of code Removed start/end from XML output and added mkfsopts to partitioning info. Updates to GenDialog and dialogfe. updated TODO too Minor changes as suggested by pychecker. wrong date in changelog Fix text running off the side of the window on all screens. removed development-sources and gentoo-dev-sources from list. Added auto-save of CConfig and copying to new /root after install. yet even more GenDialog updates. Added DNS server selection for CConfig static IP removed print statement from GLISD. more updates to GenDialog. Changed 'data' to 'self.data' in a few places in GLIClientConfiguration. another small fix for race condition in timezone_map_gui.py Add back error message on partition load error. timezone_map_gui: Small fix to prevent race condition on area_expose. Fix typo in GLIStorageDevice causing mountopts to be loaded from XML as ['mountopts'] Chroot wrapper passes along exit code. Added more detail to the error message Not being able to fetch the stage tarball is now an exception get rid of testing testing crap in output text field Cause install to *actually* fail on a failure timezone_map_gui.py: small update to fix refresh on area expose. recommit of Timezone.py due to previous removal of gnomecanvas dep in timezone_map_gui.py. Rewrite of set_partitions() for new GLIStorageDevice API more gendialog updates A few more fixes for templates/x86Archtemplate A few more fixes for same. Fix mount_local_partitions(), configure_fstab(), and install_filesystem_tools() to use GLISD directly Typo fix, WARNING instead of WARN. 
added codemap.dia Fix use of /mnt/gentoo instead of _chroot_dir in finishing_cleanup CC serializes install profile to disk and prepare_chroot() copies it into /mnt/gentoo/root Fix a late-night coding error in partitioning Fixed bug in finishing_cleanup() undo accidental (apparently) paste in middle of comment Added code to deactivate() in Partitioning.py to make sure there is a partition with the mountpoint / Moved URI parsing into new function parse_uri() Removed dependency on gnomecanvas for timezone_map_gui.py by re-implementing anything needed in pure gtk. Added XMLParser module Changed Welcome screen message and disabled setting of root password Networking.py returns True in deactivate() to get around some b0rkage put old Timezone.py back until dependency on gnomecanvas can be removed fix tabs/space mixing in format_mac() in GLIUtility updated both for new filename. date changes. 2004->2005. adding in the generic dialog function files. Added a Changelog and updated netfe and netbe with new files from chotchki. updated TODO again updated TODO Complete overhaul of Timezone.py. GLIStorageDevice cleanup patch from bug #91761 More idiot-proofing: mountpoint/opts fields disables for swap and mountopts defaults to 'defaults' if blank and a mountpoint is specified updated TODO Error logging casts 'error' to str Missing colons Fixed timezone code to not link to /mnt/gentoo/usr/share/zoneinfo/blah. Fix _edit_config() removed dead file GLIPartitionTools Added code to CC to handle exceptions *not* thrown by the installer itself. Exceptions received in CC are logged before being passed to the FE modify path added netfe remove duff import Readded resizing support. Additional manual size entry validation code. get_max_mb_for_resize() returns -1 if not self.resizeable PartProperties gets cur_size parameter. PartProperties allows sliding bar when a partition is resizeable. 
Remove craptastic split partitioning screen stuff Add 'Format' to PartProperties More dirty rsync hacks :-/ Fix _quickpkg_deps() to call _get_packages_to_emerge() Fix _quickpkg_deps() Fix minor bug in _get_packages_to_emerge() Minor fixes to GLIStorageDevice as suggested by pychecker. Split 'custom' sync option into 'snapshot' and 'none' Split 'custom' sync option into 'none' and 'snapshot' Patches from chotchki (bug #90325) to improve CC networking. untested. reenable error-checking code Fixed missing int()s Fixed == instead of = typos in GLIStorageDevice (pointed out by chotchki) Proxies patch from chotchki (bug #90147) comma bug. reverted changes to GLIArchitectureTemplate.py Integrated RcDotConf into the backend. _edit_config() changes value in-place and supports ##commented## Complete overhaul of RcDotConf. documented GLINotification.py documented GLIClientController.py finish documentation in GLIStorageDevice.py partial documentation in GLIStorageDevice.py Back. Did the docuementation thang for ArchTemplate and ClientConfiguration. Also updated TODO list. removed self from all function descriptions since pythondoc doesn't show it docs Use blackace's one-liner to add comments for all function for use with pythondoc. moved the logger line. yet another typo another typo fix typo fix Finish overhaul of backend partitioning code. Bunch of cosmetic and behavior UI changes. tweaks to tidy code tidy_partitions() function in GLIStorageDevice and remove debug statements (gtkfe also) removed a few old banner images Major GLIStorageDevice overhaul...all MB now instead of sectors. Added comments to the ChangeLog. Added DHCP options to add_network_interface. Added support for MAC addresses to the GLIInstallProfile. Added two helpers to GLIUtility: format_mac and is_mac. Patch from zahna to have default option increment in main menu. Minor modifications to set_timezone() from zahna New timezone function Fixed 2 bugs in Users.py. 
Pipe emerge through sed to properly strip out junk. check for non-blank PORTDIR and PORTAGE_TMPDIR try to suppress error output from _quickpkg_deps() Changed mountopts check to work for blank and whitespace updated TODO list fixed ethx not being added to runlevel defalt. : little bug w/ partition format option added hotplug/coldplug for livecd-kernel added --emptytree to stage2. fixed the way set_timezone works. set mountopts to 'defaults' if empty change " to ' in set_users() change " to ' in set_root_password() Removed reiserfs from the supported filesystems Users.py - put in password field for user adding, loading from saved xml profile implemented. Networking.py, Timezone.py, fixed tab spacings. Added format and type options to existing partitions. added --provide to mkvardb Fixed indent problem in GLICController backwards if/else Cleanup deactivate() in Users Users.py is now half integrated into the backend. ( Saving only ) Check for disklabel type loop and use the device name without a minor. Slightly modified patch from zahna to default to current architecture. kernel_args -> bootloader_kernel_args Remove most of content in amd64ArchTemplate and make it inherit from x86Archtemplate. Added code to detect when the output logfile is moved and temporarily disabled the done/error dialogs. Patch from zahna to add get_eth_info() function. Patch from zahna for extra arguments to the kernel. Added code to (hopefully) keep 2nd thread running after install. Complete overhaul of Users.py. UI is smart, but not yet integrated into the backend. debugging code in RunInstall.py Remove /tmp/compile_output.log and /var/log/install.log when install is complete. append to log when unpacking tarball Added 'append_log=True' to all spawn() calls using logfile= Change /tmp/install.log back to /var/log/install.log stupid typos debug code cleanups since it works now one more time everybody\! looks like it works so ripping out all the old filesystem_tools code. 
forgot a line let's try this again rewrote filesystem_tools [] instead of None wrong variable name added code to actually display tail output fix a whitespace booboo Added install complete/error message boxes and a TextView for displaying output. removing GLIInstallTemplate.py whitespace cleanup rem'd a print line i missed undid previous change added a fix to the logger from BenUrban Added the finishing_cleanup function. fix typo removed /usr/local/zoneinfo from gtkfe b/c it's already being added on the BE. fixed comma on users. changed kernel to not ask for genkernel if doing livecd added GENTOO_MIRRORS and SYNC options to make.conf Changed the print statements to logging in the partitioning BE code. the logger may need to be imported to the x86archtemplate. unknown yet. ripped out error checking of set_services. this is done by _add_to_runlevel. Blank entry won't get added with Update in Networking.py fixing agaffney's spelling error fixed prompts in partitioning ow my colon was missing! local_install also checks the custom_kernel_config_uri changed InstallProfile to remove is_uri check on kernel, stage, and portage URIs. The blank uri check has been commented out. added another patch from zahna for portage tarball selection. Added a choice for local_install which determines error checking on tarballs and whether to use existing partitions by default. set_stage_tarball_uri() doesn't raise an exception on a blank string. changed all gtk.TRUE/FALSE to True/False gentoo-sources is default option in Kernel.py. another bug fix to the livecd-kernel code Switch _emerge() call to spawn() call in livecd-kernel code to pass environment variables. fix typo in mkvardb added getgli script to misc small changes to setup_network_post. moved adding to runlevel of net.x to after the device gets symlinked. added domainname runlevel command. Removed call to mkvardb in livecd-kernel code as it's now done by catalyst. Added hostname, domainname, and nisdomainname to networking list. 
- Added a stage_tarball selection patch from zahna.
- remove need for PORTDIR_OVERLAY in mkvardb
- new banner from blackace
- minor fix (hopefully) to livecd-kernel code
- minor fix (hopefully) to livecd-kernel code
- minor fix (hopefully) to livecd-kernel code
- lots of stuff in both FEs and backend...read the ChangeLogs :P
- removed sigunmask.c as the workaround is no longer necessary
- creates misc directory and added mkvardb script
- Fixed crash bug if you cancel from the partition type selection menu.
- switch append_log lines
- debugging code in spawn()
- should emerge hotplug and coldplug before adding them to runlevel.
- Remove command to 'rm /tmp/spawn.sh' as it breaks the piping.
- Removed get_random_salt() function and replaced calls to crypt.crypt() with calls to GLIUtility.hash_password() in dialogfe and gtkfe.
- reenabled 'emerge sync' option in gtkfe
- updated gtkfe TODO
- Updated TODO list to show our progress towards alpha release
- Temporarily disabled 'emerge sync' option due to python threading bug.
- Temporarily disabled 'emerge sync' option in PortageTree due to python threading bug.
- Fixed a couple bugs in add_netmount() in GLIInstallProfile.
- Fixed netmounts code in dialogfe
- updated partition code
- added code to convert MB/%/* to start/end sectors
- updated TODO. added debugging code to GLIUtility.exitsuccess()
- Modified GLIUtility.exitsuccess() to work with return value from commands.getstatusoutput instead of os.waitpid()
- proper indentation please
- put some debugging code in GLIUtility.exitsuccess()
- put some debugging code in GLIUtility.exitstatus()
- subtract 1 from end value
- PartitioningMain's deactivate() returns a value
- PartitioningMain calls deactivate() in active notebook page in its deactivate()
- second try on set gateway code.
- Small fix for setting the default gateway. Also added feature to dialogfe.
- Added a custom package bar where you can fill in custom packages to emerge (space separated list)
- switch spawn() over to using commands.getstatusoutput instead of fork/waitpid
- Loading an install profile uses a temporary GLIInstallProfile object to parse and then assigns it to the main object so the master object doesn't get left in an inconsistent state after a failed profile load.
- try/except block for set_portage_snapshot_uri()
- Networking.py fix
- file cleanup
- more file cleanup
- cleaned up a few unneeded files
- added help label
- slider/entry field sync
- added gtkfe TODO
- only write new resolv.conf if there are dns servers listed.
- more true -> True typos
- Fixed a bunch of true -> True typos.
- fix another true instead of True
- quick fix for fifo spelling error
- added progress bar to dialogfe and fix logger bug in ArchTemplate.
- small modification to GLILocalization.py
- moved lang to __init__
- renamed GLISayWhat to GLILocalization
- created GLISayWhat module
- ExtraPackages.py - Added a lot of UI reaction details. It now reacts in a reasonable way that users will expect. Added support for each section to have default packages selected when you select the category.
- ugly screen is auto-selected on error
- PartitioningMain calls activate() in child screens
- split partitioning screen
- updates to TODO list and small change to ArchTemplate
- check for dhcp in the network stuff and emerge it if it is
- Changed the raising of 'warning' exceptions to a simple log of the error so that the installer can continue.
- added custom kernel and bootsplash options.
- added header.png to repository
- updates from AllanonJL
- minor code cleanup
- bottom buttons are dynamically generated
- minor gtkfe.py modifications
- more updates to TODO list. hopefully we'll soon start removing items instead of adding them.
- fixed logical logic issue in PartProperties
- removed another image
- switched to dynamically creating color key blocks
- fixed some nastiness with swap
- GLIStorageDevice fixes and finished moving stuff to PartProperties
- lots more partition code moved
- began moving code from Partitioning to PartProperties
- more PartProperties additions
- added a few images
- shell for partition properties dialog
- updated TODO list again to keep everyone in sync on remaining tasks.
- added 2.6 kernels
- updated TODO list
- fixed the len(sys.argv) check with a few parentheses
- add len(sys.argv) check
- updated Widgets.py from AllanonJL
- updates from AllanonJL
- small fix. see changelog
- fixed custom kernels and added runtime pretend to dialogfe
- changed output of portmap start to display_on_tty8.
- fixed bootloader for udev and multiple kernels
- updates from AllanonJL
- fixed additional users in dialogfe
- --pretend for gtkfe.py
- added code to allow custom kernel .config.
- more changes. nothing big.
- Finished initial coding of additional_users.
- fixed bug that caused last partition to go past the end of the drive
- partition booboo
- is_uri() only checked if portage_tree_snapshot_uri isn't blank
- typo
- typo
- typo
- portage snapshot uri field
- undo last change
- swapon failure isn't an error for now
- todo update
- Added things to the TODO list.
- and forgot a /
- Various fixes related to the add_users function. Still not yet finished.
- testing something
- fixed NFS mounting code
- partition data fix
- create filesystems
- Took out unnecessary setting of random livecd root password.
- removed the training wheels (among other things)
- uhh, blah...changelog or something
- updates from AllanonJL
- /me kicks pygtk in the junk
- updates from AllanonJL
- tabs
- install progress worky!
- breaking everything
- updates from AllanonJL
- CHOST, MAKEOPTS, ACCEPT_KEYWORDS, and FEATURES in MakeDotConf.py
- CFLAGS in MakeDotConf.py
- USE editor generates proper USE
- fixed double spacing issue
- USE editor in MakeDotConf.py
- Attempt at detecting and adding windows partitions to lilo.
- rewrote Bootloader.py
- updates from allanonjl
- added sysklogd to Daemons.py
- rewrote Daemons.py
- change title on PortageTree
- rewrote Kernel.py
- aligned columns in Stage and PortageTree
- rewrote PortageTree
- new Stage.py
- Welcome.py cleanups and added Template.py
- small misc fixes. typos and such.
- NetworkMounts cleanups and field validation
- Added lilo code and cleaned up lilo code. also do_partitioning renamed to partition
- added description to NetworkMounts
- updates from allanonjl
- changelog
- more resizing updates
- basic pyparted resize code
- clear partition table before third pass
- partition() fixes
- updates from allanonjl
- miniscule updates
- found a few more _edit_configs
- rewrote configure_fstab and install_bootloader for new partition format.
- my midday update on GLI
- changes to mount_local_partitions
- added mount_local_partitions
- more sectors instead of cylinders changes
- thanks to spy for pointing out i forgot to cvs add this
- thanks to spy for pointing out i forgot to cvs add this
- sure hope i didn't screw anything up this time
- removed depends stuff from Installprofile
- GLIStorageDevice ignores freespace <=100 sectors
- so help me i'm on a commit streak
- sectors instead of cylinder for partitioning
- you like the fixes? ok i'll keep em coming.
- removed dummy partition() from x86ArchTemplate
- oops forgot to click save
- ok large update here. first: removed _depends. second: fixed spawn()s. third: moved _ functions from Utility back to ArchTemplate 4: moved bootloader to x86Template 5: lots of misc other fixes
- gettin back in my groove. lots more fixes here.
- added clientconfig barebones stuff to dialogfe. just enough to make it work
- undo geometry change
- attempting to fix geometry issues
- workin out the kinks
- my bad, put it in the wrong spot.
- stupid fixes. little stuff.
- updated README for gtkfe
- more updates from AllanonJL
- he forgot a few files
- updates from AllanonJL
- The Install Profile now throws GLIExceptions.
- GLIStorageDevice fix
- Partition screen updates
- Changed the way exceptions work...
- minor partition screen changes
- Partition screen...yea!
- GLIStorageDevice mods and major enhancements to Partitioning screen in gtkfe
- NetworkMounts fixes
- Save button works
- Exit and Load buttons working
- NetworkMounts reorganization
- new refresh image
- trying again
- added Load/Save buttons
- more button images
- switched to dumpe2fs to determine the free space for a ext2/3 filesystem in GLIStorageDevice
- cleaned up duplicate code and fixed GTK assertion error
- updates from AllanonJL
- finished rough working NetworkMounts.py
- NetworkMounts.py lists NFS exports on a particular host
- removed a few unneeded, already commented lines
- screen updates from AllanonJL
- Changelog typo
- added ChangeLog for gtkfe
- NetworkMounts with working column sorting
- added NetworkMounts screen
- added stock_exec.png
- minor changes to gtkfe layout and welcome screen text
- moved part screen to backup
- added deactivate() to GLIScreen and gtkfe.py now calls it on screen switch
- Small bug fix, noticed by AllanonJL.
- added default snapshot because its impossible to remember offhand
- forgot the images :-/
- initial import of 'working' version
- added initial gtkfe.py
- moved dialog frontend into installer/src/fe/dialog
- Removed a few print statements.
- Added support for different architecture templates.
- changed the way GLIStorageDevice.py gets ext2/3 free space
- minor bugfix
- removed no longer needed signal handler from dialogfe.py
- bye bye signals
- simplify install loop in dialogfe.py
- switch to constants for notification values
- working notifications and FE control of install process
- dialogfe.py updates
- added _pretend to ClientController
- added start_pre_install() to GLIClientController.py
- CC creates ArchTemplate object and gets install_steps from it
- add get_install_steps() to ArchTemplate
- CC sends notification to FE on install step completion
- update install.py to reflect split up preinstall() in ArchTemplate
- split ArchTemplate's preinstall() into unpack_stage_tarball() and prepare_chroot()
- typo in dialogfe.py
- new size passed to ntfsresize in bytes instead of KiB
- fix to partitioning code to strip .0 from end of number sent to ntfsresize
- more partition code fixes
- install.py fix
- Python's handling of int vs. string sucks
- minor ArchTemplate fixes
- spelling errors are such a pain
- committed 'ebuild'-like script for testing install code
- added code to preserve lba flag on partitions
- shell code in dialogfe.py for controlling install via ClientController
- more notification stuff in GLIClientController and dialogfe.py
- GLIClientController now intercepts Exceptions from ArchTemplate functions and passes them to the frontend via a notification
- added install step control from FE
- moved some partition helper function code into main partitioning function
- modifications to GLIStorageDevice for determining ext2/3 freespace
- I should really test before I commit :-/
- fix to GLIStorageDevice.py
- minor modification to dialogfe.py
- GLIStorageDevice can determine minimum safe size for ext2/3 and NTFS
- minor fix (extra tab) in GLIStorageDevice
- added partitioning code to GLIArchitectureTemplate
- using thread-safe Queue module in GLIClientController for notifications
- added notification queue locking for GLIClientController
- update to GLIArchitectureTemplate to refer back to GLIClientController
- committed my GLIStorageDevice.py module
- Added the GLINotification.py.
- Updates to the client controller
- Added some Notification code to the ClientController. Also fixed a small bug in GLIUtility.
- hacked spawn() to optionally return command output
- added append_log flag to spawn() function in GLIUtility
- Started porting GLIInstallTemplate to the GLIArchitectureTemplate
- a few more fixes for the new pythondialog
- Added an MTA option in the InstallProfile
- additional fixes from the pythondialog-2.7 changes
- updated dialogfe.py to work with pythondialog-2.7
- committed my dialog-based frontend
- cron and logging now won't barf if not defined. tho logging SHOULD be required.
- Patch from agaffney.
- Applied and tested patches submitted by agaffney. The partition data is now added to the GLIInstallProfile.
- some material here no longer relevant
- DANGER: These scripts are just samples of stuff I test from. I do not recommend using them unless you know what you are doing!
- We're supposed to update this way more often :)
- Adding the bootloader code and many other various fixes. This file will now be split up for the ArchTemplates.
- Added more options in GLIInstallProfile
- Added SimpleXMLParser and modified GLIClientConfiguration and GLIInstallProfile to use it.
- Various updates.
- Cleanups and updates...
- Simple update to GLIException.
- Major code cleanup on set_network_type().. this will probably get renamed to set_network()
- A few cleanups.. more to come.
- Bugfixes from codeman.
- Bugfixes.
- Added set_proxys to GLIClientController.
- Added a few new exceptions to GLIException.
- Rewrote edit_config and added it to GLIUtility.
- Added GLIArchitectureTemplate
- Added a few exceptions.
- run_cmd has been fixed in GLIUtility.
- All of the exceptions raised in these 3 files are now subclasses of GLIException.
- Updated GLIClientController. It's now a subclass of Thread. Furthermore all of the configuration and profile generation will now happen in the frontend.
- Added GLIException.
- Bugfixes...
- Fixed a missing ':'
- Various updates.
- *** empty log message ***
- Some useful comments added.
- Another small update to GLIUtility
- Removed all of the pre/post stuff from GLIInstallProfile. None of this is needed as all of the "pre" stuff is handled in GLIClientConfiguration.
- run_cmd has been rewritten in GLIUtility.
- Updates to both GLIClientConfiguration and GLIUtility
- Initial version of GLIClientController.
- More updates to GLIClientConfiguration and GLIUtility.
- GLIClientConfiguration now supports a default gateway.
- Update to GLIClientConfiguration. It now will serialize itself as well as load itself from an xml file.
- A few more options added, for networking.
- Minor GLIUtility updates.
- Added a few fixes sent in by codeman. - Fixed bug that would have messed up the dependency chain. (Thanks to codeman for the catch)
- Updates to GLIUtility. run_cmd() and run_bash() were added. - Fixed bug where self was being passed as file_name (first argument).
- added the detect_devices function, probably needs more work
- Added GLIPartitionTools.py
- Updates to the methods set_kernel_modules & serialize. kernel module's should now work fine.
- Updated GLIUtility. Added an is_nfs() method. Also fixed is_hostname
- Updates to GLIInstallProfile. add_network_interface added. set_network_interfaces modified.
- Added some docs to make intent clearer. Made ClientConfiguration into a full blown singleton class. Will add usage pydocs shortly.
- fixed references to method names that were changed (the hyphen issue).
- Fixed binding scope issues for instance variables (they were being rebound each time).
- InstallTemplate had syntax errors - fixed. Hyphens are not allowed in method names. Removed an unused variable or two.
- Updates to GLIInstallProfile.py
- added _fetch_and_unpack_tarball(), _add_to_runlevel()
- completed setup_network_pre(), unpack_tarball()
- finished the custom section of install_portage_tree()
- a few little code cleanups (nothing big, just readability stuff)
- changed all references of "custom_stage3_tarball_uri" to "stage_tarball_uri". This will be the uri to the stage tarball. It is no longer a "custom" thing, but a "standard" thing
- completed the set_network_post() method. Basically everything else at this point depends on partitioning info.
- added coding guidelines.
- added firstboot root password script
- Drastically simplified _run(), because of this, _log() was no longer needed, so it was removed.
- completed methods for set_users() and set_root_password()
- added unit tests for new logger singleton.
- updated unit test file to actually work.
- added GLILogger - a singleton (semi-)generic logger that can be used throughout the GLI to centralize all logging.
- broke up install_system_utils() and fixed some dependency issues
- fixed exitstatus issue.
- fixed a bunch of syntax errors
- fixed missing \n's
- fixed a typo on line 164 and 169
- fixed formatting issue
- Added network_interfaces accessors and strict type checking for all accessors.
- added network_interfaces variable and accessors
- fixed indenting issue.
- updated methods for set_ and get_ partition_tables.
- Added code for partitioning_tables
- make sure we run tests from src (for now).
- added cvs and copyright headers to new test files.
- updated changelog.
- added sax parsing to InstallProfile, fixed accessors, clean up, added pydoc, and more.
- added initial unit test code
- current tests pass but need a lot of work.
- Added a bunch of instance variables and documented them.
- added timezone with accessors to act as a point of reference.
- added skeleton misc files
- added super skeletal GLI module.
- added gli class diagram. dia file - binary.
- New repository initialized by cvs2svn.
https://sources.gentoo.org/cgi-bin/viewvc.cgi/gli/branches/overhaul/?view=log&pathrev=1736
I hate signing up for websites. I've already signed up for so many, using different usernames, that going back to one of them and trying to remember my credentials is sometimes impossible. These days, most sites have begun offering alternative ways to sign up, by allowing you to use your Facebook, Twitter or even your Google account. Creating such an integration sometimes feels like a long and arduous task. But fear not, Omniauth is here to help.

Omniauth allows you to easily integrate more than sixty authentication providers, including Facebook, Google, Twitter and GitHub. In this tutorial, I'm going to explain how to integrate these authentication providers into your app.

Step 1: Preparing your Application

Let's create a new Rails application and add the necessary gems. I'm going to assume you've already installed Ruby and Ruby on Rails 3.1 using RubyGems.

    rails new omniauth-tutorial

Now open your Gemfile and reference the omniauth gem.

    gem 'omniauth'

Next, per usual, run the bundle install command to install the gem.

Step 2: Creating a Provider

In order to add a provider to Omniauth, you will need to sign up as a developer on the provider's site. Once you've signed up, you'll be given two strings (sort of like a username and a password) that need to be passed on to Omniauth. If you're using an OpenID provider, then all you need is the OpenID URL.

If you want to use Facebook authentication, head over to developers.facebook.com/apps and click on "Create New App". Fill in all the necessary information, and once finished, copy your app's ID and secret.

Configuring Twitter is a bit more complicated on a development machine, since they don't allow you to use "localhost" as a domain for callbacks. Configuring your development environment for this kind of thing is outside the scope of this tutorial; however, I recommend you use Pow if you're on a Mac.

Step 3: Add your Providers to the App

Create a new file under config/initializers called omniauth.rb.
We're going to configure our authentication providers through this file. Paste the following code into the file we created earlier:

    Rails.application.config.middleware.use OmniAuth::Builder do
      provider :facebook, YOUR_APP_ID, YOUR_APP_SECRET
    end

This is honestly all the configuration you need to get this going. The rest is taken care of by Omniauth, as we're going to find out in the next step.

Step 4: Creating the Login Page

Let's create our sessions controller. Run the following in your terminal to create a new sessions controller with new, create, and failure actions.

    rails generate controller sessions new create failure

Next, open your config/routes.rb file and add this:

    get '/login', :to => 'sessions#new', :as => :login
    match '/auth/:provider/callback', :to => 'sessions#create'
    match '/auth/failure', :to => 'sessions#failure'

Let's break this down:

- The first line is used to create a simple login form where the user will see a simple "Connect with Facebook" link.
- The second line is to catch the provider's callback. After a user authorizes your app, the provider redirects the user to this URL so we can make use of their data.
- The last one will be used when there's a problem, or if the user didn't authorize our application.

Make sure you delete the routes that were created automatically when you ran the rails generate command. They aren't necessary for our little project.

Open your app/controllers/sessions_controller.rb file and write the create method, like so:

    def create
      auth_hash = request.env['omniauth.auth']

      render :text => auth_hash.inspect
    end

This is used to make sure everything is working. Point your browser to localhost:3000/auth/facebook and you'll be redirected to Facebook so you can authorize your app (pretty cool, huh?). Authorize it, and you will be redirected back to your app, where you'll see a hash with some information. It will include your name, your Facebook user id, and your email, among other things.
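To make the later steps easier to follow, here is a trimmed-down sketch of the hash this tutorial's code relies on. This is illustrative only: the sample values are made up, and the exact key names depend on your Omniauth version and provider (the code in this tutorial assumes the older "user_info" key; recent Omniauth releases use "info" instead).

```ruby
# Abridged, hypothetical sketch of request.env['omniauth.auth'] after a
# Facebook sign-in. Only the keys used later in this tutorial are shown,
# and the values are invented for illustration.
auth_hash = {
  "provider"  => "facebook",
  "uid"       => "1234567890",
  "user_info" => {
    "name"  => "John Doe",
    "email" => "john@example.com"
  }
}

# These are exactly the lookups the sessions controller will perform:
auth_hash["provider"]             # => "facebook"
auth_hash["uid"]                  # => "1234567890"
auth_hash["user_info"]["name"]    # => "John Doe"
auth_hash["user_info"]["email"]   # => "john@example.com"
```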
Step 5: Creating the User Model

The next step is to create a user model so users may sign up using their Facebook accounts. In your terminal, generate the new model:

    rails generate model User name:string email:string

For now, our user model will only have a name and an email.

The idea behind an application like the one we are trying to build is that a user can choose between using Facebook or Twitter (or any other provider) to sign up, so we need another model to store that information. Let's create it:

    rails generate model Authorization provider:string uid:string user_id:integer

A user will have one or more authorizations, and when someone tries to log in using a provider, we simply look at the authorizations within the database and look for one which matches the uid and provider fields. This way, we also enable users to have many providers, so they can later log in using Facebook, or Twitter, or any other provider they have configured!

Add the following code to your app/models/user.rb file:

    has_many :authorizations
    validates :name, :email, :presence => true

This specifies that a user may have multiple authorizations, and that the name and email fields must be present.

Next, to your app/models/authorization.rb file, add:

    belongs_to :user
    validates :provider, :uid, :presence => true

Within this model, we designate that each authorization is bound to a specific user. We also set some validations.

Step 6: Adding a Bit of Logic to our Sessions Controller

Let's add some code to our sessions controller so that it logs a user in or signs them up, depending on the case. Open app/controllers/sessions_controller.rb and modify the create method, like so:

    def create
      auth_hash = request.env['omniauth.auth']

      @authorization = Authorization.find_by_provider_and_uid(auth_hash["provider"], auth_hash["uid"])
      if @authorization
        render :text => "Welcome back #{@authorization.user.name}! You have already signed up."
      else
        user = User.new :name => auth_hash["user_info"]["name"], :email => auth_hash["user_info"]["email"]
        user.authorizations.build :provider => auth_hash["provider"], :uid => auth_hash["uid"]
        user.save

        render :text => "Hi #{user.name}! You've signed up."
      end
    end

This code clearly needs some refactoring, but we'll deal with that later. Let's review it first:

- We check whether an authorization exists for that provider and that uid. If one exists, we welcome our user back.
- If no authorization exists, we sign the user up. We create a new user with the name and email that the provider (Facebook in this case) gives us, and we associate an authorization with the provider and the uid we're given.

Give it a test! Go to localhost:3000/auth/facebook and you should see "You've signed up". If you refresh the page, you should now see "Welcome back".

Step 7: Enabling Multiple Providers

The ideal scenario would be to allow a user to sign up using one provider, and later add another provider so he can have multiple options to log in with. Our app doesn't allow that for now. We need to refactor our code a bit. Change your sessions_controller.rb's create method to look like this:

    def create
      auth_hash = request.env['omniauth.auth']

      if session[:user_id]
        # Means our user is signed in. Add the authorization to the user
        User.find(session[:user_id]).add_provider(auth_hash)

        render :text => "You can now login using #{auth_hash["provider"].capitalize} too!"
      else
        # Log him in or sign him up
        auth = Authorization.find_or_create(auth_hash)

        # Create the session
        session[:user_id] = auth.user.id

        render :text => "Welcome #{auth.user.name}!"
      end
    end

Let's review this:

- If the user is already logged in, we're going to add the provider they're using to their account.
- If they're not logged in, we're going to try and find a user with that provider, or create a new one if it's necessary.

In order for the above code to work, we need to add some methods to our User and Authorization models.
Open user.rb and add the following method:

    def add_provider(auth_hash)
      # Check if the provider already exists, so we don't add it twice
      unless authorizations.find_by_provider_and_uid(auth_hash["provider"], auth_hash["uid"])
        Authorization.create :user => self, :provider => auth_hash["provider"], :uid => auth_hash["uid"]
      end
    end

If the user doesn't already have this provider associated with their account, we'll go ahead and add it -- simple.

Now, add this method to your authorization.rb file:

    def self.find_or_create(auth_hash)
      unless auth = find_by_provider_and_uid(auth_hash["provider"], auth_hash["uid"])
        user = User.create :name => auth_hash["user_info"]["name"], :email => auth_hash["user_info"]["email"]
        auth = create :user => user, :provider => auth_hash["provider"], :uid => auth_hash["uid"]
      end

      auth
    end

In the code above, we attempt to find an authorization that matches the request, and if unsuccessful, we create a new user.

If you want to try this out locally, you'll need a second authentication provider. You could use Twitter's OAuth system, but, as I pointed out before, you're going to need to use a different approach, since Twitter doesn't allow using "localhost" as the callback URL's domain (at least it doesn't work for me). You could also try hosting your code on Heroku, which is perfect for a simple site like the one we're creating.

Step 8: Some Extra Tweaks

Lastly, we need to, of course, allow users to log out. Add this piece of code to your sessions controller:

    def destroy
      session[:user_id] = nil

      render :text => "You've logged out!"
    end

We also need to create the applicable route (in routes.rb):

    get '/logout', :to => 'sessions#destroy'

It's as simple as that! If you browse to localhost:3000/logout, your session should be cleared, and you'll be logged out. This will make it easier to try multiple accounts and providers.

We also need to add a message that displays when users deny access to our app.
If you remember, we added this route near the beginning of the tutorial. Now, we only need to add the method in the sessions controller:

    def failure
      render :text => "Sorry, but you didn't allow access to our app!"
    end

And last but not least, create the login page, where the user can click on the "Connect With Facebook" link. Open app/views/sessions/new.html.erb and add:

    <%= link_to "Connect With Facebook", "/auth/facebook" %>

If you go to localhost:3000/login you'll see a link that will redirect you to the Facebook authentication page.

Conclusion

I hope this article has provided you with a brief example of how Omniauth works. It's a considerably powerful gem, and it allows you to create websites that don't require users to sign up, which is always a plus! You can learn more about Omniauth on GitHub. Let us know if you have any questions!
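As a closing illustration of Step 7's "multiple options to log in with": supporting a second provider only requires listing it in the initializer from Step 3 and linking to its /auth/... path. This is an untested sketch under the assumption that the matching provider strategy is installed; the Twitter key and secret names are placeholders, and (as noted earlier) Twitter callbacks won't work against plain localhost.

```ruby
# config/initializers/omniauth.rb -- hypothetical two-provider setup.
# YOUR_* values are placeholders for the credentials each provider gives you.
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :facebook, YOUR_APP_ID, YOUR_APP_SECRET
  provider :twitter, YOUR_CONSUMER_KEY, YOUR_CONSUMER_SECRET
end
```

Because the callback route was declared as /auth/:provider/callback, both providers funnel into the same sessions#create action; a "Connect With Twitter" link pointing at /auth/twitter is all the view needs.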
https://code.tutsplus.com/articles/how-to-use-omniauth-to-authenticate-your-users--net-22094
Question on webservices in Oracle

816802 Aug 22, 2013 2:14 AM

Hi All,

I am new to Oracle web services and I am doing a POC on this one. The requirement is as given below.

We have a middleware team who is expecting data from our Oracle tables. What the analysts are saying is: they will develop a WSDL and provide us the web service information. As per them, the WSDL contains what columns they need data for and in what format. So I need to use that WSDL, get the data, and send the data to the middleware team using their web service. Correct me if I am wrong about anything given above.

I am going through the Oracle documentation and I can see we are using the UTL_HTTP package to make a request and read the data from the URL. I did not see anywhere how I can generate data as per their requirement and provide my data as a web service.

Much appreciate your guidance here on how to provide data as a web service from Oracle.

Thanks,
MK.

1. Re: Question on webservices in Oracle

Billy~Verreynne Aug 22, 2013 5:02 AM (in response to 816802)

Please do not ask the same question multiple times in different formats. As I've already responded to your other question, you need to use XDB. XDB supports WebDAV, HTTP, HTTPS and FTP clients - as opposed to the standard database OCI or JDBC client. One of the features of XDB is web service support - providing a web service framework for calling standard PL/SQL code in the database.

Hi Billy,

I read the Oracle documentation on setting up web services, how to call them, and how a function can be provided as a web service:

Using Native Oracle XML DB Web Services

I am trying to simulate the same, so I created a TEST account and granted it the specific roles given by the document. I want to build a test case as given below.

Procedure PROC1 calls PROC2; PROC2 has OUT parameters and sends the response back to PROC1, which will see this data and send us a confirmation back.

I want to simulate the same using web services, where I want to call my proc through a web service and this web service will send me a response back regarding the status.

Appreciate your help here.

Thanks,
MK.

4. Re: Question on webservices in Oracle

816802 Aug 24, 2013 12:22 AM (in response to 816802)

Hi All,

I am able to create a WSDL document, given below. This WSDL is created for a function, and this function just returns a value (2 in this case). The function does not accept any input parameters.

Now I am trying to access this WSDL in PL/SQL code and it is not working. I googled and found some code on ORACLE-BASE (ORACLE-BASE - Oracle Consuming Web Services). In this site, the WSDL they provided is (). This URL they are replacing in the code in the below section:

    l_url := '';
    l_namespace := 'xmlns=""';
    l_method := 'ws_add';
    l_soap_action := '';
    l_result_name := 'return';

Now I also simulated the same, and my code looks as given below:

    CREATE OR REPLACE FUNCTION add_numbers
      RETURN NUMBER
    AS
      l_request      sys.soap_api.t_request;
      l_response     sys.soap_api.t_response;
      l_return       VARCHAR2(32767);
      l_url          VARCHAR2(32767);
      l_namespace    VARCHAR2(32767);
      l_method       VARCHAR2(32767);
      l_soap_action  VARCHAR2(32767);
      l_result_name  VARCHAR2(32767);
    BEGIN
      --l_url := '';
      --l_namespace := 'xmlns=""';
      --l_method := 'ws_add';
      --l_soap_action := '';
      --l_result_name := 'return';

      l_url := '';
      l_namespace := 'xmlns=""';
      l_method := 'ws_add';
      l_soap_action := '';
      l_result_name := 'return';

      l_request := sys.soap_api.new_request(p_method    => l_method,
                                            p_namespace => l_namespace);

      sys.soap_api.add_parameter(p_request => l_request,
                                 p_name    => 'int1',
                                 p_type    => 'xsd:integer',
                                 p_value   => 10);

      /*
      sys.soap_api.add_parameter(p_request => l_request,
                                 p_name    => 'int2',
                                 p_type    => 'xsd:integer',
                                 p_value   => p_int_2);
      */

      l_response := sys.soap_api.invoke(p_request => l_request,
                                        p_url     => l_url,
                                        p_action  => l_soap_action);

      l_return := sys.soap_api.get_return_value(p_response  => l_response,
                                                p_name      => l_result_name,
                                                p_namespace => NULL);

      RETURN l_return;
    END;

But when I execute the code, I am getting the below error:

    select add_numbers from dual;

    ERROR at line 1:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00104: Warning: element "HTML" is not declared in the DTD
    Error at line 2
    ORA-06512: at "SYS.XMLTYPE", line 48
    ORA-06512: at "SYS.SOAP_API", line 153
    ORA-06512: at "TEST.ADD_NUMBERS", line 43

Can anyone let me know what the issue would be?

Thanks,
MK.
https://community.oracle.com/message/11159923
The dependency wxpython2.8 is no longer available. Search Criteria Package Details: tribler 7.0.0a3-1 Dependencies (20) - libsodium (libsodium-git) - libtorrent-rasterbar (libtorrent-rasterbar-109, libtorrent-rasterbar-1_0-git) - phonon-qt5-vlc - python2-apsw - python2-chardet - python2-cherrypy - python2-configobj - python2-cryptography - python2-decorator - python2-feedparser - python2-m2crypto - python2-netifaces - python2-pillow - python2-plyvel - python2-pyqt5 - python2-requests (python2-requests-git) - python2-twisted - qt5-svg (qt5-svg-git) - python2-setuptools (make) - vlc (vlc-clang-git, vlc-decklink, vlc-git, vlc-nightly, vlc-qt5) (optional) – for internal video player Required by (0) Sources (1) Latest Comments gmes78 commented on 2017-01-29 20:19 xantares commented on 2016-03-04 14:24 @liambluebox, I believe it's fixed now liambluebox commented on 2016-03-04 00:38 I'm getting this error from a clean install: Tribler version: 6.5.0 Traceback (most recent call last): File "Tribler/Main/tribler_main.py", line 183, in __init__ s.start() File "Tribler/Core/Session.py", line 419, in start self.lm.register(self, self.sesslock, autoload_discovery=self.autoload_discovery) File "Tribler/Core/APIImplementation/LaunchManyCore.py", line 104, in register from Tribler.Core.leveldbstore import LevelDbStore type: No module named leveldbstore Talion commented on 2015-12-27 20:56 The first day I began using Tribler v6.4.3, I received a crapload of DCMA notices from my ISP, logging my static IP address, for a bunch of x-rated videos I never heard of and didn't attempt to download. I was using no other software capable of such a thing, and I live in the middle of nowhere on a ranch, so it is not possible for anyone to hack my weak wifi signal; the finger points at Tribler. 
Looking at the Tribler site forums, I found others having the same problem, and one of the answers to a related post was this (); "I assume you are running current stable version (v6.4.3) Did you press "accept" on the popup window asking you if you wanted to act as an exit node the first time you start Tribler? It explains what happens when you do. (You become an exit node for chunks of other Tribler peers downloads)" No such popup appeared for me, nor did any other warning, question or indication regarding exit nodes, so it seems I was unintentionally, unknowingly, acting as an ISP and IP Visible Exit Node for porn and other downloads. Tribler establishing your system as an exit node is the default/enabled behavior. This behavior can only be changed by manually editing the config file, as this version does not include that preference in settings. This is dangerous software. Steer clear until these problems are fixed, allegedly in the next version; even then, I would monitor their forums for quite some time before using a new release. xantares commented on 2015-11-30 15:23 @D-Worak cherrypy, plyvel and decorator are listed as dependencies. D-Worak commented on 2015-11-30 13:35 Here is another update. I was able to run this version of tribler which I download from AUR, but I need to do some steps, and I will post here so the uploader can update the installation script for arch linux. Step 1. Download and install python2-pip from AUR $ yaourt -S python2-pip Step 2. Install cherrypy for python 2.7 $ sudo pip2.7 install cherrypy Step 3. Install plyvel for python 2.7 $ sudo pip2.7 install plyvel Step 4. Install decorator for python 2.7 $ sudo pip2.7 install decorator All right, thats the end. Just run tribler and be happy. D-Worak commented on 2015-11-30 12:16 Ok, so this is the correct error for my system, I found the script sending the error log to /tmp. 
Here it comes:

    $ cat /tmp/$USER-tribler-bA160Y9t.log
    Unable to load logging config from ''/usr/share/tribler/logger.conf'' file: No section: 'formatters'
    Current working directory: u'/usr/share/tribler'
    File doesn't exist
      55, in <module>
        from Tribler.Core.Session import Session
      File "/usr/share/tribler/Tribler/Core/Session.py", line 11, in <module>
        from Tribler.Core.APIImplementation.LaunchManyCore import TriblerLaunchMany
      File "/usr/share/tribler/Tribler/Core/APIImplementation/LaunchManyCore.py", line 19, in <module>
        from Tribler.Core.Video.VideoPlayer import VideoPlayer
      File "/usr/share/tribler/Tribler/Core/Video/VideoPlayer.py", line 21, in <module>
        from Tribler.Core.Video.VideoServer import VideoServer
      File "/usr/share/tribler/Tribler/Core/Video/VideoServer.py", line 14, in <module>
        from cherrypy.lib.httputil import get_ranges
    ImportError: No module named cherrypy.lib.httputil

I think the issue here is that in the current Python installation on Arch Linux, cherrypy is installed for Python 3.5; that explains why cherrypy is not found. But unfortunately, as I mentioned before, I don't have enough Python programming knowledge to work around it. I hope this information helps in solving it.

xantares commented on 2015-11-27 09:30
I'll try to fix it by going back to the python2 package set and the latest stable.

silent commented on 2015-11-27 02:16

    Unable to load logging config from ''/usr/share/tribler/logger.conf'' file: No section: 'formatters'
    Current working directory: u'/usr/share/tribler'
    File doesn't exist
    2015-11-27 03:09:09,352 [ERROR] Unable to use wxversion installed wxversions: ['3.0-gtk2']
    Traceback (most recent call last):
      File "Tribler/Main/tribler.py", line 50, in <module>
        wxversion.select("2.8-unicode")
      File "/usr/lib/python2.7/site-packages/wxversion.py", line 152, in select
        raise VersionError("Requested version of wxPython not found")
    VersionError: Requested version of wxPython not found
      26, in <module>
        import M2Crypto  # Not a useless import! See above.
    ImportError: No module named M2Crypto

D-Worak commented on 2015-11-27 01:10
I got no error and no output for this AUR version of tribler; it just doesn't start. However, I've tried to download and compile from git, but I'm stuck with this error message:

    ImportError: No module named cherrypy.lib.httputil

I'm not a Python programmer, but cherrypy is definitely installed on my system; I tested the import in a separate program. Strange. I don't know where to go; I'm out of ideas.
https://aur.archlinux.org/packages/tribler/?comments=all
CC-MAIN-2017-09
en
refinedweb
The Transport Authority is implementing a new Road Pricing system. The authorities decided that cars will be charged based on distance travelled, on a per-mile basis. A car will be charged $0.50/mi, a van $2.1/mi, and taxis travel for free. Create a function to determine how much a particular vehicle would be charged for a particular distance. The function should take as input the type of the car and the distance travelled, and return the charged price.

    def Road_Pricing():
        x = float(input("How many miles is driven?"))
        y = (input("What car was driven?"))
        if "car":
            print (.50*x)
        if "van":
            print (2.1*x)
        if "taxi":
            print ("Free")

    Road_Pricing()

The requirement is (emphasis mine):

    ...... The function should take as input the type of the car and the distance travelled, and return the charged price.

This means the function should accept the car type and the distance as parameters and return the price, rather than reading input and printing inside the function. Another problem in your code is that the expressions in your if statements aren't checking the value of the car type: a non-empty string literal like "car" is always true, so every branch runs no matter what was entered. Also, you should use more meaningful variable names (for example, distance and car_type instead of x and y).

    def road_pricing(car_type, distance):
        if car_type == "car":
            return .50 * distance
        if car_type == "van":
            return 2.1 * distance
        if car_type == "taxi":
            return 0

    car_type = raw_input("What car was driven? ")
    distance = float(input("How many miles is driven? "))
    print road_pricing(car_type, distance)
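For comparison, the same logic can be kept in a single rate lookup table. This variant is not from the original answer (the RATES name is made up here) and is written for Python 3:

```python
# Per-mile rates; taxis travel for free.
RATES = {"car": 0.50, "van": 2.1, "taxi": 0.0}

def road_pricing(car_type, distance):
    """Return the charge for `distance` miles in a vehicle of type `car_type`."""
    return RATES[car_type] * distance

print(road_pricing("car", 10))   # -> 5.0
print(road_pricing("van", 10))
print(road_pricing("taxi", 3))   # -> 0.0
```

Adding a new vehicle type then only requires a new entry in the dictionary, not a new if branch.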
https://codedump.io/share/Zwwb0XYfjUA1/1/creating-a-function-for-road-pricing
This page was copied from. Once we're done here the content should be put back to XML and published on the website.

Back to FOPProjectPages.

Authors: VictorMote (wvm), JeremiasMaerki (jm)

Glossary

output handler: A set of classes making up an implementation of an output format (i.e. not just the renderer, but for example the PDF Renderer plus the PDF library)

rendering run: One instance of the process of converting an XSL:FO document to a target format like PDF (or multiple target formats at once)

rendering instance: One instance of the process of converting an XSL:FO document to exactly one target format. Note that there may be multiple rendering instances that are part of one rendering run.

typeface: A set of bitmap or outline information that defines glyphs for a set of characters. For example, Arial Bold might be a typeface. This concept is often also called a "font", which we have defined somewhat differently (see below).

typeface family: A group of related typefaces having similar characteristics. For example, one typeface family might include the following typefaces: Arial, Arial Bold, Arial Italic, and Arial Bold Italic. A typeface family may be named in a way ambiguous with its members -- for example, the family mentioned in the previous sentence might also be named "Arial".

font: A typeface rendered at a specific size. For example -- Arial, Bold, 12pt.

Goals

- refactor existing font logic for better clarity and to reduce duplication
- The design should be in concert with the considerations for Avalonization
- parse registered font metric information on-the-fly (to make sure most up-to-date parsing is used??)
- resolve whether the FontBBox, StemV, and ItalicAngle font metric information is important or not -- if so, parse the .pfb (or .pfa) file to extract it when building the FOP XML metric file (Adobe Type 1 fonts only) [1]
- handle fonts registered at the operating system (through AWT)
- handle fonts that are simply available on the target format (Base 14 fonts for PDF and PostScript, pre-installed fonts for PCL etc.)
- Support various file-based font formats:
- Allow for font substitution [3]
- We probably have to support fixed-size fonts for several renderers: Text, maybe PCL, Epson LQ etc.
- Optional: Make it possible to use multiple renderers in one run (create PDF and PS at the same time) - How important is that?

Issues

- Why are we using our own font metric parsing and registration system, instead of the AWT system provided as part of Java?
  - Answer 0: We must handle default fonts for the target format, like the standard PDF fonts or default PS printer fonts, which may not be available either from the system/AWT or as a file.
  - Answer 1: Many of our customers use FOP in a so-called "headless" server environment -- that is, the operating system is operating in character mode, with no concept of a graphical environment. We need some mechanism of allowing these environments to get font information. [12]
  - Answer 2: At some level, we don't yet fully trust AWT to handle fonts correctly. There are still unresolved discrepancies between the two systems.
- What about fonts for output formats using the structure handler (RTF, MIF)? Do they need access to the font subsystem? [4]
- Supporting multiple output formats per rendering run has a few consequences. The layout (line breaks, line height, page breaks etc.) is influenced by font metrics. Two fonts with the same name (but one TrueType and one Type 1) may have different font metrics, thus leading to a different layout. [5]
- The set of available fonts cannot be provided by the renderers anymore.
  A central registry is needed. A selector has to decide which fonts are available for a set of renderers to be used. [6]
- Two renderers (although using the same area tree) may produce slightly different looking output.
- What to do when a font is not available to one of the, say, two target output formats? Or what to do when a font is available from two font sources but each output handler supports only one of these (and the font metrics are different)? [7]
- Font substitution: PANOSE comes to my mind. What's that exactly? [8] Can we use/implement that?

Design

Concern areas

There are several concern areas within FOP:
- provision of font metrics to be used by the layout engine
- registration and embedding of fonts by the output handlers (such as PDF)
- Management of multiple font sources (file-based, AWT...)
- Selection of fonts (fonts that can be used in a rendering run, substituted fonts)
- Parsing of file-based fonts (Type 1, TrueType, OpenType etc.)

Thoughts

Central font registry

Until now each renderer set up a FontInfo object containing all available fonts. Some renderers then used the font setup from other renderers (ex. PostScript the one from PDF). That indicates that there are merely various font sources from which output handlers support a subset.

So the font management should be extracted from the renderers and made standalone, hence the idea of a central font registry. The font registry would manage various font sources (AWT fonts, Type 1 fonts, TrueType fonts). Then, there's an additional layer needed between the font registry and the layout engine. I call it the font selector (for now), because it determines the subset of fonts available to the layout engine based on the set of output handlers (one or more) to be used in a rendering run. It could also do font substitution as the layout engine looks up fonts and selects the most appropriate font.
If a font is available from more than one font source, the font selector should select the one favoured by the output handler(s). This means that we need some way for the output handlers to express a priority on font sources.

The font selector will have to pass to the output formats which fonts have been used during layout.

Common Resources

There are some possible efficiencies to be gained by using one FOP session to generate multiple documents, and by generating multiple output formats for one document. However, to accomplish this, we probably need to explicitly distinguish which resources are available to / used by a Session, a Document (rendering run), and a Rendering Instance.

Proposed Interface for Fonts (wvm)

We wish to design a unified font interface that will be available to all other sections of FOP. In other words, the interface will provide all needed information about the font to other sections of FOP, while hiding all details about what type of font it is, etc.

    /**
     * Hidden from FOP-general.
     * Implementations of this for AWT, Base14 fonts, Custom, etc.
     * All implementations of this interface are hidden from FOP-general.
     */
    package interface TypeFace {
        package getEmbeddingStream() {}
    }

    public class Font {
        private TypeFace typeface;

        /** Consults list of available TypeFace and Font objects in the Session,
         *  and returns appropriate Font if it exists, otherwise creates it, updating Session
         *  and Document lists to keep track of available fonts and embedding information */
        public static StreamOfSomeSort getFontRendering(Document document) {}

        /** some accessor methods */
        String getTypeFace();
        String getStyle();
        int getWeight();
        int getSize();

        /** The following methods are either computed by the implementations of the TypeFace
         *  interface, or are computed based on information returned from the TypeFace interface. */
        // These methods already take font size into account
        int getAscender();
        int getDescender();
        int getCapHeight(); // more...
        int getWidth(char c);
        boolean hasKerningAvailable(); // more...
    }

Jeremias and I (wvm) have gone around a bit about whether the Font should be a class or an interface. The place where the interface is needed is at the TypeFace level, which is where the differences between AWT, TrueType custom, PostScript custom, etc. exist. The rest of FOP needs only the following:
- the provideFont() method to obtain a Font object which can provide metrics information
- a way to get the actual font for embedding purposes. This is done through the FontFamily interface, which has methods for returning needed embedding information. However, the interface is never exposed to FOP-general. Instead a static method (in Font) handles all of that.
- collections of fonts used, fonts to be embedded, etc.
  are stored in the Session and Document concept objects respectively, where they can be obtained by getFontRendering(). So while the interface for TypeFace is good (to handle the many variations), any attempt to expose it to FOP-general makes FOP-general's interaction with fonts more complex than it needs to be.

Hardware fonts

Definition: "hardware" fonts are fonts implicitly available on a target platform (printer or document format) without the need to do anything to make them available.

Examples: The PDF and PostScript specifications both define a Base 14 font set which consists of Helvetica, Times, Courier, Symbol and ZapfDingbats. PCL printers also provide a common set of fonts available without uploading anything.

The Base 14 fonts are already implemented in FOP. We have their font metrics as XML files that will be converted to Java classes through XSLT. The same must be done for all other "hardware" fonts supported by the different output handlers. The layout engine simply needs some font metrics to do the layout.

OpenType Fonts

OpenType font files come in three varieties: 1) TrueType font files, 2) TrueType collections, and 3) wrappers around CFF (PostScript) fonts. The first two varieties can be treated just like normal TTF and TTC files. The font metric information for all three is stored in identical (i.e. TrueType) tables. The CFF files require a small amount of additional logic to unwrap the contents and find the appropriate tables.

How to define a "font" in various contexts

A "font" can have various definitions in different contexts:
- The layout engine needs a font defined as: font-family (Helvetica), font-style (oblique), font-weight (bold), font-size (12pt). In this context the layout engine will also use information like text-decoration (underline etc.).
- Font sources will probably deal with fonts defined by: font-family, font-style, font-weight.
  But there are things to consider: Type 1 fonts normally define a font exactly this way (example: the files FTR_.pfb and FTR_.pfm together define the "Frutiger 55 Roman" font). But the Type 1 spec defines a multiple master extension allowing more than one flavour of a font in one font file (ex. a regular and a bold font). I think the same is possible for TrueType. To handle fonts like this in a family/style/weight manner, we would need a facade that points to a particular flavour of a multiple master font, so one such font would result in multiple facades pointing to it. [9] [10] These fonts are generally scalable (at least AWT, T1, TT and OT are). What if we need to support fixed-size fonts for a text, PCL or Epson LQ renderer? [11]

[1] It is important, because these values are used to create the font descriptor in PDF. If these values are wrong you get error messages from Acrobat Reader. (jm) See where this issue is discussed in a note. If I understand the note correctly, we need to read in the pfb file to get this information. (wvm)

[2] Actually OpenType is the unified next-generation font format that also replaces Type 1 fonts. An OpenType font contains more information than either a Type 1 or TrueType font contains. It can wrap either of the two methods for describing the font outline information. (wvm)

[3] Please define what is meant by this term. Are we talking about which font to use when we can't find the one that is requested? (wvm)

[4] I am not sure about RTF, but I think that MIF will need some access to this information. (wvm) <wvm date="20030718"> OK, I think I was wrong about this. The StructureRenderers should be able to get everything they need about fonts directly from the XSL-FO input. If they need to aggregate similar fonts, or track which ones have been used, they should do that themselves.</wvm>

[5] My thought is that this should never happen.
If the font registry is centralized, then when "XYZ, bold, 12pt" is requested, the same font should be selected every time. (wvm)

[6] I envision this information to be stored in the appropriate objects -- Session, Document, or RenderingInstance (these are concepts, not class names, because classes filling these concepts may already exist). Session should either be static or a singleton, and includes a list of all fonts (actually probably typefaces) used in this session. Document may not need to list anything, but RenderingInstance needs to know which fonts need to be embedded, among other things. (wvm)

[7] I think I am against allowing this. We would first need to resolve how to register hardware fonts & get their metric information, which seems almost impossible. Then we would have to build a mechanism that maps font sources to output media -- PDF can use software fonts, but not hardware. PCL can use hardware, and depending on the printer, perhaps can use downloadable software fonts as well. This seems like an ugly, slippery slope, at least for a 1.0 release. I think it better to say that we support only soft fonts, and let the user build a workaround. The other really ugly aspect of this is that if you allow two different fonts to be used for two different rendering contexts, I think you have to have two different area trees to handle the layout differences. (wvm)

[8] See. (wvm)

[9] Actually, multiple masters are used to generate specific instances of .pfm and .pfb files, so these live in separate files. We do have an issue with .cff (Compact Font Format, which contains multiple Type 1 faces), and .ttc (TrueType Collection, which contains multiple TrueType faces). Until we have parsing tools, these are really unusable to us. OpenType fonts have native support for multiple typefaces within a font file, and I think they support both of these formats.
(wvm)

[10] With regard to the design aspect, when a font object is requested by the layout classes, the same object should always be returned for the same basic information that is passed. (wvm)

[11] A fixed-size font can be thought of as merely an instance at a specific point size of a typeface. So, for text, we probably need to take the first point size passed to us & use that throughout, spitting out an error if other point sizes are subsequently used. For the others, I suppose we have to first resolve the issues of whether/how to support hardware fonts. (wvm)

[12] I don't think this is really a strong argument unless we refrain from using Batik for SVGs too. (pij) Response from wvm: <wvm> I don't understand this comment. The point is that we use our own registry for fonts because it is the only way to get font information in a headless environment. </wvm>
https://wiki.apache.org/xmlgraphics-fop/FOPFontSubsystemDesign
Hi,

Basically I want to put my responses for each of the outputs of my loop (answers yes or no to different contrast numbers) into an empty list, so I would have a list for each contrast corresponding to how many yes's or no's there were.

    from observerClass import observer
    import numpy

    obs1 = observer(0.2, 0.05)
    obs2 = observer(0.1, 0.05)

    for cont in numpy.linspace(0.0, 1.0, 10):
        print 'cont is', cont
        for repN in range(10):
            print cont, obs1.getResponse(cont)  # <--- this generates a yes or no to the cont

Any help would be much appreciated!
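No answer appears in the thread above, but one common way to do what the question describes is a dictionary mapping each contrast to its own list of responses. Since observerClass isn't shown, the Observer class below is a made-up stand-in with a toy getResponse(); only the collection pattern is the point (written for Python 3):

```python
import random

class Observer:
    """Stand-in for the observer class from the question (not the real one)."""
    def __init__(self, threshold, slope):
        self.threshold = threshold
        self.slope = slope

    def getResponse(self, cont):
        # Toy rule: answer "yes" more often at higher contrasts.
        return "yes" if random.random() < min(1.0, cont + self.slope) else "no"

obs1 = Observer(0.2, 0.05)

responses = {}  # contrast -> list of "yes"/"no" answers
for cont in [i / 9 for i in range(10)]:   # same 10 values as numpy.linspace(0.0, 1.0, 10)
    responses[cont] = []                  # start an empty list for this contrast
    for repN in range(10):
        responses[cont].append(obs1.getResponse(cont))

for cont, answers in sorted(responses.items()):
    print(cont, answers.count("yes"), "yes /", answers.count("no"), "no")
```

Each contrast then has its own list, and counting the yes/no answers per contrast is just `answers.count("yes")`.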
https://www.daniweb.com/programming/software-development/threads/399562/adding-loop-output-to-empty-list
Several folks have been complaining for a while about action links not being properly generated when MVC routes are mixed with service routes in the same site. It turns out there is a solution: don't use ServiceRoute. Instead, create the derived route below and use it. [Kudos to Phil Haack and Clemens Vasters for this]

    public class WebApiRoute : ServiceRoute
    {
        public WebApiRoute(string routePrefix,
                           ServiceHostFactoryBase serviceHostFactory,
                           Type serviceType)
            : base(routePrefix, serviceHostFactory, serviceType)
        {
        }

        public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
        {
            return null;
        }
    }

The link generation functionality uses the GetVirtualPath method to generate links. If you make it return null, you are opting Web API routes out of link generation and everything should work.

Now to register your routes using this model, don't use MapServiceRoute. Instead, add to your route table manually this way:

    var config = new HttpHostConfiguration(); // create your config object
    var factory = new HttpConfigurableServiceHostFactory(config);
    routes.Add(new WebApiRoute("Foo", factory, typeof(Foo)));

It's a little bit more code, but not too much and hey, it fixes the problem. This will be fixed in the next drop, which YES, we are working on.
http://codebetter.com/glennblock/2011/08/05/integrating-mvc-routes-and-web-api-routes-2/
Next Chapter: Regular Expressions, Advanced Regular Expressions

The aim of this chapter of our Python tutorial is to present a detailed and descriptive introduction into regular expressions. This introduction will explain the theoretical aspects of regular expressions and will show you how to use them in Python scripts.

The term "regular expression", sometimes also called regex or regexp, originated in theoretical computer science. In theoretical computer science, regular expressions are used to define a language family with certain characteristics, the so-called regular languages. A finite state machine (FSM), which accepts the language defined by a regular expression, exists for every regular expression. You can find an implementation of a finite state machine in Python on our website.

Regular expressions are used in programming languages to filter texts or text strings. It's possible to check if a text or a string matches a regular expression. A great thing about regular expressions: the syntax of regular expressions is the same for all programming and script languages, e.g. Python, Perl, Java, SED, AWK and even X#. The first programs which incorporated the capability to use regular expressions were the Unix tools ed (editor), the stream editor sed and the filter grep.

There is another mechanism in operating systems which shouldn't be mistaken for regular expressions. Wildcards, also known as globbing, look very similar in their syntax to regular expressions, but the semantics differ considerably. Globbing is known from many command line shells, like the Bourne shell, the Bash shell or even DOS. In Bash, e.g., the command "ls *.txt" lists all files (or even directories) ending with the suffix .txt; in regular expression notation "*.txt" wouldn't make sense, it would have to be written as ".*\.txt"

Introduction

When we introduced the sequential data types, we got to know the "in" operator.
We check in the following example if the string "easily" is a substring of the string "Regular expressions easily explained!":

    >>> s = "Regular expressions easily explained!"
    >>> "easily" in s
    True
    >>>

We show step by step with the following diagrams how this matching is performed. We check if the string sub is contained in the string s. By the way, the string sub = "abc" can be seen as a regular expression, just a very simple one.

In the first place, we check if the first positions of the two strings match, i.e. s[0] == sub[0]. This is not satisfied in our example. We mark this fact by the colour red:

Then we check if s[1:4] == sub. This means that we have to check at first if sub[0] is equal to s[1]. This is true and we mark it with the colour green. Then we have to compare the next positions. s[2] is not equal to sub[1], so we don't have to proceed further with the next positions of sub and s:

Now we have to check if s[2:5] and sub are equal. The first two positions are equal but not the third:

The following steps should be clear without any explanations:

Finally, we have a complete match with s[4:7] == sub:

A Simple Regular Expression

We already said in the previous section that we can see the variable "sub" from the introduction as a very simple regular expression. If you want to use regular expressions in Python, you have to import the re module, which provides methods and functions to deal with regular expressions.

Representing Regular Expressions in Python

From other languages you might be used to representing regular expressions within slashes "/", e.g. that's the way Perl, SED or AWK deal with them. In Python there is no special notation. Regular expressions are represented as normal strings. But this convenience brings along a small problem: the backslash is a special character used in regular expressions, but it is also used as an escape character in strings. This can give rise to extremely clumsy expressions. E.g.
a backslash in a regular expression has to be written as a double backslash, because the backslash functions as an escape character in regular expressions. Therefore it has to be quoted. The same is true for Python strings: the backslash has to be quoted by a backslash. So, a regular expression to match the Windows path "C:\programs" corresponds to a string in regular expression notation with four backslashes, i.e. "C:\\\\programs". The best way to overcome this problem consists in marking regular expressions as raw strings. The solution to our Windows path example looks as a raw string like this:

    r"C:\\programs"

Let's look at another example, which might be quite disturbing for people used to wildcards:

    r"^a.*\.html$"

The regular expression of our previous example matches all file names (strings) which start with an "a" and end with ".html". We will explain in the following sections the structure of the example above in detail.

Syntax of Regular Expressions

Let's start with a very simple example: the regular expression "cat". The idea of this example is to match strings containing the word "cat". We are successful with this, but unfortunately we are matching a lot of other words as well. If we match "cats" in a string that might be still okay, but what about all those words containing the character sequence "cat"? We match words like "education", "communicate", "falsification", "ramifications", "cattle" and many more. This is a case of "over matching", i.e. we receive positive results which are wrong according to the problem we want to solve. We have illustrated this problem in the diagram on the right side. The dark green circle C corresponds to the set of "objects" we want to recognize. But instead we match all the elements of the set O (blue circle). C is a subset of O. The set U (light green circle) in this diagram is a subset of C. U is a case of "under matching", i.e. the regular expression is not matching all the intended strings.
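The over matching described above is easy to reproduce in a few lines, using the word examples from the text:

```python
import re

words = ["cat", "education", "communicate", "falsification", "cattle", "dog"]
for word in words:
    if re.search("cat", word):
        # "cat" is found as a substring inside each of these words
        print(word, "matches")
```

Every word except "dog" matches, even though only "cat" itself was intended.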
If we try to fix the previous RE so that it doesn't create over matching, we might try the expression r" cat ". These blanks prevent the matching of the above-mentioned words like "education", "falsification" and "ramification", but we fall prey to another mistake. What about the string "The cat, called Oscar, climbed on the roof."? The problem is that we don't expect a comma but only a blank behind the word "cat". We will learn a solution in a later section.

Before we go on with the description of the syntax of regular expressions, we want to explain how to use them in Python:

    >>> import re
    >>> x = re.search("cat","A cat and a rat can't be friends.")
    >>> print(x)
    <_sre.SRE_Match object at 0x7fd4bf238238>
    >>> x = re.search("cow","A cat and a rat can't be friends.")
    >>> print(x)
    None

In the previous example we had to import the module re to be able to work with regular expressions. Then we used the method search from the re module. This is most probably the most important and the most often used method of this module. re.search(expr, s) checks a string s for an occurrence of a substring which matches the regular expression expr. The first substring (from the left) which satisfies this condition will be returned. If a match has been possible, we get a so-called match object as a result, otherwise the value None. This method is already enough to use regular expressions in Python programs:

    >>> if re.search("cat","A cat and a rat can't be friends."):
    ...     print("Some kind of cat has been found :-)")
    ... else:
    ...     print("No cat has been found :-)")
    ...
    Some kind of cat has been found :-)
    >>> if re.search("cow","A cat and a rat can't be friends."):
    ...     print("Cats and Rats and a cow.")
    ... else:
    ...     print("No cow around.")
    ...
    No cow around.

Any Character

Let's assume that we are not interested, as in the previous example, in recognizing the word "cat", but in all three-letter words which end with "at". The syntax of regular expressions supplies a metacharacter ".", which is used like a placeholder for "any character".
The regular expression of our example can be written like this:

    r" .at "

This RE matches three-letter words, isolated by blanks, which end in "at". Now we get words like "rat", "cat", "bat", "eat", "sat" and many others. But what if the text contains "words" like "@at" or "3at"? These words match as well, and this means that we have created over matching again. We will learn a solution in the following section:

Character Classes

Square brackets, "[" and "]", are used to include a character class. [xyz] means e.g. either an "x", a "y" or a "z". Let's look at a more practical example:

    r"M[ae][iy]er"

This is a regular expression which matches a surname that is quite common in German: a name with the same pronunciation and four different spellings: Maier, Mayer, Meier, Meyer

A finite state automaton to recognize this expression can be built like this:

The graph of the finite state machine (FSM) is simplified to keep the design easy. There should be an arrow in the start node pointing back on itself, i.e. if a character other than an uppercase "M" has been processed, the machine should stay in the start condition. Furthermore, there should be an arrow pointing back from all nodes except the final nodes (the green ones) to the start node, if the expected letter has not been processed. E.g. if the machine is in state Ma, after having processed an "M" and an "a", the machine has to go back to state "Start" if any character except "i" or "y" is read. Those who have problems with this FSM shouldn't be bothered, because it is not necessary to understand it for the things to come.

Instead of a choice between two characters, we often need a choice between larger character classes. We might need e.g. a class of letters between "a" and "e" or between "0" and "5". To manage such character classes, the syntax of regular expressions supplies the metacharacter "-". [a-e] is a simplified writing for [abcde], and [0-5] denotes [012345].
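The M[ae][iy]er expression can be tried out directly; the two non-matching names at the end of the list are made up here for contrast:

```python
import re

names = ["Maier", "Mayer", "Meier", "Meyer", "Mair", "Meyr"]
for name in names:
    if re.search(r"M[ae][iy]er", name):
        print(name, "matches")
    else:
        print(name, "does not match")
```

Only the four spellings accepted by the character classes match; "Mair" and "Meyr" are each missing one of the required letters.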
The advantage is obvious and even more impressive if we have to coin expressions like "any uppercase letter" into regular expressions. So instead of [ABCDEFGHIJKLMNOPQRSTUVWXYZ] we can write [A-Z]. If this is not convincing: write an expression for the character class "any lowercase or uppercase letter": [A-Za-z]

There is something more about the dash, which we used to mark the beginning and the end of a character range. The dash has only a special meaning if it is used within square brackets, and in this case only if it isn't positioned directly after an opening or immediately in front of a closing bracket. So the expression [-az] is only the choice between the three characters "-", "a" and "z", but no other characters. The same is true for [az-].

Exercise: What character class is described by [-a-z]?

Answer: The character "-" and all the characters "a", "b", "c" all the way up to "z".

The only other special character inside square brackets (character class choice) is the caret "^". If it is used directly after an opening square bracket, it negates the choice. [^0-9] denotes the choice "any character but a digit". The position of the caret within the square brackets is crucial. If it is not positioned as the first character following the opening square bracket, it has no special meaning.

[^abc] means anything but an "a", "b" or "c"
[a^bc] means an "a", "b", "c" or a "^"

A Practical Exercise in Python

Before we go on with our introduction into regular expressions, we want to insert a practical exercise with Python. We have a phone list of the Simpsons, yes, the famous Simpsons from the American animated TV series. There are some people with the surname Neu. We are looking for a Neu, but we don't know the first name; we just know that it starts with a J. Let's write a Python script which finds all the lines of the phone book which contain a person with the described surname and a first name starting with J.
If you don't know how to read and work with files, you should work through our chapter File Management. So here is our example script:

import re
fh = open("simpsons_phone_book.txt")
for line in fh:
    if re.search(r"J.*Neu", line):
        print(line.rstrip())
fh.close()

Predefined Character Classes

You might have realized that it can be quite cumbersome to construct certain character classes. A good example is the character class which describes a valid word character. These are all lowercase and uppercase characters plus all the digits and the underscore, corresponding to the following regular expression: r"[a-zA-Z0-9_]". The special sequences consist of "\\" and a character from the following list:

\d  matches any decimal digit; equivalent to [0-9]
\D  matches any non-digit character; equivalent to [^0-9]
\s  matches any whitespace character; equivalent to [ \t\n\r\f\v]
\S  matches any non-whitespace character; equivalent to [^ \t\n\r\f\v]
\w  matches any alphanumeric character and the underscore; equivalent to [a-zA-Z0-9_]
\W  matches any non-word character; equivalent to [^a-zA-Z0-9_]
\b  matches the empty string at the beginning or end of a word
\B  matches the empty string, but not at the beginning or end of a word
\A  matches only at the start of the string
\Z  matches only at the end of the string

Word boundaries

The \b and \B of the previous overview of special sequences are often not properly understood, or even misunderstood, especially by novices. While the other sequences match characters - e.g. \w matches characters like "a", "b", "m", "3" and so on - \b and \B don't match a character. They match empty strings depending on their neighbourhood, i.e. on what kind of characters the predecessor and the successor are. So \b matches any empty string between a \W and a \w character and also between a \w and a \W character. \B is the complement, i.e. empty strings between \W and \W or empty strings between \w and \w. We will get to know further "virtual" matching characters, i.e. the caret (^), which is used to mark the beginning of a string, and the dollar sign ($), which is used to mark the end of a string, respectively. \A and \Z, which can also be found in our previous overview, are very seldom used alternatives to the caret and the dollar sign.

Matching Beginning and End

As we have seen previously in this introduction, the expression r"M[ae][iy]er" is capable of matching various spellings of the name Mayer, and the name can be anywhere in the string:

>>> import re
>>> if re.search(r"M[ae][iy]er", line):
...     print("I found one!")
...
I found one!
>>>

But what if we want to match a regular expression at the beginning of a string and only at the beginning? The re module of Python provides two functions to match regular expressions. We have already met one of them, i.e. search(). The other has, in our opinion, a misleading name: match(). Misleading, because match(re_str, s) checks for a match of re_str merely at the beginning of the string. But anyway, match() is the solution to our question, as we can see in the following example:

>>> print(re.search(r"M[ae][iy]er", s1))
<_sre.SRE_Match object at 0x7fc59c5f26b0>
>>> print(re.match(r"M[ae][iy]er", s1))
<_sre.SRE_Match object at 0x7fc59c5f26b0>
>>> print(re.match(r"M[ae][iy]er", s2))
None
>>>

So, this is a way to match the start of a string, but it's a Python-specific method, i.e. it can't be used in other languages like Perl, AWK and so on. There is a general solution which is a standard for regular expressions: the caret '^' matches the start of the string, and in MULTILINE mode (which will be explained further down) it also matches immediately after each newline, which the Python method match() doesn't do. The caret has to be the first character of a regular expression:

>>> print(re.search(r"^M[ae][iy]er", s2))
None

But what happens if we concatenate the two strings s1 and s2 in the following way?

s = s2 + "\n" + s1

Now the string doesn't start with a Maier of any kind, but the name follows a newline character:

>>> s = s2 + "\n" + s1
>>> print(re.search(r"^M[ae][iy]er", s))
None
>>>

The name hasn't been found, because only the beginning of the string is checked. This changes if we use the multiline mode, which can be activated by adding the following third parameter to search:

>>> print(re.search(r"^M[ae][iy]er", s, re.MULTILINE))
<_sre.SRE_Match object at 0x7fc59c5f26b0>
>>> print(re.search(r"^M[ae][iy]er", s, re.M))
<_sre.SRE_Match object at 0x7fc59c5f26b0>
>>> print(re.match(r"^M[ae][iy]er", s, re.M))
None
>>>

The previous example also shows that the multiline mode doesn't affect the match method.
match() never checks anything but the beginning of the string for a match. We have learnt how to match the beginning of a string. What about the end? Of course, that's possible too. The dollar sign "$" is used as a metacharacter for this purpose. '$' matches at the end of a string or just before a newline at the end of the string; in MULTILINE mode, it also matches before every newline. We demonstrate the usage of the "$" character in the following example:

>>> print(re.search(r"Python\.$", "I like Python."))
<_sre.SRE_Match object at 0x7fc59c5f26b0>
>>> print(re.search(r"Python\.$", "I like Python and Perl."))
None
>>> print(re.search(r"Python\.$", "I like Python.\nSome prefer Java or Perl."))
None
>>> print(re.search(r"Python\.$", "I like Python.\nSome prefer Java or Perl.", re.M))
<_sre.SRE_Match object at 0x7fc59c5f26b0>
>>>

Optional Items

If you thought that our collection of Mayer names was complete, you were wrong. There are other ones all over the world, e.g. in London and Paris, who dropped their "e". So we have four more names ["Mayr", "Meyr", "Meir", "Mair"] plus our old set ["Mayer", "Meyer", "Meier", "Maier"]. If we try to figure out a fitting regular expression, we realize that we are missing something: a way to tell the computer "this 'e' may or may not occur". A question mark is used as a notation for this. A question mark declares that the preceding character or expression is optional. The final Mayer recognizer now looks like this: r"M[ae][iy]e?r". A subexpression is grouped by round brackets, and a question mark following such a group means that this group may or may not exist. With the following expression we can match dates like "Feb 2011" or "February 2011": r"Feb(ruary)? 2011"

Quantifiers

If you use just what we have introduced so far, you will still miss a lot of things, above all some way of repeating characters or regular expressions. For this purpose, quantifiers are used. We have encountered one in the previous paragraph, i.e. the question mark.
A quantifier after a token, which can be a single character or a group in brackets, specifies how often that preceding element is allowed to occur. The most common quantifiers are:
- the question mark ?
- the asterisk or star character *, which is derived from the Kleene star
- the plus sign +, derived from the Kleene cross

We have already used one of these quantifiers without explaining it, i.e. the asterisk. A star following a character or a subexpression group means that this expression or character may be repeated arbitrarily often, even zero times.

r"[0-9]*"

The above expression matches any sequence of digits, even the empty string.

r".*"

matches any sequence of characters and the empty string.

Exercise: Write a regular expression which matches strings that start with a sequence of digits - at least one digit - followed by a blank and after this arbitrary characters.

Solution: r"^[0-9][0-9]* "

So, you used the plus character "+"? That's fine, but in this case you have either cheated by going ahead in the text or you already know more about regular expressions than we have covered in our course :-) Now that we mentioned it: the plus operator is very convenient for solving the previous exercise. The plus operator is very similar to the star operator, except that the character or subexpression followed by a "+" sign has to be repeated at least one time. Here follows the solution to our exercise with the plus quantifier:

Solution with the plus quantifier: r"^[0-9]+ "

If you work for a while with this arsenal of operators, you will inevitably miss at some point the possibility of repeating expressions an exact number of times. Let's assume you want to recognize the last lines of addresses on envelopes in Switzerland. These lines usually contain a four-digit post code followed by a blank and a city name.
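The difference between * and + can be verified quickly (the example strings are invented for illustration):

```python
import re

# "*" allows zero repetitions, so it matches the empty string at the
# start of "abc"; "+" requires at least one digit and therefore fails.
assert re.match(r"[0-9]*", "abc").group() == ""
assert re.match(r"[0-9]+", "abc") is None

# Both quantified patterns match a leading run of digits plus a blank.
assert re.match(r"^[0-9]+ ", "42 is the answer") is not None
assert re.match(r"^[0-9]+ ", "answer: 42") is None
print("quantifier checks passed")
```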
Using + or * is too unspecific for our purpose, and the following expression seems too clumsy: r"^[0-9][0-9][0-9][0-9] [A-Za-z]+". Fortunately, there is an alternative available: r"^[0-9]{4} [A-Za-z]*". Now we want to improve our regular expression. Let's assume that no city name in Switzerland consists of fewer than 3 letters, i.e. every city name has at least 3 letters. We can denote this by [A-Za-z]{3,}. Now we have to recognize lines with German post codes (5 digits) as well, i.e. the post code can now consist of either four or five digits: r"^[0-9]{4,5} [A-Z][a-z]{2,}". The general syntax is {from,to}: this means that the expression has to appear at least "from" times and not more than "to" times. {,to} is an abbreviated spelling for {0,to}, and {from,} is an abbreviation for "at least 'from' times, but with no upper limit".

Grouping

We can group a part of a regular expression by surrounding it with parentheses (round brackets). This way we can apply operators to the complete group instead of a single character.

Capturing Groups and Back References

Parentheses (round brackets) not only group subexpressions but create back references as well. The part of the string matched by the grouped part of the regular expression, i.e. the subexpression in parentheses, is stored in a back reference. With the aid of back references we can reuse parts of regular expressions. These stored values can be reused both inside the expression itself and afterwards, when the regular expression has been executed. Before we continue with our treatise about back references, we want to strew in a paragraph about match objects, which is important for our next examples with back references.

A Closer Look at the Match Objects

So far we have just checked if an expression matched or not. We used the fact that re.search() returns a match object if it matches and None otherwise. We haven't been interested, e.g., in what has been matched.
The match object contains a lot of data about what has been matched: positions and so on. A match object provides the methods group(), span(), start() and end(), as can be seen in the following application:

>>> import re
>>> mo = re.search("[0-9]+", "Customer number: 232454, Date: February 12, 2011")
>>> mo.group()
'232454'
>>> mo.span()
(17, 23)
>>> mo.start()
17
>>> mo.end()
23
>>> mo.span()[0]
17
>>> mo.span()[1]
23
>>>

These methods are not difficult to understand. span() returns a tuple with the start and end position, i.e. the string index where the regular expression started matching in the string and where it ended. The methods start() and end() are in a way superfluous, as the information is already contained in span(): span()[0] is equal to start() and span()[1] is equal to end(). group(), if called without an argument, returns the substring which has been matched by the complete regular expression. With the help of group() we are also capable of accessing the substrings matched by grouping parentheses: to get the substring matched by the n-th group, we call group() with the argument n: group(n). We can also call group() with more than one integer argument, e.g. group(n,m). group(n,m) - provided there exist subgroups n and m - returns a tuple with the matched substrings. group(n,m) is equal to (group(n), group(m)):

>>> import re
>>> mo = re.search("([0-9]+).*: (.*)", "Customer number: 232454, Date: February 12, 2011")
>>> mo.group()
'232454, Date: February 12, 2011'
>>> mo.group(1)
'232454'
>>> mo.group(2)
'February 12, 2011'
>>> mo.group(1,2)
('232454', 'February 12, 2011')
>>>

A very intuitive example is XML or HTML tags. E.g., let's assume we have a file (called "tags.txt") with content like this:

<composer>Wolfgang Amadeus Mozart</composer>
<author>Samuel Beckett</author>
<city>London</city>

We want to rewrite this text automatically to:

composer: Wolfgang Amadeus Mozart
author: Samuel Beckett
city: London

The following little Python script does the trick.
The core of this script is the regular expression. This regular expression works like this: it tries to match a less-than symbol "<". After this it reads lowercase letters until it reaches the greater-than symbol. Everything encountered between "<" and ">" is stored in a back reference which can be accessed within the expression by writing \1. Let's assume \1 contains the value "composer". When the expression has reached the first ">", it continues matching as if the original expression had been "<composer>(.*)</composer>":

import re
fh = open("tags.txt")
for i in fh:
    res = re.search(r"<([a-z]+)>(.*)</\1>", i)
    print(res.group(1) + ": " + res.group(2))

If there is more than one pair of parentheses (round brackets) inside the expression, the back references are numbered \1, \2, \3, in the order of the pairs of parentheses. Exercise: The next Python example makes use of three back references. We have an imaginary phone list of the Simpsons in a list. Not all entries contain a phone number, but if a phone number exists, it is the first part of an entry. Then follows, separated by a blank, a surname, which is followed by first names. Surname and first names are separated by a comma. The task is to rewrite this example in the following way:

Allison Neu 555-8396
C. Montgomery Burns
Lionel Putz 555-5299
Homer Jay Simpson 555-7334

Python script solving the rearrangement problem:

import re
l = ["555-8396 Neu, Allison",
     "Burns, C. Montgomery",
     "555-5299 Putz, Lionel",
     "555-7334 Simpson, Homer Jay"]
for i in l:
    res = re.search(r"([0-9-]*)\s*([A-Za-z]+),\s+(.*)", i)
    print(res.group(3) + " " + res.group(2) + " " + res.group(1))

Named Backreferences

In the previous paragraphs we introduced "capturing groups" and "back references". More precisely, we could have called them "numbered capturing groups" and "numbered back references". Using named capturing groups instead of "numbered" capturing groups allows you to assign descriptive names instead of automatic numbers to the groups.
In the following example, we demonstrate this approach by catching the hours, minutes and seconds from a UNIX date string.

>>> import re
>>> expr = r"\b(?P<hours>\d\d):(?P<minutes>\d\d):(?P<seconds>\d\d)\b"
>>> x = re.search(expr, s)
>>> x.group('hours')
'13'
>>> x.group('minutes')
'47'
>>> x.start('minutes')
14
>>> x.end('minutes')
16
>>> x.span('seconds')
(17, 19)
>>>

Comprehensive Python Exercise

In this comprehensive exercise, we have to bring together the information of two files. In the first file, we have a list of nearly 15,000 lines of post codes with the corresponding city names plus additional information. The other file contains a list of the 19 largest German cities. Each line consists of the rank, the name of the city, the population, and the state (Bundesland):

1. Berlin 3.382.169 Berlin
2. Hamburg 1.715.392 Hamburg
3. München 1.210.223 Bayern
4. Köln 962.884 Nordrhein-Westfalen
5. Frankfurt am Main 646.550 Hessen
6. Essen 595.243 Nordrhein-Westfalen
7. Dortmund 588.994 Nordrhein-Westfalen
8. Stuttgart 583.874 Baden-Württemberg
9. Düsseldorf 569.364 Nordrhein-Westfalen
10. Bremen 539.403 Bremen
11. Hannover 515.001 Niedersachsen
12. Duisburg 514.915 Nordrhein-Westfalen
13. Leipzig 493.208 Sachsen
14. Nürnberg 488.400 Bayern
15. Dresden 477.807 Sachsen
16. Bochum 391.147 Nordrhein-Westfalen
17. Wuppertal 366.434 Nordrhein-Westfalen
18. Bielefeld 321.758 Nordrhein-Westfalen
19. Mannheim 306.729 Baden-Württemberg

Our task is to create a list with the top 19 cities, with the city names accompanied by their postal codes. If you want to test the corresponding program, you have to save the list above in a file called largest_cities_germany.txt, and you have to download and save the list of German post codes.

Another Comprehensive Example

We want to present another real-life example in our Python course: a regular expression for UK postcodes.
We write an expression which is capable of recognizing the postal codes (postcodes) of the UK. Postcode units consist of between five and seven characters, which are separated into two parts by a space. The two to four characters before the space represent the so-called outward code or out code, intended to direct mail from the sorting office to the delivery office. The part following the space, which consists of a digit followed by two uppercase characters, comprises the so-called inward code, which is needed to sort mail at the final delivery office. The last two uppercase characters do not use the letters CIKMOV, so as not to resemble digits or each other when handwritten. The outward code can have the form: one or two uppercase characters, followed by either a digit or the letter R, optionally followed by an uppercase character or a digit. (We do not consider all the detailed rules for postcodes, i.e. only certain character sets are valid depending on the position and the context.) A regular expression for matching this superset of UK postcodes looks like this:

r"\b[A-Z]{1,2}[0-9R][0-9A-Z]? [0-9][ABD-HJLNP-UW-Z]{2}\b"

The following Python program uses the regexp above:

import re
example_codes = ["SW1A 0AA", # House of Commons
                 "SW1A 1AA", # Buckingham Palace
                 "SW1A 2AA", # Downing Street
                 "BX3 2BB",  # Barclays Bank
                 "DH98 1BT", # British Telecom
                 "N1 9GU",   # Guardian Newspaper
                 "E98 1TT",  # The Times
                 "TIM E22",  # a fake postcode
                 "A B1 A22", # not a valid postcode
                 "EC2N 2DB", # Deutsche Bank
                 "SE9 2UG",  # University of Greenwich
                 "N1 0UY",   # Islington, London
                 "EC1V 8DS", # Clerkenwell, London
                 "WC1X 9DT", # WC1X 9DT
                 "B42 1LG",  # Birmingham
                 "B28 9AD",  # Birmingham
                 "W12 7RJ",  # London, BBC News Centre
                 "BBC 007"   # a fake postcode
                ]
pc_re = r"[A-Z]{1,2}[0-9R][0-9A-Z]? [0-9][ABD-HJLNP-UW-Z]{2}"
for postcode in example_codes:
    r = re.search(pc_re, postcode)
    if r:
        print(postcode + " matched!")
    else:
        print(postcode + " is not a valid postcode!")

Next Chapter: Regular Expressions, Advanced
http://python-course.eu/python3_re.php
CC-MAIN-2017-09
en
refinedweb
Screenshot:

This is a completely pointless layout which acts like Microsoft's Flip 3D. You can use this module with the following in your ~/.xmonad/xmonad.hs:

import XMonad.Layout.Roledex

Then edit your layoutHook by adding the Roledex layout:

myLayouts = Roledex ||| etc..
main = xmonad defaultConfig { layoutHook = myLayouts }

For more detailed instructions on editing the layoutHook see: XMonad.Doc.Extending
http://hackage.haskell.org/package/xmonad-contrib-0.8.1/docs/XMonad-Layout-Roledex.html
CC-MAIN-2014-15
en
refinedweb
In languages such as C and Java, -13%2 = -1, so this might cause problems in your program sometimes. So always write your code in such a way that it never causes a problem, even with negative operands. :) Use any one of the following code snippets.

int mod(int x, int m) {
    return (x%m + m)%m;
}

int mod(int x, int m) {
    int r = x%m;
    return r<0 ? r+m : r;
}
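For comparison, here is a direct Python port of the snippet's safe mod. Note that Python's own % operator already returns a non-negative result for a positive modulus, so the wrapper mainly matters when porting C or Java code:

```python
def mod(x, m):
    # Mirror of the C snippet: shift a possibly negative remainder
    # into the range [0, m) for positive m.
    r = x % m  # in Python this is already non-negative when m > 0
    return r + m if r < 0 else r

assert mod(-13, 2) == 1
assert mod(-13, 5) == 2
assert mod(13, 5) == 3
assert (-13) % 5 == 2  # Python's % floors, unlike C's truncation
print("mod checks passed")
```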
http://snipplr.com/view/71509/safety-for-checking-mod-for-negative-values/
CC-MAIN-2014-15
en
refinedweb
On Thu, Feb 12, 2009 at 08:47:36AM +0000, Simon Marlow wrote:
> Remi Turk wrote:
>> On Tue, Feb 10, 2009 at 01:31:24PM +0000, Simon Marlow wrote:
>>> " ++)

Ah of course, I keep forgetting about :def :) Note that if classes and types stopped sharing their namespace, ":info instance Show" would again be ambiguous though..

Groeten,
Remi
http://www.haskell.org/pipermail/glasgow-haskell-users/2009-February/016632.html
CC-MAIN-2014-15
en
refinedweb
Introduction

"Creating a mathematical expression evaluator is one of the most interesting exercises in computer science, whatever the language used. This is the first step towards really understanding what sort of magic is hidden behind compilers and interpreters....". I agree completely, and hope that you do too. The advantage of the first method is that it allows you to store the parsed expressions and re-evaluate them without re-parsing the same string several times. However, the second method is more convenient, and because the CalcEngine has a built-in expression cache, the parsing overhead is very small. Function names are case-insensitive (as in Excel), and the parameters are themselves expressions. This allows the engine to calculate expressions such as "=ATAN(2+2, 4+4*SIN(4))". The CalcEngine class also provides a Functions property that returns a dictionary containing all the functions currently defined. This can be useful if you ever need to enumerate or remove functions from the engine. Notice how the method implementation listed above casts the expression parameters to the expected type (double). This works because the Expression class implements implicit converters to several types (string, double, bool, and DateTime). I find that the implicit converters allow me to write code that is concise and clear. This approach is similar to the binding mechanism used in WPF and Silverlight, and is substantially more powerful than the simple value approach described in the previous section. However, it is also slower than using simple values as variables. For example, if you wanted to perform calculations on an object of type Customer, you could assign the Customer instance to the engine's DataContext property. CalcEngine supports binding to sub-properties and collections. The object assigned to the DataContext property can represent complex business objects and entire data models.
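The data-context idea - resolving variable names by reflecting over a plain object - can be sketched in Python. This is only an illustration of the concept, not the article's C# API, and the Customer fields are invented:

```python
import re

class Customer:
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance

def evaluate(expr, context):
    # Swap each identifier for the value of the matching attribute
    # on the context object, then evaluate the resulting arithmetic.
    def resolve(match):
        return repr(getattr(context, match.group(0)))
    rewritten = re.sub(r"[A-Za-z_]\w*", resolve, expr)
    return eval(rewritten)  # good enough for a sketch; real engines parse

cust = Customer("Smith", 250)
print(evaluate("balance * 2 + 50", cust))  # -> 550
```

A real engine would parse the expression into a tree instead of rewriting strings, but the variable lookup works the same way: the engine asks the data context for a value each time it meets an identifier.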
This approach makes it easier to integrate the calculation engine into the application, because the variables it uses are just plain old CLR objects. You don't have to learn anything new in order to apply validation, notifications, serialization, etc. The original usage scenario for the calculation engine was an Excel-like application, so it had to be able to support cell range objects such as "A1" or "A1:B10". This requires a different approach, since the cell ranges have to be parsed dynamically (it would not be practical to define a DataContext object with properties A1, A2, A3, etc.). To support this scenario, the CalcEngine implements a virtual method called GetExternalObject. Derived classes can override this method to parse identifiers and dynamically build objects that can be evaluated. For example, the sample application included with this article defines a DataGridCalcEngine class that derives from CalcEngine and overrides GetExternalObject to support Excel-style ranges. This is described in detail in a later section ("Adding Formula Support to the DataGridView Control"). I mentioned earlier that the CalcEngine class performs two main functions: parsing and evaluating. If you look at the CalcEngine code, you will notice that the parsing methods are written for speed, sometimes even at the expense of clarity. The GetToken method is especially critical, and has been through several rounds of profiling and tweaking. The parsing process typically consumes more time than the actual evaluation, so it makes sense to keep track of parsed expressions and avoid parsing them again, especially if the same expressions are likely to be used over and over again (as in spreadsheet cells or report fields, for example). The CalcEngine class implements an expression cache that handles this automatically. The CalcEngine.Evaluate method looks up the expression in the cache before trying to parse it.
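That lookup-before-parse pattern is easy to sketch generically; the following Python stand-in uses compiled code objects where the article's engine uses parsed expression trees:

```python
# A minimal parse cache in the spirit described above: look the raw
# string up before parsing it again. (Generic sketch; the article's
# C# cache additionally holds its entries through WeakReferences.)
class Engine:
    def __init__(self):
        self._cache = {}

    def parse(self, text):
        # Stand-in "parser": compile the expression to a code object.
        return compile(text, "<expr>", "eval")

    def evaluate(self, text):
        parsed = self._cache.get(text)
        if parsed is None:
            parsed = self._cache[text] = self.parse(text)
        return eval(parsed)

e = Engine()
print(e.evaluate("2*3+4"))  # -> 10
print(e.evaluate("2*3+4"))  # -> 10, served from the cache (no reparse)
```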
The cache is based on WeakReference objects, so unused expressions eventually get removed from the cache by the .NET garbage collector. (This technique is also used in the NCalc library.) The method calls the Optimize method on each of the two operand expressions. If the resulting optimized expressions are both literal values, then the method calculates the result (which is a constant) and returns a literal expression that represents the result. To illustrate further, function call expressions are optimized as follows: first, all parameters are optimized; next, if all the optimized parameters are literals, the function call itself is replaced with a literal expression that represents the result. Expression optimization reduces evaluation time at the expense of a slight increase in parse time. It can be turned off by setting the CalcEngine.OptimizeExpressions property to false. The CalcEngine class has a CultureInfo property that allows you to define how the engine should parse numbers and dates in expressions. By default, the CalcEngine.CultureInfo property is set to CultureInfo.CurrentCulture, which causes it to use the settings selected by the user for parsing numbers and dates. In English systems, numbers and dates look like "123.456" and "12/31/2011". In German or Spanish systems, numbers and dates look like "123,456" and "31/12/2011". This is the behavior used by Microsoft Excel. The sample included with this article shows how the CalcEngine class can be used to extend the standard Microsoft DataGridView control to support Excel-style formulas. The image at the start of the article shows the sample in action. Note that the formula support described here is restricted to typing formulas into cells and evaluating them. The sample does not implement Excel's more advanced features like automatic reference adjustment for clipboard operations, selection-style formula editing, reference coloring, and so on.
The sample defines a DataGridCalcEngine class that extends CalcEngine with a reference to the grid that owns the engine. The grid is responsible for storing the cell values which are used in the calculations. The DataGridCalcEngine class adds cell range support by overriding the CalcEngine.GetExternalObject method. The method analyzes the identifier passed in as a parameter. If the identifier can be parsed as a cell reference (e.g., "A1" or "AZ123:XC23"), then the method builds and returns a CellRangeReference object. If the identifier cannot be parsed as an expression, the method returns null. The CellRangeReference also implements the IEnumerable interface to return the values of all cells in the range. This allows the calculation engine to evaluate expressions such as "Sum(A1:B10)". Notice that the GetValue method listed above uses an _evaluating flag to keep track of ranges that are currently being evaluated. This allows the class to detect circular references, where cells contain formulas that reference the cell itself or other cells that depend on the original cell. The method starts by retrieving the value stored in the cell. If the cell is not in edit mode, and the value is a string that starts with an equals sign, the method uses CalcEngine to evaluate the formula and assigns the result to the cell. If the cell is in edit mode, then the editor displays the formula rather than the value. This allows users to edit the formulas by typing into the cells, just like they do in Excel. If the expression evaluation causes any errors, the error message is displayed in the cell. At this point, the grid will evaluate expressions and show their results. But it does not track dependencies, so if you type a new value into cell "A1", for example, any formulas that use the value in "A1" will not be updated. To address this, the DataGridCalc class overrides the OnCellEndEdit method to invalidate the control. This causes all visible cells to be repainted and automatically recalculated after any edits.

// invalidate cells with formulas after editing
protected override void OnCellEndEdit(DataGridViewCellEventArgs e)
{
    this.Invalidate();
    base.OnCellEndEdit(e);
}

Let's not forget the implementation of the Evaluate method used by the CellRangeReference class listed earlier. The method starts by retrieving the cell content. If the content is a string that starts with an equals sign, the method evaluates it and returns the result; otherwise it returns the content itself:

// gets the value in a cell
public object Evaluate(int rowIndex, int colIndex)
{
    // get the value
    var val = this.Rows[rowIndex].Cells[colIndex].Value;
    var text = val as string;
    return !string.IsNullOrEmpty(text) && text[0] == '=' ? _ce.Evaluate(text) : val;
}

That is all there is to the DataGridCalc class. Notice that calculated values are never stored anywhere. All formulas are parsed and evaluated on demand. The sample application creates a DataTable with 50 columns and 50 rows, and binds that table to the grid. The table stores the values and formulas typed by users. The sample also implements an Excel-style formula bar across the top of the form that shows the current cell address and content, and has a context menu that shows the functions available and their parameters. Finally, the sample has a status bar along the bottom that shows summary statistics for the current selection (Sum, Count, and Average, as in Excel 2010). The summary statistics are calculated using the grid's CalcEngine as well. This ensures that tests are performed whenever the class is used (in debug mode), and that derived classes do not break any core functionality when they override the base class methods. The Test method is implemented in a Tester.cs file that extends the CalcEngine using partial classes. All test methods are enclosed in an #if DEBUG/#endif block, so they are not included in release builds. This mechanism worked well during development. It helped detect many subtle bugs that might have gone unnoticed if I had forgotten to run my unit tests when working on separate projects. While implementing the CalcEngine class, I used benchmarks to compare its size and performance with alternative libraries and make sure CalcEngine was doing a good job. A lot of the optimizations that went into the CalcEngine class came from these benchmarks. I compared CalcEngine with two other similar libraries, which seem to be among the best available. Both of these started as CodeProject articles and later moved to CodePlex. The benchmarking method was similar to the one described by Gary Beene in his 2007 Equation Parsers article. Each engine was tested for parsing and evaluating performance using three expressions, and the total time spent was used to calculate a "Meps" (million expressions parsed or evaluated per second) index that represents the engine speed.
http://www.codeproject.com/Articles/246374/A-Calculation-Engine-for-NET?fid=1648564&df=90&mpp=25&sort=Position&spc=Relaxed&tid=4306378
CC-MAIN-2014-15
en
refinedweb
How to Use the javap Command

The javap command is called the Java disassembler because it takes apart class files and tells you what's inside them. You won't use this command often, but using it to find out how a particular Java statement works is fun, sometimes. You can also use it to find out what methods are available for a class if you don't have the source code that was used to create the class. Here is the general format:

javap filename [options]

The following is typical of the information you get when you run the javap command:

C:\java\samples>javap HelloApp
Compiled from "HelloApp.java"
public class HelloApp extends java.lang.Object{
    public HelloApp();
    public static void main(java.lang.String[]);
}

As you can see, the javap command indicates that the HelloApp class was compiled from the HelloApp.java file and that it consists of a HelloApp public class and a main public method. You may want to use two options with the javap command. If you use the -c option, the javap command displays the actual Java bytecodes created by the compiler for the class. (Java bytecode is the executable program compiled from your Java source file.) And if you use the -verbose option, the bytecodes — plus a ton of other fascinating information about the innards of the class — are displayed. Here's the -c output for a class named HelloApp:

C:\java\samples>javap HelloApp -c
Compiled from "HelloApp.java"
public class HelloApp extends java.lang.Object{
    public HelloApp();
}
http://www.dummies.com/how-to/content/how-to-use-the-javap-command.navId-323185.html
CC-MAIN-2014-15
en
refinedweb
20 January 2010 06:00 [Source: ICIS news] (Clarifies price information in the 10th paragraph.)

By Prema Viswanathan

SINGAPORE (ICIS news)--The current polyolefins price spike in the Middle East and south Asia caused by mounting feedstock costs may not last long due to buyer resistance, sources close to suppliers and customers said on Wednesday.

“Increasing resistance from buyers is making it hard for suppliers to continue hiking prices,” a source close to a producer said.

“We are worried that if prices keep rising on the back of higher feedstocks, we may see a crash as we did in late 2008, when prices hit unsustainable highs.”

Polyethylene (PE) and polypropylene (PP) prices have surged by 10% in south Asia and 6.5% in the Middle East. End-users said they were finding it difficult to pass down their PE and PP costs to their customers.
At current price levels, traders in

On the supply side, the signals were confusing this week. Market sources in the east Mediterranean indicated a possible cut in allocations and further price hikes, especially for PE, due to supply constraints.

In the GCC region, however, buyers said there was adequate availability, despite a power outage in late December, which caused a production disruption in

In India, buyers were indicating that supply would be more abundant following the restart of Haldia Petrochemicals' facility in West Bengal this week after a prolonged turnaround.

($1 = €0.70)
Mastering OpenCV with Practical Computer Vision Projects

In this article, Eugene Khvedchenya, the author of Mastering OpenCV with Practical Computer Vision Projects, introduces Augmented Reality (AR): a live view of a real-world environment whose elements are augmented by computer-generated graphics. In this article we will create an AR application for iPhone/iPad devices. You will learn more about markers, and the full detection routine is explained. After reading this article you will be able to write your own marker detection algorithm and estimate the marker pose in the 3D world with regard to the camera pose.

In this article, we will cover the following topics:

Creating an iOS project that uses OpenCV
Application architecture
Marker detection
Marker identification
Marker code recognition
Placing a marker in 3D

Creating an iOS project that uses OpenCV

In this section we will create a demo application for iPhone/iPad devices that will use the OpenCV (Open Source Computer Vision) library to detect markers in the camera frame and render 3D objects on it. This example will show you how to get access to the raw video data stream from the device camera, perform image processing using the OpenCV library, find a marker in an image, and render an AR overlay.

We will start by creating a new XCode project, choosing the iOS Single View Application template, as shown in the following screenshot:

Now we have to add OpenCV to our project. This step is necessary because in this application we will use a lot of functions from this library to detect markers and estimate their positions. OpenCV is a library of programming functions for real-time computer vision.
It was originally developed by Intel and is now supported by Willow Garage and Itseez. This library is written in the C and C++ languages. It also has an official Python binding and unofficial bindings to the Java and .NET languages.

Adding OpenCV framework

Fortunately the library is cross-platform, so it can be used on iOS devices. Starting from version 2.4.2, the OpenCV library is officially supported on the iOS platform and you can download the distribution package from the library website at. The OpenCV for iOS link points to the compressed OpenCV framework. Don't worry if you are new to iOS development; a framework is like a bundle of files. Usually each framework package contains a list of header files and a list of statically linked libraries. Application frameworks provide an easy way to distribute precompiled libraries to developers. Of course, you can build your own libraries from scratch. The OpenCV documentation explains this process in detail. For simplicity, we follow the recommended way and use the framework for this article.

After downloading the file we extract its content to the project folder, as shown in the following screenshot:

To inform the XCode IDE to use any framework during the build stage, click on Project options and locate the Build phases tab. From there we can add or remove the list of frameworks involved in the build process. Click on the plus sign to add a new framework, as shown in the following screenshot:

From here we can choose from a list of standard frameworks. But to add a custom framework we should click on the Add other button. The open file dialog box will appear. Point it to opencv2.framework in the project folder as shown in the following screenshot:

Including OpenCV headers

Now that we have added the OpenCV framework to the project, everything is almost done. One last thing—let's add OpenCV headers to the project's precompiled headers. The precompiled headers are a great feature to speed up compilation time.
By adding OpenCV headers to them, all your sources automatically include OpenCV headers as well. Find the .pch file in the project source tree and modify it in the following way:

//
// Prefix header for all source files of the 'Example_MarkerBasedAR'
//

#import <Availability.h>

#ifndef __IPHONE_5_0
#warning "This project uses features only available in iOS SDK 5.0 and later."
#endif

#ifdef __cplusplus
#include <opencv2/opencv.hpp>
#endif

#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif

Now you can call any OpenCV function from any place in your project. That's all. Our project template is configured and we are ready to move further. Free advice: make a copy of this project; this will save you time when you are creating your next one!

Application architecture

Each iOS application contains at least one instance of the UIViewController interface that handles all view events and manages the application's business logic. This class provides the fundamental view-management model for all iOS apps. A view controller manages a set of views that make up a portion of your app's user interface. As part of the controller layer of your app, a view controller coordinates its efforts with model objects and other controller objects—including other view controllers—so your app presents a single coherent user interface.

The application that we are going to write will have only one view; that's why we chose the Single View Application template to create one. This view will be used to present the rendered picture. Our ViewController class will contain three major components that each AR application should have (see the next diagram):

Video source
Processing pipeline
Visualization engine

The video source is responsible for providing new frames taken from the built-in camera to the user code.
This means that the video source should be capable of choosing a camera device (front- or back-facing camera), adjusting its parameters (such as resolution of the captured video, white balance, and shutter speed), and grabbing frames without freezing the main UI.

The image processing routine will be encapsulated in the MarkerDetector class. This class provides a very thin interface to user code. Usually it's a set of functions like processFrame and getResult. That's really all that ViewController should know about. We must not expose low-level data structures and algorithms to the view layer without strong necessity.

VisualizationController contains all logic concerned with visualization of the Augmented Reality on our view. VisualizationController is also a facade that hides a particular implementation of the rendering engine. Loose coupling between these components gives us the freedom to change them without the need to rewrite the rest of your code. Such an approach also gives you the freedom to use independent modules on other platforms and compilers. For example, you can easily use the MarkerDetector class to develop desktop applications on Mac, Windows, and Linux systems without any changes to the code. Likewise, you can decide to port VisualizationController to the Windows platform and use Direct3D for rendering. In this case you would write only a new VisualizationController implementation; the other code parts would remain the same.

The main processing routine starts with receiving a new frame from the video source. This triggers the video source to inform the user code about this event with a callback. ViewController handles this callback and performs the following operations:

Sends a new frame to the visualization controller.
Performs processing of the new frame using our pipeline.
Sends the detected markers to the visualization stage.
Renders a scene.

Let's examine this routine in detail.
The rendering of an AR scene includes the drawing of a background image that holds the content of the last received frame; artificial 3D objects are drawn later on top of it. When we send a new frame for visualization, we are copying image data to internal buffers of the rendering engine. This is not actual rendering yet; we are just updating the texture with a new bitmap.

The second step is the processing of the new frame and marker detection. We pass our image as input and as a result receive a list of the markers detected on it. These markers are passed to the visualization controller, which knows how to deal with them. Let's take a look at the following sequence diagram where this routine is shown:

We start development by writing a video capture component. This class will be responsible for all frame grabbing and for sending notifications of captured frames via a user callback. Later on we will write a marker detection algorithm. This detection routine is the core of your application. In this part of our program we will use a lot of OpenCV functions to process images, detect contours on them, find marker rectangles, and estimate their position. After that we will concentrate on visualization of our results using Augmented Reality. After bringing all these things together we will complete our first AR application. So let's move on!

Accessing the camera

The Augmented Reality application is impossible to create without two major things: video capturing and AR visualization. The video capture stage consists of receiving frames from the device camera, performing necessary color conversion, and sending them to the processing pipeline. As single frame processing time is so critical to AR applications, the capture process should be as efficient as possible. The best way to achieve maximum performance is to have direct access to the frames received from the camera. This became possible starting from iOS Version 4.
Existing APIs from the AVFoundation framework provide the necessary functionality to read directly from image buffers in memory. You can find a lot of examples that use the AVCaptureVideoPreviewLayer class and the UIGetScreenImage function to capture videos from the camera. This technique was used for iOS Version 3 and earlier. It has now become outdated and has two major disadvantages:

Lack of direct access to frame data. To get a bitmap, you have to create an intermediate instance of UIImage, copy an image to it, and get it back. For AR applications this price is too high, because each millisecond matters. Losing a few frames per second (FPS) significantly decreases the overall user experience.
To draw an AR, you have to add a transparent overlay view that will present the AR. According to Apple guidelines, you should avoid non-opaque layers because their blending is hard for mobile processors.

The AVCaptureDevice and AVCaptureVideoDataOutput classes allow you to configure, capture, and specify unprocessed video frames in 32 bpp BGRA format. Also you can set up the desired resolution of output frames. However, it does affect overall performance, since the larger the frame, the more processing time and memory are required.

There is a good alternative for high-performance video capture. The AVFoundation API offers a much faster and more elegant way to grab frames directly from the camera. But first, let's take a look at the following figure where the capturing process for iOS is shown:

AVCaptureSession is the root capture object that we should create. A capture session requires two components—an input and an output. The input device can either be a physical device (camera) or a video file (not shown in the diagram). In our case it's a built-in camera (front or back).
The output device can be presented by one of the following interfaces:

AVCaptureMovieFileOutput
AVCaptureStillImageOutput
AVCaptureVideoPreviewLayer
AVCaptureVideoDataOutput

The AVCaptureMovieFileOutput interface is used to record video to a file, the AVCaptureStillImageOutput interface is used to make still images, and the AVCaptureVideoPreviewLayer interface is used to play a video preview on the screen. We are interested in the AVCaptureVideoDataOutput interface because it gives you direct access to video data.

The iOS platform is built on top of the Objective-C programming language. So to work with the AVFoundation framework, our class also has to be written in Objective-C. In this section all code listings are in the Objective-C++ language. To encapsulate the video capturing process, we create the VideoSource interface as shown by the following code:

@protocol VideoSourceDelegate<NSObject>

-(void)frameReady:(BGRAVideoFrame) frame;

@end

@interface VideoSource : NSObject<AVCaptureVideoDataOutputSampleBufferDelegate>
{
}

@property (nonatomic, retain) AVCaptureSession *captureSession;
@property (nonatomic, retain) AVCaptureDeviceInput *deviceInput;
@property (nonatomic, retain) id<VideoSourceDelegate> delegate;

- (bool) startWithDevicePosition:(AVCaptureDevicePosition) devicePosition;
- (CameraCalibration) getCalibration;
- (CGSize) getFrameSize;

@end

In the capture callback we lock the image buffer to prevent modifications by any new frames, obtain a pointer to the image data and the frame dimensions, and then construct a temporary BGRAVideoFrame object that is passed outside via a special delegate. This delegate has the following prototype:

@protocol VideoSourceDelegate<NSObject>

-(void)frameReady:(BGRAVideoFrame) frame;

@end

Within VideoSourceDelegate, the VideoSource interface informs the user code that a new frame is available.
The step-by-step guide for the initialization of video capture is as follows:

Create an instance of AVCaptureSession and set the capture session quality preset.
Choose and create an AVCaptureDevice. You can choose the front- or back-facing camera or use the default one.
Initialize AVCaptureDeviceInput using the created capture device and add it to the capture session.
Create an instance of AVCaptureVideoDataOutput and initialize it with the format of the video frame, the callback delegate, and the dispatch queue.
Add the capture output to the capture session object.
Start the capture session.

Let's explain some of these steps in more detail. After creating the capture session, we can specify the desired quality preset to ensure that we will obtain optimal performance. We don't need to process HD-quality video, so 640 x 480 or an even smaller frame resolution is a good choice:

- (id)init
{
  if ((self = [super init]))
  {
    AVCaptureSession * capSession = [[AVCaptureSession alloc] init];

    if ([capSession canSetSessionPreset:AVCaptureSessionPreset640x480])
    {
      [capSession setSessionPreset:AVCaptureSessionPreset640x480];
      NSLog(@"Set capture session preset AVCaptureSessionPreset640x480");
    }
    else if ([capSession canSetSessionPreset:AVCaptureSessionPresetLow])
    {
      [capSession setSessionPreset:AVCaptureSessionPresetLow];
      NSLog(@"Set capture session preset AVCaptureSessionPresetLow");
    }

    self.captureSession = capSession;
  }
  return self;
}

Always check hardware capabilities using the appropriate API; there is no guarantee that every camera will be capable of setting a particular session preset. After creating the capture session, we should add the capture input—the instance of AVCaptureDeviceInput that will represent a physical camera device.
The cameraWithPosition function is a helper function that returns the camera device for the requested position (front, back, or default):

- (bool) startWithDevicePosition:(AVCaptureDevicePosition) devicePosition
{
  AVCaptureDevice *videoDevice = [self cameraWithPosition:devicePosition];

  if (!videoDevice)
    return FALSE;

  {
    NSError *error;
    AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    self.deviceInput = videoIn;

    if (!error)
    {
      if ([[self captureSession] canAddInput:videoIn])
      {
        [[self captureSession] addInput:videoIn];
      }
      else
      {
        NSLog(@"Couldn't add video input");
        return FALSE;
      }
    }
    else
    {
      NSLog(@"Couldn't create video input");
      return FALSE;
    }
  }

  [self addRawViewOutput];
  [captureSession startRunning];
  return TRUE;
}

Please notice the error handling code. Taking care of return values for something as important as hardware setup is good practice. Without this, your code can crash in unexpected cases without informing the user what has happened.

We created a capture session and added a source of video frames. Now it's time to add a receiver—an object that will receive the actual frame data. The AVCaptureVideoDataOutput class is used to process uncompressed frames from the video stream. The camera can provide frames in BGRA, CMYK, or simple grayscale color models. For our purposes the BGRA color model fits best of all, as we will use this frame for both visualization and image processing. The following code shows the addRawViewOutput function:

- (void) addRawViewOutput
{
  /*We set up the output*/
  AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];

  /*While a frame is processed in -captureOutput:didOutputSampleBuffer:fromConnection: delegate methods no other frames are added in the queue.
  If you don't want this behaviour set the property to NO */
  captureOutput.alwaysDiscardsLateVideoFrames = YES;

  /*We create a serial queue to handle the processing of our frames*/
  dispatch_queue_t queue;
  queue = dispatch_queue_create("com.Example_MarkerBasedAR.cameraQueue", NULL);
  [captureOutput setSampleBufferDelegate:self queue:queue];
  dispatch_release(queue);

  // Set the video output to store frames in BGRA (it is supposed to be faster)
  NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
  NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
  NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
  [captureOutput setVideoSettings:videoSettings];

  // Register an output
  [self.captureSession addOutput:captureOutput];
}

Now the capture session is finally configured. When started, it will capture frames from the camera and send them to user code. When a new frame is available, the AVCaptureSession object performs a captureOutput:didOutputSampleBuffer:fromConnection: callback.
In this function, we will perform a minor data conversion operation to get the image data in a more usable format and pass it to user code:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
  // Get an image buffer holding the video frame
  CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

  // Lock the image buffer
  CVPixelBufferLockBaseAddress(imageBuffer,0);

  // Get information about the image
  uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
  size_t width = CVPixelBufferGetWidth(imageBuffer);
  size_t height = CVPixelBufferGetHeight(imageBuffer);
  size_t stride = CVPixelBufferGetBytesPerRow(imageBuffer);

  BGRAVideoFrame frame = {width, height, stride, baseAddress};
  [delegate frameReady:frame];

  /*We unlock the image buffer*/
  CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}

We obtain a reference to the image buffer that stores our frame data. Then we lock it to prevent modifications by new frames. Now we have exclusive access to the frame data. With the help of the CoreVideo API, we get the image dimensions, the stride (number of bytes per row), and the pointer to the beginning of the image data.

I draw your attention to the CVPixelBufferLockBaseAddress/CVPixelBufferUnlockBaseAddress function calls in the callback code. As long as we hold a lock on the pixel buffer, the consistency and correctness of its data are guaranteed. Reading of pixels is available only after you have obtained a lock. When you're done, don't forget to unlock it to allow the OS to fill it with new data.

Marker detection

A marker is usually designed as a rectangular image holding black and white areas inside it. Due to known limitations, the marker detection procedure is a simple one. First of all we need to find closed contours on the input image, unwarp the image inside each of them to a rectangle, and then check it against our marker model.
In this sample the 5 x 5 marker will be used. Here is what it looks like:

In the sample project that you will find in this book, the marker detection routine is encapsulated in the MarkerDetector class:

/**
 * A top-level class that encapsulates the marker detector algorithm
 */
class MarkerDetector
{
public:
  /**
   * Initialize a new instance of marker detector object
   * @calibration[in] - Camera calibration necessary for pose estimation.
   */
  MarkerDetector(CameraCalibration calibration);

  void processFrame(const BGRAVideoFrame& frame);

  const std::vector<Transformation>& getTransformations() const;

protected:
  bool findMarkers(const BGRAVideoFrame& frame, std::vector<Marker>& detectedMarkers);

  void prepareImage(const cv::Mat& bgraMat, cv::Mat& grayscale);

  void performThreshold(const cv::Mat& grayscale, cv::Mat& thresholdImg);

  void findContours(const cv::Mat& thresholdImg, std::vector<std::vector<cv::Point> >& contours, int minContourPointsAllowed);

  void findMarkerCandidates(const std::vector<std::vector<cv::Point> >& contours, std::vector<Marker>& detectedMarkers);

  void detectMarkers(const cv::Mat& grayscale, std::vector<Marker>& detectedMarkers);

  void estimatePosition(std::vector<Marker>& detectedMarkers);

private:
};

To help you better understand the marker detection routine, step-by-step processing of one frame from a video will be shown. A source image taken from an iPad camera will be used as an example:

Marker identification

Here is the workflow of the marker detection routine:

Convert the input image to grayscale.
Perform a binary threshold operation.
Detect contours.
Detect and decode markers.
Estimate the marker 3D pose.

Grayscale conversion

The conversion to grayscale is necessary because markers usually contain only black and white blocks and it's much easier to operate with them on grayscale images. Fortunately, OpenCV color conversion is simple enough.
Please take a look at the following code listing in C++:

void MarkerDetector::prepareImage(const cv::Mat& bgraMat, cv::Mat& grayscale)
{
  // Convert to grayscale
  cv::cvtColor(bgraMat, grayscale, CV_BGRA2GRAY);
}

This function will convert the input BGRA image to grayscale (it will allocate image buffers if necessary) and place the result into the second argument. All further steps will be performed with the grayscale image.

Image binarization

The binarization operation will transform each pixel of our image to black (zero intensity) or white (full intensity). This step is required to find contours. There are several threshold methods; each has its strong and weak sides.

The easiest and fastest method is the absolute threshold. In this method the resulting value depends on the current pixel intensity and some threshold value. If the pixel intensity is greater than the threshold value, the result will be white (255); otherwise it will be black (0). This method has a huge disadvantage—it depends on lighting conditions and soft intensity changes.

The more preferable method is the adaptive threshold. The major difference of this method is the use of all pixels in a given radius around the examined pixel. Using the average intensity gives good results and secures more robust corner detection. The following code snippet shows the MarkerDetector function:

void MarkerDetector::performThreshold(const cv::Mat& grayscale, cv::Mat& thresholdImg)
{
  cv::adaptiveThreshold(grayscale,   // Input image
                        thresholdImg,// Result binary image
                        255,         // Maximum value
                        cv::ADAPTIVE_THRESH_GAUSSIAN_C, // Adaptive method
                        cv::THRESH_BINARY_INV, // Threshold type
                        7,           // Block size
                        7            // Constant subtracted from the mean
                        );
}

After applying the adaptive threshold to the input image, the resulting image looks similar to the following one:

Each marker usually looks like a square figure with black and white areas inside it. So the best way to locate a marker is to find closed contours and approximate them with polygons of 4 vertices.
Contours detection

The cv::findContours function will detect contours on the input binary image:

void MarkerDetector::findContours(const cv::Mat& thresholdImg, std::vector<std::vector<cv::Point> >& contours, int minContourPointsAllowed)
{
  std::vector< std::vector<cv::Point> > allContours;
  cv::findContours(thresholdImg, allContours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

  contours.clear();
  for (size_t i=0; i<allContours.size(); i++)
  {
    int contourSize = allContours[i].size();
    if (contourSize > minContourPointsAllowed)
    {
      contours.push_back(allContours[i]);
    }
  }
}

The return value of this function is a list of polygons where each polygon represents a single contour. The function skips contours whose perimeter in pixels is less than the value of the minContourPointsAllowed variable. This is because we are not interested in small contours. (They will probably contain no marker, or the contour won't be able to be detected due to a small marker size.) The following figure shows the visualization of detected contours:

Candidates search

After finding contours, the polygon approximation stage is performed. This is done to decrease the number of points that describe the contour shape. It's a good quality check to filter out areas without markers, because a marker can always be represented with a polygon that contains four vertices. If the approximated polygon has more or fewer than 4 vertices, it's definitely not what we are looking for.
The following code implements this idea:

void MarkerDetector::findCandidates
(
  const ContoursVector& contours,
  std::vector<Marker>& detectedMarkers
)
{
  std::vector<cv::Point> approxCurve;
  std::vector<Marker> possibleMarkers;

  // For each contour, analyze if it is a parallelepiped likely to be the marker
  for (size_t i=0; i<contours.size(); i++)
  {
    // Approximate to a polygon
    double eps = contours[i].size() * 0.05;
    cv::approxPolyDP(contours[i], approxCurve, eps, true);

    // We are interested only in polygons that contain exactly four points
    if (approxCurve.size() != 4)
      continue;

    // And they have to be convex
    if (!cv::isContourConvex(approxCurve))
      continue;

    // Ensure that the distance between consecutive points is large enough
    float minDist = std::numeric_limits<float>::max();

    for (int i = 0; i < 4; i++)
    {
      cv::Point side = approxCurve[i] - approxCurve[(i+1)%4];
      float squaredSideLength = side.dot(side);
      minDist = std::min(minDist, squaredSideLength);
    }

    // Check that the distance is not very small
    if (minDist < m_minContourLengthAllowed)
      continue;

    // All tests are passed. Save marker candidate:
    Marker m;

    for (int i = 0; i<4; i++)
      m.points.push_back( cv::Point2f(approxCurve[i].x, approxCurve[i].y) );

    // Sort the points in anti-clockwise order.
    // Trace a line between the first and second point.
    // If the third point is at the right side, then the points are anti-clockwise
    cv::Point v1 = m.points[1] - m.points[0];
    cv::Point v2 = m.points[2] - m.points[0];

    double o = (v1.x * v2.y) - (v1.y * v2.x);

    if (o < 0.0) // if the third point is on the left side, then sort in anti-clockwise order
      std::swap(m.points[1], m.points[3]);

    possibleMarkers.push_back(m);
  }

  // Remove those elements whose corners are too close to each other.
  // First detect candidates for removal:
  std::vector< std::pair<int,int> > tooNearCandidates;
  for (size_t i=0; i<possibleMarkers.size(); i++)
  {
    const Marker& m1 = possibleMarkers[i];

    // Calculate the average distance of each corner to the nearest corner of the other marker candidate
    for (size_t j=i+1; j<possibleMarkers.size(); j++)
    {
      const Marker& m2 = possibleMarkers[j];

      float distSquared = 0;

      for (int c = 0; c < 4; c++)
      {
        cv::Point v = m1.points[c] - m2.points[c];
        distSquared += v.dot(v);
      }

      distSquared /= 4;

      if (distSquared < 100)
      {
        tooNearCandidates.push_back(std::pair<int,int>(i,j));
      }
    }
  }

  // Mark for removal the element of the pair with the smaller perimeter
  std::vector<bool> removalMask (possibleMarkers.size(), false);

  for (size_t i=0; i<tooNearCandidates.size(); i++)
  {
    float p1 = perimeter(possibleMarkers[tooNearCandidates[i].first ].points);
    float p2 = perimeter(possibleMarkers[tooNearCandidates[i].second].points);

    size_t removalIndex;
    if (p1 > p2)
      removalIndex = tooNearCandidates[i].second;
    else
      removalIndex = tooNearCandidates[i].first;

    removalMask[removalIndex] = true;
  }

  // Return candidates
  detectedMarkers.clear();
  for (size_t i=0; i<possibleMarkers.size(); i++)
  {
    if (!removalMask[i])
      detectedMarkers.push_back(possibleMarkers[i]);
  }
}

Now we have obtained a list of parallelepipeds that are likely to be the markers. To verify whether they are markers or not, we need to perform three steps:

First, we should remove the perspective projection so as to obtain a frontal view of the rectangular area.
Then we perform thresholding of the image using the Otsu algorithm. This algorithm assumes a bimodal distribution and finds the threshold value that maximizes the extra-class variance while keeping a low intra-class variance.
Finally we perform identification of the marker code. If it is a marker, it has an internal code. The marker is divided into a 7 x 7 grid, of which the internal 5 x 5 cells contain ID information. The rest correspond to the external black border.
Here, we first check whether the external black border is present. Then we read the internal 5 x 5 cells and check whether they provide a valid code. (It might be required to rotate the code to get the valid one.)

To get the rectangular marker image, we have to unwarp the input image using perspective transformation. The transformation matrix can be calculated with the help of the cv::getPerspectiveTransform function. It finds the perspective transformation from four pairs of corresponding points. The first argument is the marker coordinates in image space and the second corresponds to the coordinates of the square marker image. The estimated transformation will transform the marker to a square form and let us analyze it:

cv::Mat canonicalMarker;
Marker& marker = detectedMarkers[i];

// Find the perspective transformation that brings the current marker to rectangular form
cv::Mat M = cv::getPerspectiveTransform(marker.points, m_markerCorners2d);

// Transform the image to get a canonical marker image
cv::warpPerspective(grayscale, canonicalMarker, M, markerSize);

Image warping transforms our image to a rectangular form using perspective transformation:

Now we can test the image to verify whether it is a valid marker image. Then we try to extract the bit mask with the marker code. As we expect our marker to contain only black and white colors, we can perform Otsu thresholding to remove gray pixels and leave only black and white pixels:

// Threshold image
cv::threshold(markerImage, markerImage, 125, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

Marker code recognition

Each marker has an internal code given by 5 words of 5 bits each. The codification employed is a slight modification of the Hamming code. In total, each word has only 2 bits of information out of the 5 bits employed. The other 3 are employed for error detection. As a consequence, we can have up to 1024 different IDs. The main difference from the Hamming code is that the first bit (the parity of bits 3 and 5) is inverted.
So, ID 0 (which in Hamming code is 00000) becomes 10000 in our code. The idea is to prevent a completely black rectangle from being a valid marker ID, with the goal of reducing the likelihood of false positives with objects of the environment.

Counting the number of black and white pixels for each cell gives us a 5 x 5-bit mask with the marker code. To count the number of non-zero pixels in a certain image region, the cv::countNonZero function is used. This function counts non-zero array elements in a given 1D or 2D array. The cv::Mat type can return a subimage view—a new instance of cv::Mat that contains a portion of the original image. For example, if you have a cv::Mat of size 400 x 400, the following piece of code will create a submatrix for the 50 x 50 image block starting from (10, 10):

cv::Mat src(400,400,CV_8UC1);
cv::Rect r(10,10,50,50);
cv::Mat subView = src(r);

Reading marker code

Using this technique, we can easily find the black and white cells on the marker board:

cv::Mat bitMatrix = cv::Mat::zeros(5,5,CV_8UC1);

// Get information (for each inner square, determine if it is black or white)
for (int y=0; y<5; y++)
{
  for (int x=0; x<5; x++)
  {
    int cellX = (x+1)*cellSize;
    int cellY = (y+1)*cellSize;
    cv::Mat cell = grey(cv::Rect(cellX,cellY,cellSize,cellSize));

    int nZ = cv::countNonZero(cell);

    if (nZ > (cellSize*cellSize) / 2)
      bitMatrix.at<uchar>(y,x) = 1;
  }
}

Take a look at the following figure. The same marker can have four possible representations depending on the camera's point of view:

As there are four possible orientations of the marker picture, we have to find the correct marker position. Remember, we introduced three parity bits for each two bits of information. With their help we can find the Hamming distance for each possible marker orientation. The correct marker position will have zero Hamming distance error, while the other rotations won't.
Here is a code snippet that rotates the bit matrix four times and finds the correct marker orientation:

// Check all possible rotations
cv::Mat rotations[4];
int distances[4];
rotations[0] = bitMatrix;
distances[0] = hammDistMarker(rotations[0]);
std::pair<int,int> minDist(distances[0],0);
for (int i=1; i<4; i++)
{
    // Get the hamming distance to the nearest possible word
    rotations[i] = rotate(rotations[i-1]);
    distances[i] = hammDistMarker(rotations[i]);
    if (distances[i] < minDist.first)
    {
        minDist.first = distances[i];
        minDist.second = i;
    }
}

This code finds the orientation of the bit matrix in such a way that it gives minimal error for the hamming distance metric. This error should be zero for a correct marker ID; if it's not, it means that we encountered a wrong marker pattern (corrupted image or false-positive marker detection).

Marker location refinement

After finding the right marker orientation, we rotate the marker's corners respectively to conform to their order:

// Sort the points so that they are always in the same order,
// no matter the camera orientation
std::rotate(marker.points.begin(), marker.points.begin() + 4 - nRotations, marker.points.end());

After detecting a marker and decoding its ID, we will refine its corners. This operation will help us in the next step, when we will estimate the marker position in 3D. To find the corner location with subpixel accuracy, the cv::cornerSubPix function is used:

std::vector<cv::Point2f> preciseCorners(4 * goodMarkers.size());
for (size_t i=0; i<goodMarkers.size(); i++)
{
    Marker& marker = goodMarkers[i];
    for (int c=0; c<4; c++)
    {
        preciseCorners[i*4+c] = marker.points[c];
    }
}
cv::cornerSubPix(grayscale, preciseCorners, cvSize(5,5), cvSize(-1,-1), cvTermCriteria(CV_TERMCRIT_ITER,30,0.1));
// Copy back
for (size_t i=0; i<goodMarkers.size(); i++)
{
    Marker& marker = goodMarkers[i];
    for (int c=0; c<4; c++)
    {
        marker.points[c] = preciseCorners[i*4+c];
    }
}

The first step is to prepare the input data for this function.
We copy the list of vertices to the input array. Then we call cv::cornerSubPix, passing the actual image, the list of points, and a set of parameters that affect the quality and performance of location refinement. When done, we copy the refined locations back to the marker corners, as shown in the following image. We do not use cornerSubPix in the earlier stages of marker detection due to its complexity. It's very expensive to call this function for large numbers of points (in terms of computation time). Therefore we do this only for valid markers.

Placing a marker in 3D

Augmented Reality tries to fuse the real-world object with virtual content. To place a 3D model in a scene, we need to know its pose with regard to the camera that we use to obtain the video frames. We will use a Euclidean transformation in the Cartesian coordinate system to represent such a pose. The position of the marker in 3D and its corresponding projection in 2D are related by the following equation:

P = A * [R|T] * M

Where:
- M denotes a point in 3D space
- [R|T] denotes a [3|4] matrix representing a Euclidean transformation
- A denotes the camera matrix, or matrix of intrinsic parameters
- P denotes the projection of M in screen space

After performing the marker detection step, we now know the positions of the four marker corners in 2D (projections in screen space). In the next section you will learn how to obtain the A matrix and M vector parameters and calculate the [R|T] transformation.

Camera calibration

Each camera lens has unique parameters, such as focal length, principal point, and lens distortion model. The process of finding intrinsic camera parameters is called camera calibration. The camera calibration process is important for Augmented Reality applications because it describes the perspective transformation and lens distortion on the output image. To achieve the best user experience with Augmented Reality, visualization of an augmented object should be done using the same perspective projection.
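To make the projection equation P = A * [R|T] * M above concrete, here is a minimal Python sketch with plain lists and made-up example values; this is an illustration of the math, not the book's iOS code:

```python
# Sketch of P = A * [R|T] * M: transform a 3D point into camera space
# (R * X + t), apply the intrinsic matrix, and divide by depth to obtain
# pixel coordinates.

def mat_vec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

def project(K, R, t, X):
    # Camera-space point: Xc = R * X + t
    Xc = [a + b for a, b in zip(mat_vec(R, X), t)]
    # Apply intrinsics, then the homogeneous divide.
    u, v, w = mat_vec(K, Xc)
    return (u / w, v / w)

# Assumed example intrinsics: focal length 800 px, principal point (320, 240).
K = [[800.0, 0.0, 320.0],
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
```

With an identity rotation and the camera looking at a point 4 units away, a point at the origin projects exactly onto the principal point, which is a quick sanity check for any projection code.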
To calibrate the camera, we need a special pattern image (a chessboard plate or black circles on a white background). The camera that is being calibrated takes 10-15 shots of this pattern from different points of view. A calibration algorithm then finds the optimal camera intrinsic parameters and the distortion vector. To represent camera calibration in our program, we use the CameraCalibration class:

/**
 * A camera calibration class that stores intrinsic matrix and distorsion coefficients.
 */
class CameraCalibration
{
public:
    CameraCalibration();
    CameraCalibration(float fx, float fy, float cx, float cy);
    CameraCalibration(float fx, float fy, float cx, float cy, float distorsionCoeff[4]);
    void getMatrix34(float cparam[3][4]) const;
    const Matrix33& getIntrinsic() const;
    const Vector4& getDistorsion() const;
private:
    Matrix33 m_intrinsic;
    Vector4 m_distorsion;
};

A detailed explanation of the calibration procedure is beyond the scope of this article. Please refer to the OpenCV camera_calibration sample or the article OpenCV: Estimating Projective Relations in Images for additional information and source code. For this sample we provide internal parameters for all modern iOS devices (iPad 2, iPad 3, and iPhone 4).

Marker pose estimation

With the precise locations of the marker corners, we can estimate the transformation between our camera and a marker in 3D space. This operation is known as pose estimation from 2D-3D correspondences. The pose estimation process finds a Euclidean transformation (consisting only of rotation and translation components) between the camera and the object. Let's take a look at the following figure:

C is used to denote the camera center. The P1-P4 points are 3D points in the world coordinate system and the p1-p4 points are their projections on the camera's image plane. Our goal is to find the relative transformation between a known marker position in the 3D world (P1-P4) and the camera C, using the intrinsic matrix and the known point projections on the image plane (p1-p4).
But where do we get the coordinates of the marker position in 3D space? We imagine them. As our marker always has a square form and all vertices lie in one plane, we can define their corners as follows: we put our marker in the XY plane (the Z component is zero) and the marker center corresponds to the (0.0, 0.0, 0.0) point. It's a great hint, because in this case the beginning of our coordinate system will be in the center of the marker (the Z axis is perpendicular to the marker plane). To find the camera location with the known 2D-3D correspondences, the cv::solvePnP function can be used:

void solvePnP(const Mat& objectPoints, const Mat& imagePoints,
              const Mat& cameraMatrix, const Mat& distCoeffs,
              Mat& rvec, Mat& tvec,
              bool useExtrinsicGuess=false);

- objectPoints: An input array of object points in the object coordinate space. std::vector<cv::Point3f> can be passed here. An OpenCV matrix of size 3 x N or N x 3, where N is the number of points, can also be passed as an input argument. Here we pass the list of marker coordinates in 3D space (a vector of four points).
- imagePoints: An array of corresponding image points (or projections). This argument can also be std::vector<cv::Point2f> or a cv::Mat of size 2 x N or N x 2, where N is the number of points. Here we pass the list of found marker corners.
- cameraMatrix: This is the 3 x 3 camera intrinsic matrix.
- distCoeffs: This is the input 4 x 1, 1 x 4, 5 x 1, or 1 x 5 vector of distortion coefficients (k1, k2, p1, p2, [k3]). If it is NULL, all of the distortion coefficients are set to 0.
- rvec: This is the output rotation vector that (together with tvec) brings points from the model coordinate system to the camera coordinate system.
- tvec: This is the output translation vector.
- useExtrinsicGuess: If true, the function will use the provided rvec and tvec vectors as the initial approximations of the rotation and translation vectors, respectively, and will further optimize them.
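The objectPoints we pass, the marker's corners in 3D, can be written down directly, because the marker lies in the XY plane and is centered at the origin, as explained above. A hypothetical Python helper for illustration (not the book's m_markerCorners3d initialization):

```python
# The four 3D marker corners: the marker lies in the XY plane (Z = 0),
# centered at the origin, with an assumed side length `size` in
# arbitrary units.
def marker_corners_3d(size):
    h = size / 2.0
    return [(-h, -h, 0.0), (h, -h, 0.0), (h, h, 0.0), (-h, h, 0.0)]
```

Any consistent unit works here; whatever unit the side length is given in becomes the unit of the translation vector recovered by solvePnP.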
The function calculates the camera transformation in such a way that it minimizes the reprojection error, that is, the sum of squared distances between the observed projections imagePoints and the projected objectPoints. The estimated transformation is defined by a rotation (rvec) and a translation component (tvec). This is also known as a Euclidean transformation or rigid transformation. A rigid transformation is formally defined as a transformation that, when acting on any vector v, produces a transformed vector T(v) of the form:

T(v) = R v + t

where R^T = R^(-1) (that is, R is an orthogonal transformation), and t is a vector giving the translation of the origin. A proper rigid transformation has, in addition, det(R) = 1. This means that R does not produce a reflection, and hence it represents a rotation (an orientation-preserving orthogonal transformation). To obtain a 3 x 3 rotation matrix from the rotation vector, the function cv::Rodrigues is used. This function converts a rotation represented by a rotation vector into its equivalent rotation matrix. Because cv::solvePnP finds the camera position with regard to the marker pose in 3D space, we have to invert the found transformation. The resulting transformation will describe the marker transformation in the camera coordinate system, which is much friendlier for the rendering process.
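For illustration, both steps just described, turning the rotation vector into a matrix (what cv::Rodrigues computes) and inverting the rigid transformation, can be sketched in pure Python. This is an illustrative translation of the math, not the book's C++ code:

```python
# Rodrigues' formula: R = I + sin(a) K + (1 - cos(a)) K^2, where a is the
# angle (length of the rotation vector) and K is the skew-symmetric matrix
# of the unit axis. For T(v) = R v + t the inverse is R' = R^T, t' = -R^T t.
import math

def rodrigues(rvec):
    a = math.sqrt(sum(c * c for c in rvec))
    if a < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    x, y, z = (c / a for c in rvec)
    K = [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]
    s, c1 = math.sin(a), 1.0 - math.cos(a)
    R = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            K2 = sum(K[i][k] * K[k][j] for k in range(3))
            R[i][j] = (1.0 if i == j else 0.0) + s * K[i][j] + c1 * K2
    return R

def invert_rigid(R, t):
    # Transpose R (orthogonal, so R^T == R^-1) and transform t accordingly.
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return Rt, ti
```

Applying invert_rigid to the camera pose returned by solvePnP is exactly the getInverted() step used below: it converts "camera with respect to marker" into "marker with respect to camera".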
Here is a listing of the estimatePosition function, which finds the position of the detected markers:

void MarkerDetector::estimatePosition(std::vector<Marker>& detectedMarkers)
{
    for (size_t i=0; i<detectedMarkers.size(); i++)
    {
        Marker& m = detectedMarkers[i];
        cv::Mat Rvec;
        cv::Mat_<float> Tvec;
        cv::Mat raux, taux;
        cv::solvePnP(m_markerCorners3d, m.points, camMatrix, distCoeff, raux, taux);
        raux.convertTo(Rvec, CV_32F);
        taux.convertTo(Tvec, CV_32F);
        cv::Mat_<float> rotMat(3,3);
        cv::Rodrigues(Rvec, rotMat);
        // Copy to transformation matrix
        m.transformation = Transformation();
        for (int col=0; col<3; col++)
        {
            for (int row=0; row<3; row++)
            {
                m.transformation.r().mat[row][col] = rotMat(row,col); // Copy rotation component
            }
            m.transformation.t().data[col] = Tvec(col); // Copy translation component
        }
        // Since solvePnP finds the camera location w.r.t the marker pose,
        // to get the marker pose w.r.t the camera we invert it.
        m.transformation = m.transformation.getInverted();
    }
}

Summary

In this article we learned how to create a mobile Augmented Reality application for iPhone/iPad devices. You gained knowledge on how to use the OpenCV library within XCode projects to create stunning state-of-the-art applications. Usage of OpenCV enables your application to perform complex image-processing computations on mobile devices with real-time performance. From this article you also learned how to perform the initial image processing (conversion to shades of gray and binarization), how to find closed contours in the image and approximate them with polygons, how to find markers in the image and decode them, and how to compute the marker position in space.
Resources for Article :

Further resources on this subject:
- Development of iPhone Applications [Article]
- OpenCV: Image Processing using Morphological Filters [Article]
- OpenCV: Segmenting Images [Article]

About the Author :

Daniel Lélis Baggio

Daniel Lélis Baggio started his work in computer vision through medical image processing at InCor (Instituto do Coração – Heart Institute) in São Paulo, where he worked with intra-vascular ultrasound image segmentation. Since then, he has focused on GPGPU and ported the segmentation algorithm to work with NVIDIA's CUDA. He has also dived into six degrees of freedom head tracking with a natural user interface group through a project called ehci. He now works for the Brazilian Air Force.

David Millán Escrivá

Jason Saragih

Khvedchenia Ievgen

Khvedchenia Ievgen is a computer vision expert from Ukraine. He started his career with research and development of a camera-based driver assistance system for Harman International. He then began working as a Computer Vision Consultant for ESG. Nowadays, he is a self-employed developer focusing on the development of augmented reality applications. Ievgen is the author of the Computer Vision Talks blog, where he publishes research articles and tutorials pertaining to computer vision and augmented reality.

Naureen Mahmood

Naureen Mahmood is a recent graduate from the Visualization department at Texas A&M University. She has experience working in various programming environments, animation software, and microcontroller electronics. Her work involves creating interactive applications using sensor-based electronics and software engineering. She has also worked on creating physics-based simulations and their use in special effects for animation.

Roy Shilkrot

Roy Shilkrot is a researcher and professional in the area of computer vision and computer graphics. He obtained a B.Sc. in Computer Science from Tel-Aviv-Yaffo Academic College, and an M.Sc.
from Tel-Aviv University. He is currently a PhD candidate in the Media Laboratory of the Massachusetts Institute of Technology (MIT) in Cambridge. Roy has over seven years of experience as a Software Engineer in start-up companies and enterprises. Before joining the MIT Media Lab as a Research Assistant, he worked as a Technology Strategist in the Innovation Laboratory of Comverse, a telecom solutions provider. He also dabbled in consultancy, and worked as an intern for Microsoft Research in Redmond.

Shervin Emami

Shervin Emami (born in Iran) taught himself electronics and hobby robotics during his early teens in Australia. While building his first robot at the age of 15, he learned how RAM and CPUs work. He was so amazed by the concept that he soon designed and built a whole Z80 motherboard to control his robot, and wrote all the software purely in binary machine code using two push buttons for 0s and 1s. After learning that computers can be programmed in much easier ways such as assembly language and even high-level compilers, Shervin became hooked on computer programming and has been programming desktops, robots, and smartphones nearly every day since then. During his late teens he created Draw3D, a 3D modeler with 30,000 lines of optimized C and assembly code that rendered 3D graphics faster than all the commercial alternatives of the time; but he lost interest in graphics programming when 3D hardware acceleration became available.
In university, Shervin took a subject on computer vision and became highly interested in it, so for his first thesis in 2003 he created a real-time computer vision system. He has since worked on contract in several countries, including the Philippines, using OpenCV for a large number of short-term commercial projects that included:

- Detecting faces using Haar or Eigenfaces
- Recognizing faces using Neural Networks, EHMM, or Eigenfaces
- Detecting the 3D position and orientation of a face from a single photo using AAM and POSIT
- Rotating a face in 3D using only a single photo
- Face preprocessing and artificial lighting using any 3D direction from a single photo
- Gender recognition
- Facial expression recognition
- Skin detection
- Iris detection
- Pupil detection
- Eye-gaze tracking
- Visual-saliency tracking
- Histogram matching
- Body-size detection
- Shirt and bikini detection
- Money recognition
- Video stabilization
- Face recognition on iPhone
- Food recognition on iPhone
- Marker-based augmented reality on iPhone (the second-fastest iPhone augmented reality app at the time)

OpenCV was putting food on the table for Shervin's family, so he began giving back to OpenCV through regular advice on the forums and by posting free OpenCV tutorials on his website. In 2011, he contacted the owners of other free OpenCV websites to write this book. He also began working on computer vision optimization for mobile devices at NVIDIA, working closely with the official OpenCV developers to produce an optimized version of OpenCV for Android. In 2012, he also joined the Khronos OpenVL committee for standardizing the hardware acceleration of computer vision for mobile devices, on which OpenCV will be based in the future.
http://www.packtpub.com/article/marker-based-augmented-reality-on-iPhone-or-iPad
CC-MAIN-2014-15
en
refinedweb
12 October 2012 14:35 [Source: ICIS news] HOUSTON (ICIS)--Foster Wheeler has won another contract for work on LANXESS' 140,000 tonne/year neodymium polybutadiene (Nd-PBR) rubber plant project in Singapore. Foster Wheeler said that under the terms of the contract it will be in charge of the Nd-PBR plant's engineering, procurement and construction management (EPCm). The contract award follows Foster Wheeler's successful completion of the project's front-end engineering design earlier this year, it added. ICIS reported previously that LANXESS' €200m ($260m) Nd-PBR plant is expected to start up in the first half of 2015. Construction began last month. Nd-PBR is used in tyre manufacturing. LANXESS is building the Nd-PBR facility – "expected to be the largest of its kind in the world" – alongside its 100,000 tonne/year synthetic butyl rubber plant, which is due to start up in 2013, Foster said. Foster was also the EPCm contractor for the synthetic butyl rubber plant,
http://www.icis.com/Articles/2012/10/12/9603628/foster-wins-another-contract-from-lanxess-for-spore-rubber.html
CC-MAIN-2014-15
en
refinedweb
What is a Pointer in Python?

A pointer is a variable that stores the address of another variable, so that it points to the location where that value lives; obtaining the value stored at that location is called dereferencing. A pointer works like a page number in the index of a book that leads the reader to the required content. Pointers significantly improve performance for repetitive operations such as traversing data structures, both in time and in space, and they effectively support copy and access operations. Python supports the concept only indirectly, and such variables are what we refer to as pointers in Python.

Syntax of a Pointer in Python

>>> variable_name = value

Example – 1

>>> a = 2
>>> a
2

Example – 2

>>> b = "Bob"
>>> b
'Bob'

How to Create Pointers in Python?

Below is an example of creating pointers with the isinstance() function to prove that everything is an object type. We will see all possible datatypes in Python with the isinstance() function; this way you will learn how to declare all datatypes in Python as well.
Code:

# assigning an integer value
a = 2
print(a)
# checking if integer is an object or not
print(isinstance(a, object))

# assigning a string value
b = "Bob"
print(b)
# checking if string is an object or not
print(isinstance(b, object))

# assigning a list value
inputList = [1,2,3]
print(inputList)
# checking if list is an object or not
print(isinstance(inputList, object))

# assigning a set value
inputSet = {10,20,30}
print(inputSet)
# checking if set is an object or not
print(isinstance(inputSet, object))

# assigning a tuple value
inputTuple = (100, 200, 300)
print(inputTuple)
# checking if tuple is an object or not
print(isinstance(inputTuple, object))

# assigning a dictionary value
inputDict = {
    "0": 1922,
    "1": "BMW",
    "2": 100
}
print(inputDict)
# checking if dictionary is an object or not
print(isinstance(inputDict, object))

Output:

Now we know each variable declared is an object, as each isinstance() call returns True, so we can say that everything is an object in Python. Let us learn about mutable objects out of all the objects. Keep in mind that list, set and dictionary are mutable; the rest are not. Mutable objects can be changed, while immutable objects cannot be changed.

Example

On an immutable object like a string, we can do an "appending" as mentioned below:

str = "Python Programming "
print(str)
print(id(str))
str += "Language"
print(str)
print(id(str))

and it works, but now if we try to assign to an index of the string, like this:

str = "Python Programming "
print(str)
str[5] = "S"
print(id(str))
str += "Language"
print(str)
print(id(str))

it throws a TypeError, because strings are immutable and item assignment is not allowed. Note that the earlier += only appeared to modify the string: it actually created a brand-new string object and rebound the name to it, which is why id() changed.

Uses of the Pointer in Python

Pointers are used widely in C and C++. With pointers, dynamic memory allocation is possible. Pointers can be declared as variables holding the memory address of another variable.

Pointers Arithmetic Operations

Pointers have four arithmetic operators.
- Increment Operator : ++
- Decrement Operator : --
- Addition Operator : +
- Subtraction Operator : -

Arithmetic operations are performed with the use of arithmetic operators. (Note that Python itself has no ++ and -- operators; the increment and decrement examples below use += 1 and -= 1 instead.) In the below programs we have used the id() function, which returns the object's memory address.

Increment operator: It increments the value by 1

Code:

# using the incrementing operator
x = 10
print("x = ", x, "\n")
print("Address of x", id(x))
x += 1
print("Now x = ", x, "\n")
print(x)
# using the id() function to get the memory address
print("Address of x", id(x))

Output:

Decrementing Operator: It decrements the value by 1

# using the decrementing operator
x = 10
print("x = ", x, "\n")
print(id(x))
x -= 1
print("Now x = ", x, "\n")
print(x)
# using the id() function to get the memory address
print("Address of x", id(x))

Output:

Addition Operator: It performs addition of two operands

# using the addition operator
x = 10
y = 20
print("x = ", x, "\n")
print("y = ", y, "\n")
print("Address of x", id(x))
x = y + 3
print("x = y + 3 \n")
print("Now x = ", x, "\n")
# using the id() function to get the memory address
print("Address of x", id(x))

Output:

Subtraction Operator: It performs subtraction of two operands

Code:

# using the subtraction operator
x = 10
y = 5
print("x = ", x, "\n")
print("y = ", y, "\n")
print("Address of x", id(x))
x = y - 3
print("x = y - 3 \n")
print("Now x = ", x, "\n")
print("Address of x", id(x))

Output:

Let us look now at an example using "is", which returns True if both names refer to the object at the same memory address.

1. Example

Code:

In this example, we are declaring two variables x and y, where y is equal to x and therefore points to the same memory address as x.

x = 100
print("x =", x)
print("address of x", id(x))
y = x
print("y =", y)
print("address of y ", id(y))

Output:

2.
Example

In this example, we declare two variables x and y, where y is equal to x, so y is x is True; but when we increment the value of y by one, the result turns out to be False.

x = 100
y = x
print(y is x)
y = y + 1
print(y is x)

Output:

In the above two examples, we have seen that y is x holds only while both names are bound to the same object; rebinding y to a new value makes it point to a different object.

Pointers to Pointers

1. Example

def fun(a, b, c, d):
    print(a,b,c,d)

x = (101, 102, 103, 104)
fun(*x)

Output:

2. Example

def fun(a,b,c,d):
    print(a,b,c,d)

y = {'a':'I', 'b':'like','c':'python','d':'programming'}
fun(**y)

Output:

3. Example

Putting Example One and Example Two together:

def fun(a,b,c,d):
    print(a)
    print(b)
    print(c)
    print(d)

x = (100,200,300,400)
fun(*x)
y = {'a':'I', 'b':'like','c':'python','d':'programming'}
fun(**y)

Output:

Conclusion

Hope this article was good enough to make you understand the topics in a better way. The article is largely self-explanatory, as all the key elements have been explained in the best possible way.

Recommended Article

This has been a guide to Pointers in Python. Here we discuss what pointers in Python are, the different types of pointers, and arithmetic operations, along with examples. You can also go through our other suggested articles to learn more –
https://www.educba.com/pointers-in-python/
CC-MAIN-2020-24
en
refinedweb
from OSAM advanced navigation routines More...

#include "modules/nav/nav_vertical_raster.h"
#include "firmwares/fixedwing/nav.h"
#include "state.h"
#include "autopilot.h"
#include "generated/flight_plan.h"

Go to the source code of this file.

from OSAM advanced navigation routines. Definition in file nav_vertical_raster.c.

Copy of nav line. The only difference is it changes altitude every sweep, but doesn't come out of the circle until it reaches altitude. Definition at line 40 of file nav_vertical_raster.c.

Definition at line 48 of file nav_vertical_raster.c. References stateGetPositionUtm_f(), WaypointAlt, waypoints, WaypointX, WaypointY, point::x, and point::y.

Definition at line 43 of file nav_vertical_raster.c.

Definition at line 41 of file nav_vertical_raster.c.
http://docs.paparazziuav.org/latest/nav__vertical__raster_8c.html
CC-MAIN-2020-24
en
refinedweb
Install Traefik in Docker Swarm

by Thomas Urban

This tutorial provides a brief example for setting up Traefik in a freshly established Docker Swarm. It closely follows the official descriptions found in Traefik's documentation, but adapts those examples to work in a Docker Swarm.

Prerequisites

This tutorial applies to a Docker Swarm. Its installation usually consists of setting up Docker Engine on three or more servers, running docker swarm init and docker swarm join-token manager on one of the servers, and running whatever command the latter displays on every other server.

Persistent Volumes

Traefik works just fine in such a basic swarm, and that's what this tutorial is about. However, some additional features, such as automatic maintenance of TLS certificates fetched from Letsencrypt, require access to a persistent filesystem. This applies to further services you might want to set up in your swarm. In a Docker Swarm, persistent filesystems require additional setup. One option is to establish a GlusterFS cluster for sharing part of each server's filesystem.

Encrypt Ingress Networking

This isn't required by Traefik, but you should always make sure to encrypt any overlay network to prevent unencrypted traffic between your services from being eavesdropped. This includes your swarm's ingress network, as it is an overlay network and it isn't encrypted by default. Encrypting overlay networks may also help when you encounter basic communication issues between your swarm's nodes, because encrypted networks are established via TCP-based IPSec instead of UDP-based VXLAN.

Since we are starting with a fresh swarm, we assume there is no running service right now. Otherwise you'll need to stop it for fixing the ingress network, at least. Remove the existing ingress network using

docker network rm ingress

and confirm the warning popping up.
Re-create the ingress network using

docker network create --ingress --driver overlay \
  --opt encrypted ingress

As an option you might want to increase the possible size of the ingress network by defining its subnet explicitly:

docker network create --ingress --driver overlay \
  --opt encrypted --subnet 10.10.0.0/16 ingress

Expose Docker API the Safer Way

Following this example, you should put your Docker API behind a proxy to limit the requests available to Traefik. Let's adapt it for use in a swarm. Create a file socket-proxy.yml containing this:

version: "3.8"

services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      CONTAINERS: 1
      NETWORKS: 1
      SERVICES: 1
      TASKS: 1
    networks:
      cloud-socket-proxy:
        aliases:
          - socket-proxy

networks:
  cloud-socket-proxy:
    external: true

Additional environment variables are set to enable access to /networks/*, /services/* and /tasks/*, which is required when setting up Traefik in swarm mode. This file declares access to an existing network, so let's create it:

docker network create --driver overlay --scope swarm \
  --opt encrypted --attachable cloud-socket-proxy

This command creates another overlay network. Its name is cloud-socket-proxy. The prefix cloud- is chosen here to serve as a kind of namespace grouping every network and stack related to commonly managing your swarm. The --attachable option may introduce a security risk, but on the other hand it enables you to have custom containers using this Docker API filter as well.

Start the stack described by the file created before:

docker stack deploy -c socket-proxy.yml cloud-socket-proxy

Install Traefik

Now it's time for setting up Traefik in its most basic way.
Start with creating a file edge.yml defining your Traefik-based reverse proxy in a swarm-compliant way:

version: "3.8"

services:
  reverse-proxy:
    image: traefik:v2.1
    deploy:
      placement:
        constraints:
          - node.role == manager
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    networks:
      - cloud-edge
      - cloud-socket-proxy
    configs:
      - source: traefik
        target: /etc/traefik/traefik.yaml

configs:
  traefik:
    file: ./traefik.yml

networks:
  cloud-edge:
    external: true
  cloud-socket-proxy:
    external: true

This stack is attached to the cloud-socket-proxy network for accessing the Docker API via the filtered TCP socket. There is another network named cloud-edge, which is assumed to exist prior to starting the stack defined before:

docker network create --driver overlay --scope swarm \
  --opt encrypted --attachable --subnet 10.20.0.0/16 \
  cloud-edge

This network is meant to have all the containers and services attached which are exposed for public access via Traefik. It is managed externally so other stacks in your swarm can attach to it as well.

Next create a static configuration file for Traefik named traefik.yml with this content:

api:
  insecure: true
  dashboard: true
  debug: false

entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"

providers:
  docker:
    endpoint: "tcp://socket-proxy:2375"
    swarmMode: true
    watch: true
    exposedByDefault: false
    network: cloud-edge

By using a Docker configuration, this file is implicitly exposed to any service container of the reverse proxy as /etc/traefik/traefik.yaml. Whenever you adjust this file, the stack started next must be restarted by tearing it down before starting it again.

Now, start this stack:

docker stack deploy -c edge.yml cloud-edge

Check it out!

Open your favourite browser to visit http://node.of.your.swarm:8080/. Replace node.of.your.swarm with a hostname referring to any node of your swarm. Using either node's IP address is fine here, as well.
Now create another file test.yml containing

version: "3.8"

services:
  whoami:
    image: containous/whoami
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.whoami.rule=Host(`foo.example.com`)"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"
    networks:
      - cloud-edge

networks:
  cloud-edge:
    external: true

Replace foo.example.com with a name you've set up in DNS to refer to either node of your swarm. Start it with the command

docker stack deploy -c test.yml test

It takes a few moments, but eventually you can see the new route in the dashboard URL provided before. In addition, when accessing http://foo.example.com/ the service is invoked to show some information on your actual HTTP request.

Your Next Steps

- Set up TLS encryption with LetsEncrypt.
- Switch the dashboard from insecure to secure mode.
- Add some actual service exposed via your new edge router.

Troubleshooting

If something does not work, you should start by checking the logs of either service created before.

docker service logs -f cloud-socket-proxy_socket-proxy

This shows the log of the socket proxy, which probably lists any requests by Traefik that failed.

docker service logs -f cloud-edge_reverse-proxy

This command shows the log of Traefik, and it might also show errors regarding communication with the Docker API. In addition it should log discovered services.
https://blog.cepharum.de/en/post/install-traefik-in-docker-swarm.html?page_n18=2
CC-MAIN-2020-24
en
refinedweb
Automatic Deadlock Retry Aspect with Spring and JPA/Hibernate

Since we have multiple batch processes and many simultaneous users, we started seeing deadlock errors in certain parts of the application. Some specific parts have to take a pessimistic lock, and this is where it goes wrong. Since a deadlock is an error that can be solved by repeating the action, we decided to build in a retry mechanism to restart the transaction if it got rolled back.

I started off with creating an annotation. This annotation will mark the entry point that we want to retry in case of a deadlock.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface DeadLockRetry {
    /**
     * Retry count. default value 3
     */
    int retryCount() default 3;
}

The retry count is a value you can supply together with your annotation, so you can specify the number of times we want to retry our operation. Using AOP we can pick up this annotation and let it surround the method call with a retry mechanism.

@Around(value = "@annotation(deadLockRetry)", argNames = "deadLockRetry")

So let's view the aspect. We start with adding an @Aspect annotation on top of our class; this way it is configured to be an aspect. We also want to implement the Ordered interface. This interface lets us order our aspect, which we need in order to surround the transaction advice. If we don't surround our transaction, we will never be able to retry in a new transaction; we would be working in the same (marked as rollback-only) transaction. The rest of the code is pretty straightforward. We create a loop that repeats until we exceed the configured number of retries. Inside that loop we proceed our ProceedingJoinPoint and catch the PersistenceException that JPA would throw when a deadlock occurs. Inside the catch block we check if the error code is a deadlock error code.
Off course we could not directly configure the database specific error codes inside our aspect, so I’ve created an interface. /** * Interface that marks a dialect aware of certain error codes. When you have to * do a low level check of the exception you are trying to handle, you can * implement this in this interface, so you can encapsulate the specific error * codes for the specific dialects. * * @author Jelle Victoor * @version 05-jul-2011 */ public interface ErrorCodeAware { Set<Integer> getDeadlockErrorCodes(); } We already have custom hibernate dialects for our database and database to be, so this let me configure the error codes in the Dialect implementations. It was a bit tricky to get the current dialect. I injected the persistence unit, since we are outside a transaction, and made some casts to get my dialect. The alternative was to use a custom implementation of the ErrorCodeAware interface, not using the dialects. We could inject the needed ErrorCodeAware implementation based on our application context. This added another database specific injection, which added another point of configuration. This is why I chose to store it in our custom dialect. private Dialect getDialect() { final SessionFactory sessionFactory = ((HibernateEntityManagerFactory) emf).getSessionFactory(); return ((SessionFactoryImplementor) sessionFactory).getDialect(); } The only thing left is to configure the aspect, mind the order of the transaction manager and the retry aspect <tx:annotation-driven <bean id="deadLockRetryAspect" class="DeadLockRetryAspect"> <property name="order" value="99" /> </bean> Now when I have a deadlock exception, and I’ve added this annotation, the transaction will rollback and will be reexecuted. /** * This Aspect will cause methods to retry if there is a notion of a deadlock. 
* * <emf>Note that the aspect implements the Ordered interface so we can set the * precedence of the aspect higher than the transaction advice (we want a fresh * transaction each time we retry).</emf> * * @author Jelle Victoor * @version 04-jul-2011 handles deadlocks */ @Aspect public class DeadLockRetryAspect implements Ordered { private static final Logger LOGGER = LoggerFactory.getLogger(DeadLockRetryAspect.class); private int order = -1; @PersistenceUnit private EntityManagerFactory emf; /** * Deadlock retry. The aspect applies to every service method with the * annotation {@link DeadLockRetry} * * @param pjp * the joinpoint * @param deadLockRetry * the concurrency retry * @return * * @throws Throwable * the throwable */ @Around(value = "@annotation(deadLockRetry)", argNames = "deadLockRetry") public Object concurrencyRetry(final ProceedingJoinPoint pjp, final DeadLockRetry deadLockRetry) throws Throwable { final Integer retryCount = deadLockRetry.retryCount(); Integer deadlockCounter = 0; Object result = null; while (deadlockCounter < retryCount) { try { result = pjp.proceed(); break; } catch (final PersistenceException exception) { deadlockCounter = handleException(exception, deadlockCounter, retryCount); } } return result; } /** * handles the persistence exception. Performs checks to see if the * exception is a deadlock and check the retry count. * * @param exception * the persistence exception that could be a deadlock * @param deadlockCounter * the counter of occured deadlocks * @param retryCount * the max retry count * @return the deadlockCounter that is incremented */ private Integer handleException(final PersistenceException exception, Integer deadlockCounter, final Integer retryCount) { if (isDeadlock(exception)) { deadlockCounter++; LOGGER.error("Deadlocked ", exception.getMessage()); if (deadlockCounter == (retryCount - 1)) { throw exception; } } else { throw exception; } return deadlockCounter; } /** * check if the exception is a deadlock error. 
* * @param exception * the persitence error * @return is a deadlock error */ private Boolean isDeadlock(final PersistenceException exception) { Boolean isDeadlock = Boolean.FALSE; final Dialect dialect = getDialect(); if (dialect instanceof ErrorCodeAware && exception.getCause() instanceof GenericJDBCException) { if (((ErrorCodeAware) dialect).getDeadlockErrorCodes().contains(getSQLErrorCode(exception))) { isDeadlock = Boolean.TRUE; } } return isDeadlock; } /** * Returns the currently used dialect * * @return the dialect */ private Dialect getDialect() { final SessionFactory sessionFactory = ((HibernateEntityManagerFactory) emf).getSessionFactory(); return ((SessionFactoryImplementor) sessionFactory).getDialect(); } /** * extracts the low level sql error code from the * {@link PersistenceException} * * @param exception * the persistence exception * @return the low level sql error code */ private int getSQLErrorCode(final PersistenceException exception) { return ((GenericJDBCException) exception.getCause()).getSQLException().getErrorCode(); } /** {@inheritDoc} */ public int getOrder() { return order; } /** * Sets the order. * * @param order * the order to set */ public void setOrder(final int order) { this.order = order; } } From Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
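The retry loop at the heart of the aspect is language-agnostic: keep re-running the unit of work while the failure carries a known deadlock code and a retry budget remains, and propagate anything else immediately. Here is a minimal sketch of that logic in Python; the exception class and the error codes (1213 and 1205 are commonly cited MySQL lock errors) are illustrative assumptions, not part of the article's code.

```python
# Illustrative stand-ins; treat the exact codes as an assumption.
DEADLOCK_ERROR_CODES = {1213, 1205}

class DeadlockError(Exception):
    """Hypothetical exception carrying a low-level SQL error code."""
    def __init__(self, error_code):
        super().__init__("deadlock, error code %s" % error_code)
        self.error_code = error_code

def with_deadlock_retry(operation, retry_count=3):
    """Re-run `operation` until it succeeds or `retry_count` deadlocks
    have been seen; any non-deadlock error propagates immediately."""
    attempts = 0
    while True:
        try:
            return operation()
        except DeadlockError as exc:
            if exc.error_code not in DEADLOCK_ERROR_CODES:
                raise  # not a deadlock: do not retry
            attempts += 1
            if attempts >= retry_count:
                raise  # retry budget exhausted

calls = {"count": 0}

def flaky_transaction():
    # Fails with a deadlock twice, then succeeds, mimicking a lock
    # conflict that clears once the competing transaction commits.
    calls["count"] += 1
    if calls["count"] < 3:
        raise DeadlockError(1213)
    return "committed"
```

In the Spring version, each retry must of course run in a fresh transaction, which is exactly why the aspect orders itself around the transaction advice.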
https://dzone.com/articles/automatic-deadlock-retry
CC-MAIN-2020-24
en
refinedweb
field_mask = protobuf_helpers.field_mask(None, campaign)

# Copy the field_mask onto the operation's update_mask field.
campaign_operation.update_mask.CopyFrom(field_mask)

from google.api_core import protobuf_helpers
from google.ads.google_ads.client import GoogleAdsClient

# Retrieve a GoogleAdsClient instance.
client = GoogleAdsClient.load_from_storage()

# Retrieve an instance of the GoogleAdsService.
google_ads_service = client.get_service('GoogleAdsService')

# Search query to retrieve campaign.
query = ('SELECT '
         'campaign.network_settings.target_search_network, '
         'campaign.resource_name '
         'FROM campaign '
         'WHERE campaign.resource_name = {}'.format(resource_name))

# Submit a query to retrieve a campaign instance.
response = google_ads_service.search_stream(customer_id, query=query)

# Iterate over results to retrieve the campaign.
for batch in response:
    for row in batch.results:
        initial_campaign = row.campaign

# Create a new campaign operation.
campaign_operation = client.get_type('CampaignOperation')

# Copy the retrieved campaign onto the new campaign operation's update field.
campaign_operation.update.CopyFrom(initial_campaign)

# Set the copied campaign object to a variable for easy reference.
updated_campaign = campaign_operation.update

# Mutate the new campaign.
updated_campaign.network_settings.target_search_network.value = False

# Create a field mask using the updated campaign.
field_mask = protobuf_helpers.field_mask(initial_campaign, updated_campaign)

# Copy the field mask onto the operation's update_mask field.
campaign_operation.update_mask.CopyFrom(field_mask)

With this strategy the updated_campaign will share all the same fields as the initial_campaign that was retrieved from the API, namely the resource name. The generated field mask will tell the API that only the network_settings.target_search_network field needs to be changed.
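The behavior of protobuf_helpers.field_mask can be pictured as a recursive diff: compare the original and modified objects and record the dotted path of every leaf that changed (passing None for the original makes every set field count as changed). A rough sketch of that idea over plain dicts follows; this is illustrative only, not the real protobuf implementation.

```python
def field_mask_paths(original, modified):
    """Return dotted paths of fields whose values differ between two
    nested dicts; a None original means every field counts as changed."""
    paths = []
    for key in sorted(set(modified) | set(original or {})):
        old = (original or {}).get(key)
        new = modified.get(key)
        if isinstance(old, dict) and isinstance(new, dict):
            # Recurse into nested messages and prefix the sub-paths.
            paths.extend("%s.%s" % (key, sub)
                         for sub in field_mask_paths(old, new))
        elif old != new:
            paths.append(key)
    return paths
```

With this mental model, the example above produces a mask containing only network_settings.target_search_network, so the API leaves every other campaign field untouched.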
https://developers-dot-devsite-v2-prod.appspot.com/google-ads/api/docs/client-libs/python/field-masks
CC-MAIN-2020-24
en
refinedweb
By David Heffelfinger

CTO and ardent Java EE fan David Heffelfinger demonstrates how easy it is to develop the data layer of an application using Java EE, JPA, and the NetBeans IDE instead of the Spring Framework.

Proponents of the Spring Framework claim that their framework of choice is much easier to work with than Java Platform, Enterprise Edition (Java EE). It is no big secret that I am a Java EE fan, having written several books covering the technology. Nevertheless, just like most developers, I don't always have the choice of picking my technology stack, and in some cases, I've had to work on projects using Spring.

Every time I work on a Spring project, I start mumbling under my breath. I know I will have to go through long and convoluted XML files to determine what is going on with the project. I also know that my project will have approximately 10,000 dependencies and that the generated WAR file is going to be a monster.

When working with Java EE, most of the services I need are provided by the application server. Therefore, the number of required dependencies is minimal. In most cases, Java EE provides configuration by exception, meaning there is very little configuration to be done, and sensible defaults are used in the vast majority of cases. When configuration is needed, it is usually done through annotations, which allows me to get the whole picture just by looking at the source code, without having to navigate back and forth between XML configuration files and source code. In addition to all the advantages mentioned in the previous paragraph, when working in Java EE projects, I get to take advantage of the advanced tooling available from NetBeans.
And if I am lucky enough to be using GlassFish Server Open Source Edition or Oracle GlassFish Server as the application server, I can take advantage of the “deploy on save” feature, which means that every time I save a file in my project, the updated version is automatically deployed to the GlassFish server in the background. All I need to do is reload the page on the browser and the changes are reflected immediately. This is a huge time saver, and every time I am forced to go back to the edit-save-deploy-refresh cycle, I feel like I am working with one hand tied behind my back. In this series of articles, we will rewrite the sample Pet Clinic application provided with Spring using Java EE. In this first article, I illustrate how we can quickly develop an application that has equivalent functionality to the Spring version by taking advantage of the excellent Java EE tooling provided by NetBeans. The Java EE version employs JavaServer Faces (JSF) for the user interface, Data Access Objects (DAOs) are implemented using Enterprise JavaBeans (EJB) 3.1 session beans, and data access is provided by Java Persistence API (JPA) 2.0. In this first part, we start developing the Java EE version of the application by generating the persistence layer from an existing database. In part 2, we will see how NetBeans can help us generate EJB 3.1 session beans that act as DAOs, as well as the JSF 2.0 user interface. Here we are assuming that MySQL is installed on our local workstation and that the petclinic database already exists. (It can be created easily by running the setupDB ANT target included with Pet Clinic.) The first thing we need to do is create a new Web project, as shown in Figure 1. Figure 1. Creating a New Project We then need to specify a name and location for our project, as shown in Figure 2. Usually, the default location and folder are reasonable defaults. Figure 2. 
Specifying a Name and Location for the New Project At this point, we can optionally add any frameworks that our application will use. Since our application will use standard Java EE frameworks, we should select JavaServer Faces, as shown in Figure 3. Figure 3. Selecting JavaServer Faces as the Framework Now we need to select the Java EE server and the Java EE version, as shown in Figure 4. The default values suit our project well, and the default context path will suffice for our purposes. Figure 4. Selecting the Server and Java EE Version At this point, we click Finish and our project is created, as shown in Figure 5. Figure 5. The Newly Created Project We now are ready to develop our application. NetBeans generates most of the code we need to develop our Java EE applications. It can help us generate the JPA entities, as well as DAOs, JSF pages, and JSF managed beans. The first thing we should do is develop our JPA entities. Most JPA implementations include the ability to automatically generate database tables from JPA entities; however, the inverse is not true. JPA does not provide the ability to generate JPA entities from existing database tables. For this reason, when working with an existing schema, in most cases, we need to code our JPA entities manually, adding the appropriate annotations, properties, getters, setters, and so on. However, we are using NetBeans, which provides the ability to automatically generate JPA entities from an existing schema. We simply select File | New, select the Persistence category, and select the Entity Classes from Database file type, as shown in Figure 6. Figure 6. Selecting Entity Classes from Database At this point, we need to select a data source. If we don't already have one set up, NetBeans allows us to create one on the fly, as shown in Figure 7. Figure 7. Creating a Data Source To create a new data source on the fly, we simply need to enter its Java Naming and Directory Interface (JNDI) name. 
We get to make up the name since the data source doesn't exist yet. Then we select a database connection, as shown in Figure 8. Figure 8. Selecting a Database Connection Once again, if we don't have a database connection set up to connect to the desired database, we can create the connection on the fly from the wizard. When creating a new database connection, the first thing we need to do is select the appropriate Java Database Connectivity (JDBC) driver for our database, as shown in Figure 9. Figure 9. Selecting a Driver In the next screen, we need to enter the host, port, database, and user credentials, as shown in Figure 10. It is a good idea to click the button labeled Test Connection to make sure all the values are correct. We should see the message "Connection Succeeded" if everything is in order. Figure 10. Specifying Additional Details and Testing the Connection When using MySQL, a schema is synonymous with a database. Therefore, the Select schema list is grayed out, as shown in Figure 11. Figure 11. Selecting a Schema At this point, we click Finish to create the database connection, and we continue clicking OK until we are back to the New Entity Classes from Database screen, as shown in Figure 12. Figure 12. Specifying Entity Classes At this point, we need to select the tables that we will be working with. In this particular case, we want all of the tables, so we can simply click the button labeled Add All and then click the Next > button, as shown in Figure 13. Figure 13. Database Tables NetBeans attempts to guess the desired name of our entity classes by examining the database table names. The petclinic database uses plural names for its tables (for example, owners, pets, specialties, and so on). However, we would like our corresponding entity names to be singular nouns (Owner, Pet, Specialty, and so on). Conveniently, NetBeans allows us to modify the suggested JPA entity class names in this step by simply double-clicking the name and modifying as necessary.
At this point, we can optionally select to generate named queries for each field in our JPA entities, generate Java API for XML Binding (JAXB) annotations, and create a persistence unit. In most cases, it is a good idea to generate the named queries and create the persistence unit. We might not need the JAXB annotations, but it doesn't hurt to have them. Therefore, in this example, we chose to generate them as well. After clicking Next, we can specify mapping options, as shown in Figure 14. In the Association Fetch list, we can select how associated entities are loaded. The default behavior is to fetch one-to-one and many-to-one relationships eagerly and to fetch one-to-many and many-to-many relationships lazily. We can select the default behavior, or we can specify that all relationships will be fetched either eagerly or lazily. In most cases, the default behavior is the most sensible approach. Figure 14. Specifying How Associated Entities Are Loaded Selecting the Fully Qualified Database Table Names check box causes the @Table annotation in the generated JPA entities to have the catalog and schema attributes set. These attributes are used when generating the database from JPA entities. Selecting the Attributes for Regenerating Tables check box results in additional attributes being added to the @Column (and, in some cases, the @Table) annotation on the generated JPA entities. When this check box is selected, metadata is obtained from the database and used to add additional attributes to the JPA annotations with the obtained values. If the database does not allow the mapped column to be null, the nullable attribute (with a value of false) is added to the corresponding @Column annotation. For attributes of type String, the length attribute is added to the @Column annotation. This attribute specifies the maximum length allowed for the corresponding property. 
For decimal types, the precision (number of digits in a number) and scale (number of digits to the right of the decimal point) are added to the @Column annotation. If there are any unique constraints, the uniqueConstraints attribute is added to the @Table annotation. If the Use Column Names in Relationships check box is selected, the generated field name in a relationship is named after the column name in the “one” part of the relationship. For example, if we have a table named CUSTOMER that has a one-to-many relationship with a table named ORDERS, and the column in the CUSTOMER table that points to the primary key in the ORDERS table is named ORDER_ID, the generated field in the JPA entity will be named orderId. If we deselect this check box, the generated field will be named order. In my experience, deselecting this check box results in saner naming in most cases. After clicking Finish, we can see the generated JPA entities in our project, as shown in Figure 15. Figure 15. Generated JPA Entities As we can see, NetBeans has already saved us a lot of work by automatically generating all the JPA entities needed for the project. Andrew Hunt and Dave Thomas offer this advice in their excellent book The Pragmatic Programmer: “Don’t use wizard code you don’t understand.” This is excellent advice. Let's take a look at one of the generated entities to make sure we understand before moving on. Listing 1. 
Examining a Generated Entity

package com.ensode.petclinicjavaee.entity;

//imports omitted for brevity

@Entity
@Table(name = "owners", catalog = "petclinic", schema = "")
@XmlRootElement
@NamedQueries({
    @NamedQuery(name = "Owner.findAll", query = "SELECT o FROM Owner o"),
    @NamedQuery(name = "Owner.findById", query = "SELECT o FROM Owner o WHERE o.id = :id"),
    @NamedQuery(name = "Owner.findByFirstName", query = "SELECT o FROM Owner o WHERE o.firstName = :firstName"),
    @NamedQuery(name = "Owner.findByLastName", query = "SELECT o FROM Owner o WHERE o.lastName = :lastName"),
    @NamedQuery(name = "Owner.findByAddress", query = "SELECT o FROM Owner o WHERE o.address = :address"),
    @NamedQuery(name = "Owner.findByCity", query = "SELECT o FROM Owner o WHERE o.city = :city"),
    @NamedQuery(name = "Owner.findByTelephone", query = "SELECT o FROM Owner o WHERE o.telephone = :telephone")})
public class Owner implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Basic(optional = false)
    @NotNull
    @Column(name = "id", nullable = false)
    private Integer id;

    @Size(max = 30)
    @Column(name = "first_name", length = 30)
    private String firstName;

    @Size(max = 30)
    @Column(name = "last_name", length = 30)
    private String lastName;

    @Size(max = 255)
    @Column(name = "address", length = 255)
    private String address;

    @Size(max = 80)
    @Column(name = "city", length = 80)
    private String city;

    @Size(max = 20)
    @Column(name = "telephone", length = 20)
    private String telephone;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "owner")
    private Collection<Pet> petCollection;

    public Owner() {
    }

    public Owner(Integer id) {
        this.id = id;
    }

    //getters and setters omitted for brevity

    @Override
    public int hashCode() {
        int hash = 0;
        hash += (id != null ? id.hashCode() : 0);
        return hash;
    }

    @Override
    public boolean equals(Object object) {
        // TODO: Warning - this method won't work in the case the id
        // fields are not set
        if (!(object instanceof Owner)) {
            return false;
        }
        Owner other = (Owner) object;
        if ((this.id == null && other.id != null)
                || (this.id != null && !this.id.equals(other.id))) {
            return false;
        }
        return true;
    }

    @Override
    public String toString() {
        return "com.ensode.petclinicjavaee.entity.Owner[ id=" + id + " ]";
    }
}

The actual code in the JPA entity in Listing 1 is pretty mundane (dare I say, "boring"). It is just a standard JavaBean with private properties and public getters and setters. The interesting stuff is in the annotations. The class is obviously annotated with an @Entity annotation, since this is a requirement for every JPA entity. Next, we see the @Table annotation. The most common reason to use this annotation is to map the JPA entity to the corresponding table via the annotation's name attribute. This is needed only when the name of the JPA entity does not match the name of the table (which is the case in our example). In this specific case, our @Table annotation also has the catalog and schema attributes set. This is the case because we selected the Fully Qualified Database Table Names check box in the wizard. MySQL doesn't distinguish between "schemas" and "databases." Therefore, the generated value for the schema attribute is empty. What is meant as a "catalog" is different depending on the database vendor. In the case of MySQL, the catalog is simply the database name. Therefore, we see the corresponding value in the catalog attribute.

Next, we see the @XmlRootElement annotation. This annotation is used by JAXB to map our entity to XML. It was added because we selected the Generate JAXB Annotations check box in the wizard.
We are not going to use this functionality in our example, but it doesn't hurt to have it, especially since we got it "for free." The generated @NamedQueries annotation encapsulates all the generated @NamedQuery annotations. The NetBeans wizard generates a @NamedQuery annotation for each field in our entity. JPA named queries allow us to define Java Persistence Query Language (JPQL) queries right in the corresponding JPA entity, which means we don't need to hard-code the queries elsewhere in our code. JPQL queries defined in @NamedQuery annotations can be accessed through the createNamedQuery() method in the JPA EntityManager. Identifiers preceded by a colon (:) are named parameters. These parameters need to be replaced by the appropriate values before executing the query, which is done by invoking the setParameter() method on a Query object.

The @Id annotation specifies that the id property of our entity is its primary key. The NetBeans wizard detected that the primary key in the corresponding table in the database is automatically incremented and used the appropriate JPA primary key generation strategy, which is denoted by the @GeneratedValue annotation. The @Column annotation for the @Id field has its nullable attribute set to false. The NetBeans wizard detected that the corresponding column in the database does not accept nulls and automatically added this attribute to the annotation. Similarly, every @Column annotation for every field of type String has a length attribute. The value of the attribute corresponds to the maximum length allowed for the corresponding column in the database. The nullable and size attributes were added because we selected the Attributes for Regenerating Tables check box in the wizard. The @Basic annotation is JPA-specific. Setting its optional attribute to false prevents us from attempting to persist an entity with a null value for the attribute that this annotation decorates.
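The colon-prefixed named parameters in those JPQL queries behave like named placeholders in most database APIs. As a point of comparison (not Java EE itself), Python's sqlite3 module uses the same :name syntax, with the parameter mapping playing the role of Query.setParameter():

```python
import sqlite3

# In-memory table standing in for the owners table from the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE owners (id INTEGER PRIMARY KEY, last_name TEXT)")
conn.executemany("INSERT INTO owners (last_name) VALUES (:lastName)",
                 [{"lastName": "Davis"}, {"lastName": "Franklin"}])

# :lastName is a named parameter, filled in at execution time,
# just like Owner.findByLastName binds :lastName via setParameter().
rows = conn.execute(
    "SELECT last_name FROM owners WHERE last_name = :lastName",
    {"lastName": "Davis"}).fetchall()
```

In both cases the query text stays constant and only the bound values change, which lets the engine reuse the prepared statement.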
The @NotNull and @Size annotations are part of Bean Validation, a new feature introduced in Java EE 6. The @Size annotation allows us to specify the minimum (not shown in Listing 1) and maximum length that a field can have. The values in our entity were derived from the corresponding database columns. The @NotNull annotation makes the annotated field non-nullable. The @NotNull and @Size annotations are part of the Bean Validation specification and are always added by the wizard when appropriate. The corresponding attributes for @Column (nullable and length) are added only if we select the Attributes for Regenerating Tables check box. The former is used for validation, and the latter is used for regenerating the database tables from the JPA entities.

As you can see, developing the data layer of our application is very easy when using JPA and NetBeans, because most of the code is actually generated by the NetBeans wizard. In part 2 of this series, we will see how NetBeans can help us generate the other layers of our application.
https://www.oracle.com/technetwork/articles/java/springtojavaee-522240.html
CC-MAIN-2020-24
en
refinedweb
#include "i2c_abuse_test.h"
#include "led.h"
#include "mcu_periph/i2c.h"

Total I2C Abuse:
- all transaction types: T1 T2 T3 T4 R1 R2 R3 R4 T1R1 T2R1 T1R2 T1R3 T1R4 T1R5 T2R5
- all bitrates: 1k (way too slow) to 1M (way too fast)
- occasional short circuit (simulate bus capacitance or EMI errors)
- variable bus load: from empty to full stack
- connect an LED to a MOSFET that pulls down the SCL and SDA lines

Definition in file i2c_abuse_test.c.

Definition at line 180 of file i2c_abuse_test.c. References i2c_abuse_send_transaction(), i2c_abuse_test_bitrate, i2c_abuse_test_counter, i2c_idle(), i2c_setbitrate(), i2c_submit(), i2c_test1, i2c_test2, I2CTransFailed, I2CTransRx, I2CTransSuccess, LED_OFF, LED_ON, LED_TOGGLE, i2c_transaction::len_r, i2c_transaction::slave_addr, i2c_transaction::status, and i2c_transaction::type.

Definition at line 60 of file i2c_abuse_test.c. References i2c_transaction::buf, i2c_submit(), i2c_test1, I2CTransRx, I2CTransTx, I2CTransTxRx, i2c_transaction::len_r, i2c_transaction::len_w, i2c_transaction::slave_addr, and i2c_transaction::type. Referenced by event_i2c_abuse_test().

Definition at line 45 of file i2c_abuse_test.c. References i2c_abuse_test_bitrate, i2c_abuse_test_counter, i2c_test1, i2c_test2, I2CTransSuccess, i2c_transaction::slave_addr, and i2c_transaction::status.

Definition at line 226 of file i2c_abuse_test.c.

Definition at line 43 of file i2c_abuse_test.c. Referenced by event_i2c_abuse_test(), and init_i2c_abuse_test().

Definition at line 42 of file i2c_abuse_test.c. Referenced by event_i2c_abuse_test(), and init_i2c_abuse_test().

Definition at line 39 of file i2c_abuse_test.c. Referenced by event_i2c_abuse_test(), i2c_abuse_send_transaction(), and init_i2c_abuse_test().

Definition at line 40 of file i2c_abuse_test.c. Referenced by event_i2c_abuse_test(), and init_i2c_abuse_test().
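The "total abuse" description above amounts to sweeping every transaction shape at every bitrate. A small sketch of that cross product follows; the transaction labels mirror the list above, while the intermediate bitrate steps are an assumption for illustration.

```python
from itertools import product

# Transaction shapes named in the description above.
TRANSACTION_TYPES = ["T1", "T2", "T3", "T4", "R1", "R2", "R3", "R4",
                     "T1R1", "T2R1", "T1R2", "T1R3", "T1R4", "T1R5", "T2R5"]

# From "1k (way too slow)" to "1M (way too fast)"; the steps in
# between are illustrative, not taken from the test source.
BITRATES_HZ = [1_000, 10_000, 100_000, 400_000, 1_000_000]

def abuse_schedule():
    """Every (bitrate, transaction type) pair the stress sweep should hit."""
    return list(product(BITRATES_HZ, TRANSACTION_TYPES))
```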
http://docs.paparazziuav.org/v5.14/i2c__abuse__test_8c.html
CC-MAIN-2020-24
en
refinedweb
Version 1.23.0

For an overview of this library, along with tutorials and examples, see CodeQL for C/C++.

The C/C++ void type. See 4.7. For example:

void foo();

import cpp

Canonical QL class corresponding to this element.

Gets a detailed string representation explaining the AST of this type (with all specifiers and nested constructs such as pointers). This is intended to help debug queries and is a very expensive operation; not to be used in production queries.

Gets the source of this element: either itself or a macro that expanded to this element.

Holds if this element may be from a library.

Holds if this element may be from source.

Gets a specifier of this type, resolving typedefs if they have specifiers. For example, given typedef const int *restrict t, the type volatile t has the specifiers volatile, restrict, and const.

Gets as many places as possible where this type is used by name in the source after macros have been replaced (in particular, therefore, this will find type name uses caused by macros). Note that all type name uses within instantiations are currently excluded - this is too draconian in the absence of indexing prototype instantiations of functions, and is likely to improve in the future. At present, the method takes the conservative approach of giving valid type name uses, but not necessarily all type name uses.

Gets the alignment of this type in bytes.

Gets an attribute of this type.

Gets the closest Element enclosing this one.

Gets the primary file where this element occurs.

Gets the primary location of this element.

Gets the name of this type.

Gets the parent scope of this Element, if any. A scope is a Type (Class / Enum), a Namespace, a Block, a Function, or certain kinds of Statement.

Gets the pointer indirection level of this type.

Gets the size of this type in bytes.

Gets this type after typedefs have been resolved.

Gets this type after specifiers have been deeply stripped and typedefs have been resolved.

Holds if this type is called name.

Holds if this declaration has a specifier called name.

Internal – should be protected when QL supports such a flag. Subtypes override this to recursively get specifiers that are not attached directly to this @type in the database but arise through type aliases such as typedef and decltype.

Holds if this type involves a reference.

Holds if this type involves a template parameter.

Holds if this element is affected in any way by a macro. All elements that are totally or partially generated by a macro are included, so this is a super-set of isInMacroExpansion.

Holds if this type is const.

Holds if this type is constant and only contains constant types. For instance, a char *const is a constant type, but not deeply constant, because while the pointer can't be modified the character can. The type const char *const* is a deeply constant type though - both the pointer and what it points to are immutable.

Holds if this type is volatile.

Holds if this type refers to type t (by default, a type always refers to itself).

Holds if this type refers to type t directly.

Gets this type with any typedefs resolved. For example, given typedef C T, this would resolve const T& to const C&. Note that this will only work if the resolved type actually appears on its own elsewhere in the program.

Gets this type after any top-level specifiers and typedefs have been stripped.

Gets the type stripped of pointers, references and cv-qualifiers, and resolving typedefs. For example, given typedef const C& T, stripType returns C.

Gets a textual representation of this element.
https://help.semmle.com/qldoc/cpp/semmle/code/cpp/Type.qll/type.Type$VoidType.html
CC-MAIN-2020-24
en
refinedweb
Hello guys! I would like to test a program I did in C, but I would like something small that shows errors accessibly. Does anyone know of any tool for C like that?

Hello guys! If you're talking about unit tests, I'd recommend Catch. It can be found at. It allows you to write unit tests and test individual procedures of your application, and reports failures. It even supports reporting durations if you want to benchmark your application. An example follows:

#include <dlib/config_reader.h>
#include <iostream>
#include <fstream>
#include <vector>

#define CATCH_CONFIG_MAIN
#include <catch.hpp>

using namespace std;
using namespace dlib;

/*
    We'll assume we have a configuration file on disk named config.txt with the contents:

    # This is an example config file. Note that # is used to create a comment.
    # At its most basic level a config file is just a bunch of key/value pairs.
    # So for example:
    key1 = value2
    dlib = a C++ library

    # You can also define "sub blocks" in your config files like so
    user1 {
        # Inside a sub block you can list more key/value pairs.
        id = 42
        name = davis
        # you can also nest sub-blocks as deep as you want
        details {
            editor = vim
            home_dir = /home/davis
        }
    }
    user2 {
        id = 1234
        name = joe
        details {
            editor = emacs
            home_dir = /home/joe
        }
    }
*/

TEST_CASE("Config reader test") {
    config_reader cr("config.txt");

    // We use the REQUIRE macro to require that a condition be true or false
    REQUIRE(cr["key1"] != "" && cr["key1"] == "value2");

    // And so on...
}

You'll have noticed that this program doesn't have a main() function like it's supposed to. This is because Catch defines its own main() function that accepts command-line arguments. (That's what the CATCH_CONFIG_MAIN preprocessor definition does.) You should only define that in one .cpp file; defining it multiple times... won't go well for you.

Next we use the TEST_CASE() macro to create test cases.
These test cases can be executed manually via the command-line or will be executed sequentially when the program runs. The syntax of the macro is: TEST_CASE( test name [, tags ] ) The [, tags] means that tags are optional (as clearly demonstrated here). Test cases can have sections, which then can have sub-sections and so on. You can do this with the SECTION() macro: SECTION( section name ) The full documentation can be found at … Readme.md. The tutorial can be found at … torial.md.

3 2017-10-18 01:57:58 (edited by nyanchan 2017-10-18 15:08:33)

Show errors? Does that mean you still don't have a compiling environment set up? If that's the case, I recommend using the MinGW compiler collection. You can just type "gcc filename.c <additional compiler options if any>" on the command prompt. We can of course use Visual Studio, but MinGW is much smaller and easier while you are testing basic C programs that don't heavily rely on the latest Win32 API or VC/VC++ specific macros. Also, if you are sure you are writing in C, the code Ethin pasted above doesn't work because it's written for C++. I'm not sure if the testing library itself is actually providing C APIs, so it may work with some tweaks though.

@3, actually, you're slightly incorrect -- I wrote that code to fit the question. Catch will work for C or C++ though, though C++ is recommended. The only thing I pasted was the config file.

Oh, I looked at it again and got it. Thanks for pointing that out. Glad to know that it does support C.

Hello guys! Can someone send the direct link for Catch? The GitHub link does not work

It doesn't work? Odd... you can download it at … catch.hpp. If that doesn't work... check your firewall settings. You should have no problems accessing GitHub. At all.
http://forum.audiogames.net/viewtopic.php?id=23268
CC-MAIN-2017-47
en
refinedweb
What you lose in the syntactic sugar of model inheritance, you gain a bit in query speed. If through this overriding a subclass contains no more abstract methods, that class is concrete (and we can construct objects directly from it). As we described above, although we cannot construct a new object from the class Shape, we can call the constructor for this class inside the constructor for a subclass. The PositionalShape subclass extends the abstract Shape superclass.
The InheritanceManager class from django-model-utils is probably the easiest to use. We can construct objects from the formerly abstract class; when calling their stub methods, bad results are returned. This way: Answer_Risk would work without modification. Finally, we will examine some general principles for designing classes in inheritance hierarchies.. Feels like overkill. An example : Say I would like a Queryset containing all the Content objects that are associated with a specific object of Child1(eg. I like this approach because no matter who the author is, I can easily build the list of comments just by iterating over BlogPostComments set and calling display_name() for each of To work around this problem, when you are using related_name in an abstract base class (only), part of the name should be the string %(class)s. Why are these methods new here and not inherited from other interfaces? more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed Arrays.sort(allShapes, new Comparator () { public int compare(Object o1, Object o2) { double areaDiff = ((Shape)o1).getArea() - ((Shape)o2).getArea(); if (areaDiff < 0) return -1; else if (areaDiff > 0) return +1; django-models share|improve this question asked Feb 10 '11 at 0:56 Burak 2,20494278 add a comment| 1 Answer 1 active oldest votes up vote 1 down vote accepted Multi-table inheritance? Finally, final as an access modifier for class and methods in classes. My problem is that I have no idea how to represent different types of services in my database. The downside of this is that if these are large tables and/or your queries are complex, queries against it could be noticeably slower. Browse other questions tagged python django inheritance django-models or ask your own question. Now assume that we want to find the two shapes that have the most similar area. 
If you know what child type these will have beforehand, you can just access the child class in the following way:

from django.core.exceptions import ObjectDoesNotExist
try:
    telnet_service = service.telnetservice
except (AttributeError, ObjectDoesNotExist):
    ...

That's what I thought, but what I don't understand is that the docs give me a different impression. So, this class must be abstract because it contains two abstract methods: it specifies getBoundingBox and also inherits (and doesn't override) getArea.
We can also easily define a similar subclass for rectangles. Finally, it adds one additional method that detects whether two shapes "may overlap" by checking for intersection in their bounding boxes: if the bounding boxes don't intersect, there is no possibility of overlap. I also have other models that need to reference an 'Answer' regardless of its sub-type. Is there a workaround for the drawback you mention? This is called delegation: one object uses another to implement a method. The delegation mechanism is known as the HAS-A mechanism. The only difficulty with this approach is that when you do something like the following:

node = Node.objects.get(pk=node_id)
for service in node.services.all():
    # Do something with the service

the 'service' objects come back as instances of the base class. Leave unchanged any methods in a subclass that overrides a formerly abstract method.
You can, however, imitate this behaviour with one-to-one relationships:

class F(models.Model):
    pass  # stuff here

class C1(models.Model):
    f = models.OneToOneField(F)

class C2(models.Model):
    f = models.OneToOneField(F)

class D(F):
    pid = models.ForeignKey(F)

public class Circle extends PositionalShapeBasics {
    public Circle(String name, int centerX, int centerY, double r) {
        super(name, centerX, centerY);
        radius = r;
    }

    // Implement the getArea method,
    // specified in the Shape
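The abstract-class mechanics scattered through the fragments above (a Shape that cannot be instantiated, and a Circle that becomes concrete by overriding every abstract method) can be sketched in plain Python with the standard abc module. This is an illustrative analogue, not the Java lecture code or the Django models from the thread:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    def __init__(self, name):
        self.name = name  # shared state, usable via super().__init__

    @abstractmethod
    def get_area(self):
        ...

class Circle(Shape):
    # Overrides the only abstract method, so Circle is concrete.
    def __init__(self, name, radius):
        super().__init__(name)
        self.radius = radius

    def get_area(self):
        return 3.14159 * self.radius ** 2

try:
    Shape("nope")  # abstract classes cannot be instantiated
except TypeError as err:
    print("TypeError:", err)

print(round(Circle("c", 2.0).get_area(), 2))  # 12.57
```

Note that the subclass constructor can still call the abstract superclass constructor, exactly as the excerpt describes for Shape and PositionalShape.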
http://hiflytech.com/cannot-define/cannot-define-a-relation-with-abstract-class.html
CC-MAIN-2017-47
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

Configuring Boost.TR1 is no different to configuring any other part of Boost; in the majority of cases you shouldn't actually need to do anything at all. However, because Boost.TR1 will inject Boost components into namespace std::tr1 it is more than usually sensitive to an incorrect configuration.

The intention is that Boost.Config will automatically define the configuration macros used by this library, so that if your standard library is set up to support TR1 (note that few are at present) then this will be detected and Boost.TR1 will use your standard library versions of these components rather than the Boost ones.

If you would prefer to use the Boost versions of the TR1 components rather than your standard library, then either include the Boost headers directly:

#include <boost/regex.hpp>
boost::regex e("myregex"); //etc

Or else don't enable TR1 in your standard library: since TR1 is not part of the current standard, there should be some option to disable it in your compiler or standard library.

The configuration macros used by each TR1 component are documented in each library section (and all together in the Boost.Config documentation), but defining BOOST_HAS_TR1 will turn on native TR1 support for everything (if your standard library has it), which can act as a convenient shortcut.
http://www.boost.org/doc/libs/1_47_0/doc/html/boost_tr1/config.html
CC-MAIN-2017-47
en
refinedweb
This seems like a very noob question but I can't find an answer anywhere! I'm very new to developing packages for Homebrew, but when I edit my formula and come to update my package I get the following error:

Error: SHA256 mismatch

My question is, how do I generate the expected SHA256 value?

In AIX 7.1 I'm generating SSH keypairs in the mkuser.sys.custom file when users are created. I want to use SHA1 to verify that the private key is valid after it's uploaded to a staging server for delivery to the user's PC. The problem I'm having is that the signature generated by mkuser.sys.custom is not the same as the signature generated by either an external script running the same command, or by the PowerShell script on the Windows end that's trying to validate it. Here is a snippet from mkuser.sys.custom:

#mkuser (home_directory, userid)
#generate the ssh keys
ssh-keygen -f $1/.ssh_$2/id_rsa -t rsa -N '' &> /dev/null
...
#rename the private key so it can be identified by the Powershell script
mv $1/.ssh_$2/id_rsa $1/.ssh_$2/$2_rsa
...
#generate the SHA1 and store it for uploading
shasum $1/.ssh_$2/$2_rsa > $1/.ssh_$2/$2_sha

Sample output looks like "17dfe8f6ed59a191f552ecca1a3232cb9436fe23". If I copy the shasum line to a script, and run it with the same home_directory and userid input I would get the following signature: "C0A35786719399CF707E8AC9611A10B6C5474E31". This is the same result that I get in Powershell. I've tried the same thing with the built-in csum -h SHA1, and get identical results. Why are my signatures coming out different?

I am working on following the SHA-2 cryptographic functions as stated in. I am examining the lines that say: I do not understand the last two lines. If my string is short, can its length after adding K '0' bits be 512? How should I implement this in Java code?

Why, in this solution which I found, is there no loop to use the buffer a few times?
using System.IO;
using System.Security.Cryptography;

private static string GetChecksum(string file)
{
    using (FileStream stream = File.OpenRead(file))
    {
        SHA256Managed sha = new SHA256Managed();
        byte[] checksum = sha.ComputeHash(stream);
        return BitConverter.ToString(checksum).Replace("-", String.Empty);
    }
}

I'm trying to generate a SHA checksum for a 2GB+ file. How should it be done?
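On the last question: ComputeHash(Stream) consumes the stream incrementally in fixed-size reads, which is why the snippet needs no explicit buffer loop even for a 2GB file. The same chunked-hashing idea written out by hand, sketched here in Python's hashlib rather than C# (the function name and chunk size are illustrative):

```python
import hashlib
import io

def sha256_of_stream(stream, chunk_size=65536):
    """Hash a (potentially huge) stream without loading it all into memory."""
    digest = hashlib.sha256()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
    return digest.hexdigest()

# Simulate a large file with an in-memory stream.
data = b"x" * 1_000_000
print(sha256_of_stream(io.BytesIO(data)) == hashlib.sha256(data).hexdigest())  # True
```

For a real file you would pass `open(path, "rb")` instead of the BytesIO stand-in; memory use stays bounded by the chunk size regardless of file size.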
http://convertstring.com/no/Hash/SHA384
CC-MAIN-2017-47
en
refinedweb
java.lang.Object
  org.apache.commons.math3.ode.EquationsMapper

public class EquationsMapper

Class mapping the part of a complete state or derivative that pertains to a specific differential equation. Instances of this class are guaranteed to be immutable.

See also: SecondaryEquations, Serialized Form

public EquationsMapper(int firstIndex, int dimension)
    firstIndex - index of the first equation element in complete state arrays
    dimension - dimension of the secondary state parameters

public int getFirstIndex()

public int getDimension()

public void extractEquationData(double[] complete, double[] equationData) throws DimensionMismatchException
    complete - complete state or derivative array from which equation data should be retrieved
    equationData - placeholder where to put equation data
    Throws DimensionMismatchException if the dimension of the equation data does not match the mapper dimension

public void insertEquationData(double[] equationData, double[] complete) throws DimensionMismatchException
    equationData - equation data to be inserted into the complete array
    complete - placeholder where to put equation data (only the part corresponding to the equation will be overwritten)
    Throws DimensionMismatchException if the dimension of the equation data does not match the mapper dimension
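The class is essentially index bookkeeping over a flat state array: a mapper's equation data lives in the slice [firstIndex, firstIndex + dimension). A rough Python analogue of that behaviour, for illustration only (the real class is the Java one documented above):

```python
class EquationsMapperSketch:
    # Mirrors the documented contract: extract pulls a sub-range out of a
    # complete state array; insert overwrites only that sub-range.
    def __init__(self, first_index, dimension):
        self.first_index = first_index
        self.dimension = dimension

    def extract_equation_data(self, complete):
        if len(complete) < self.first_index + self.dimension:
            raise ValueError("dimension mismatch")
        return complete[self.first_index:self.first_index + self.dimension]

    def insert_equation_data(self, equation_data, complete):
        if len(equation_data) != self.dimension:
            raise ValueError("dimension mismatch")
        complete[self.first_index:self.first_index + self.dimension] = equation_data

mapper = EquationsMapperSketch(2, 3)
state = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
print(mapper.extract_equation_data(state))  # [2.0, 3.0, 4.0]
```

The ValueError here stands in for the Java DimensionMismatchException; everything else about the real class (immutability, serialization) is omitted from the sketch.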
http://commons.apache.org/proper/commons-math/javadocs/api-3.2/org/apache/commons/math3/ode/EquationsMapper.html
CC-MAIN-2017-47
en
refinedweb
Wiki cogent3 / Home

PyCogent3

This is not just a direct port of PyCogent to Python 3. Numerous modules have been removed, method and argument names changed to adhere (more extensively) to PEP8, and default behaviours of some commonly used functions have changed.

Why the big changes? The Python bioinformatics universe is a much richer space than when PyCogent began in 2002. Since PyCogent's publication in 2007, the suite of options for controlling applications, numerical calculations etc. has exploded. This is a fantastic thing! In addition to meaning users have multiple options, it also means we can now refocus PyCogent back to its roots as the most flexible library for molecular evolutionary analyses around.

Changed default alignment object

This is a very significant change in default behaviour! The default alignment object returned by LoadSeqs is now an ArrayAlignment (previously called DenseAlignment). This class has been chosen as the default for a number of reasons (e.g. more natural slicing of alignments). You can obtain an instance of the original Alignment class by either:

- creating it directly, i.e. from cogent3.core.alignment import Alignment, etc.
- via LoadSeqs(..., array_align=False, ...), or
- if you have an ArrayAlignment instance, using new = aln.to_type(array_align=False).

API changes in PyCogent3
API module changes in PyCogent3
API argument changes in PyCogent3
API class changes in PyCogent3
API function changes in PyCogent3
API method changes in PyCogent3

Style for mercurial commit messages

Commit message formatting follows that of numpy.
https://bitbucket.org/pycogent3/cogent3/wiki/Home
CC-MAIN-2017-47
en
refinedweb
how do you make DOS open a separate file like MS Paint for example?

The simplest way, without knowing what exec-type functions your compiler supports, would be:

Code:
system ( "mspaint" );

My best code is written with the delete key.

what would be the whole program code for that?

Code:
...
#include <cstdlib>
...
int main()
{
    ...
    system("[enter system call

That would be:

Code:
#include <cstdlib>
using namespace std;

int main()
{
    system("C:\\map\\map\\program.exe");
    return 0;
}

Note the escaping (sp?) of the backslash. I think a forward slash will work too, but I'm not sure.

If you are in pure DOS or a DOS session (NOT available in XP) then you can call DOS int 21h and open a file via DOS, or you can execute a DOS interrupt from C which will automatically call DOS for you. DOS has specific functions for opening child processes although they are somewhat limited since it was never designed to be a multi-tasking OS.
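For comparison with system() above, the same launch-a-child-process idea in Python's subprocess module. The command below is a portable stand-in (it runs the Python interpreter itself rather than mspaint, so the sketch works on any OS):

```python
import subprocess
import sys

# Launch a child process and capture its output. Analogous to system()
# in C, but without routing the command through a shell.
result = subprocess.run(
    [sys.executable, "-c", "print('child process ran')"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # child process ran
print(result.returncode)      # 0
```

On Windows you could swap the argument list for something like `["mspaint"]` to get the behaviour discussed in the thread, provided the program is on the PATH.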
https://cboard.cprogramming.com/cplusplus-programming/47110-opening-file-dos.html
CC-MAIN-2017-47
en
refinedweb
Hi all, I'm new to python. I am developing a text to speech application using python. So, I'm using a package named "pyTTS" version 3, compatible with python 2.5. Using an existing example, I wrote the following statements:

import pyTTS
tts = pyTTS.Create()

Before this, I've installed the following packages:

python-2.5
pyTTS-3.0.win32-py2.5
msttss22L
SAPI5SpeechInstaller

But I faced this error:

Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    tts = pyTTS.Create()
  File "C:\Python25\Lib\site-packages\pyTTS\__init__.py", line 28, in Create
    raise ValueError('"%s" not supported' % api)
ValueError: "SAPI" not supported

I think that the problem is in the SAPI package, but I don't know how to solve it. Could you help me?
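The traceback shows pyTTS.Create raising ValueError('"%s" not supported' % api) when the requested speech API is unavailable. A minimal, pyTTS-independent sketch of that factory pattern and of how calling code can catch the failure (every name here is illustrative, not the real pyTTS internals):

```python
def create_engine(api="SAPI", supported=("SAPI",)):
    # Stand-in for a factory like pyTTS.Create(); raises the same kind of
    # error as the traceback above when the API is not available.
    if api not in supported:
        raise ValueError('"%s" not supported' % api)
    return {"api": api}

try:
    engine = create_engine("SAPI", supported=())  # simulate a missing SAPI install
except ValueError as err:
    print("falling back:", err)  # falling back: "SAPI" not supported
```

With pyTTS itself, an error like this usually means the underlying speech API never registered correctly, so the fix is on the installation side rather than in the calling code.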
https://www.daniweb.com/programming/software-development/threads/181533/help-using-pytts
CC-MAIN-2017-47
en
refinedweb
Cookbook/Dates And Time
From HaskellWiki
< Cookbook
Revision as of 14:33, 12 January 2014 by Artyom Kazak (Talk | contribs)

1 Finding today's date

import Data.Time
c <- getCurrentTime --> 2009-04-21 14:25:29.5585588 UTC
(y,m,d) = toGregorian $ utctDay c --> (2009,4,21)

2 Adding to or subtracting from a date

3 Difference of two dates
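Sections 2 and 3 of this extract lost their code. For comparison only, here are all three recipes in Python's standard datetime module rather than Haskell (the fixed dates are illustrative, chosen to match the example above):

```python
from datetime import date, timedelta

today = date(2009, 4, 21)                # a fixed "today" for reproducibility
tomorrow = today + timedelta(days=1)     # adding to a date
last_week = today - timedelta(weeks=1)   # subtracting from a date
diff = date(2009, 5, 1) - today          # difference of two dates

print(today.isoformat())     # 2009-04-21
print(tomorrow.isoformat())  # 2009-04-22
print(diff.days)             # 10
```

The Haskell equivalents live in Data.Time (addDays and diffDays operate on Day values), mirroring the toGregorian example in section 1.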
https://wiki.haskell.org/index.php?title=Cookbook/Dates_And_Time&direction=next&oldid=29399
CC-MAIN-2015-22
en
refinedweb
generator Element In this tutorial you will learn about the hibernate's generator element , which is used to auto generate the primary key value Define sequence generated primary key in hibernate sequence generated primary key in hibernate? Use value "sequence" for class attribute for generator tag. <id column="USER_ID" name="id" type="java.lang.Long"> <generator class="sequence"> <param name Data Generator Data Generator Data Generator is free open source script that is designed to create the application, which can be used to generate a large sum of data... JavaScript, PHP and MySQL. Characteristics Data generator is browser Hibernate - Hibernate -mapping-3.0.dtd"><hibernate-mapping><class name="...;int" column="id" > <generator class="assigned"...Hibernate SessionFactory Can anyone please give me an example Hibernate - Hibernate ){ System.out.println(e.getMessage()); } finally{ } }}hibernate mapping <class name...;generator </id> <property name="...Hibernate pojo example I need a simple Hibernate Pojo example  Using Hibernate <generator> to generate id incrementally " type="long" column="id" > <generator class="increment"/>... that we have used increment for the generator class. *After adding the entries... Using Hibernate <generator> to generate id incrementally hibernate Excetion - Hibernate hibernate Excetion The database returned no natively generated identity value Even i mentioned in generator class as native still am getting the same error that Hibernate: insert into login (uname, password) values Reagrsding Hibernate joins - Hibernate -mapping-3.0.dtd"><hibernate-mapping> <class name="...;int" column="ID" > <generator class="assigned"...; <generator class="assigned"/> < Sequence generator problem - JDBC Sequence generator problem Dear sir, I have created one table by name massemailsendingdetails. CREATE TABLE MASSEMAILDETAILS( ID...) 
); Then i created a sequence generator for an id as follows CREATE SEQUENCE Generator Tag <action name="GeneratorTagCompAttribute" class="net.roseindia.GeneratorTag...; } } Create a jsp page where the generator tag...; Output of the Generator Tag Example Configuring Hibernate ://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class...="id" type="long" column="ID" > <generator class="assigned"/> <...Configuring Hibernate How to configure Hibernate?   Javah - Header File Generator Javah - Header File Generator  ... it are derived from the name of the class. By default javah creates a header file for each class listed on the command line and puts the files in the current directory random pass generator - Java Beginners random pass generator Write a program (name the program and class "NamePass") that will generate a list of 20 passwords. Each password is to contain...*; import java.util.*; import java.net.*; public class NamePass { public Hibernate one-to-one relationships ;generator </id> <property name...="id" type="int" column="id"> <generator class="native" />...Hibernate one-to-one relationships How does one to one relationship Generator Tag (Control Tags) Example Generator Tag (Control Tags) Example In this section, we are going to describe the generator tag. The generator tag... into the struts.xml file. sturts.xml <action name="GeneratorTag" class Named ? SQL query in hibernate ;hibernate-mapping> <class name="com.test.Product" table="product">... <generator class="identity" /> <...Named ? SQL query in hibernate What is Named SQL query in hibernate Complete Hibernate 4.0 Tutorial generator hibernate tomcat hibernate jndi hibernate versions... This section contains the Complete Hibernate 4.0 Tutorial. 
Complete Hibernate 4.0 Tutorial Hibernate is a Object-relational mapping PHP Simple password generator PHP Simple password generator In this tutorial I will be showing you how to create a random password using nothing but loops and random letters. For extra security I will define a minimum length and a maximum length, I will also Hibernate Tutorials . Hibernate generator element generates the primary key for new record.  ... Hibernate Tutorials Deepak Kumar Deepak... by him on hibernate. Abstract: Hibernate is popular open source object Hibernate Basic Example not working Hibernate Basic Example not working I have implemented basic hibernate example from your site . In console i got output but database is not affected. hbm.xml <id name="id" type="long" column="id"> <generator Generator Tag (Control Tags) Using Count Attributes Generator Tag (Control Tags) Using Count Attributes In this section, we are going to describe the generator...="GeneratorTagCountAttribute" class="net.roseindia.GeneratorTag"> < Struts2.2.1 generator Tag Example Create primary key using hibernate hibernate? The id element describes the primary key for the persistent class and how the key value is generated. <id name="id" column="id" type="long"> <generator class="increment"/> </id> Hibernate SessionFactory .dtd"> <hibernate-mapping <class...;id" type="int" column="Id" > <generator class="...Hibernate SessionFactory In this tutorial we will learn about how HIBERNATE HIBERNATE What is difference between Jdbc and Hibernate hibernate hibernate what is hibernate flow hibernate hibernate what is hibernate listeners Hibernate Criteria ;> <generator class="native" /> </id> <property...Hibernate Criteria org.hibernate.Criteria is an interface which is very powerful alternatives of HQL (Hibernate Query Language) with some limitations fIRST APPLICATION - Hibernate ; <generator class="assigned"/> <... OF HIBERNATE. BUT IT IS SHOWING exception Exception in thread "main Hi friend... 
org.hibernate.SessionFactory;import org.hibernate.cfg.Configuration;import java.util.*;public class hibernate integration application ; <generator class="...Struts2 hibernate integration application. In this tutorial we are going to show how to Integrate Struts Hibernate and create an application. This struts Mapping Files service Persistence class configuration Hibernate Service configuration... Hibernate Mapping Files Hibernate Mapping Files: In this section I IN CONSOLE & SERVLET ;hibernate-mapping> <class name="player" ...; <generator class="increment"/> </id> <... HIBERNATE IN CONSOLE & SERVLET ( part O/R Mapping ; <hibernate-mapping> <class name="roseindia.Employee" table...; <generator class="assigned"/> <...Hibernate O/R Mapping In this tutorial you will learn about the Hibernate O/RM Generator Tag (Control Tags) Using an Iterator with Id Attributes Generator Tag (Control Tags) Using an Iterator with Id Attributes... the generator tag using the id attributes. Add the following code...="GeneratorTagIdAttribute" class="net.roseindia.Generat Hibernate Aggregate <!-- <generator class...Hibernate Aggregate Functions In this tutorial you will learn about aggregate functions in Hibernate In an aggregate functions values of columns org.hibernate.MappingException: Error reading resource: contact.hbm.xml - Hibernate ;generator </id> <property name="...; Hi friend,<?xml version="1.0"?><!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN""http Hibernate Collection Mapping ;addressId" column="address_id"> <generator class="...;generator </id> <property name="...Hibernate Collection Mapping In this tutorial you will learn about hibernate ;Hi Friend, Please visit the following link: Dirty Checking In Hibernate ; <generator class="assigned"/> <...Dirty Checking In Hibernate In this section we will read about the dirty checking in hibernate. Dirty Checking is the feature of hibernate that helps and Hibernate 4 Tutorial ;generator> method in detail. 
Hibernate generator element generates the primary key... Hibernate configuration file, POJO class and Tutorial.hbm.xml (Hibernate mapping class) In this section we will write required hibernate objects Criteria Associations ; <generator class="native"/> </id> <property name..._id"> <generator class="native" /> </id> <...Hibernate Criteria Associations In this section you will learn about
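Several snippets above use <generator class="increment"/>, where Hibernate computes the next id as one more than the current maximum. The strategy itself is trivial to sketch outside Hibernate; here it is in Python, with an illustrative class name that is not part of Hibernate:

```python
class IncrementGenerator:
    # Mimics the "increment" id strategy: next id = max existing id + 1,
    # then count upward for subsequent inserts in the same session.
    def __init__(self, existing_ids=()):
        self._next = max(existing_ids, default=0) + 1

    def generate(self):
        value = self._next
        self._next += 1
        return value

gen = IncrementGenerator([3, 7, 2])
print(gen.generate())  # 8
print(gen.generate())  # 9
```

The simplicity also shows the strategy's weakness: it is only safe when a single process inserts rows, which is why the tutorials above contrast it with "sequence", "identity", "native" and "assigned" generators.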
http://roseindia.net/tutorialhelp/comment/62897
CC-MAIN-2015-22
en
refinedweb
22 June 2012 07:14 [Source: ICIS news]

SINGAPORE (ICIS)--Total PVC demand in

Deshpande was speaking at the 16th World Chlor-alkali Conference (21/22 June), organised by ICIS and Tecnon OrbiChem.

"Because of the lack of ethylene capacity in

PVC growth in

Domestic demand for PVC pipes is increasing because of the growing housing sector as well as the water and irrigation sector, especially within the agriculture industry, said Deshpande.

"Investments in wires and cables are also expected to consume around 1m tonnes of PVC in the next five years. Downstream expansion in PVC use is outpacing supply, making imports inevitable," said Deshpande.

Indian PVC makers include Finolex, Reliance, Chemplast, DCW and D
http://www.icis.com/Articles/2012/06/22/9571819/india-pvc-market-faces-ethylene-capacity-shortage-producer.html
CC-MAIN-2015-22
en
refinedweb
Hello everyone, I have tested that try-catch works with structured exceptions, to my surprise. Previously I thought we had to use __try and __except. Any comments? Here is my test code; I am using Visual Studio 2008.

Code:
#include <iostream>
using namespace std;

int main()
{
    int* address = NULL;

    try {
        (*address) = 1024;
    }
    catch (...) {
        cout << "access violation caught" << endl;
    }

    return 0;
}

thanks in advance,
George
http://cboard.cprogramming.com/cplusplus-programming/98165-try-catch-works-structured-exception.html
CC-MAIN-2015-22
en
refinedweb
public class WritableFile extends File implements Writable

Fields inherited from class java.io.File:
pathSeparator, pathSeparatorChar, separator, separatorChar

Methods inherited from class java.io.File:
canExecute, canRead, canWrite, compareTo, createNewFile, createTempFile, createTempFile, delete, deleteOnExit, equals, exists, getAbsoluteFile, getAbsolutePath, getCanonicalFile, getCanonicalPath, getFreeSpace, getName, getParent, getParentFile, getPath, getTotalSpace, getUsableSpace, hashCode, isAbsolute, isDirectory, isFile, isHidden, lastModified, length, list, list, listFiles, listFiles, listFiles, listRoots, mkdir, mkdirs, renameTo, setExecutable, setExecutable, setLastModified, setReadable, setReadable, setReadOnly, setWritable, setWritable, toPath, toString, toURI, toURL

Methods inherited from class java.lang.Object:
clone, finalize, getClass, notify, notifyAll, wait, wait, wait

Constructors:
public WritableFile(File delegate)
public WritableFile(File delegate, String encoding)
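WritableFile wraps a java.io.File (the delegate constructor argument) and adds Writable behaviour on top of it. The wrapping-a-delegate pattern itself, reduced to a minimal Python sketch with illustrative names unrelated to Groovy's actual implementation:

```python
class Writable:
    # Minimal stand-in for an interface: one method to implement.
    def write_to(self, out):
        raise NotImplementedError

class WritableText(Writable):
    # Holds a delegate (here a plain string) and adds Writable behaviour,
    # loosely analogous to WritableFile wrapping a File.
    def __init__(self, delegate):
        self.delegate = delegate

    def write_to(self, out):
        out.append(self.delegate)
        return out

buf = []
WritableText("hello").write_to(buf)
print(buf)  # ['hello']
```

The point of the pattern is that callers only see the Writable side, while storage concerns stay inside the delegate.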
http://docs.groovy-lang.org/latest/html/api/org/codehaus/groovy/runtime/WritableFile.html
CC-MAIN-2015-22
en
refinedweb
This article introduces a simple yet flexible way of creating a splash screen for Silverlight applications. One of my projects is to migrate a Windows Forms application to Silverlight. The business owners want to include a splash screen in it. Searching the internet, I found an article, "Navigating and passing values between XAML pages in Silverlight 2", by Nipun Tomar, discussing how to navigate among XAML pages in Silverlight. Based on this navigation method, a splash screen can easily be implemented. The following is a step by step introduction to adding a simple yet flexible splash screen to Silverlight applications. In Visual Studio 2008, follow the Microsoft instructions on how to "Create a New Silverlight Project" to create an empty Silverlight application, with a website to host the Silverlight application, and name the Silverlight project "SplashDemoApplication". By default, a solution is created, which is also named "SplashDemoApplication". In this solution, two projects are created by the wizard. One is the Silverlight project, and the other is the hosting website project called "SplashDemoApplicationWeb". By default, the "SplashDemoApplicationWeb" project is set as the Start Up project for running in Debug mode by Visual Studio. In order to have a web page run the Silverlight application in Debug mode in the Visual Studio environment, we can right click on the file "SplashDemoApplicationTestPage.aspx" in the Solution Explorer and set it as the start page. If we run the Silverlight application in Debug mode now, a web browser window will be launched, showing a blank screen, since we have not yet added anything to the Silverlight application. Since this article is not intended to discuss how to program WPF and XAML, we will add only a new XAML file called "Splash.xaml" to the project beyond the default files created by Visual Studio, which will be the splash screen in our demonstration. 
The "MainPage.xaml" added by default in Visual Studio 2008 will be the main Silverlight application page for demonstration purposes. After adding "Splash.xaml" to the project, we will add a folder called "images" and put two pictures, "NiagaraFalls.jpg" and "Dock.jpg", in the folder. Each picture will be embedded in one of the XAML pages. To make the XAML pages show something, we will add a picture to each of them.

We will need to edit the two XAML files. We first add "NiagaraFalls.jpg" in the "Splash.xaml" file:

<UserControl x:Class="SplashDemoApplication.Splash"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid x:Name="LayoutRoot">
        <Image Source="images/NiagaraFalls.jpg" Width="750" />
    </Grid>
</UserControl>

And then, add "Dock.jpg" and some text in "MainPage.xaml":

<UserControl x:Class="SplashDemoApplication.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid x:Name="LayoutRoot">
        <Grid.RowDefinitions>
            <RowDefinition Height="60"/>
            <RowDefinition Height="42"/>
            <RowDefinition Height="*"/>
        </Grid.RowDefinitions>
        <TextBlock Grid.Row="0" ... />
        <TextBlock Grid.Row="1"
            Text="This is the main Silverlight user control displayed after the splash screen"
            HorizontalAlignment="Center" FontFamily="Verdana" FontSize="20"
            Foreground="Green" VerticalAlignment="Top" />
        <Image Grid.Row="2" Source="images/Dock.jpg" />
    </Grid>
</UserControl>

The starting point of the Silverlight application is the code-behind file of "App.xaml". We will be modifying the default "App.xaml.cs" to let Silverlight load "Splash.xaml" first and then switch to "MainPage.xaml" after a short wait time, to achieve the splash effect.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;

namespace SplashDemoApplication
{
    public partial class App : Application
    {
        private Grid root;

        public void Navigate(UserControl NewPage)
        {
            root.Children.Clear();
            root.Children.Add(NewPage);
        }

        public App()
        {
            this.Startup += this.Application_Startup;
            this.Exit += this.Application_Exit;
            this.UnhandledException += this.Application_UnhandledException;

            InitializeComponent();
        }

        private void Application_Startup(object sender, StartupEventArgs e)
        {
            root = new Grid();
            root.Children.Add(new Splash());
            this.RootVisual = root;

            System.Threading.Thread Worker = new System.Threading.Thread(
                new System.Threading.ThreadStart(BackgroundWork));
            Worker.Start();
        }

        // Empty handlers from the default Visual Studio template
        // (omitted in the original listing); stubs so the class compiles.
        private void Application_Exit(object sender, EventArgs e) { }
        private void Application_UnhandledException(object sender,
            ApplicationUnhandledExceptionEventArgs e) { }

        private void BackgroundWork()
        {
            System.Threading.Thread.Sleep(2000);
            Deployment.Current.Dispatcher.BeginInvoke(() => Navigate(new MainPage()));
        }
    }
}

In the above C# code, we added a private variable "root" of type "Grid" and a method "Navigate" in the "App" class. In the "Application_Startup" method, instead of directly assigning the start-up XAML page to the "RootVisual" the way Visual Studio generates it by default, we first add the page to the "root" Grid and then assign the Grid to the "RootVisual". By doing this, we can navigate among different XAML pages simply by calling the "Navigate" method. Details about this navigation method can be found in "Navigating and passing values between XAML pages in Silverlight 2".

When the Silverlight application runs, "Splash.xaml" will be first loaded and shown in the browser. The application will then start a background thread to call the BackgroundWork method.
In this demonstration project, I just let this thread sleep for a while and then call "Navigate" to load "MainPage.xaml" to achieve the splash screen effect.

Compile and run the application. We can see that the "Splash.xaml" page is first loaded, and the application then switches to "MainPage.xaml" after the thread sleeping time. The splash effect is achieved.

Besides the navigation method introduced by Nipun Tomar, there are two things of interest.

This is the first edition.
http://www.codeproject.com/Articles/47342/A-Simple-Flexible-Silverlight-Splash-Screen/?fid=1554358&df=90&mpp=10&sort=Position&tid=4073038
What is RMS?

Q: Hi, what is RMS?

A: Hello. The Record Management System (RMS) is a simple record-oriented database that allows a MIDlet to persistently store information and retrieve it later. In J2ME, a record store consists of a collection of records, and those records remain persistent across multiple invocations of the MIDlet. The API is provided by the RecordStore class in the javax.microedition.rms package; a store is opened with RecordStore.openRecordStore(name, true), which creates it if it does not already exist.

Related threads:
- J2ME RMS Sorting Example — shows how to sort records stored with the RMS package.
- J2ME RMS Read Write — writing records to a record store and reading them back.
- J2ME Record Store Example — working with a collection of persistent records in a MIDlet.
- J2ME Audio Record — recording audio and playing back the recorded sound.
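For readers who want a feel for how the API described above behaves, here is a small in-memory model of a record store. It is written in Python purely for illustration (the real API is javax.microedition.rms.RecordStore in Java, which only runs on a MIDP device or emulator); the method names mirror the MIDP API, but everything else is simplified.

```python
# Conceptual sketch of the J2ME RMS RecordStore -- illustration only,
# not the real javax.microedition.rms API.
class RecordStore:
    _stores = {}  # stands in for the phone's persistent storage

    def __init__(self, name):
        self.name = name
        self.records = {}   # record id -> bytes
        self.next_id = 1    # MIDP record ids start at 1

    @classmethod
    def openRecordStore(cls, name, create_if_necessary):
        # mirrors RecordStore.openRecordStore(String, boolean)
        if name not in cls._stores:
            if not create_if_necessary:
                raise KeyError("RecordStoreNotFoundException: " + name)
            cls._stores[name] = cls(name)
        return cls._stores[name]

    def addRecord(self, data):
        # returns the id of the new record, like the MIDP call
        rec_id = self.next_id
        self.next_id += 1
        self.records[rec_id] = bytes(data)
        return rec_id

    def getRecord(self, rec_id):
        return self.records[rec_id]

    def deleteRecord(self, rec_id):
        del self.records[rec_id]

    def getNumRecords(self):
        return len(self.records)


if __name__ == "__main__":
    rs = RecordStore.openRecordStore("scores", True)
    rid = rs.addRecord(b"Core J2ME Technology")
    print(rs.getRecord(rid))   # b'Core J2ME Technology'
    print(rs.getNumRecords())  # 1
```

The key ideas carry over directly to the Java API: records are opaque byte arrays, each record gets an integer id, and opening the same named store twice returns the same persistent data.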
http://www.roseindia.net/tutorialhelp/comment/96274
PublicUtility/AUOutputBL.h

/*
    File: AUOutputBL.h
    Abstract: Part of CoreAudio Utility Classes
    Version: 1.01
*/

#ifndef __AUOutputBL_h__
#define __AUOutputBL_h__

#include "CAStreamBasicDescription.h"

#if !defined(__COREAUDIO_USE_FLAT_INCLUDES__)
#else
#endif

// ____________________________________________________________________________
//
// AUOutputBL - Simple Buffer List wrapper targetted to use with retrieving AU output
// Works in one of two ways (both adjustable)... Can use it with NULL pointers, or allocate
// memory to receive the data in.

// Before using this with any call to AudioUnitRender, it needs to be Prepared
// as some calls to AudioUnitRender can reset the ABL

class AUOutputBL {
public:
    // you CANNOT use one of these - it will crash!
    // AUOutputBL ();

    // this is the constructor that you use
    // it can't be reset once you've constructed it
    AUOutputBL (const CAStreamBasicDescription &inDesc, UInt32 inDefaultNumFrames = 512);
    ~AUOutputBL();

    void Prepare ()
    {
        Prepare (mFrames);
    }

    // this version can throw if this is an allocted ABL and inNumFrames is > AllocatedFrames()
    // you can set the bool to true if you want a NULL buffer list even if allocated
    // inNumFrames must be a valid number (will throw if inNumFrames is 0)
    void Prepare (UInt32 inNumFrames, bool inWantNullBufferIfAllocated = false);

    AudioBufferList* ABL() { return mBufferList; }

    // You only need to call this if you want to allocate a buffer list
    // if you want an empty buffer list, just call Prepare()
    // if you want to dispose previously allocted memory, pass in 0
    // then you either have an empty buffer list, or you can re-allocate
    // Memory is kept around if an Allocation request is less than what is currently allocated
    void Allocate (UInt32 inNumberFrames);

    UInt32 AllocatedFrames() const { return mFrames; }

    const CAStreamBasicDescription& GetFormat() const { return mFormat; }

#if DEBUG
    void Print();
#endif

private:
    UInt32 AllocatedBytes () const { return (mBufferSize * mNumberBuffers); }

    CAStreamBasicDescription mFormat;
    Byte*                    mBufferMemory;
    AudioBufferList*         mBufferList;
    UInt32                   mNumberBuffers;
    UInt32                   mBufferSize;
    UInt32                   mFrames;

    // don't want to copy these.. can if you want, but more code to write!
    AUOutputBL () {}
    AUOutputBL (const AUOutputBL &c);
    AUOutputBL& operator= (const AUOutputBL& c);
};

#endif // __AUOutputBL_h__

Copyright © 2012 Apple Inc. All Rights Reserved.
Updated: 2012-07-17
https://developer.apple.com/library/mac/samplecode/PlayFile/Listings/PublicUtility_AUOutputBL_h.html
Data Structures for Drivers

kstat_intr - structure for interrupt kstats

Synopsis

#include <sys/types.h>
#include <sys/kstat.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

Interface Level

Solaris DDI specific (Solaris DDI)

Description

Interrupt statistics are kept in the kstat_intr structure. When kstat_create(9F) creates an interrupt kstat, the ks_data field is a pointer to one of these structures. The macro KSTAT_INTR_PTR() is provided to retrieve this field. It looks like this:

#define KSTAT_INTR_PTR(kptr) ((kstat_intr_t *)(kptr)->ks_data)

An interrupt is a hard interrupt (sourced from the hardware device itself), a soft interrupt (induced by the system), a watchdog interrupt (induced by a periodic timer), or a spurious interrupt (an interrupt entry point that was entered but for which no interrupt condition was present). Drivers generally report only claimed hard interrupts and soft interrupts from their handlers, but measurement of the spurious class of interrupts is useful for auto-vectored devices in order to pinpoint any interrupt latency problems in a particular system configuration.

Drivers that have more than one interrupt of the same type should use multiple structures.

Structure Members

ulong_t intrs[KSTAT_NUM_INTRS]; /* interrupt counters */

The only member exposed to drivers is the intrs member. This field is an array of counters. The driver must use the appropriate counter in the array based on the type of interrupt condition. The following indexes are supported:

KSTAT_INTR_HARD
Hard interrupt

KSTAT_INTR_SOFT
Soft interrupt

KSTAT_INTR_WATCHDOG
Watchdog interrupt

KSTAT_INTR_SPURIOUS
Spurious interrupt
http://docs.oracle.com/cd/E26502_01/html/E29047/kstat-intr-9s.html
#include <Xm/XmIm.h>

void XmImSetFocusValues(
    Widget widget,
    ArgList arglist,
    Cardinal argcount);

XmImSetFocusValues notifies the input manager that the specified widget has received input focus. This function also updates the attributes of the input context associated with the widget. The focus window for the XIC is set to the window of the widget.

The arglist argument is a list of attribute/value pairs for the input context. This function passes the attributes and values to XSetICValues. The caller of this routine should pass in only those values that have changed since the last call to any of these functions: XmImSetValues, XmImSetFocusValues, XmImVaSetValues, or XmImVaSetFocusValues. See the description in the XmImSetValues(3) reference page for a list of associated resources.

The Text and TextField widgets already call the XmImSetFocusValues function when they receive focus. Therefore, further calls to the XmImSetFocusValues function for these widgets are unnecessary.

XmImSetValues(3), XmImVaSetFocusValues(3), and XmImVaSetValues(3).
http://www.makelinux.net/man/3/X/XmImSetFocusValues
September 2008. Updated July 2009: This article discusses bundling large Python libraries using the zipimport module, using the Django 1.0 web application framework as an example. As of release 1.2.3 of the Python runtime environment, Django 1.0 is included in the runtime environment, and no longer needs to be bundled with your app. Using the version of Django included with the runtime environment provides faster start-up times for your application, and is the recommended way to use Django 1.0. The maximum file size is 10 megabytes, and the maximum file count (including application files and static files) is 10,000, with a limit of 1,000 files in a single directory.

Introduction

Using a Python web application framework with your App Engine application is usually as simple as including the files for the framework with your application's code. However, there is a limit to the number of files that can be uploaded for an application, and the standard distributions for some frameworks exceed this limit or leave little room for application code. You can work around the file limit using Python's "zipimport" feature, which is supported by App Engine as of the 1.1.3 release (September 2008). This article describes how to use Django 1.0 with Google App Engine using the "zipimport" feature. You can use similar techniques with other frameworks, libraries or large applications.

Introducing zipimport

When your application imports a module, Python looks for the module's code in one of several directories. You can access and change the list of directories Python checks from Python code using sys.path. In App Engine, your handler is called with a path that includes the App Engine API and your application root directory. If any of the items in sys.path refers to a ZIP-format archive, Python will treat the archive as a directory. The archive contains the .py source files for one or more modules.
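This behavior — an archive on sys.path acting like a directory — is easy to check with a few lines of standard-library code, outside App Engine. The package name pkgdemo and its contents below are invented for the demonstration:

```python
import sys
import tempfile
import zipfile

# Build a throwaway archive containing one package with one module.
workdir = tempfile.mkdtemp()
archive = workdir + "/pkgdemo.zip"
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("pkgdemo/__init__.py", "")
    zf.writestr("pkgdemo/fields.py", "ANSWER = 42\n")

# Putting the archive on sys.path makes Python treat it as a directory.
sys.path.insert(0, archive)
from pkgdemo import fields

print(fields.ANSWER)    # 42
print(fields.__file__)  # a path inside the archive, e.g. .../pkgdemo.zip/pkgdemo/fields.py
```

Note that the module's __file__ points inside the archive, which is a quick way to confirm the import really came from the ZIP and not from a stray directory earlier on the path.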
This feature is supported by a module in the standard library called zipimport, though this module is part of the default import process and you do not need to import this module directly to use it. For more information about zipimport, see the zipimport documentation.

To use module archives with your App Engine application:

- Create a ZIP-format archive of the modules you want to bundle.
- Put the archive in your application directory.
- If necessary, in your handler scripts, add the archive file to sys.path.

For example, if you have a ZIP archive named django.zip with the following files in it:

django/forms/__init__.py
django/forms/fields.py
django/forms/forms.py
django/forms/formsets.py
django/forms/models.py
...

A handler script can import a module from the archive as follows:

import sys
sys.path.insert(0, 'django.zip')
import django.forms.fields

This example illustrates zipimport, but is not sufficient for loading Django 1.0 in App Engine. A more complete example follows.

zipimport and App Engine

App Engine uses a custom version of the zipimport feature instead of the standard implementation. It generally works the usual way: add the ZIP archive to sys.path, then import as usual. Because it is a custom implementation, several features do not work with App Engine. For instance, App Engine can load .py files from the archive, but it can't load .pyc files like the standard version can. The SDK uses the standard version, so if you'd like to use features of zipimport beyond those discussed here, be sure to test them on App Engine.

Archiving Django 1.0

When App Engine launched in Summer 2008, it included the Django application framework as part of the environment to make it easy to get started. At the time, the latest release of Django was 0.96, so this is the version that is part of version "1" of the Python runtime environment. Since then, the Django project released version 1.0.
For compatibility reasons, App Engine can't update its version of Django without also releasing a new version of the Python runtime environment. To use 1.0 with App Engine with version "1" of the runtime environment, an application must include the 1.0 distribution in its application directory.

The Django 1.0 distribution contains 1,582 files. An App Engine application is limited to 1,000 files, so the Django distribution can't be included directly. Of course, not every file in the distribution needs to be included with the application. You can prune the distribution to remove documentation files, unused locales, database interfaces and other components that don't work with App Engine (such as the Admin application) to get the file count below the limit.

Using zipimport, you can include Django 1.0 with your application using just 1 file, leaving plenty of room for your own application files in the 1,000-file limit. A single ZIP archive of Django 1.0 is about 3 MB. This fits within the 10 MB file size limit. You may wish to prune unused libraries from the Django distribution anyway to further reduce the size of the archive.

Update: Prior to the 1.1.9 release of the Python SDK in February 2009, the file size limit was 1 MB. With 1.1.9, the limit has been increased to 10 MB. These instructions produce a Django archive smaller than 1 MB. To make an archive containing all of Django, replace steps 2, 3 and 4 below with the following command:

zip -r django.zip django

To download and re-package Django 1.0 as a ZIP archive:

1. Download the Django 1.0 distribution from the Django website. Unpack this archive using an appropriate tool for your operating system (a tool that can unpack a .tar.gz file). For example, on the Linux or Mac OS X command line:

   tar -xzvf Django-1.0.tar.gz

2. Create a ZIP archive that contains everything in the django/ directory except for the .../conf/ and .../contrib/ sub-directories. (You can also omit bin/ and test/.)
The path inside the ZIP must start with django/.

   cd Django-1.0
   zip -r django.zip django/__init__.py django/bin django/core \
       django/db django/dispatch django/forms \
       django/http django/middleware django/shortcuts \
       django/template django/templatetags \
       django/test django/utils django/views

3. The conf package contains a large number of localization files. Adding all of these files to the archive would increase the size of the archive beyond the 1 MB limit. However, there's room for a few files, and many Django packages need some parts of conf. Add everything in conf except the locale directory to the archive. If necessary, you can also add the specific locales you need, but be sure to check that the file size of the archive is below 1 MB. The following command adds everything in conf except conf/locale to the archive:

   zip -r django.zip django/conf -x 'django/conf/locale/*'

4. Similarly, if you need anything in .../contrib/, add it to the archive. The largest component in contrib is the Django Admin application, which doesn't work with App Engine, so you can safely omit the admin and admindocs directories. For example, to add formtools:

   zip -r django.zip django/contrib/__init__.py \
       django/contrib/formtools

5. Put the archive file in your application directory.

   mv django.zip your-app-dir/

Using the Module Archive

Tip: The latest version of the Django App Engine Helper (starting with version "r64") supports Django 1.0 with zipimport out of the box. Make sure your archive is named django.zip and is in your application root directory. All new projects created using the Google App Engine Helper for Django will automatically use django.zip if present. If you are upgrading an existing project, you will need to copy the appengine_django, manage.py and main.py files from the Google App Engine Helper for Django into your existing project. See Using the Google App Engine Helper for Django.
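However the archive was produced, it's worth sanity-checking its contents before deploying: entries must start with django/, and pruned directories such as conf/locale must actually be absent. The small standalone helper below does such a check — the function name and its defaults are made up for this sketch and are not part of any App Engine tooling:

```python
import zipfile

def check_archive(path,
                  required=("django/__init__.py",),
                  banned_prefix="django/conf/locale/"):
    """Sanity-check a module archive before deploying.

    Returns (missing, stray): entries from `required` that are absent,
    and entries that should have been pruned but are still present.
    (Standalone illustration -- names and defaults are assumptions.)
    """
    names = set(zipfile.ZipFile(path).namelist())
    missing = [n for n in required if n not in names]
    stray = [n for n in names if n.startswith(banned_prefix)]
    return missing, stray
```

A typical use would be check_archive("django.zip") right after building the archive; an empty pair of lists means the archive is ready to upload.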
The following instructions only apply if you are using Django without the Helper, or if you are preparing another module archive.

To use a module archive, the .zip file must be on the Python module load path. The easiest way to do this is to modify the load path at the top of each handler script, and in each handler's main() routine. All other files that use modules in the archives will work without changes.

Because App Engine pre-loads Django 0.96 for all Python applications, using Django 1.0 requires one more step to make sure the django package refers to 1.0 and not the preloaded version. As described in the article Running Django on App Engine, the handler script must remove Django 0.96 from sys.modules before importing Django 1.0. The following code uses the techniques described here to run Django 1.0 from an archive named django.zip:

import sys

from google.appengine.ext.webapp import util

# Uninstall Django 0.96.
for k in [k for k in sys.modules if k.startswith('django')]:
    del sys.modules[k]

# Add Django 1.0 archive to the path.
django_path = 'django.zip'
sys.path.insert(0, django_path)

# Django imports and other code go here...
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
import django.core.handlers.wsgi

def main():
    # Run Django via WSGI.
    application = django.core.handlers.wsgi.WSGIHandler()
    util.run_wsgi_app(application)

if __name__ == '__main__':
    main()

With appropriate app.yaml, settings.py and urls.py files, this handler displays the Django "It worked!" page. See Running Django on App Engine for more information on using Django.

Using Multiple Archive Files for a Single Package

Since all of Django 1.0 is too large to fit into a single archive, can we split it into multiple archives, each on sys.path? Actually yes, with some bootstrapping code to help Python navigate the different locations.

When Python imports a module, it checks each location mentioned in sys.path for the package that contains the module.
If a location does not contain the first package in the module's path, Python checks the next sys.path entry, and so on until it finds the first package or runs out of locations to check. When Python finds the first package in the module's path, it assumes that wherever it found it is the definitive location for that package, and it won't bother looking for it elsewhere. If Python cannot find the rest of the module path in the package, it raises an import error and stops. Python does not check subsequent sys.path entries after the first package in the path has been found.

You can work around this by importing the package that is split across multiple archives from the first archive, then telling Python that the contents of the package can actually be found in multiple places. The __path__ member of a package (module) object is a list of locations for the package's contents. For example, if the django package is split between two archives called django1.zip and django2.zip, the following code tells Python to look in both archives for the contents of the package:

sys.path.insert(0, 'django1.zip')
import django
django.__path__.append('django2.zip/django')

This imports the django package from django1.zip, so make sure that archive contains django/__init__.py. With the second archive on the package's __path__, subsequent imports of modules inside django will search both archives.

Additional Notes

Some additional things to note about using zipimport with App Engine:

- Module archives use additional CPU time the first time a module is imported. Imports are cached in memory for future requests to the same application instance, and modules from archives are cached uncompressed and compiled, so subsequent imports on the same instance will not incur CPU overhead for decompression or compilation.
- The App Engine implementation of zipimport only supports .py files, not precompiled .pyc files.
- Because handler scripts are responsible for adding module archives to the path, handler scripts themselves cannot be stored in module archives. Any other Python code can be stored in module archives.
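The multiple-archive __path__ technique described above can be verified with plain CPython, no App Engine required. The package name splitpkg and both archives below are invented for the demonstration: __init__.py lives only in the first archive, another module lives only in the second, and after extending __path__ both resolve:

```python
import sys
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
zip1 = workdir + "/part1.zip"
zip2 = workdir + "/part2.zip"

# The package's __init__.py lives only in the first archive...
with zipfile.ZipFile(zip1, "w") as zf:
    zf.writestr("splitpkg/__init__.py", "")
    zf.writestr("splitpkg/alpha.py", "WHO = 'alpha'\n")

# ...while another module lives only in the second.
with zipfile.ZipFile(zip2, "w") as zf:
    zf.writestr("splitpkg/beta.py", "WHO = 'beta'\n")

sys.path.insert(0, zip1)
import splitpkg

# Tell Python the package's contents span both archives.
splitpkg.__path__.append(zip2 + "/splitpkg")

from splitpkg import alpha, beta
print(alpha.WHO, beta.WHO)  # alpha beta
```

Without the __path__.append() line, the import of splitpkg.beta would fail with an ImportError, exactly as the explanation above predicts: once the package is found in the first archive, Python stops looking elsewhere.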
https://cloud.google.com/appengine/articles/django10_zipimport?csw=1
Generic comment moderation

Warning: Django's comment framework has been deprecated and is no longer supported. Most users will be better served with a custom solution, or a hosted product like Disqus. The code formerly known as django.contrib.comments is still available in an external repository.

Django's bundled comments application is extremely useful on its own, but the amount of comment spam circulating on the Web today essentially makes it necessary to have some sort of automatic moderation system in place for any application which makes use of comments. To make this easier to handle in a consistent fashion, django.contrib.comments.moderation provides a generic, extensible comment-moderation system which can be applied to any model or set of models which want to make use of Django's comment system.

Overview

The entire system is contained within django.contrib.comments.moderation, and uses a two-step process to enable moderation for any given model:

1. A subclass of CommentModerator is defined which specifies the moderation options to enable.
2. The model is registered with the moderation system, passing in the model class and the CommentModerator subclass.

Suppose, for example, we have a weblog Entry model with a BooleanField named enable_comments, and we want the following behavior:

- If the enable_comments field is False, the comment will simply be disallowed (i.e., immediately deleted).
- If the enable_comments field is True, the comment will be allowed to save.
- Once the comment is saved, an email should be sent to site staff notifying them of the new comment.

Accomplishing this is fairly straightforward and requires very little code:

from django.contrib.comments.moderation import CommentModerator, moderator

class EntryModerator(CommentModerator):
    email_notification = True
    enable_field = 'enable_comments'

moderator.register(Entry, EntryModerator)

The CommentModerator class pre-defines a number of useful moderation options which subclasses can enable or disable as desired, and moderator knows how to work with them to determine whether to allow a comment, whether to moderate a comment which will be allowed to post, and whether to email notifications of new comments.
Built-in moderation options

class CommentModerator

Most common comment-moderation needs can be handled by subclassing CommentModerator and changing the values of pre-defined attributes; the full range of built-in options is as follows.

auto_close_field
If this is set to the name of a DateField or DateTimeField on the model for which comments are being moderated, new comments for objects of that model will be disallowed (immediately deleted) when a certain number of days have passed after the date specified in that field. Must be used in conjunction with close_after, which specifies the number of days past which comments should be disallowed. Default value is None.

auto_moderate_field
Like auto_close_field, but instead of outright deleting new comments when the time limit has passed, new comments will be marked non-public. Must be used in conjunction with moderate_after. Default value is None.

close_after
If auto_close_field is used, this must specify the number of days past the value of the field specified by auto_close_field after which new comments for an object should be disallowed. Allowed values are None, 0 (which disallows comments immediately), or any positive integer. Default value is None.

email_notification
If True, any new comment on an object of this model which survives moderation (i.e., is not deleted) will generate an email to site staff. Default value is False.

enable_field
If this is set to the name of a BooleanField on the model for which comments are being moderated, new comments will be disallowed (immediately deleted) whenever the value of that field is False on the object the comment would attach to. Default value is None.

moderate_after
If auto_moderate_field is used, this must specify the number of days past the value of the field specified by auto_moderate_field after which new comments for an object should be marked non-public. Allowed values are None, 0 (which moderates comments immediately), or any positive integer. Default value is None.

Simply subclassing CommentModerator and changing the values of these options will automatically enable the various moderation methods for any models registered using the subclass.

Adding custom moderation methods

For situations where the built-in options listed above are not sufficient, subclasses of CommentModerator can also override the methods which actually perform the moderation, and apply any logic they desire.
CommentModerator defines three methods which determine how moderation will take place; each method will be called by the moderation system and passed three arguments: comment, which is the new comment being posted; content_object, which is the object the comment will be attached to; and request, which is the HttpRequest in which the comment is being submitted:

CommentModerator.allow(comment, content_object, request)
Should return True if the comment should be allowed to post on the content object, and False otherwise (in which case the comment will be immediately deleted).

CommentModerator.email(comment, content_object, request)
If email notification of the new comment should be sent to site staff or moderators, this method is responsible for sending the email.

CommentModerator.moderate(comment, content_object, request)
Should return True if the comment should be moderated (in which case its is_public field will be set to False before saving), and False otherwise (in which case the is_public field will be left unchanged).

In addition to the moderator.register() and moderator.unregister() methods detailed above, the following methods on Moderator can be overridden to achieve customized behavior:

connect()
Determines how moderation is set up globally. The base implementation in Moderator does this by attaching listeners to the comment_will_be_posted and comment_was_posted signals from the comment models.

pre_save_moderation(sender, comment, request, **kwargs)
In the base implementation, applies all pre-save moderation steps (such as determining whether the comment needs to be deleted, or whether it needs to be marked as non-public or generate an email).
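The date-based options (auto_close_field/close_after and auto_moderate_field/moderate_after) ultimately reduce to a day-count comparison against the object's date field. The helper below is a standalone sketch of that decision — the function name and arguments are made up here, and this is not the framework's actual implementation:

```python
from datetime import datetime, timedelta

def comment_policy(pub_date, now, close_after=None, moderate_after=None):
    """Return 'delete', 'moderate' or 'allow' for a new comment on an
    object dated `pub_date`, mimicking the close_after / moderate_after
    semantics described above (standalone sketch, not Django code)."""
    age_days = (now - pub_date).days
    if close_after is not None and age_days >= close_after:
        return "delete"    # comment disallowed outright
    if moderate_after is not None and age_days >= moderate_after:
        return "moderate"  # saved, but with is_public set to False
    return "allow"

entry_date = datetime(2015, 1, 1)
print(comment_policy(entry_date, entry_date + timedelta(days=3),
                     close_after=30, moderate_after=0))  # moderate
```

Note how a value of 0 moderates (or closes) immediately, matching the documented meaning of the options, and how closing takes precedence over moderating when both thresholds have passed.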
https://docs.djangoproject.com/en/1.7/ref/contrib/comments/moderation/
CC-MAIN-2015-22
en
refinedweb
Introduction

It's Superbowl Sunday and between the snacks it is time to learn something new. Today my eye fell on Angularjs. Angularjs is an MVC framework for javascript. It kinda puts the controllers and model on the clientside and not on the serverside like ASP.Net MVC does. I will use Nancy as our service to provide us with json data.

Server

Our server is pretty simple. Just make an empty asp.net application and add a Models folder. Here is my model.

using System;

namespace NancyJTable.Models
{
    public class PlantModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Genus { get; set; }
        public string Species { get; set; }
        public DateTime DateAdded { get; set; }
    }
}

And here is my module, which I put in the Modules folder.

using System;
using System.Collections.Generic;
using System.Linq;
using Nancy;
using NancyJTable.Models;

namespace NancyJTable.Modules
{
    public class PlantsModule : NancyModule
    {
        public PlantsModule()
        {
            Get["/plants/{Id}"] = parameters =>
                Response.AsJson(GetPlantModels().SingleOrDefault(x => x.Id == parameters.Id));
            Get["/plants"] = parameters =>
            {
                return Response.AsJson(GetPlantModels());
            };
        }

        private IList<PlantModel> GetPlantModels()
        {
            var plantModels = new List<PlantModel>();
            for (var i = 1; i <= 25; i++)
            {
                var j = i.ToString("000");
                plantModels.Add(new PlantModel()
                {
                    Id = i,
                    Name = "name" + j,
                    Genus = "genus" + j,
                    Species = "Species" + j,
                    DateAdded = DateTime.Now
                });
            }
            return plantModels;
        }
    }
}

Client

First I added another empty ASP.Net web application project to our solution. I added angularjs to my project. It is on nuget so no problems there. I want to add a list of plants and I want a details page. First I need to start with adding my app.js in the js folder. This will take care of the routes.

angular.module('plantsapp', []).
    config(['$routeProvider', function ($routeProvider) {
        $routeProvider.
            when('/plants', {
                templateUrl: 'partials/plants.html',
                controller: PlantsController
            }).
            when('/plants/:Id', {
                templateUrl: 'partials/plant.html',
                controller: PlantController
            }).
            otherwise({ redirectTo: '/plants' });
    }]);

You already see that I have two partial views and 2 controllers. The controllers are in my controllers.js file.

function PlantsController($scope, $http) {
    $http.get('').success(function (data) {
        $scope.plants = data;
    });
}

function PlantController($scope, $routeParams, $http) {
    $http.get('' + $routeParams.Id).success(function (data) {
        $scope.plant = data;
    });
}

So I have a PlantsController that uses a get to get my json from my nancy service. And I have a PlantController that uses the routeparameter to tell which plant to get from my service. See how I inject http and how it magically gets injected for me. The next thing is to create an index.html which is the base of our views.

<!DOCTYPE html>
<html lang="en" ng-app="plantsapp">
<head>
    <title>Plants</title>
    <script src="/Scripts/angular.js"></script>
    <script src="/js/controller.js"></script>
    <script src="/js/app.js"></script>
</head>
<body>
    <div ng-view></div>
</body>
</html>

In the html tag I added an ng-app with the name I provided in my app.js. I import angular.js, controller.js and app.js. And I added a div with an ng-view attribute. Now I need the partials which I put in the partials folder.

<table id="table_id">
    <thead>
        <tr>
            <th>Id</th>
            <th>Name</th>
            <th>Genus</th>
        </tr>
    </thead>
    <tbody ng-repeat="plant in plants">
        <tr>
            <td>{{plant.Id}}</td>
            <td><a href="#/plants/{{plant.Id}}">{{plant.Name}}</a></td>
            <td>{{plant.Genus}}</td>
        </tr>
    </tbody>
</table>

This is my plants.html file and it uses the plants object which I have added to the scope in my controller. See how I add the link (the # is important). The next file is the detail view, plant.html.

<table id="table_id">
    <tr>
        <th>Id</th>
        <th>{{plant.Id}}</th>
    </tr>
    <tr>
        <th>Name</th>
        <th>{{plant.Name}}</th>
    </tr>
    <tr>
        <th>Genus</th>
        <th>{{plant.Genus}}</th>
    </tr>
</table>

Here I use plant because that is what I used to add to the scope in my controller. And that is it.
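Under the hood, what $routeProvider does with '/plants' and '/plants/:Id' is pattern matching on the URL path. A rough, framework-free sketch of that idea (the function match_route and the handler strings are made up for illustration, not Angular internals):

```python
def match_route(routes, path):
    """Match a path like '/plants/7' against patterns like
    '/plants/:Id' and pull out the named parameters."""
    for pattern, handler in routes.items():
        pattern_parts = pattern.strip('/').split('/')
        path_parts = path.strip('/').split('/')
        if len(pattern_parts) != len(path_parts):
            continue
        params = {}
        for pat, actual in zip(pattern_parts, path_parts):
            if pat.startswith(':'):
                params[pat[1:]] = actual   # ':Id' captures '7'
            elif pat != actual:
                break
        else:
            return handler, params
    return None, {}

routes = {'/plants': 'PlantsController', '/plants/:Id': 'PlantController'}
```

The router tries each registered pattern in turn and hands the extracted parameters (here, Id) to the matched controller, which is why PlantController can read $routeParams.Id.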
Here are the screenshots. Look also at the url in the address bar. The plants. And here the detail view.

Conclusion

Yeah, uhm. Not sure. Not saying this is bad, but how will this scale ;-). One thing is for sure, the documentation was very good to get me started. Not that I read it before beginning but once I did, it helped. Framework fatigue didn't kick in yet? Oh yeah. I wish someone had a monopoly and take over this damn mess we call programming.

Yep, back in the year 2000 or so there was none of this stuff….no libraries, 2 browsers, php, asp, jsp, coldfusion or perl were the webstack technologies and that was pretty much what 93% of the world used

Nice work. I've been reviewing Angular myself. One thing you could do is make a separate service or repository if you will and that can get injected into your controllers! IF you use $resource instead of $http, you get a higher-level abstraction and you don't have to deal with any of the HTTP calls.

@SQLDenis, Yes, back in the old days it was just CGI or ISAPI, but things are also different. Back then it was very much client-server with HTTP. This is all about being client with server only acting as data and processing, thus improving the experience with a more fluid and responsive design.

One feature worth elaborating on is Angular's HTML5 mode. That allows any browser supporting the history API to ditch the hashbang and use standard URL paths with the option to gracefully fall back. Having built a couple medium to large scale SPAs with Angular, I'd say it scales better than most of today's popular MV(*) JavaScript frameworks. It's on par with Ember, no doubt.

Ive googled this all day and haven't been able to get around this error. Closest Ive gotten was using jsonp instead of $http. Is there something I need to configure or set up to get this to work in local development and on a server. I tried adding delete $http.default.headers.common 'x-requested-with'.
Ive also tried writing the angular side about 50 different ways. XMLHttpRequest cannot load. Origin is not allowed by Access-Control-Allow-Origin.
http://blogs.lessthandot.com/index.php/webdev/serverprogramming/aspnet/angularjs/
CC-MAIN-2015-22
en
refinedweb
Drupal 8 Preview: Object Oriented Programming for Module Developers

Are you wondering how Drupal 8's transition to object oriented programming (OOP) will affect how you write modules in Drupal 8? In this session, we'll look at an example module, Pants, that touches most of Drupal's foundational APIs, and dive into the code differences between the Drupal 7 and 8 versions of it. If you're new to OOP, we hope that you'll come away feeling that while there are some new concepts and approaches to learn, it's really not that scary. If you're already an OOP expert, we hope you'll enjoy how Drupal has implemented the patterns you might be familiar with.

In this webinar, you will learn how OOP affects how you:

- Arrange classes, namespaces, and files within your module
- Implement controllers (pages and forms)
- Implement plugins (from blocks to field types to Views plugins)
- Work with entities
- Work with the configuration system
http://www.acquia.com/fr/resources/webinars/drupal-8-preview-object-oriented-programming-module-developers
CC-MAIN-2015-22
en
refinedweb
05 February 2010 15:47 [Source: ICIS news]

TORONTO (ICIS news)--Canadian companies have gained an exemption from "Buy American" rules in the $787bn (€575bn) US economic stimulus package, ending a trade dispute between the two countries, Canada's international trade minister Peter Van Loan said in a media briefing on Friday.

Under a deal with the US, Canadian firms would have access to US state and local public works projects under the "American Recovery and Reinvestment Act" in a range of areas, including programmes of the US Department of Energy, the US Department of Housing and Urban Development and the Environmental Protection Agency, Van Loan said.

However, parts of the settlement with the

Canadian commentators noted that Friday's announcement only covered the US stimulus act and would not extend to future

The chemical industry is

Member companies of chemical trade group Chemistry Industry Association of Canada export almost 80% of production, that is, roughly double the global average and almost three times the

The Ottawa-based trade group noted Van Loan's announcement on its website but officials were not immediately available for additional comment.

However, Jayson Meyers, chief executive of trade group Canadian Manufacturers & Exporters said: "This is an important agreement. It is a good step in the right direction and puts (
http://www.icis.com/Articles/2010/02/05/9332345/canadian-producers-to-be-exempt-from-us-buy-american-law.html
CC-MAIN-2015-22
en
refinedweb
Thomas,

Can it wait just a few more days?

Thanks,
dims

--- Thomas Sandholm <sandholm@mcs.anl.gov> wrote:
> Now when 1.1 has been labeled I would like to merge the fixes I put into
> the dynamic_deserilization branch into the trunk so that they get included
> into the next release.
> I have done a successful merge in my workspace, but I wanted to give you a
> heads up too with things that have been merged before committing.
>
> -dynamic deserialization support described at:
>
> -SOAP Header dirty flag bug fix
> -Added support for getting default namespace in XMLUtils getNamespace,
> getFullQNameFromString
> -Fix for correct namespace generation for types in Java to wrapped WSDL
> generation
> -xsd:union support
> -added meta data generation to enum emitter
>
> If I don't hear any objections I will merge it into the trunk tomorrow.
>
> Thanks,
> Thomas
>
> At 06:03 PM 6/8/2003 -0400, Glen Daniels wrote:
>
> >I did drop a label ("axis1_1"), but I did not cut an actual branch,
> >figuring we can do that from the label if/when necessary.
> >
> >--Glen
> >
> > > -----Original Message-----
> > > From: Davanum Srinivas [mailto:dims@yahoo.com]
> > > Sent: Sunday, June 08, 2003 4:01 PM
> > > To: axis-dev@ws.apache.org
> > > Subject: Re: 1.1 pre-release (please test)
> > >
> > > Thanks).
>
> Thomas Sandholm <sandholm@mcs.anl.gov>
> The Globus Project(tm) <>
> Ph: 630-252-1682, Fax: 630-252-1997
> Argonne National Laboratory

=====
Davanum Srinivas -
http://mail-archives.apache.org/mod_mbox/axis-java-dev/200306.mbox/%3C20030610162046.12482.qmail@web12809.mail.yahoo.com%3E
CC-MAIN-2015-22
en
refinedweb
SYNOPSIS

#include <unibilium.h>

size_t unibi_dump(const unibi_term *ut, char *p, size_t n);

DESCRIPTION

This function creates a compiled terminfo entry from ut. The output is written to p, which must have room for at least n bytes.

RETURN VALUE

"unibi_dump" returns the number of bytes required to store the terminfo data. If this exceeds n, nothing is written to p. If the terminal object can't be represented in terminfo format (e.g. because the string table would be too large), the return value is "SIZE_MAX".

ERRORS

- "EINVAL" - ut can't be converted to terminfo format.
- "EFAULT" - The resulting terminfo entry would be longer than n bytes.
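The contract above is the classic two-call sizing pattern: call once with a too-small buffer to learn the required size, allocate, then call again. A language-neutral sketch of that calling convention in Python (the dump function here only mimics the contract; it is not unibilium code):

```python
def dump(entry_bytes, buf, n):
    """Mimic unibi_dump's contract: return the byte count required;
    write into buf only when the result fits within n bytes."""
    required = len(entry_bytes)
    if required > n:
        return required              # too small: nothing is written
    buf[:required] = entry_bytes
    return required

entry = b"fake-terminfo-data"          # stand-in for a compiled entry
needed = dump(entry, bytearray(), 0)   # first call only measures
buf = bytearray(needed)
written = dump(entry, buf, needed)     # second call fills the buffer
```

In C, the same pattern would be a first unibi_dump call with n = 0 to get the size, a malloc of that many bytes, then a second call with the real buffer.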
https://manpages.org/unibi_dump/3
CC-MAIN-2022-40
en
refinedweb
Scalar wave equation with higher-order mass lumping

Introduction

In this demo, we solve the scalar wave equation with a fully explicit, higher-order (up to degree 5) mass lumping technique for triangular and tetrahedral meshes. This scalar wave equation is widely used in seismology to model seismic waves and is especially popular in algorithms for geophysical exploration such as Full Waveform Inversion and Reverse Time Migration. This tutorial demonstrates how to use the mass-lumped triangular elements originally discovered in [CJKMVV99] and later improved upon in [GMvdV18] in the Firedrake computing environment. The short tutorial was prepared by Keith J. Roberts <krober@usp.br>.

The scalar wave equation is:

\[\rho \, \partial_{t}^{2} u - \nabla \cdot \left(c^{2} \nabla u\right) = f,\]

where \(c\) is the scalar wave speed and \(\rho\) is the density (assumed to be 1 for simplicity).

The weak formulation is finding \(u \in V\) such that:

\[\left(\partial_{t}^{2} u, v\right) + a(u, v) = \langle f, v \rangle \quad \forall v \in H^{1}_{0}(\Omega),\]

where \(\langle \cdot, \cdot \rangle\) denotes the pairing between \(H^{-1}(\Omega)\) and \(H^{1}_{0}(\Omega)\), \((\cdot, \cdot)\) denotes the \(L^{2}(\Omega)\) inner product, and \(a(\cdot, \cdot) : H^{1}_{0}(\Omega) \times H^{1}_{0}(\Omega) \rightarrow ℝ\) is the elliptic operator given by:

\[a(u, v) = \int_{\Omega} c^{2} \, \nabla u \cdot \nabla v \,\mathrm{d}x.\]

We solve the above weak formulation using the finite element method. In the work of [CJKMVV99] and later [GMvdV18], several triangular and tetrahedral elements were discovered that could produce convergent and stable mass lumping for \(p \ge 2\). These elements have enriched function spaces in the interior of the element that lead to more degrees of freedom per element than the standard Lagrange element. However, this additional computational cost is offset by the fact that these elements produce diagonal matrices that are comparatively quick to solve, which improves simulation throughput especially at scale. Firedrake supports (through FInAT) these elements up to degree 5 on triangular, and degree 3 on tetrahedral meshes. They can be selected by choosing the "KMV" finite element.
In addition to importing firedrake as usual, we will need to construct the correct quadrature rules for the mass lumping by hand. FInAT is responsible for providing these quadrature rules, so we import it here too:

from firedrake import *
import finat
import math

A simple uniform triangular mesh is created:

mesh = UnitSquareMesh(50, 50)

We choose a degree 2 KMV continuous function space, set it up and then create some functions used in time-stepping:

V = FunctionSpace(mesh, "KMV", 2)
u = TrialFunction(V)
v = TestFunction(V)
u_np1 = Function(V)  # timestep n+1
u_n = Function(V)    # timestep n
u_nm1 = Function(V)  # timestep n-1

Note: The user can select orders up to p=5 for triangles and up to p=3 for tetrahedra.

We create an output file to hold the simulation results:

outfile = File("out.pvd")

Now we set the time-stepping variables, performing a simulation for 1 second with a timestep of 0.001 seconds:

T = 1.0
dt = 0.001
t = 0
step = 0

Ricker wavelets are often used to excite the domain in seismology. They have one free parameter: a peak frequency. Here we inject a Ricker wavelet into the domain with a frequency of 6 Hz. For simplicity, we set the seismic velocity in the domain to be a constant:

freq = 6
c = Constant(1.5)

The following two functions are used to inject the Ricker wavelet source into the domain.
We create a time-varying function to model the time evolution of the Ricker wavelet:

def RickerWavelet(t, freq, amp=1.0):
    # Shift in time so the entire wavelet is injected
    t = t - (math.sqrt(6.0) / (math.pi * freq))
    return amp * (
        1.0 - (1.0 / 2.0) * (2.0 * math.pi * freq) * (2.0 * math.pi * freq) * t * t
    )

The spatial distribution of the source function is a Gaussian kernel with a standard deviation of 2,000 so that it's sufficiently localized to emulate a Dirac delta function:

def delta_expr(x0, x, y, sigma_x=2000.0):
    sigma_x = Constant(sigma_x)
    return exp(-sigma_x * ((x - x0[0]) ** 2 + (y - x0[1]) ** 2))

To assemble the diagonal mass matrix, we need to create the matching colocated quadrature rule. FInAT implements custom "KMV" quadrature rules to do this. We obtain the appropriate cell from the function space, along with the degree of the element, and construct the quadrature rule:

quad_rule = finat.quadrature.make_quadrature(V.finat_element.cell, V.ufl_element().degree(), "KMV")

Then we make a new Measure object that uses this rule:

dxlump = dx(rule=quad_rule)

To discretize \(\partial_{t}^2 u\) we use a central scheme:

\[\partial_{t}^{2} u \approx \frac{u^{n+1} - 2 u^{n} + u^{n-1}}{\Delta t^{2}}.\]

Substituting the above into the time derivative term in the variational form leads to:

\[\left(\frac{u^{n+1} - 2 u^{n} + u^{n-1}}{\Delta t^{2}}, v\right).\]

Using Firedrake, we specify the mass matrix using the special quadrature rule with the Measure object we created above like so:

m = (u - 2.0 * u_n + u_nm1) / Constant(dt * dt) * v * dxlump

Note: Mass lumping is a common technique in finite elements to produce a diagonal mass matrix that can be trivially inverted, resulting in a very efficient explicit time integration scheme. It's usually done with nodal basis functions and an inexact quadrature rule for the mass matrix. A diagonal matrix is obtained when the integration points coincide with the nodes of the basis functions.
However, when using elements of \(p \ge 2\), this technique does not result in a stable and accurate finite element scheme, and new elements must be found, such as those detailed in [CJKMVV99].

The stiffness matrix \(a(u,v)\) is formed using a standard quadrature rule and is treated explicitly:

a = c*c*dot(grad(u_n), grad(v)) * dx

The source is injected at the center of the unit square:

x, y = SpatialCoordinate(mesh)
source = Constant([0.5, 0.5])
ricker = Constant(0.0)
ricker.assign(RickerWavelet(t, freq))

We also create a function R to save the assembled RHS vector:

R = Function(V)

Finally, we define the whole variational form \(F\), assemble it, and then create a cached PETSc LinearSolver object to efficiently timestep with:

F = m + a - delta_expr(source, x, y)*ricker * v * dx
a, r = lhs(F), rhs(F)
A = assemble(a)
solver = LinearSolver(A, solver_parameters={"ksp_type": "preonly", "pc_type": "jacobi"})

Note: Since we have arranged that the matrix A is diagonal, we can invert it with a single application of Jacobi iteration. We select this here using appropriate solver parameters, which tell PETSc to construct a solver which just applies a single step of Jacobi preconditioning.

Now we are ready to start the time-stepping loop:

step = 0
while t < T:
    step += 1

    # Update the RHS vector according to the current simulation time `t`
    ricker.assign(RickerWavelet(t, freq))
    R = assemble(r, tensor=R)

    # Call the solver object to do point-wise division to solve the system.
    solver.solve(u_np1, R)

    # Exchange the solution at the two time-stepping levels.
    u_nm1.assign(u_n)
    u_n.assign(u_np1)

    # Increment the time and write the solution to the file for visualization in ParaView.
    t += dt
    if step % 10 == 0:
        print("Elapsed time is: " + str(t))
        outfile.write(u_n, time=t)

References

[CJKMVV99] MJS Chin-Joe-Kong, Wim A Mulder, and M Van Veldhuizen. Higher-order triangular and tetrahedral finite elements with mass lumping for solving the wave equation.
Journal of Engineering Mathematics, 35(4):405–426, 1999.

[GMvdV18] Sjoerd Geevers, Wim A Mulder, and Jaap JW van der Vegt. New higher-order mass-lumped tetrahedral elements for wave propagation modelling. SIAM Journal on Scientific Computing, 40(5):A2830–A2857, 2018.
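To see concretely why a lumped (diagonal) mass matrix makes each explicit step so cheap, here is a hypothetical 1D analogue in plain Python (nothing below is Firedrake code; the grid, operators, and names are invented for illustration). With M diagonal, advancing M u_tt = -K u one step reduces to a pointwise division per node instead of a linear solve:

```python
def stiff_1d(u, c=1.0, dx=0.1):
    """Apply a 1D stiffness operator K u = -c^2 u_xx via second
    differences on a periodic grid."""
    N = len(u)
    return [c * c * (2.0 * u[i] - u[(i + 1) % N] - u[i - 1]) / (dx * dx)
            for i in range(N)]

def leapfrog_step(u_n, u_nm1, mass, stiff_apply, dt):
    """One central-difference step of M u_tt = -K u.  Because the
    lumped mass 'matrix' is just the list `mass` of diagonal entries,
    the per-step solve is a division node by node."""
    ku = stiff_apply(u_n)
    return [2.0 * un - unm1 - dt * dt * k / m
            for un, unm1, k, m in zip(u_n, u_nm1, ku, mass)]
```

This is the same structure the demo obtains with its "preonly"/"jacobi" solver settings: inverting a diagonal matrix is a single pointwise operation.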
https://www.firedrakeproject.org/demos/higher_order_mass_lumping.py.html
CC-MAIN-2022-40
en
refinedweb
Update to kill and I figured I could use it to continue my study in python programming. Yes I know most CS students start with python, but for us it was C so I'm still learning the language now. I've read through the manual for the basic stuff months ago but I never really worked on any exercise. And just last Tuesday (December 20) I thought, how else would you want to practice python programming than by creating a sample of the snake game? Yes I found it funny. I'm pretty shallow that way. But thinking it over, game programming is enjoyable and the snake game is pretty simple. It would be worth a shot. So here's my work.

Note: If you want to try this on your own, please make sure that you have Python 2.7 installed and the corresponding version of PyGame.

THE SPRITES

I put all the sprites in one file named sprites.py. The class names are pretty ridiculous for I was sort of doing things on the fly and I am sorry for that. I even used the name Apple instead of Food so for this game you are going to have an apple-eating snake. To begin, we need to put the following line to import the pygame module:

import pygame

The Apple Class

class Apple(pygame.sprite.Sprite):
    #private constants
    _DEFAULT_COLOR = [255, 0, 0]  #red
    _DEFAULT_SIZE = [10, 10]

    def __init__(self, color, size, position):
        #parameter validation
        if color == None:
            color = Apple._DEFAULT_COLOR
        if size == None:
            size = Apple._DEFAULT_SIZE
        if position == None:
            raise Exception('Invalid position.')

        #initialization
        self.color = color
        self.size = size
        self.image = pygame.Surface(size)
        self.image.fill(color)
        self.rect = self.image.get_rect()
        self.rect.topleft = position

This one's pretty straight-forward. We have here an apple class accepting parameters for the color, size, and position upon creation of an instance. I provided default values for the color and size and a little parameter validation since it's going to be a part of a class library.
Note: All sprites are just composed of square areas in the screen of about 10px X 10px filled by a certain color.

The Callable Class

class Callable:
    def __init__(self, anycallable):
        self.__call__ = anycallable

This class is only used to implement static functions in the sprite classes.

The Snake Class

The Snake class has with it three internal classes namely _SnakeTail, _SnakeHead, and SnakeMove, with the first two as private. I figured, if I would like to use images rather than solid colors in the future, I would most certainly dedicate a different image to the head compared to the body. Hence, I decided to separate the snake's head from its tail. Like what I did in the Apple class, I also provided default values for the attributes.

#private constants
_DEFAULT_COLOR = [0, 255, 0]  #green
_DEFAULT_SIZE = [10, 10]  #10px X 10px
_DEFAULT_POSITION = [30, 30]  #space given to tail of length 2

The _SnakeTail Class

class _SnakeTail(pygame.sprite.Sprite):
    def __init__(self):
        #initialization
        self.tiles = []

    def add_tile(self, color, size, position):
        #creates a new tile
        tile = pygame.Surface(size)
        tile.fill(color)
        rect = tile.get_rect()
        rect.topleft = position
        self.tiles.append({'image':tile, 'rect':rect})

The snake's tail is composed of an array of color-filled square areas that I call tiles for lack of a better term. It also has an add_tile function that will make the tail grow longer. The add_tile function creates another instance of a tile of a given color and size and adds it to the tail's array of tiles. Once the tail is re-rendered on screen, it should appear longer.

The _SnakeHead Class

class _SnakeHead(pygame.sprite.Sprite):
    def __init__(self, color, size, position):
        #initialization
        self.image = pygame.Surface(size)
        self.image.fill(color)
        self.rect = self.image.get_rect()
        self.rect.topleft = position
The SnakeMove Class class SnakeMove(): UP = '1Y' DOWN = '-1Y' RIGHT = '1X' LEFT = '-1X' #checks a direction's validity def SnakeMove_is_member(direction): if direction == Snake.SnakeMove.UP: return True elif direction == Snake.SnakeMove.DOWN: return True elif direction == Snake.SnakeMove.RIGHT: return True elif direction == Snake.SnakeMove.LEFT: return True return False #makes the function 'is_member' a static function SnakeMove_is_member = Callable(SnakeMove_is_member) The creation of this class would have been avoided if I settled on accepting just some set of values for directions in making the snake move but it was against to what I’m used to. Since I wanted sprites.py to function like a standard class library I wanted it to implement a uniform way of using the classes and their functions and sort of inform developers about it. The SnakeMove class functions like an enum class in C#, this way developers won’t have to guess valid direction values once they deal with making the snake move. All they have to do is pass a constant from this class as a parameter to the Snake class’ move function. The class has a SnakeMove_is_member static function which is used for validating the direction parameter in the Snake class’ move function. The static class was created with the help of the previously shown Callable class. The Snake Class Functions Initialization def __init__(self, color, size, position): #parameter validation if color == None: color = Snake._DEFAULT_COLOR if size == None: size = Snake._DEFAULT_SIZE if size[0] != size[1]: raise Exception('Invalid tile size. 
Width and height must be equal.') if position == None: position = Snake._DEFAULT_POSITION self.color = color self.size = size self.head = Snake._SnakeHead(color, size, position) self.tail = Snake._SnakeTail() tailposition = [(position[0] - size[0]), position[1]] self.tail.add_tile(color, size, tailposition) tailposition = [(position[0] - 2*size[0]), position[1]] self.tail.add_tile(color, size, tailposition) The initialization of the snake class simple involves validating the parameters and assigning default values for missing paramaters and creating the instance for the snake’s head and tail. As shown in the code, the initial length of the tail is 2 tiles. All in all, the Snake sprite has a color, size, head, and tail attributes. Movement def move(self, direction, frame_width, frame_height): #parameter validation if Snake.SnakeMove.SnakeMove_is_member(direction) != True: raise Exception('Invalid movement direction.') #initializes new position stepsize = self.head.image.get_rect()[2] #gets the size of the head tile newheadposition = [self.head.rect.topleft[0], self.head.rect.topleft[1]] if direction == Snake.SnakeMove.UP: newheadposition[1] = (newheadposition[1]-stepsize)%frame_height if direction == Snake.SnakeMove.DOWN: newheadposition[1] = (newheadposition[1]+stepsize)%frame_height if direction == Snake.SnakeMove.RIGHT: newheadposition[0] = (newheadposition[0]+stepsize)%frame_width if direction == Snake.SnakeMove.LEFT: newheadposition[0] = (newheadposition[0]-stepsize)%frame_width if self.occupies_position(newheadposition): return False #moves the head to its new position newtileposition = self.head.rect.topleft self.head.rect.topleft = newheadposition #moves the tail tiles to its respective new positions for count in range(len(self.tail.tiles)): prevtileposition = self.tail.tiles[count]['rect'].topleft self.tail.tiles[count]['rect'].topleft = newtileposition newtileposition = prevtileposition return True As mentioned earlier, the move function first checks if 
the supplied direction is valid. If not, it will raise an exception. If it is, it will proceed to determing the next position of each tile starting with the head. stepsize = self.head.image.get_rect()[2] #gets the size of the head tile In this line, we are assuming that the size of the head is equal with the all the other tiles in the snake’s body. if self.occupies_position(newheadposition): return False Here we are making sure that the snake is not trying to move to a space that its body already occupies. If it is, then the snake won’t be able to move and the game will be over. The remaining lines deal with moving each tile to the position of the one before it. If all goes well then the function will return True, signifying that the movement was successful. Collision Detection #checks if this snake's body occupies a given position def occupies_position(self, position): #parameter validation if position[0] == None or position[1] == None: return True if self.head.rect.topleft[0] == position[0] \ and self.head.rect.topleft[1] == position[1]: return True for count in range(len(self.tail.tiles)): if self.tail.tiles[count]['rect'].topleft[0] == position[0] \ and self.tail.tiles[count]['rect'].topleft[1] == position[1]: return True return False PyGame already has a built-in function for collision detection involving objects with rect attribute but I found it quite late. I did this one and it was pretty good enough for me. As the function name implies, this function checks whether the snake is occupying a given position. 
Lengthening the Tail

def lengthen_tail(self, number, current_direction):
    #parameter validation
    if number is None:
        number = 1
    if Snake.SnakeMove.SnakeMove_is_member(current_direction) != True:
        raise Exception('Invalid movement direction.')

    size = self.size[0]
    color = self.color
    for count in range(number):
        lastindex = len(self.tail.tiles) - 1
        X = self.tail.tiles[lastindex]['rect'].topleft[0]
        Y = self.tail.tiles[lastindex]['rect'].topleft[1]

        #determines position of new tile
        if current_direction == Snake.SnakeMove.UP:
            Y = Y - size + (count*size)
        elif current_direction == Snake.SnakeMove.DOWN:
            Y = Y + size + (count*size)
        elif current_direction == Snake.SnakeMove.RIGHT:
            X = X - size + (count*size)
        elif current_direction == Snake.SnakeMove.LEFT:
            X = X + size + (count*size)

        self.tail.add_tile(color, self.size, [X, Y])

Since the _SnakeTail class is supposed to be private, I provided this function on the Snake class. This should be the one that developers using my sprites library use when making the Snake grow longer. I included a number parameter just in case someone wants to make the snake grow longer by more than one tile. Determining the position for the new tile involves checking the snake's current direction to ensure that addition of tiles will be done on the right end and following the right direction of movement.

THE GAME

The main file for this snake game is game.py. It handles the display and game flow.

Initialization

import pygame
import pygame._view
from pygame import *
from sprites import Snake
from sprites import Apple
import random

The lines above show the modules imported by the game file. Notice that only the Apple and Snake classes were imported from sprites.py.

pygame.init()

Initialize the game.

DEFAULT_SCREEN_SIZE = [640, 480]
INITIAL_DIRECTION = Snake.SnakeMove.RIGHT
DEFAULT_UPDATE_SPEED = 100

updatetime = pygame.time.get_ticks() + DEFAULT_UPDATE_SPEED

Some default values and update time initialization.
#screen initialization
screen = pygame.display.set_mode(DEFAULT_SCREEN_SIZE)
display.set_caption('Snake')

#sprite initialization
snake = Snake(None, None, None)
apple = None

Screen and sprites initialization.

is_done = False  #signifies escape from game
is_over = False  #signifies end of game by game rules
direction = None
score = 0

create_apple()

Initialization of variables that will be used in the game flow. Notice that we have is_done and is_over. create_apple was also called to create the first instance of the apple.

Rendering

def render_snake():
    screen.blit(snake.head.image, snake.head.rect)
    for count in range(len(snake.tail.tiles)):
        screen.blit(snake.tail.tiles[count]['image'], snake.tail.tiles[count]['rect'])

This method renders the snake on screen. This simply displays all the tiles forming the image of the snake.

#renders the apple on screen
def render_apple():
    global apple
    screen.blit(apple.image, apple.rect)

Like the render_snake method, this method displays the apple on screen.

#creates a new apple
def create_apple():
    global apple
    global snake
    hlimit = (DEFAULT_SCREEN_SIZE[0]/Apple._DEFAULT_SIZE[0])-1
    vlimit = (DEFAULT_SCREEN_SIZE[1]/Apple._DEFAULT_SIZE[1])-1
    X, Y = None, None
    while snake.occupies_position([X, Y]) == True:
        X = random.randint(0, hlimit)*Apple._DEFAULT_SIZE[0]
        Y = random.randint(0, vlimit)*Apple._DEFAULT_SIZE[1]
    apple = Apple(None, None, [X, Y])

Notice that horizontal and vertical limits were computed first before generating a random position. This ensures that the position generated will be inside the area of the screen. Default values are used here 'though, maybe we can make it configurable in a better version. We also keep on generating random positions if the current position is occupied by the snake.

The Game Flow

while is_done == False:
    screen.fill(0)  #color screen

The flow as expected is implemented using a loop.
screen.fill(0)  #color screen black

First we fill the screen with black. Then we check for changes in direction based on keyboard input. Notice that global direction was used to make things less confusing. After all only one direction is needed. Additional check was also placed to deem direction opposite the current one as invalid.

Updating the display depends on two things: one is if the game is not over yet and the other is if the update time was already reached.

moved = snake.move(direction, DEFAULT_SCREEN_SIZE[0], DEFAULT_SCREEN_SIZE[1])
if moved == False:
    is_over = True

If the game is not yet over and the update time was already reached, an attempt to move the snake is done. In case movement failed, which is probably because the snake hit itself, the game will be over.

if snake.occupies_position(apple.rect.topleft) == True:
    create_apple()
    snake.lengthen_tail(1, direction)
    global score
    score += 1
    display.set_caption('Snake: ' + str(score))

If movement was successful, it is determined if the snake passes through the position of the apple. If it does, then a new apple is created, the snake's tail will be lengthened, the score will be incremented, and the score display will be updated.

render_apple()
render_snake()
pygame.display.update()
updatetime += DEFAULT_UPDATE_SPEED

To cap off the updates, all the elements will be re-rendered and the update time will be set to a future time. On the other hand, if the game is already over, the updates will simply stop and the display will flash GAME OVER together with the score.

Wow this post is pretty lengthy but I hope someone might find it useful at least for comparisons haha. Feel free to comment 'though I'd be moderating them. Have a nice day everyone!

8 thoughts on "Sample Snake Game in Python 2.7"

Superb 🙂 m a fan!

Oh thanks! I've sort of stopped developing games in Python though.

thank you! how can i change the color of the apples and the snake? i want a snake that changes to the color of the apple it just ate.
it has been a very long time; i cannot perfectly recall everything. but you can do color randomization for the apple, then fill the snake head and snake tail's image with that color once the snake gets to eat the apple. the rendering function should render the snake with the new color. thanks for asking. it's nice to get a message from readers once in a while.

Thank you so much! I will try that when I get to know a little bit more about programming. I am starting a Python game programming course on coursera.org that you may want to recommend to your readers/followers, because (as with your site), it's fun and free! 😀

SIR, can you email me on tanaynagarsheth.tn@gmail.com because this one has many and shows many errors

Hi, I'm not entirely sure what you mean, but maybe it's okay to post your questions here. That way even the other visitors can leave you a reply.
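For the color question in the comments above, the randomization step itself is tiny. This is a pure-Python sketch; actually applying the color to the tiles would use pygame's surface.fill, which is assumed here and not shown:

```python
import random

def random_rgb():
    # A random RGB triple; once the snake eats the apple, fill the
    # head and tail tile images with this color (e.g. surface.fill(color))
    # and let the existing render functions draw them as usual.
    return tuple(random.randint(0, 255) for _ in range(3))

print(random_rgb())  # e.g. (12, 200, 87)
```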
https://markbadiola.com/2011/12/22/sample-snake-game-in-python-2-7/
CC-MAIN-2022-40
en
refinedweb
ArangoDB v3.10 is under development and not released yet. This documentation is not final and potentially incomplete.

ArangoDB Starter Architecture

What does the Starter do

The ArangoDB Starter is a program used to create ArangoDB database deployments on bare metal (or virtual machines) with ease. It enables you to create everything from a simple Single server instance to a full-blown Cluster with datacenter-to-datacenter replication in under 5 minutes. The Starter is intended to be used in environments where no higher-level orchestration system (e.g. Kubernetes) is available.

Starter versions

The Starter is a separate process in a binary called arangodb (or arangodb.exe on Windows). This binary has its own version number that is independent of the ArangoDB (database) version. This means that Starter version a.b.c can be used to run deployments of ArangoDB databases with different versions. For example, the Starter with version 0.11.2 can be used to create ArangoDB deployments with ArangoDB version 3.2.<something> as well as deployments with ArangoDB version 3.3.<something>. It also means that you can update the Starter independently from the ArangoDB database.

Note that the Starter is also included in all binary ArangoDB packages.

To find the versions of your Starter & ArangoDB database, run the following commands:

# To get the Starter version
arangodb --version

# To get the ArangoDB database version
arangod --version

Starter deployment modes

The Starter supports 3 different modes of ArangoDB deployments:

- Single server
- Active failover
- Cluster

Note: Datacenter replication is an option for the cluster deployment mode.

You select one of these modes using the --starter.mode command line option. Depending on the mode you've selected, the Starter launches one or more (arangod / arangosync) server processes.

No matter which mode you select, the Starter always provides you a common directory structure for storing the servers' data, configuration & log files.
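As a rough illustration of the mode-to-process mapping, per Starter and per machine: the cluster row follows the description given later in this document, while the activefailover row is an assumption based on the Active Failover deployment model, not a statement from this page:

```python
# Server processes one Starter launches on its machine, by mode.
PROCESSES_PER_STARTER = {
    "single": ["single server"],
    "activefailover": ["agent", "single server"],  # assumption; see Active Failover docs
    "cluster": ["agent", "dbserver", "coordinator"],
}

for mode, procs in PROCESSES_PER_STARTER.items():
    print(mode, "->", ", ".join(procs))
```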
Starter operating modes

The Starter can run as normal processes directly on the host operating system, or as containers in a docker runtime.

When running as normal processes directly on the host operating system, the Starter launches the servers as child processes and monitors those. If one of the server processes terminates, a new one is started automatically.

When running in a docker container, the Starter launches the servers as separate docker containers that share the volume namespace with the container that runs the Starter. It monitors those containers, and if one terminates, a new container is launched automatically.

Starter data-directory

The Starter uses a single directory with a well-known structure to store all data for its own configuration & logs, as well as the configuration, data & logs of all servers it starts. This data directory is set using the --starter.data-dir command line option. It contains the following files & sub-directories:

- setup.json: The configuration of the "cluster of Starters". For details see below. DO NOT edit this file.
- arangodb.log: The log file of the Starter
- single<port>, agent<port>, coordinator<port>, dbserver<port>: directories for launched servers. These directories contain, among others, the following files:
  - apps: A directory with Foxx applications
  - data: A directory with database data
  - arangod.conf: The configuration file for the server. Editing this file is possible, but not recommended.
  - arangod.log: The log file of the server
  - arangod_command.txt: File containing the exact command line of the started server (for debugging purposes only)

Starter configuration file

The Starter can be configured using a configuration file. The format of the configuration file is the same as the arangod configuration file format. For more details, refer to the configuration file format and how to use configuration files.

The default configuration file of the Starter is arangodb-starter.conf. It can be changed using the --configuration option.
For more information about other configuration options, see ArangoDB Starter options.

The Starter has a different set of supported command line options than the arangod binary. Using the arangod configuration file as input for the arangodb binary is not supported.

Passing through arangod options

The configuration file also supports setting pass-through options. Options with same prefixes can be split into sections.

# passthrough-example.conf
args.all.log.level = startup=trace
args.all.log.level = warning

[starter]
mode = single

[args]
all.log.level = queries=debug
all.default-language = de_DE

[args.all.rocksdb]
enable-statistics = true

./arangodb --configuration=passthrough-example.conf

Configuration precedence

When adding a command line option next to a modified configuration file, the last occurrence of the option becomes the final value. Running the Starter with the configuration example above and adding the default-language=es_419 command line option

./arangodb --args.all.default-language=es_419 --configuration=passthrough-example.conf

results in having the default-language set to es_419 and not the value from the configuration file.

Running on multiple machines

For the activefailover & cluster modes, it is required to run multiple Starters, as every Starter will only launch a subset of all servers needed to form the entire deployment. For example, in cluster mode a Starter will launch a single Agent, a single DB-Server and a single Coordinator. It is the responsibility of the user to run the Starter on multiple machines such that enough servers are started to form the entire deployment. The minimum number of Starters needed is 3.

The Starters running on those machines need to know about each other's existence. In order to do so, the Starters form a "cluster" of their own (not to be confused with the ArangoDB database cluster). This cluster of Starters is formed from the values given to the --starter.join command line option.
You should pass the addresses (<host>:<port>) of all Starters. For example, a typical command line for a cluster deployment looks like this:

arangodb --starter.mode=cluster --starter.join=hostA:8528,hostB:8528,hostC:8528
# this command is run on hostA, hostB and hostC.

The state of the cluster (of Starters) is stored in a configuration file called setup.json in the data directory of every Starter, and the ArangoDB Agency is used to elect a master among all Starters. The master Starter is responsible for maintaining the list of all Starters involved in the cluster and their addresses. The slave Starters (all Starters except the elected master) fetch this list from the master Starter on a regular basis and store it in their own setup.json config file.

Note: The setup.json config file MUST NOT be edited manually.

Running on multiple machines (under the hood)

As mentioned above, when the Starter is used to create an activefailover or cluster deployment, it first creates a "cluster" of Starters. These are the steps taken by the Starters to bootstrap such a deployment from scratch.

- All Starters are started (either manually or by some supervisor).
- All Starters try to read their config from setup.json. If that file exists and is valid, this bootstrap-from-scratch process is aborted and all Starters go directly to the running phase described below.
- All Starters create a unique ID.
- The list of --starter.join arguments is sorted.
- All Starters request the unique ID from the first server in the sorted --starter.join list and compare the result with their own unique ID.
- The Starter that finds its own unique ID continues as bootstrap master; the other Starters continue as bootstrap slaves.
- The bootstrap master waits for at least 2 bootstrap slaves to join it.
- The bootstrap slaves contact the bootstrap master to join its cluster of Starters.
- Once the bootstrap master has received enough (at least 2) requests to join its cluster of Starters, it continues with the running phase.
- The bootstrap slaves keep asking the bootstrap master about its state. As soon as they receive confirmation to do so, they also continue with the running phase.

In the running phase, all Starters launch the desired servers and keep monitoring those servers. Once a functional Agency is detected, all Starters will try to become the running master by trying to write their ID in a well-known location in the Agency. The first Starter to succeed in doing so wins this master election. The running master will keep writing its ID in the Agency in order to remain the running master. Since this ID is written with a short time-to-live, other Starters are able to detect when the current running master has been stopped or is no longer responsive. In that case the remaining Starters will perform another master election to decide who will be the next running master.

API requests that involve the state of the cluster of Starters are always answered by the current running master. All other Starters will refer such requests to the current running master.
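The bootstrap-master decision in the steps above (sort the join list, ask the first Starter for its unique ID, compare with your own) can be sketched as follows. This is an illustration only; elect_bootstrap_master and fetch_id are hypothetical names, not the Starter's actual code:

```python
def elect_bootstrap_master(my_id, join_list, fetch_id):
    # fetch_id(addr) asks the Starter at addr for its unique ID;
    # the Starter whose own ID comes back acts as bootstrap master.
    first = sorted(join_list)[0]
    return fetch_id(first) == my_id

# Stub transport for illustration: each address maps to a known ID.
ids = {"hostA:8528": "id-a", "hostB:8528": "id-b", "hostC:8528": "id-c"}
print(elect_bootstrap_master("id-a", list(ids), ids.get))  # True: hostA sorts first
print(elect_bootstrap_master("id-b", list(ids), ids.get))  # False: acts as slave
```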
https://www.arangodb.com/docs/devel/programs-starter-architecture.html
The only required step to verify Authenticode signatures on non-Windows systems is to install our "Microsoft Authenticode" package from Cerbero Store.

Cerbero Suite has been using its own implementation of Microsoft Authenticode for performance reasons since the very beginning, back in 2012. However, thanks to the recently introduced Cerbero Store, we can now offer this feature on systems other than Windows.

We have also exposed Authenticode validation to our Python SDK:

from Pro.PE import *
print(PE_VerifyAuthenticode(obj))

Alternatively, scan hooking extensions can check the generated report for the validation scan entries.
https://cerbero-blog.com/?p=2378
QWGLNativeContext Class

A class encapsulating a WGL context on Windows with desktop OpenGL (opengl32.dll). This class was introduced in Qt 5.4.

Detailed Description

Note: There is no binary compatibility guarantee for this class, meaning that an application using it is only guaranteed to work with the Qt version it was developed against.

QWGLNativeContext is a value class that can be passed to QOpenGLContext::setNativeHandle(). When creating a QOpenGLContext with the native handle set, no new context will get created. Instead, the provided handles are used, without taking ownership. This allows wrapping a context created by an external framework or rendering engine.

The typical usage will be similar to the following snippet:

#include <QtPlatformSupport/QWGLNativeContext>

...create and retrieve the WGL context and the corresponding window...
QOpenGLContext *context = new QOpenGLContext;
QWGLNativeContext nativeContext(hglrc, hwnd);
context->setNativeHandle(QVariant::fromValue(nativeContext));
context->create();
...

The window is needed because its pixel format will be queried. When the adoption is successful, QOpenGLContext::format() will return a QSurfaceFormat describing this pixel format.

It is recommended to restrict the usage of QOpenGLContexts created this way. Various platform-specific behavior and issues may prevent such contexts from being made current with windows (surfaces) created by Qt, due to non-matching pixel formats for example. A potentially safer solution is to use the wrapped context only to set up sharing and perform Qt-based rendering offscreen, using a separate, dedicated QOpenGLContext. The resulting textures are then accessible in the foreign context too.

...like above...
QOpenGLContext *qtcontext = new QOpenGLContext;
qtcontext->setShareContext(context);
qtcontext->setFormat(context->format());
qtcontext->create();
...use qtcontext for rendering with Qt...
In addition to being used with QOpenGLContext::setNativeHandle(), this class is also used to retrieve the native context handle, that is, an HGLRC value, from a QOpenGLContext. Calling QOpenGLContext::nativeHandle() returns a QVariant which, on Windows with opengl32.dll at least, will contain a QWGLNativeContext:

QVariant nativeHandle = context->nativeHandle();
if (!nativeHandle.isNull() && nativeHandle.canConvert<QWGLNativeContext>()) {
    QWGLNativeContext nativeContext = nativeHandle.value<QWGLNativeContext>();
    HGLRC hglrc = nativeContext.context();
    ...
}

See also QOpenGLContext::setNativeHandle() and QOpenGLContext::nativeHandle().

Member Function Documentation

QWGLNativeContext::QWGLNativeContext(HGLRC ctx, HWND wnd)

Constructs a new instance with the provided ctx context handle and wnd window handle.

Note: The window specified by wnd must have its pixel format set to a format compatible with the context's. If no SetPixelFormat() call was made on any device context belonging to the window, adopting the context will fail.

QWGLNativeContext::QWGLNativeContext()

Constructs a new instance with no handles.

HGLRC QWGLNativeContext::context() const

Returns the WGL context.

HWND QWGLNativeContext::window() const

Note: The window handle is not available when the QWGLNativeContext is queried from a regular, non-adopted QOpenGLContext using QOpenGLContext::nativeHandle(). This is because the windows platform plugin creates WGL contexts using a dummy window that is not available afterwards. Instead, the native window handle (HWND) is queryable from a QWindow via QPlatformNativeInterface::nativeResourceForWindow() using the "handle" resource key. Note however that the window will not have its pixel format set until it is first associated with a context via QOpenGLContext::makeCurrent().

Returns the handle for the window for which the context was created.
https://doc-snapshots.qt.io/qt5-5.15/qwglnativecontext.html
I keep getting different attribute errors when trying to run this file in ipython… beginner with pandas so maybe I'm missing something.

Code:

from pandas import Series, DataFrame
import pandas as pd
import json

nan = float('NaN')
data = []
with open('file.json') as f:
    for line in f:
        data.append(json.loads(line))

df = DataFrame(data, columns=['accepted', 'user', 'object', 'response'])
clean = df.replace('NULL', nan)
clean = clean.dropna()
print clean.value_counts()

AttributeError: 'DataFrame' object has no attribute 'value_counts'

Any ideas?

value_counts is a Series method rather than a DataFrame method (and you are trying to use it on a DataFrame, clean). You need to perform this on a specific column:

clean[column_name].value_counts()

It doesn't usually make sense to perform value_counts on a DataFrame, though I suppose you could apply it to every entry by flattening the underlying values array:

pd.value_counts(df.values.flatten())

To get all the counts for all the columns in a dataframe, it's just df.count()

value_counts() is now a DataFrame method since pandas 1.1.0

value_counts works only for a Series. It won't work for an entire DataFrame. Try selecting only one column and using this attribute. For example:

df['accepted'].value_counts()

It also won't work if you have duplicate columns. This is because when you select a particular column, it will also represent the duplicate column and will return a DataFrame instead of a Series. At that time, remove the duplicate column by using:

df = df.loc[:, ~df.columns.duplicated()]
df['accepted'].value_counts()
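As a side note, what value_counts computes per column is just a frequency table. A plain-Python equivalent with collections.Counter (no pandas required) makes the Series-vs-DataFrame distinction concrete — you count one column at a time; column_value_counts is an illustrative name, not a pandas API:

```python
from collections import Counter

rows = [
    {"accepted": "yes", "user": "a"},
    {"accepted": "no",  "user": "a"},
    {"accepted": "yes", "user": "b"},
]

def column_value_counts(rows, column):
    # Frequency of each value in a single column, most common first,
    # mirroring what Series.value_counts() returns for that column.
    return Counter(row[column] for row in rows).most_common()

print(column_value_counts(rows, "accepted"))  # [('yes', 2), ('no', 1)]
```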
https://techstalking.com/programming/question/solved-attributeerror-dataframe-object-has-no-attribute/
Program to count number of homogenous substrings in Python

Suppose we have a string s; we have to find the number of homogenous substrings of s. The answer may be very large, so return the answer modulo 10^9+7. A string is said to be homogenous when all the characters of the string are the same.

So, if the input is like s = "xyyzzzxx", then the output will be 13 because the homogenous substrings are listed like:

1. "x" appears thrice.
2. "xx" appears once.
3. "y" appears twice.
4. "yy" appears once.
5. "z" appears thrice.
6. "zz" appears twice.
7. "zzz" appears once.

So, (3 + 1 + 2 + 1 + 3 + 2 + 1) = 13.

To solve this, we will follow these steps −

- s := s concatenate "@"
- h := a new map
- prev := s[0]
- c := 1
- for each i in s from index 1 to end, do
  - if prev is not same as i, then
    - if prev*c is present in h, then
      - h[prev*c] := h[prev*c] + 1
    - otherwise, h[prev*c] := 1
    - c := 1
  - if prev is same as i, then
    - c := c + 1
  - prev := i
- fin := 0
- for each i in h, do
  - t := size of i
  - k := 0
  - while t is not same as 0, do
    - k := k + t
    - t := t - 1
  - fin := fin + k*h[i]
- return fin mod 10^9+7

Example

Let us see the following implementation to get better understanding −

def solve(s):
    s += "@"
    h = {}
    prev = s[0]
    c = 1
    for i in s[1:]:
        if prev != i:
            if prev*c in h:
                h[prev*c] += 1
            else:
                h[prev*c] = 1
            c = 1
        if prev == i:
            c += 1
        prev = i
    fin = 0
    for i in h:
        t = len(i)
        k = 0
        while t != 0:
            k += t
            t -= 1
        fin += k*h[i]
    return fin % 1000000007

s = "xyyzzzxx"
print(solve(s))

Input

"xyyzzzxx"

Output

13
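A shorter, equivalent take on the same problem uses the run-length observation directly: each maximal run of length k contributes k·(k+1)/2 homogenous substrings. This is an alternative sketch, not the tutorial's code:

```python
from itertools import groupby

def count_homogenous(s, mod=10**9 + 7):
    total = 0
    for _, run in groupby(s):
        k = sum(1 for _ in run)          # length of this maximal run
        total += k * (k + 1) // 2        # k + (k-1) + ... + 1 substrings
    return total % mod

print(count_homogenous("xyyzzzxx"))  # 13, matching solve() above
```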
https://www.tutorialspoint.com/program-to-count-number-of-homogenous-substrings-in-python
Avro RPC

Since Camel 2.10

Both producer and consumer are supported

This component provides support for Apache Avro's RPC, by providing producer and consumer endpoints for using Avro over netty or http. Before Camel 3.2 this functionality was a part of the camel-avro component.

Maven users will need to add the following dependency to their pom.xml for this component:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-avro-rpc</artifactId>
    <!-- use the same version as your Camel core version -->
    <version>x.x.x</version>
</dependency>

Apache Avro Overview

Avro allows you to define message types and a protocol using a JSON-like format and then generate Java code for the specified types and messages. An example of how a schema looks is below.

{"namespace": "org.apache.camel.avro.generated",
 "protocol": "KeyValueProtocol",

 "types": [
     {"name": "Key", "type": "record",
      "fields": [
          {"name": "key", "type": "string"}
      ]
     },
     {"name": "Value", "type": "record",
      "fields": [
          {"name": "value", "type": "string"}
      ]
     }
 ],

 "messages": {
     "put": {
         "request": [{"name": "key", "type": "Key"}, {"name": "value", "type": "Value"}],
         "response": "null"
     },
     "get": {
         "request": [{"name": "key", "type": "Key"}],
         "response": "Value"
     }
 }
}

Existing Java classes and interfaces can also be used to define the protocol, for example:

package org.apache.camel.avro.reflection;

public interface KeyValueProtocol {
    void put(String key, Value value);
    Value get(String key);
}

class Value {
    private String value;
    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
}

Note: Existing classes can be used only for RPC (see below), not in data format.

Using Avro RPC in Camel

Examples

An example of using camel avro producers via http:

<route>
    <from uri="direct:start"/>
    <to uri="avro:http:localhost:{{avroport}}?protocolClassName=org.apache.camel.avro.generated.KeyValueProtocol"/>
    <to uri="log:avro"/>
</route>

In the example above you need to fill the CamelAvroMessageName header.
Since 2.12 you can use the following syntax to call constant messages:

<route>
    <from uri="direct:start"/>
    <to uri="avro:http:localhost:{{avroport}}/put?protocolClassName=org.apache.camel.avro.generated.KeyValueProtocol"/>
    <to uri="log:avro"/>
</route>

An example of consuming messages using camel avro consumers via netty:

<route>
    <from uri="avro:netty:localhost:{{avroport}}?protocolClassName=org.apache.camel.avro.generated.KeyValueProtocol"/>
    <choice>
        <when>
            <el>${in.headers.CamelAvroMessageName == 'put'}</el>
            <process ref="putProcessor"/>
        </when>
        <when>
            <el>${in.headers.CamelAvroMessageName == 'get'}</el>
            <process ref="getProcessor"/>
        </when>
    </choice>
</route>

Since 2.12 you can set up two distinct routes to perform the same task:

<route>
    <from uri="avro:netty:localhost:{{avroport}}/put?protocolClassName=org.apache.camel.avro.generated.KeyValueProtocol"/>
    <process ref="putProcessor"/>
</route>
<route>
    <from uri="avro:netty:localhost:{{avroport}}/get?protocolClassName=org.apache.camel.avro.generated.KeyValueProtocol&singleParameter=true"/>
    <process ref="getProcessor"/>
</route>

In the example above, get takes only one parameter, so singleParameter is used and getProcessor will receive the Value class directly in the body, while putProcessor will receive an array of size 2 with the String key and Value value filled as array contents.

Avro via HTTP SPI

The Avro RPC component offers the org.apache.camel.component.avro.spi.AvroRpcHttpServerFactory service provider interface (SPI) so that various platforms can provide their own implementation based on their native HTTP server. The default implementation available in org.apache.camel:camel-avro-jetty is based on org.apache.avro:avro-ipc-jetty.

Spring Boot Auto-Configuration

When using avro.
https://camel.apache.org/components/3.12.x/avro-component.html