Dataset columns: text (string, 454 to 608k chars), url (string, 17 to 896 chars), dump (string, 91 classes), source (string, 1 class), word_count (int64, 101 to 114k), flesch_reading_ease (float64, 50 to 104).
```python
import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 0
    while page <= max_pages:
        url = '' + str(page * 100)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text)
        for link in soup.findAll('a', {'class': 'hdrlnk'}):
            href = '' + link.get('href')
            title = link.string
            print title
            #print href
            get_single_item_data(href)
        page += 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text)
    for item_name in soup.findAll('section', {'id': 'postingbody'}):
        print item_name.string

trade_spider(1)
```

I am trying to crawl Craigslist (for practice) with the code above. I have it set up to print the title and the description of each entry. The issue is that although the title prints correctly for every object listed, the description is printed as "None" for most of them, even though there is clearly a description. Any help would be appreciated. Thanks.
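A likely cause (my note, not confirmed in the thread): `tag.string` returns None whenever a tag has more than one child node, which Craigslist posting bodies usually do, while `get_text()` concatenates all text descendants. A minimal sketch with an invented HTML snippet to illustrate the difference:

```python
from bs4 import BeautifulSoup

# A posting body with more than one child node, as posting pages typically have.
html = '<section id="postingbody">Nice bike. <b>Call me!</b></section>'
soup = BeautifulSoup(html, 'html.parser')
body = soup.find('section', {'id': 'postingbody'})

# .string is None because the tag contains multiple children.
print(body.string)      # None
# .get_text() gathers all text descendants instead.
print(body.get_text())  # Nice bike. Call me!
```

Swapping `item_name.string` for `item_name.get_text()` in the spider would follow the same pattern.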
http://www.howtobuildsoftware.com/index.php/how-do/buu/python-html-web-scraping-beautifulsoup-html-parsing-beautifulsoup-is-not-getting-all-data-only-some
CC-MAIN-2019-09
refinedweb
142
53.68
With Arnold 5.2.0.0, which ships with Maya 2017, the following code works with Python 2.7 but fails with Python 3.6:

```python
import os
import sys

sys.path.insert(0, 'C:/solidangle/mtoadeploy/2017/scripts')
os.environ['PATH'] += ';C:/solidangle/mtoadeploy/2017/bin'

import arnold
print(arnold.AiGetVersion())
```

Here's the output with Python 2.7:

    ['5', '2', '0', '0']

and the error with Python 3.6:

    Traceback (most recent call last):
      File "D:\temp\test.py", line 7, in <module>
        import arnold
      File "C:/solidangle/mtoadeploy/2017/scripts\arnold\__init__.py", line 6, in <module>
        from .ai_color_managers import *
      File "C:/solidangle/mtoadeploy/2017/scripts\arnold\ai_color_managers.py", line 7, in <module>
        from .ai_nodes import *
      File "C:/solidangle/mtoadeploy/2017/scripts\arnold\ai_nodes.py", line 9, in <module>
        from .ai_universe import AtUniverse
      File "C:/solidangle/mtoadeploy/2017/scripts\arnold\ai_universe.py", line 7, in <module>
        import ai_nodes
    ModuleNotFoundError: No module named 'ai_nodes'

This is caused by the `import ai_nodes` statement. Interestingly, Arnold 5.1.1.1 does not have that statement, and consequently the sample code works fine with Python 3.6. Sorry for the formatting, but the text editor is extremely buggy...

Answer by Stephen Blair · Aug 28, 2018 at 09:35 PM
In ai_universe.py, change line 7 to this: `from .ai_nodes import *`

Answer by François Beaune · Aug 29, 2018 at 12:37 PM
@Stephen Blair: That's not enough, as the code references the ai_nodes package name later on, for instance `return NullToNone(func(universe), POINTER(ai_nodes.AtNode))` in _AiUniverseGetOptions(). Removing all `ai_nodes.` prefixes appears to solve the problem.

Answer by Tyler Furreboe · Aug 29, 2018 at 01:09 PM
@François Beaune I solved it by changing line 7 in ai_universe.py to this (you'll have to ensure arnold is a part of your PATH environment variable): `from arnold import ai_nodes`
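The behaviour behind the error can be reproduced without Arnold: inside a package, a bare `import sibling` is an implicit relative import in Python 2 but an absolute import in Python 3 (PEP 328), so it fails unless the sibling is itself on sys.path. The package and module names below are invented for the demonstration:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package with two modules that import a sibling differently.
pkg_root = tempfile.mkdtemp()
pkg = os.path.join(pkg_root, 'demo_pkg')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'ai_nodes_demo.py'), 'w') as f:
    f.write('VALUE = 42\n')
# Python-2-style implicit relative import: breaks on Python 3.
with open(os.path.join(pkg, 'bad.py'), 'w') as f:
    f.write('import ai_nodes_demo\n')
# Explicit relative import: works on both 2.7 and 3.x.
with open(os.path.join(pkg, 'good.py'), 'w') as f:
    f.write('from . import ai_nodes_demo\n')

sys.path.insert(0, pkg_root)
importlib.invalidate_caches()

try:
    importlib.import_module('demo_pkg.bad')
    failed = False
except ModuleNotFoundError:
    failed = True
print(failed)  # True

good = importlib.import_module('demo_pkg.good')
print(good.ai_nodes_demo.VALUE)  # 42
```

This is why both suggested fixes work: they turn the bare import into one Python 3 can resolve.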
https://answers.arnoldrenderer.com/questions/7798/cant-import-arnold-module-with-python-3.html
CC-MAIN-2019-04
refinedweb
308
54.39
The Samba-Bugzilla – Bug 10002: make the logging header customizable
Last modified: 2014-05-08 23:11:11 UTC

Hi, we have a use case where we trap winbindd logs from stdout. The trapped debug messages would be more useful for debugging and understanding if we prefixed the PID, timestamp etc. to the stdout log messages. In that respect, can we consider what Matthieu proposed in the following mail thread?

Created attachment 9034 [details] Proposed patch. Version of the patch for 4.0.

Comment on attachment 9034 [details] Proposed patch: Jeremy, you promised to do the review. Can you do it please!

Unfortunately this patch has a bunch of bugs and off-by-one errors in length calculations which could overrun the string or leave it non-null-terminated :-(. It's easier to fix myself, so that's what I'm doing. New patch will be uploaded shortly. Jeremy.

This will be done by tomorrow (more changes than expected were needed). It is essentially a rewrite of the patch, so it will need re-review by Matthieu to make sure it does what he wants before I'd be happy to push. Also, we need an update of the smb.conf man page to go with this change. Jeremy.

This line in the patch:

    + settings.debug_header_template = talloc_strdup(talloc_tos(),
    +                                                lp_debug_header_template());

will leave the global settings debug_header_template pointer with a pointer that will go out of scope! This needs more fixing...

(In reply to comment #5)
> This line in the patch [...] will leave the global settings
> debug_header_template pointer with a pointer that will go out of scope!
> This needs more fixing...

Interesting. Should we do something like for set_logfile, where we actually talloc_dup the string with the NULL context? I.e. something like:

    debug_set_logfile(lp_logfile(talloc_tos()));

Or attach a talloc context to the settings structure, and every time we reopen the logs we free the previous talloc context?

(In reply to comment #3)
> Unfortunately this patch has a bunch of bugs and off-by-one errors in length
> calculations which could overrun the string or leave it non-null terminated
> :-(.

OK, can you point them out to me so that I don't do it next time? I tried to be careful with the overrun by using snprintf. I also tried with a very large template (> 200 chars) and did see it being truncated, and valgrind didn't complain. Of course that doesn't mean it's failproof, but still, I'm willing to understand where the mistakes are.

> It's easier to fix myself, so that's what I'm doing..
> New patch will be uploaded shortly.

OK, great, thanks for the rewrite.

Unfortunately snprintf doesn't null-terminate (I know, that's why you used the len - 1 idiom). :-(.

(In reply to comment #8)
> Unfortunately snprintf doesn't null terminate (I know, that's why you used
> the len - 1 idiom).

Really? The man pages say the opposite, and this:

```c
#include <stdio.h>

int main()
{
    char buf[20];
    char txt[] = "A very long string more than 20 chars actually, really I mean it";
    snprintf(buf, sizeof(buf), "len = %d %s", sizeof(txt) - 1, txt);
    fprintf(stderr, "buf = %s\n", buf);
    return 0;
}
```

clearly shows the opposite.

> :-(.

Oh right! I don't think it guarantees null termination. At least it's really unclear to me. It's also unclear to a lot of other people :-). Anyway, nearly finished with the rewrite.

Actually, this: says it is. So I can remove some of my code changes :-).

My reference, SUSv4, says: "The snprintf() function shall be equivalent to sprintf(), with the addition of the n argument which states the size of the buffer referred to by s. If n is zero, nothing shall be written and s may be a null pointer. Otherwise, output bytes beyond the n-1st shall be discarded instead of being written to the array, and a null byte is written at the end of the bytes actually written into the array."

That is pretty clear language to me.

Created attachment 9044 [details] The patch I think we need.
Compiles but not yet tested. I'll work on that now. Thought you might want to look at the initial rewrite. Jeremy.

Created attachment 9045 [details] Tested fix :-)
OK, this one works :-). Mat, please take a look and let me know what you think. We'll need a docs fix also before this can be merged. Jeremy.

Looks good, but let me recheck it tomorrow.

Ping? I'd like to get this finished if it's going to go in :-). Jeremy.

Glou glou glou, I'm drowning.

Ping. If you +1 this I'll push it :-). Jeremy.

Karolin, can you pick this fix for the next 4.0.x release and also for 4.1.0? Also, I'll add a backport to 3.x.

Jeremy, was it pushed upstream? I didn't manage to find it. This patch is really helpful in multiple-winbindd-child cases. Can we have the patch pushed to 4.x?

I think I missed your +1 on the patch, so it didn't get pushed upstream. Let me do that first and then we'll get Karolin to add it to 4.1.x and/or 4.0.x. Jeremy.

Argh. The patch has now bit-rotted and doesn't apply any more to master :-(. Matthieu, do you want to fix it up for current master? If you want me to do it, it's going to take a while... Jeremy.
https://bugzilla.samba.org/show_bug.cgi?id=10002
CC-MAIN-2017-09
refinedweb
916
75
MQTT FTW

This blog post isn't really a proper post, more a way for me to remember how to use MQTT for future projects. Feel free to refer to this and of course comment :)

What is MQTT? MQTT: Message Queuing Telemetry Transport, a lightweight machine-to-machine protocol for IoT devices.

How can I install it on Linux / Raspberry Pi? In a terminal:

    sudo apt update
    sudo apt install mosquitto mosquitto-clients

To test, run the mosquitto background service broker:

    sudo service mosquitto start

On one computer, which will react to a message (this was my Pocket Chip SBC), subscribe to a topic:

    -d = debug mode, handy for finding issues
    -t = topic, the subject that we are listening to, almost like a channel
    -h = IP address of the broker

    mosquitto_sub -d -t hello/world -h IP_ADDRESS

Publish a message on the set topic from another computer (this was my laptop):

    -d = debug mode, handy for finding issues
    -t = topic, the subject that we are sending a message to, almost like a channel
    -m = message, the message that we wish to send
    -h = IP address of the broker

    mosquitto_pub -d -t hello/world -m "Is this thing on?" -h IP_ADDRESS

So what happens? The laptop publishing a message on the hello/world topic sends its message, which is picked up by every device that is subscribing / listening to that topic.

To install MQTT for Python 3:

    sudo pip3 install paho-mqtt

An example Python script that will listen to, and can post to, a topic:

```python
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    client.subscribe("hello/world")

def on_message(client, userdata, msg):
    # msg.payload is bytes; decode it rather than slicing its repr
    message = msg.payload.decode()
    print(message)
    if message == "test":
        print("I'm working!")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

# connects to the broker at a certain IP address
client.connect("192.168.1.191", 1883, 60)
client.publish("hello/world", "test")
client.loop_forever()
```
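A note on the payload handling in the original script (my addition, not from the post): calling `str()` on a bytes payload produces the `b'...'` repr, so slicing off the first two characters still leaves a trailing quote, and the `== "test"` comparison never matches. Decoding the bytes avoids this:

```python
payload = b"test"  # what paho-mqtt delivers in msg.payload

# The repr-slicing approach: str(b"test") is the 7-character string "b'test'".
bad = str(payload)[2:len(str(payload))]
print(bad)            # test'  (trailing quote left over)
print(bad == "test")  # False

# Decoding the bytes instead gives the actual message text.
good = payload.decode()
print(good == "test")  # True
```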
https://bigl.es/mqtt-ftw/
CC-MAIN-2018-47
refinedweb
332
61.26
Important: Please read the Qt Code of Conduct

Is it possible to apply gradient on texts?

Hi, I was testing what is possible with Qt Design Studio and QML, and was wondering whether or not it is possible to apply gradients to text with QML. This is what I was trying after searching a bit for gradients in QML, but probably there is something stupid:

```qml
Text {
    id: element
    text: qsTr("Text")
    font.pixelSize: 12
}

LinearGradient {
    id: mask
    anchors.fill: element
    start: Qt.point(0, 0)
    end: Qt.point(element.width, 0)
    gradient: Gradient {
        GradientStop { id: gradientStop; position: 0; color: "#ffffff" }
        GradientStop { id: gradientStop2; position: 0.914; color: "#ffffff" }
        GradientStop { id: gradientStop1; position: 1; color: "#000000" }
    }
}

OpacityMask {
    anchors.fill: element
    source: element
    maskSource: mask
}
```

The error I get is on LinearGradient, and Qt Design Studio tells me: "Component with path ...../LinearGradient.qml can not be created". has an example that might help.

- Shrinidhi Upadhyaya last edited by

Hi @Garu94, yes, you can definitely add a gradient to text. Here is a sample code:

```qml
Text {
    id: dummyText
    text: "Gradient Text"
    font.pointSize: 40
    anchors.centerIn: parent
    visible: false
}

LinearGradient {
    anchors.fill: dummyText
    source: dummyText
    gradient: Gradient {
        GradientStop { position: 0; color: "green" }
        GradientStop { position: 0.4; color: "yellow" }
        GradientStop { position: 0.6; color: "orange" }
        GradientStop { position: 1; color: "cyan" }
    }
}
```

Sample Output:-

Hi @Shrinidhi-Upadhyaya, thank you for answering me. This is the whole code of my .qml, but I still get the same error/warning on the LinearGradient item in the navigator: "Component with path ...../LinearGradient.qml can not be created".

```qml
import QtQuick 2.12
import TestQTstudio 1.0
import QtGraphicalEffects 1.12

Rectangle {
    width: Constants.width
    height: Constants.height
    color: Constants.backgroundColor

    Text {
        id: dummyText
        text: "Dio Porco"
        font.pixelSize: 40
        anchors.centerIn: parent
        visible: false
    }

    LinearGradient {
        anchors.fill: dummyText
        source: dummyText
        gradient: Gradient {
            GradientStop { position: 0; color: "green" }
            GradientStop { position: 0.914; color: "yellow" }
            GradientStop { position: 1; color: "cyan" }
        }
    }
}
```

- Pradeep P N last edited by Pradeep P N

Hi @Garu94, the code looks fine to me. I suspect your Qt installation was not done properly, or perhaps you have added this code as a component named LinearGradient.qml; if so, please change the file name to something relevant and use the proper path to import the components so they can be accessed from other QML files. Can you please show us your code structure and file names? All the best.

Hi, these are the Navigator and project structures:
https://forum.qt.io/topic/109715/is-it-possible-to-apply-gradient-on-texts/5
CC-MAIN-2021-39
refinedweb
408
52.05
1. Foreword

The singleton pattern comes up constantly, both in interviews and in daily work, and many of its details are worth exploring in depth. This article connects various pieces of basic knowledge through the singleton pattern and is well worth reading.

1. What is the singleton pattern?

The singleton pattern is a very common software design pattern. It requires that a singleton class allow only one instance to exist: the class is responsible for creating its own object and for ensuring that only one object is ever created. It is generally used for utility classes, or in business scenarios where creating an object consumes significant resources.

Features of the singleton pattern:
- The class constructor is private
- The class holds a reference to its own single instance
- The class provides a static method for obtaining the instance externally

Let's use a simple example to understand the usage of the singleton pattern:

```java
public class SimpleSingleton {
    // Holds a reference to the single instance of the class
    private static final SimpleSingleton INSTANCE = new SimpleSingleton();

    // Private constructor
    private SimpleSingleton() {
    }

    // Static method for obtaining the instance externally
    public static SimpleSingleton getInstance() {
        return INSTANCE;
    }

    public static void main(String[] args) {
        System.out.println(SimpleSingleton.getInstance().hashCode());
        System.out.println(SimpleSingleton.getInstance().hashCode());
    }
}
```

Print results:

    1639705018
    1639705018

We see that the hashCode of the SimpleSingleton instance is the same both times, indicating that the same object is returned in the two calls. Many friends probably use this in their daily work, but I want to tell you that there is a problem with this code. Don't believe it? Let's look further.
2. Hungry man mode and lazy man mode

When introducing the singleton pattern, we must first introduce its two famous implementation styles: hungry man mode (eager initialization) and lazy man mode (lazy initialization).

1. Hungry man mode

The instance is already built during class initialization: whether you ever use it or not, it is built first. The specific code is as follows:

```java
public class SimpleSingleton {
    // Holds a reference to the single instance of the class
    private static final SimpleSingleton INSTANCE = new SimpleSingleton();

    // Private constructor
    private SimpleSingleton() {
    }

    // Static method for obtaining the instance externally
    public static SimpleSingleton getInstance() {
        return INSTANCE;
    }
}
```

In fact, there is another variant of the hungry man mode:

```java
public class SimpleSingleton {
    // Holds a reference to the single instance of the class
    private static final SimpleSingleton INSTANCE;

    static {
        INSTANCE = new SimpleSingleton();
    }

    // Private constructor
    private SimpleSingleton() {
    }

    // Static method for obtaining the instance externally
    public static SimpleSingleton getInstance() {
        return INSTANCE;
    }
}
```

This variant instantiates the INSTANCE object in a static initializer block. The advantage of the hungry man mode is that there is no thread-safety problem, but the disadvantage is also obvious: the object is instantiated right at the start. If the instantiation process is very time-consuming and the object is never actually used, isn't that a waste of resources?

At this point you may think: do we really need to instantiate the object in advance? Can't we instantiate it only when it is actually used? That is exactly what I want to introduce next: the lazy mode.

2. Lazy mode

As the name suggests, an instance is created only when it is used; the class is "lazy". When the instance is requested, the code checks whether one already exists: if so, it is returned, and if not, it is created. The specific code is as follows:

```java
public class SimpleSingleton2 {
    private static SimpleSingleton2 INSTANCE;

    private SimpleSingleton2() {
    }

    public static SimpleSingleton2 getInstance() {
        if (INSTANCE == null) {
            INSTANCE = new SimpleSingleton2();
        }
        return INSTANCE;
    }
}
```

The INSTANCE object in this example is empty at first and is not instantiated until the getInstance method is called. Good. But there is still a problem with this code.
3. The synchronized keyword

What's wrong with the code above?

Answer: if the getInstance method is called from multiple threads, the if (INSTANCE == null) check may evaluate to true in several threads at the same time, because INSTANCE is initially null. This leads to the INSTANCE object being created in multiple threads simultaneously, i.e. the INSTANCE object is created more than once, which contradicts the original intention of creating only one INSTANCE object.

So, how can we improve it?

Answer: the easiest way is to use the synchronized keyword. The improved code is as follows:

```java
public class SimpleSingleton3 {
    private static SimpleSingleton3 INSTANCE;

    private SimpleSingleton3() {
    }

    public synchronized static SimpleSingleton3 getInstance() {
        if (INSTANCE == null) {
            INSTANCE = new SimpleSingleton3();
        }
        return INSTANCE;
    }

    public static void main(String[] args) {
        System.out.println(SimpleSingleton3.getInstance().hashCode());
        System.out.println(SimpleSingleton3.getInstance().hashCode());
    }
}
```

Adding the synchronized keyword to the getInstance method guarantees that, under concurrency, only one thread at a time can create the INSTANCE object.

Is that enough? Answer: sorry, there is still a problem.

What's the problem? Answer: the synchronized keyword costs performance on every call to getInstance. We should only lock when INSTANCE is null; if it is not null, we should skip the lock and return directly. This requires the double-checked lock described below.

4. The difference between hungry man mode and lazy man mode

Before introducing the double-checked lock, let's address a topic that friends may care about more: what are the advantages and disadvantages of the hungry man mode and the lazy man mode?

Hungry man mode: the advantage is that there is no thread-safety problem; the disadvantage is that it may waste memory.

Lazy mode: the advantage is that no memory is wasted; the disadvantage is that, if not controlled well, it is no longer a singleton.

Now let's look at how the double-checked lock ensures both performance and a single instance.
3. Double-checked lock

The double-checked lock, as its name implies, checks twice: it checks whether the instance is null before locking, and checks again after locking. So, how does it implement a singleton?

1. How does it implement a singleton?

The specific code is as follows:

```java
public class SimpleSingleton4 {
    private static SimpleSingleton4 INSTANCE;

    private SimpleSingleton4() {
    }

    public static SimpleSingleton4 getInstance() {
        if (INSTANCE == null) {
            synchronized (SimpleSingleton4.class) {
                if (INSTANCE == null) {
                    INSTANCE = new SimpleSingleton4();
                }
            }
        }
        return INSTANCE;
    }
}
```

Checking for null before locking ensures that, if INSTANCE is not null, it can be returned directly without acquiring the lock.

Why do we need to check INSTANCE for null again after locking?

Answer: to ensure that only one object is instantiated under multithreaded concurrency. For example, suppose thread A and thread B call the getInstance method at the same time and both see that INSTANCE is null; both then contend for the lock. If thread A grabs the lock first and starts executing the code inside the synchronized block, thread B waits. Thread A creates the new INSTANCE and releases the lock. Thread B then acquires the lock and enters the synchronized block. If it did not check whether INSTANCE is null again, the INSTANCE might be created a second time. Therefore, both checks, before and after synchronized, are needed.

Don't think it's over yet. What's the remaining problem?

2. The volatile keyword

What's wrong with the code above?

```java
public static SimpleSingleton4 getInstance() {
    if (INSTANCE == null) {                          // 1
        synchronized (SimpleSingleton4.class) {      // 2
            if (INSTANCE == null) {                  // 3
                INSTANCE = new SimpleSingleton4();   // 4
            }
        }
    }
    return INSTANCE;                                 // 5
}
```

The code of the getInstance method is written in the order 1, 2, 3, 4, 5, and we hope it executes in that order. However, the Java virtual machine may actually perform optimizations and reorder some instructions; after reordering, the effective order may become 1, 3, 2, 4, 5, in which case multiple instances can be created under multithreading. The reordered code might look like this:

```java
public static SimpleSingleton4 getInstance() {
    if (INSTANCE == null) {                          // 1
        if (INSTANCE == null) {                      // 3
            synchronized (SimpleSingleton4.class) {  // 2
                INSTANCE = new SimpleSingleton4();   // 4
            }
        }
    }
    return INSTANCE;                                 // 5
}
```

I see. What can be done about it?

Answer: add the volatile keyword to the definition of INSTANCE. The specific code is as follows:

```java
public class SimpleSingleton7 {
    private volatile static SimpleSingleton7 INSTANCE;

    private SimpleSingleton7() {
    }

    public static SimpleSingleton7 getInstance() {
        if (INSTANCE == null) {
            synchronized (SimpleSingleton7.class) {
                if (INSTANCE == null) {
                    INSTANCE = new SimpleSingleton7();
                }
            }
        }
        return INSTANCE;
    }
}
```

The volatile keyword guarantees visibility across threads (though not atomicity), and it also prevents instruction reordering. The double-checked lock mechanism ensures thread safety while, compared with locking the whole method, improving execution efficiency; compared with eager initialization, it also saves memory.

Besides the singleton implementations above, are there any others?

4. Static inner class

Static inner class, as the name suggests: implement the singleton pattern through a static inner class. So, how does it implement a singleton?

1. How does it implement the singleton pattern?

```java
public class SimpleSingleton5 {
    private SimpleSingleton5() {
    }

    public static SimpleSingleton5 getInstance() {
        return Inner.INSTANCE;
    }

    private static class Inner {
        private static final SimpleSingleton5 INSTANCE = new SimpleSingleton5();
    }
}
```

We see that a static Inner class is defined inside the SimpleSingleton5 class, and the getInstance method of SimpleSingleton5 returns the INSTANCE object of the Inner class.
The virtual machine loads Inner, and instantiates the INSTANCE object, only when the program calls the getInstance method for the first time. Java's internal class-loading mechanism ensures that only one thread can obtain the class initialization lock, and other threads must wait, guaranteeing the uniqueness of the object.

2. Reflection vulnerability

The code above looks perfect, but there is still a loophole: using reflection, one can still create objects through the class's no-argument constructor. For example:

```java
Class<SimpleSingleton5> simpleSingleton5Class = SimpleSingleton5.class;
try {
    SimpleSingleton5 newInstance = simpleSingleton5Class.newInstance();
    System.out.println(newInstance == SimpleSingleton5.getInstance());
} catch (InstantiationException e) {
    e.printStackTrace();
} catch (IllegalAccessException e) {
    e.printStackTrace();
}
```

The print result of the above code is false. The object created through reflection is not the same object as the one obtained through the getInstance method; this vulnerability makes SimpleSingleton5 a non-singleton.

So, how can this vulnerability be prevented?

Answer: add a check in the no-argument constructor: if the instance is not null, throw an exception. The modified code is as follows:

```java
public class SimpleSingleton5 {
    private SimpleSingleton5() {
        if (Inner.INSTANCE != null) {
            throw new RuntimeException("The singleton instance already exists");
        }
    }

    public static SimpleSingleton5 getInstance() {
        return Inner.INSTANCE;
    }

    private static class Inner {
        private static final SimpleSingleton5 INSTANCE = new SimpleSingleton5();
    }
}
```

If at this point you think this static-inner-class implementation of the singleton pattern is perfect, well, what I want to tell you is that you are wrong; there are still loopholes...

3. Deserialization vulnerability

As we all know, classes in Java can be serialized by implementing the Serializable interface. We can save a class object to memory or to a file first and later, at a certain time, restore it to the original object.
The specific code is as follows:

```java
public class SimpleSingleton5 implements Serializable {
    private SimpleSingleton5() {
    }

    public static SimpleSingleton5 getInstance() {
        return Inner.INSTANCE;
    }

    private static class Inner {
        private static final SimpleSingleton5 INSTANCE = new SimpleSingleton5();
    }

    private static void writeFile() {
        FileOutputStream fos = null;
        ObjectOutputStream oos = null;
        try {
            SimpleSingleton5 simpleSingleton5 = SimpleSingleton5.getInstance();
            fos = new FileOutputStream(new File("test.txt"));
            oos = new ObjectOutputStream(fos);
            oos.writeObject(simpleSingleton5);
            System.out.println(simpleSingleton5.hashCode());
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (oos != null) {
                try {
                    oos.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (fos != null) {
                try {
                    fos.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private static void readFile() {
        FileInputStream fis = null;
        ObjectInputStream ois = null;
        try {
            fis = new FileInputStream(new File("test.txt"));
            ois = new ObjectInputStream(fis);
            SimpleSingleton5 myObject = (SimpleSingleton5) ois.readObject();
            System.out.println(myObject.hashCode());
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } finally {
            if (ois != null) {
                try {
                    ois.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (fis != null) {
                try {
                    fis.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        writeFile();
        readFile();
    }
}
```

After running it, we find that the hashCodes of the serialized and deserialized objects are different:

    189568618
    793589513

Note: a new object is created during deserialization, which breaks the uniqueness requirement of the singleton pattern. So, how do we solve this problem?

Answer: override the readResolve method.
In the above example, add the following code:

```java
private Object readResolve() throws ObjectStreamException {
    return Inner.INSTANCE;
}
```

The results then become:

    290658609
    290658609

We see that the hashCode of the serialized and deserialized instance objects is now the same. The fix is very simple: the readResolve method just returns the unique Inner.INSTANCE object every time. When the program deserializes an object, it looks for a readResolve() method. If the method does not exist, the newly read object is returned directly; if the method exists, the object it returns is used instead. If we have not instantiated the singleton before, null is returned.

Well, we have finally stepped around all the pitfalls; it took a lot of effort. However, I'll secretly tell you that there is actually a simpler method, ha ha ha. What is it?

5. Enumeration

In fact, an enum is a natural singleton in Java. Each enum constant has exactly one object, which is guaranteed by Java's underlying mechanisms. Simple usage:

```java
public enum SimpleSingleton7 {
    INSTANCE;

    public void doSamething() {
        System.out.println("doSamething");
    }
}
```

Where it is called:

```java
public class SimpleSingleton7Test {
    public static void main(String[] args) {
        SimpleSingleton7.INSTANCE.doSamething();
    }
}
```

The INSTANCE object is unique in the enum, so this is a natural singleton pattern. Of course, using the uniqueness of enum constants, other singleton objects can also be created, for example:

```java
public enum SimpleSingleton7 {
    INSTANCE;

    private Student instance;

    SimpleSingleton7() {
        instance = new Student();
    }

    public Student getInstance() {
        return instance;
    }
}

class Student {
}
```

The JVM guarantees that enums are natural singletons, that there are no thread-safety issues, and that serialization is supported. In the classic book Effective Java, Joshua Bloch, the great god of Java, says that a single-element enum type has become the best way to implement a Singleton.
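A compact check (my own addition, not from the original article) that the enum approach really does hand back one and the same object everywhere, including shared mutable state:

```java
public class EnumSingletonCheck {
    // Minimal enum singleton, mirroring SimpleSingleton7 above.
    enum Config {
        INSTANCE;

        private int counter = 0;

        int increment() {
            return ++counter;
        }
    }

    public static void main(String[] args) {
        Config a = Config.INSTANCE;
        Config b = Config.valueOf("INSTANCE");

        // Both references point at the single object the JVM created once.
        if (a != b) {
            throw new AssertionError("expected the same instance");
        }

        // State is shared because there is only one instance.
        a.increment();
        int n = b.increment();
        if (n != 2) {
            throw new AssertionError("expected shared state");
        }
        System.out.println("enum singleton holds: " + n);  // enum singleton holds: 2
    }
}
```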
https://programmer.help/blogs/singleton-mode-is-really-not-simple.html
CC-MAIN-2021-49
refinedweb
2,229
57.37
19 July 2006 16:37 [Source: ICIS news] LONDON (ICIS news)--Production at Neochim’s 200,000 tonnes/year biodiesel plant in Feluy was due to start in August, a company source said. Construction work had so far gone to plan, and the factory was expected to be up and running by around 22 August, the source said. Production was expected to be at full rates by the end of September. Neochim, part of Italian bioenergy company Spiga Nord, said it had invested around €10m (around $12.6m) to build the new facility. The Feluy plant was built in 1995 to refine crude glycerine, a by-product of biodiesel, into high-quality kosher glycerine.
http://www.icis.com/Articles/2006/07/19/1074744/neochim-to-start-feluy-biodiesel-production-in-aug.html
CC-MAIN-2013-48
refinedweb
105
72.16
Writing a Unit Test As already mentioned in the coding conventions, we require unit tests for newly added functionality. The focus here lies on functional testing rather than 100% code coverage. Writing a test usually helps you to completely understand what your code is doing. Further, tests are necessary for our CI pipeline and for ensuring the stability of our software: failing tests are the first indicator that your function or change broke something in the project. This guide will show you how to write a test for your new functionality in MNE-CPP and what you should keep in mind. What and How to Test In general, every new functionality should be tested. You should think about the use case of your function and about how to prove or validate it. It is possible to compare the output of your function to the output of another software package, e.g. MNE-Matlab, MNE-C or MNE-Python. Once you know what you want to test and have your reference results to compare against, you can carry on with the following steps. Creating a new Test As part of the MNE-CPP wizards for QtCreator, we provide a template for a new test project. How to set up and use the MNE-CPP wizards is described in our Coding Conventions. You can create a new test project as shown in the following picture: After this, a new window should open and allow you to choose from a variety of templates. Under Projects choose MNE-CPP and then MNE-CPP Test. After you have completed all wizard steps, you have created a new test project. Structuring the Test Always keep our Coding Conventions in mind, and consider taking a look at already available tests to get started. First, you create a class named after your test: TestName. The following code snippet shows an example of a test. The slots define the functions that set up your test, e.g. initTestCase(), and compare the output to reference values, e.g. compareValue().
Further, you can declare threshold values as private variables that define the maximum difference allowed when comparing. An example of a test can be found here.

```cpp
class TestFiffRWR: public QObject
{
    Q_OBJECT

public:
    TestFiffRWR();

private slots:
    void initTestCase();
    void compareValue();
    void cleanupTestCase();

private:
    // some variables and error thresholds
    double dEpsilon;
    Eigen::MatrixXd mFirstInData;
    Eigen::MatrixXd mSecondInData;
};
```

initTestCase() Here you execute and declare everything that is necessary for setting up your test. You generate and load all values into variables that can be compared later. If you want to load externally calculated data from e.g. .txt files, you can use:

```cpp
Eigen::MatrixXd mDataFromFile;
UTILSLIB::IOUtils::read_eigen_matrix(mDataFromFile,
    QCoreApplication::applicationDirPath() + "/mne-cpp-test-data/Result/<yourFile>.txt");
```

All files you use have to be added to mne-cpp-test-data. In case you need to add new data, open a Pull Request to this repository. The files you use should be as small as possible. If you need a .fif file, have a look at the already existing data first.

compareValue()

```cpp
void TestFiffRWR::compareValue()
{
    // compare your data here, think about useful metrics
    Eigen::MatrixXd mDataDiff = mFirstInData - mSecondInData;
    QVERIFY( mDataDiff.sum() < dEpsilon );
}
```

Here you compare the output of your functions to the reference data. The actual comparison is made by QVERIFY. Before the test, think about useful and meaningful measures and thresholds for your comparison. Don't combine comparisons of different values; use a new compare function instead. Once you have built the test project, you should run your test locally and debug it. Possible Error Message It is possible that the last line of the test shows an error in your editor. Don't worry about this; once you have built the test project, the error will disappear.
    #include "test_fiff_rwr.moc"

Naming Conventions

Please follow these naming conventions when naming your test project and class:
https://mne-cpp.github.io/pages/contribute/test.html
Why there must be a msgid tag in ngettext

At first glance, it is not clear why we should use the msgid tag for the first argument of the ngettext function. The main reason is to be able to use ttag without a babel transpilation step.

Valid ngettext usage:

    import { ngettext, msgid } from 'ttag'

    function test(n) {
        return ngettext(msgid`${n} time clicked`, `${n} times clicked`, n)
    }

For instance, say we have a universal (isomorphic) application and our translations must be applied on the server side at runtime. So, ttag should be able to match the appropriate translation without additional transpilation steps. To find the translation, it needs a proper msgid from ngettext's first argument.

Let's imagine we are executing the function described above:

    test(3);

Inside the ngettext function we would receive "3 time clicked" instead of "${n} time clicked". The problem is that the expressions in the template have already been evaluated, so there is no way to find out from the evaluated string which template from our source strings it came from. That is the reason why there must be some kind of a tag that saves the msgid before evaluation.

You can find out more about ES6 tagged templates here.
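To make the point concrete, here is a simplified illustration. This is not ttag's real implementation, just a toy msgid tag showing that a tag function receives the literal string parts before interpolation, while a plain template literal arrives already evaluated:

```javascript
// Toy illustration (NOT ttag's internals): a template *tag* receives the
// raw string parts separately from the evaluated expressions, so it can
// rebuild the untranslated placeholder form to use as a lookup key.
function msgid(strings, ...values) {
  return {
    // Rebuild a "${ n }"-style placeholder between the literal parts.
    id: strings.reduce(
      (acc, part, i) => acc + part + (i < values.length ? '${ n }' : ''),
      ''
    ),
    values,
  };
}

function demo(n) {
  const tagged = msgid`${n} time clicked`; // parts survive evaluation
  const plain = `${n} times clicked`;      // already a finished string
  return { taggedId: tagged.id, plain };
}

console.log(demo(3).taggedId); // "${ n } time clicked" - usable as a key
console.log(demo(3).plain);    // "3 times clicked" - the msgid is lost
```

With the tag, the original template shape is recoverable at runtime; without it, only the interpolated result remains.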
https://ttag.js.org/blog/2018/09/06/why-msgid.html
Introduction

If you have experience in Machine Learning, specifically supervised learning, you will know that hyperparameter tuning is an important process for improving model accuracy. This process tunes the hyperparameters of a Machine Learning algorithm. A model learns its ordinary parameters from the observed data, whereas hyperparameters are set before training and are not learned from the data. Hyperparameters are different for each algorithm, and if we do not tune them, the default values are used.

There are many ways to do hyperparameter tuning. This article will later focus on Bayesian Optimization, as this is my favorite. There are 2 packages that I usually use for Bayesian Optimization: "bayes_opt" and "hyperopt" (Distributed Asynchronous Hyper-parameter Optimization). We will compare the two in terms of run time, accuracy, and output. But before that, we will cover some basic knowledge of hyperparameter tuning.

Hyperparameter Tuning

Hyperparameter tuning is the process of searching for the most accurate hyperparameters for a dataset with a Machine Learning algorithm. To do this, we fit and evaluate the model repeatedly with changing hyperparameters until we find the best accuracy. The search methods can be uninformed search or informed search. Uninformed search tries sets of hyperparameters repeatedly and independently; each trial does not inform or suggest the other trials. Examples of uninformed search are GridSearchCV and RandomizedSearchCV.

Hyperparameter Tuning Using GridSearchCV

Now, I want to perform hyperparameter tuning on GradientBoostingClassifier. The dataset is from a Kaggle competition. The hyperparameters to tune are "max_depth", "max_features", "learning_rate", "n_estimators", and "subsample". Note that, as mentioned above, these hyperparameters are only for GradientBoostingClassifier, not for the other algorithms. The accuracy metric is the accuracy score, and I will run 5-fold cross-validation.
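Before looking at the search code itself, it is worth seeing how quickly the number of grid combinations grows. The value lists below mirror the param_grid used in the GridSearchCV snippet that follows; plain Python confirms the combination count:

```python
from itertools import product

# These value lists mirror the param_grid from the GridSearchCV snippet.
param_grid = {
    'max_depth': [3, 4, 5, 6, 7, 8, 9, 10],    # 8 options
    'max_features': [0.8, 0.9, 1],             # 3 options
    'learning_rate': [0.01, 0.1, 1],           # 3 options
    'n_estimators': [80, 100, 120, 140, 150],  # 5 options
    'subsample': [0.8, 0.9, 1],                # 3 options
}

# Every combination the grid search will try:
n_combinations = len(list(product(*param_grid.values())))
print(n_combinations)      # 1080 parameter combinations
print(n_combinations * 5)  # 5400 model fits with 5-fold cross-validation
```

Each extra hyperparameter multiplies the grid size, which is exactly why uninformed grid search becomes expensive so quickly.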
Below is the code for GridSearchCV. We can see that the value options for each hyperparameter are set in the "param_grid". For example, GridSearchCV will try n_estimators of 80, 100, and so on up to 150. To know how many times GridSearchCV will run, just multiply the number of value options of each hyperparameter with one another: 8 x 3 x 3 x 5 x 3 = 1080. And for each of the 1080 combinations, there will be 5-fold cross-validation. That makes 1080 x 5 = 5400 models that must be built to find the best one.

    # Load packages
    from scipy.stats import uniform
    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    # GridSearchCV
    param_grid = {'max_depth': [3, 4, 5, 6, 7, 8, 9, 10],
                  'max_features': [0.8, 0.9, 1],
                  'learning_rate': [0.01, 0.1, 1],
                  'n_estimators': [80, 100, 120, 140, 150],
                  'subsample': [0.8, 0.9, 1]}
    grid = GridSearchCV(estimator=GradientBoostingClassifier(),
                        param_grid=param_grid,
                        scoring=acc_score, cv=5)
    grid.fit(X_train.iloc[1:100, ], y_train.iloc[1:100, ])

Disadvantages

The disadvantage of this method is that we can miss good hyperparameter values that were not set in the beginning. For instance, we did not set an option for max_features to be 0.85 or for learning_rate to be 0.05, so we will never know whether such a combination gives better accuracy. To overcome this, we can try RandomizedSearchCV.

Hyperparameter Tuning Using RandomizedSearchCV

Below is the code for that. Note that the code sets a range of possible values for each hyperparameter. For example, the learning_rate can take any value from 0.01 to 1, distributed uniformly.
    from scipy.stats import randint, uniform
    from sklearn.model_selection import RandomizedSearchCV

    # RandomizedSearchCV
    # Note: scipy's uniform(loc, scale) samples from [loc, loc + scale],
    # so uniform(0.8, 0.2) covers [0.8, 1.0]; randint is used for the
    # integer-valued hyperparameters.
    param_rand = {'max_depth': randint(3, 11),
                  'max_features': uniform(0.8, 0.2),
                  'learning_rate': uniform(0.01, 0.99),
                  'n_estimators': randint(80, 151),
                  'subsample': uniform(0.8, 0.2)}
    rand = RandomizedSearchCV(estimator=GradientBoostingClassifier(),
                              param_distributions=param_rand,
                              scoring=acc_score, cv=5)
    rand.fit(X_train.iloc[1:100, ], y_train.iloc[1:100, ])

Problem With Uninformed Search

The problem with uninformed search is that it takes a relatively long time to build all the models. Informed search can solve this problem. In informed search, the previous models, with their sets of hyperparameter values, inform the later models which hyperparameter values are better to select.

One of the methods to do this is coarse-to-fine. This involves running GridSearchCV or RandomizedSearchCV more than once; each time, the hyperparameter value range is made more specific. For example, we start RandomizedSearchCV with learning_rate ranging from 0.01 to 1. Then, we find out that the high-accuracy models have their learning_rate around 0.1 to 0.3. Hence, we run GridSearchCV again, focusing on learning_rate between 0.1 and 0.3. This process can continue until a satisfactory result is achieved. The first trial is coarse because the value range is large, from 0.01 to 1. The later trial is fine, as the value range is focused on 0.1 to 0.3.

The drawback of the coarse-to-fine method is that we need to run the code repeatedly and observe the value ranges of the hyperparameters ourselves. You might be wondering if there is a way to automate this. Yes, and that is why my favorite is Bayesian Optimization.

Bayesian Optimization

Bayesian Optimization also runs models many times with different sets of hyperparameter values, but it evaluates the past model information to select hyperparameter values for the next model. This is said to spend less time reaching the highest-accuracy model than the previously discussed methods.
bayes_opt

As mentioned in the beginning, there are two packages in Python that I usually use for Bayesian Optimization. The first one is bayes_opt. Here is the code to run it.

    from bayes_opt import BayesianOptimization

    # Gradient Boosting Machine
    def gbm_cl_bo(max_depth, max_features, learning_rate, n_estimators, subsample):
        params_gbm = {}
        params_gbm['max_depth'] = round(max_depth)
        params_gbm['max_features'] = max_features
        params_gbm['learning_rate'] = learning_rate
        params_gbm['n_estimators'] = round(n_estimators)
        params_gbm['subsample'] = subsample
        score = cross_val_score(GradientBoostingClassifier(random_state=123, **params_gbm),
                                X_train, y_train, scoring=acc_score, cv=5).mean()
        return score

    # Run Bayesian Optimization
    start = time.time()
    params_gbm = {
        'max_depth': (3, 10),
        'max_features': (0.8, 1),
        'learning_rate': (0.01, 1),
        'n_estimators': (80, 150),
        'subsample': (0.8, 1)
    }
    gbm_bo = BayesianOptimization(gbm_cl_bo, params_gbm, random_state=111)
    gbm_bo.maximize(init_points=20, n_iter=4)
    print('It takes %s minutes' % ((time.time() - start)/60))

Output:

    | iter | target | learni... | max_depth | max_fe... | n_esti... | subsample |
    -----------------------------------------------------------------------------
    | 1    | 0.7647 | 0.616    | 4.183 | 0.8872 | 133.8 | 0.8591 |
    | 2    | 0.7711 | 0.1577   | 3.157 | 0.884  | 96.71 | 0.8675 |
    | 3    | 0.7502 | 0.9908   | 4.664 | 0.8162 | 126.9 | 0.9242 |
    | 4    | 0.7681 | 0.2815   | 6.264 | 0.8237 | 85.18 | 0.9802 |
    | 5    | 0.7107 | 0.796    | 8.884 | 0.963  | 149.4 | 0.9155 |
    | 6    | 0.7442 | 0.8156   | 5.949 | 0.8055 | 111.8 | 0.8211 |
    | 7    | 0.7286 | 0.819    | 7.884 | 0.9131 | 99.2  | 0.9997 |
    | 8    | 0.7687 | 0.1467   | 7.308 | 0.897  | 108.4 | 0.9456 |
    | 9    | 0.7628 | 0.3296   | 5.804 | 0.8638 | 146.3 | 0.9837 |
    | 10   | 0.7668 | 0.8157   | 3.239 | 0.9887 | 146.5 | 0.9613 |
    | 11   | 0.7199 | 0.4865   | 9.767 | 0.8834 | 102.3 | 0.8033 |
    | 12   | 0.7708 | 0.0478   | 3.372 | 0.8256 | 82.34 | 0.8453 |
    | 13   | 0.7679 | 0.5485   | 4.25  | 0.8359 | 90.47 | 0.9366 |
    | 14   | 0.7409 | 0.4743   | 8.378 | 0.9338 | 110.9 | 0.919  |
    | 15   | 0.7216 | 0.467    | 9.743 | 0.8296 | 143.5 | 0.8996 |
    | 16   | 0.7306 | 0.5966   | 7.793 | 0.8355 | 140.5 | 0.8964 |
    | 17   | 0.772  | 0.07865  | 5.553 | 0.8723 | 113.0 | 0.8359 |
    | 18   | 0.7589 | 0.1835   | 9.644 | 0.9311 | 89.45 | 0.9856 |
    | 19   | 0.7662 | 0.8434   | 3.369 | 0.8407 | 141.1 | 0.9348 |
    | 20   | 0.7566 | 0.3043   | 8.141 | 0.9237 | 94.73 | 0.9604 |
    | 21   | 0.7683 | 0.02841  | 9.546 | 0.9055 | 140.5 | 0.8805 |
    | 22   | 0.7717 | 0.05919  | 4.285 | 0.8093 | 92.7  | 0.9528 |
    | 23   | 0.7676 | 0.1946   | 7.351 | 0.9804 | 108.3 | 0.929  |
    | 24   | 0.7602 | 0.7131   | 5.307 | 0.8428 | 91.74 | 0.9193 |
    =============================================================================
    It takes 20.90080655813217 minutes

    params_gbm = gbm_bo.max['params']
    params_gbm['max_depth'] = round(params_gbm['max_depth'])
    params_gbm['n_estimators'] = round(params_gbm['n_estimators'])
    params_gbm

Output:

    {'learning_rate': 0.07864837617488214,
     'max_depth': 6,
     'max_features': 0.8723008386644597,
     'n_estimators': 113,
     'subsample': 0.8358969695415375}

The package bayes_opt takes 20 minutes to build 24 models.
The best accuracy is 0.772.

hyperopt

Another package is hyperopt. Here is the code.

    from hyperopt import hp, fmin, tpe

    # Run Bayesian Optimization from hyperopt
    start = time.time()
    space_lr = {'max_depth': hp.randint('max_depth', 3, 10),
                'max_features': hp.uniform('max_features', 0.8, 1),
                'learning_rate': hp.uniform('learning_rate', 0.01, 1),
                'n_estimators': hp.randint('n_estimators', 80, 150),
                'subsample': hp.uniform('subsample', 0.8, 1)}

    def gbm_cl_bo2(params):
        params = {'max_depth': params['max_depth'],
                  'max_features': params['max_features'],
                  'learning_rate': params['learning_rate'],
                  'n_estimators': params['n_estimators'],
                  'subsample': params['subsample']}
        gbm_bo2 = GradientBoostingClassifier(random_state=111, **params)
        best_score = cross_val_score(gbm_bo2, X_train, y_train,
                                     scoring=acc_score, cv=5).mean()
        return 1 - best_score

    gbm_best_param = fmin(fn=gbm_cl_bo2,
                          space=space_lr,
                          max_evals=24,
                          rstate=np.random.RandomState(42),
                          algo=tpe.suggest)
    print('It takes %s minutes' % ((time.time() - start)/60))

Output:

    100%|██████████| 24/24 [19:53<00:00, 49.74s/trial, best loss: 0.22769091027055077]
    It takes 19.897333371639252 minutes

    gbm_best_param

Output:

    {'learning_rate': 0.03516615427790515,
     'max_depth': 6,
     'max_features': 0.8920776081423815,
     'n_estimators': 148,
     'subsample': 0.9981549036976672}

The package hyperopt takes 19.9 minutes to run 24 models. The best loss is 0.228, which means that the best accuracy is 1 - 0.228 = 0.772.

The durations of bayes_opt and hyperopt are almost the same, and the accuracy is also almost the same, although the best hyperparameters found are different. But there is another difference: bayes_opt shows the progress of the tuning, so we can see which values are used in each iteration, whereas hyperopt shows only a single progress-bar line with the best loss and the duration.
In my opinion, I prefer bayes_opt because, in reality, the tuning process may take too long and we may simply want to terminate it. After stopping the process, we can still take the best hyperparameter-tuning result found so far. We can do that with bayes_opt, but not with hyperopt.

There are still other ways to automate hyperparameter tuning. Not only hyperparameter tuning, but also the choice of Machine Learning algorithm can be automated. I will discuss that next time. The above code is available here.
https://www.analyticsvidhya.com/blog/2021/05/bayesian-optimization-bayes_opt-or-hyperopt/
    #include <zb_zcl_price.h>

PublishCreditPayment command payload. Fields:

- An unsigned 32-bit field denoting the last credit payment. This field should be provided in the same currency as used in the Price cluster.
- A UTCTime field containing the time at which the last credit payment was made.
- A UTCTime field containing the time that the next credit payment is due.
- An unsigned 32-bit field denoting the current amount that is overdue from the consumer. This field should be provided in the same currency as used in the Price cluster.
- A string of between 0-20 octets used to denote the last credit payment reference used by the energy supplier.
- An 8-bit enumeration identifying the current credit payment status.
- A unique identifier generated by the commodity provider. When new information is provided that replaces older information for the same time period, this field allows devices to determine which information is newer.
- An unsigned 32-bit field containing a unique identifier for the commodity provider.
https://developer.nordicsemi.com/nRF_Connect_SDK/doc/zboss/3.8.0.1/structzb__zcl__price__publish__credit__payment__payload__s.html
Hello Luca,

thanks for your thoughts. Meanwhile i have got the user tools to compile. I added your #ifdef statement in /usr/include/devfs_kernel.h, and additionally i needed to add the following:

in /usr/include/linux/genhd.h

    #ifdef __KERNEL__
    #include <linux/devfs_kernel.h>
    #endif /* __KERNEL__ */

and in 0.8final/tools/lib/liblvm.h

    typedef struct devfs_entry * devfs_handle_t;

With these changes the user tools compiled cleanly, but I still need to check the functionality - which will happen early next week, when i return from a short trip.

thanx and best regards

Stefan

On 25 Apr, Luca Berra wrote:
> This is a problem with devfs, i believe.
> Actually it is due to kernel includes residing in /usr/include
> where they should not be.
>
> Anyway i added
>     #ifdef __KERNEL__
> and
>     #endif /* __KERNEL__ */
> at the beginning and end of /usr/include/linux/devfs_fs_kernel.h
>
> I CCed Richard Gooch, author of devfs, so maybe he can comment on this
>
> Regards,
> Luca
>
> On Tue, Apr 25, 2000 at 12:44:26AM +0200, stk rmi de wrote:
>> Hello,
>>
>> i have problems compiling the user space tools from lvm0.8final,
>> I have already applied patch-lvm_0.8final-2 cleanly.

--
Stefan K email: stk rmi de
Aix-la-Chapelle fax/data: +49-241-533353
https://www.redhat.com/archives/linux-lvm/2000-April/msg00114.html
The problem, widely known as the digit root problem, has a congruence formula. For base b (decimal case b = 10), the digit root of an integer n is:

- dr(n) = 0 if n == 0
- dr(n) = b - 1 if n != 0 and n % (b-1) == 0
- dr(n) = n % (b-1) if n % (b-1) != 0

or equivalently

- dr(n) = 1 + (n - 1) % 9

Note that when n = 0, since (n - 1) % 9 = -1 in C++, the return value is 1 + (-1) = 0 (correct). From the formula, we can see that the result of this problem is inherently periodic, with period (b-1). Output sequence for decimals (b = 10):

    input:  0 1 2 3 4 ...
    output: 0 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 ...

Hence, we can write the following code, whose time and space complexities are both O(1).

    class Solution {
    public:
        int addDigits(int num) {
            return 1 + (num - 1) % 9;
        }
    };

Thanks for reading. :)

It seems that different languages have different implementations of the mod operator when dealing with negative numbers.

Don't know why this kind of pure math question keeps appearing in coding interviews... if you don't know what a digital root is, it will take you quite a while to figure it out; it's definitely not "easy".

@zhiqing_xiao ok, well, we are the same. This is my Java code:

    public class Solution {
        public int addDigits(int num) {
            if (num == 0) return 0;
            return num % 9 == 0 ? 9 : num % 9;
        }
    }

@zhiqing_xiao Great!
https://discuss.leetcode.com/topic/21498/accepted-c-o-1-time-o-1-space-1-line-solution-with-detail-explanations
how to connect to and write to a db using a groovy connector

When I evaluate it, it asks for the test variable value, which is OK, but why does it ask for the sql value? And the sql statement gives the following alert error with a yellow triangle sign: "sql cannot be resolved. It may lead to runtime errors." How can I solve it? Please provide a particular script for writing to an external db.

Hi,

In order to execute a SQL query inside a Groovy script, you have to use a dedicated wrapper provided by Bonita BPM:

    BonitaSql sql = BonitaSql.newInstance(url, user, password, driver);

You can find the documentation and a shortcut to this object in the right panel, in the "Bonita" category. Note that this object currently supports only a single connection. For your use case, I think you have to adapt the code of this object in order to access your connection pool. Here is (part of) the code of BonitaSql:

    /**
     * Creates a new Sql instance given a JDBC connection URL.
     * @param url a database url of the form jdbc:subprotocol:subname
     * @param user
     * @param password
     * @param driver an instance of the java.sql.Driver to use
     * @return a new Sql instance with a connection
     */

    Properties p = new Properties()
    p.setProperty("user", user)
    p.setProperty("password", password)

Note that I advise you to minimize the use of Groovy scripts in your project. Groovy is really useful for prototyping within Bonita BPM, but in order to have good performance in production, you should convert your Groovy scripts into custom connectors.

Hello,

Go to the menu "Development", then "Connector" and "New definition". Write the definition of your connector (input and output values); for example, name and address would be good candidates for input values. Then go to "Development", then "Connector" and "New implementation". Choose the definition you made, and then put your code in executeBusinessLogic(). Attention: you are in Java here, not in Groovy. I found documentation here:

Second idea: use a Sql connector.
For example, the Datasource database query is a connector to run a request. Normally, this connector is used to query a database, but it may work for updates too? I don't know.

Hope that helps.

Hello,

Try maybe:

    import groovy.sql.Sql; // don't forget the ;

    // then define your variable
    Sql sql = Sql.newInstance("jdbc:oracle:thin:@200.200.1.224:1583:DNLCLONE",
                              "apps", "spider123", "oracle.jdbc.pool.OracleDataSource");

Else create a connector in Java.

Sir, thanks for your reply, but whenever I try to code using a Groovy script and use the Sql class, it gives the same error: "sql cannot be resolved. It may lead to run-time errors." I also tried to execute the process rather than evaluate it, but it is still not working.

Sir, a Java connector? You mean to say I should create a custom connector? Because there is no option in the existing script connector to write Java code. If there is any other option, please let me know.

thanks
kandarp

Did you try to execute your process and see if it would work, instead of evaluating it only?

Sir, thanks for the reply. Yes, I tried to execute the process, but it failed at the connector execution stage. If there is any alternative, please let me know.

thanks
kandarp
https://community.bonitasoft.com/node/523
From: Andreas Pokorny (andreas.pokorny_at_[hidden])
Date: 2005-03-24 14:36:03

On Thu, Mar 24, 2005 at 01:14:57PM +0530, Tushar <tushar_at_[hidden]> wrote:
> Hi all,
>
> I am thinking of converting GNU Classpath used for GCJ to C++. I have
> following reasons about why to do this.

The Classpath in C++ would be the runtime linker path, and a mapping of namespaces onto directories? How would one implement such a concept, and for what use? Or is the classpath the whole library API?

> 1. C++ requires a Good OOPs library with well defined api. Java has that.
> While C++ also has many of them, most do not look truly object oriented.

Could you elaborate on "Good OOPs"? Do you think that only object-oriented libraries are good enough?

> 2. C++ has much of the libs using STL. The only problem is STL is not
> object oriented.

Where is the problem? Containers are objects, algorithms are not. Looks perfectly good to me.

> (See STL Tutorial and reference) This really makes it
> difficult to think in OO and implement in STL. Particularly in STL,
>     T a, b
>     T a = b
> means a is a separate entity and same for b. This makes a problem in
> many cases where one wants just a pointer, e.g. file handling and
> manipulation of large buffers.

Where is the relation between value/pointer/reference semantics and OOP? Btw, you need to understand language semantics before implementing anything in that language.

> My idea is that API specification is already defined for java. And it
> seems much complete. I mean to say more complete than if we start deciding
> from scratch :-).

Why not use Java then? I doubt that reusing the design makes much sense, because it was designed for a different language, a language without templates, without references, without multiple inheritance, without operators and without destructors. Thus there are lots of things that can be handled more nicely.
> Difference between proposed C++ and Java
> 1. Everything is a pointer other than simple atomic data types. (Same)

References are helpful; try to implement a swap without references. Why should you prune C++ here?

> 4. Everything is a subclass of Object. (Same)

Why should one pay for a deep hierarchy that is never used? You should ask yourself why Java needs to have that common base class.

Regards
Andreas Pokorny

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/03/83096.php
In this article we will see how to connect to, log in to, and upload a file to an FTP server using Python. We will require a publicly available FTP server to test our code. You can use the details below.

FTP URL:
FTP User: dlpuser@dlptest.com
Password: e73jzTRTNqCN9PYAAjjn

If the above details are not working or are outdated, let us know by commenting below so that we can update the article. Meanwhile, you can search for other public FTP server details available on the Internet.

We will use the ftplib Python module.

    from ftplib import FTP
    import os

Define the host, username and password.

    host = ""
    username = "dlpuser@dlptest.com"
    password = "e73jzTRTNqCN9PYAAjjn"

Create a connection.

    ftp = FTP(host=host)

Log in to the FTP server.

    login_status = ftp.login(user=username, passwd=password)
    print(login_status)

Now create a dummy file in your current directory.

    echo 'Hi Rana' > rana_

Print the content of the current directory on the FTP server and upload the txt file.

    print(ftp.nlst())
    fp = open("rana_", 'rb')
    ftp.storbinary('STOR %s' % os.path.basename("rana_"), fp, 1024)
    fp.close()

Print the content of the current directory on the FTP server again after uploading the file. You can verify via a browser by visiting the link. If you need to upload the file to some other directory, change to that directory using ftp.cwd('dirname'). Here we have uploaded the file to the upload directory.

The complete code is below.

    #
    # Sample python program showing FTP connection and
    # how to upload any file to a FTP server
    #
    # Author -
    #
    from ftplib import FTP
    import os

    host = ""
    username = "dlpuser@dlptest.com"
    password = "e73jzTRTNqCN9PYAAjjn"

    # connect to host on default port i.e. 21 and log in
    ftp = FTP(host=host)
    login_status = ftp.login(user=username, passwd=password)
    print(login_status)

    # change directory to upload
    ftp.cwd('upload')

    # print the content of directory
    print(ftp.nlst())

    fp = open("rana_", 'rb')
    # upload file
    ftp.storbinary('STOR %s' % os.path.basename("rana_"), fp, 1024)
    fp.close()

    print(ftp.nlst())
https://pythoncircle.com/post/668/uploading-a-file-to-ftp-server-using-python/
From the makers of 'Anonymous classes' comes… anonymous functions. Or, as friends call them: lambda expressions. Anonymous functions, as we already saw in a previous post, are functions that don't need to be declared previously. Let's see an example.

A function which returns the length of a String, removing any blank space in it, could be defined as:

    def length(s: String): Int = s.replaceAll(" ", "").length

Its equivalent anonymous function would be:

    (s: String) => s.replaceAll(" ", "").length

The type of this expression is:

    String => Int

Where can we use anonymous functions?

The most common way to use them is in functions that accept other functions as parameters. This type of function is called a Higher Order Function. Functions that return a function as their result are also known as higher order functions.

Fantastic, extraordinary… an example, please?

Thanks to our unlimited imagination, we'll create a simple integer calculator. Let's define our calculator in the following way:

    object Calculator {

      def sum(n1: Int, n2: Int): Int = n1 + n2

      def subtract(n1: Int, n2: Int): Int = n1 - n2

      def multiplicate(n1: Int, n2: Int): Int = n1 * n2

      def divide(n1: Int, n2: Int): Int = n1 / n2

    }

Hmmmm, cool. An object with some methods in it. It works but… what if we try to take a more generic turn? What we really want is to apply a binary operation. That operation, given two integers, will return a new integer. We could say that we need a method like this:

    def calculate(n1: Int, n2: Int)(operation: (Int, Int) => Int) = operation(n1, n2)

As can be appreciated, we are actually passing a function as a parameter, and we can supply it as an anonymous function. In order to make it more readable, we can create a new type that represents the integer binary operation (Int, Int) => Int.
    type Operation = (Int, Int) => Int

And if we apply this to our calculate method:

    def calculate(n1: Int, n2: Int)(operation: Operation) = operation(n1, n2)

This method can be used in several ways:

1) The easiest one: we pass a previously defined function.

    def addition(n1: Int, n2: Int) = n1 + n2

    calculate(1, 2)(addition) // returns 3

2) There is no function defined. Besides, the function is pretty simple and it won't be used elsewhere in the code. All right then, we can use a lambda expression:

    calculate(1, 2)((n1: Int, n2: Int) => n1 + n2) // returns 3

As can be seen, in this case an anonymous function is used to define the operation we want to apply to the two integers. It is a nimble and quick way to define functions. But that's not all. Thanks to type inference, we can avoid writing the types of the input parameters:

    calculate(1, 2)((n1, n2) => n1 + n2) // returns 3

And with a spoonful of syntactic sugar…

    calculate(1, 2)(_ + _) // returns 3

What advantages do we have when compared to the object-oriented implementation?

- Our code is significantly reduced:

    object Calculator {

      type Operation = (Int, Int) => Int

      def calculate(n1: Int, n2: Int)(operation: Operation) = operation(n1, n2)

    }

- We are not bound to use only the operations that we have hardcoded in our implementation. We can create more complex operations on the fly:

    calculate(1, 2)((n1, n2) => (n1 + n2) * (n1 - n2))

As always happens in Scala, misuse might lead to unpleasant consequences. In future posts, we'll take a look at the dark side of lambda expressions. In the meantime, we shall remain naively happy.
https://scalerablog.wordpress.com/2015/06/01/lambda-expressions-everywhere/
Red Hat Bugzilla – Bug 185840: kickstart install traceback
Last modified: 2007-11-30 17:11:27 EST

Description of problem:

    File "/usr/lib/python2.4/site-packages/pykickstart/parser.py", line 1001, in readKickstart
        self.handleCommand(lineno, args)
    File "/usr/lib/anaconda/kickstart.py", line 688, in handleCommand
        self.handler.handlers[cmd](self.id, cmdArgs)
    File "/usr/lib/anaconda/kickstart.py", line 572, in doXConfig
        dict["startX"])
    File "/usr/lib/anaconda/installclass.py", line 414, in configureX
        self.setVideoCard(id, driver, videoRam)
    File "/usr/lib/anaconda/installclass.py", line 395, in setVideoCard
        import rhpxl.videocard
    ImportError: No module named rhpxl.videocard

    install exited abnormally

The kickstart contains:

    xconfig --monitor "Monitor 1280x1024" --resolution 1280x1024 --depth 16

What version of anaconda is this, and what tree did it come from? There should be a /usr/lib/python2.4/site-packages/rhpxl/videocard.py in the second stage that you're using.

Happens with rawhide tree as well as current FC5 tree from wallace. This is using "text" and minstg2.img.

regards,
Florian La Roche

Please test with the next anaconda rebuild and let me know if this is working better. Looks like we need rhpxl in minstg2 instead of the larger image.

*** Bug 186820 has been marked as a duplicate of this bug. ***

*** Bug 187161 has been marked as a duplicate of this bug. ***

I'm having the exact same problem. When will the next rebuild of anaconda be released?

It was rebuilt last night and is in today's Rawhide.

I was thinking about this last night on the drive home - the kickstart is failing before it gets to the RPM install step, so the problem won't be solved by updating the RPM. Instead, won't I need to replace just the old minstg2.img file with a fixed version for Core 5?
If I attempt to use the one from Rawhide, it complains about the installer not matching the media. Where can I find an updated version for Core 5?

Thank you for the help so far, but still no luck. I've downloaded that image file, added it to my web server, and booted from both PXE and the FC5 CD with the following command:

    linux ks=

The installer proceeds to the detecting-hardware step and then dies with the import error "No module named rhpxl.videocard".

Whoops - I mistyped:

    linux ks= updates=

(In reply to comment #9)
>.

is there a separate image for x86_64 FC5? thanks

no floppy here. I copied the patch to my custom CD and I'm trying to use the following update command (I have modified isolinux.cfg):

    label kick
      kernel vmlinuz
      append initrd=initrd.img ks=cdrom:/ks.cfg updates=cdrom:/185840-i386.img

Is this right, or am I doing something wrong? I would really like to get this to work.

Sorry, that was dumb. I put rhpl in the updates image instead of rhpxl. Please try again with the URL of. I've removed the -i386 to indicate there's nothing but regular python in this image.

b chang - No, there's not a separate one for x86_64. Please try with the new URL and that should work for you.

Joe - if you're creating a new CD image, you should put the updates image in /Fedora/base on the first CD. You won't need to provide any special parameters.

The kickstart gets farther than before.... The traceback is:

    File "/usr/bin/anaconda", line 373, in ?
        setupPythonUpdates()
    File "/usr/bin/anaconda", line 311, in setupPythonUpdates
        for f in os.listdir("/usr/%s/python%s/site-packages/%s" %(libdir,
    OSError: [Errno 2] No such file or directory: '/usr/lib/python2.4/site-packages/rhpxl'

The kickstart still fails for me. I am using a modified boot ISO and from there an FTP install. I copied the image to Fedora/base on my FTP server.

How do I get the traceback?
Joe

Has this worked in any previous versions? I can't see that it has, but maybe I'm missing something.
We simply do not have the information in text ftp/http installs to configure X because these installs use minstg2.img, which has none of the X stuff in it. Unfortunately, X configuration is very tied in with running X stuff at the moment. Yes, I've been able to install workstations with X enabled via the kicstart for all the previous versions of Fedora Core (I started working with this basic configuration back in the Redhat 7.3 days). Yes it worked in core 4 just fine, I noticed in 4 that it starts annaconda, probes for the correct video card and monitor, and then says graphical installation not avalible switching to text (or something to that effect) In 5 it simply starts annaconda and the says graphical installation not availible switching to text. Hope that helps to shed some light on the situation. Joe After some additional probing, I've decided I don't think this is fixable as an update image. It requires first off the rhpxl library in minstg2.img, plus some files from the xserver package, plus some additional shared libraries that will need to be compiled for each platform. I also had to modify a couple files in anaconda. Having done all this, I was able to fix it in Rawhide (I believe) but not in FC5. Even after that stuff, there's something else missing in minstg2.img that's not making it work for FC5. It's kind of crappy, but please just test this in Rawhide and let me know if it's working for you. For FC5, you can either use a graphical install, use a different install method, or comment out the xconfig line in your kickstart file. How Do I test this in Rawhide, Where do I go to download it? Joe Joe - it may be best for you to just wait until FCtest1 at this point, as that will be much more like a real release instead of trying to use the development tree.
The Gentoo.org Redesign, Part 1

A site reborn

An unruly horde

Fellow software developer, may I ask you a question? Why is it that although many of us are intimately familiar with Web technologies such as HTML, CGI, Perl, Python, Java technology, and XML, our very own Web sites -- the ones devoted to our precious development projects -- look like they were thrown together by an unruly horde of hyperactive 12-year-olds? Why, oh why, is this so? Could it be because most of the time, we've left our Web site out to rot while we squander our precious time hacking away on our free software projects?

The answer, at least in my case, is a most definite "Yes." When I'm not writing articles for IBM developerWorks or being a new dad, I'm feverishly working on the next release of Gentoo Linux, along with my skilled team of volunteers. And, yes, Gentoo Linux has its own Web site (see Resources). As of right now (March 2001), our Web site isn't that special; that's because we don't spend much time working on it because we're generally engrossed in improving Gentoo Linux itself. Sure, our site does have several admittedly cute logos that I whipped up using Xara X (see Resources), but when you look past the eye candy, our site leaves a lot to be desired. Maybe yours does too. If so, I have one thing to say to you -- welcome to the club.

In our case, our Web site dilemma exists because our project has been growing, and our Web site hasn't. Now that Gentoo Linux is approaching the 1.0 release (when it'll be officially ready for non-developers) and is growing in popularity, we need to start seriously looking at how our Web site can better serve its users.
Here's a snapshot of:

As you can see, we have all the bare essentials -- a description of Gentoo Linux, a features list, a daily Changelog (automatically updated thanks to Python), and a bunch of important links (to the download sites, to our mailing list sign-up pages, and to cvsWeb). We also have links to three documentation resources -- the Gentoo Linux Install Guide and Development Guides, and Christian Zander's NVIDIA Troubleshooting Guide.

However, while the site seems O.K., we're missing a lot of things. The most obvious is documentation -- our installation and development guides need a lot of work. And then we need to add an FAQ, new links, new user information... the list is endless.

Content vs. display

And now we come to our second problem. Right now, all of our work is done in raw HTML; I hack away at the index.html file until it looks O.K. Even worse, our Web documentation is written in raw HTML. This isn't a good thing from a development perspective because our raw content (consisting of paragraphs, sections, chapters) is garbled together with a bunch of display-related HTML tags. This, of course, makes it difficult to change both the content and the look of our site. While this approach has worked so far, it is bound to cause problems as our site continues to grow. Clearly, we need to be using better technologies behind the scenes. Instead of using HTML directly, we need to start using things like XML, XSLT, and Python. The goal is to automate as much as possible so that we can add and expand our site with ease. If we do our job well, even major future changes to our site should be relatively painless.

A strategy!

It was clear that we had a lot of work ahead of us. In fact, there was so much to be done that I didn't know where to begin. Just as I was trying to sort out everything in my head, I came across Laura Wonnacott's "Site Savvy" InfoWorld column (see Resources).
In it, she explained the concept of "user-centric" design -- how to improve a Web site while keeping the needs of your target audience (in this case, Gentoo Linux users and developers) in focus. Reading the article and taking a look at the "Handbook of User-Centered Design" link from the article helped me to formulate a strategy -- an action plan -- for the redesign:

1. First, clearly define the official goal of the Web site -- in writing. What's it there for, and what's it supposed to do?
2. Identify the different categories of users who will be using your site -- your target audience. Rank them in order of priority: Which ones are most important to you?
3. Set up a system for getting feedback from your target audience, so they can let you know what you're doing right and wrong.
4. Evaluate the feedback, and use it to determine what parts of the site need to be improved or redesigned. Tackle high-priority sections first.
5. Once you've selected the part of the site to improve, get to work! During your implementation, make sure that the content and design of the new section caters specifically to the needs of your target audience and fixes all known deficiencies.
6. When the section redesign is complete, add it to your live site, even if it has a look that's markedly different from your current site. This way, your users can begin benefitting from the newly redesigned section immediately. If there's a problem with the redesign, you'll get user feedback more quickly. Finally, making incremental improvements to your site (rather than revamping the whole site and then rolling it out all at once -- surprise!) will help prevent your users from feeling alienated by your (possibly dramatic) site changes.
7. After completing step 6, jump to step 4 and repeat.

The mission statement

I was happy to discover that we already had step 3 in place.
We had received several e-mail suggestions from visitors to the site, and our developer mailing list also served as a way of exchanging suggestions and comments. However, I had never really completed steps 1 or 2. While the answers may seem obvious, I did find it helpful to actually sit down and write out our mission statement:

gentoo.org exists to assist those who use and develop for Gentoo Linux by providing relevant, up-to-date information about Gentoo Linux and Linux in general, focusing on topics related to Gentoo Linux installation, use, administration, and development. As the central hub for all things Gentoo, the site should also feature important news relevant to Gentoo Linux users and developers. In addition to catering to Gentoo Linux users and developers, gentoo.org has the secondary purpose of meeting the needs of potential Gentoo Linux users, providing the information they need to decide whether Gentoo Linux is right for them.

The target audience

So far, so good. Now for step 2 -- defining our target audience:

gentoo.org has three target audiences -- Gentoo Linux developers, users, and potential users. While no one group is absolutely a higher priority than another, right now the needs of Gentoo Linux developers are our highest priority, followed by Gentoo Linux users, and then potential users. This is because Gentoo Linux is currently in a prerelease state. When Gentoo Linux reaches version 1.0, Gentoo Linux users and potential users will also become a priority.

O.K., now it's time to evaluate the suggestions and comments we've collected:

Over the past few months, we've received a number of suggestions from Web site visitors. Overwhelmingly, people are requesting better documentation -- for both developers and users. Several developers have asked if we could create a mailing list that would be devoted exclusively to describing CVS commits. Interestingly, we've also received a couple of e-mails asking whether Gentoo Linux is a commercial or free product.
I'm guessing that because our main logo is inscribed with the name "Gentoo Technologies, Inc." (our legal corporation name), people assume that we have a commercial focus. Modifying our logo so that it reads "Gentoo Linux" and adding a small opening paragraph to the main page explaining that we are a free software project should help.

The improvement list

O.K., now let's turn these suggestions into a list of possible improvements:

- Revamp main page
  - Implementation: update logo and add free software blurb
  - Goal: to clearly state that we are a free software project
  - Target group: potential users
  - Difficulty: medium
- Improve basic user documentation
  - Implementation: new XML/XSLT system, verbose documentation
  - Goal: to make it easier for users to install Gentoo Linux
  - Target group: new users
  - Difficulty: medium
- Improve/create developer documentation
  - Implementation: new XML/XSLT system, CVS guide, dev guide, Portage guide
  - Goal: to help our developers to do a great job
  - Target group: developers
  - Difficulty: hard
- Add a CVS mailing list
  - Implementation: use our existing mailman mailing list manager
  - Goal: to better inform our developers
  - Target group: developers
  - Difficulty: easy

A selection!

Two things leap out from the list, for different reasons. The first is the CVS mailing list -- this one is a no-brainer because it's so easy to implement. Often, it makes sense to implement the easiest changes first so that users can benefit from them right away. The second big thing that leaps out from the list is the need for developer documentation. This is a longer-term project that will require much more work. From my conversations with the other developers, we all appear to be in agreement that some kind of XML/XSL approach is the right solution.

The XML/XSL prototype

To help start the process, I developed a prototype XML syntax to be used for all our online documentation.
By using this XML syntax (called "guide"), our documentation will be clearly organized into paragraphs, sections, and chapters (using XML tags like <section>, <chapter>, etc.) while remaining free of any display-related tags. To create the HTML for display on our site, I created a prototype set of XSL transforms. By using an XSLT processor such as Sablotron, our guide XML files can be converted into HTML as follows:

  devguide.xml + guide.xsl ---XSLT processor---> devguide.html

The great thing about this XML/XSLT approach is that it separates our raw content (XML) from the display-related information contained in the guide.xsl (XSLT) file. If we ever need to update the look of our Web pages, we simply modify the guide.xsl file and run all our XML through the XSLT processor (Sablotron), creating updated HTML pages. Or, if we need to add a few chapters to the development guide, we can modify devguide.xml. Once we're done, we then run the XML through Sablotron, which then spits out a fully-formatted devguide.html file with several added chapters. Think of XML as the content and XSLT as the display-related formatting macros.

While our entire team is convinced that XML/XSLT is the way to go, we haven't yet agreed upon an official XML syntax. Achim, our development lead, suggested that we use docbook instead of rolling our own XML syntax. However, the prototype guide XML format has helped to start the decision-making process. Because we developers are going to be the ones using the XML/XSL on a daily basis, it's important to choose a solution that we're comfortable with and meets all of our needs. By my next article, I should have a working XML/XSL doc system to show off to you.

Technology demo: pytext

For the most part, our current Web site isn't using any new or super-cool technologies that are worth mentioning. However, there's one notable exception -- our tiny pytext embedded Python interpreter.
Like many of you, I'm a huge Python fan and much prefer it over other scripting languages, so when it came time to add some dynamic content to our Web site, I naturally wanted to use Python. And, as you probably know, when coding dynamic HTML content, it's usually much more convenient to embed the language commands inside the HTML, rather than the other way around. Thus, the need for an embedded Python interpreter that can take a document sprinkled with embedded code blocks and execute them. The lines between a line beginning with <!--code and a closing --> line are appended to a string called mycode. Pytext then executes the mycode string using the built-in exec() function, effectively creating an embedded Python interpreter.

There's something really beautiful about this particular implementation -- we call exec() in such a way that all modifications to the global and local namespaces are saved. This makes it possible to import a module or define a variable in one embedded block, and then access this previously-created object in a later block, as this example clearly demonstrates:

  <!--code
  import os
  foo=23
  -->
  Hello
  <!--code
  print foo
  if os.path.exists("/tmp/mytmpfile"):
      print "it exists"
  else:
      print "I don't see it"
  -->

Handy, eh? pytext serves as an excellent demonstration of the power of Python, and is an extremely useful tool for Python fans. For our current site, we call pytext from a cron job, using it to periodically generate the HTML code for our main page Changelog:

  $ pytext index.ehtml > index.html

That's it for now; I'll see you next time when we'll take a look at the first stage of the redesign!

Read the next article in this series: The Gentoo.org Redesign, Part 2.
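The scanning-and-exec loop described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the original pytext source: the function name and the exact delimiter handling are assumptions, and the sketch uses modern Python 3 rather than the Python of 2001. The key detail it demonstrates is that passing the same dictionary to every exec() call preserves the namespace, so a name defined in an early block is visible in every later block.

```python
def pytext(lines, namespace=None):
    """Emit plain lines as-is; run <!--code ... --> blocks as Python.

    All blocks share one namespace dict, so a variable defined in an
    early block is visible to every later block.
    """
    if namespace is None:
        namespace = {}
    mycode = []
    in_code = False
    for line in lines:
        if line.strip() == "<!--code":
            in_code = True
            mycode = []
        elif in_code and line.strip() == "-->":
            in_code = False
            exec("\n".join(mycode), namespace)  # same dict every call
        elif in_code:
            mycode.append(line)
        else:
            print(line)

doc = """\
<!--code
foo = 23
-->
Hello
<!--code
print(foo)
-->"""

pytext(doc.splitlines())  # prints "Hello", then "23"
```

The first block defines foo without producing output, the plain "Hello" line passes through untouched, and the second block can still see foo because both exec() calls share the same namespace dictionary.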
10 March 2008 12:27 [Source: ICIS news] By Linda Naylor

LONDON (ICIS news)--Polyethylene (PE) players are noting a slowdown in activity amid offers of cheaper imported material in March, leading to some buyers' expectations of lower prices for the first time in many months, market sources said on Monday.

"This is the first time for months that we have been able to realistically expect lower prices," said one large low density polyethylene (LDPE) buyer, who reported just a €10/tonne ($15/tonne) drop in its March LDPE business, still leaving the gross €1,345/tonne FD (free delivered) NWE (northwest Europe) price level only €10/tonne from LDPE's record high.

LDPE spot was declining more rapidly, with most business reported at €1,220-1,250/tonne FD NWE. C4 linear low density polyethylene (LLDPE) was looking firmer than LDPE, and some rollovers were being settled for March. Much LDPE and LLDPE business is done on a retroactive basis and was not expected to be fully settled before the end of March.

Imported product was expected to impact the high density polyethylene (HDPE) market more significantly than the LDPE and LLDPE markets, however, although here too some large buyers envisaged little more than a €10/tonne drop. The weak dollar meant that spot prices were under downward pressure, and as prices dropped, buyers were more hesitant to move, as prices could fall further.

Imported HDPE blowmoulding was widely offered at €1,200/tonne FD NWE, with some sellers of imported blowmoulding unable to shift even at this level. "We are lucky if we can sell at €1,160/tonne FD NWE. It is often not even a case of price, they just do not want to buy," said one trader with imported blowmoulding in stock.

Gross prices from European producers had been settled €10/tonne up in February, in the mid-€1,300s/tonne FD NWE, but this was now under threat of slipping in March. "The most we can do is defend a rollover in the HDPE market this month," said a European producer.
Another contributory factor to the current PE unease was the short-term future of ethylene monomer, with spot product reported to be on its way. For the time being, ethylene suppliers were talking of a rollover to small increase into the second quarter 2008, based on record high oil prices and a so far balanced supply and demand scenario. Any reduction in second-quarter ethylene would find its way into PE, however, and PE buyers were not ready to buy one kilo more than necessary with even the sniff of lower prices ahead.

"What is also important in the PE market is the general economy."

Despite the widespread expectations of lower prices in the PE market, few players expected anything more than a downward correction, however. "With oil at $100/bbl we cannot really expect prices to crash," said a large PE buyer. "Okay, the dollar goes some way in mitigating it, but it is still very high."

March discussions were expected to be protracted. PE prices have risen steadily alongside higher oil prices for several years, with only brief periods of slight erosion. Most PE prices have doubled since their 2003 level.
An important aspect of test-driven development: What's my motivation?

A month ago, Fagner Brack published "This Is The One Thing Nobody Told You About TDD." As someone who is still relatively new on the test-driven development (TDD) bandwagon, I definitely want to know what is that one thing nobody has told me about this programming philosophy I'm trying to adopt.

"The tests give you hints of what to do next. They drive you to build the code." — Fagner Brack

Well, maybe I was luckier than most to have had an instructor, specifically Samah Majadla, who explained this to me. She didn't use these exact words, but I think she got the point across to my classmates and me.

The tests motivate the production code. You write a simple test. Then you write the bare minimum necessary to make that test pass. Then you write a slightly more complicated test. And then write what's needed to pass that test. And so on and so forth. At each step of the process, to the best of your ability, don't let the production code be more complex than strictly required to pass the tests. Because... the test drives the source.

The refactoring step of the TDD cycle gets very little attention. If there really is something about TDD that no one has ever told me, it's likelier to be in the refactoring step. But the thing about refactoring, though, is that it's about simplifying the production code. If during refactoring you think of an improvement that's not about simplifying what you wrote to pass the test, don't make that change yet: write a test first.

I think the greatest common divisor (GCD) function provides a nice, simple and to-the-point example of how to do TDD. Quick math refresher: the greatest common divisor of two integers is the greatest positive integer that divides them both. For instance, gcd(−91, −98) = 7, because both −91 and −98 are divisible by 7.
Also I think it’s better to demonstrate TDD in a programming language with a commonly preferred testing framework, like Java with JUnit, or C# with xUnit. Because of the naming convention, you might think that JsUnit is the commonly preferred testing framework for JavaScript. But that’s a can of worms I better leave to someone else to tackle. I like to start by writing a stub that will obviously fail the test. It doesn’t matter much if you’re writing your source code in a plain text editor and compiling on the command line, but in an integrated development environment (IDE) like NetBeans or IntelliJ, stubs help the IDE give you more pertinent errors and warnings. So here’s our GCD stub that should obviously fail the first test: public static int gcd(int a, int b) { return -1; } For the first test, you might already be thinking of a pair of distinct numbers, like −27 and 18. But we can go simpler than that for our first test: how about gcd(n, n), where n is a positive integer? Then obviously gcd(n, n) = n. So our first test then looks like this: @Test public void testGCDOnSameNumber() { int expected = 10; int actual = gcd(expected, expected); assertEquals(expected, actual); } At this point, there are two valid ways to amend gcd() to pass the test. One way is: public static int gcd(int a, int b) { return a; } The other way is to return b instead of a. Either way, the test should pass now. At this point it might be too much to test if a == b, and it would certainly be too much to write a full-fledged implementation of an algorithm that derives lists of divisors from the prime factorizations of the integers and is equipped to handle the special case gcd(Integer.MIN_VALUE, Integer.MIN_VALUE) with a very elegant and informative custom exception. We’ll get to that, but only after writing the relevant tests. No refactoring is needed at this point, because all we did was substitute one character for two characters on a function that was just a simple stub. 
Can we do a test for two distinct numbers now? We could, but I think there is one other simpler thing we can do first: test for gcd(n, n) with n a negative integer. Let's not use the MIN_VALUE of our numeric data type, though.

    @Test
    public void testGCDOnSameNegativeNumber() {
        int expected = 10;
        int actual = gcd(-expected, -expected);
        assertEquals(expected, actual);
    }

This should fail because it should give us −10 instead of +10. The fix to pass is easy.

    public static int gcd(int a, int b) {
        return Math.abs(a);
    }

Or we could also use Math.abs(b), either way it's enough to pass the tests so far. Okay, now let's write a test with distinct numbers. At this point, it should be only positive numbers though, we should probably hold off on a test with one positive and one negative number.

    @Test
    public void testGCD10501125() {
        int expected = 75;
        int actual = gcd(1050, 1125);
        assertEquals(expected, actual);
    }

Clearly this test will fail because the GCD is neither of the input numbers. Now can we write the proper GCD implementation? We could, but it's actually still quite easy to be a smart-aleck with this one. Notice that the GCD in this case is the difference of a and b. So...

    public static int gcd(int a, int b) {
        if (a == b) {
            return Math.abs(a);
        } else {
            return Math.abs(a - b);
        }
    }

At this point you're probably ready to actually write an actual GCD function, so here's a test that will hopefully require that.
    @Test
    public void testGCDOnSortOfRandomNumbers() {
        int psRanNum = (int) Math.floor(Math.random() * 200 + 1);
        psRanNum = 6 * psRanNum + 1; // Ensure it's odd and not divisible by 3
        int currPowRan = psRanNum;
        int threshold = Integer.MAX_VALUE / 16;
        int expResult, result, holderA, holderB;
        int curr3Pow = 3;
        while ((currPowRan > 0) && (currPowRan < threshold)) {
            expResult = 1;
            result = gcd(currPowRan, currPowRan + 1);
            assertEquals(expResult, result);
            expResult = psRanNum;
            result = gcd(currPowRan, psRanNum);
            assertEquals(expResult, result);
            expResult *= 2;
            holderA = 2 * currPowRan;
            holderB = 2 * curr3Pow * psRanNum;
            result = gcd(holderA, holderB);
            assertEquals(expResult, result);
            currPowRan *= psRanNum;
            curr3Pow *= 3;
        }
    }

We're trying to steer well clear of numbers above 2³⁰, so as to not deal with overflow issues yet. I suppose it's still possible at this point to write an implementation that will pass this test but fail other simple tests with likely inputs, like gcd(336, −864) or gcd(245, 343). But now the complexity of detecting what the wanted result might be vastly outweighs the complexity of actually writing a proper GCD implementation that works for all but a few unlikely edge cases.

There are at least three distinct algorithms we could use to find the GCD of two integers. If you know one I don't mention, please let me know about it in the comments. One way is to obtain the prime factorizations of both numbers and multiply the factors that they have in common. For example, 1050 = 2 × 3 × 5² × 7 and 1125 = 3² × 5³, so 3 × 5² = 75, and indeed gcd(1050, 1125) = 75. The problem with this is that we would need to either write a new prime factorization function (which we'll have to test and develop) or find one from a third party library. We also need something to find the elements in common of two sets (what mathematicians call "intersection"). With this algorithm, we're going to get sidetracked.
To me the most obvious algorithm is to list the positive divisors of each number and then find the largest one of them. Using the same example as before, we see that the positive divisors of 1050 are 1, 2, 3, 5, 6, 7, 10, 14, 15, 21, 25, 30, 35, 42, 50, 70, 75, 105, 150, 175, 210, 350, 525 and 1050 itself. The positive divisors of 1125 are 1, 3, 5, 9, 15, 25, 45, 75, 125, 225, 375 and 1125 itself. The numbers in common to both sets are 1, 3, 5, 15, 25, 75, of which 75 is obviously the largest number, and thus the greatest common divisor. In Scala this would be very easy to implement. Here is the example on the Scala REPL:

    scala> (1 to 1050).filter(1050 % _ == 0)
    res0: scala.collection.immutable.IndexedSeq[Int] = Vector(1, 2, 3, 5, 6, 7, 10, 14, 15, 21, 25, 30, 35, 42, 50, 70, 75, 105, 150, 175, 210, 350, 525, 1050)

    scala> (1 to 1125).filter(1125 % _ == 0)
    res1: scala.collection.immutable.IndexedSeq[Int] = Vector(1, 3, 5, 9, 15, 25, 45, 75, 125, 225, 375, 1125)

    scala> res0.intersect(res1)
    res2: scala.collection.immutable.IndexedSeq[Int] = Vector(1, 3, 5, 15, 25, 75)

This might not be the most elegant way to do this in Scala, but for some quick and dirty experiments on the REPL, it's just fine. In Java, we'd have to write these things from scratch, or find a third party library. Though in Scala it would still be good form to write tests for a list of divisors function.

Regardless, both of these algorithms have the potential to sidetrack us from the objective of writing the bare minimum implementation that will pass all the tests we've written so far. In this GCD scenario, our motivation is to write the simplest implementation that will meet the requirements expressed by the tests we have written. No more, no less. So the Euclidean GCD algorithm here is the simplest and most direct way to get at our goal. However, let us take a brief moment to reflect on the mathematical principles at work here.
To use the Euclidean GCD algorithm, we need a Euclidean function f(n) such that f(n) is always a positive integer, except for f(0) = 0, and such that whenever d is a divisor of n, the inequality f(d) ≤ f(n) is true. Almost always the absolute value function is chosen for this purpose, so much so that it gets taken for granted. And since Java comes with Math.abs() standard, we don't have to reinvent the wheel on this one.

Then, to figure out gcd(a, b), if f(b) ≤ f(a) (swap a and b if necessary), we need to find numbers q and r such that a = qb + r and f(r) < f(b) (it must be less than, not less than or equal to). If r = 0, we're done, the gcd is b. If not, then we reset a to b and b to r, and go through the process of finding numbers q and r such that a = qb + r and f(r) < f(b) all over again.

To use the example gcd(1050, 1125) once more, we see that 1125 = 1 × 1050 + 75. Since our remainder is not 0, we go on to 1050 = 14 × 75 + 0. The remainder is now 0 and the GCD is 75. I think this is enough information for you to implement a proper GCD function that passes the tests presented here so far, as well as other simple tests that you may devise.

It is a well known fact that the Euclidean GCD algorithm is the least efficient for consecutive Fibonacci numbers. Work out gcd(−144, 89) on paper, for example. By prime factorization it is much quicker to confirm your hunch that these two numbers are coprime. However, computers are so fast that you won't notice any performance penalty even for gcd(1134903170, 1836311903). You'd have to go well above the upper limits of int and long to notice any performance degradation. If you do need to test for performance, I'd read up on timeouts in JUnit. The basic idea is that you allot a certain number of milliseconds for the test to run, and if it's not done in that time, the test fails.
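The article walks right up to the Euclidean implementation without showing one, so here is a hedged sketch in Java. It is not the author's solution: doing the arithmetic in long and throwing ArithmeticException when the result does not fit an int is just one reasonable design choice, picked because it also handles the Integer.MIN_VALUE edge case raised earlier. The class name is invented for illustration.

```java
public class GcdExample {

    // Euclidean GCD. Absolute values are taken in long arithmetic so
    // that Integer.MIN_VALUE does not overflow before the loop runs.
    public static int gcd(int a, int b) {
        long x = Math.abs((long) a);
        long y = Math.abs((long) b);
        while (y != 0) {     // invariant: gcd(x, y) never changes
            long r = x % y;  // a = qb + r with f(r) < f(b)
            x = y;
            y = r;
        }
        if (x > Integer.MAX_VALUE) {
            // Reachable only when the true GCD is 2^31, one past
            // Integer.MAX_VALUE, e.g. gcd(Integer.MIN_VALUE, Integer.MIN_VALUE).
            throw new ArithmeticException("GCD of " + a + " and " + b
                    + " does not fit in an int");
        }
        return (int) x;
    }

    public static void main(String[] args) {
        System.out.println(gcd(1050, 1125)); // 75
        System.out.println(gcd(-144, 89));   // 1
        System.out.println(gcd(-91, -98));   // 7
    }
}
```

Tracing gcd(1050, 1125) reproduces the division steps above: the first remainder is 75, the second is 0, and 75 is returned.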
In Scala, we could implement gcd() in such a way that it can take a custom Euclidean function at runtime, but uses Math.abs() as the default Euclidean function if no Euclidean function is specified. It would go something like this:

    def gcd(a: Int, b: Int, eucFn: Int => Int = Math.abs _): Int = {
      // Euclidean algorithm goes here
      currB // Or currA? Don't need "return" either way
    }

Our TDD process would be the same as with Java, aside from superficial differences, and we could continue to use JUnit, at least for the time being. Thanks to the default value on the eucFn parameter (assuming I have the right syntax), we wouldn't need to go back and add ", Math.abs" to the gcd() calls in the previous tests. The crucial difference would be in testing that gcd() actually uses a custom function that we pass it during a test. The way we would do that might be to pass an invalid Euclidean function (like one that returns negative integers) and seeing if it throws an exception or if it just uses the absolute value function and returns the right result anyway. At that point, our draft of gcd() should be such that it uses Math.abs() regardless of what function we pass it. Then, once we have a test fail because our custom function is ignored, we go ahead and rework gcd() so that it does use the custom function. Aside from these functional considerations, the TDD process in Scala is hardly different from the TDD process in Java.

Earlier I mentioned the case gcd(Integer.MIN_VALUE, Integer.MIN_VALUE), and that maybe this should cause a custom exception. The GCD in that example is of course Integer.MAX_VALUE + 1, which obviously can't be properly represented in a signed 32-bit int. We'd have much the same issue with Long.MIN_VALUE if we were using long instead of int, but at the signed 64-bit threshold instead. But why write a custom exception (which we'll have to take through the TDD process as well) when we can just use an existing exception in the standard Java library?
I think ArithmeticException makes sense for this purpose.

    @Test(expected = ArithmeticException.class)
    public void testGCDIntMinVal() {
        gcd(Integer.MIN_VALUE, Integer.MIN_VALUE);
    }

If you prefer the "old school" try-catch with fail, you can also have the test report the erroneous result (probably 0) if the exception is not triggered. Once you have this test fail because gcd() did not throw new ArithmeticException for this edge case, you can go ahead and write where that actually happens. Depending on how you actually implement gcd(), it could be the case that ArithmeticException arises for this edge case without your having to actually write a throw statement.

By writing the test first, you can then see if it really is necessary for you to make any changes to what you have so far. It might be, it might not be, the results of the test will tell you. That's the point of TDD: write the test, then, if necessary, write what's needed to pass the test. Don't write more than what is necessary, let the tests show you what is needed and what isn't. I hope someone has already told you this. I also hope my exposition here has made it clearer in your mind.
https://alonso-delarte.medium.com/an-important-aspect-of-test-driven-development-whats-my-motivation-4183631a1d44?source=post_internal_links---------2----------------------------
Your First Python Script in Grasshopper

This manual is for Grasshopper users who would like to create their own custom scripts using Grasshopper for Rhino.

Introduction

Script components work as an integrated part of GH. They can get input from and produce output to other standard GH components. They can be used to create specialized functionality that opens up tremendous potential beyond the standard components. But there is a cool twist… the GhPython component supports rhinoscriptsyntax functions. The rhinoscriptsyntax functions can be set to generate geometry inside of Grasshopper that does not live in the Rhino document. We are using a concept called "duck typing" to swap the document that the rhinoscriptsyntax functions target: from the Rhino document to a Grasshopper document. This means that the following script:

import rhinoscriptsyntax as rs

for x in range(10):
    rs.AddPoint((x, 0, 0))

will add 10 points to the Rhino document when run from Rhino's "_RunPythonScript" or "_EditPythonScript" commands. The same script will add 10 points to a Grasshopper document that can be passed on to other components when run inside a GhPython component.

Grasshopper supports multiple .NET scripting languages such as VB and C# to help develop custom code. There is also the Python component. Python supports multiple programming paradigms; it is often used as a scripting language, but is also used in a wide range of advanced programming contexts. The Rhino.Python website directory is a great place to get more information about Python in Rhino in general.

The GhPython component brings:

- Rhinoscript syntax to Grasshopper
- a Python parallel to the C# and VB.NET scripting components
- a dynamic UI with control over the number of inputs and outputs
- ability to reference .NET libraries and a huge number of Python packages
- integration with the Python editor included in Rhino

Rhino allows access to its algorithms through the Rhino SDK (software development kit).
Rhino is written in C++ but it also provides a couple of SDKs for scripting and programming languages. The most basic SDK for Python is RhinoScriptSyntax. For more direct access to Rhino functions, more experienced programmers may choose to use the RhinoCommon SDK. There is extensive documentation about RhinoScriptSyntax and Python on the Developer site. For more details about RhinoCommon, please refer to the McNeel RhinoCommon Developer site.

Where to find the script components

To add the Python script component to the canvas, drag and drop the component from the "Script" panel under the "Maths" tab. The component first shows orange, indicating a warning. This is because there is no code typed yet inside it. If you click on the bubble at the top-right corner, you can see what the warning is. By default the component has inputs (x and y) and two outputs:

- out: Console output, including error messages and whatever the script prints.
- a: Returned output of type object.

The HelloWorld Script

Let's start with the classic "Hello World!" example for our first script.

- Drag a Python component onto the Grasshopper canvas.
- Add a Boolean component (Params toolbar) and connect it to the x input.
- Add a Panel component and connect it to the a output.

The "Hello World!" script will check for a TRUE in x to display the Hello World message through the 'a' output. Double-click on the component to bring up the Python editor. Type in the following Python code:

if x:
    a = "Hello World!"

Click on the OK button of the editor to get back to the Grasshopper canvas. If you set the Boolean toggle to true, the Panel should display Hello World!.

Congratulations! You have completed your first Python script in Grasshopper.

To change the script, double-click on the Python component. Add another line of code:

if x:
    a = "Hello World!"
else:
    a = "Nothing to say."

Now you can test that using the toggle.
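The component body can also be exercised outside Grasshopper. A small sketch of my own (not part of the official guide) that mirrors the final script as a plain function:

```python
def hello(x):
    # Mirrors the GhPython component body: x plays the role of the Boolean
    # input, and the return value is what the script assigns to 'a'.
    if x:
        return "Hello World!"
    else:
        return "Nothing to say."

# Outside Grasshopper there is no x input wired up, so supply it directly:
a = hello(True)
```

Trying the logic this way first is an easy way to catch syntax errors before pasting the body into the component.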
https://developer.rhino3d.com/guides/rhinopython/your-first-python-script-in-grasshopper/
As I've been working with Web Components, I've been trying to find a compatible workflow that is easy to use and efficient. I'm primarily a React developer, which is the only framework that hasn't fully integrated web components. This means my usual tools of the trade, such as Gatsby or NextJS, don't immediately work well with web components. And I wasn't too hyped about digging into another framework's docs, like 11ty, which is a more generic SSG that supports web components.

Luckily, Preact supports web components fully, while mirroring React's functionality. And Gatsby has a plugin that swaps React with Preact. So I thought I'd give Gatsby another shot for web components, this time in Preact mode! If you're interested in the source code, you can clone it off Github. Let's dig into the process!

Web Components are great, but they're just a web standard — they don't come with anything out of the box like app routing. Frameworks like Gatsby allow you to circumvent this by leveraging the framework's routing. In Gatsby's case, we get to benefit from reach-router, combined with Gatsby's Webpack configuration. This allows us to create pages from Preact component .js files in the pages directory. Much better than manually setting up routing!

Like I mentioned before, I prefer the Gatsby workflow and know the framework well. Rather than learn a new SSG, or one that doesn't come with all of Gatsby's features (like GraphQL), I'd like to leverage what I know. I also have plenty of Gatsby templates, components, and snippets that would probably work well with Preact. Gatsby also has an entire ecosystem of plugins, templates, and documentation that can be used once inside. Although many are React-based, others like image handling, offline caching, and PWA setup are universal and essential.

Preact has a smaller footprint than React, which means a smaller overall build, and much faster on-load stats such as TTFB (time to first byte).
Many developers have shown significant performance improvements by switching over their existing apps using preact and preact-compat. And like I mentioned at the start, it's an excellent way to incorporate web components into your workflow. You can even leverage JSX and its ability to assign arrays and objects to properties:

<Layout>
  <wired-link elevation>
    …
  </wired-link>
  <wired-input placeholder="Your name" ref={textInput} />
  {/* Pass down functions into props */}
  <wired-button onClick={handleClick}>Submit</wired-button>
  {/* Can send array and user without any . or $ syntax */}
  <x-card data={postArray} user={userObject} />
</Layout>

This is much better than the React alternative of assigning props through the ref:

import React, { Component } from 'react';

// Utility that helps assign data using ref
// @see: StencilJS React components
import { wc } from './utils/webcomponent';

class SomeComponent extends Component {
  render() {
    const postArray = []
    const userObject = {}

    return (
      <div>
        <x-card
          ref={wc(
            // Events
            {},
            // Props
            {
              data: postArray,
              user: userObject
            }
          )}
        />
      </div>
    );
  }
}

export default SomeComponent;

The process to make a Gatsby project with Preact was very simple:

gatsby new gatsby-preact-web-component-test
yarn add gatsby-plugin-preact preact preact-render-to-string
plugins: ['gatsby-plugin-preact']

This process requires you have NodeJS installed on your development machine. Yarn is optional and can be swapped out using NPM instead (npm i instead of yarn add). You can swap out Preact with React in the default Gatsby project with no problem. Even their Typescript page works fine. I need to stress test this (like adding it to my personal blog) but otherwise it seems good!

Gatsby will build out the Web Components as-is. It doesn't parse the Web Components down and display any shadow DOM in the production build HTML. The Web Components JS file should initialize the web components on load (much like React/Vue/etc without SSR).
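Coming back to the ref example: the wc utility imported from './utils/webcomponent' isn't shown in the post. A minimal reconstruction of what such a helper could look like (my own sketch; its exact behavior is an assumption inferred from the call site):

```javascript
// Builds a ref callback that wires up events and assigns rich props directly
// on the custom element. React (before v19) stringifies unknown JSX attributes,
// so arrays and objects have to be set as element *properties* instead.
function wc(events = {}, props = {}) {
  return (el) => {
    if (!el) return; // React invokes the ref with null on unmount
    for (const [name, handler] of Object.entries(events)) {
      el.addEventListener(name, handler);
    }
    Object.assign(el, props); // properties, not attributes
  };
}
```

Because the helper only touches addEventListener and plain property assignment, the same callback works for any custom element.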
This is why it's key to leave vital information inside slots, instead of props/attributes, so that non-JS spiders and bots (or a user without JS enabled) can quickly find the key data they need. Techniques like taking an array prop/attribute and mapping out elements inside the web component (passing the list data to <your-list> as a prop) leave content unavailable unless it is parsed with JS. Instead, lean towards a DOM mirroring implementation (<your-list><list-item>). This way the content still shows in the raw HTML, which is a major improvement for things like SEO.

Just import the web component library as a <script> in the app wrapper/layout using react-helmet. Or, best practice, import inside gatsby-browser. Make sure to include polyfills. Here's an example of using WiredJS web components inside a <Layout> wrapper (trimmed down for size):

import React from "react"
import { Helmet } from "react-helmet"

const Layout = ({ children }) => {
  return (
    <>
      {/** Use react-helmet to place <script> in <head> **/}
      <Helmet>
        <script src=""></script>
        <script async src=""></script>
      </Helmet>
      <main>{children}</main>
    </>
  )
}

export default Layout

Then just use the web components anywhere inside the app!
I've also experimented with this process using Vue's Gridsome, if you're looking for a Vue alternative to Gatsby that can support Web Components. And since Vue itself has better compatibility for web components out of the box, you don't need to swap to "Prue" or something 😂. Learn Something New Everyday, Connect With The Best Developers!
https://hashnode.com/post/using-web-components-with-gatsby-and-preact-ckbqt47d4011en8s1vkyzs0tf
How to add a button in an editable grid

Hi all, I want to add a button in the last column of each row of an editable grid using the following code:

final Button buttonInGrid = new Button(i18n.resetButton(),
        new SelectionListener<ButtonEvent>() {
    public void componentSelected(ButtonEvent ce) {
        GWT.log("buttonInGrid is clicked...", null);
    }
});

final AdapterField adaptField = new AdapterField(buttonInGrid);

CellEditor editorBtn = new CellEditor(adaptField) {
    public Object preProcessValue(Object value) {
        return ((Button) adaptField.getWidget());
    }

    public Object postProcessValue(Object value) {
        return ((Button) adaptField.getWidget());
    }
};

But the buttons only appear when I click on their cells. I would like all buttons to appear when the editable grid is first loaded. How can I achieve it? Thanks in advance!

You set the button as an editor. Editors only appear when you want to edit that cell. The next GXT2 release (M2) will ship with a grid widget renderer.
https://www.sencha.com/forum/showthread.php?68205-How-to-add-a-button-in-an-editable-grid
How to pass connection parameters in the URL?

I am working with Odoo 8. I want to know how I can log in to the server by just passing connection parameters in the URL (db, login, password), i.e. a URL string for seamless login from another page to Odoo 8. Any idea how we can do that?

You can use erppeek to do that:

import erppeek

SERVER = yourserverurl
DATABASE = yourdatabasename
USERNAME = yourusername
PASSWORD = yourpassword

client = erppeek.Client(SERVER, DATABASE, USERNAME, PASSWORD)
https://www.odoo.com/forum/help-1/question/how-to-pass-connection-parameters-in-the-url-88948
KharkivPy #17
November 25th, 2017

by Roman Podoliaka, Software Engineer at DataRobot
twitter: @rpodoliaka
slides:

Create an algorithm (also called a model) to classify whether images contain either a dog or a cat. Kaggle competition:

There are much better algorithms for this task, but we'll stick to logistic regression for the sake of simplicity.

This task is an example of:

1) a supervised learning problem (we are given training examples with "correct" answers, as opposed to unlabeled examples in unsupervised learning)

2) a binary classification problem (the output is a categorical value denoting one of two classes - 0 or 1, as opposed to multiclass classification, where there are more than two classes, or regression, where the output is a real number)

Let a column vector $ x = [x_1, x_2, \ldots, x_n] $ represent a single training example, where $ x_1, x_2, \ldots, x_n $ are values of its features. For the task of image classification it's very common to treat pixels of a given image as its features. If we were to build a machine learning model for medical diagnosis, we would need to come up with a list of features that would be suitable for that task (e.g. age, weight, height of a person, whether they have been vaccinated, etc).

import concurrent.futures
import enum
import multiprocessing
import os
import random
import sys

import numpy as np
from PIL import Image


class Category(enum.Enum):
    dog = 0
    cat = 1


def image_to_example(path, width=64, height=64):
    filename = os.path.basename(path)

    # normalize the input image, so that we only work with images of the same size
    with Image.open(path) as img:
        resized = img.resize((width, height))

    # encoding of string labels: "dog" -> 0, "cat" -> 1
    y = Category[filename.split('.')[0]].value

    # RGB image is flattened into one long column vector of floats,
    # that denote color intensity
    x = np.array(resized, dtype=np.float64) \
        .reshape(width * height * 3, 1) / 256.
    return x, y, path

# preprocessing of a given example
x, y, _ = image_to_example('train/cat.1.jpg')

# true label
y
1

# normalized example
x
array([[ 0.15625 ],
       [ 0.171875 ],
       [ 0.16796875],
       ...,
       [ 0.140625 ],
       [ 0.08984375],
       [ 0.06640625]])

# feature-vector dimensions
x.shape
(12288, 1)

# restored image
plot.imshow(x.reshape(64, 64, 3))
<matplotlib.image.AxesImage at 0x10a5008d0>

# load and preprocess images in parallel
def load_examples(path, width=64, height=64):
    concurrency = multiprocessing.cpu_count()

    with concurrent.futures.ThreadPoolExecutor(concurrency) as executor:
        images_futures = [
            executor.submit(
                image_to_example,
                os.path.join(path, name),
                width, height
            )
            for name in os.listdir(path)
        ]

        return [
            i.result()
            for i in concurrent.futures.as_completed(images_futures)
        ]

$$ z = w_1 x_1 + w_2 x_2 + \ldots + w_n x_n + b $$

$$ \hat{y} = \frac{1}{1 + e^{-z}} $$

where $ \hat{y} $ is the model's predicted probability that the example belongs to class 1.

z = np.linspace(-10, 10, 100)
plot.xlabel('$ z $'), plot.ylabel('$ \hat{y} $')
plot.plot(z, 1 / (1 + np.exp(-z)))
plot.grid(True)

Note, that $ z $ is essentially a result of matrix multiplication of two column vectors of weights $ w $ and features $ x $ plus bias $ b $:

$$ z = w_1 x_1 + w_2 x_2 + \ldots + w_n x_n + b = w^T x + b $$

Column vectors $ x $ of each example can also be stacked into a matrix (each computation can be performed in parallel):

$$ z = w^T \begin{bmatrix} x_{1}^{(1)} & x_{1}^{(2)} & \dots & x_{1}^{(m)} \\ x_{2}^{(1)} & x_{2}^{(2)} & \dots & x_{2}^{(m)} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n}^{(1)} & x_{n}^{(2)} & \dots & x_{n}^{(m)} \end{bmatrix} = \\ = [(w^T x^{(1)} + b), (w^T x^{(2)} + b), \ldots, (w^T x^{(m)} + b)] $$

v1 = np.random.rand(10000, 1)
v2 = np.random.rand(10000, 1)

# computation happens in Python interpreter
%timeit sum(a * b for a, b in zip(v1, v2))

# computation happens in numpy
%timeit np.dot(v1.T, v2)

12.2 ms ± 262 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.87 µs ± 76.3 ns per loop (mean ± std. dev.
of 7 runs, 100000 loops each)

def initialize_weights(n):
    # the simplest way is to initialize to zeros, but random
    # initialization is also ok
    w = np.zeros((n, 1))
    b = 0.0
    return w, b


def hypothesis(w, b, x):
    z = np.dot(w.T, x) + b
    return 1. / (1. + np.exp(-z))

The cost function is a measure of how close model predictions $ \hat{y} $ are to true labels $ y $:

$$ J(w, b) = - \frac{1}{m} \sum_{i=1}^{m} \big( y_i \ln{\hat{y_i}} + (1 - y_i) \ln{(1 - \hat{y_i})} \big) $$

There are many different cost functions for ML algorithms; this one is called logistic loss.

y_h = np.linspace(0.000001, 0.999999, 100)
plot.plot(y_h, np.log(y_h)), plot.plot(y_h, np.log(1 - y_h))
plot.legend(['$ ln(\hat{y}) $', '$ ln(1 - \hat{y}) $'])
plot.grid(True)

def cost(w, b, x, y):
    m = x.shape[1]
    y_h = hypothesis(w, b, x)
    return - np.sum(y * np.log(y_h) + (1 - y) * np.log(1 - y_h)) / m

Gradient descent is an iterative function optimization algorithm that takes steps proportional to the negative of the gradient to find a local minimum:

$$ w^{i + 1} = w^{i} - \alpha \frac{\partial J}{\partial w} $$

$$ b^{i + 1} = b^{i} - \alpha \frac{\partial J}{\partial b} $$

where:

$$ \frac{\partial J}{\partial z} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z} = \hat{y} - y $$

$$ \frac{\partial J}{\partial w} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z} \frac{\partial z}{\partial w} = \frac{1}{m} X (\hat{y} - y)^T $$

$$ \frac{\partial J}{\partial b} = \frac{\partial J}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial z} \frac{\partial z}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} (\hat{y}_i - y_i) $$

The process of changing the weights is sometimes called backward propagation of errors (as we use the difference between predictions $ \hat{y} $ and true labels $ y $ in order to modify the weights of inputs).
def update_weights(w, b, x, y_h, y, learning_rate):
    m = x.shape[1]

    # calculate the values of partial derivatives
    dz = y_h - y
    dw = np.dot(x, dz.T) / m
    db = np.sum(dz) / m

    # update the weights for the next iteration
    w = w - learning_rate * dw
    b = b - learning_rate * db
    return w, b


def logistic_regression(train_set, learning_rate=0.001, iterations=100,
                        batch_size=64, callback=None):
    # stack training examples as columns into a (n, m) matrix
    x = np.column_stack(x[0] for x in train_set)
    y = np.array([x[1] for x in train_set],
                 dtype=np.float64).reshape(1, len(train_set))

    # split the whole training set into batches of equal size
    n, m = x.shape
    num_batches = m // batch_size + (1 if m % batch_size > 0 else 0)
    x_batches = np.array_split(x, num_batches, axis=1)
    y_batches = np.array_split(y, num_batches, axis=1)

    # run the gradient descent to learn w and b
    w, b = initialize_weights(n)
    for iteration in range(iterations):
        j = 0

        for x_batch, y_batch in zip(x_batches, y_batches):
            y_hat = hypothesis(w, b, x_batch)
            w, b = update_weights(w, b, x_batch, y_hat, y_batch, learning_rate)
            j += cost(w, b, x_batch, y_batch) / num_batches

        if callback is not None:
            callback(iteration=iteration, w=w, b=b, cost=j)

    return w, b

It's important that we test the trained model on a separate set of examples it has not seen before, so that we ensure it generalizes well and does not overfit the training set. Training and test set examples should come from the same distribution, so that the model is trained on the same kind of data it will be used to make predictions on later. The performance metric we will use is accuracy, i.e. the percentage of correct predictions on a given data set.
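Before moving on, one way to sanity-check the derivative formulas implemented in update_weights() is a numerical gradient check on a tiny fixed example. This snippet is my own addition (not part of the talk) and restates the sigmoid and cost so it runs standalone:

```python
import numpy as np

def sigmoid(z):
    return 1. / (1. + np.exp(-z))

def logistic_cost(w, b, x, y):
    m = x.shape[1]
    y_h = sigmoid(np.dot(w.T, x) + b)
    return float(-np.sum(y * np.log(y_h) + (1 - y) * np.log(1 - y_h)) / m)

# tiny fixed example: 2 features, 3 training examples
x = np.array([[0.1, 0.4, 0.7],
              [0.9, 0.2, 0.5]])
y = np.array([[1., 0., 1.]])
w = np.array([[0.3], [-0.2]])
b = 0.1

# analytical gradient, exactly as computed in update_weights()
dz = sigmoid(np.dot(w.T, x) + b) - y
dw = np.dot(x, dz.T) / x.shape[1]

# numerical gradient for w[0] via central differences
eps = 1e-6
w_plus, w_minus = w.copy(), w.copy()
w_plus[0, 0] += eps
w_minus[0, 0] -= eps
dw0_numeric = (logistic_cost(w_plus, b, x, y)
               - logistic_cost(w_minus, b, x, y)) / (2 * eps)
```

If the two values agree to several decimal places, the backpropagation formula for $ \frac{\partial J}{\partial w} $ is implemented correctly.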
def predict(w, b, x, threshold=0.5):
    y_h = hypothesis(w, b, x)
    return y_h >= threshold


def accuracy(w, b, data):
    # stack examples as columns into a (n, m) matrix
    x = np.column_stack(x[0] for x in data)
    y = np.array([x[1] for x in data]).reshape(1, x.shape[1])

    # calculate the accuracy value as a percentage of correct predictions
    correct_predictions = np.count_nonzero(y == predict(w, b, x))
    total_predictions = x.shape[1]
    return correct_predictions / total_predictions


def main(path, iterations=100, max_examples=25000, train_ratio=0.9):
    # load examples and make sure they are uniformly distributed
    examples = load_examples(path)
    random.shuffle(examples)

    # split all the examples into train and test sets
    m_train = int(max_examples * train_ratio)
    train_set = examples[:m_train]
    test_set = examples[m_train:]

    # monitor the progress of training
    def progress(iteration, cost, w, b):
        print('Iteration %d' % iteration)
        print('\tCost: %f' % cost)
        if iteration % 10 == 0:
            print('\tTrain set accuracy: %s' % accuracy(w, b, train_set))
            print('\tTest set accuracy: %s' % accuracy(w, b, test_set))

    # run the training process to learn model parameters
    w, b = logistic_regression(train_set, iterations=iterations,
                               callback=progress)

    print('\tFinal train set accuracy: %s' % accuracy(w, b, train_set))
    print('\tFinal test set accuracy: %s' % accuracy(w, b, test_set))

main("train")

Iteration 0 Cost: 0.671012 Train set accuracy: 0.5730222222222222 Test set accuracy: 0.5704 Iteration 1 Cost: 0.661474 Iteration 2 Cost: 0.657053 Iteration 3 Cost: 0.654155 Iteration 4 Cost: 0.651971 Iteration 5 Cost: 0.650178 Iteration 6 Cost: 0.648624 Iteration 7 Cost: 0.647228 Iteration 8 Cost: 0.645945 Iteration 9 Cost: 0.644749 Iteration 10 Cost: 0.643624 Train set accuracy: 0.6156444444444444 Test set accuracy: 0.6032 Iteration 11 Cost: 0.642558 Iteration 12 Cost: 0.641543 Iteration 13 Cost: 0.640573 Iteration 14 Cost: 0.639644 Iteration 15 Cost: 0.638753 Iteration 16 Cost: 0.637895 Iteration 17 Cost: 0.637070
Iteration 18 Cost: 0.636274 Iteration 19 Cost: 0.635505 Iteration 20 Cost: 0.634762 Train set accuracy: 0.6285333333333334 Test set accuracy: 0.6108 Iteration 21 Cost: 0.634043 Iteration 22 Cost: 0.633347 Iteration 23 Cost: 0.632671 Iteration 24 Cost: 0.632016 Iteration 25 Cost: 0.631379 Iteration 26 Cost: 0.630760 Iteration 27 Cost: 0.630158 Iteration 28 Cost: 0.629572 Iteration 29 Cost: 0.629001 Iteration 30 Cost: 0.628444 Train set accuracy: 0.6368444444444444 Test set accuracy: 0.6152 Iteration 31 Cost: 0.627900 Iteration 32 Cost: 0.627369 Iteration 33 Cost: 0.626851 Iteration 34 Cost: 0.626344 Iteration 35 Cost: 0.625848 Iteration 36 Cost: 0.625363 Iteration 37 Cost: 0.624888 Iteration 38 Cost: 0.624422 Iteration 39 Cost: 0.623966 Iteration 40 Cost: 0.623518 Train set accuracy: 0.6437777777777778 Test set accuracy: 0.6204 Iteration 41 Cost: 0.623079 Iteration 42 Cost: 0.622647 Iteration 43 Cost: 0.622224 Iteration 44 Cost: 0.621808 Iteration 45 Cost: 0.621398 Iteration 46 Cost: 0.620996 Iteration 47 Cost: 0.620600 Iteration 48 Cost: 0.620211 Iteration 49 Cost: 0.619827 Iteration 50 Cost: 0.619450 Train set accuracy: 0.6501777777777777 Test set accuracy: 0.6184 Iteration 51 Cost: 0.619078 Iteration 52 Cost: 0.618711 Iteration 53 Cost: 0.618350 Iteration 54 Cost: 0.617993 Iteration 55 Cost: 0.617642 Iteration 56 Cost: 0.617295 Iteration 57 Cost: 0.616953 Iteration 58 Cost: 0.616615 Iteration 59 Cost: 0.616281 Iteration 60 Cost: 0.615951 Train set accuracy: 0.6545777777777778 Test set accuracy: 0.6216 Iteration 61 Cost: 0.615625 Iteration 62 Cost: 0.615304 Iteration 63 Cost: 0.614986 Iteration 64 Cost: 0.614671 Iteration 65 Cost: 0.614360 Iteration 66 Cost: 0.614052 Iteration 67 Cost: 0.613748 Iteration 68 Cost: 0.613447 Iteration 69 Cost: 0.613149 Iteration 70 Cost: 0.612854 Train set accuracy: 0.6580444444444444 Test set accuracy: 0.6224 Iteration 71 Cost: 0.612562 Iteration 72 Cost: 0.612273 Iteration 73 Cost: 0.611986 Iteration 74 Cost: 0.611703 Iteration 75 
Cost: 0.611422 Iteration 76 Cost: 0.611143 Iteration 77 Cost: 0.610867 Iteration 78 Cost: 0.610594 Iteration 79 Cost: 0.610323 Iteration 80 Cost: 0.610054 Train set accuracy: 0.6606666666666666 Test set accuracy: 0.624 Iteration 81 Cost: 0.609788 Iteration 82 Cost: 0.609524 Iteration 83 Cost: 0.609262 Iteration 84 Cost: 0.609002 Iteration 85 Cost: 0.608744 Iteration 86 Cost: 0.608489 Iteration 87 Cost: 0.608235 Iteration 88 Cost: 0.607983 Iteration 89 Cost: 0.607733 Iteration 90 Cost: 0.607485 Train set accuracy: 0.6634222222222222 Test set accuracy: 0.6228 Iteration 91 Cost: 0.607239 Iteration 92 Cost: 0.606995 Iteration 93 Cost: 0.606752 Iteration 94 Cost: 0.606512 Iteration 95 Cost: 0.606272 Iteration 96 Cost: 0.606035 Iteration 97 Cost: 0.605799 Iteration 98 Cost: 0.605565 Iteration 99 Cost: 0.605332 Final train set accuracy: 0.6658222222222222 Final test set accuracy: 0.6244

- logistic regression is a basic model that does not generalize well for this task and only achieves ~63% accuracy on the test set (e.g. the leader in this Kaggle competition achieved ~98.6%)
- there are more sophisticated ML algorithms, like convolutional neural networks, that perform better at image classification tasks
- concepts like backward propagation of errors and gradient descent are used in other algorithms as well

- Andrew Ng: Introduction to ML:
- Deep Learning specialization:
- Machine Learning and Data Analysis specialization (Yandex):

twitter: @rpodoliaka
slides:
http://nbviewer.jupyter.org/format/slides/github/malor/machine-learning-101/blob/master/index.ipynb
sigaltstack — set and/or get signal stack context

SYNOPSIS

#include <signal.h>

int sigaltstack(const stack_t *ss, stack_t *old_ss);

DESCRIPTION

The most common usage of an alternate signal stack is:

1. Allocate an area of memory to be used for the alternate signal stack.
2. Use sigaltstack() to inform the system of the existence and location of the alternate signal stack.
3. When establishing a signal handler using sigaction(2), inform the system that the signal handler should be executed on the alternate signal stack by specifying the SA_ONSTACK flag.

RETURN VALUE

sigaltstack() returns 0 on success, or -1 on failure with errno set to indicate the error.

ERRORS

EFAULT — Either ss or old_ss is not NULL and points to an area outside of the process's address space.

EINVAL — ss is not NULL and the ss_flags field contains an invalid flag.

ENOMEM — The specified size of the new alternate signal stack ss.ss_size was less than MINSIGSTKSZ.

EPERM — An attempt was made to change the alternate signal stack while it was active (i.e., the process was already executing on the current alternate signal stack).

ATTRIBUTES

For an explanation of the terms used in this section, see attributes(7).

EXAMPLES

stack_t ss;
struct sigaction sa;

ss.ss_sp = malloc(SIGSTKSZ);
if (ss.ss_sp == NULL) {
    perror("malloc");
    exit(EXIT_FAILURE);
}
ss.ss_size = SIGSTKSZ;
ss.ss_flags = 0;
if (sigaltstack(&ss, NULL) == -1) {
    perror("sigaltstack");
    exit(EXIT_FAILURE);
}

sa.sa_flags = SA_ONSTACK;
sa.sa_handler = handler;    /* Address of a signal handler */
sigemptyset(&sa.sa_mask);
if (sigaction(SIGSEGV, &sa, NULL) == -1) {
    perror("sigaction");
    exit(EXIT_FAILURE);
}

SEE ALSO

execve(2), setrlimit(2), sigaction(2), siglongjmp(3), sigsetjmp(3), signal(7)
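Beyond the snippet above, the install-and-verify flow can be exercised in a small self-contained function. This is my own check, not part of the man page; it uses the old_ss argument to read the settings back:

```c
#define _DEFAULT_SOURCE
#include <signal.h>
#include <stdlib.h>

/* Installs an alternate signal stack and reads it back via the old_ss
   argument. Returns 1 if the stack is installed, big enough, and currently
   inactive (SS_ONSTACK clear); 0 if a check fails; -1 on error. A real
   program would keep the allocated stack for the process lifetime. */
int install_altstack(void) {
    stack_t ss, old;

    ss.ss_sp = malloc(SIGSTKSZ);
    if (ss.ss_sp == NULL)
        return -1;
    ss.ss_size = SIGSTKSZ;
    ss.ss_flags = 0;
    if (sigaltstack(&ss, NULL) == -1)
        return -1;

    /* Passing NULL for ss and a non-NULL old_ss just queries the state. */
    if (sigaltstack(NULL, &old) == -1)
        return -1;

    return old.ss_sp == ss.ss_sp
        && old.ss_size >= (size_t)SIGSTKSZ
        && !(old.ss_flags & SS_ONSTACK);
}
```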
https://manpages.net/htmlman2/sigaltstack.2.html
CHI::Driver::BerkeleyDB -- Using BerkeleyDB for cache

use CHI;

my $cache = CHI->new(
    driver   => 'BerkeleyDB',
    root_dir => '/path/to/cache/root',
);

This cache driver uses Berkeley DB files to store data. Each namespace is stored in its own db file. By default, the driver configures the Berkeley DB environment to use the Concurrent Data Store (CDS), making it safe for multiple processes to read and write the cache without explicit locking.

- Path to the directory that will contain the Berkeley DB environment, also known as the "Home".
- BerkeleyDB class, defaults to BerkeleyDB::Hash.
- Use this Berkeley DB environment instead of creating one.
- Use this Berkeley DB object instead of creating one.

Questions and feedback are welcome, and should be directed to the perl-cache mailing list:

Bugs and feature requests will be tracked at RT:

The latest source code can be browsed and fetched at:

git clone git://github.com/jonswar/perl-chi-driver-bdb.git

Jonathan Swartz

CHI::Driver::BerkeleyDB is provided "as is" and without any express or implied warranties, including, without limitation, the implied warranties of merchantability and fitness for a particular purpose. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/CHI-Driver-BerkeleyDB/lib/CHI/Driver/BerkeleyDB.pm
Hey gang,

A post on the Rails forum a while back had it sound like you pretty much had to use the Index Readers & Writers if you were going to be potentially accessing an index from more than one process. (i.e. multiple dispatch.fcgi's, etc)

Is this still the case, or does the main Index class do that black magic behind the scenes? =)

I was having trouble implementing the Readers & Writers so I thought I'd post an example stub of what I have here. Any feedback would be much appreciated.

Non-Reader/Writer Example - Main Index::Index.new only works like a charm but haven't tried firing up a bunch to see if we get IO blocks.

require 'ferret'

class SearchEngine
  include Ferret
  include Ferret::Document

  def self.get_index()
    index_dir = "/var/search/index"
    index = Index::Index.new(:path => index_dir, :create_if_missing => true)
    return index
  end
end

Reader/Writer Example

require 'ferret'

class SearchEngine
  include Ferret
  include Ferret::Document

  # Creates or returns an existing index for an organization
  def self.get_index(type = 'writer')
    index_dir = "/var/search/index"

    if type == 'writer'
      index = Index::IndexWriter.new(index_dir, :create_if_missing => true)
    elsif type == 'reader'
      index = Index::IndexReader.open(index_dir, false)
    end

    return index
  end
end

Thanks!!

- Shanti
https://www.ruby-forum.com/t/index-index-new-vs-readers-and-writers/59962
Namespace Declarations

Namespaces are declared on elements using the xmlns: attribute, and the value of that attribute is the URI that identifies the namespace. The syntax for a namespace declaration is xmlns:<name>=<"uri">, where <name> is the name of the namespace prefix, and <"uri"> is a string depicting the namespace URI. Once declared, the prefix can be used to qualify elements and attributes in an XML document and associate them with the namespace URI. Because the namespace prefix is used throughout a document, it should be short in length.

The following example defines two BOOK elements. The two BOOK elements are not identical, as each one is associated with a different namespace. The first BOOK element is qualified by the namespace prefix, mybook, while the second BOOK element is qualified by the prefix, bb. Each namespace prefix is associated with a different namespace URI by the use of the namespace declarations on each BOOK element.

To signify that an element is part of a particular namespace, prepend the namespace prefix to it, thereby making it a fully qualified element name. For example, if the Publisher element exists in a document, and a namespace has been declared for it, the Publisher element needs to have the namespace prefix prepended to it with a colon. If the Publisher element belongs to the mybook namespace, it is declared as <mybook:Publisher>. Thus, the Publisher element is now fully qualified.
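The BOOK example itself did not survive in this copy of the page; a reconstruction consistent with the surrounding description, with placeholder URIs of my own choosing, might look like this:

```xml
<mybook:BOOK xmlns:mybook="urn:example:mybook">...</mybook:BOOK>
<bb:BOOK xmlns:bb="urn:example:bigbooks">...</bb:BOOK>
```

Each declaration binds its prefix to a different URI, so the two BOOK elements belong to different namespaces even though their local names are identical.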
http://msdn.microsoft.com/en-us/library/a9a1451a(v=vs.100).aspx
This is how we have it:

import modulex to svn://repo/trunk/modulex (rev 1)
work some in trunk/modulex (rev 2)
branch svn://repo/trunk/modulex to svn://repo/branches/modulex/REL1 (rev 4)

I think I tagged it first and then branched from trunk; maybe I should have branched from tags/ after tagging it?

Now when we do annotate directly on some unchanged file with Subclipse, it can't really backtrack to where it came from, so we get an error box. We get the same when issuing a (command-line) svn log -r COMMITTED on a file in the branch which hasn't changed since before the branch. Example:

$ svn log -r COMMITTED ThresholdCheckerData.java
svn: File not found: revision 2, path '/branches/modulex/REL1/path/to/file/ThresholdCheckerData.java'

If you do this via the resource history and select the branch revision (which is shown, but not as the currently loaded rev, which is rev 2, the rev before the branch), it works.

Is there any plan on how to fix this? This might be a Subversion problem, but I'm thinking maybe this is a gotcha that can be solved by doing it some other way, like always loading the annotate from resource history.

Another problem is purely a human-logic one. Say the branch has rev 4; files not changed in the branch will have rev 2, since that was where they were last changed. If I branch again at rev 73, I'll see the rev 4 and rev 73 log messages in resource history, but rev 2 is the "current" one. This is a bit logically hard to grasp for users not familiar with how branching is done at a low level.

Magnus
-- no .sig

This is an archived mail posted to the Subclipse Dev mailing list.
https://svn.haxx.se/subdev/archive-2005-04/0000.shtml
In this blog, you will learn how to use AWS Lambda versions and aliases by means of a Java example. You will create a simple AWS Java Lambda, create several versions of it, and you will learn how to use aliases for your environments. Enjoy! 1. Introduction AWS Lambda allows you to run serverless functions on the AWS environment. More general information about AWS Lambda can be found at the Amazon website. An important item when working with lambdas is how to keep control over the different versions of your lambda and over which version runs on which environment. You can imagine that you have a production environment which runs version 1 of your lambda, a test environment which runs version 2, and a development environment which runs version 3. Version 1 is your stable version, version 2 is the one with extra functionality or fixed bugs and is about to make the step to production but needs to be verified by the customer first, and version 3 is your development version, which is not yet tested and therefore the most unstable version. In this post, you will learn how to manage your versions in these different environments. You will create a sample Java lambda and will create several versions of it. The sources for this blog can be found at GitHub. 2. Create Sample App The sample application is based on the AWS example Java handler where a JSON object is transformed into a Java object. The first thing to do is to create a basic Maven project. Add the AWS dependencies aws-lambda-java-core, aws-lambda-java-events, and aws-lambda-java-log4j2, because these will be used in the lambda. Besides that, the gson dependency needs to be added because you will use the Gson library for transforming the JSON object into a Java object.
<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.2.1</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-events</artifactId>
        <version>3.1.0</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-log4j2</artifactId>
        <version>1.2.0</version>
    </dependency>
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.5</version>
    </dependency>
</dependencies>

In order to create the lambda in AWS, you will need to create an uber jar. This means a jar which contains all the necessary dependencies in one jar file. The Maven Shade Plugin can create such an uber jar. Add the following section to the POM and the uber jar will be automatically created for you.

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <executions>
                <execution>
                    <!-- build the uber jar during the package phase -->
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <transformers>
                            <transformer implementation="com.github.edwgiz.maven_shade_plugin.log4j2_cache_transformer.PluginsCacheFileTransformer">
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
            <dependencies>
                <dependency>
                    <groupId>com.github.edwgiz</groupId>
                    <artifactId>maven-shade-plugin.log4j2-cachefile-transformer</artifactId>
                    <version>2.13.0</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>

The domain consists of two simple objects: a Car object, where a car can be defined with a brand and a type. The brand is an enumeration of some brands. In the code below, the Car object is shown; getters and setters are left out for brevity purposes.

public class Car {

    private Brand brand;
    private String type;

    public Car(Brand brand, String type) {
        this.brand = brand;
        this.type = type;
    }

    ...

}

The Brand enum contains three car brands.

public enum Brand {
    FORD,
    RENAULT,
    TESLA
}

The lambda which will be executed is implemented in the handleRequest method.
Note that the response contains the version number; this will help you to differentiate between the different versions which will be deployed when following this blog. At the end of the method, the Car JSON object which is received is transformed into the Java object, and the brand and the type of the Car Java object are printed to the log.

public class LambdaJava implements RequestHandler<Map<String, String>, String> {

    private static final Gson GSON = new GsonBuilder().setPrettyPrinting().create();

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        LambdaLogger logger = context.getLogger();
        String response = "Version 1";

        // log execution details
        logger.log("ENVIRONMENT VARIABLES: " + GSON.toJson(System.getenv()) + System.lineSeparator());
        logger.log("CONTEXT: " + GSON.toJson(context) + System.lineSeparator());

        // process event
        logger.log("EVENT: " + GSON.toJson(event) + System.lineSeparator());
        logger.log("EVENT TYPE: " + event.getClass() + System.lineSeparator());

        // Parse JSON into an object
        Car car = GSON.fromJson(GSON.toJson(event), Car.class);
        logger.log("Car brand: " + car.getBrand() + System.lineSeparator());
        logger.log("Car type: " + car.getType() + System.lineSeparator());

        return response;
    }

}

Build the code:

$ mvn package

The target directory contains two jar files: original-MyAWSLambdaJavaPlanet-1.0-SNAPSHOT.jar and MyAWSLambdaJavaPlanet-1.0-SNAPSHOT.jar. The latter is the uber jar which needs to be used for creating the AWS lambda.

3. Create Lambda

Now that you have created the Java lambda, it is time to create the lambda in AWS. Navigate in the AWS console to the Lambda service and choose Create function. Choose Author from scratch, give the function the name MyJavaLambda, and choose Java 11 (Corretto) as Runtime. Click the Create function button. In the Code tab, choose .zip or .jar file from the Upload from dropdown list. Upload the uber jar from the target directory and click the Save button.
Click the Edit button in the Runtime settings section. The Handler section should contain the entry point for the lambda, meaning the package name, class, and method the lambda should invoke: com.mydeveloperplanet.myawslambdajavaplanet.LambdaJava::handleRequest. Click the Save button.

Create a test event via the Test tab. Give it the name CarEvent and create a JSON for the Car. Note that the brand should be written with capital letters, otherwise the enum will not be parsed correctly. Click the Save changes button.

{
  "brand": "FORD",
  "type": "Kuga"
}

Click the Test button in order to invoke the test event. As a response, "Version 1" is returned, just as was expected. The logs can also be viewed here. When you navigate to the end, the parsing log statements can be viewed. As can be seen, the object is parsed correctly.

EVENT: {
  "brand": "FORD",
  "type": "Kuga"
}
EVENT TYPE: class java.util.LinkedHashMap
Car brand: FORD
Car type: Kuga

4. Create Versions

Let's assume that the version 1 app is approved by the customer. This version can be frozen, which means that this version should not be modified anymore in AWS Lambda. Navigate to the Versions tab and click the Publish new version button. Enter v1 as the description and, finally, click the Publish button. The Versions tab now contains one version. When you click the number 1 in the Version column, the information of this version is shown. The version itself, however, cannot be modified anymore. In order to return to the main page where modifications can be made, you need to click MyJavaLambda at the top left corner of the page.

Change the response in the code into Version 2 (see branch version2 in GitHub).

String response = "Version 2";

Execute the following steps:

- Build the jar file;
- Upload the jar file;
- Test the lambda by means of the test event (should return "Version 2" this time);
- Create a version 2 of this lambda just like you did for version 1.
You have two versions now which can be invoked separately from each other. Both also have a unique ARN including the version number. The account ID will be your own account ID, of course.

arn:aws:lambda:eu-west-3:<account ID>:function:MyJavaLambda:1
arn:aws:lambda:eu-west-3:<account ID>:function:MyJavaLambda:2

5. Create Aliases

You have by now created two versions of your lambda and you can invoke both. So, what can you do with this? The advantage will become clearer in this section, where you will create aliases. An alias can represent, for example, an environment, like DEV, TEST, PROD. Aliases will allow you to deploy a version to a specific environment (i.e. alias). The alias will also have a unique ARN which can be invoked.

Navigate to the Aliases tab and click the Create alias button. Create an alias DEV and link it to the $LATEST version. This means that you link it to the lambda which you can modify on the fly. Click the Save button. Also create an alias for TEST which links to v2 and an alias for PROD which links to v1.

Change the response in the code to "Version 3" and upload the uber jar again to AWS Lambda. When you click a version and execute the test event, you will notice that alias DEV returns "Version 3", alias TEST returns "Version 2", and PROD returns "Version 1".

And now the magic happens. Assume that you want to release version 2 to production. Navigate to the PROD alias and click the Edit button. Set the Version to 2 and, in the Weighted alias section, indicate a weight of 50% for version 1. This means that half of your users will still be using version 1 and the other half will be using version 2. This way, it is possible to mitigate some risks when you release a new version in production. When something goes wrong, only half of your users will be affected. Click the Save button. Test the PROD alias with the test event and you will notice that half of the time "Version 1" is returned and the other half "Version 2".
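The console steps for publishing a version and shifting alias traffic can also be done from the AWS CLI. The commands below are an illustrative sketch (region, profile, and output options are omitted); verify them against your CLI version before relying on them:

```
# Publish the current $LATEST code as a new immutable version
aws lambda publish-version \
    --function-name MyJavaLambda \
    --description "v2"

# Point the PROD alias at version 2, but keep 50% of traffic on version 1
aws lambda update-alias \
    --function-name MyJavaLambda \
    --name PROD \
    --function-version 2 \
    --routing-config 'AdditionalVersionWeights={"1"=0.5}'
```

In the routing config, the alias's main version receives the remaining share of traffic, so version 2 gets the other 50% here.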
When the new version has run correctly in production for some time, the weight can easily be changed to 0%. After this, all of the users will be using version 2.

6. Conclusion

You learned how to create a Java lambda and how to deploy it to AWS Lambda. Next, you learned how to create versions for your lambda and how aliases can be used to represent your different environments. By means of aliases and versions, you have full control over which version runs on which environment. Besides that, you can safely deploy a new version in production by making use of weights.
https://mydeveloperplanet.com/2022/01/11/aws-lambda-versions-and-aliases-explained-by-example/
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.

Hi,

There is a bug report that ld.so in GLIBC 2.24 built by Binutils 2.29 will crash on arm-linux-gnueabihf. This is confirmed, and the details are at:. And I could also reproduce this crash using GLIBC master.

As analyzed in the PR, the old code was written with the assumption that the assembler won't set bit 0 of a Thumb function address if it comes from PC-relative instructions and the calculation can be finished during assembling. This assumption however does not hold after PR gas/21458.

I think the ARM backend in GLIBC should be fixed to be more portable so it could work with various combinations of GLIBC and Binutils.

OK for master and backport to all release branches?

2017-07-12  Jiong Wang  <jiong.wang@arm.com>

        * sysdeps/arm/dl-machine.h (elf_machine_load_address):
        Also strip bit 0 of pcrel_address under Thumb mode.

diff --git a/sysdeps/arm/dl-machine.h b/sysdeps/arm/dl-machine.h
index 7053ead16ed0e7dac182660f7d88fa21f2b4799a..5b67e3d004818308d9bf93effb13d23a762e160f 100644
--- a/sysdeps/arm/dl-machine.h
+++ b/sysdeps/arm/dl-machine.h
@@ -56,11 +56,19 @@ elf_machine_load_address (void)
   extern Elf32_Addr internal_function __dl_start (void *) asm ("_dl_start");
   Elf32_Addr got_addr = (Elf32_Addr) &__dl_start;
   Elf32_Addr pcrel_addr;
+  asm ("adr %0, _dl_start" : "=r" (pcrel_addr));
 #ifdef __thumb__
-  /* Clear the low bit of the funciton address.  */
+  /* Clear the low bit of the funciton address.
+
+     NOTE: got_addr is from GOT table whose lsb is always set by linker if it's
+     Thumb function address.  PCREL_ADDR comes from PC-relative calculation
+     which will finish during assembling.  GAS assembler before the fix for
+     PR gas/21458 was not setting the lsb but does after that.  Always do the
+     strip for both, so the code works with various combinations of glibc and
+     Binutils.  */
   got_addr &= ~(Elf32_Addr) 1;
+  pcrel_addr &= ~(Elf32_Addr) 1;
 #endif
-  asm ("adr %0, _dl_start" : "=r" (pcrel_addr));
   return pcrel_addr - got_addr;
 }
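Outside the glibc context, the operation the patch adds is just clearing bit 0 of an address. A standalone sketch (plain C, no ARM toolchain needed) of that lsb strip:

```c
#include <assert.h>
#include <stdint.h>

/* Clear bit 0 of a code address. On ARM, a pointer to a Thumb function
   carries the instruction-set mode in its least significant bit;
   stripping that bit recovers the actual instruction address. */
static uintptr_t strip_thumb_bit(uintptr_t addr)
{
    return addr & ~(uintptr_t)1;
}
```

The patch applies exactly this mask to both got_addr and pcrel_addr so the subtraction is consistent no matter which assembler set the bit.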
https://sourceware.org/ml/libc-alpha/2017-07/msg00518.html
Hello to all, welcome to therichpost.com. In this post, I will show you a working Angular 8 chartjs example. Chartjs is very popular and very easy to use. On my blog, I have shared many posts related to chartjs. Now, I am using chartjs in Angular 8, and in this post I will use static data, but in my future posts, I will load dynamic data from Laravel 6. Here is the working code snippet; please use this carefully:

1. Here are the basic commands you need to use in your terminal or command prompt to install a fresh Angular 8 setup:

$ npm install -g @angular/cli    //Setup Angular8 atmosphere
$ ng new angular8chartjs         //Install New Angular App
/**You need to update your Nodejs also for this version**/
$ cd angular8chartjs             //Go inside the Angular 8 Project

2. After installing the fresh Angular 8 setup and going inside it, run the below command in your terminal to install the chartjs module:

npm install --save chart.js

3. After all the above setup, here is the code you need to add into your app.component.ts file:

import { Component } from '@angular/core';
import * as Chart from 'chart.js'

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'angular8chartjs';
  canvas: any;
  ctx: any;

  ngAfterViewInit() {
    this.canvas = document.getElementById('myChart');
    this.ctx = this.canvas.getContext('2d');
    let myChart = new Chart(this.ctx, {
      type: 'pie',
      data: {
        labels: ["New", "In Progress", "On Hold"],
        datasets: [{
          label: '# of Votes',
          data: [1, 2, 3],
          backgroundColor: [
            'rgba(255, 99, 132, 1)',
            'rgba(54, 162, 235, 1)',
            'rgba(255, 206, 86, 1)'
          ],
          borderWidth: 1
        }]
      },
      options: {
        responsive: false,
        display: true
      }
    });
  }
}

4. Now, add the below code into your app.component.html file:

<canvas id="myChart" width="700" height="400"></canvas>

In the end, don't forget to run the ng serve command in your terminal, and you will get this url: localhost:4200. If you have any query related to this post, then please comment below.
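If the same chart config is needed in more than one place, the inline object above can be factored into a small builder. The helper below is my own sketch (buildPieConfig is not part of Chart.js or of the original post); it just assembles the same plain config object, so it can be unit-tested without a browser:

```typescript
// Hypothetical helper (not part of Chart.js): builds the same pie-chart
// config object as above, keeping labels and data in sync.
const palette = [
  "rgba(255, 99, 132, 1)",
  "rgba(54, 162, 235, 1)",
  "rgba(255, 206, 86, 1)",
];

function buildPieConfig(labels: string[], values: number[]) {
  if (labels.length !== values.length) {
    throw new Error("labels and data must have the same length");
  }
  return {
    type: "pie",
    data: {
      labels,
      datasets: [{
        label: "# of Votes",
        data: values,
        // cycle through the palette if there are more slices than colors
        backgroundColor: labels.map((_, i) => palette[i % palette.length]),
        borderWidth: 1,
      }],
    },
    options: { responsive: false, display: true },
  };
}
```

In ngAfterViewInit() the call would then simply be new Chart(this.ctx, buildPieConfig([...], [...])).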
jassa
Thank you

20 Comments

- Hi sir, I want to learn use of og tag in angular 7 project. And use its link for sharing on social networking sites. Thanks
- Hello sir, it is related to seo or something, please explain more briefly.
- Hi Ajay, this is Vinoth. Can you please post an example with a grouped bar chart using data from a REST api? I have done the chart but the data is not populating in the chart, though I can see the JSON in the console. Unable to read from that.
- Thank you
- Hi, thank you so much. I have tried to draw the grouped bar chart, but it is getting overlapping.

  export class AppComponent {
    title = 'angular8chartjs';
    canvas: any;
    data = [];
    chart = [];
    constructor(private http: HttpClient) {
      this.http.get('').subscribe(data => {
        this.data.push(data);
        console.log(this.data);
        this.canvas = document.getElementById('myChart');
        this.chart = new Chart('canvas', {
          type: 'bar',
          data: {
            labels: [this.data[0]['year']],
            datasets: [{
              label: 'Count',
              data: [this.data[0]['type1'], this.data[0]['type2'], this.data[0]['type3'], this.data[0]['type4']],
              backgroundColor: [
                'rgba(255, 99, 132, 1)',
                'rgba(54, 162, 235, 1)',
                'rgba(255, 206, 86, 1)'
              ],
              borderWidth: 1
            }]
          },
          options: {
            responsive: false,
            display: true
          }
        });
      }, error => console.error(error));
    }
  }

- Which error are you getting?
- Thank you for this post.
- Hello sir, how can we use waterfall charts instead of pie charts in Angular? Thanks in advance.
- Sure, and I will share a link soon.
- Where can I find your chart.js samples with external data like csv or json?
- Please check, json data example:
- Thank you. It helped me. 🙂
- Hello Vijay… When we use the same code for a bar chart, the minimum data is getting nothing in the chart.. any help? 🙂
- Will update you on this.
- Hello Ajay….. any help on this 🙂
- Ajay… I observed the issue that any displayed bar needs a level of at least 15, else it does not display… how can we change the scale (start value)?
- For example 5,6,15 has no issue… but with 5,6,10 the 5 is not getting displayed.
- Correct
- Hi, I've built an Angular project sample with chart.js. The type of chart I'm working on is the vertical bar chart. I'm currently able to generate the bar chart with appropriate data and am now working on the tooltip style. I need your help in setting the code for the tooltip caret pointing downwards. While googling, I found that using the tooltip property 'yAlign:"bottom"' would resolve my need. However, using the 'yAlign' property in my project throws an error stating that 'yAlign' is not a property in chart.js. For your info, I'm using chart.js version 2.7.2 and Angular 8+. Could you please guide me in this regard?
- Okay sure, I will update you on this.
https://therichpost.com/angular-8-chartjs-working-example/
Object and array are both "structured data" and have lots in common, so the Opt_trace_struct is a base class for them. More...

#include <opt_trace.h>

Object and array are both "structured data" and have lots in common, so the Opt_trace_struct is a base class for them. When you want to add a structure to the trace, you create an instance of Opt_trace_object or Opt_trace_array, then you add information to it with add(), then the destructor closes the structure (we use RAII, Resource Acquisition Is Initialization).

- This constructor is never called directly, only from subclasses.
- not defined
- Not implemented, use add_alnum() instead.
- Add a value to the structure.
- Add a value (of Item type) to the structure. The Item should be a condition (like a WHERE clause) which will be pretty-printed into the trace. This is useful for showing condition transformations (equality propagation etc).
- Adds a value (of string type) to the structure. A key is specified, so it adds the key/value pair (the structure must thus be an object).
- There are two "add_*" variants to add a string value. If the value is 0-terminated and each character
- add(x,y).add(z,t).add(u,v)
- String-related add() variants are named add_[something]():
- Adds a value (of string type) to the structure. No key is specified, so it adds only the value (the structure must thus be an array).
- Adds a JSON null object (== Python's "None")
- Helper to put the number of query_block in an object.
- Variant of add_utf8() where 'value' is 0-terminated.
- Like add_alnum() but supports any UTF8 characters in 'value'. Will "escape" 'value' to be JSON-compliant.
- Variant of add_utf8() where 'value' is 0-terminated.
- Variant of add_utf8() for adding to an array (no key)
- Helper to put the database/table name in an object.
- Validates the key about to be added.

When adding a value (or array or object) to an array, or a key/value pair to an object, we need to know this outer array or object. Opt_trace_array*). Then the add(key,val) call would not compile, as Opt_trace_array wouldn't feature it. But as explained in the comment of class Opt_trace_context, we cannot pass down the object; we have to maintain a "current object or array" in the Opt_trace_context context (a pointer to an instance of Opt_trace_struct), and the adder grabs it from the context. As this current structure is of type "object or array", we cannot do compile-time checks that only suitable functions are used. A call to add(key,value) is necessarily legal for the compiler as the structure may be an object, though it will be wrong in case the structure is actually an array at run-time. Thus we have the risk of an untested particular situation where the current structure is not an object (but an array) though the code expected it to be one. This happens in practice, because subqueries are evaluated in many possible places of code, not all of them being known. The same happens, to a lesser extent, with calls to the range optimizer. So at run-time, in check_key(), we detect wrong usage, like adding a value to an object without specifying a key, and then remove the unnecessary key, or add an autogenerated key.

- Really adds to the object.
- Full initialization.
- Really does destruction. The exception to RAII: this function is an explicit way of ending a structure before it goes out of scope. Don't use it unless RAII mandates a new scope which mandates re-indenting lots of code lines.
- not defined
- Informs this structure that we are adding data (scalars, structures) to it. This is used only if sending to I_S.
- Whether this structure caused tracing to be disabled in this statement because it belongs to a not-traced optimizer feature, in accordance with the value of @@optimizer_trace_features.
- Fixed-length prefix of previous key in this structure, if this structure is an object. Serves to detect when adding two same consecutive keys to an object, which would be wrong.
- Whether the structure requires/forbids keys for values inside it. true: this is an object. false: this is an array.
- Key if the structure is the value of a key/value pair, NULL otherwise.
- Whether the structure does tracing or is dummy.
- Trace owning the structure.
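The RAII lifecycle described above (constructor opens the structure, add() appends key/value pairs and returns the structure so calls can be chained, destructor closes it) can be illustrated outside the server. The class below is a toy stand-in, not the real Opt_trace_object:

```cpp
#include <cassert>
#include <string>

// Toy stand-in for Opt_trace_object: the constructor opens a JSON
// object, add() appends key/value pairs, the destructor closes it.
class TraceObject {
public:
  explicit TraceObject(std::string *out) : out_(out) { *out_ += "{"; }
  ~TraceObject() { *out_ += "}"; }

  // Returns *this so calls can be chained: add("a", 1).add("b", 2)
  TraceObject &add(const std::string &key, long value) {
    if (!first_) *out_ += ",";
    first_ = false;
    *out_ += "\"" + key + "\":" + std::to_string(value);
    return *this;
  }

private:
  std::string *out_;
  bool first_ = true;
};
```

The scope of the instance is what closes the structure, which is why the real class offers an explicit "end" function only as an escape hatch when RAII would force awkward re-indentation.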
https://dev.mysql.com/doc/dev/mysql-server/latest/classOpt__trace__struct.html
The easiest place to start in writing your Audio HW DLL is the analog mixer, because it's the simplest part of controlling a card; it doesn't involve any realtime constraints and small mistakes generally don't crash the machine. When you're using DMA, it's possible to overwrite the kernel, so we'll save PCM programming for later (see the Handling PCM Audio Data chapter). In addition, if your card has an analog input (e.g. CD or Line In), it's very easy to test the analog mixer in isolation from the rest of the sound card. In contrast, without volume controls to adjust, it's very difficult to verify that your PCM playback (and capture) code is functioning correctly.

If your card uses one of the standard codecs (listed in the Supported Codecs appendix), see "Using a standard mixer DLL," later in this chapter. If you have a nonstandard or unsupported codec, you must define a set of mixer elements for it. A simplified codec has the following structure:

[Figure: A simplified codec for an analog mixer.]

In our terminology, all of the shapes are called mixer elements, and the lines are mixer routes. Some mixer elements are informational only. The OUTPUT element is an I/O type element and holds only information such as the number of channels it contains. Other elements provide control by means of callback functions; for example, the volume elements have a callback that's used to read and set their associated gain level.

One common variation on this design is where all or some of the inputs can be mixed together into the ADC (Analog Digital Converter). This is usually done using a series of switches instead of the multiplexer. The last important idea is that every element is routed to and from at least one other element. Only I/O elements break this rule.

The hardware design of the chip you're supporting dictates the elements and routes for the mixer. In fact, the diagram of your mixer might be similar to the example above, but is probably more complicated.
As an example, the standard AC97 diagram has approximately 13 I/O elements and approximately 43 elements in total. To translate the diagram to mixer software, you need to create a mixer element for every symbol on the diagram, and then create a route for every line.

At this point it's useful to discuss all the supported element types, their respective attributes, any associated controls, and the function you can call to create one:

- Creation function: ado_mixer_element_accu1()
- Creation function: ado_mixer_element_accu2()
- Creation function: ado_mixer_element_accu3()
- Creation function: ado_mixer_element_io()
- Creation function: ado_mixer_element_mux1()
- Creation function: ado_mixer_element_mux2()
- Creation function: ado_mixer_element_pcm1()
- Creation function: ado_mixer_element_pcm2()
- Creation function: ado_mixer_element_sw1()
- Creation function: ado_mixer_element_sw2()
- Creation function: ado_mixer_element_sw3()
- Creation function: ado_mixer_element_volume1()

You can associate instance data with the more complex elements. If you need to access this instance data later, you have to call ado_mixer_get_element_instance_data(), because ado_mixer_delement_t is an opaque data type.

In the simplest terms, a mixer group is a collection or group of elements and associated control capabilities. For the purpose of simplifying driver coding, we further define groups as relating to either playback or capture functionality:

- Creation function: ado_mixer_playback_group_create()
- Creation function: ado_mixer_capture_group_create()

The input selection element is either a multiplexer or an input switch. With these restrictions, the group control logic can be contained entirely within the io-audio module. To create a group, you can simply specify the group name, type, and its component elements. Unlike elements and routes, mixer groups aren't strictly dictated by the hardware. You, as the driver writer, can decide on the number and contents of mixer groups.
In order to build a useful driver, you need to create mixer groups with a logical design that attempts to satisfy the following conditions: For example, the standard Photon mixer application displays and manipulates only mixer groups. It's possible to make the PCM, MIC, and CD capture groups contain the input volume and input mute elements, but this would lead application developers to believe there are independent volume and mute controls on these inputs, when clearly they're shared.

For the purposes of demonstration, we assume that the simplified codec shown in the previous figure represents the mixer that you plan to support. The rest of this chapter demonstrates how to translate this relatively standard diagram into code. The complete code for the sample mixer in this chapter is available in the Sample Mixer Source appendix.

Before we can write any of the mixer code, you should copy the Sound Blaster driver directory (sb) to a directory named for your card. After copying the directory, you should rename the C, header, and usage-message files.

If you're writing a custom audio mixer, the next task to perform (after the ctrl_init() function has done the common part of the initialization) is to allocate and initialize a new ado_mixer_t structure. Do this by calling ado_mixer_create(). All the information pertaining to this mixer is attached to this structure, so you need to store a copy of the returned pointer somewhere (usually in your context structure), so that you can access it later. However, ado_mixer_t is an opaque data type; your Audio HW DLL doesn't need to know what's in it.
Here's an example of initializing your Audio HW DLL if you're writing your own audio mixer:

int example_mixer (ado_card_t * card, HW_CONTEXT_T * example)
{
    int32_t status;

    if ( (status = ado_mixer_create (card, "Example",
                                     &example->mixer, example)) != EOK )
        return (status);
    return (0);
}

ado_ctrl_dll_init_t ctrl_init;
int ctrl_init( HW_CONTEXT_T ** hw_context, ado_card_t * card, char *args )
{
    example_t *example;

    if ((example = (example_t *) ado_calloc (1, sizeof (example_t))) == NULL)
    {
        ado_error ("Unable to allocate memory (%s)\n", strerror (errno));
        return -1;
    }
    *hw_context = example;

    /* Verify that the hardware is available here. */

    if (example_mixer(card, *hw_context) != 0)
        return -1;
    else
        return 0;
}

If you need to allocate memory for your mixer, you should create a cleanup function for io-audio to call when your mixer is destroyed. For more information, see ado_mixer_set_destroy_func(). You can also create a function to be called when the mixer's hardware is reset, but this usually isn't necessary. For more information, see ado_mixer_set_reset_func().

You must next construct a description of the mixer from its component parts. As mentioned earlier, a mixer consists of mixer elements, routes, and groups. In this example, there are 17 mixer elements, 18 routes, and 8 groups. The elements and routes are relatively straightforward to identify. Elements are any of the symbols, and routes are the paths that data can travel between them. Use the functions listed above to create the elements; use ado_mixer_element_route_add() to create the routes.

Identifying the groups is a little more troublesome. That's the reason why we enforce the rules on what can be in a group. It simplifies choosing how to divide the elements up into groups, and makes the drivers more consistent in form and behaviour. The eight groups are Master Output, Input Gain, PCM OUT, MIC OUT, CD OUT, PCM IN, MIC IN, and CD IN.

[Figure: The groups in the sample analog mixer.]
The PCM IN, MIC IN, and CD IN groups include the multiplexer, but specify a different input to it. To build the mixer, first create the elements and routes, then pass pointers to the required elements to the functions that create the mixer group. Here's the section of code that creates the master group, including all elements and routes:

int build_example_mixer (MIXER_CONTEXT_T * example, ado_mixer_t * mixer)
{
    int error = 0;
    ado_mixer_delement_t *pre_elem, *elem = NULL;

    /* ################ */
    /* the OUTPUT GROUP */
    /* ################ */
    if ( (example->output_accu = ado_mixer_element_accu1 (mixer,
              SND_MIXER_ELEMENT_OUTPUT_ACCU, 0)) == NULL )
        error++;
    pre_elem = example->output_accu;

    if ( !error && (elem = ado_mixer_element_volume1 (mixer, "Output Volume",
              2, output_range, example_master_vol_control,
              (void *) EXAMPLE_MASTER_LEFT, NULL)) == NULL)
        error++;
    if ( !error && ado_mixer_element_route_add (mixer, pre_elem, elem) != 0 )
        error++;
    example->master_vol = elem;
    pre_elem = elem;

    if ( !error && (elem = ado_mixer_element_sw2 (mixer, "Output Mute",
              example_master_mute_control, (void *) EXAMPLE_MASTER_LEFT,
              NULL)) == NULL )
        error++;
    if ( !error && ado_mixer_element_route_add (mixer, pre_elem, elem) != 0 )
        error++;
    example->master_mute = elem;
    pre_elem = elem;

    if ( !error && (elem = ado_mixer_element_io (mixer, "Output",
              SND_MIXER_ETYPE_OUTPUT, 0, 2, stereo_voices)) == NULL )
        error++;
    if ( !error && ado_mixer_element_route_add (mixer, pre_elem, elem) != 0 )
        error++;

    if ( !error && (example->master_grp = ado_mixer_playback_group_create (
              mixer, SND_MIXER_MASTER_OUT, SND_MIXER_CHN_MASK_STEREO,
              example->master_vol, example->master_mute)) == NULL )
        error++;

    return (0);
}

Don't feel that you must have all the mixer elements represented in the mixer groups. This isn't the point. The mixer elements and mixer groups are meant to be complementary. Nonstandard, complex, or just plain weird controls may not be needed at the mixer group level.
They may be better as a simple mixer element or mixer switch. The mixer groups are intended to help the developer of audio applications figure out which mixer elements are related to each other and to a particular connection (e.g. PCM OUT). In this sample mixer, none of the individual input groups (PCM IN, MIC IN, CD IN) has volume or mute controls. They're still required because they contain the capture selection switch, but the only volume and mute controls on the input side are in the Input Gain group. This is important to note because it points out that you don't need to completely fill the requirements to specify a group. If you're missing a mixer element in your hardware, you can specify NULL for the missing element, if it makes sense to group them that way.

If your card uses one of the standard codecs (listed in the Supported Codecs appendix), the amount of work you have to do is reduced. The benefit of using standardized codecs is that you just have to write a few access functions, typically the ones that read and write the codec registers.

Before we can write these functions, you should copy one of the existing driver directories (/audio/src/hardware/deva/*) in the DDK to a directory named for your card or chip type. The best code to copy is either the template driver or the Sound Blaster (sb), depending on your answers to the questions in the Evaluating Your Card chapter. After copying the directory, you should rename the C, header, and usage-message files.

After you've verified that the hardware exists, you need to map in the card memory if it's memory-mapped and initialize a mutex in the context structure. The mutex is used to make sure only one thread is accessing the hardware registers at a given point in time. Generally you lock the mutex around any routines that access card registers.

Now that we have access to the hardware, the next step is to inform the upper layers of the driver of the capabilities of this hardware. We do this by creating devices: mixers and PCM channels.
We'll look at creating the PCM device in the next chapter. Since we have a standard codec, we use the ado_mixer_dll() function to create the mixer structure and load the appropriate mixer DLL. The prototype is:

int32_t ado_mixer_dll( ado_card_t *card,
                       char *mixer_dll,
                       uint32_t version,
                       void *params,
                       void *callbacks,
                       ado_mixer_t **rmixer );

The arguments to ado_mixer_dll() include:

- card: the card structure for your device
- mixer_dll: the name of the mixer DLL to load
- version: the version of the mixer DLL interface
- params: a structure of parameters to pass to the mixer DLL
- callbacks: a structure of callbacks that the mixer DLL fills in
- rmixer: the address of a pointer that's set to the created mixer

The data types and contents of the params and callbacks structures depend on the mixer DLL that you're loading; see the Supported Codecs appendix for details.

The params structure is the key to making the mixer work correctly. It tells the mixer DLL about functions that you've written in your Audio HW DLL, typically to read and write the codec registers. This structure contains pointers to a hw_context structure and (typically) functions that read and write the codec registers. The hw_context is generally (but doesn't need to be) the same context that you allocated at the beginning of the ctrl_init() function. The hw_context is passed back to you as a parameter when the mixer DLL calls the read or write routines.

The callbacks structure tells you about functions that are defined in the mixer DLL that your Audio HW DLL needs to call in order to control the device. The ado_mixer_dll() function fills in this structure, based on the mixer DLL that you're opening.

To test this code, start up the driver and input an analog signal to one of the codec inputs (line, CD, etc.). Then, using the GUI mixer, try to control the volume of that signal at the speakers. Once this works reliably, you can move on to the next chapter.
https://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx.doc.ddk_en_audio/analogmixer.html
Hey everyone. I am supposed to create a program that finds the area of a triangle... this I can do fine. My only problem is I was told that I need to use a get and set method to calculate the area. I don't exactly know how to use a get and set method with an object... would anybody show me how to do this or set me in the right direction? Here is what I have, which works:

Main class:

public class TriMain {
    public static void main(String[] args) {
        TriObj myTriObj = new TriObj();
        myTriObj.runIt();
    }
}

Object:

import java.util.Scanner;

public class TriObj {
    public void runIt() {
        Scanner input = new Scanner(System.in);
        int base;
        int height;
        int area;
        System.out.println("Enter base of triangle: ");
        base = input.nextInt();
        System.out.println("Enter height of triangle: ");
        height = input.nextInt();
        area = (base * height / 2);
        System.out.println("The area of the triangle is: " + area);
    }
}
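One common direction (the class and method names below are illustrative, not from the original thread) is to move the base and height into private fields with get and set methods, and then compute the area through those accessors:

```java
// One possible refactoring: private fields, get/set methods, and an
// area derived from the fields via the getters.
public class Triangle {
    private int base;
    private int height;

    public int getBase() { return base; }
    public void setBase(int base) { this.base = base; }

    public int getHeight() { return height; }
    public void setHeight(int height) { this.height = height; }

    // The area is computed from the fields, so callers never touch them directly.
    public int getArea() { return getBase() * getHeight() / 2; }

    public static void main(String[] args) {
        Triangle t = new Triangle();
        t.setBase(10);
        t.setHeight(4);
        System.out.println("The area of the triangle is: " + t.getArea());
    }
}
```

The Scanner-based input from the original code would then feed setBase() and setHeight() instead of local variables.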
https://www.daniweb.com/programming/software-development/threads/240737/help-with-get-and-set-methods
Generating a signed URL for an Amazon S3 file using boto

I was refactoring some code for EZ Exporter, a data exporter app for Shopify, last week as our customer base has been growing pretty steadily these last few months. I figured it's time to do some optimization to make sure the app is ready for future growth.

One of the functionalities that we needed to optimize is how we handle manual downloads. Initially, I just made it very simple by returning the data as a CSV file and streaming it directly to the client as part of the response. While it works just fine currently with a smaller user base, we know this will eventually become a problem as we get more users. For example, if a user has a big report to generate, it will tie up the web worker while the report is being generated, and even afterward, while the data is being downloaded.

To optimize this process, we have a Celery task in the background that generates the report and a view to check the status of that task. This way, as soon as the user clicks the "Download" button, the task runs asynchronously in the background and the response is returned right away with the Celery task ID that will be used to check the task status via periodic AJAX calls. Once the report is done, we then write the file directly to S3 and generate a signed URL that is returned to the user to start the download process. At this point of the process, the user downloads directly from S3 via the signed private URL.
Here's a code sample showing how we handle the S3 upload and generating the private download URL using boto (code is written in Python 3 with boto 2):

import uuid
from io import BytesIO

from django.conf import settings
import boto
from boto.s3.key import Key

def download_file(data, output_filename):
    conn = boto.connect_s3(settings.AWS_ACCESS_KEY_ID,
                           settings.AWS_SECRET_ACCESS_KEY)
    bucket = conn.get_bucket(settings.AWS_BUCKET_NAME)

    k = Key(bucket)
    k.key = 'temp-downloads/{}'.format(uuid.uuid4().hex)
    k.set_contents_from_file(BytesIO(data.encode('utf-8')))

    download_url = k.generate_url(
        expires_in=60,
        response_headers={
            'response-content-type': 'text/csv',
            'response-content-disposition': 'attachment; filename={}'.format(
                output_filename),
        }
    )
    ...

The main thing to look at here is the k.generate_url() method. As you can see, there's an expires_in parameter that lets you set an expiration for the URL. In this case, the signed URL is only valid for 60 seconds. This was intentional for additional security, in case someone gets a hold of the signed URL and makes it publicly available. If that URL is accessed after 1 minute, the requestor will get an "Access Denied" message from AWS.

You can also override the default response headers with your own (see the S3 documentation for which headers you can override). In the code above, we wanted to make the content type "text/csv" and also use a different filename from what's actually stored in S3. We store the files in S3 using UUIDs for the filenames so we don't have to worry about name conflicts.
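For illustration only, here is a rough sketch of the legacy query-string signing scheme (AWS Signature Version 2) that boto 2's generate_url() implements under the hood. The real library handles many more cases (virtual-hosted vs. path style addressing, canonical-resource encoding, and so on), so treat this purely as a picture of how the expiry and the overridden response headers end up baked into the signature:

```python
# Simplified AWS Signature Version 2 sketch -- not production code.
import base64
import hashlib
import hmac
import time
from urllib.parse import quote, urlencode


def sign_url_v2(access_key, secret_key, bucket, key,
                expires_in=60, response_headers=None):
    expires = int(time.time()) + expires_in
    params = dict(sorted((response_headers or {}).items()))

    # Overridden response-* headers are part of the canonical resource,
    # which is why they can't be tampered with after signing.
    canonical = '/{}/{}'.format(bucket, key)
    if params:
        canonical += '?' + '&'.join(
            '{}={}'.format(k, v) for k, v in params.items())

    string_to_sign = 'GET\n\n\n{}\n{}'.format(expires, canonical)
    signature = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()).decode()

    query = dict(params)
    query.update({'AWSAccessKeyId': access_key,
                  'Expires': str(expires),
                  'Signature': signature})
    return 'https://{}.s3.amazonaws.com/{}?{}'.format(
        bucket, quote(key), urlencode(query))
```

Because the Expires timestamp is inside the signed string, changing it in the URL invalidates the signature, which is what produces the "Access Denied" after the minute is up.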
https://www.calazan.com/generating-a-signed-url-for-an-amazon-s3-file-using-boto/
#include <ServoTimer2.h>  // the servo library

// define the pins for the servos
#define leftWpin 10
#define rightWpin 12

const int MIN_PULSE = 544;   // the shortest pulse sent to a servo (0 degrees)
const int MAX_PULSE = 2400;  // the longest pulse sent to a servo (180 degrees)

int degreesToUS(int degrees)
{
  return (map,degrees, 0,180, MIN_PULSE, MAX_PULSE);
}

ServoTimer2 leftWheel;  // declare variables for up to eight servos
ServoTimer2 rightWheel;

void setup()
{
  leftWheel.attach(rightWpin);  // attach a pin to the servos and they will start pulsing
  rightWheel.attach(leftWpin);
  leftWheel.write(degreesToUS(50));
  rightWheel.write(degreesToUS(130));
  delay(950);
  rightWheel.detach();
  leftWheel.detach();
}

void loop()
{
}

#include <Servo.h>  // Include servo library

Servo servoRight;  // Declare right servo
Servo servoLeft;

void setup()  // Built in initialization block
{
  servoRight.attach(12);  // Attach right signal to pin 12
  servoLeft.attach(10);
  servoLeft.write(50);    // Left wheel clockwise
  servoRight.write(130);  // Right wheel counterclockwise
}

void loop()  // Main loop auto-repeats
{
}

int degreesToUS(int degrees)
{
  return (map(degrees, 0,180, MIN_PULSE, MAX_PULSE));
}

The original post you posted is where I got the library. I had some errors and found a forum post where the OP had the same errors; he posted the code containing the fixed errors, and I haven't received an error (when compiling) since I used that code.
http://forum.arduino.cc/index.php?topic=212441.msg1565710
The 2016 AWS re:Invent will take place in Las Vegas in less than a week. We are all expecting big updates, especially regarding one of the AWS suite's highest trending and pervasive services: Lambda. In the past few days, AWS announced some interesting updates, and I assume that they are just preparing the ground for bigger news during the event. Here is a short recap of what AWS has announced so far.

How many times have you hard-coded sensitive information directly in your Lambda Function code? How often have you deployed a "production" Lambda Function with the wrong "development" keys? How many identical Lambda Functions have you re-deployed just because a simple configuration parameter had changed? I bet many of you have been avoiding ugly workarounds in favor of sophisticated automation tools (such as the Serverless Framework) to solve most of these problems. Either way, you can finally configure Environment Variables natively on AWS Lambda by using the AWS KMS secure storage. It means that your variable values will be securely encrypted and retrieved by AWS when needed. The interesting part is that you won't need to change your Lambda Function code much, as you can simply read the variable values from the standard environment of each Runtime (e.g. process.env in Node.js, os.environ in Python, etc.). I'd like to highlight two important details of this new feature.

AWS SAM might be the biggest Serverless update since the initial AWS Lambda announcement. Tim Wagner already announced Project Flourish at the first ServerlessConf in New York earlier this year. AWS SAM appears to be the new name of Flourish, which is aimed at becoming the first vendor-neutral reference for Serverless applications.
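As a quick aside on the environment-variables update above, a Python handler can read its configuration from the standard environment like this (TABLE_NAME and STAGE are illustrative names, not part of the announcement):

```python
# Minimal sketch of a Lambda handler reading its configuration from
# environment variables via Python's standard os.environ.
import os


def handler(event, context):
    # Defaults make the function usable even when the variables are unset.
    table = os.environ.get('TABLE_NAME', 'items-dev')
    stage = os.environ.get('STAGE', 'dev')
    return {'table': table, 'stage': stage, 'ok': True}
```

Switching the same code between "development" and "production" keys then becomes a configuration change rather than a redeploy of modified source.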
In practice, AWS SAM makes it easy to define all of the resources used by a Serverless application. The new model will allow you to create specific resources that are optimized for serverless apps with Amazon CloudFormation. This is incredibly useful for all of the automation tools and frameworks that already help you with the deployment and management of Lambda Functions, API Gateway APIs, DynamoDB Tables, etc.

Technically, you can finally define a Serverless application with only a few lines of text. Indeed, you can use the new AWS::Serverless CloudFormation namespace. Here is a list of the new resource types:

- AWS::Serverless::Function
- AWS::Serverless::Api
- AWS::Serverless::SimpleTable

For example, here is how you'd define a simple read-only API connected to a DynamoDB table:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Read-only access to DynamoDB table.
Resources:
  MyReadOnlyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.get
      Runtime: python2.7
      Policies: AmazonDynamoDBReadOnlyAccess
      Environment:
        Variables:
          TABLE_NAME: !Ref ItemsTable
      Events:
        ReadItem:
          Type: Api
          Properties:
            Path: /item/{itemId}
            Method: get
  ItemsTable:
    Type: AWS::Serverless::SimpleTable

Unfortunately, some of the native Lambda event sources are not supported yet—namely, AWS CodeCommit—but I am looking forward to its upcoming improvements and how it will change the development workflow of Serverless applications. If you want to learn more, you can find the project on GitHub. I have personally met many AWS users who have been waiting for this feature for a long time. API Gateway only supported JSON payloads and responses, and the implementation of API resources related to multimedia has been pretty hard, or hacky, at least. You can now provide binary payloads—e.g. a big PNG image—and expect binary responses such as a resized PNG image.
To achieve this, you simply have to define two options:

- the binaryMediaTypes that your API should treat as binary (e.g. image/png)
- the contentHandling behavior of your integration requests and responses (CONVERT_TO_BINARY or CONVERT_TO_TEXT)

Unfortunately, the new contentHandling property is not fully supported by API Gateway Resources backed by AWS Lambda, for which the request body is always converted to JSON. I'm looking forward to further improvements on this front.

Another interesting use case has always been the possibility to serve gzipped responses with API Gateway and Lambda, which makes sense if the size of your JSON responses is big enough and easy to cache. So far, the only available alternative is serving an API Gateway through an additional CloudFront distribution or implementing custom decompression functionality on your client.

The Serverless ecosystem is evolving on many fronts, and many other actors are working hard to make an impact on the Serverless revolution. The following updates are not strictly related to AWS and won't be affected by the upcoming AWS re:Invent, but are definitely worth mentioning. If you are interested in Serverless updates, you can check our blog's Serverless archive.

The Serverless Framework has changed a lot since we first talked about it six months ago. It was only in version 0.5 and a lot has happened in just a few months, including their $3M fundraising. After Version 1.0 was released on October 12, the development team at serverless.com committed to a bi-weekly release plan, and they've just announced version 1.2. You can find all of the scheduled milestones on GitHub. The next version, 1.3, is due on November 30 and it will bring more exciting features. Not to mention all of the new, top-secret announcements yet to come during the AWS re:Invent. For example, they have just disclosed a new open-source project called Serverless Dashboard. It will represent a new user-friendly layer on top of the Serverless CLI to enhance the framework user experience.
Here is a screenshot of the native app for Mac, which you can find on GitHub:

Last week, Microsoft announced the general availability of Azure Functions. Microsoft released its Function as a Service in preview only eight months ago. It supports C#, JavaScript, Python, and PHP. This new serverless platform is gaining traction with big enterprises and small startups in the Microsoft ecosystem. In fact, it is creating a new Application Model under the Azure App Services, in addition to a whole new series of services such as the Azure Bot Service. I am looking forward to more features coming soon, as well as its integration in the Serverless Framework.

Iron.io announced its first major open-source project, IronFunctions. The project is mostly written in Go, and it represents a new effort to build a hybrid/multi-cloud serverless solution by using open-source technologies such as Kubernetes, Mesosphere, CloudFoundry, OpenShift, Docker, etc. Choosing Docker as a packaging mechanism makes IronFunctions language-agnostic, although the team also decided to support the AWS Lambda packaging format to improve portability, and they announced that they would support others soon.

AWS announced Lambda at its 2014 re:Invent, and I'd expect much more news from this year's event. Since then, Serverless has become a global movement. Each Cloud provider has built its own serverless solution—with different approaches and benefits—and at the same time, hundreds of local communities have gathered to share serverless ideas, wishlist items, and problems. Here is my personal serverless wishlist for the upcoming months. I hope that AWS will surprise me next week!

Let us know if you have interesting improvements to add to our wishlist, and meet us at AWS re:Invent next week!
https://cloudacademy.com/blog/serverless-news-aws-reinvent-2016/
Sampling in Voronoi grids

In RT computations, the need to uniformly sample points in the cells of a grid often arises. A typical example is that of the evaluation of the average value of a function (e.g., a density function) within a cell. Whereas this task is relatively simple in regular grids, the situation is more complicated in Voronoi grids. Cells in Voronoi grids are indeed convex polyhedra with an arbitrary number of faces, whose shapes and volumes are determined by the initial distribution of sites. The Voronoi helper class in Hyperion contains support for producing a list of sampling points for each cell.

Simple case: averaging of a function

In the simplest case, we might want to evaluate a function in all cells (for example a function to find the density), and rather than using only the position of the sites, we want to use several random samples in each cell. The easiest way to do this is to make use of the top-level VoronoiGrid class and the evaluate_function_average() method. Suppose we have a density function defined as

def density_func(x,y,z):
    # Density is proportional to the inverse
    # square of the distance from the origin.
    density = 1 / (x*x + y*y + z*z)
    return density

We can now generate some random sites:

>>> import numpy as np
>>> N = 100000
>>> x = np.random.uniform(size=N)
>>> y = np.random.uniform(size=N)
>>> z = np.random.uniform(size=N)

and set up a voronoi grid:

>>> from hyperion.grid import VoronoiGrid
>>> g = VoronoiGrid(x, y, z)

We can then simply call the evaluate_function_average() method to evaluate the function at n_samples sites in total, with a minimum of min_cell_samples samples in each cell:

>>> dens = g.evaluate_function_average(density_func, n_samples=1000000, min_cell_samples=5)

Advanced: accessing the random samples directly

In this example, we now show how to manually get the random samples in each cell and how to do the same function evaluation as above.
As before, we first initialise a cubic domain with unitary edge, and we fill it with randomly-placed points:

>>> import numpy as np
>>> N = 100000
>>> x = np.random.uniform(size=N)
>>> y = np.random.uniform(size=N)
>>> z = np.random.uniform(size=N)
>>> sites_arr = np.array([x, y, z]).transpose()

Next, we import the voronoi_helpers module and we compute the Voronoi grid g generated by the points in sites_arr:

>>> from hyperion.grid import voronoi_helpers as vh
>>> g = vh.voronoi_grid(sites_arr, np.array([[0, 1.], [0, 1], [0, 1]]), n_samples=1000000, min_cell_samples=10)
INFO: Computing the tessellation via voro++ [hyperion.grid.voronoi_helpers]

Here we have passed to voronoi_grid two extra parameters related to the sampling:

- n_samples is the (approximate) total number of sampling points to be produced. For each cell, the algorithm will produce a number of sampling points that is proportional to the cell volume: larger cells will be sampled with more points than smaller cells;
- min_cell_samples is the minimum number of sampling points per cell. If a cell is small enough, it could be that no sampling points are allotted to it. This parameter forces the minimum number of sampling points to be allocated to a cell, regardless of its volume. The default value for this parameter, if omitted, is 10.

When a Voronoi grid is constructed with a positive n_samples parameter, it will expose two properties:

- samples is a list of three-dimensional vectors representing all sampling points in the domain;
- samples_idx is a list of indices describing, in sparse format, to which cell the points in samples belong.
For instance, in the example above:

>>> g.samples
array([[ 0.57565603, 0.9219989 , 0.15469812],
       [ 0.58406352, 0.91473664, 0.15834503],
       [ 0.57642814, 0.93045367, 0.16361907],
       ...,
       [ 0.80025712, 0.18526818, 0.61809793],
       [ 0.78721772, 0.18366617, 0.62582103],
       [ 0.79493898, 0.17735752, 0.62803905]])
>>> g.samples_idx
array([ 0, 10, 20, ..., 1147105, 1147115, 1147131], dtype=int32)

This means that the sampling points for the first cell have indices 0 to 10 in g.samples, the sampling points for the second cell have indices 10 to 20 in g.samples, and so on. If we now suppose we have a density function defined as

def density_func(x,y,z):
    # Density is proportional to the inverse
    # square of the distance from the origin.
    density = 1 / (x*x + y*y + z*z)
    return density

where x, y and z are 1-D numpy arrays of coordinates, we can then first compute the density at all sampling points like this:

>>> dens_all = density_func(g.samples[:,0],g.samples[:,1],g.samples[:,2])

We can then compute the average density per cell with:

>>> dens_average = np.add.reduceat(dens_all, g.samples_idx[:-1]) / np.diff(g.samples_idx)

That is, dens_average will be an array of 1E5 elements, each containing the average value of density_func() for each cell of the grid:

>>> dens_average
array([ 0.8288213 , 3.24626334, 0.74344873, ..., 2.98673651, 0.64962755, 0.96117706])
>>> len(dens_average)
100000
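The reduceat/diff averaging trick generalizes to any per-cell quantity. Here is a tiny standalone illustration with made-up numbers (three cells holding 2, 3 and 2 samples):

```python
# np.add.reduceat sums each cell's samples; np.diff gives the sample counts.
import numpy as np

values = np.array([1.0, 3.0, 2.0, 4.0, 6.0, 10.0, 20.0])  # samples, cell-ordered
samples_idx = np.array([0, 2, 5, 7])  # cell i owns values[idx[i]:idx[i+1]]

cell_sums = np.add.reduceat(values, samples_idx[:-1])  # [ 4. 12. 30.]
cell_counts = np.diff(samples_idx)                     # [2 3 2]
cell_means = cell_sums / cell_counts
print(cell_means)  # [ 2.  4. 15.]
```

Note that samples_idx carries one trailing entry (the total number of samples) so that np.diff yields the count of the last cell as well.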
http://docs.hyperion-rt.org/en/stable/advanced/voronoi_sampling.html
Leading LightSwitch

Logging in to a LightSwitch Application Using Social Media Credentials

More and more Web sites are outsourcing the process of authenticating a user to a third-party Web site such as Windows Live ID, Yahoo!, Google or Facebook. These social media sites fulfill the role of an identity provider and grant the requestor a signed token, a key that proves the user is who he says he is. LightSwitch shipped with three possible authentication modes: None, Windows credentials and Forms authentication. With LightSwitch, choosing an authentication mode is as easy as selecting one from the application's properties. LightSwitch then sets up the database and guides the user through the selected login process without any additional effort from the developer. The authentication is fast and convenient—but in no supported or obvious way open for extensibility. Not supported and not obvious don't always mean impossible, however. A developer with a bit of knowledge about LightSwitch internals can find ways to bypass the built-in LightSwitch authentication. In this article, I'll show you how I have made it possible for users of my LightSwitch applications to log in using their social media credentials of choice. Before talking specifically about LightSwitch, I want to provide a brief recap on—or an introduction to, if you're new to the topic—Security Token Services.

Understanding Security Token Services

Not too long ago, each application, Web site or Web service was a silo that had its own way to authenticate and authorize users. A couple years back, I had about a dozen username–password combinations for different sites, and more often than not I clicked the "Forgot password?" link more than any other. This problem inconvenienced site maintainers as well as regular users.
Even if company A forged an alliance with company B so that the 100,000 users from company B had access to company A's software, those users had to be duplicated and maintained across the two sites. The answer to this duplication problem was as simple as taking the elements that authenticate and authorize users out of the application, Web site or Web service and putting them in a separate service called a Security Token Service (STS). Figure 1 shows how a classic, simple STS system works.

Figure 1 A simple STS scenario

Instead of maintaining a custom login mechanism itself, the service the user interacts with trusts the STS to do the authentication and authorization, and focuses solely on the business logic. The flow now goes like this:

- The user's browser or application requests a security token from the STS by passing a valid username–password combination.
- The STS validates the username and password.
- The STS sends back a security token, which is a key with the following characteristics:
  - Is signed, to prove that it comes from the STS.
  - Proves that the user is authenticated—that is, who she claims to be.
  - Contains authorization claims, which can be considered simple strings that represent the user's permissions and properties—that is, what the user is allowed to do or is denied from doing.
- The user uses this security token as a key to unlock the service.
- The service verifies that the security token is genuine by checking the certificate used to sign the token.
- The service serves the client's request.

Separating the authentication and authorization process from the Web site, Web service or application by setting up an STS offers some benefits because of the configurability of this setup:

- Multiple applications owned by company A can now trust the same STS, meaning that multiple applications can use one login service.
- One STS can authenticate users in multiple ways: by username–password combination, by Windows credentials or even by trusting tokens signed by a third party.
- Using third-party tokens helps solve the duplication problem between two companies (A and B) in an alliance, as illustrated in Figure 2. This third-party STS then becomes an identity provider to the original STS. For example, the STS from company A can now trust tokens from the STS from company B. An employee from company B can then authenticate with STS B, using the Windows credentials stored in company B's Active Directory. The user's browser or application uses security token B to authenticate to STS A. STS A recognizes that STS B has signed the token and trusts STS B to know that the token genuinely means that the user is who he says he is. Using simple mapping, claims from domain B are mapped to claims from domain A. For example, if the token has the claim B\helpdesk, STS A issues a security token with claim A\admin. The user, an employee from company B, can now use that token to access company A's software. Observe that company A's software didn't have to be modified for this use; the company A STS has to trust only the company B STS, which is a matter of configuration.

Figure 2 A more advanced STS scenario

Switching from in-application authentication to STS architecture might seem like a lot of work, but the benefits definitely outweigh the effort, which is why most large companies, including Microsoft (Windows Live ID STS), Facebook, Google and Yahoo!, currently use this architecture.

Azure Access Control Service

If you're interested, you can find some excellent documentation about STS on the Microsoft Patterns & Practices site (). However, I suggest you resist the urge to use the Patterns & Practices framework to code a custom STS yourself and instead use the Azure Access Control Service (ACS). Azure ACS is an inexpensive, easy-to-configure security token service hosted in the cloud.
It has built-in support to trust other STSs as identity providers, including those from Windows Live ID, Facebook, Google and Yahoo!. If you have an Azure account and download the Windows Identity Foundation SDK (), in the first lab (Exercise 1, at) you'll find out how to create an ASP.NET Web site where users can authenticate with their Windows Live ID, Google or Yahoo! account in 30 minutes or less, as shown in Figure 3.

Figure 3 Logging in to an ASP.NET application using social media credentials

The resulting ASP.NET "Hello social world" application, shown in Figure 4, is the first corner piece of the puzzle of how we can enable authentication using social media credentials for users of LightSwitch applications.

Figure 4 "Hello social world" ASP.NET application created in Windows Identity Foundation SDK lab

Extending the Azure ACS Sample

If you have an existing (or new) LightSwitch application with its authentication mode set to Forms and you followed the ACS lab, you can now combine the ASP.NET and LightSwitch applications. In the ASP.NET application you created, remove all visual controls and add an IFrame instead. Then replace the code behind with this code:

public partial class _Default : System.Web.UI.Page
{
    public static readonly string LightSwitchApplication =
        "PathToYourLightSwitchApplication";

    protected void Page_Load(object sender, EventArgs e)
    {
        MyFrame.Attributes["src"] = LightSwitchApplication +
            "default.htm?UserName=" +
            Thread.CurrentPrincipal.Identity.Name.Replace(" ", "");
    }
}

The result is an ASP.NET application that hosts the LightSwitch application, which in turn shows the LightSwitch login screen, as you can see in Figure 5.

Figure 5 A simple, clean and totally undesirable LightSwitch login screen

You now have an ASP.NET page that requires the user to log in using her social media credentials and then takes the authenticated user's name (from Thread.CurrentPrincipal) and passes it to the LightSwitch application via the URL.
Once the LightSwitch application starts, the user is again presented with a login screen. Obviously, since the user already provided credentials, the next and most challenging step is to rip out the LightSwitch login screen and replace it with an automated process.

The LightSwitch View

In my April Leading LightSwitch article (), I explained the LightSwitch MVVM architecture, concluding that the View doesn't have any XAML. (This notion that there's no XAML in a LightSwitch application comes from John Rivard and Karol Zadora-Przylecki, from their blog post series about the LightSwitch architecture:.) Obviously, it isn't quite true that no XAML is used in the LightSwitch View, but it is true that there is no XAML you need to worry about—unless you want to customize your application. If you're a professional developer or you just want to do some graphical modifications to your LightSwitch application, chances are you'll need a good understanding of and access to the View layer. Let's start with a brief summary of how this View layer is composed to see where and how the login screen is displayed.

Figure 6 In-browser LightSwitch application

A standard LightSwitch application, like the example in Figure 6, has various components that form the View layer:

- ASP.NET page that hosts the Silverlight application
  If you select your LightSwitch application in Solution Explorer and then select Logical View, you can find this page, default.htm, in the ServerGenerated project (Visual Studio 2010) or at the root of your LightSwitch project (Visual Studio 11 beta). Being able to find this page in Solution Explorer implies that the developer can modify it.
- LightSwitch “application bootstrapper” A Silverlight application that loads the metadata handles the login if necessary (depending on the chosen login mode) and navigates to the Shell page. The implementations are found mainly in the assemblies Microsoft.LightSwitch.dll, Microsoft.LightSwitch.Client.dll and Microsoft.LightSwitch.Client.Internal.dll. (Read more about metadata in MVVM architecture in my April Leading LightSwitch article, at.) - Shell page This Silverlight page gives a visual representation to the commands (visible at the top in the default LightSwitch 1.0 shell) and the navigation menu (visible at the left), and it has a placeholder for the screens. The implementations for the default shell are in the same assemblies mentioned in the preceding entry, although you can use custom shells instead if you like. - Screens The screens are made up of Silverlight controls that are laid out as defined in the metadata (explained in my April Leading LightSwitch article). The ControlTemplates, or the XAML that defines how the controls should look, are defined by the theme. The implementations for the default theme are in the same assemblies mentioned earlier, although you can use custom controls and custom themes if you prefer. Contrary to what most professional developers believe, a LightSwitch application is not a generated and closed box. Yes, a LightSwitch application is partly generated, but it’s definitely not a closed box. The ASP.NET page can be modified to fit your needs (see entry 1 in the preceding list) and to use Shell extensions (entry 3), theme extensions (entries 3 and 4), Control extensions (entry 4) or Silverlight Custom controls (entry 4). A developer can take fine-grained control over any visual part of a LightSwitch application—any visual part, that is, except for the application bootstrapper (entry 2), which contains the login page we want to replace. 
When the LightSwitch application bootstrapper navigates to the login page, Silverlight fires an event. By adding an event handler to this event, you can bypass the entire LightSwitch login logic. It isn't possible to prevent the login logic from happening, so you're not replacing the login page—rather, you're acting on the event that the LightSwitch bootstrapper is trying to navigate to the login page to take control of the application flow. The code in Figure 7 should be placed in the Application_Initialize extension point ("write code"). It's the only LightSwitch extension point that occurs before the login page is shown.

using System.Windows;
using System.Windows.Controls;
using Microsoft.LightSwitch.Runtime.Shell.Internal.Implementation;
using Microsoft.LightSwitch.ApplicationInfrastructure.Utilities.Internal;
using Microsoft.LightSwitch.Runtime.Shell.ViewModels.Login;
using Microsoft.VisualStudio.ExtensibilityHosting;

namespace LightSwitchApplication
{
    public partial class Application
    {
        Frame rootFrame = null;

        partial void Application_Initialize()
        {
            Microsoft.LightSwitch.Threading.Dispatchers.Main.BeginInvoke(() =>
            {
                rootFrame = ((Page)((ContentPresenter)System.Windows.Application
                    .Current.RootVisual).Content).Content as Frame;
                if (rootFrame != null)
                    rootFrame.Navigated +=
                        new System.Windows.Navigation.NavigatedEventHandler(
                            rootFrame_Navigated);
            });
        }

        void rootFrame_Navigated(object sender,
            System.Windows.Navigation.NavigationEventArgs e)
        {
            if (e.Content is LoginPage)
            {
                rootFrame.Navigated -= rootFrame_Navigated;
                string userName = null;
                if (QueryStringHelper.TryGetValue("UserName", out userName))
                {
                    ILoginViewModel vm =
                        VsExportProviderService.GetExportedValue<ILoginViewModel>();
                    vm.UserName = userName;
                    vm.Password = "RandomPassword";
                    vm.LoginCommand.Execute(vm);
                }
            }
        }
    }
}

Figure 7 Bypassing the LightSwitch login screen

In the Application_Initialize method, find the Silverlight root frame and add an event handler to its navigated event. Do this on the LightSwitch main dispatcher to avoid threading issues.
Once this event fires, check whether the page being navigated to is LoginPage. If it is, the best practice is to remove the event handler to avoid the small memory leak where the garbage collector won’t collect the event handler. Use the LightSwitch QueryStringHelper class to get the UserName that the ASP.NET page passed in the URL. If this information is available, utilize the LightSwitch ILoginViewModel implementation to handle the login by passing the UserName and a random password, and executing the LoginCommand on that ViewModel.

A Short Technical Recap

We have set up an ASP.NET Web application that delegates the login process to Azure ACS. Thanks to simple configuration, we can now delegate this application to the social media network we want to access. The ASP.NET application passes the credentials to the LightSwitch application that contains some custom code to use these credentials instead of the default login screen.

Words to the Wise

Although LightSwitch doesn’t have out-of-the-box support for this scenario, you can offer the user the ability to log on using his social media platform of choice by making a couple small hacks, as demonstrated in this article. If you’re going to try this, be sure to heed the words of caution in the following paragraphs.

Bypassing the LightSwitch LoginPage isn’t officially supported and uses LightSwitch internal implementations. Because of this, the code is highly dependent on parts of the LightSwitch framework that aren’t included in the public API and thus could change if in a future release the LightSwitch team sees any need for it. Strictly speaking, using the internals could even be considered a violation of the EULA. Actually, you shouldn’t even be reading this article; your browser will self-destruct in five seconds.
This article is not a step-by-step guide but rather tries to lay out the crucial pieces of the puzzle of how to create a LightSwitch application that provides users with a familiar, consistent login screen whether they are at work, at home or using social media. Your next step should be to think about the security of your implementation. For example, you can encrypt the UserName instead of sending it in the URL as plain string, preferably including some notion of the time so that the URL can’t be hijacked and reused by others. Consider using a hash of the original username instead of a hardcoded RandomPassword.

If you use LightSwitch 2.0 (in Visual Studio 11 beta), you’re exposing your domain data as OData endpoints, which could be consumed in different clients than the Silverlight client, such as a Windows Phone application. (See my March article on how to consume a LightSwitch OData service from a Windows Phone application:.) These services are not secured by the Azure ACS but instead are secured by the LightSwitch authentication you choose (None, Windows credentials or Forms).

If you investigate the mapping of the social media security token to the Azure ACS security token, you’ll notice that the Windows Live ID security token doesn’t contain a claim for “name.” Depending on the identity providers (social media platforms) you choose, you’ll want to find a specific claim that fits your needs. Figure 8 shows that the “nameidentifier” claim would make a valid candidate if you opt to use Google, Yahoo! and Windows Live ID as your identity providers.

Figure 8 The ACS rule map

If you read and understand the preceding cautions, welcome to the wonderful world where STS, claim-based security and LightSwitch are blended together. Recent studies show that employees are up to five times more careful about giving out their social media credentials than they are about sharing the username and password they use at work.
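The hardening ideas above — don't pass the raw username, and include a timestamp so a URL can't be hijacked and replayed — can be sketched outside of LightSwitch. The following Python sketch is illustrative only; the helper names and the shared-secret handling are assumptions, not part of the article's implementation:

```python
import hmac, hashlib, time, base64

# Assumption: the web page and the application share this key out of band.
SECRET = b"shared-secret-between-web-page-and-app"

def make_token(username, now=None):
    """Build a signed 'username|timestamp' token suitable for a URL."""
    ts = str(int(time.time() if now is None else now))
    payload = (username + "|" + ts).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def check_token(token, max_age=300, now=None):
    """Return the username if the signature is valid and fresh, else None."""
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64.encode())
    except Exception:
        return None  # malformed token
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None  # tampered token
    username, ts = payload.decode().split("|")
    age = (time.time() if now is None else now) - int(ts)
    return username if 0 <= age <= max_age else None  # reject replays
```

A real deployment would also have to consider clock skew and key rotation; the point is only that the URL carries a verifiable, expiring claim instead of a plain username.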
Besides that, it’s simply more convenient for users to remember only a few username–password combinations and use them to log in to a variety of applications. I have used this single-sign-on technique since LightSwitch 1.0, including in MyBizz Portal, an award-winning LightSwitch network (). I have received some great feedback from users since I made it possible for them to use their social media credentials to log in to their business applications.

Jan Van der Haegen is a green geek who turns coffee into software. He’s a loving husband, he’s proud to be part of an international team at Centric Belgium, and so addicted to learning about any .NET technology—Visual Studio LightSwitch in particular—that he maintains a blog on his coding experiments. You can find his latest adventures at.

Thanks to the following technical expert for reviewing this article: Paul Patterson.
https://msdn.microsoft.com/en-us/magazine/jj129610.aspx
©2005 Felleisen, Proulx, et al.

By now we are aware of the fact that there are many different ways to represent the same information. We have also seen that the same structure of data can represent all kinds of different information. Java libraries, specifically the Java Collections Framework, contain hierarchies of different classes and interfaces that can be used to represent data in many different forms. This allows the programmers to avoid mistakes, share code, and better understand programs written by others. We will learn about some of the classes and interfaces included in this framework, and will learn to read the documentation to understand how to use any of the others.

The first class from the framework we will use is ArrayList. It contains a list of data, similar to the lists we have built ourselves, and does not have a limit on the number of elements that can be added. Adding an element to an ArrayList mutates the list: the add method produces a boolean value that indicates success, but it does not produce a new ArrayList. Here is an example. We first construct an empty ArrayList:

// using the constructor to build an empty ArrayList
ArrayList alist = new ArrayList();

Now, the test shows us that it is initially empty, and after adding three elements, its size grows as expected.
The examples also show the use of the methods get(int index) and set(int index, Object value):

public void runTests(){
    test("IsEmpty: ", true, alist.isEmpty());

    alist.add("Hello");
    alist.add("Good Day");
    alist.add("Goodbye");

    // currently the list has three elements
    test("IsEmpty: ", false, alist.isEmpty());
    test("Current size: ", 3, alist.size());

    // elements can be accessed directly via index
    test("element at 2: ", "Goodbye", alist.get(2));
    test("element at 1: ", "Good Day", alist.get(1));
    test("element at 0: ", "Hello", alist.get(0));

    test("change element at 0: ", "Hello", alist.set(0, "Hi"));
    test("element at 0: ", "Hi", alist.get(0));
}

We started with an empty ArrayList and added to it three String objects. Even though the method add produces a boolean value, we ignored this value, because we know it always produces true. The add method is included in an interface that is implemented by many other classes, some of which may not have the space available to add another element.

We then tested that the method isEmpty() produced false and that the number of elements in the ArrayList is indeed three.

The data in an ArrayList is arranged in a linear fashion, and each element has a numeric label called index. The first element we added to alist has index 0, the next one has index 1, etc. If we wish to refer to an element of an ArrayList we use the method get and specify the desired index. So, the test

test("element at 2: ", "Goodbye", alist.get(2));

verified that the last element of alist was indeed "Goodbye". The last two tests also illustrate the use of the method set(int index, Object value) that allows us to replace the element at a given location with a new one. This method returns the reference to the object that has been removed from the ArrayList.

Designing IRange iterator for ArrayList

We would like to use our IRange iterator to traverse over the elements in the ArrayList. To do so, we need to design a class ArrayListRange that implements IRange.
The class needs at least one field -- to hold the instance of the ArrayList it traverses. Here is a skeleton of this class:

public class ArrayListRange implements IRange{
    ArrayList alist;
    ... other fields if needed ...

    public ArrayListRange(ArrayList alist, ... ){
        this.alist = alist;
        ...
    }

    public boolean hasMore(){ ... }
    public Object current(){ ... }
    public IRange next(){ ... }
}

At the first glance it looks like hasMore is the easiest method to write. However, let's hold off on that. For the ListRange the methods current and next mirrored the behavior of the first and rest field access. What is the first in an ArrayList? Well, it is the element at index 0, but only when we begin the traversal. The iterator produced by the next method should return the element at index 1 from its current method. A few examples should help us understand the problem better. For the following ArrayList

ArrayList alist = new ArrayList();
...
// followed by initialization - inside some method:
...
alist.add("Hello");
alist.add("Good Day");
alist.add("Goodbye");

and the ArrayListRange iterator:

IRange alistIt = new ArrayListRange(alist, ...);

we expect the following behavior:

// elements can be accessed directly via index
test("element at 0: ", "Hello", alistIt.current());
IRange alistIt1 = alistIt.next();
test("element at 1: ", "Good Day", alistIt1.current());
IRange alistIt2 = alistIt1.next();
test("element at 2: ", "Goodbye", alistIt2.current());
IRange alistIt3 = alistIt2.next();
test("end of the ArrayList ", false, alistIt3.hasMore());

It is clear that each ArrayListRange instance not only needs to know what is the ArrayList instance it is traversing, but also what is the current position in this traversal. That means, we should add a field to the class ArrayListRange that represents the current index. The hasMore method then determines whether the current index refers to a valid location in the ArrayList.
Here is the complete class:

public class ArrayListRange implements IRange{
    ArrayList alist;
    int index;

    // construct the IRange for the given ArrayList at the given index
    // index < 0 indicates no more elements to generate
    public ArrayListRange(ArrayList alist, int index){
        this.alist = alist;
        this.index = index;
    }

    // current element available if the index is valid
    public boolean hasMore(){
        return (this.index >= 0) && (this.index < alist.size());
    }

    // throw exception if current element is not available
    public Object current(){
        if (this.hasMore())
            return alist.get(index);
        else
            throw new NoSuchElementException(
                "No element is available.");
    }

    // throw exception if no further iteration is possible
    public IRange next(){
        if (this.hasMore())
            return new ArrayListRange(alist, index + 1);
        else
            throw new NoSuchElementException(
                "Iterator cannot advance further.");
    }
}

Note that hasMore must combine the two bounds checks with a logical "and": the index is valid only when it is both non-negative and smaller than the list's size.

We can now run the tests to make sure our iterator works as expected. We can also add the tests from the previous lecture.
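The same design translates directly to other languages. As an illustration only (Python rather than the lecture's Java), here is the functional-iterator idea — each next() returns a new iterator positioned one element further, leaving the original unchanged:

```python
class ArrayListRange:
    """Immutable cursor over a list; next() returns a new cursor."""

    def __init__(self, alist, index):
        self.alist = alist
        self.index = index

    def has_more(self):
        # valid only when the index is non-negative and inside the list
        return 0 <= self.index < len(self.alist)

    def current(self):
        if not self.has_more():
            raise StopIteration("No element is available.")
        return self.alist[self.index]

    def next(self):
        if not self.has_more():
            raise StopIteration("Iterator cannot advance further.")
        return ArrayListRange(self.alist, self.index + 1)
```

For example, `ArrayListRange(["Hello", "Good Day", "Goodbye"], 0).next().current()` yields "Good Day", while the starting cursor still reports "Hello".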
http://www.ccs.neu.edu/home/vkp/csu213-sp05/Lectures/lecture22.html
[01 - programmatic menus in Gatsby]

January 20, 2019 by alex christie

Programmatically generating menus is a difficult concept at first because it seems simple enough just to hard code your menu into a component. However, making a reusable and data agnostic menu component is really easy and can travel with you from project to project. Here, I want to outline two approaches to generating menus from your data. The first involves defining menu items in markdown frontmatter, which can be useful when designing sites for folks less familiar with javascript or coding more generally. The second is an iteration on the current Gatsby documentation that skips using GraphQL fragments in favor of a meta config file.

Markdown Frontmatter

I'm currently working on a Gatsby site template designed specifically for folks with minimal, if any, javascript knowledge. With these users in mind, I'm trying to define most things through really basic global variables in gatsby-config.js and frontmatter. I think performing tasks by querying markdown frontmatter can simplify user experience immensely, especially if users are given clear guidelines and a fairly robust set of options. I wanted users to be able to define their menu from frontmatter for a few reasons:

- It's fairly semantic. Users don't need to write out objects or arrays to get some nice menus.
- Users can distinguish between pages, posts, and pages that should be menu items from the markdown file itself.
- Users don't have to think about slugs because gatsby-node.js programmatically generates them. So there's never a time where a slug changes and your menu doesn't reflect that change.

That being said, this method surely has some downsides:

- Menu is no longer centralized -- you have to go to different files to edit and reorder the menu.
- It's arguable whether this is a major upgrade from just defining a menu in gatsby-node.js, especially if users are already defining other site data there.
- Implementing sub menus this way is going to be difficult.

With these things in mind, let's take a look at some example frontmatter:

---
title: "hi."
author: "ed."
date: 2018-12-29T10:52:33-6:00
type: "page"
menuItem: 1
menuTitle: "home"
draft: true
---

The important things here are just that I've defined a menuItem to position the item in the menu, and I have a label or menuTitle for the item in case I want the title and what shows up in the menu to be different. Here's our menu component:

import React from 'react';
import { Link } from 'gatsby';
import { graphql, StaticQuery } from 'gatsby';

export default (props) => (
  <StaticQuery
    query={graphql`
      query MenuQuery {
        allMarkdownRemark(
          filter: { frontmatter: { type: { eq: "page" } } },
          sort: { fields: [frontmatter___menuItem], order: ASC },
        ) {
          edges {
            node {
              frontmatter {
                menuTitle
              }
              fields {
                slug
              }
            }
          }
        }
      }
    `}
    render={data => {
      const items = data.allMarkdownRemark.edges
      return (
        <nav>
          {items.map(({node}) => {
            const {
              frontmatter: { menuTitle },
              fields: { slug },
            } = node;
            return (
              <Link to={slug} activeClassName={'active'} key={menuTitle}>
                {menuTitle}
              </Link>
            )
          })}
        </nav>
      );
    }}
  />
)

StaticQuery is doing the majority of the heavy lifting here. We're filtering all of our markdown files for anything that's listed as a "page," and then ordering them by frontmatter.menuItem. Any page that doesn't define a menuItem in its frontmatter is excluded from the search. This query also exposes the menuTitle and slug, so all that's left to do is map our object into Link. What we're left with is a set of easily styled list items nested in a nav tag. If you're looking for a simple way to generate single level menus, this is a really great way to go. But what if you're looking for something a little more robust, specifically with nested sub-menus?

Meta Config File

The approach I ended up taking was siloing metadata about the site into a siteConfig file. Here, I define some site wide settings, and map out my menu.
While this can be done in gatsby-config.js (and Gatsby docs even offer this as a solution for mapping programmatic menus), I decided to forego writing another GraphQL fragment and just use an object with some arrays. So the main difference and benefit here is just slimmer syntax at the cost of passing the object through the top level layout component.

There are two parts of this setup: the siteConfig.js file, and the menu components. I keep my siteConfig in content/meta, though it could be stored anywhere. Here, I define my menu with a series of objects listing the item label, and the page we're navigating to. Additionally, I wanted to be able to define sub-menu items, so I'm giving 'teaching' a subItems array with objects that are constructed the same way as the top-level objects:

const menu = [
  { label: 'home', to: '/' },
  { label: 'about', to: '/about' },
  { label: 'teaching', to: '/teaching', subItems: [
    { label: 'teaching philosophy', to: '/teaching' },
    { label: 'course descriptions', to: '/classes' },
  ]},
  { label: 'cv', to: '/cv' },
  { label: 'code', to: '/code' },
]

...

module.exports = {
  menu: menu,
  ...
}

The final bit of code here is just exporting a bunch of arrays from siteConfig.js, but we're just looking at menu for now. The second part is my menu.js and menuItem.js component. menu.js simply maps the array into the menuItem.js component. This is where we use the arrays to define and programmatically generate our menu:

import React from 'react';
import { Link } from 'gatsby';
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome'
import { library } from '@fortawesome/fontawesome-svg-core'
import { fas } from '@fortawesome/free-solid-svg-icons'

library.add(fas)

const MenuItem = ({ item }) => {
  const { label, to, subItems } = item;
  return (
    subItems ?
    (
      <li key={label}>
        <Link to={to}>
          {label} <FontAwesomeIcon icon={['fas', 'angle-down']} />
        </Link>
        <ul>
          {subItems.map((subItem) => {
            const { label, to } = subItem;
            return (
              <li key={label}><Link to={to}>{label}</Link></li>
            )
          })}
        </ul>
      </li>
    ) : (
      <li key={label}><Link to={to}>{label}</Link></li>
    )
  )
};

export default MenuItem;

So, what's happening here?

- We're destructuring the item so we can use "label", "to", and check for subItems.
- We then use a ternary operator to differentiate items that do and do not have "subItems".
- If one does, we map the "label" and "to" properties to a list item and Link tag, respectively. We also add an "angle-down" icon to signify that the link has a dropdown menu.
- Then, we map the subItems in similar fashion, iterating over the subItem object.
- The second half of the ternary just gives us a standard Link nested in a list item.
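The branching logic described above — render a plain link, or a link plus a nested list when subItems exists — is the whole trick, and it is language-neutral. A Python sketch of the same recursion (illustrative only, emitting plain HTML strings rather than JSX):

```python
def render_item(item):
    """Render one menu entry; recurse when it carries subItems."""
    label, to = item["label"], item["to"]
    subs = item.get("subItems")
    if not subs:
        return f'<li><a href="{to}">{label}</a></li>'
    inner = "".join(render_item(s) for s in subs)
    return f'<li><a href="{to}">{label}</a><ul>{inner}</ul></li>'

def render_menu(menu):
    return "<ul>" + "".join(render_item(i) for i in menu) + "</ul>"
```

Because render_item calls itself on each sub-item, the same code would also handle menus nested more than one level deep.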
https://www.inadequatefutures.com/blog/01-programmatic-menu/
CC-MAIN-2020-24
refinedweb
1,168
61.97
I know this gets asks occasionally: "How do I import win32api?". Well you can't...at least not out of the box. This is where Pywin32 comes in. I recently started throwing together a plugin that needed pywin32 so I could show Windows notification bubbles from sublime: So I threw this together. example to show path of all open explorer windows:[pre=#232628]import sublime_pluginimport Pywin32.setupfrom win32com.client.gencache import EnsureDispatch def run(): for w in EnsureDispatch("Shell.Application").Windows(): print(w.LocationName + "=" + w.LocationURL) run()[/pre] Output: . repo: github.com/facelessuser/Pywin32 Interesting. Another step closer to finally being able to utilize Sublime Text as a cross-platform shell replacement / portable environment platform. At least reducing the alt+tab frequency while working. I am interesting to see what people may do with this. This does open up a number of possibilities on Windows. I need to get plain text from clipboard (sublime.get_clipboard() not work with copied file object), but sth like this doesn't work:OpenClipboard() d=GetClipboardData(win32con.CF_TEXT) # get clipboard dataSetClipboardData(win32con.CF_TEXT, "Hello") # set clipboard dataCloseClipboard() Why try to extract the file path from a file object in clipboard? Why not just hold shift in explorer, right click the file, and select "Copy as path"? If this is unacceptable, I would try and read through the pywin32 documentation. I frankly don't have the time to be a consultant for pywin32, I simply provide the package to help people out. I'm sorry I cannot be of more help right now. subscribing~ Thinking about dropping support for this. I only have one plugin that uses it, and I am in the process of moving to using ctypes instead. There looks to be maybe around 400 installs on Package Control, not sure how many actively even use this. 
As I can do most things with ctypes directly, which saves me this extra dependency, I have no personal use for it anymore; therefore, I have no desire to continue supporting it. My support for packages is usually tied to my personal usage. If anyone wants to take over, let me know. I think once I finish getting my plugin weened off this, I will give it a month to see if anyone wants to take it over, if not, I may transfer it over to SublimeText organization or just kill it off completely. To be honest, if someone where taking this over, it would make more sense to implement it as a package control dependency than its current usage model; maybe I will kill it off and let someone re-implement it as a dependency if there is a strong enough desire for it. I can easily translate the package into a dependency since I spent quite a bit of time with Pc's internals in that regard and know how it works. I don't think I would be able to provide support however (never used itit so far) , so moving it to the sublime text org is probably better. That would be great. I figure the people who can't figure out how to get things working with ctypes and want to use PyWin32 can just contribute to the Sublime Text org if they run into troubles with it. That way the people who are actually using it can make things better. As I have spent more time with ctypes, it's really not that hard to use instead of PyWin32, though you are working with less of an abstraction layer making you do a little more reading. So yeah, PyWin32 can be more accessible at times, but not having the extra dependency is nice as well. On a side note, I have been looking into dependencies, for Package Control. How do you setup Package Control to test out dependencies locally? I wouldn't mind setting up PyWin32 myself, but I'm just not sure yet how you validate it locally. I don't think it's m.tiond anywhere, but you can add channel or repository JSON files to PC's config that are on your hard drive. 
I used that to test much of the dependency installation stuff and quicks that existed in 3.0.0 and have been fixed in 3.0.1. Unfortunately, I couldn't seem to get PC to install packages from the hard drive, so I had to upload the test package to a web server. Following is an excerpt of my local .json file that I added to PC's "repositories" setting: { "schema_version": "3.0.0", "packages": { "name": "RequestsTest", "author": "me", "releases": { "sublime_text": "*", "url": "<url to the .sublime-package file I uploaded on some server>", "version": "1.0.0", "date": "2015-01-10 22:38:02" } ] } ], "dependencies": { "name": "requests", "load_order": "50", "description": "Python requests module", "author": "FichteFoll", "issues": "", "releases": { "base": "", "tags": true, "sublime_text": "*" } ] } ] } Note: "m.tioned" was required to trick the utterly stupid and annoying "kit*n spam filter". I get so mad whenever I see it. And yes, I did delete remote tags a couple times. Hmm. Not as straight forward as I hoped. Alright, well, I at least have a plan now. My plugins are all no longer using PyWin32 (unreleased), so when I get some time to tackle this, I will start looking into dependencies. I will just upload it to Sublime Text org and add to PC's dependencies when I am done. I am going to start transferring this over to SublimeText org. Not sure when I will get time to convert this to a dependency, but when I get time, I will probably do that.
https://forum.sublimetext.com/t/st3-pywin32-plugin-beta-pywin32-support-in-sublime/12296/1
CC-MAIN-2016-22
refinedweb
935
64.3
I have a hyperlink vbscript that displays a message using msgbox. I am receiving permission denied trying to use msgbox on 10.3.1 clients. The script still works fine on 10.0 clients. Anyone know a fix for this problem? I have a hyperlink vbscript that displays a message using msgbox. I am receiving permission denied trying to use msgbox on 10.3.1 clients. The script still works fine on 10.0 clients. Anyone know a fix for this problem? The script executes fine when the message box line is excluded. The hyperlink is also vbscript and not VB or VBA. MsgBox works when executed from a .vbs file outside of ArcMap. well, then I am sure you have been through these, but it appears to be a difference between the versions that isn't obvious Using Hyperlinks—Help | ArcGIS for Desktop perhaps showing the lines would help The message box call will cause the script to fail. Even the simplest of scripts fail. Function OpenLink ( [FACILITY_ID] ) MsgBox "Hello" End Function The rest of the script does not matter. Adding MsgBox anywhere causes the error when you click verify and the link does nothing when used. I ran into a similar problem when writing scripts for the web using vbscript ... apparently in my case the code was server side code which was not valid.... I had to re-write the section for "Client Side" This server side code fails with Permission denied and continued processing just stops similar to what you are describing <% function x() msgbox("Hello World") end function %> Had to re-write to client side: <HTML> <Body onload= x()> <script language=vbscript> sub x() msgbox("Hello World") end sub </Script> </Body> </HTML> However I don't know if this still applies this was back in the ASP days "Active Server Pages" Now I think I understand .. you are trying to use the msgbox in the script areas for the hyperlink definition. You can't....nor can you use alert in javascript. 
As always there are ways around it but you would have to create your own hyperlink launcher as an object and you would have to use the createobject command on your custom launcher to employ msgbox. Better explanation can be found here: Advanced Hyperlink Functionality I ended up converting the script to python and using the ctypes library to create the popup. Sadly, 10.0 doesn't have a python option for hyperlinks, so now I have 2 scripts and lyr files to support, one for 10.0 and one for 10.3. This is the python code for open a folder and displaying a message if it doesnt exist: import subprocess,os,ctypes def OpenLink ( [Foldername] ): path = '\\\\server\\sharename\\'+[Foldername] if (os.path.isdir(path)): subprocess.call('explorer "'+path+'"', shell=True) else: ctypes.windll.user32.MessageBoxW(0,u'Folder was not found:\n'+path,u'Not Found',0) return Sucks that they killed MsgBox functionality sometime after 10.0 though. guessing wildly that vb wasn't installed... it used to be, but it is being slowly nudged out Introduction to installing and configuring ArcGIS for Desktop—Help | ArcGIS for Desktop
https://community.esri.com/thread/171690
CC-MAIN-2020-45
refinedweb
526
63.59
26 April 2013 15:46 [Source: ICIS news] HOUSTON (ICIS)--Eastman Chemical hopes to announce by mid-2013 on how it will proceed on plans to restart an idled cracker in ?xml:namespace> “We have got site visits going on, and we have narrowed the list [of potential partners] down to a few parties; hopefully mid-year, we can kind of spell out who we are going to work with and what it’s going to look like,” Jim Rogers told analysts during Eastman’s 2013 first-quarter results conference call. “We are a pretty good-sized company, and restarting another cracker and doing something with the excess ethylene would probably take two to three years to get in place anyway,” he said. “I hope they are not buying our stock waiting for that shoe to drop,” he said. “Whatever we do with a partner, it’s going to be a long-term arrangement, whether we will continue to own that cracker, or we sell the cracker, or what we do with our excess
http://www.icis.com/Articles/2013/04/26/9663124/eastman-eyes-mid-year-announcement-on-plans-for-4th-texas.html
CC-MAIN-2014-42
refinedweb
174
63.77
Light-weight RPC package for creating RESTful server-side Dart APIs. The package supports the Google Discovery Document format for message encoding and HTTP REST for routing of requests. The discovery documents for the API are automatically generated and are compatible with existing Discovery Document client stub generators (see the "Calling the API" section below for more details). This makes it easy to create a server side API that can be called by any client language for which there is a Discovery Document client stub generator. Getting started is simple! The example below gives a quick overview of how to create an API and in the following sections a more elaborate description follows of how to build the API and setup an API server. @ApiClass(version: 'v1') class Cloud { @ApiMethod(method: 'GET', path: 'resource/{name}') ResourceMessage getResource(String name) { ... find resource of name {resourceName} ... return new ResourceMessage ..id = resource.id ..name = resource.name ..capacity = resource.capacity; } @ApiMethod(method: 'POST', path: 'resource/{name}/update') VoidMessage updateResource(String name, UpdateMessage request) { ... process request, throw on error ... } } class ResourceMessage { int id; String name; int capacity; } class UpdateMessage { int newCapacity; } Two complete examples using respectively dart:io and shelf can be found at Example. We use the following concepts below when describing how to build your API. ApiClassannotation. @ApiResourceare exposed as resources. ApiPropertyannotation. Defining an API starts with annotating a class with the @ApiClass annotation. It must specify at least the version field. The API name can optionally be specified via the name field and will default to the class name in camel-case if omitted. @ApiClass( name: 'cloud', // Optional (default is 'cloud' since class name is Cloud). version: 'v1', description: 'My Dart server side API' // optional ) class Cloud { (...) 
} The above API would be available at the path /cloud/v1. E.g. if the server was serving on the API base url would be. Inside of your API class you can define public methods that will correspond to methods that can be called on your API. For a method to be exposed as a remote API method it must be annotated with the @ApiMethod annotation specifying a unique path used for routing requests to the method. The @ApiMethod annotation also supports specifying the HTTP method used to invoke the method. The method field is used for this. If omitted the HTTP method defaults to GET. A description of the method can also be specified using the description field. If omitted it defaults to the empty string. A method must always return a response. The response can be either an instance of a class or a future of the instance. In the case where a method has no response the predefined VoidMessage class should be returned. Example method returning nothing: @ApiMethod(path: 'voidMethod') VoidMessage myVoidMethod() { ... return null; } Example method returning class: class MyResponse { String result; } @ApiMethod(path: 'someMethod') MyResponse myMethod() { ... return new MyResponse(); } Example method returning a future: @ApiMethod(path: 'futureMethod') Future<MyResponse> myFutureMethod() { ... completer.complete(new MyResponse(); ... return completer.future; } The MyResponse class must be a non-abstract class with an unnamed constructor taking no required parameters. The RPC backend will automatically serialize all public fields of the the MyResponse instance into JSON corresponding to the generated Discovery Document schema. Method parameters can be passed in three different ways. Path parameters and the request body parameter are required. The query string parameters are optional named parameters. 
Example of a method using POST with both path parameters and a request body: @ApiMethod( method: 'POST', path: 'resource/{name}/type/{type}') MyResponse myMethod(String name, String type, MyRequest request) { ... return new MyResponse(); } The curly brackets specify path parameters and must appear as positional parameters in the same order as on the method signature. The request body parameter is always specified as the last parameter. Assuming the above method was part of the Cloud class defined above the url to the method would be: where the first parameter name would get the value foo and the type parameter would get the value storage. The MyRequest class must be a non-abstract class with an unnamed constructor taking no arguments. The RPC backend will automatically create an instance of the MyRequest class, decode the JSON request body, and set the class instance's fields to the values found in the decoded request body. If the request body is not needed it is possible to use the VoidMessage class or change it to use the GET HTTP method. If using GET the method signature would instead become. @ApiMethod(path: '/resource/{name}/type/{type}') MyResponse myMethod(String name, String type) { ... return new MyResponse(); } When using GET it is possible to use optional named parameters as below. @ApiMethod(path: '/resource/{name}/type/{type}') MyResponse myMethod(String name, String type, {String filter}) { ... return new MyResponse(); } in which case the caller can pass the filter as part of the query string. E.g. The data sent either as a request (using HTTP POST and PUT) or as a response body corresponds to a non-abstract class as described above. The RPC backend will automatically decode HTTP request bodies into class instances and encode method results into an HTTP response's body. This is done according to the generated Discovery Document schemas. Only the public fields of the classes are encoded/decoded. 
Currently supported types for the public fields are int, double, bool, String, DateTime, List&lt;SomeType&gt;, Map&lt;String, SomeType&gt;, and another message class.

A field can be further annotated using the @ApiProperty annotation to specify default values, the format of an int or double specifying how to handle it in the backend, the min/max value of an int property, and whether a property is required.

For int properties the format field is used to specify the size of the integer. It can take the values int32, uint32, int64 or uint64. The 64-bit variants will be represented as String in the JSON objects. For int properties the minValue and maxValue fields can be used to specify the min and max value of the integer. For double properties the format parameter can take the value double or float. The defaultValue field is used to specify a default value. The required field is used to specify whether a field is required.

Example schema:

```dart
class MyRequest {
  @ApiProperty(
    format: 'uint32',
    defaultValue: 40,
    minValue: 0,
    maxValue: 150)
  int age;

  @ApiProperty(format: 'float')
  double averageAge;
}
```

Resources can be used to provide structure to your API by grouping certain API methods together under a resource. To create an API resource you will add a field to the class annotated with the @ApiClass annotation. The field must point to another class (the resource) containing the methods that should be exposed together for this resource. The field must be annotated with the @ApiResource annotation. By default the name of the resource will be the field name in camel-case. If another name is desired the name field can be used in the @ApiResource annotation.

Example resource API:

```dart
@ApiClass(version: 'v1')
class Cloud {
  @ApiResource(name: 'myResource')
  MyResource aResource = new MyResource();
  ...
}

class MyResource {
  @ApiMethod(path: 'someMethod')
  MyResponse myResourceMethod() {
    return new MyResponse();
  }
}
```

Notice the @ApiResource annotation is on the field rather than the resource class.
This allows for a resource class to be used in multiple places (e.g. different versions) of the API. Also notice the path of the MyResource.myResourceMethod method is independent from the resource. E.g. if MyResource was used in the previously mentioned Cloud API the method would be exposed at the url http://&lt;server ip&gt;:&lt;port&gt;/cloud/v1/someMethod.

When having annotated your classes, resources, and methods you must create an ApiServer to route the HTTP requests to your methods. Creating a RPC API server is done by first creating an instance of the ApiServer class and calling the addApi method with an instance of the class annotated with the @ApiClass annotation. You can choose to use any web server framework you prefer for serving HTTP requests. The rpc-examples GitHub repository includes examples for both the standard dart:io HttpServer as well as an example using the shelf middleware. E.g. to use dart:io you would do something like:

```dart
final ApiServer _apiServer = new ApiServer();

main() async {
  _apiServer.addApi(new Cloud());
  HttpServer server =
      await HttpServer.bind(InternetAddress.ANY_IP_V4, 8080);
  server.listen(_apiServer.httpRequestHandler);
}
```

The above example uses the default provided ApiServer HTTP request handler which converts the HttpRequest to a HttpApiRequest and forwards it along. A custom HTTP request handler doing the conversion to the HttpApiRequest class and calling the ApiServer.handleHttpApiRequest method itself can also be used if more flexibility is needed. Notice that the ApiServer is agnostic of the HTTP server framework being used by the application. The RPC package does provide a request handler for the standard dart:io HttpRequest class. There is also a shelf_rpc package which provides the equivalent for shelf (see the example for how this is done).
However, as the RPC ApiServer is using its own HttpApiRequest class, any framework can be used as long as it converts the HTTP request to a corresponding HttpApiRequest and calls the ApiServer.handleHttpApiRequest method. The result of calling the handleHttpApiRequest method is returned as an HttpApiResponse which contains a stream with the encoded response, or in the case of an error it contains the encoded JSON error as well as the exception thrown internally.

There are a couple of predefined error classes that can be used to return an error from the server to the client. They are:

- RpcError(&lt;HTTP status code&gt;, &lt;error name&gt;, &lt;message&gt;) — the general error class.
- BadRequestError('You sent some data we don't understand.')
- NotFoundError('We didn't find the api or method you are looking for.')
- ApplicationError('The invoked method failed with an exception.')
- An internal server error, used when some internal exception occurred that was not due to a method invocation.

If one of the above exceptions is thrown by the server API implementation it will be sent back as a serialized JSON response as described below. Any other exception thrown will be wrapped in the ApplicationError exception containing the toString() version of the internal exception as the message.

The JSON format for errors is:

```
{
  "error": {
    "code": <http status code>,
    "message": <error message>
  }
}
```

In addition to the basic way of returning an http status code and an error message, you can attach RpcErrorDetail objects to your RpcError (as specified in the Google JSON style guide):

```dart
throw new RpcError(403, 'InvalidUser', 'User does not exist')
  ..errors.add(new RpcErrorDetail(reason: 'UserDoesNotExist'));
```

This will return the JSON:

```
{
  "error": {
    "code": 403,
    "message": "User does not exist",
    "errors": [
      {"reason": "UserDoesNotExist"}
    ]
  }
}
```

Once your server API is written you can generate a Discovery Document describing the API and use it to generate a client stub library to call the server from your client.
There are two ways to generate a Discovery Document from your server API.

Using the rpc:generate script you can generate a Discovery Document by running the script on the file where you put the class annotated with @ApiClass. Assuming your @ApiClass class is in a file 'lib/server/cloudapi.dart' you would write:

```
pub global activate rpc
cd <your package directory>
mkdir json
pub global run rpc:generate discovery -i lib/server/cloudapi.dart > json/cloud.json
```

In order for the rpc:generate script to work, the API class (the @ApiClass class) must have a default constructor taking no required arguments.

The other way to retrieve a Discovery Document is from a running server instance. This requires the Discovery Service to be enabled. This is done by calling the ApiServer.enableDiscoveryApi() method on the ApiServer; see the example for details. After enabling the Discovery Service, deploy the server and download the Discovery Document. For example, if we have the 'cloud' API from the above example, the Discovery Document can be retrieved from the deployed server by:

```
URL=''  # the discovery document url of the deployed server
mkdir json
curl -o json/cloud.json $URL
```

Once you have the Discovery Document you can generate a client stub library using a Discovery Document client API generator. For Dart we have the Discovery API Client Generator. Discovery Document generators for other languages can also be used to call your API from e.g. Python or Java.

If you want to generate a standalone client library for calling your server do:

```
pub global activate discoveryapis_generator
pub global run discoveryapis_generator:generate package -i json -o client
```

This will create a new Dart package with generated client stubs for calling each of your API methods. The generated library can be used like any of the other Google Client API libraries, some samples here.
If you want to generate client stub code that should be integrated into an existing client you can instead do:

```
pub global activate discoveryapis_generator
pub global run discoveryapis_generator:generate files -i json -o <path to existing client package>
```

This will just generate a file in the directory specified by the '-o' option. NOTE: you might have to modify the existing client's pubspec.yaml file to include the packages required by the generated client stub code.

Add this to your package's pubspec.yaml file:

```yaml
dependencies:
  rpc: ^0.5.6
```

You can install packages from the command line with pub:

```
$ pub get
```

Alternatively, your editor might support pub get. Check the docs for your editor to learn more.

Now in your Dart code, you can use:

```dart
import 'package:rpc/rpc.dart';
```

This package version is not analyzed, because it is more than two years old. Check the latest stable version for its analysis.
https://pub.dartlang.org/packages/rpc/versions/0.5.6
Default Namespace. ... A root namespace ("/") is also supported. The root is the namespace when a request directly under the context path is received. As with other namespaces, it will fall back to the default ("") namespace if a local action is not found. ... If a request for /barspace/bar.action is made, the /barspace namespace is searched for the bar action. If found, the bar action is executed; otherwise the framework will fall back to the default namespace. In the Namespace Example, the bar action does exist in the /barspace namespace, so it is executed. Upon success, the request would be forwarded to bar2.jsp.
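A minimal struts.xml sketch of the namespace lookup described above; the package and action class names here are illustrative assumptions, not from the original page:

```xml
<struts>
  <!-- Default ("") namespace: the fallback searched when no
       matching local action is found in the request's namespace. -->
  <package name="default" namespace="" extends="struts-default">
    <action name="foo" class="example.Foo">
      <result>/foo.jsp</result>
    </action>
  </package>

  <!-- /barspace namespace: a request for /barspace/bar.action is
       resolved here first, before any fallback to the default. -->
  <package name="barspace" namespace="/barspace" extends="struts-default">
    <action name="bar" class="example.Bar">
      <result>/bar2.jsp</result>
    </action>
  </package>
</struts>
```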
https://cwiki.apache.org/confluence/pages/diffpages.action?pageId=14276&originalId=76773
Wow, thank you for fixing the code, I really appreciate it!

Hello, how can I pass an expression similar to the one I commented out (instead of the binary representation) to the constructor? I would like to make the constructor argument more readable. ...

Yes, this works! Thank you!

Hello, I was mildly surprised when I realized that I cannot initialize the vector directly like in the commented-out command, but have to use a default constructor. Surprised because...

You mean like this?

    Engine& operator=(const Engine&) = delete;
    Engine& operator=(Engine&&) = delete;

The problem I had is if I put this in engine.h I have a follow-up issue that I cannot solve even after checking a few examples on the net. How do I declare the two entities in a header and a cpp file so that the function can reach the destructor?...

Thanks to both of you. I realized that I don't need my base class at all, so the problem became simpler. Yes, checking error messages does help in most cases.

Hello, how can I pass a value to the constructor of the derived structs in this simple factory pattern? thx

    #include <iostream>
    #include <memory>
    #include <vector>

I have read the following statement (partial quote): "A reasonable rule of thumb is to not put inline at functions that have more than 3 lines of code in them." Does this rule of thumb apply if...

Changing the signature to

    void myprint(const short *src[], size_t len)

fixed it. But something still is not right, for the two print calls produce different output:

    void myprint(const short *src, size_t len)
    {
        int i;
        for(i = 0; i < len; ++i)
            printf("%d\n", src[i]);
    }

OK thanks! So, in my example, the pointer has been passed correctly but I need to pass the size separately (if I want to do something in the function). Just out of curiosity, what did my function...

Hi, why does the function return an incorrect value, here 2 instead of 5?
    #include <stdio.h>

    size_t len(const short (*arr))
    {

But still it should be possible to recast pointers of unrelated classes, I just can't wrap my head around how. Something like this should be doable: auto A = nullptr; if (true) { MyType B =...

Another question that came to my mind is why does inserting e.g. weapon in loadout increase the counter of shared pointers (I am using shared)? My types are: typedef std::shared_ptr<Weapon>...

Thank you very much! I will rethink the style (still very new to cpp, and it shows). Over the past few months, I have learnt so much from all of you in this forum and I greatly appreciate it. I...

Ah.. got it! So I need:

    WeaponPtr leftweapon = nullptr;
    WeaponPtr rightweapon = nullptr;

Hello, I have a function similar to this for character construction (types omitted). The switch statement does not work. I thought it was an issue of score, but setting curly brackets inside each...

Yes, I made a cut-and-paste error - "vector" should be "element". The problem that I have is this:

1. Header (utilities.h)

    #ifndef UTILITIES_H
    #define UTILITIES_H
    #pragma once

To follow up on templates. Putting this function in the header (along with #include <iostream> and #include <vector>):

    template <typename T>
    void PrintVector(const std::vector<T>& vector)...

OK, I missed that! So where should the second line go, given that the type is used in practically the whole program, i.e. certainly downstream of dice.h. Does it need to stay global?

Yes, this works if I put implementations in the header, but I would like to have a clean header file and a cpp file with function bodies. The below pattern currently produces an mt19937 error: So...

Hi, where should I put a typedef that is used in several header files? Putting it in each file causes a multiple declaration error, and yet each header file requires a declaration. Should I play...

Ah.. nice! Thx again.

Hello, I just wanted to check on this.
I have the statement below inside a function (method of Player) that checks that all weapons in the Loadout vector have the same scale as the Player who...
https://cboard.cprogramming.com/search.php?s=1dc9d54440565ab5f1943f99077e568c&searchid=2221250
Next, verify the trust by going to the Domains and Trusts snap-in.

This, as I said, makes Windows use TCP.

He wrote Windows 2000: Active Directory Design and Deployment and co-authored Windows Server 2003 on HP ProLiant Servers.

Grant permission to allow a user or group from the other domain access to the share.

A domain controller cannot be found to verify that user name.

So at the prompt I tried entering their username as REMOTEDOMAIN.REMOTEDOMAINNAME.COM\username and then the password, and THAT worked!

It would be something like:

    A SERVER.DOMAIN.loc 10.0.0.5

Furthermore - have you used netdiag or dcdiag?

It's only file sharing which isn't working properly.

His qualifications include Certified Novell Instructor, Master CNE, Microsoft Certified Trainer, MCSE, and the Cisco CCNA. Chris is currently a Senior Systems Programmer for Intel Corporation.

Scott is an MCSE+I and is MCT certified to teach all the current Microsoft products, including the new Windows 2000 curriculum.
That is, you can have it wide open so that authenticated users in one forest have the same rights as authenticated users in the other, or you can set it so

This would not be a problem if Active Directory was also propagated across the domains.

GUIn00b Nov 28, 2012 at 7:28 UTC, JCAlexandres wrote:

Chris enjoys mountain biking, golf, and Tae Kwon Do in his spare time.

The system detected a possible attempt to compromise security.

Since then, REMOTEDOMAIN's login scripts prompt their users for a username and password when it tries to map OURDOMAIN's file shares on their Windows 7 workstations. I logged into one of

You might also encounter this error when creating DCs in a forest root domain that is subordinate to an existing corporate intranet namespace. Table 1 lists possible extended error strings for this error message.

Mary McLaughlin, MCSE+I, MCT, ASE, ACT, lives in the Boston area with her beloved daughter, Margaret. David Shackelford holds a master's degree from California State University at Fullerton.

It will provide a concise, technically accurate distillation of the essential information an administrator would need to know to successfully administer Windows 2000 in a real-life, enterprise production environment.

Each application directory partition in a forest has an Infrastructure Master, and the Rodcprep command contacts each one.

After: I raised domain and forest functionality for REMOTEDOMAIN to 2003. Then I proceeded to delete and re-create a new 2-way trust. I set this up as a Forest trust. (We

As I said previously, the VPN may require this to be enabled.

There is a chance the laptop is now using the DNS entries in location B that may not reference location A.
answered Aug 27 '14 at 9:54 by DoComputing: I found I

Verify the laptop's DNS settings.

Author Comment by ptsolutionsinc, 2011-01-20: sykojester, that did the trick.

If you are certain that the name is not a NetBIOS domain name, then the following information can help you troubleshoot your DNS configuration.

If you're replacing a previously demoted DC with a new DC of the same name, make sure to remove the old DC's metadata.

As a staff instructor at DPAI, he was one of the first MCTs in the Midwest to teach beta classes on Windows 2000 to other trainers and network engineers.

Success! The workstations having the issue are Windows 7 32-bit. The domain controllers in the remote domain are 2k3 R2 32-bit. Their domain is in its own forest.

There have been some changes recently to our system, so I will explain those first.

asked May 24 '13 at 13:35 by NickG, migrated from superuser.com May 24 '13 at 13:50

And you are giving your servers static IP addresses, right?

Jeremy Deats ([email protected]) is a Web application developer and e-commerce consultant with Penta, Inc.
Art Henning's 14-year career with Intergraph Corporation includes a wide scope of experience in a support role for hardware, software, and networks with VAX VMS, various flavors of Unix, and WinNT.

Make sure that DCs that are being promoted have network connectivity and the necessary administrative credentials to create delegations on Microsoft DNS servers that host the parent DNS zone.

That user name has already been tried.

That is how they maintain a safer network environment by not allowing you to download stuff from potentially threatening sites.

To that end, the AD DS installation wizard (Dcpromo) in Server 2008 and later automatically tries to create a DNS delegation when you create a new forest.

If you plan to install a read-only DC (RODC -- new in Server 2008), then you also need to run adprep /rodcprep for every domain that will have an RODC.
http://geekster.org/domain-controller/domain-controller-cannot-be-found-to-verify-that-user.html
Introduction to Linear Search in Data Structure

One of the very simplest methods to search an element in an array is a linear search. This method uses a sequential approach to search the desired element in the list. If the element is successfully found in the list then the index of that element is returned. The search starts from the first element and sequentially proceeds in the forward direction.

Algorithm for Linear Search in Data Structure

The algorithm for linear search is as shown below. It is a very simple algorithm. Go through it and study it as we shall be building a computer program on the algorithm.

Algorithm:

    function linear_search(integer array[], integer n, integer x)
    {
        integer k;
        for (k = 0; k < n; k++)
            if (array[k] == x)
                return k;
        return -1;
    }

Example to Implement Linear Search

The program code to implement a linear search is as given below. This program has been written in C programming. Let's go through the following program so as to understand how it helps us find the requisite element in the list using the linear search algorithm. Study each and every component of the code properly, including the statements, variables, loops, etc.

Code:

    #include <stdio.h>
    #include <conio.h>

    int linear_search(int arr[], int n, int x)
    {
        int i;
        for (i = 0; i < n; i++)
            if (arr[i] == x)
                return i + 1;
        return -1;
    }

    int main()
    {
        int arr[50], n, i, x, res;

        printf("Enter the number of elements in array: ");
        scanf("%d", &n);

        printf("\nEnter the numbers: ");
        for (i = 0; i < n; i++)
            scanf("%d", &arr[i]);

        printf("\nEnter the number to be searched: ");
        scanf("%d", &x);

        res = linear_search(arr, n, x);

        if (res == -1)
            printf("\n%d does not exist in the array.", x);
        else
            printf("\n%d is present at position %d in the array.", x, res);

        getch();
        return 0;
    }

Code Explanation: The above program first asks the user to specify the number of elements in the array along with the elements. It takes up to 50 elements.
Once the array is specified, in the next step, the user is asked to specify the element that needs to be searched in the array. The program, using a loop, sequentially searches for the desired element. For this task, a function linear_search() has been used, as can be seen in the code. If the element is found in the array, then the function linear_search() returns the position of the element, and if the element is not found in the array then -1 is returned.

We must verify and validate the correctness of the implemented program. For this, the program should be checked by passing multiple parameters to it. We validate the program by passing multiple inputs. The inputs passed and the respective results obtained have been discussed in the below section.

Input 1

In this case, we decided to have ten elements in the array, and so, specified 10 when asked to specify the number of elements in the array. Next, we passed ten different numeric elements in the array. The inputs must be passed carefully. Passing input of different data types may give incorrect results. Also, while passing elements, they must be separated by space. Once we pass the entire array correctly, we are asked to specify the number that we intend to search in the array. Here, we want 98 to be searched. As 98 is present in the array, its position has been returned correctly by the program. So, the program worked correctly.

Input 2

In this case, we passed twenty-one elements into the array. Follow the steps and pass the inputs properly. After specifying the number of elements in the array, while passing the elements, ensure that only the required number of elements are passed. This is especially important when the number of elements in the array is high. Once done with the array, specify the requisite number to be searched. Here it is 29 as passed by us. 29 is present in the array, and the program successfully gave its position, which is 14.
Go through the following output and see how the correct result has been obtained.

Input 3

Here, we passed eight three-digit numbers into the array. Then we specified the number to be searched in the array, which is 245. As the number 245 is present in the list, its position has been returned correctly by the program. Go through the following program output.

Input 4

Till now, we saw the program correctly returning the position of the element present in the array. However, the program should also work correctly if the element is not present. The following program output shows this. As can be seen below, we decided to have eight elements in the array, and then specified the eight elements. After this, we specified the number to be searched, which is 102. 102 is not present in the array and the program gave correct output saying that the number doesn't exist in the array.

How Linear Search Algorithm Works

Let's consider the following array to understand the working of the algorithm. Now, suppose we want to search 92 in the above-mentioned array; the linear search algorithm shall follow the steps mentioned below.

Step 1: The algorithm begins from the left-hand side, and the element to be searched is matched with every element. In the first comparison, the matching doesn't happen.

Step 2: Now the algorithm moves to the next element and compares the two elements to check if matching happens.

Step 3: Similarly, the search proceeds element by element until the match happens.

Step 4: Finally, when the match happens, the algorithm returns the position of the element.

Conclusion

Linear search is a simple searching algorithm with vast applications. It is especially useful in situations that involve numerous elements. It is a very easy methodology for searching requisite elements and can be implemented easily using any programming language.

Recommended Articles

This is a guide to Linear Search in Data Structure.
Here we discuss the algorithm and working of Linear Search in Data Structure along with code implementation. You may also have a look at the following articles to learn more –
https://www.educba.com/linear-search-in-data-structure/
ASP.NET and creating a word document

From: Dave (david_at_revilloc.remove.this.bit.com)
Date: 01/25/05

- Next message: nikhilchopra_at_gmail.com: "problem passing file path from c# .net to COM dll"
- Previous message: Paul Clement: "Re: DSOFRAMER control in ASP.NET"
- Messages sorted by: [ date ] [ thread ]

Date: Tue, 25 Jan 2005 06:36:56 -0800

Hi all,

I have found this document to be able to create a word doc from my ASP.NET: http://support.microsoft.com/default.aspx?scid=kb;en-us;316384

I am having some problems... I am using C#, Word 2003 and Windows XP, though the application will be an ASP.NET application.

First, the document says to add:

    using Word = Microsoft.Office.Interop.Word;

however, this won't compile. It says that the namespace already has Word. A bit of research I did suggested to remove this line, so I did and the project now compiles.

Now, when I run it, and click my linkbutton, I am getting an error.

Exception Details: System.UnauthorizedAccessException: Access is denied.

    Line 194: Word._Application oWord;
    Line 195: Word._Document oDoc;
    Line 196: oWord = new Word.Application();
    Line 197: oWord.Visible = true;

the error being on line 196.

Note, I am not doing everything it says in the MS document, i.e. I go up to and include oPara3. I don't need tables, though saying that, this error is at the point of creating the word document.

Please note: I am fairly new to .NET. I had never heard of interop prior to trying to do this. I am also within a Microsoft Content Management Server environment.

Thanks for any help you care to offer.

Best regards,
Dave Colliver.
- Franchise opportunities available.
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.interop/2005-01/0424.html
- 05/29/14--10:54: New Post: Using ClearScript. WebApplication problem

- everything still works on my local machine, in VS and running straight from IIS.
- i verified that the code is being executed (by attaching to local IIS process)

- 05/29/14--11:07: New Post: Manage code problem

- 05/29/14--11:36: New Post: Using ClearScript. WebApplication problem

- 05/29/14--11:41: New Post: Using ClearScript. WebApplication problem

- 05/30/14--08:37: New Post: Manage code problem

- Unless you've built it from modified source code, ClearScript.dll is architecture-neutral and can be loaded into both 32- and 64-bit processes.
- ClearScriptV8-32.dll and ClearScriptV8-64.dll (and the respective native V8 libraries) are not architecture-neutral. When you first instantiate a V8-based ClearScript class, the main library detects the architecture and loads the appropriate ClearScriptV8 library.
- In general, 32-bit libraries cannot be loaded into 64-bit processes, and vice versa.
- 05/30/14--09:14: New Post: Manage code problem
- 05/30/14--12:15: New Post: Manage code problem
- 06/01/14--04:58: Source code checked in, #5f66ce585a04c97dd9c75370651bd940fe74d659
- 06/01/14--05:16: Edited Issue: [FIXE...
- 06/01/14--05:16: Commented Issue: [F...
- 06/01/14--05:19: New Post: Memory leak
- 06/01/14--09:17: New Post: Memory leak
- 06/02/14--07:34: New Post: Scripting like Boo
- 06/03/14--04:02: New Post: Memory leak
- 06/03/14--05:23: New Post: Scripting like Boo
- 06/03/14--09:15: New Post: Memory leak
- 06/04/14--04:36: New Post: Problem with nullable property of object
- 06/04/14--06:34: New Post: How to ge...
- 06/04/14--07:39: New Post: Problem with nullable property of object
- 06/04/14--07:57: New Post: How to ge...

thanks so much for the suggestion. i put in the code and updated your /Dependencies path to my /bin/ClearScript.V8 path. after implementing this, i first got the versioning error. this is because the 64bit was throwing the "cannot find" error and then tried loading the 32bit version. this was bad news. what was exciting was that it seemed to be able to find the 32bit version. so i commented out the try/catch and i'm back to the same error. then again, the paths in the error aren't updating to what's being supplied in the rootPath variable. for kicks, i turned shadow copying off (which just eliminates the temp directory part of the message). then i copied the dlls into the bin folder since that's where it seems to want them, and of course this gives me version issues bc of the 32bit one, and if i remove the 32bit dlls i get:

    Could not load file or assembly 'ClearScriptV8-32' or one of its dependencies. An attempt was made to load a program with an incorrect format.

i'm going to see if i can find the CodePlex versions of the dlls you mentioned and i'll let you know how that works.

    Could not load file or assembly 'ClearScriptV8-64.dll' or one of its dependencies. The specified module could not be found.
brad

I have recently upgraded my project from Javascript.Net to ClearScript with V8. Currently, when I'm running the webdev server I fairly frequently get this error:

The program '[6484] WebDev.WebServer40.EXE: Managed (v4.0.30319)' has exited with code 1073741855 (0x4000001f).

This is the same error I got when dealing with Javascript.Net, and it is the biggest reason why I made the switch. I looked into the problem and it seems to have to do with 32bit dlls in a 64bit program, or calling C++ code from C# (something about marshalling issues?). The compiled ClearScript.dll is 32bit. How can I force it to use the 64bit dlls that were also built? How would I make the ClearScript.dll 64bit (I tried and was not successful)? Is this even the problem? Is there a special procedure to create a 64bit version? How can I make it more stable? Thanks, ~Scott M.

it's time for me to admit to a n00b mistake. C++ redistributable wasn't installed (as is mentioned several times in this post). it needed both the x64 and x86 versions of Visual C++ 2012 installed. from now on, i'll know that's what "The specified module could not be found." means. THANKS EVERYONE!

In case you missed it, if Visual Studio is not installed on your deployment machine, you must install 32-bit and 64-bit Visual C++ Redistributable packages:

Visual C++ Redistributable for Visual Studio 2012
Visual C++ Redistributable for Visual Studio 2013

The version you need depends on which version of Visual Studio was used to generate your ClearScript binaries. If you're using a NuGet package and aren't sure, you should be able to install both sets side-by-side. Good luck!

Hmm, we don't have much to go on here, so all we can offer is some clarification:

Thanks! That's interesting about ClearScript.dll. I'm not sure I follow that it can be either. When I run a dll diagnostic on it, it says it's a 32-bit one. However, it's not that big of a deal, I just have to check the 'allow 32 bit applications' option in my IIS app pool.
This appears to be more of an optimization task that I'll have to tackle at a later date.

So, more detail about the 1073741855 exit code problem. I'm running an MVC3 web application. A crash would occur within two minutes if I was running it in the debug environment and closed out of a session by closing the browser. The crash would happen when I restarted the browser for a new session.

Quick background: I'm developing survey software that uses ClearScript as a method to control branching logic etc. Each time I hit the 'next' button it reinstantiates the formula engine (with ClearScript). After I included the 'dispose' method at the end of my code for each time I hit 'next' (so right before it serves up the html), the problems disappeared. I had used dispose with Javascript.Net and while it reduced the number of crashes it certainly didn't eliminate them. Your dispose method however seems to be vastly superior. Granted, there has only been a day of testing, but I have yet to see another 1073741855 exit code problem! I suspect that the rapid reinstantiations without using dispose caused some memory access problems, which in turn caused the crash. Thank you for your quick feedback!

It's great to hear that you've found a resolution! As for ClearScript.dll, .NET libraries can be built to target the so-called AnyCPU platform (please see here), which just means that the same library file can be loaded in both 32- and 64-bit processes. Is it possible that your diagnostic tool doesn't recognize such libraries? In any case, ClearScript.dll isn't an executable, so it can't dictate whether a process is 32- or 64-bit. There must be something in your project or IIS configuration that forces 32-bit process creation. Good luck!

Bypassed reflection for Windows script item property and method access, fixing Issue #47. This issue was reported [here](). An initial investigation points at a bug in the Windows script runtime.
A workaround may be to force the engine to call back to the host to retrieve the property value.

Can you implement the scripting language Visual Boo?

We don't have Boo on our radar at the moment (we have lots of existing issues to take care of first) but it certainly looks like a well-designed, modern language suitable for both scripting and general-purpose programming. It's also .NET-centric to begin with and includes a ClearScript-like API, so it's not clear how much value there'd be in having ClearScript support it. Cheers!

Small bug with the new MarshalNullAsDispatch option. A string property should always return the DBNull value for a null value instead of DispatchWrapper(null). The following code does not work:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.ClearScript.Windows;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var scriptEngine = new VBScriptEngine(WindowsScriptEngineFlags.MarshalNullAsDispatch))
            {
                var script = @"
                    function getName
                        getName = person.Name
                    end function
                ";
                scriptEngine.Execute(script);
                scriptEngine.Script.person = new Person();
                Console.WriteLine("Person name is '{0}'", scriptEngine.Script.getName());
                Console.ReadKey();
            }
        }
    }

    public class Person
    {
        public string Name { get; set; }
    }
}

Without the MarshalNullAsDispatch option, the getName() function returns an Undefined object. Is that behavior by design? Shouldn't it return the null value to .NET? Thanks

At first, thank you for your work, ClearScript is awesome.
My question is: how do I get the console.log("") messages after executing the js file? I created a host object and basically substituted the console object:

engine.AddHostObject("console", new ClearScriptLogger());

class ClearScriptLogger
{
    public void log(String message)
    {
        debug(message);
    }

    public void debug(String message)
    {
        Debug.WriteLine(message);
    }
}

But this solution is kind of a hack. Any other solutions?

It looks like the code above fails because this line:

getName = person.Name

can't handle the case where person.Name returns nothing, and that's because nothing is a special object reference that requires a set statement rather than simple assignment. Is that your understanding as well?

Without MarshalNullAsDispatch option the getName() function return Undefined object. Is it behavior by design?

Hmm, we're not seeing that behavior. Without MarshalNullAsDispatch we get:

Person name is ''

Have you made any modifications to ClearScript that could have changed the behavior? Thanks!

First, thanks for your positive feedback! Regarding your question, the console object is not part of the JavaScript standard; it's part of the Web API. A ClearScript JavaScript engine by default has no console object, so you're simply providing an API for scripts to call. Nothing wrong with that :) Cheers!
http://clearscript3.rssing.com/chan-14849437/all_p37.html
Hello, I'm new to this so please bear with me. I'm working on a windfarm simulator in C++; it has to read in data from a .csv file and calculate various power values, but that's beside the point. The problem I'm having is with the 2-dimensional dynamic array that I created to store the values, since the size of each windfarm file varies. The program runs through the .csv file once to count how big to make the array, then goes through it again, taking each value in turn and storing it in the array it just created. From what I can gather it is creating the array and also storing the values, but when I try to output the array it doesn't seem to work. I get the following error:

Unhandled exception at 0x60fd942c in Windfarm.exe: 0xC0000005: Access violation reading location 0xcdcdcde5.

I think it has something to do with trying to read memory from outside the array, but I have no idea why it would be doing this. Here is my code:

#include <iostream>
#include <string>
#include <math.h>
#include <fstream>
#include <sstream>

using namespace std;

int main() {
    string windspeeddate, filepath, temp, **windarray;
    int windspeed = 0, rowcounter = 0, colcounter = 1, row, col;

    cout << " Enter the name of the windspeed data .csv file \n";
    cin >> filepath;

    ifstream inFile;
    inFile.open(filepath.c_str());
    if (!inFile) {
        cout << " The file did not open correctly \n";
        return -1;
    }

    while (!inFile.eof()) {
        getline(inFile, temp, ',');
        rowcounter++;
    }
    cout << " The .csv file contains " << rowcounter << endl;

    try {
        windarray = new string *[rowcounter];
        windarray[rowcounter] = new string[colcounter];
    }
    catch (bad_alloc& ba) {
        cerr << "bad_alloc caught: " << ba.what() << endl;
    }

    while (!inFile.eof()) {
        getline(inFile, temp, ',');
        for (row = 0; row < rowcounter; row++) {
            for (col = 0; col < colcounter; col++) {
                cout << temp << endl;
                windarray[row][col] = temp;
            }
        }
    }

    for (row = 0; row < rowcounter; row++) {
        for (col = 0; col < colcounter; col++) {
            cout << windarray[row][col] << " ";
        }
        cout << endl;
    }

    system("PAUSE");
}

Have been looking around for a solution but can't seem to figure it out. Any help from you guys would be really appreciated. If you need me to add anything to this please just ask. I've attached below the .csv file in a .txt file format.

Thank You
Kungu
https://www.daniweb.com/programming/software-development/threads/186735/outputing-a-2-dimensional-dynamic-array
In the last tutorial, we learned about method overloading, in which we can give the same name to many methods but with different parameters. Now in this tutorial, we will learn about method overriding, which is similar to the virtual keyword in C++. Basically, in method overriding, we override the definition of a method defined in the parent class (base class) inside the derived class (child class) using the virtual and override keywords in C#. So, in method overriding, you override the definition of a method from the base class in the child class.

If you are not familiar with base classes and child classes, let's understand them first. A C# class can derive from one base class and implement any number of interfaces, which means that it can inherit data and functions from them. We can create a derived class from a base class like this:

//base class
public class A
{
}

//derived class, or child class
public class B : A
{
}

Syntax

<access-specifier> class <base_class_name>
{
    //
}

<access-specifier> class <derived_class_name> : <base_class_name>
{
    //
}

Now that you are familiar with the terms base class and derived class, let's understand method overriding. Suppose my base class (A) has a method "Display()" and I want to override the definition of Display() inside B (the child class). I can do it like below:

public class Program
{
    public class A
    {
        //base class method, created with keyword 'virtual'
        public virtual void Display()
        {
            System.Console.WriteLine("A::Display");
        }
    }

    public class B : A
    {
        //derived class method, with new definition using override
        public override void Display()
        {
            System.Console.WriteLine("B::Display");
        }
    }

    public static void Main()
    {
        A a;
        a = new A();
        a.Display();
        a = new B();
        a.Display();
    }
}

Output:

A::Display
B::Display

As you can see in the above example, we have used two keywords, "virtual" and "override", to give a new definition to the method "Display()" inside the derived (child) class B.

With the help of the virtual keyword in the base class (A), we are notifying the C# compiler that the method "Display()" can be changed in a derived class. With the help of the override keyword in the child class (B), we are notifying the C# compiler that the method "Display()" has different code statements than it had in the base class. So, using the virtual and override keywords, we accomplish method overriding in C#.

Above we saw how we can achieve method overriding using the virtual and override keywords, but in method hiding we use the new keyword to give a new definition to a base class method in the derived class. When using the new keyword for method hiding, we don't need to declare the base class method as 'virtual'.

Example

public class Program
{
    public class A
    {
        //base class method, not using virtual keyword
        public void Display()
        {
            System.Console.WriteLine("A::Display");
        }
    }

    public class B : A
    {
        //using 'new' keyword for new definition of Display
        new public void Display()
        {
            System.Console.WriteLine("B::Display");
        }
    }

    public static void Main()
    {
        A a;
        a = new A();
        a.Display();
        B b = new B();
        b.Display();
    }
}

Output

A::Display
B::Display

In the above code, as you can see, we have two classes linked in a parent-child relationship, and we have a method with the same name, "Display", which is declared in the child class with the new keyword. This means the child class version hides the "A" class Display method when called through a B reference.
https://qawithexperts.com/tutorial/c-sharp/21/c-sharp-method-overriding
05 September 2012 16:31 [Source: ICIS news]

LONDON (ICIS)--Crude oil futures fell by more than $1.00/bbl on Wednesday as investors remained cautious of the European Central Bank's plans to buy unlimited amounts of short-term government debt.

By 14:22 GMT, the front-month October ICE Brent contract fell to an intra-day low at $112.79/bbl, a loss of $1.39/bbl. The contract then edged a little higher to trade around $113.45/bbl. At the same time, the front-month October NYMEX WTI contract was trading around $94.70/bbl, having touched an intra-day low at $94.26/bbl, a loss of $1.04/bbl.

The European Central Bank's President, Mario Draghi, said previously that the bank is looking at plans to purchase short-term government bonds to help troubled economies such as Greece, Spain and Italy. If the ECB buys government bonds, it is likely to exert some downward pressure on yields which are paid by the
http://www.icis.com/Articles/2012/09/05/9593027/crude-falls-more-than-1bbl-on-possible-ecb-plans.html
So this is my secondary class:

public class Tuna {

    private int hour = 2;
    private int minute = 2;
    private int second = 3;

    public void setTime(int hour, int minute, int second) {
        this.hour = 4;
        this.minute = 5;
        this.second = 6;
    }

    public String toMilitary() {
        return String.format("%02d:%02d:%02d", hour, minute, second);
    }

    public String toString() {
        return String.format("%d:%02d:%02d %s", ((hour == 0 || hour == 12) ? 12 : hour % 12), minute, second, (hour < 12 ? "AM" : "PM"));
    }
}

and this is my main class. The effect I wanted (for all the printlns) was for it to use the numbers passed into setTime (a method? a constructor? I'm not sure), but it uses the local 'private' variables instead.
https://www.javaprogrammingforums.com/whats-wrong-my-code/37668-small-problem-do-help-explanation-much-appreciated.html
Single Round Match 742 Editorials

I participated in SRM 742 Div II and, inspired by my brother's editorial of SRM 739, decided to present my solutions here! During the SRM I solved the tasks in C++ but, since I am currently learning Haskell, I will also present my solutions in Haskell.

BirthdayCandy (250)

To figure out how many candies Elisa will get if she picks a specific candy bag, we do whole-number division by (K + 1) to learn how many rounds of giving candies Elisa will do. Then we take the rest of the candies (which is the remainder from the whole division by (K + 1)) and sum it with the number of rounds, and that is the total number of candies Elisa would get from that bag. We do this for every bag of candies and return the biggest number.

C++

#include <vector>

using namespace std;

class BirthdayCandy {
public:
    int mostCandy(int K, vector <int> candy) {
        int maxCandy = -1;
        for (int i = 0; i < (int) candy.size(); i++) {
            const int numCandy = candy[i] / (K + 1) + candy[i] % (K + 1);
            if (numCandy > maxCandy) maxCandy = numCandy;
        }
        return maxCandy;
    }
};

Haskell

It is really easy to describe this solution in Haskell, resulting in just one line of logic!

module BirthdayCandy (mostCandy) where

mostCandy :: Int -> [Int] -> Int
mostCandy k = maximum . map (\c -> c `div` (k + 1) + c `mod` (k + 1))

SixteenQueens (500)

At first, this one seems hard if we want to make sure that we arrange the queens in an optimal way so that the biggest number of them can fit on the board. However, if we look at the constraints (board size and the maximal number of queens), we can figure out that the board is so big that we can just put queens on it one by one in a greedy manner, and there will always be enough space for all of them. Therefore, we go with the greedy solution where, for each queen that we want to add, we go through all the fields on the board until we find the first field that is available (not under attack by queens that are already on the board).
Then we put the new queen on the board and continue with the next queen. To figure out if a field is under attack, we check that it is not on the same diagonal, row or column as any other queen. A nice way to check if two fields are on the same diagonal is to check if the absolute difference of their row indexes is equal to the absolute difference of their column indexes. If it is, they are on the same diagonal; otherwise, they are not.

C++

#include <vector>
#include <cstdlib>

using namespace std;

class SixteenQueens {
public:
    vector <int> addQueens(vector <int> row, vector <int> col, int add) {
        vector<int> result;
        for (int i = 0; i < add; i++) { // For each queen that we want to add.
            bool done = false;
            for (int r = 0; r < 50 && !done; r++) { // Let's try every field on the board. Every row.
                for (int c = 0; c < 50 && !done; c++) { // And every column.
                    bool fieldOk = true; // Is field ok to put queen on it?
                    // Check if any of previous queens is compromising the field.
                    for (int j = 0; j < (int) row.size() && fieldOk; j++) {
                        if (r == row[j] || c == col[j] || (abs(r - row[j]) == abs(c - col[j]))) {
                            fieldOk = false;
                            break;
                        }
                    }
                    if (fieldOk) { // If field is ok to go, put queen on it.
                        row.push_back(r);
                        col.push_back(c); // Add to queens on board.
                        result.push_back(r);
                        result.push_back(c); // Add to results.
                        done = true;
                    }
                }
            }
        }
        return result;
    }
};

Haskell

This problem can naturally be described recursively, which results in a very elegant solution in Haskell.
module SixteenQueens (addQueens) where

addQueens :: [Int] -> [Int] -> Int -> [Int]
addQueens row col 0 = []
addQueens row col add = rAdd:cAdd:(addQueens (rAdd:row) (cAdd:col) (add - 1))
  where
    (rAdd, cAdd):_ = [(r, c) | r <- [0..49], c <- [0..49], not (isUnderAttack (r, c))]
    isUnderAttack (r, c) = any (\(r', c') -> r == r' || c == c' || abs (r - r') == abs (c - c')) (zip row col)

ResistorFactory (1000)

This task I was not able to finish during the SRM – I came up with the solution and started implementing it, but the "thinking part" took me a long time and I did not have enough time left to finish it. I finished it later and confirmed in the practice room that it works correctly.

While the problem is in itself simple – combine resistors to achieve a wanted value – the solution is not so obvious. The way the problem is stated made me consider whether there is a dynamic programming solution; however, I concluded that the search space is just too big and I could not see any smart way to narrow down that search. There are just too many choices – resistor values can be real numbers, we can do series or parallel, we can combine any two resistors that we have built so far. Therefore, I decided to somehow simplify the situation.

Series doubles, parallel halves

First, we can observe that if we put a resistor in series with itself, we get a resistor with double the resistance, and if we put it in parallel, we get a resistor with half the resistance. This means that if we start with a resistor of resistance R, we know how to easily create resistors of resistance 2^x * R where x is an integer. Since we start with a resistor of 10^9 nano-ohms (all the values from now on I will express in nano-ohms), we can create a 2 * 10^9 resistor by putting it in series with itself, or we can create a 0.5 * 10^9 resistor by putting it in parallel with itself. We can then repeat this process with newly created resistors to create smaller and bigger resistors.
Creating an inventory

Next, we can observe that if we have an inventory of resistors with various values, we can just put some of them in series and probably get pretty close to the target value. But what kind of values should those be, so that we can actually do that?

- We need a resistor of small enough value that we can achieve any possible target value with high enough precision. Precision defined in the problem is 1 nano-ohm.
- We need resistors to have big enough values so that we don't need to combine too many resistors in series, since the limit stated by the problem is 1000 commands, which means that we can't combine more than 1000 resistors.

We start our inventory with the only resistor we have at the beginning, resistor #0 with a resistance of 10^9 nano-ohms. Combining the observations so far, we can see that if we repeatedly create smaller and bigger resistors as described before (by doubling and halving), we will cover the space of possible target resistor values (from 0 to 10^18 nano-ohms) with values that are logarithmically (base 2) spaced. To cover that space densely enough, and to also be sure that we have a small enough resistor to always be able to obtain the needed precision, we can keep halving the smallest resistor in the inventory and doubling the biggest resistor in the inventory until the smallest resistor is 2^-30 * 10^9 (~0.93) nano-ohms and the biggest resistor is 2^29 * 10^9 (~0.54 * 10^18) nano-ohms.

Now, let's look at the commands needed to build such an inventory. First, let's build resistors from 2 * 10^9 to 2^29 * 10^9, that is 29 resistors. We can do this by repeatedly putting the last resistor in the inventory in series with itself, 29 times. This results in the following commands:

[(0, 0, 0), (1, 1, 0), ..., (27, 27, 0), (28, 28, 0)]

Next, let's build resistors from 2^(-1) * 10^9 to 2^(-30) * 10^9, that is 30 resistors.
The first resistor we build by putting resistor #0 in parallel with itself, and then we repeatedly put the last resistor in the inventory in parallel with itself, 29 times. This expands our list of commands to a total of 59 commands:

[(0, 0, 0), (1, 1, 0), ..., (27, 27, 0), (28, 28, 0), (0, 0, 1), (30, 30, 1), (31, 31, 1), ..., (59, 59, 1)]

Values of created resistors (in ohms, not in nano-ohms!):

[2^1, 2^2, ..., 2^28, 2^29, 2^-1, 2^-2, 2^-3, ..., 2^-30]

Does this inventory satisfy our needs from before? The smallest resistor is smaller than the required precision, so that is ok. The only question that remains is: are we sure that we can always build the target resistor with less than 1000 - 59 = 941 commands by using this inventory of resistors that we created?

Building target resistor from the inventory

First, let's see how we can use our resistor inventory to build the target resistor. The problem states that the last command in our list of commands has to be the one constructing the target resistor. With the inventory just freshly built, the last command is currently the one that constructs the resistor of 2^-30 ohms (~0.93 nano-ohms). If that is not close enough to the target resistor, we are going to add to it, in series, another resistor from the inventory that will bring us as close as possible to the target resistor (which is the biggest resistor that is smaller than the difference between the target resistor value and the last resistor in the inventory). We are going to repeat this process until we construct a resistor that is close enough to the target resistor, and that is it!

Now, to answer the question from before: can we guarantee that we are not going to need more than 1000 commands in total?
Since the distance between our inventory resistors is logarithmic with base 2, with each resistor we add in series while building the target resistor as described above we are at least halving the mistake between our current best resistor and the target resistor, which means that a few dozen steps (<= 60) will always be enough to build it! This means we are in total never going to have more than 59 + 60 = 119 commands, which is way below the limit of 1000.

That is all! Although slightly complex, this solution performs very well in terms of speed and is reliable. I do wonder if there is a simpler solution, or a solution that returns the smallest number of commands, but I have not thought of one yet.

C++

#include <vector>

using namespace std;

class ResistorFactory {
public:
    vector <int> construct(long long nanoOhms) {
        vector<int> commands;
        vector<double> values;
        values.push_back(1000000000.0); // Product 0 is 10^9 nano-ohms.
        for (int i = 0; i <= 28; i++) { // Products 1 to 29, each next is 2 times bigger.
            commands.push_back(i);
            commands.push_back(i);
            commands.push_back(0);
            values.push_back(((double) values[values.size() - 1]) * 2);
        }
        // Product 30, which is 10^9 / 2 (product #0 / 2).
        commands.push_back(0);
        commands.push_back(0);
        commands.push_back(1);
        values.push_back(values[0] / 2);
        for (int i = 30; i <= 58; i++) { // Products 31 to 59, each is 2 times smaller.
            commands.push_back(i);
            commands.push_back(i);
            commands.push_back(1);
            values.push_back(((double) values[values.size() - 1]) / 2);
        }

        // Inventory is built! Now we use our inventory to build the final resistor.
        double remaining = nanoOhms - values[values.size() - 1]; // Difference between what we have and target.
        while (remaining >= 1) {
            int bestIdx = -1; // Biggest resistor that is smaller than remaining amount.
            for (int i = 0; i < (int) values.size(); i++) {
                if (values[i] <= remaining && (bestIdx == -1 || remaining - values[i] < remaining - values[bestIdx])) {
                    bestIdx = i;
                }
            }
            commands.push_back(values.size() - 1);
            commands.push_back(bestIdx);
            commands.push_back(0);
            values.push_back(values[values.size() - 1] + values[bestIdx]);
            remaining -= values[bestIdx];
        }

        return commands;
    }
};

Haskell (v1, more expressive)

The first Haskell version I came up with ended up pretty big, due to me writing very expressive code. This comes naturally when writing Haskell because it is so easy to define functions and data types.

module ResistorFactory (construct) where

import Data.List (foldl', maximumBy)

precisionInNanoOhm = 1 :: Double

data ResistorBuild = Parallel Resistor Resistor | Series Resistor Resistor | OnePiece

data Resistor = Resistor { getResistorId :: Int, getValueInNanoOhm :: Double, getBuild :: ResistorBuild }

resistorToCommand :: Resistor -> (Int, Int, Int) -- Transforms resistor into format expected by Topcoder as result.
resistorToCommand (Resistor _ _ (Series r1 r2)) = (getResistorId r1, getResistorId r2, 0)
resistorToCommand (Resistor _ _ (Parallel r1 r2)) = (getResistorId r1, getResistorId r2, 1)
resistorToCommand _ = error "Can't be transformed to command!"

createFromSeries :: Int -> Resistor -> Resistor -> Resistor
createFromSeries id r1 r2 = Resistor id (getValueInNanoOhm r1 + getValueInNanoOhm r2) (Series r1 r2)

createFromParallel :: Int -> Resistor -> Resistor -> Resistor
createFromParallel id r1 r2 = Resistor id (v1 * v2 / (v1 + v2)) (Parallel r1 r2)
  where (v1, v2) = (getValueInNanoOhm r1, getValueInNanoOhm r2)

type Inventory = [Resistor]

resistor0 = Resistor 0 (10^9) OnePiece -- This is the resistor we start with, 1 ohm.

initialInventory :: Inventory
initialInventory = [resistor0] -- Initial resistor.
    -: (extendInventory createFromSeries [1..29]) -- Big resistors (> 10^9 nano ohm).
    -: ((createFromParallel 30 resistor0 resistor0):) -- First small resistor.
    -: (extendInventory createFromParallel [31..59]) -- Other small resistors (< 10^9 nano ohm).
  where
    -- For each of given ids, it takes last resistor from inventory, creates new one with that id from it and
    -- adds it to the end of that same inventory, repeating the process.
    extendInventory create ids inv = foldl' (\i@(r:_) id -> (create id r r):i) inv ids

a -: f = f a

construct :: Integer -> [(Int, Int, Int)]
construct target = transformResult $ construct' initialInventory (fromIntegral target)
  where transformResult = map resistorToCommand . tail . reverse

-- Given initial inventory and value of target resistor, it will return inventory in which
-- last resistor has that value (within defined precision).
construct' :: Inventory -> Double -> Inventory
construct' inv@(lastResistor:_) target =
    if diff < precisionInNanoOhm
      then inv -- We are done.
      else construct' (newResistor:inv) target
  where
    diff = target - (getValueInNanoOhm lastResistor)
    closestResistor = maximumBy (\r1 r2 -> compare (getValueInNanoOhm r1) (getValueInNanoOhm r2)) $
        filter (\r -> getValueInNanoOhm r <= diff) inv
    newResistor = createFromSeries (getResistorId lastResistor + 1) lastResistor closestResistor

Haskell (v2, less expressive)

I also refactored it to make it shorter but less expressive, and therefore much more similar to the C++ version. I prefer the first, more expressive version.

module ResistorFactory (construct) where

import Data.List (foldl', maximumBy)

initialInventory :: [(Int, Int, Int, Double)]
initialInventory = [(undefined, undefined, undefined, 10^9)] -- Initial resistor.
    -: (extendInventory (\id v -> (id, id, 0, v * 2)) [0..28]) -- Big resistors (> 10^9 nano ohm).
    -: ((0, 0, 1, 10^9 / 2):)
    -: (extendInventory (\id v -> (id, id, 1, v / 2)) [30..58]) -- Small resistors.
  where
    extendInventory create ids inv = foldl' (\i@((_,_,_,v):_) id -> (create id v):i) inv ids

a -: f = f a

construct :: Integer -> [(Int, Int, Int)]
construct target = transformResult $ construct' initialInventory (fromIntegral target)
  where transformResult = map (\(id1, id2, c, v) -> (id1, id2, c)) . tail . reverse

construct' :: [(Int, Int, Int, Double)] -> Double -> [(Int, Int, Int, Double)]
construct' inv@(lastRes:_) target =
    if diff < 1
      then inv
      else construct' (newRes:inv) target
  where
    lenInv = length inv
    diff = target - (value lastRes)
    (bestResId, bestRes) = maximumBy (\(_, r1) (_, r2) -> compare (value r1) (value r2)) $
        filter (\(_, r) -> value r <= diff) $ zip [lenInv - 1, lenInv - 2 .. 0] inv
    newRes = (lenInv - 1, bestResId, 0, value lastRes + value bestRes)
    value (_,_,_,v) = v
https://www.topcoder.com/blog/single-round-match-742-editorials/
Talk:AVerMedia A828

Hi. This isn't working on openSUSE 11.4 with kernel 2.6.37.1, 32 bits. Everything compiles OK, but when inserting a828.ko I get the following error message if I use the installer.sh script (generated in /tmp/vm-install when you run sh AVERMEDIA-Linux-x86-A828-0.28-beta.sh):

FATAL: Error inserting a828 (/lib/modules/2.6.37.1-1.2-desktop/kernel/drivers/media/dvb/dvb-usb/a828.ko): Invalid argument

or the next one if I run it from the console:

insmod: error inserting './a828.ko': -1 Invalid parameters

Greetings.

Hi, could you try again, and show what this command returns:

dmesg

Greetings, ColdSun

Hello. I was a bit tired because of the bad support, and when I asked AVerMedia about new drivers they sent me an email saying that the Linux support has ceased. Then I bought a Pinnacle nanostick and I don't use AVerMedia anymore. Because I'm now using kernel 2.6.39, I think that the output of dmesg may not be meaningful for you since things may have changed again. Anyway, thank you for your time and effort. Greetings.

Hi, thanks for your efforts on producing this Wiki page. I managed to use the info to get my AVerMedia A828 USB TV stick to work on Linux, on Ubuntu 11.04 via MythTV. Now I've tried Ubuntu 11.10 with kernel 3.0.0-0300, and I get a compilation issue:

/A828-expert-install/aver/osdep_th2.c:78:28: fatal error: linux/smp_lock.h: No such file or directory

This is because linux/smp_lock.h has now been deleted because of the removal of the Big Kernel Lock. I've deleted every instance of:

#include <linux/smp_lock.h>

in the following files:

aver/osdep_dvb.c
aver/osdep_th2.c
aver/osdep.c
aver/osdep_v4l2.c

The make command compiled successfully. The AVerMedia A828 driver now works with kernel 3.0.0-0300, which comes with Ubuntu 11.10. I tried to update the main page with new instructions, but was defeated by a spam filter reporting against the word s e m. Here are the new instructions:

Modifications to bring

1.
In aver/osdep.c : 1.a If Kernel >= 3.0.0 then remove line : #include <linux/smp_lock.h> 1); } 1.c As per previous instructions (can't include because blocked by spam filter) 2. In aver/osdep_th2.c : 2.a After : #include "osdep_th2.h" Add : #include "osdep.h" 2.b If Kernel >= 3.0.0 then remove line : #include <linux/smp_lock.h> 2.c Replace : lock_kernel(); By : SysLockKernel(); 2.d and Replace : unlock_kernel(); By: SysUnlockKernel(); 3. In aver/osdep_v4l2.c : 3.a After : #include "debug.h" Add : #define VFL_TYPE_VTX 3 3.b If Kernel >= 3.0.0 then remove line : #include <linux/smp_lock.h> 4. In aver/osdep_dvb.c : If Kernel >= 3.0.0 then remove line : #include <linux/smp_lock.h> 5. Run : sed -i 's/param_array_[gs]et/param_array_ops/g' * sed -i 's/param_array_[gs]et/param_array_ops/g' aver/* 6. Run : make 7. Run : sudo modprobe dmx3191d sudo modprobe v4l2-common sudo modprobe dvb-core sudo insmod ./averusba828.ko sudo insmod ./a828.ko 8. To make modules ./averusba828.ko and ./a828.ko available after reboot : cp ./averusba828.ko /lib/modules/`uname -r`/kernel/drivers/media/dvb/dvbusb cp ./a828.ko /lib/modules/`uname -r`/kernel/drivers/media/dvb/dvbusb It would be good to automate this with scripts and patch files. Regards Mark Script to automate I totally agree. I just made a script that automate the installation, using dkms. Thanks for your modifications. Cheers Cold Sun It doesn't work for me Hi, thanks for your work, i didn't know nothing about how to install this card with latest linux kernels but here I can see something. I don't know why, but i get this error, in archlinux (kernel 32 bits, 3.0.7) and the same with ubuntu 11.10 (32 bits). 
[eduardo@myhost a828-0.28]$ sudo ./install.sh
make -C /lib/modules/3.0-ARCH/build O=/lib/modules/3.0-ARCH/build SUBDIRS=`pwd`
make[1]: Entering directory `/usr/src/linux-3.0-ARCH'
  CC [M]  /usr/src/a828-0.28/a828-core.o
  CC [M]  /usr/src/a828-0.28/aver/osdep_dvb.o
  CC [M]  /usr/src/a828-0.28/aver/osdep_th2.o
  CC [M]  /usr/src/a828-0.28/aver/osdep_v4l2.o
  CC [M]  /usr/src/a828-0.28/aver/osdep_vbuf.o
  CC [M]  /usr/src/a828-0.28/aver/osdep_alsa.o
  SHIPPED /usr/src/a828-0.28/_prebuild.o
  CC [M]  /usr/src/a828-0.28/aver/averusb-mod.o
  LD [M]  /usr/src/a828-0.28/a828.o
/usr/src/a828-0.28/_prebuild.o: file not recognized: File format not recognized
make[3]: *** [/usr/src/a828-0.28/a828.o] Error 1
make[2]: *** [_module_/usr/src/a828-0.28] Error 2
make[1]: *** [sub-make] Error 2
make[1]: Leaving directory `/usr/src/linux-3.0-ARCH'
make: *** [default] Error 2
FAILED : did you execute the script as root?

I hope you can help me. There is an error in your script: in osdep_dvb.c you need to comment this line ;)

  #include <linux/smp_lock.h>

Regards and thanks, Aberkoke

Script to automate - worked for me!

Hi Cold Sun, I've just tried your automated script. It worked for me without any modifications. Regards and many thanks, Mark

Update for kernel 3.2

I have updated the script for the latest Linux kernel, 3.2. I have just added #include <linux/module.h> when some macros are not defined in a828-core.c.

@Mark: Thanks for the feedback =)
@Aberkoke: are you in a 32-bit environment? If you are, you may get the 32-bit source and compile it by your own.

Cold Sun

It doesn't work for me

Hi, you are right, I'm in a 32-bit environment!! I will try to modify and compile. Thanks! Aberkoke

It doesn't work on 32-bit environment

Hi. I have changed the code, following the instructions, but it doesn't work. What happened with the spam filter? Aberkoke

Update for kernel 3.2 - works for me

Hi Cold Sun, the updated script worked for me. Many thanks. Regards, Mark
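The per-file edits in the instructions above are mechanical enough to script. Here is a minimal sketch of that idea — untested against the real AVerMedia tarball, with the file names taken from the instructions; it demonstrates the transformation on a stand-in source file:

```shell
#!/bin/sh
# Sketch: strip the removed Big-Kernel-Lock include and swap the
# lock_kernel()/unlock_kernel() calls, as the instructions describe.
patch_file() {
    sed -i \
        -e '/#include <linux\/smp_lock.h>/d' \
        -e 's/unlock_kernel();/SysUnlockKernel();/g' \
        -e 's/lock_kernel();/SysLockKernel();/g' \
        "$1"
}

# Exercise it on a stand-in file; a real run would loop over
# aver/osdep.c, aver/osdep_th2.c, aver/osdep_v4l2.c and aver/osdep_dvb.c.
cat > demo.c <<'EOF'
#include <linux/smp_lock.h>
static void f(void) { lock_kernel(); unlock_kernel(); }
EOF
patch_file demo.c
cat demo.c
```

Note that `unlock_kernel` is replaced before `lock_kernel`, so the second substitution cannot accidentally match inside the first one's output.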
https://www.linuxtv.org/wiki/index.php/Talk:AVerMedia_A828
CC-MAIN-2016-36
refinedweb
1,055
63.56
Adding an API Controller (8:29)
with James Churchill

Now we're ready to add our first API controller.

After configuring our default route, we're ready to add our first API controller. But before we do that, let's do a quick review of REST API endpoints and how HTTP methods are used to specify the action to be taken against a resource. When working with a REST API, HTTP requests are made against endpoints. When using the Web API framework, API controllers represent our endpoints. Each request is also associated with an HTTP verb or method. These are the actions that can be taken against resources. There are four main HTTP methods that are used with REST APIs. The GET HTTP method fetches a collection of resources, or a single resource. PUT updates a resource, and DELETE deletes a resource. When mapping a request to a controller action method, Web API, by default, will look for a controller action method whose name matches or starts with the HTTP method name. Let's add our first API controller, and see all of this in action.

Just as with MVC projects, it's a convention, though not an absolute requirement, to put all of your API controllers in a folder named Controllers. When working in a project that's using both MVC and Web API, I typically put MVC controllers in a folder named Controllers, and API controllers in a folder named API Controllers. But again, that's not a requirement. It's just a convention that I find helps me to quickly find the controller that I'm looking for. Let's add a class to the Controllers folder named EntriesController.
When Web API is attempting to resolve, or route to, a controller action method, it looks for a class that has a suffix of Controller, is public and non-abstract, and implements IHttpController. Our class name contains the Controller suffix, and it's public and non-abstract, so we're good on those counts. But it currently doesn't implement the IHttpController interface. To satisfy that requirement, we can inherit from Web API's ApiController base class. Be sure to add a using directive for the System.Web.Http namespace. If we navigate to the definition for the ApiController base class, we can see that it implements the IHttpController interface. The ApiController base class also defines a number of helper methods that we'll make use of later in the section.

Now, let's stub out our controller's action methods. To review, our API design calls for us to support the GET, POST, PUT, and DELETE HTTP methods. So to start, I'll add four methods with names to match those HTTP methods: public void Get, public void Post, public void Put, and public void Delete. We actually need two Get methods: one that will return a collection of resources and one that will return a single resource. For the Get method that will return a single resource, add a parameter named id of type int to represent the id of the resource to return. Get requests are intended to retrieve resources, so these methods should have return values. For now, let's use IEnumerable of type Entry and Entry for the return types, and null for the return values. Entry is a model class that represents an entries resource in our application. Later in the section, we'll see how we can use Web API response types and helper methods in the ApiController base class to refine our responses.
We've named our action methods so that they match the HTTP method names exactly, but in reality, the method names just need to start with the HTTP method name. For example, we could rename our first Get method to GetEntries and the second Get method to GetEntry. I prefer to use the exact names; it's more concise. Which approach you use is something that you and your team should discuss, decide, and follow consistently. If you prefer, you can also ignore the naming convention completely and use a configuration-based approach by decorating your action methods with HTTP method attributes: HttpGet for Get requests, HttpPost for Post requests, HttpPut for Put requests, and HttpDelete for Delete requests. When using the HTTP method attributes, you can name your action methods whatever you'd like. But, unless there's a compelling reason to do so, I'd still use the corresponding HTTP method names.

Let's update the Get action method to return some data so that we can test our entries controller. First, let's define a biking activity, and then return a list of type Entry. Now, let's add some entry objects to our list collection: new Entry, the year, the month, the day, our activity, Biking, and the duration, 10.0m, which indicates that 10 is a decimal. Now copy and paste that entry. Change the date to 2017, 1, 3, keep the same activity, and change the duration to 12.2. Now that our Get method is returning some data, we're ready to run and test our application.
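Pulling the dictated pieces together, the finished controller from this video might look roughly like the sketch below. The Entry constructor shape and the Activity type are assumptions inferred from the narration (they are not shown on screen), so treat them as placeholders for the course's actual model classes:

```csharp
using System;
using System.Collections.Generic;
using System.Web.Http;

public class EntriesController : ApiController
{
    // GET api/entries
    public IEnumerable<Entry> Get()
    {
        var biking = Activity.Biking;   // assumed: an Activity type exists in the course's models
        var entries = new List<Entry>
        {
            new Entry(2017, 1, 1, biking, 10.0m),
            new Entry(2017, 1, 3, biking, 12.2m)
        };
        return entries;
    }

    // GET api/entries/5
    public Entry Get(int id)
    {
        return null;   // placeholder until a data store is wired up
    }

    public void Post() { }

    public void Put() { }

    public void Delete() { }
}
```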
https://teamtreehouse.com/library/adding-an-api-controller?t=235
CC-MAIN-2021-04
refinedweb
1,062
73.58
Is it possible to dynamically produce large files (10GB+) for the client to download?

I'm building a webapp that dynamically creates files and downloads them to the client. I've implemented this by creating a Blob of data and using an object URL to download them (example from: Download large files in Dartlang):

import 'dart:html';

void main() {
  var data = new Blob(['Hello World!\n' * 1000000]);
  querySelector("#downloadLink")
    ..setAttribute('href', Url.createObjectUrl(data));
}

For < 800MB I would recommend FileSaver, but for 10GB+ you are going to need something like StreamSaver (it will only work in Blink, though). 16GB+ hasn't been any problem for me.

const fileStream = streamSaver.createWriteStream('filename.txt')
const writer = fileStream.getWriter()
const encoder = new TextEncoder()

// When you have any data you write it as a Uint8Array
let data = 'a'.repeat(1024)
let uint8array = encoder.encode(data + "\n\n")

writer.write(uint8array) // chunk
writer.write(uint8array) // chunk
writer.write(uint8array) // chunk
writer.write(uint8array) // chunk

// After you have written all bytes you need to close it
writer.close()

(Just don't write all chunks at once; do it progressively as you get more data available.)
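As a sketch of that "write progressively" advice, here's the general chunking pattern. The StreamSaver writer object itself is browser-only, so a byte counter stands in for it here; in the real flow each yielded chunk would go to `writer.write(chunk)`:

```javascript
// Generate a large payload in fixed-size chunks instead of building one
// giant string in memory; each chunk is what you'd pass to writer.write().
const encoder = new TextEncoder();

function* chunks(totalBytes, chunkSize) {
  for (let remaining = totalBytes; remaining > 0; remaining -= chunkSize) {
    yield encoder.encode('a'.repeat(Math.min(chunkSize, remaining)));
  }
}

let written = 0;
for (const chunk of chunks(10 * 1024, 1024)) {
  written += chunk.length; // in the real flow: writer.write(chunk)
}
console.log(written); // 10240
```

Because the chunks are produced lazily, only one chunk is ever held in memory at a time, which is what makes the 10GB+ case workable.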
https://codedump.io/share/ZicGwdr93Xgb/1/is-it-possible-to-dynamically-produce-large-files-10gb-and-stream-them-to-the-client
CC-MAIN-2017-09
refinedweb
187
51.65
TestNG - A Flexible Java Test Framework

By Rad Widmer, OCI Senior Software Engineer
July 2010

Introduction

TestNG is a Java testing framework that is particularly well-suited for use on large projects. It provides especially strong support for writing functional and integration tests as well as unit tests. TestNG key features include:

- test groups: test methods can belong to any number of groups
- test dependencies: tests can be skipped based on the results of other tests
- parameterized test methods and data providers: allow data to be supplied to a test or configuration method from another method, or an XML file
- dependency injection: it's easy to hook up listeners for customizing behavior
- multi-threading support: tests can run in multiple threads in order to speed up execution

TestNG uses Java annotations extensively. This means no reliance on class inheritance or method naming conventions as in JUnit 3. While JUnit 4 also uses annotations and shares some features with TestNG, there are significant differences in philosophy and implementation. JUnit strives to keep tests isolated from one another, while TestNG is more flexible, allowing various types of dependencies between tests. The key difference is the context in which the tests run.

The Basics

TestNG can be invoked in a number of ways - from the command line, from within an IDE such as Eclipse, IntelliJ IDEA, or NetBeans, from the testng Ant task, or from Maven. In all cases, a suite XML file (also referred to as a testng.xml file) is used to define the test suites. Although it is not required, a suite XML file provides the most power and flexibility in configuring and selecting which tests to run.
Here's a simple command line example, which runs the suite defined by testng.xml (assuming the testng jar file and any classes used during testing are on the classpath):

  java org.testng.TestNG testng.xml

There are several useful optional arguments, including:

  -groups: a comma-separated list of groups to run
  -excludegroups: a comma-separated list of groups to exclude
  -listener: a comma-separated list of classes which implement ITestListener
  -reporter: a comma-separated list of classes which implement ITestReporter

The Suite XML (testng.xml) file

A testng.xml file defines a single test suite. A suite contains one or more test elements, each of which specifies a set of test classes. The tests which make up the test element can be specified in one or more of the following ways:

- a list of classes which contain the test methods
- a list of packages to search for test classes
- a list of groups to include or exclude from the test element

Here is an example suite which defines a test named CoreTests that includes classes ClassATest and ClassBTest, plus all test classes in the package com.ociweb.jnb.testng.core and any subpackages (because the package name ends with ".*"). In addition, the group long-tests is excluded from the test.

  <suite name="Suite" parallel="false">
    <test name="CoreTests">
      <classes>
        <class name="com.ociweb.jnb.testng.ClassATest"></class>
        <class name="com.ociweb.jnb.testng.ClassBTest"></class>
      </classes>
      <packages>
        <package name="com.ociweb.jnb.testng.core.*"></package>
      </packages>
      <groups>
        <run>
          <exclude name="long-tests"></exclude>
        </run>
      </groups>
    </test>
  </suite>

For more details on the suite XML file, an HTML version of the DTD schema is available online.

Annotations

Any class can be turned into a TestNG test class by adding at least one TestNG annotation. The @Test annotation is used to designate the test methods in a class.
It can be applied to individual methods, as in the following example:

  public class ExampleMethodAnnotations {
      @Test
      public void checkSomething() {}

      @Test
      public void checkSomethingElse() {}

      public void notATestMethod() {}
  }

The @Test annotation can also be applied to a class, in which case all public methods of that class will be test methods, unless they are annotated by one of the @BeforeXXX or @AfterXXX configuration annotations described below.

  @Test
  public class ExampleClassAnnotation {
      public void checkSomething() {}

      public void checkSomethingElse() {}
  }

Configuration Annotations

Configuration annotations can be applied to methods to perform some action either before or after specific events:

  @BeforeSuite/@AfterSuite: Executes before/after any tests in the test suite
  @BeforeTest/@AfterTest: Executes before/after any tests in the <test> element which contains this class
  @BeforeClass/@AfterClass: Executes before/after any tests in this class
  @BeforeMethod/@AfterMethod: Executes before/after every test method in this class
  @BeforeGroups/@AfterGroups: Executes before/after any tests in the specified groups.

For example, the methods in the class TestBeforeAfter are executed in the following order:

  beforeSuite
  beforeTest
  beforeClass
  beforeMethod
  test1
  afterMethod
  beforeMethod
  test2
  afterMethod
  afterClass
  afterTest
  afterSuite

  public class TestBeforeAfter {
      @BeforeSuite
      public void beforeSuite() {}

      @AfterSuite
      public void afterSuite() {}

      @BeforeTest
      public void beforeTest() {}

      @AfterTest
      public void afterTest() {}

      @BeforeClass
      public void beforeClass() {}

      @AfterClass
      public void afterClass() {}

      @BeforeMethod
      public void beforeMethod() {}

      @AfterMethod
      public void afterMethod() {}

      @Test
      public void test1() {}

      @Test
      public void test2() {}
  }

Groups

It's natural to want to place tests into groups.
For example, you may have a core group of tests which should be run before any files are checked in to the source code repository. You could also have groups for different types of tests, such as integration tests and performance tests. Tests which cover different features could also be in different groups. A "broken tests" group can be useful for tests which are known to be broken, and won't be fixed for a while.

In TestNG, groups are used to control execution of tests, both by including or excluding tests from a suite, and by controlling the order of tests and whether tests are skipped. A test method can be a member of any number of groups. Test methods can be assigned to groups at the class level (all test methods in the class are members of the class-level groups), on a per-method basis, or a combination of both.

An example of assigning groups at the method level:

  public class MethodLevelGrouping {
      @Test(groups={"a"})
      public void a() {}

      @Test(groups={"b"})
      public void b() {}

      @Test(groups={"a", "b"})
      public void ab() {}
  }

To assign groups at the class level, add a @Test annotation to the class. This turns all public methods in the class into test methods (unless they have a @BeforeXXX or @AfterXXX annotation). These methods also acquire any annotation elements (such as groups) defined by the class-level @Test annotation. In the following example, testB1 and testB2 are test methods belonging to group b, and method testBC belongs to groups b and c.

  @Test(groups={"b"})
  public class ClassLevelGrouping {
      public void testB1() {}

      public void testB2() {}

      @Test(groups={"c"})
      public void testBC() {}
  }

Test Dependencies

Tests can depend on other methods or groups. This affects test execution order and can cause tests to be skipped depending on the results of other tests. The execution order rules are as follows:

- A test which depends on other methods executes after the methods it depends on.
- A test which depends on groups executes after all methods in the groups it depends on.

There are two types of dependencies, hard (the default) and soft. This determines whether tests are skipped or not.

- A hard dependency (@Test(alwaysRun=false)) means that the dependent test will be skipped if any tests it depends on fail.
- A soft dependency (@Test(alwaysRun=true)) means that a test will run regardless of the outcome of the tests it depends on. However, it will always execute after all the tests it depends on.

In this example, test depTest depends on methods pre1 and pre2. depTest always runs after pre1 and pre2, and will be skipped if either pre1 or pre2 fails. Note that pre1 and pre2 can be executed in any order.

  public class DependsOnMethods {
      @Test(dependsOnMethods={"pre1", "pre2"})
      public void depTest() {}

      @Test
      public void pre1() {}

      @Test
      public void pre2() {}
  }

Here's an example where a number of tests depend on an external server and some environment configuration. We want to avoid executing the server-dependent tests if the server is down. One way to do this is to create test methods which verify that the server is working and the environment is configured correctly. For convenience, these methods are placed in an init group, and the server-dependent tests depend on this group. Then, if the server is down, the server-dependent tests will be skipped. This has the benefit of potentially saving a lot of time (no waiting for the server to time out for each test), plus the error report is more accurate, allowing one to focus more quickly on the actual failures. In this example, the init group methods are placed in their own class. Alternatively, all the methods could be placed into one class, though this would require annotations on each method.
  @Test(groups={"init"})
  public class InitTests {
      public void checkEnvironment() {}

      public void checkServer() {}
  }

  @Test(dependsOnGroups={"init"})
  public class ServerDependentTests {
      public void sdtest1() {}

      public void sdtest2() {}
  }

Adding Flexibility with Method Parameters

Test and configuration methods can have parameters. Parameter values can be assigned in two ways - by using the @Parameters annotation, or by using DataProviders.

@Parameters

Here's an example of using the @Parameters annotation to assign a server name and port number:

  @Parameters({"server-name", "port"})
  @BeforeTest
  public void setupServer(String serverName, int port) {
      this.serverName = serverName;
      this.port = port;
  }

The parameter values are assigned in the suite XML file (note that the name attributes must match the names listed in the @Parameters annotation):

  <test name="ParametersTest">
    <parameter name="server-name" value="test-server"/>
    <parameter name="port" value="1234"/>
    ...
  </test>

Parameter elements can be placed under the <suite> or <test> elements. Suite parameters apply to all tests unless they are overridden by a parameter of the same name under a <test> element.

While the @Parameters approach is easy to use, it has some limitations. Parameter types are limited to simple types such as String, int, and double. Also, the parameter values are only assigned once, so it is not possible to invoke the same method with multiple sets of parameter values. DataProviders overcome these limitations.

DataProviders

A DataProvider is a method annotated with @DataProvider which returns either an Object[][] or an Iterator<Object[]>. In either case, the inner Object[] contains the parameters for a single invocation of a test method. The number and types of elements in each row must match the number and types of parameters in the test method.
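To make the row-to-parameters mapping concrete, here's a plain-Java sketch (no TestNG on the classpath) that walks a provider's Object[][] the way the framework would, invoking the check once per row:

```java
// Each inner Object[] supplies the arguments for one invocation of the
// test method, in declaration order: (int expect, String s1, String s2).
public class ProviderShapeDemo {
    static Object[][] indexOfProvider() {
        return new Object[][] {
            { -1, "something", "x" },
            {  4, "something", "thing" },
        };
    }

    static void checkIndexOf(int expect, String s1, String s2) {
        if (s1.indexOf(s2) != expect) {
            throw new AssertionError(s1 + ".indexOf(" + s2 + ") != " + expect);
        }
    }

    public static void main(String[] args) {
        int invocations = 0;
        for (Object[] row : indexOfProvider()) {
            checkIndexOf((Integer) row[0], (String) row[1], (String) row[2]);
            invocations++;
        }
        System.out.println(invocations + " invocations passed");
    }
}
```

TestNG does this invocation loop for you (and records each row as a separate test result), but the shape of the data is exactly this.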
Here's a simple example:

  @DataProvider(name = "indexOfProvider")
  public Object[][] indexOfTestGenerator() {
      return new Object[][] {
          { -1, "something", "x" },
          { 4, "something", "thing" },
      };
  }

  @Test(dataProvider="indexOfProvider")
  public void testStringIndexOf(int expect, String s1, String s2) {
      assertEquals(expect, s1.indexOf(s2));
  }

In this example, the test method will be invoked twice, first with parameters {-1, "something", "x"}, then with parameters {4, "something", "thing"}.

DataProviders can be particularly useful when implementing data-driven tests - tests where it is necessary to perform the same checks with a variety of input data sets. Typical ways of handling these types of tests include:

- Iterate over the input data sets within a single test method
- Create a separate test method for each input data set

DataProviders can provide a more elegant solution by helping keep your test methods small, and by separating the test data generation from the actual tests.

Parallel Execution

TestNG supports running tests in multiple threads. Use this feature to speed up execution of thread-safe tests. By default, all tests run in a single thread. To use multiple threads, use the parallel and thread-count attributes of the <suite> and <test> tags. The parallel attribute can be set to the following values:

  "tests": All test methods in a given <test> tag are executed in the same thread, but different <test> tags may execute in different threads. This value is only valid for the <suite> tag.
  "classes": All test methods in the same class execute in a single thread, but methods in different classes may execute in different threads.
  "methods": All test methods may potentially execute in different threads.
  "false": Don't use multiple threads.

The thread-count attribute is used to set the number of threads in the thread pool. If parallel or thread-count attributes are specified on a <test> tag, their values take precedence over the <suite> values.
Here's one way to define a suite where parallel is "methods" by default, but is overridden by a test which contains methods which should not be run in parallel:

  <suite name="Suite" parallel="methods" thread-count="...">
    ...
    <test name="not-parallel" parallel="false">

Reports

By default, TestNG produces an HTML report and a TestNG-specific XML file. In addition, XML files are produced which are compatible with the Ant JUnitReport task. This is handy for integrating into existing build frameworks, but lacks any TestNG-specific data. It is also possible to produce custom reports by implementing an IReporter reporter class. This class is called when all the test suites have completed, and it is passed the test results for all tests in the suites.

Summary

TestNG provides good support for managing large test suites, and provides solutions to common problems which arise particularly when writing functional and integration tests. This makes it a natural fit for many projects, where it can make it easier to write tests, more efficient to run the tests, and easier to interpret the results.

References

[1] The TestNG web site: This is a good place to start.
[2] The testng-users discussion group.
[3] Next-Generation Testing with TestNG, An Interview with Cédric Beust by Frank Sommers: A good interview with the creator of TestNG.
[4] Next Generation Java Testing by Cédric Beust and Hani Suleiman: The definitive TestNG book, coauthored by the creator.
https://objectcomputing.com/resources/publications/sett/july-2010-testng-a-flexible-java-test-framework
CC-MAIN-2022-33
refinedweb
2,241
50.16
In the last couple of weeks I had a chance to sprinkle some of JP's syntactic sugar all over my projects. It's amazing how much more concise my unit tests have become. I've had a couple of issues where I was mocking out the behavior of some Win Forms controls, but for the most part it's been an awesome experience! I just wanted to take a moment to say thank you JP! I am enjoying using your BDD (on steroids) extensions. If you haven't already, you need to check it out here… NOW! Maaad, maaaad props Mr. JP!

public class behaves_like_save_changes_view_bound_to_presenter : concerns_for<SaveChangesView>
{
    context c = () => { presenter = an<ISaveChangesPresenter>(); };

    because b = () => sut.attach_to(presenter);

    static protected ISaveChangesPresenter presenter;
}

public class when_the_save_button_is_clicked : behaves_like_save_changes_view_bound_to_presenter
{
    it should_forward_the_call_to_the_presenter = () => presenter.was_told_to(x => x.save());

    because b = () => EventTrigger.trigger_event<Events.ControlEvents>(
        x => x.OnClick(new EventArgs()),
        sut.ux_save_button
    );
}

public class when_the_cancel_button_is_clicked : behaves_like_save_changes_view_bound_to_presenter
{
    it should_forward_the_call_to_the_presenter = () => presenter.was_told_to(x => x.cancel());

    because b = () => EventTrigger.trigger_event<Events.ControlEvents>(
        x => x.OnClick(new EventArgs()),
        sut.ux_cancel_button
    );
}

public class when_the_do_not_save_button_is_clicked : behaves_like_save_changes_view_bound_to_presenter
{
    it should_forward_the_call_to_the_presenter = () => presenter.was_told_to(x => x.dont_save());

    because b = () => EventTrigger.trigger_event<Events.ControlEvents>(
        x => x.OnClick(new EventArgs()),
        sut.ux_do_not_save_button
    );
}

Mo, what's "sut"?

P.S.: If I gotta ask, then it needs to be renamed.
If I follow Toyota’s “Objective of the Objective” exercise I can arrive at a clearer answer. The issue is: you believe that “sut” is sufficient to communicate to me, who has never seen this code and who is scanning it, what the subject of the specification is. Your concern as a coder of this test may in fact be the “system under test”. But I don’t know what that system is because as a user, I’m not in the same context when I read that code as you are. My concern is quickly gaining knowledge. The spec as written has usability challenges in this regard. The objective is the “sut”. Granted. What is the objective of the “sut”? The Objective of the Objective here will satisfy your needs as well as mine. If you code for yourself, you might as well keep your code on your own hard drive and not share it with others. It’s like creating a user interface flow that only you have an intimate understanding of at first glance, forcing others to decipher the intended experience. If “sut” is an order, call it “order”. If “sut” is a dog, call it “dog”. If “sut” is a shopping cart, call it “shopping cart”. The level of abstractness achieved is certainly laudable, but abstractness isn’t the objective that best serves humans. Productivity not only doesn’t come from abstractness, it’s often damaged by it. Scott, What can I say? You are right! It’s about “solubility”, right? I suppose I got so used to staring at my own code, that I forget what it’s like to view it from another perspective. Thanks for your fresh perspective, Mo I can’t get to the ” BDD (on steroids) extensions” link in your blog. (404 error) any thoughts? That’s probably because I put up the link to the old repo… Sorry about that! mO
http://lostechies.com/mokhan/2009/03/11/bdd-on-steroids/
CC-MAIN-2014-15
refinedweb
580
59.8
Visual Basic .NET offers its users, among many other things, a fully object-oriented programming (OOP) experience. Some former VB6 developers have prepared themselves well to embrace this new version of the language. Others, however, need more time and guidance in taking on this new challenge and opportunity. In this VB.NET OOP series, Budi Kurniawan introduces many facets of object-oriented design and programming to VB programmers new to OOP. Discussions focus more on OO concepts rather than OO techniques. In the first article of this series, Budi answers what is probably the most often asked question by newcomers to OOP: Why OOP? Finally, we look at why VB6 programmers often find it hard to shift to OOP when OOP itself is not difficult.

Microsoft launched the .NET Framework early this year and overhauled its most popular language: Visual Basic. With the introduction of the Framework, the language was reconstructed and given a new name: VB.NET. It's now a language that fully supports object-oriented programming. It is true that VB6 supported some notion of objects, but lack of support for inheritance made it unfit to be called an OOP language. For Microsoft, changing VB is more of a necessity than a choice, if VB is to become one of the .NET Framework languages and in order for VB programmers to be able to use the .NET Framework Class Library.

With the change to the language, many VB fans might think that OOP is a natural progression from procedural programming, and that OO technology is a new technology. Surprisingly, this is not the case. OO concepts were first introduced in 1966 in a journal paper titled "SIMULA-An Algol-based Simulation Language." Its authors, Ole-Johan Dahl and Kristen Nygaard from the University of Oslo and the Norwegian Computing Center (Norsk Regnesentral), later completed the first OOP language in the world.
The language is called Simula 67, short for "simulation language." You may find it hard to believe that a full-fledged OO language was implemented before structured programming and when, as OO expert Bertrand Meyer puts it in his book Object-Oriented Software Construction, "The Vietnam War was still a page 4 item; barricades had not yet sprung up in the streets of Paris; a mini-skirt could still cause a stir." However, OO technology did not become popular until the early 1980s, because at the time of invention, it was considered too radical for practical use. VB only officially became a fully OOP language in 2002, and the inventors of this technology, Ole-Johan Dahl and Kristen Nygaard, only got their Turing Award, considered the Nobel prize of computer science, in 2001 "for ideas fundamental to the emergence of object-oriented programming, through their design of the programming languages Simula I and Simula 67."

OK, enough history. Now, let's discuss why OOP in VB is a good thing. There are many benefits of OOP that make the technology so popular and the computer science community grateful to its inventors. Below, I will mention the three most important ones, in no particular order.

Modularity

Modern software applications tend to become larger and larger. Once upon a time, a "large" system comprised a few thousand lines of code. Now, even those consisting of one million lines are not considered that large. When a system gets larger, it starts giving its developers problems. Bjarne Stroustrup, the father of C++, once said something like this: a small program can be written in anything, anyhow. If you don't quit easily, you'll make it work in the end. But a large program is a different story. If you don't use techniques of "good programming," new errors will emerge as fast as you fix the old ones. The reason for this is that there is interdependency among different parts of a large program. When you change something in some part of the program, you may not realize how the change might affect other parts.
When you change something in one part of the program, you may not realize how the change might affect other parts. Modularity makes maintenance less of a headache. Modularity is inherent in OOP because a class, which is a template for objects, is a module by itself. The problem for those new to OOP is how to determine what methods and data should make up a class; that is another topic of its own, though. A good design should allow a class to contain similar functionality and related data. An important and related term that is used often in OOP is coupling, which means the degree of interaction between two classes (modules). Reusability means that code that has previously been written can be reused by the code writer and by others who need the same functionality provided by the code. It is not surprising, then, that an OOP language often comes with a set of ready-to-use libraries. In the case of VB.NET, the language shares the .NET Framework Class Library with other .NET languages. These classes have been carefully designed and tested. It is also easy to write and distribute your own library. Support for reusability in a programming platform is very attractive, because it shortens the time taken to develop an application. In a world where good programmers are expensive, what else is more appealing? One of the main challenges to class reusability is creating good documentation for the class library. How fast can a programmer find a class that provides the functionality he/she is looking for? Is it faster to find such a class or to write a new one from scratch? In the .NET Framework Class Library, classes and other types are grouped into namespaces. For instance, the System.Drawing namespace contains the collection of types used for drawing. VB.NET programmers are lucky, because the .NET Framework comes with good documentation, with proper indexing and structured, searchable content.
Reusability does not only apply to the coding phase through the reuse of classes and other types; when designing an application in an OO system, solutions to OO design problems can also be reused. These solutions are called design patterns, and to make it easier to refer to each solution, each pattern is given a name. For example, if you need to make sure that a class can only have one instance, there is already a solution to this, called the "singleton" design pattern. The early catalog of reusable design patterns can be found in Design Patterns: Elements of Reusable Object-Oriented Software. In future articles, I will also discuss the use of these patterns. Every application is unique. It has its own requirements and specifications. In terms of reusability, sometimes you cannot find an existing class that provides the exact functionality that your application requires. However, you will probably find one or two that provide part of the functionality. Extendibility means that you can still use those classes by extending them so that they provide what you want. You still save time, because you don't have to write code from scratch. In OOP, extendibility is achieved through inheritance. You can extend an existing class, add some methods or data to it, or change the behavior of methods you don't like. If you know the basic functionality that will be used in many cases, but you don't want your class to provide very specific functions, you can provide a generic class that can be extended later to provide functionality specific to an application. A good example is the System.Windows.Forms.Form class, which provides basic methods, properties, and events for a Windows form. To create a form in a Windows application, you extend this class. Because your application has specific requirements that do not necessarily exist in other applications' specifications, you can add the methods and data specific to your needs.
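The "singleton" pattern mentioned above takes only a few lines to sketch. The example below uses Java rather than VB.NET for brevity, and the class name is made up for illustration:

```java
// A minimal, thread-safe singleton: the class itself guards its only instance.
public final class AppConfig {
    // The single shared instance, created eagerly when the class is loaded.
    private static final AppConfig INSTANCE = new AppConfig();

    // A private constructor prevents outside code from creating more instances.
    private AppConfig() {}

    public static AppConfig getInstance() {
        return INSTANCE;
    }

    public static void main(String[] args) {
        // Both calls hand back the same object.
        System.out.println(AppConfig.getInstance() == AppConfig.getInstance()); // prints "true"
    }
}
```

The key design choice is the private constructor: the compiler, not a convention, enforces that no second instance can ever be created.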
You save time and programming (and debugging) effort because you get the basic functionality of a form for free by extending System.Windows.Forms.Form. As another example, take a look at the Control class in the System.Windows.Forms namespace. This class provides basic functionality for a Windows control. For instance, it has the BackColor property, which is needed by all Windows controls. Other classes that extend it get this property for free. Examples of classes that extend the Control class are Label, MonthCalendar, ScrollBar, etc. These classes need the BackColor property as part of their user interface. Not every control needs the SetCalendarDimensions method, however; only MonthCalendar does. Therefore, the designers of the .NET Framework class library didn't put this method in the Control class, but in the MonthCalendar class. If you create your own custom control by extending the Control class, your class, too, will get the BackColor property for free. Now, I hope you agree with me that to work within the .NET Framework, VB programmers need to master OOP. In light of this, there is bad news and good news. The bad news is that the VB versions prior to VB.NET, including VBScript and Visual Basic for Applications (VBA), are not fully OOP languages. It is true that you could use and write COM components, which are basically objects; this is probably the reason that some people insist that VB6 is an OOP language. The lack of other OOP features such as inheritance, however, disqualifies previous versions of VB from being an OOP language (for the criteria of an OO system, see Chapter 2 of Bertrand Meyer's Object-Oriented Software Construction, Second Edition). It is probably more appropriate to categorize the old VB as something between a procedural language and an OOP language. This is bad news for those wanting to learn VB.NET.
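The inheritance mechanism described above, a base class supplying a shared property while subclasses add only what is specific to them, can be sketched as follows. Java is used instead of VB.NET, and the class and member names are invented for illustration (they mirror Control/BackColor/MonthCalendar, not the real .NET types):

```java
// Base class: every "control" gets a background color for free.
class SimpleControl {
    private String backColor = "white";

    public String getBackColor() { return backColor; }
    public void setBackColor(String c) { backColor = c; }
}

// Subclass: inherits getBackColor/setBackColor, adds one specific method.
class SimpleCalendar extends SimpleControl {
    private int rows = 1, cols = 1;

    // Only the calendar needs this, so it lives here, not in the base class.
    public void setCalendarDimensions(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
    }

    public String describe() {
        return rows + "x" + cols + " calendar on " + getBackColor();
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        SimpleCalendar cal = new SimpleCalendar();
        cal.setBackColor("gray");          // inherited, not redeclared
        cal.setCalendarDimensions(2, 3);   // subclass-specific
        System.out.println(cal.describe()); // prints "2x3 calendar on gray"
    }
}
```

As in the Control/MonthCalendar case, the method needed by only one subclass is placed in that subclass, while the property needed by all of them is written once in the base class.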
Researchers have been arguing about the best way to teach OOP at school; some argue that it is best to teach procedural programming before OOP is introduced. In many curricula, we see that the OOP course can only be taken when a student is nearing the final year of his/her university term. More recent studies, however, have revealed the contrary. Michael Kölling, from the School of Computer Science and Computer Engineering, Monash University, Australia, explained in one of his papers why learning OOP seems harder than learning structured programming. He and other experts argue that it is how OOP is taught, not OOP itself, that makes it such a difficult task. Someone with procedural programming skills thinks in a paradigm very different from the way OO programmers view and solve problems. When this person needs to learn OOP, he/she has to go through a paradigm shift. It is said that it takes six to 18 months to switch your mindset from the procedural to the object-oriented paradigm. Another study shows that students who have not learned procedural programming do not find OOP that difficult. Now the good news: VB.NET is a relatively easy language to program with. VB.NET programmers do not need to worry about pointers, and they don't have to spend precious time solving memory leaks caused by forgetting to destroy unused objects, etc. The .NET Framework also comes with a very comprehensive class library, with relatively few bugs in its early version. Once you know the nuts and bolts of OOP, programming with VB.NET is easy. Budi Kurniawan is a senior J2EE architect and author. References: Dahl, Ole-Johan and Nygaard, Kristen. "SIMULA--An Algol-based Simulation Language," Communications of the ACM, vol. 9, no. 9, September 1966, pages 671-678. A PDF version of this paper is available. Gamma, Erich; Helm, Richard; Johnson, Ralph; and Vlissides, John. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995. Kölling, M.
"The Problem of Teaching Object-Oriented Programming Part 1: Languages," Journal of Object-Oriented Programming, 11(8), 8-15, 1999. A PDF version of this paper is available. Meyer, Bertrand. Object-Oriented Software Construction, Second Edition, Prentice Hall PTR, 1997. Return to .NET DevCenter © 2017, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.onjava.com/pub/a/dotnet/2002/09/22/vb-oop.html
A binary search tree is a binary tree where, for every node, the values in its left subtree are smaller than the values in its right subtree. In the above picture, the second tree is not a binary search tree. All the values in the left subtree of any node must be smaller than the values of the right subtree of that node, but here the value 8 is not smaller than 6. Hence, the second tree is not a binary search tree.

Searching in a Binary Search Tree

Suppose we have to find a value in a binary search tree. If we are at any node of a binary search tree and the value to be found is greater than the value at the node, then we are assured that the value will lie somewhere in the right subtree; if it is smaller, then in the left subtree. This makes searching in such a tree much more efficient. One interesting point to note here is that when the inorder traversal is applied to a binary search tree, it prints all the data of the tree in sorted order. Thus, the pseudocode for searching in a binary search tree can be written as:

    Node search(int x, Node n) {
        if (n != null) {
            if (n.data == x)
                return n;
            else if (n.data > x)
                return search(x, n.left);
            else
                return search(x, n.right);
        }
        return null; // x is not in the tree
    }

if(n.data == x) – We are simply comparing the data at the current node with the value to be found; if both are equal, then we have found the value and return the current node (which contains the value) in the next line (return n).
else if(n.data > x) – If the data to be found is smaller than the data at the current node, then it must lie in the left subtree, so we call search on the left subtree and return its result.
else – The data to be found is greater than the data at the current node, so it must lie in the right subtree.
So, we call the search function on the right subtree in the next line.

Finding Minimum/Maximum in a Binary Search Tree

In a binary search tree, the smallest element is at the leftmost node of the tree, i.e., to find the smallest element, we start from the root and go left as long as there is a left child. Thus, the code for this can be written as:

    int findMin(Node n) {
        if (n.left == null)
            return n.data;
        else
            return findMin(n.left);
    }

if(n.left==null) – There is no left child of the node, so the current node is the leftmost node and must hold the smallest value in the tree.
else – Move to the left child.

Similarly, the maximum element of a binary search tree is at its rightmost node.

    int findMax(Node n) {
        if (n.right == null)
            return n.data;
        else
            return findMax(n.right);
    }

Inserting an Element in a Binary Search Tree

A new element should be inserted at a position such that, after inserting the new node, the binary search tree remains a binary search tree. This can be achieved by simply searching for the element to be inserted and, if we don't find it, inserting a new node at the spot where the search ended. Thus, the steps for inserting an element x into a binary search tree are:

- If the element is found, do nothing.
- Else, insert x at the last spot on the path traversed.

    insert(Node n, int x) {
        if (n == null) {
            n = new Node(x);
        } else if (n.data < x) {
            insert(n.right, x);
        } else if (n.data > x) {
            insert(n.left, x);
        }
    }

(Note: in a language with pass-by-value references, such as Java, assigning to the parameter n does not attach the new node to the tree; a full implementation returns the new subtree root or keeps track of the parent.)

Deleting a Node in a BST

The deletion part is also easy, but it is the most complex of the tasks mentioned above. When we delete a node, we have to take care of the children of the node and make sure that the property of a BST is maintained. There are three cases in the deletion of a node, explained below:

- The node is a leaf – This is the simplest case; here we can delete the node immediately, without giving it a second thought.
- The node has one child – This is represented in the picture given below. When the node to be deleted has only one child, just replacing the node with its child maintains the property of the search tree. So, we replace the node with its child, i.e., we link the parent of the node to be deleted to that child. Since the child's subtree keeps its ordering and there are no other children involved, the property of a search tree is maintained.
- The node has 2 children – When the node to be deleted has 2 children, we need to choose a node to replace it such that the property of the binary search tree remains intact. Looking at the tree, you will find that either the maximum element of the left subtree or the minimum element of the right subtree satisfies this condition. We will proceed by choosing the minimum element of the right subtree, and then we will delete that node. The smallest element of the right subtree can have at most one child (it cannot have a left child), so we can delete it using either the first or the second case.

Now that you know the concepts of the binary search tree, implementations in C and Java can be found in the posts Binary Search Tree in Java and Binary Search Tree in C.
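The operations described above can be combined into one runnable sketch. This is an independent Java version written for this summary (not the code from the linked posts); insert returns the new subtree root to work around the pass-by-value issue noted earlier:

```java
// Minimal binary search tree: insert, search, findMin, delete.
public class Bst {
    static class Node {
        int data;
        Node left, right;
        Node(int x) { data = x; }
    }

    // Insert returns the (possibly new) subtree root so links are updated.
    static Node insert(Node n, int x) {
        if (n == null) return new Node(x);
        if (x < n.data) n.left = insert(n.left, x);
        else if (x > n.data) n.right = insert(n.right, x);
        return n; // x already present: do nothing
    }

    static boolean search(Node n, int x) {
        if (n == null) return false;
        if (n.data == x) return true;
        return x < n.data ? search(n.left, x) : search(n.right, x);
    }

    // The smallest value is at the leftmost node.
    static int findMin(Node n) {
        return n.left == null ? n.data : findMin(n.left);
    }

    static Node delete(Node n, int x) {
        if (n == null) return null;
        if (x < n.data) { n.left = delete(n.left, x); return n; }
        if (x > n.data) { n.right = delete(n.right, x); return n; }
        // Found the node: handle the three cases from the text.
        if (n.left == null) return n.right;   // leaf, or single right child
        if (n.right == null) return n.left;   // single left child
        int min = findMin(n.right);           // two children: right-subtree minimum
        n.data = min;
        n.right = delete(n.right, min);       // that minimum has at most one child
        return n;
    }

    public static void main(String[] args) {
        Node root = null;
        for (int x : new int[] {6, 2, 8, 1, 4, 7, 9}) root = insert(root, x);
        root = delete(root, 6);               // two-children case
        System.out.println(search(root, 6));  // prints "false"
        System.out.println(findMin(root));    // prints "1"
    }
}
```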
https://www.codesdope.com/blog/article/binary-search-tree/
14 September 2012 16:09 [Source: ICIS news] LONDON (ICIS)--The European methanol contract price should decrease in the fourth quarter because of a stronger euro versus the dollar, a large buyer said on Friday. Compared with three months ago, the euro has strengthened by $0.027. This means that if the European contract price were to roll over at €340/tonne ($442/tonne), the value in dollars would have increased by just over $9/tonne. The buyer believes this is not justified and that the contract price in euros should decrease to compensate. However, sellers have largely dismissed this claim as unreasonable, with one producer branding the proposal as "ridiculous". A supplier and trader noted that crude oil prices have increased over the past two quarters and that, with recent actions taken by the European Central Bank and the US Federal Reserve, the European and [...] "[Methanol] was undervalued anyway. Q4 is traditionally stronger than Q3 and the world economy looks better," the trader and supplier added. Additionally, there are a number of plant turnarounds taking place in Trinidad during the late-third and fourth quarters, which may tighten the [...]. Buyers disagreed with the view that the fourth quarter is usually stronger than the third, pointing out that December usually sees heavily reduced activity because of the holiday season. Furthermore, many companies undertake inventory destocking in order to enter the new year with lower working capital. For this reason several other buyers suggested a slight decrease could be justified. Nevertheless, all suppliers were adamant that the contract price should roll over at minimum, with perhaps even a slight increase applied. The European contract price for methanol is settled on a [...]
http://www.icis.com/Articles/2012/09/14/9595760/europe-methanol-contract-price-must-fall-in-q4-major-buyer.html
In React, a component describes its own appearance; React then handles the rendering for you. A clean abstraction layer separates these two functions. In order to render components for the web, React uses standard HTML tags. This same abstraction layer, known as the "bridge," enables React Native to invoke the actual rendering APIs on iOS and Android. On iOS, that means that your React Native components render to real UI Views, while on Android, they'll render to native Views. You'll write what looks an awful lot like standard JavaScript, CSS, and HTML. Instead of compiling down to native code, React Native takes your application and runs it using the host platform's JavaScript engine, without blocking the main UI thread. You get the benefits of native performance, animations, and behavior, without having to write Objective-C or Java. Other cross-platform methods of app development, such as Cordova or Titanium, can never quite match this level of native performance or appearance. Instead of HTML elements such as <div>, <img>, and <p>, React Native provides you with basic components such as <Text> and <View>. In the example below, the basic components used are <ScrollView>, <TouchableHighlight>, and <Text>, all of which map to Android and iOS-specific views. Using them to create a scrolling view with proper touch handling is pretty straightforward:

// iOS & Android
var React = require('react-native');
var {
  ScrollView,
  TouchableHighlight,
  Text
} = React;

var TouchDemo = React.createClass({
  render: function() {
    return (
      <ScrollView>
        <TouchableHighlight onPress={() => console.log('pressed')}>
          <Text>Proper Touch Handling</Text>
        </TouchableHighlight>
      </ScrollView>
    );
  },
});

Styles are applied via a style attribute:

<View style={styles.container}>
  ...
</View>

Native modules (which on iOS implement the <RCTBridgeModule> protocol) can then be called from JavaScript:

var React = require('react-native');
var {
  NativeModules,
  Text
} = React;

var Message = React.createClass({
  getInitialState() {
    return {
      text: 'Goodbye World.'
    };
  },
  componentDidMount() {
    NativeModules.MyCustomModule.processString(this.state.text, (text) => {
      this.setState({text});
    });
  },
  render: function() {
    return (
      <Text>{this.state.text}</Text>
    );
  }
});

Community comments

Interesting read - Just sharing my use case! by John Matthews

One of my friends was working on a React project with ES5, but they had to revamp their ES5 components into ES6 components. They faced many problems at each point. Here's an interesting blog covering steps to convert ES5 React components to ES6 components - bit.ly/233Q9Zw

Nice Post For Beginners by Dipti Arora

Nice post to read as an app developer. I have developed a number of apps with the help of various cross-platform mobile app development platforms. I want to suggest to the author, as well as to all developers, another app development platform: Configure.IT. Just try this one, you will not regret it. This is my personal view; I have developed plenty of apps with this platform, so I recommend this tool to everyone. Review this platform once and I am sure you will forget all the other platforms - the same happened to me.

Great intro article! by Mark Webster

Any thoughts on embedding tags (id, class, name, text) in the HTML for use with mobile automation testing (such as Appium, etc.)?

Re: Nice Post For Beginners by Mohsin Khan
Hey, thanks for your idea - it's good. But personally I prefer open source tools, which involve no cost.

Very Informative by Mohsin Khan

Hello Bonnie, I am Mohsin Khan, a senior Android developer from India. Thanks for such a nice article about React Native. Before this article, React seemed horrible to me, but now I find it pretty good and I will learn it very soon. Please help me with learning materials. Thanks.

A totally demented programming language by Richard Eng

Of course, the major problem with using React Native is that you have to use a totally demented programming language. I have many, many friends and colleagues who share my view that JavaScript should be avoided at all costs. I've done web programming for a decade now, and I've managed to largely avoid using JavaScript (it's always a tiny fraction of my codebase). Sorry, but if I'm going to write cross-platform mobile apps, I'm going to use something like Java (with Codename One or Multi-OS Engine), or C# (with Xamarin), or Ruby (with RubyMotion). I would advise all sane developers to do the same.
https://www.infoq.com/articles/react-native-introduction?useSponsorshipSuggestions=true/
Hello,

I know StateT is exactly aimed at dealing with a state and an inner monad, but I have an example in which I have to mix State and IO and in which I didn't get to an elegant solution using StateT.

I have a higher-order function which gets some State processing functions as input, makes some internal operations with IO, and has to return a State as output. My (ugly) function interface is

   netlist :: DT.Traversable f =>
      (State s (S HDPrimSignal) -> State s v)  ->   -- new
      (State s (Type,v) -> S v -> State s ())  ->   -- define
      State s (f HDPrimSignal)                 ->   -- the graph
      StateT s IO ()

The returned type is a StateT, and the only way in which I successfully managed to internally work with both State and StateT is converting from the former to the latter using this function (not elegant at all):

   state2StateT :: Monad m => State s a -> StateT s m a
   state2StateT f = StateT (return . runState f)

I tried avoiding state2StateT by changing the interface to

   netlist :: DT.Traversable f =>
      (State s (S HDPrimSignal) -> State s v)  ->   -- new
      (State s (Type,v) -> S v -> State s ())  ->   -- define
      State s (f HDPrimSignal)                 ->   -- the graph
      State s (IO ())

but the function ended up being even uglier, and I had to be careful about all the internal IO actions being executed (it is easy to forget about it). Let me show a (quite stupid) example:

   myState :: State () (IO ())
   myState = (return $ putStrLn "first line") >>
             (return $ putStrLn "second line")

   > eval myState ()
   second line

The first line is obviously lost.

Here is the full code of my function (many type definitions are missing, but I hope it is understandable anyway):

   import qualified Data.Traversable as DT (Traversable(mapM))
   import qualified Control.Monad.Trans
   import Language.Haskell.TH (Type)

   netlist :: DT.Traversable f =>
      (State s (S HDPrimSignal) -> State s v)  ->   -- new
      (State s (Type,v) -> S v -> State s ())  ->   -- define
      State s (f HDPrimSignal)                 ->   -- the graph
      StateT s IO ()
   -- Generates a netlist given:
   --  new: generates the new (and normally unique) tag of every node given
   --       the iteration state, which is updated as well.
   --  define: given the tag of a node, current iteration state, its type,
   --          and the tag of its children, generates the netlist of that
   --          node, updating the iteration state.
   --  pSignals: the graph itself, a traversable collection of root
   --            signals including the initial state of the iteration.
   -- It returns the final iteration state and the tags of outputs
   -- (root primitive signals)
new (and normally unique) tag of every node given -- the iteration state which is updated as well. -- define: given the tag of a node, -- current iteration state, its type, and the tag of its children, -- generates the netlist of that node, updating the iteration state -- pSignals: the graph itself, a traversable collection of root -- signals including the initial state of the iteration -- It returns the final iteration state and the tags of outputs -- (root primitivesignals) netlist new define pSignals = do f <- state2StateT pSignals tab <- lift table let -- gather :: State s HDPrimSignal -> StateT s IO v gather sm = do HDPrimSignal t node <- sm visited <- lift (find tab node) case visited of Just v -> return v Nothing -> do let sP = deref node v' <- state2StateT (new (return sP)) lift (extend tab node v') sV <- DT.mapM (gather.return) sP state2StateT (define (return (t,v')) sV) return v' in DT.mapM (gather.return) f >> return() ---- just in case it helps table :: IO (Table a b) find :: Table a b -> Ref a -> IO (Maybe b) extend :: Table a b -> Ref a -> b -> IO () ---- Maybe is asking too much but would anyone be able to provide a more elegant netlist function which ... option a) returns StateT but doesn't make use of state2StateT? or option b) returns State but doesnt end up being messy? Thanks in advance, Alfonso Acosta
http://www.haskell.org/pipermail/haskell-cafe/2007-February/022915.html
SYNOPSIS

  use mro;        # enables next::method and friends globally
  use mro 'dfs';  # enable DFS MRO for this class (Perl default)
  use mro 'c3';   # enable C3 MRO for this class

DESCRIPTION

The "mro" namespace provides several utilities for dealing with method resolution order and method caching in general. These interfaces are only available in Perl 5.9.5 and higher. See MRO::Compat on CPAN for a mostly forwards compatible implementation for older Perls.

OVERVIEW

[...]

The C3 MRO

What is C3? C3 is the name of an algorithm which aims to provide a sane method resolution order under multiple inheritance. It was first introduced in the language Dylan (see links in the "SEE ALSO" section), and then later adopted as the preferred [...]. Consider, for instance, the classic diamond inheritance pattern:

     <A>
    /   \
  <B>   <C>
    \   /
     <D>

The standard Perl 5 MRO would be (D, B, A, C). The result is that A appears before C, even though C is the subclass of A. The C3 MRO [...]

mro::set_mro($classname, $type)

Sets the MRO of the given class to the $type argument (either "c3" or "dfs").

mro::get_mro($classname)

Returns the MRO of the given class (either "c3" or "dfs").

mro::get_isarev($classname)

[...]

mro::is_universal($classname)

[...]

mro::invalidate_all_method_caches()

Increments "PL_sub_generation", which invalidates method caching in all packages.

[...] occurrence, but it can happen if someone does something like "undef %PkgName::"), the number will be reset to either 0 or 1, depending on how completely the package was wiped out.

next::method

[...]

1. First, it determines the linearized C3 MRO of the object or class it is being called on.

2. Then, it determines the class and method name of the context it was invoked from.

3. Finally, it searches down the C3 MRO list until it reaches the contextually enclosing class, then searches further down the MRO list for the next method with the same name as the contextually enclosing method.

Failure to find a next method will result in an exception being thrown (see below for alternatives).
  *Foo::foo = sub { (shift)->next::method(@_) };

The problem exists because the anonymous subroutine being assigned to [...].

next::can

This is similar to "next::method", but it just returns either a code reference or "undef", to indicate that no further methods of this name exist.

maybe::next::method

In simple cases, it is equivalent to:

  $self->next::method(@_) if $self->next::can;

But there are some cases where only this solution works (like "goto &maybe::next::method").

SEE ALSO

The original Dylan paper
The prototype Perl 6 Object Model uses C3
Parrot now uses C3
Python 2.3 MRO related links
C3 for TinyCLOS
Class::C3

AUTHOR

[...]
http://www.linux-directory.com/man3/mro.shtml
Hello, I am just wondering what the best practice is for when to use static classes (by static class, I mean a class which has only static attributes and functions). If you are creating more than one independent object of a particular class, then obviously this should not be static, because each object would be the same. But what about the case when you know that you will only ever need one instance of a class? On its own, does this mean that you should create it as a static class? Personally, I use a static class when I want its member attributes and functions to be available globally, which I think is fine. However, I am not sure about the case when I know that only one object will be created - should this be a static class or not? Thanks.

Originally Posted by karnavor: I am just wondering what the best practice is for when to use static classes

The general consensus is: as little as possible. The idea is that globally shared data is bad because it induces dependencies in designs. Nevertheless, there's a design pattern to handle the one-global-object situation, and it's called the Singleton. Many have taken the very existence of Singleton as a cue to use it extensively, but design pattern or not, global data is still global data, and it's better to avoid it. On the other hand, if you have a bunch of stateless global functions that belong together logically, of course you can make them static and keep them in a class. That's what you would do in Java (see for example the Math class). In C++ you can just as well use free functions in a namespace.

Last edited by nuzzle; May 3rd, 2013 at 01:52 AM.
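The "stateless global functions grouped in a class" option mentioned in the reply can be sketched like this in Java. The class and method names are made up for illustration; java.lang.Math works the same way:

```java
// A stateless utility class in the style of java.lang.Math:
// no instances, only static functions that belong together logically.
public final class Geometry {
    // Private constructor: this class is never meant to be instantiated.
    private Geometry() {}

    public static double circleArea(double radius) {
        return Math.PI * radius * radius;
    }

    public static double hypotenuse(double a, double b) {
        return Math.sqrt(a * a + b * b);
    }

    public static void main(String[] args) {
        System.out.println(Geometry.hypotenuse(3, 4)); // prints "5.0"
    }
}
```

Because the class holds no mutable state, the usual objections to global data do not apply; in C++, the same functions would simply live in a namespace.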
http://forums.codeguru.com/showthread.php?536733-how-to-learn-Codeing-Especially-C-HTML-CSS&goto=nextoldest
Hey guys, this is my first post. I used to use these forums a long time ago when I fiddled around with C in high school. Anyways, after a 4-year stint in the Marine Corps, I'm going to college for a degree in engineering and physics at Georgia Tech. I'm starting in August. I've been advised by some contacts that knowledge of a programming language is good to have on a resume if I get into research and development in biomedical engineering, which is my goal. So... I'm working on C++ right now, and I'm getting into the "for" and "while" loops. I'm using a copy of C++ for Dummies that I managed to scrounge up for free, and I'm running the Code::Blocks compiler on Windows 7. Here is the block of code that I'm working with, to make it easy for you to see what I'm referencing. All of the comments are ones that I did on my own to kind of help me talk myself through the code.

Code:
#include <iostream>
#include <cstdio>
#include <cstdlib>

int main(int nNumberofArgs, char* pszArgs[])
{
    // input the loop count
    int loopcount; // initiating the variable named loopcount, integer type
    std::cout << "Enter loopCount: ";
    std::cin >> loopcount; // takes input from user and puts it into loopcount

    for (int i = 1; i <= loopcount; i++)
    // it's saying the integer "i" is equal to 1. When it's compared to
    // loopcount, run the loop as long as it's less than or equal to
    // loopcount. the i++ means to add 1 to "i" at the start of each
    // iteration of the loop
    {
        std::cout << "We've finished " << i << " loops\n";
    }
    /* the cout statement is saying print the text, then print integer "i",
       then print "loops" followed by a newline. then repeat the for loop. */

    system("PAUSE");
    return 0;
}

1. When I searched online for cstdlib, I got a list of functions and the explanation that it's for memory allocation and lists and sorts. Do I even need this in the program I'm using? The list of functions didn't show anything I use in that block of code.

2.
cstdio looks like something for input/output, but online references say it's for formatted I/O functions. Can someone elaborate for an idiot? Once again... I don't see how I need it in this program.

3. ...int main(). I don't understand the (int nNumberofArgs, char* pszArgs[]) part. I searched for the string on Google and got limited results. I was thinking it was maybe something to do with declaring an array or matrix with the [] brackets. Could someone explain this to me in a basic way? No need to go too deep. And once again... is this even necessary in this program?

4. And finally... I took out the book's original content to include "using namespace std". I read somewhere online that this is inefficient, and so instead I did the other method above in the code. Am I looking at this in the right way, or is there a potential problem with my styling of the code? The book isn't explaining these things it includes at all. I'm about to go ahead and get a different book by a new author. Until then, could I get some help? By the way, remember I'm a 1-week C++ veteran, so don't give me a stroke with the technical stuff yet! Thanks guys! Semper Fi.
https://cboard.cprogramming.com/cplusplus-programming/126882-headers-int-main.html
/*
 * @(#)FileLock.java	1.8 03/12/19
 *
 * Copyright 2004 Sun Microsystems, Inc. All rights reserved.
 * SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

package java.nio.channels;

import java.io.IOException;


/**
 * A token representing a lock on a region of a file.
 *
 * <p> A file-lock object is created each time a lock is acquired on a file via
 * one of the {@link FileChannel#lock(long,long,boolean) lock} or {@link
 * FileChannel#tryLock(long,long,boolean) tryLock} methods of the {@link
 * FileChannel} class.
 *
 * <p> A file-lock object is initially valid.  It remains valid until the lock
 * is released by invoking the {@link #release release} method, by closing the
 * channel that was used to acquire it, or by the termination of the Java
 * virtual machine, whichever comes first.  The validity of a lock may be
 * tested by invoking its {@link #isValid isValid} method.
 *
 * <p> A file lock is either <i>exclusive</i> or <i>shared</i>.  A shared lock
 * prevents other concurrently-running programs from acquiring an overlapping
 * exclusive lock, but does allow them to acquire overlapping shared locks.  An
 * exclusive lock prevents other programs from acquiring an overlapping lock of
 * either type.  Once it is released, a lock has no further effect on the locks
 * that may be acquired by other programs.
 *
 * <p> Whether a lock is exclusive or shared may be determined by invoking its
 * {@link #isShared isShared} method.  Some platforms do not support shared
 * locks, in which case a request for a shared lock is automatically converted
 * into a request for an exclusive lock.
 *
 * <p> The locks held on a particular file by a single Java virtual machine do
 * not overlap.  The {@link #overlaps overlaps} method may be used to test
 * whether a candidate lock range overlaps an existing lock.
 *
 * <p> A file-lock object records the file channel upon whose file the lock is
 * held, the type and validity of the lock, and the position and size of the
 * locked region.  Only the validity of a lock is subject to change over time;
 * all other aspects of a lock's state are immutable.
 *
 * <p> File locks are held on behalf of the entire Java virtual machine.
 * They are not suitable for controlling access to a file by multiple
 * threads within the same virtual machine.
 *
 * <p> File-lock objects are safe for use by multiple concurrent threads.
 *
 *
 * <a name="pdep">
 * <h4> Platform dependencies </h4>
 *
 * <p> This file-locking API is intended to map directly to the native locking
 * facility of the underlying operating system.  Thus the locks held on a file
 * should be visible to all programs that have access to the file, regardless
 * of the language in which those programs are written.
 *
 * <p> Whether or not a lock actually prevents another program from accessing
 * the content of the locked region is system-dependent and therefore
 * unspecified.  The native file-locking facilities of some systems are merely
 * <i>advisory</i>, meaning that programs must cooperatively observe a known
 * locking protocol in order to guarantee data integrity.  On other systems
 * native file locks are <i>mandatory</i>, meaning that if one program locks a
 * region of a file then other programs are actually prevented from accessing
 * that region in a way that would violate the lock.  On yet other systems,
 * whether native file locks are advisory or mandatory is configurable on a
 * per-file basis.  To ensure consistent and correct behavior across platforms,
 * it is strongly recommended that the locks provided by this API be used as if
 * they were advisory locks.
 *
 * <p> On some systems, acquiring a mandatory lock on a region of a file
 * prevents that region from being {@link java.nio.channels.FileChannel#map
 * </code>mapped into memory<code>}, and vice versa.  Programs that combine
 * locking and mapping should be prepared for this combination to fail.
 *
 * <p> On some systems, closing a channel releases all locks held by the Java
 * virtual machine on the underlying file regardless of whether the locks were
 * acquired via that channel or via another channel open on the same file.  It
 * is strongly recommended that, within a program, a unique channel be used to
 * acquire all locks on any given file.
 *
 * <p> Some network filesystems permit file locking to be used with
 * memory-mapped files only when the locked regions are page-aligned and a
 * whole multiple of the underlying hardware's page size.  Some network
 * filesystems do not implement file locks on regions that extend past a
 * certain position, often 2<sup>30</sup> or 2<sup>31</sup>.  In general, great
 * care should be taken when locking files that reside on network filesystems.
 *
 *
 * @author Mark Reinhold
 * @author JSR-51 Expert Group
 * @version 1.8, 03/12/19
 * @since 1.4
 */

public abstract class FileLock {

    private final FileChannel channel;
    private final long position;
    private final long size;
    private final boolean shared;

    /**
     * Initializes a new instance of this class.  </p>
     *
     * @param  channel
     *         The file channel upon whose file this lock is held
     *
     * @param  position
     *         The position within the file at which the locked region starts;
     *         must be non-negative
     *
     * @param  size
     *         The size of the locked region; must be non-negative, and the sum
     *         <tt>position</tt> + <tt>size</tt> must be non-negative
     *
     * @param  shared
     *         <tt>true</tt> if this lock is shared,
     *         <tt>false</tt> if it is exclusive
     *
     * @throws IllegalArgumentException
     *         If the preconditions on the parameters do not hold
     */
    protected FileLock(FileChannel channel,
                       long position, long size, boolean shared)
    {
        if (position < 0)
            throw new IllegalArgumentException("Negative position");
        if (size < 0)
            throw new IllegalArgumentException("Negative size");
        if (position + size < 0)
            throw new IllegalArgumentException("Negative position + size");
        this.channel = channel;
        this.position = position;
        this.size = size;
        this.shared = shared;
    }

    /**
     * Returns the file channel upon whose file this lock is held.  </p>
     *
     * @return  The file channel
     */
    public final FileChannel channel() {
        return channel;
    }

    /**
     * Returns the position within the file of the first byte of the locked
     * region.
     *
     * <p> A locked region need not be contained within, or even overlap, the
     * actual underlying file, so the value returned by this method may exceed
     * the file's current size.  </p>
     *
     * @return  The position
     */
    public final long position() {
        return position;
    }

    /**
     * Returns the size of the locked region in bytes.
     *
     * <p> A locked region need not be contained within, or even overlap, the
     * actual underlying file, so the value returned by this method may exceed
     * the file's current size.  </p>
     *
     * @return  The size of the locked region
     */
    public final long size() {
        return size;
    }

    /**
     * Tells whether this lock is shared.  </p>
     *
     * @return  <tt>true</tt> if lock is shared,
     *          <tt>false</tt> if it is exclusive
     */
    public final boolean isShared() {
        return shared;
    }

    /**
     * Tells whether or not this lock overlaps the given lock range.  </p>
     *
     * @return  <tt>true</tt> if, and only if, this lock and the given lock
     *          range overlap by at least one byte
     */
    public final boolean overlaps(long position, long size) {
        if (position + size <= this.position)
            return false;               // That is below this
        if (this.position + this.size <= position)
            return false;               // This is below that
        return true;
    }

    /**
     * Tells whether or not this lock is valid.
     *
     * <p> A lock object remains valid until it is released or the associated
     * file channel is closed, whichever comes first.  </p>
     *
     * @return  <tt>true</tt> if, and only if, this lock is valid
     */
    public abstract boolean isValid();

    /**
     * Releases this lock.
     *
     * <p> If this lock object is valid then invoking this method releases the
     * lock and renders the object invalid.  If this lock object is invalid
     * then invoking this method has no effect.  </p>
     *
     * @throws  ClosedChannelException
     *          If the channel that was used to acquire this lock
     *          is no longer open
     *
     * @throws  IOException
     *          If an I/O error occurs
     */
    public abstract void release() throws IOException;

    /**
     * Returns a string describing the range, type, and validity of this lock.
     *
     * @return  A descriptive string
     */
    public final String toString() {
        return (this.getClass().getName()
                + "[" + position
                + ":" + size
                + " " + (shared ? "shared" : "exclusive")
                + " " + (isValid() ? "valid" : "invalid")
                + "]");
    }

}

Java API By Example, From Geeks To Geeks.
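A short, hypothetical usage sketch of the class above (the file name demo.lock and the lock region are my own choices for illustration; tryLock() returns null rather than blocking when another program already holds an overlapping lock):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class FileLockDemo {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile("demo.lock", "rw");
             FileChannel channel = file.getChannel()) {
            // Try to lock the first 100 bytes exclusively; unlike lock(),
            // tryLock() never blocks -- it returns null on failure.
            FileLock lock = channel.tryLock(0, 100, /* shared = */ false);
            if (lock != null) {
                System.out.println("valid: " + lock.isValid());
                System.out.println("shared: " + lock.isShared());
                // The candidate range [50,150) overlaps our region [0,100).
                System.out.println("overlaps 50..150: " + lock.overlaps(50, 100));
                lock.release();   // lock becomes invalid from here on
            }
        }
    }
}
```

Note that within a single JVM the lock is held on behalf of the whole virtual machine, so this pattern coordinates separate processes, not threads.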
http://kickjava.com/src/java/nio/channels/FileLock.java.htm
CC-MAIN-2017-30
refinedweb
1,482
59.84
compress, zlibVersion, deflateInit, deflate, deflateEnd, inflateInit, inflate, inflateEnd, deflateInit2, deflateSetDictionary, deflateCopy, deflateReset, deflateParams, deflateTune, deflateBound, deflatePrime, deflateSetHeader, inflateInit2, inflateSetDictionary, inflateSync, inflateCopy, inflateReset, inflatePrime, inflateGetHeader, inflateBackInit, inflateBack, inflateBackEnd, zlibCompileFlags, compress2, compressBound, uncompress, gzopen, gzdopen, gzsetparams, gzread, gzwrite, gzprintf, gzputs, gzgets, gzputc, gzgetc, gzungetc, gzflush, gzseek, gzrewind, gztell, gzeof, gzdirect, gzclose, gzerror, gzclearerr, adler32, adler32_combine, crc32, crc32_combine— #include <zlib.h> Basic functions const char * zlibVersion(void); int deflateInit(z_streamp strm, int level); int deflate(z_streamp strm, int flush); int deflateEnd(z_streamp strm); int inflateInit(z_streamp strm); int inflate(z_streamp strm, int flush); int inflateEnd(z_streamp strm); Advanced functions int deflateInit2(z_streamp strm, int level, int method, int windowBits, int memLevel, int strategy); int deflateSetDictionary(z_streamp strm, const Bytef *dictionary, uInt dictLength); int deflateCopy(z_streamp dest, z_streamp source); int deflateReset(z_streamp strm); int deflateParams(z_streamp strm, int level, int strategy); int deflateTune(z_streamp strm, int good_length, int max_lazy, int nice_length, int max_chain); uLong deflateBound(z_streamp strm, uLong sourceLen); int deflatePrime(z_streamp strm, int bits, int value); int deflateSetHeader(z_streamp strm, gz_headerp head); int inflateInit2(z_streamp strm, int windowBits); int inflateSetDictionary(z_streamp strm, const Bytef *dictionary, uInt dictLength); int inflateSync(z_streamp strm); int inflateCopy(z_streamp dst, z_streamp source); int inflateReset(z_streamp strm); int inflatePrime(z_streamp strm, int bits, int value); int inflateGetHeader(z_streamp strm, gz_headerp head); int inflateBackInit(z_stream *strm, int windowBits, unsigned char FAR 
*window); int inflateBack(z_stream *strm, in_func in, void FAR *in_desc, out_func out, void FAR *out_desc); int inflateBackEnd(z_stream *strm); uLong zlibCompileFlags(void); Utility functions typedef voidp gzFile; int compress(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen); int compress2(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen, int level); uLong compressBound(uLong sourceLen); int uncompress(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen); gzFile gzopen(const char *path, const char *mode); gzFile gzdopen(int fd, const char *mode); int gzsetparams(gzFile file, int level, int strategy); int gzread(gzFile file, voidp buf, unsigned len); int gzwrite(gzFile file, voidpc buf, unsigned len); int gzprintf(gzFile file, const char *format, ...); int gzputs(gzFile file, const char *s); char * gzgets(gzFile file, char *buf, int len); int gzputc(gzFile file, int c); int gzgetc(gzFile file); int gzungetc(int c, gzFile file); int gzflush(gzFile file, int flush); z_off_t gzseek(gzFile file, z_off_t offset, int whence); int gzrewind(gzFile file); z_off_t gztell(gzFile file); int gzeof(gzFile file); int gzdirect(gzFile file); int gzclose(gzFile file); const char * gzerror(gzFile file, int *errnum); void gzclearerr(gzFile file); Checksum functions uLong adler32(uLong adler, const Bytef *buf, uInt len); uLong adler32_combine(uLong adler1, uLong adler2, z_off_t len2); uLong crc32(uLong crc, const Bytef *buf, uInt len); uLong crc32_combine(uLong crc1, uLong crc2, z_off_t len2); zlib is a general purpose compression library, version 1.2. It supports reading and writing files in gzip (.gz) format with an interface similar to that of stdio(3), using the functions that start with “gz”. The gzip format is different from the zlib format. gzip is a gzip wrapper, documented in RFC 1952, wrapped around a deflate stream. This library can optionally read and write gzip streams in memory as well; the decoder checks the consistency of the compressed data, so the library should never crash, even in the case of corrupted input. The functions within the library are divided into the following sections: basic functions, advanced functions, utility functions, and checksum functions. const char * zlibVersion(void); The zlibVersion() function returns the version of the library at run time; the application can compare it with ZLIB_VERSION for consistency. int deflateInit(z_streamp strm, int level); The deflateInit() function initializes the internal stream state for compression and returns Z_OK if successful. int deflate(z_streamp strm, int flush); The deflate() function compresses as much data as possible; it may accumulate input before producing output, in order to maximise compression. To compress everything in a single step, the output buffer must be at least the value returned by deflateBound() (see below). If deflate() does not return Z_STREAM_END, then it must be called again as described above. deflate() sets strm->adler to the Adler-32 checksum of all input read so far (that is, total_in bytes).
Note that Z_BUF_ERROR is not fatal, and deflate() can be called again with more input and more output space to continue processing. deflateEnd(z_streamp strm); All dynamically allocated data structures for this stream are freed. This function discards any unprocessed input and does not flush any pending output. deflateEnd() returns Z_OK if successful, Z_STREAM_ERROR if the stream state was inconsistent, Z_DATA_ERROR if the stream was freed prematurely (some input or output was discarded). In the error case, msg may be set but then points to a static string (which must not be deallocated). inflateInit(z_streamp strm); inflateInit() function initializes the internal stream state for decompression. The fields next_in, avail_in, zalloc, zfree, and opaque must be initialized before by the caller. If next_in is not Z_NULLand avail_in is large enough (the exact value depends on the compression method), inflateInit() determines the compression method from the zlibheader and allocates all data structures accordingly; otherwise the allocation will be deferred to the first call to inflate(). If zalloc and zfree are set to Z_NULL, inflateInit() updates them to use default allocation functions. inflateInit() returns Z_OK if successful, to inflate(). Provide more output starting at next_out and update next_out and avail_out accordingly. inflate() provides as much output as possible, until there is no more input data or no more space in the output buffer (see below about the flush parameter). Before the call. inflate() should normally be called until it returns Z_STREAM_END or an error. However if all decompression is to be performed in a single step (a single.. Note that Z_BUF_ERROR is not fatal, and inflate() can be called again with more input and more output space to continue compressing. If Z_DATA_ERROR is returned, the application may then call inflateSync() to look for a good compression block if a partial recovery of the data is desired. 
inflateEnd(z_streamp strm); inflateEnd() returns Z_OK if successful, or Z_STREAM_ERROR if the stream state was inconsistent. In the error case, msg may be set but then points to a static string (which must not be deallocated). deflateInit2. windowBits can also be -8..-15 for raw deflate. In this case, -windowBits determines the window size. deflate() will then generate raw deflate data with no zlib header or trailer, and will not compute an Adler-32. Z_RLE is designed to be almost as fast as Z_HUFFMAN_ONLY, but gives successful, Z_MEM_ERROR if there was not enough memory, Z_STREAM_ERROR if a parameter is invalid (such as an invalid method). msg is set to null if there is no error message. deflateInit2() does not perform any compression: this will be done by deflate(). deflateSetDictionary(z_streamp strm, const Bytef *dictionary, uInt dictLength); Initializes the compression dictionary from the given byte sequence without producing any compressed output. This function must be called immediately after deflateInit(), deflateInit2(), or deflateReset(), before any call successful,(). deflateCopy(z_streamp dest, z_streamp source); The deflateCopy() function successful, Z_MEM_ERROR if there was not enough memory, Z_STREAM_ERROR if the source stream state was inconsistent (such as zalloc being NULL). msg is left unchanged in both source and destination. deflateReset successful, or Z_STREAM_ERROR if the source stream state was inconsistent (such as zalloc or state being NULL). deflateParams(z_streamp strm, int level, int strategy); The deflateParams() function dynamically updates to deflate(). Before the call to deflateParams(), the stream state must be set as for a call to deflate(), since the currently available input may have to be compressed and flushed. In particular, strm->avail_out must be non-zero. 
deflateParams() returns Z_OK if successful, Z_STREAM_ERROR if the source stream state was inconsistent or if a parameter was invalid, or Z_BUF_ERROR if strm->avail_out was zero.. deflateBound(z_streamp strm, uLong sourceLen) deflateBound() returns an upper bound on the compressed size after deflation of sourceLen bytes. It must be called after deflateInit() or deflateInit2(). This would be used to allocate an output buffer for deflation in a single pass, and so would be called before deflate(). successful, or Z_STREAM_ERROR if the source stream state was inconsistent.(1) do not support header CRCs, successful, or Z_STREAM_ERROR if the source stream state was inconsistent. inflateInit232. inflateInit2() returns Z_OK if successful, Z_MEM_ERROR if there was not enough memory, Z_STREAM_ERROR if a parameter is invalid (such as a null strm)..) inflateSetDictionary(z_streamp strm, const Bytef *dictionary, uInt dictLength); Initializes the decompression dictionary from the given uncompressed byte sequence. This function must be called immediately after a call to inflate() if that call returned Z_NEED_DICT. The dictionary chosen by the compressor can be determined from the Adler-32 value returned by that call to inflate(). The compressor and decompressor must use exactly the same dictionary (see deflateSetDictionary()). For raw inflate, this function can be called immediately after inflateInit2() or inflateReset() and before any call to inflate() to set the dictionary. The application must ensure that the dictionary that was used for compression is provided. inflateSetDictionary() returns Z_OK if successful, Z_STREAM_ERROR if a parameter is invalid (such as NULL dictionary) or the stream state is inconsistent, Z_DATA_ERROR if the given dictionary doesn't match the expected one (incorrect Adler-32 value). inflateSetDictionary() does not perform any decompression: this will be done by subsequent calls of inflate(). 
inflateSync value of total_in which indicates where valid compressed data was found. In the error case, the application may repeatedly call inflateSync(), providing more input each time, until success or end of the input data. NULL). msg is left unchanged in both source and dest. inflateReset successful, or Z_STREAM_ERROR if the source stream state was inconsistent (such as zalloc or state being NULL). inflatePrime(z_stream. inflatePrime() returns Z_OK if successful, or Z_STREAM_ERROR if the source stream state was inconsistent. successful, or Z_STREAM_ERROR if the source stream state was inconsistent. inflateBackInit(z_stream *strm, int windowBits, unsigned char FAR *window). inflateBack(z_stream *strm, in_func in, void FAR *in_desc, out_func out, void FAR *out_desc) inflateBack() does a raw inflate with a single call using a call-back interface for input and output. This is more efficient than inflate() for file I/O applications in that it avoids copying between the output and the sliding window by simply making the window itself the output buffer. This function normal behavior of inflate(), which expects either a zlib or gzip() is not Z_NULL, then the Z_BUF_ERROR was due to out() returning non-zero. ( in() will always be called before out(), so strm->next_in is assured to be defined if out() returns non-zero.) Note that inflateBack() cannot return Z_OK. inflateBackEnd(z_stream *strm) All memory allocated by inflateBackInit() is freed. inflateBackEnd() returns Z_OK on success, or Z_STREAM_ERROR if the stream state was inconsistent. zlibCompileFlags(void) This function returns flags indicating compile-time options. Type sizes, two bits each: Compiler, assembler, and debug options: One-time table building (smaller code, but not thread-safe if true): Library content (indicates missing functionality): Operation variations (changes in library functionality): The sprintf variant used by gzprintf (zero is best): gzprintf() not secure! 
Remainder: compress(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen); The compress() function buffer. This function can be used to compress a whole file at once if the input file is mmap'ed. compress() returns Z_OK if successful, Z_MEM_ERROR if there was not enough memory, or Z_BUF_ERROR if there was not enough room in the output buffer. compress2(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen, int level); The compress2() function buffer. compress2() returns Z_OK if successful, Z_MEM_ERROR if there was not enough memory, Z_BUF_ERROR if there was not enough room in the output buffer, or Z_STREAM_ERROR if the level parameter is invalid. compressBound(uLong sourceLen) compressBound() returns an upper bound on the compressed size after compress() or compress2() on sourceLen bytes. It would be used before a compress() or compress2() call to allocate the destination buffer. uncompress(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen); The uncompress() function successful, Z_MEM_ERROR if there was not enough memory, Z_BUF_ERROR if there was not enough room in the output buffer, or Z_DATA_ERROR if the input data was corrupted or incomplete. gzopen(const char *path, const char *mode); The gzopen() function opens a gzip (.gz) file for reading or writing. The mode parameter is as in fopen(3) (“rb” or “wb”) but can also include a compression level (wb9) or a strategy: ‘f’ for filtered data, as in “wb6f”; ‘h’ for Huffman only compression, as in “wb1h”, or ‘R’ for run-length encoding as in “wb1). gzdopen(int fd, const char *mode); The gzdopen() function associates a gzFile with the file descriptor fd. File descriptors are obtained from calls like open(2), dup(2), creat(3), pipe(2), or fileno(3) (if the file has been previously opened with fopen(3)). The mode parameter is as in gzopen(). The next call. 
gzsetparams(gzFile file, int level, int strategy); The gzsetparams() function dynamically updates the compression level or strategy. See the description of deflateInit2() for the meaning of these parameters. gzsetparams() returns Z_OK if successful, or Z_STREAM_ERROR if the file was not opened for writing. gzread(gzFile file, voidp buf, unsigned len); The gzread() function reads the given number of uncompressed bytes from the compressed file. gzread() returns the number of uncompressed bytes actually read (0 for end of file, -1 for error). gzwrite(gzFile file, voidpc buf, unsigned len); The gzwrite() function writes the given number of uncompressed bytes into the compressed file. gzwrite() returns the number of uncompressed bytes actually written (0 in case of error). gzprintf(gzFile file, const char *format, ...); The gzprintf() function converts, formats, and writes the args to the compressed file under control of the format string, as in fprintf(3). gzprintf() returns the number of uncompressed bytes actually written (0 in case of error). The number of uncompressed bytes written is limited to 4095. The caller should make sure that this limit is not exceeded. gzputs(gzFile file, const char *s); The gzputs() function writes the given null-terminated string to the compressed file, excluding the terminating null character. gzputs() returns the number of characters written, or -1 in case of error. gzgets(gzFile file, char *buf, int len); The gzgets() function reads bytes from the compressed file until len-1 characters are read, a newline character is read and transferred to buf, or an end-of-file condition is encountered. The string is then terminated with a null character. gzgets() returns buf, or Z_NULL in case of error. gzputc(gzFile file, int c); The gzputc() function writes c, converted to an unsigned char, into the compressed file. gzputc() returns the value that was written, or -1 in case of error. gzgetc(gzFile file); The gzgetc() function reads one byte from the compressed file. gzgetc() returns this byte or -1 in case of end of file or error. gzungetc(int c, gzFile file); The gzungetc() function pushes c back onto the stream for file, to be read again on the next read operation. gzungetc() returns the character pushed, or -1 on failure. gzflush(gzFile file, int flush); The gzflush() function flushes all pending output into the compressed file; the parameter flush is as in deflate(). gzflush() should be called only when strictly necessary, because it can degrade compression. gzrewind(gzFile file); The gzrewind() function rewinds the given file. This function is supported only for reading. gzrewind(file) is equivalent to (int)gzseek(file, 0L, SEEK_SET).
gztell(gzFile file); The gztell() function returns the starting position for the next gzread() or gzwrite() on the given compressed file. This position represents a number of bytes in the uncompressed data stream. gztell(file) is equivalent to gzseek(file, 0L, SEEK_CUR). gzeof(gzFile file); The gzeof() function returns 1 when EOF has previously been detected reading the given input stream, otherwise zero. gzdirect(gzFile file); The gzdirect() function returns 1 if the file is being read directly without compression; otherwise it returns 0. gzclose(gzFile file); The gzclose() function flushes all pending output if necessary, closes the compressed file and deallocates all the (de)compression state. The return value is the zlib error number (see function gzerror() below). gzerror(gzFile file, int *errnum); The gzerror() function returns the error message for the last error which occurred on the given compressed file. errnum is set to the zlib error number. If an error occurred in the file system and not in the compression library, errnum is set to Z_ERRNO and the application may consult errno to get the exact error code. gzclearerr(gzFile file); The gzclearerr() function clears the error and end-of-file flags for file, analogous to the clearerr() function in stdio. This is useful for continuing to read a gzip file that is being written concurrently. adler32(uLong adler, const Bytef *buf, uInt len); The adler32() function updates a running Adler-32 checksum with the bytes buf[0..len-1] and returns the updated checksum; if buf is Z_NULL, it returns the required initial value for the checksum. adler32_combine(uLong adler1, uLong adler2, z_off_t len2) The adler32_combine() function combines two Adler-32 checksums into one. For two sequences of bytes, seq1 and seq2 with lengths len1 and len2, Adler-32 checksums are calculated for each, adler1 and adler2. adler32_combine() returns the Adler-32 checksum of seq1 and seq2 concatenated, requiring only adler1, adler2, and len2.
crc32(uLong crc, const Bytef *buf, uInt len); The crc32() function updates a running CRC-32 with the bytes buf[0..len-1] and returns the updated CRC-32; if buf is Z_NULL, it returns the required initial value for the CRC. crc32_combine(uLong crc1, uLong crc2, z_off_t len2) The crc32_combine() function combines two CRC-32 check values into one. For two sequences of bytes, seq1 and seq2 with lengths len1 and len2, CRC-32 check values are calculated for each, crc1 and crc2. crc32_combine() returns the CRC-32 check value of seq1 and seq2 concatenated, requiring only crc1, crc2, and len2. struct internal_state; typedef struct z_stream_s { Bytef *next_in; /* next input byte */ uInt avail_in; /* number of bytes available at next_in */ off_t total_in; /* total nb of input bytes read so far */ Bytef *next_out; /* next output byte should be put there */ uInt avail_out; /* remaining free space at next_out */ off_t total_out; /* total nb of bytes output so far */ char *msg; /* last error message, NULL if no error */ struct internal_state FAR *state; /* not visible by applications */ alloc_func zalloc; /* used to allocate the internal state */ free_func zfree; /* used to free the internal state */ voidpf opaque; /* private data object passed to zalloc and zfree */ int data_type; /* best guess about the data type: binary or text */ uLong adler; /* Adler-32 value of the uncompressed data */ uLong reserved; /* reserved for future use */ } z_stream; typedef z_stream FAR *z_streamp; typedef struct gz_header_s { int text; uLong time; int xflags; int os; Bytef *extra; uInt extra_len; uInt extra_max; Bytef *name; uInt name_max; Bytef *comment; uInt comm_max; int hcrc; int done; } gz_header; typedef gz_header FAR *gz_headerp; #define Z_NO_FLUSH 0 #define Z_PARTIAL_FLUSH 1 /* will be removed, use Z_SYNC_FLUSH instead */ #define Z_SYNC_FLUSH 2 #define Z_FULL_FLUSH 3 #define Z_FINISH 4 The following functions are called by the corresponding Init macros and check the zlib version and the compiler's view of z_stream. deflateInit_(z_stream strm, int level, const char *version, int stream_size); inflateInit_(z_stream strm, const char *version, int stream_size); deflateInit2_(z_stream strm, int level, int method, int windowBits, int memLevel, int strategy, const char *version, int stream_size); inflateInit2_(z_stream strm, int windowBits, const char *version, int stream_size); inflateBackInit_(z_stream *strm, int windowBits, unsigned char FAR *window, const char *version, int stream_size) zError(int err); inflateSyncPoint(z_streamp z); get_crc_table(void); P. Deutsch, DEFLATE Compressed Data Format Specification version 1.3, RFC 1951, May 1996. P. Deutsch, GZIP file format specification version 4.3, RFC 1952, May 1996. This page was derived from <zlib.h>, converted by piaip <piaip@csie.ntu.edu.tw>, and was converted to mdoc format by the OpenBSD project.
https://man.openbsd.org/OpenBSD-6.2/compress.3
CC-MAIN-2019-35
refinedweb
2,803
52.6
Created on 2012-11-04 01:30 by Markus.Amalthea.Magnuson, last changed 2020-11-06 20:03 by iritkatriel. If the default value for a flag is a list, and the action append is used, argparse doesn't seem to override the default, but instead adds to it. I did this test script:

import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
    '--foo',
    action='append',
    default=['bar1', 'bar2']
)
args = parser.parse_args()
print args.foo

Output is as follows:

$ ./argparse_foo_test.py
['bar1', 'bar2']
$ ./argparse_foo_test.py --foo bar3
['bar1', 'bar2', 'bar3']

I would expect the last output to be ['bar3']. Is this on purpose (although very confusing) or is it a bug? This behavior is inherited from optparse. I think it is more-or-less intentional, and in any case it is of long enough standing that I don't think we can change it. We documented it for optparse in another issue, but I don't think we made the corresponding improvement to the argparse docs. The test file, test_argparse.py, has a test case for this: 'class TestOptionalsActionAppendWithDefault'

argument_signatures = [Sig('--baz', action='append', default=['X'])]
successes = [
    ('--baz a --baz b', NS(baz=['X', 'a', 'b'])),
]

As this is likely not to get solved, is there a recommended way to work around this issue? Here's what I have done:

import argparse

def main():
    """Main function"""
    parser = argparse.ArgumentParser()
    parser.add_argument('--foo', action='append')
    for arg_str in ['--foo 1 --foo 2', '']:
        args = parser.parse_args(arg_str.split())
        if not args.foo:
            args.foo = ['default', 'value']
        print(args)

printing

Namespace(foo=['1', '2'])
Namespace(foo=['default', 'value'])

as expected, but I wanted to know if there is a more argparse-y way to do this. I have tried using `set_defaults` without any success. Also, as pointed out, the doc for optparse describes the behavior in a simple way. It should be easy to write a subclass of Action, or append Action, that does what you want.
It just needs a different `__call__` method. You just need a way of identifying a default that needs to be overwritten as opposed to appended to.

def __call__(self, parser, namespace, values, option_string=None):
    current_value = getattr(namespace, self.dest)
    if 'current_value is default':
        setattr(namespace, self.dest, values)
        return
    else:
        # the normal append action
        items = _copy.copy(_ensure_value(namespace, self.dest, []))
        items.append(values)
        setattr(namespace, self.dest, items)

People on StackOverFlow might have other ideas. Well that won't work. Example:

import argparse

class TestAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        print("default: {}({})\tdest: {}({})".format(
            self.default, type(self.default),
            getattr(namespace, self.dest), type(getattr(namespace, self.dest))))
        if getattr(namespace, self.dest) is self.default:
            print("Replacing with: ", values)
            setattr(namespace, self.dest, values)
            # extra logical code not necessary for testcase

parser = argparse.ArgumentParser()
parser.add_argument("-o", "--output", type=int, action=TestAction, default=42)
args = parser.parse_args()

$ ./argparse_test -o 42 -o 100
default: 42(<class 'int'>)	dest: 42(<class 'int'>)
Replacing with: 42
default: 42(<class 'int'>)	dest: 42(<class 'int'>)
Replacing with: 100
100

Use this approach:

class ExtendAction(argparse.Action):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if not isinstance(self.default, collections.abc.Iterable):
            self.default = [self.default]
        self.reset_dest = False

    def __call__(self, parser, namespace, values, option_string=None):
        if not self.reset_dest:
            setattr(namespace, self.dest, [])
            self.reset_dest = True
        getattr(namespace, self.dest).extend(values)

Anyway, this should be properly documented.... From what I can tell a workaround for this still isn't documented. The documentation for argparse still does not mention this behaviour. I decided to make a patch based on the optparse issue.
Hopefully it is good enough to be merged. It may help to know something about how defaults are handled - in general. `add_argument` and `set_defaults` set the `default` attribute of the Action (the object created by `add_argument` to hold all of its information). The default `default` is `None`. At the start of `parse_args`, a fresh Namespace is created, and all defaults are loaded into it (I'm ignoring some details). The argument strings are then parsed, and individual Actions update the Namespace with their values, via their `__call__` method. At the end of parsing it reviews the Namespace. Any remaining defaults that are strings are evaluated (passed through `type` function that converts a commandline string). The handling of defaults threads a fine line between giving you maximum power, and keeping things simple and predictable. The important thing for this issue is that the defaults are loaded into the Namespace at the start of parsing. The `append` call fetches the value from the Namespace, replaces it with `[]` if it is None, appends the new value(s), and puts it back on the Namespace. The first `--foo` append is handled in just the same way as the 2nd and third (fetch, append, and put back). The first can't tell that the list it fetches from the namespace came from the `default` as opposed to a previous `append`. The `__call__` for `append` was intentionally kept simple, and predictable. As I demonstrated earlier it is possible to write an `append` that checks the namespace value against some default, and does something different. But that is more complicated. The simplest alternative to this behavior is to leave the default as None. If after parsing the value is still None, put the desired list (or any other object) there. The primary purpose of the parser is to parse the commandline - to figure out what the user wants to tell you. There's nothing wrong with tweaking (and checking) the `args` Namespace after parsing. 
One thing that this default behavior does is allow us to append values to any object, just so long as it has an `append` method. The default does not have to be a standard list. For example, in another bug/issue someone asked for an `extend` action. I could provide that with `append` and a custom list class:

    class MyList(list):
        def append(self, arg):
            if isinstance(arg, list):
                self.extend(arg)
            else:
                super(MyList, self).append(arg)

This just modifies `append` so that it behaves like `extend` when given a list argument.

    parser = argparse.ArgumentParser()
    a = parser.add_argument('-f', action='append', nargs='*', default=[])
    args = parser.parse_args('-f 1 2 3 -f 4 5'.split())

produces a nested list:

    In [155]: args
    Out[155]: Namespace(f=[['1', '2', '3'], ['4', '5']])

but if I change the `default`:

    a.default = MyList([])
    args = parser.parse_args('-f 1 2 3 -f 4 5'.split())

produces a flat list:

    In [159]: args
    Out[159]: Namespace(f=['1', '2', '3', '4', '5'])

I've tested this idea with an `array.array` and a `set` subclass.

You don't need action='append'. For the desired behavior you can pass action='store' with nargs='*'. I think it's the simplest workaround.

I just got bit by this in Python 3.5.3. I get why it does this. I also get why it's impractical to change the behavior now. But it really isn't the obvious behavior, so it should be documented.

I updated the argparse docs and think this bpo could be closed when the PR is merged.
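For completeness: since Python 3.8, argparse also ships a built-in 'extend' action that flattens values the way the MyList trick above does - and, like 'append', it still extends a non-None default, so the same caveat applies:

```python
import argparse

p = argparse.ArgumentParser()
p.add_argument('--foo', nargs='*', action='extend', default=[])
args = p.parse_args(['--foo', '1', '2', '--foo', '3'])
print(args.foo)  # ['1', '2', '3'] - flat, not nested

# The default still takes part, just as with 'append':
p2 = argparse.ArgumentParser()
p2.add_argument('--foo', nargs='*', action='extend', default=['x'])
print(p2.parse_args(['--foo', '1']).foo)  # ['x', '1']
```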
https://bugs.python.org/issue16399
The mobile phone also provides us with a range of image processing software, but as soon as we need to manipulate a huge quantity of photographs we need other tools. This is where programming and Python come into play. Python and its modules like NumPy, SciPy, Matplotlib and other special modules provide the optimal functionality to cope with the flood of pictures. To give you the necessary knowledge, this article of our Python tutorial deals with basic image processing and manipulation. For this purpose, we use the modules NumPy, Matplotlib, and SciPy.

Can you guess what this image is of? You are right, it's a typical picture of an open sky. I took this picture the night before last in Jaipur from my terrace. At that point in time, I had no clue that this could be such an exciting and rewarding exercise. As a kid, we used to spend hours counting these stars but almost always failed to go beyond 50-60. I wanted to complete this exercise with some help from my machine. I had no clue how to complete my task, but today I have completed this long-pending task using Python.

1. Initial black image

import numpy as np
from PIL import Image
import cv2
from matplotlib import pyplot as plt
%matplotlib inline

# create an image of 100*100 pixels
# initial black image
img = np.zeros((100, 100, 3), dtype=np.uint8)
Image.fromarray(img)

2. Two-color image

img[:50, :, 0] = 255
img[50:, :, :] = [255, 255, 0]
Image.fromarray(img)

Use the function cv2.imread() to read an image. The image should be in the working directory, or a full path to the image should be given. The second argument is a flag which specifies the way the image should be read.

import numpy as np
import cv2

# Load a color image in grayscale
img = cv2.imread('path of the image', 0)

Use the function cv2.imshow() to display an image in a window. The window automatically fits the image size. The first argument is a window name, which is a string; the second argument is our image. You can create as many windows as you wish, but with different window names. cv2.waitKey() is a keyboard binding function: its argument is a time in milliseconds, and it waits that long for any keyboard event.
It can also be set to detect specific keystrokes.

Use the function cv2.imwrite() to save an image. The first argument is the file name, the second argument is the image you want to save.

cv2.imwrite('messigray.png', img)

This will save the image in PNG format in the working directory.

The program below loads an image in grayscale, displays it, saves the image if you press 's' and exits, or simply exits without saving if you press the ESC key.

img = cv2.imread('path of the image', 0)
cv2.imshow('image', img)
k = cv2.waitKey(0)
if k == 27:          # ESC pressed: exit without saving
    cv2.destroyAllWindows()
elif k == ord('s'):  # 's' pressed: save and exit
    cv2.imwrite('messigray.png', img)
    cv2.destroyAllWindows()

Matplotlib is a plotting library for Python which gives you a wide variety of plotting methods. Here, you will learn how to display an image with it. You can zoom images, save them, etc.

import numpy as np
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('path.jpg', 0)
plt.imshow(img, cmap='gray', interpolation='bicubic')
plt.xticks([]), plt.yticks([])  # to hide tick values on X and Y axis
plt.show()

A screenshot of the window will look like this:
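The two-color slicing from earlier can be sanity-checked directly on the NumPy array, without displaying anything (note that PIL's Image.fromarray treats the last axis as RGB, while cv2.imread returns channels in BGR order):

```python
import numpy as np

img = np.zeros((100, 100, 3), dtype=np.uint8)  # all-black RGB image
img[:50, :, 0] = 255                  # top half: red channel on
img[50:, :, :] = [255, 255, 0]        # bottom half: red + green = yellow

assert img.shape == (100, 100, 3)
assert tuple(img[0, 0]) == (255, 0, 0)      # a pixel from the top half
assert tuple(img[99, 50]) == (255, 255, 0)  # a pixel from the bottom half
```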
https://pytholabs.com/blogs/Basics%20of%20Image%20Processing%20in%20PYTHON.html
C - Typedef

typedef is a keyword in the C language which is used to assign an alternative name to an existing datatype. It does not introduce a distinct type; it only establishes a synonym for an existing datatype. The syntax for a typedef declaration is given below:

Syntax

typedef <existing_name> <alias_name>

Where existing_name is the name of an already existing data type and alias_name is the alternative name given to it.

For example, to give the name MyInt to unsigned int, the following declaration can be used:

typedef unsigned int MyInt;

Example: In the example below, unsigned int is typedef-declared as MyInt, which is then used in a for loop.

#include <stdio.h>

int main() {
    typedef unsigned int MyInt;
    for (MyInt i = 1; i <= 5; i++) {
        printf("%u\n", i);
    }
    return 0;
}

The output of the above code will be:

1
2
3
4
5

Using typedef with structures

The typedef declaration can be used to give a name to user-defined data types as well. For example, typedef can be used with structures to define a new data type and then use that data type to define structure variables.

Example: In the example below, a variable emp of type Employee is created, which is further used to store the data of an employee.

#include <stdio.h>
#include <string.h>

typedef struct Employees {
    char name[50];
    int age;
    char city[50];
    int salary;
} Employee;

int main() {
    Employee emp;
    strcpy(emp.name, "John");
    emp.age = 25;
    strcpy(emp.city, "London");
    emp.salary = 75000;

    printf("Employee name: %s\n", emp.name);
    printf("Employee age: %d\n", emp.age);
    printf("Employee city: %s\n", emp.city);
    printf("Employee salary: %d\n", emp.salary);
    return 0;
}

The output of the above code will be:

Employee name: John
Employee age: 25
Employee city: London
Employee salary: 75000

Using typedef with pointers

Similarly, another name or alias can be provided to pointer variables by using a typedef declaration.

Example: In the example below, the integer pointer type is given another name, intptr.
#include <stdio.h>

int main() {
    // typedef declaration
    typedef int* intptr;

    int MyVar = 10;

    // using intptr instead of int*
    intptr p1;
    p1 = &MyVar;

    printf("Address stored in p1 variable: %p\n", p1);
    printf("Value stored in *p1: %i", *p1);
    return 0;
}

The output of the above code will be:

Address stored in p1 variable: 0x7ffe2c4a86cc
Value stored in *p1: 10

typedef vs #define

#define is a C directive which is also used to define aliases for various data types, similar to typedef, but with the following differences:

- typedef is limited to giving symbolic names to types only, whereas #define can be used to define an alias for values as well; for example, 3.14 can be defined as PI.
- typedef interpretation is performed by the compiler, whereas #define statements are processed by the preprocessor.
- #define is not terminated with a semicolon, but typedef is terminated with a semicolon.
- typedef follows the scope rule, i.e., if a new type is defined in a scope (inside a function or block), then the new type name will only be visible within that scope. In the case of #define, when the preprocessor encounters #define, it replaces all occurrences from that point on.

Example: The example below shows the usage of the #define C directive.

#include <stdio.h>

#define PI 3.14
#define MyInt int
#define False 0

int main() {
    MyInt x = 10;
    printf("x = %i\n", x);
    printf("PI = %f\n", PI);
    printf("False = %i\n", False);
    return 0;
}

The output of the above code will be:

x = 10
PI = 3.140000
False = 0
https://www.alphacodingskills.com/c/c-typedef.php
Compute and Reduce with Tuple Inputs¶

Author: Ziheng Jiang

Often we want to compute multiple outputs with the same shape within a single loop, or perform a reduction that involves multiple values, like argmax. These problems can be addressed by tuple inputs. In this tutorial, we will introduce the usage of tuple inputs in TVM.

from __future__ import absolute_import, print_function

import tvm
import numpy as np

Describe Batchwise Computation¶

For operators which have the same shape, we can put them together as the inputs of tvm.compute if we want them to be scheduled together in the next schedule procedure.

n = tvm.var("n")
m = tvm.var("m")
A0 = tvm.placeholder((m, n), name='A0')
A1 = tvm.placeholder((m, n), name='A1')
B0, B1 = tvm.compute((m, n), lambda i, j: (A0[i, j] + 2, A1[i, j] * 3), name='B')

# The generated IR code would be:
s = tvm.create_schedule(B0.op)
print(tvm.lower(s, [A0, A1, B0, B1], simple_mode=True))

Out:

produce B {
  for (i, 0, m) {
    for (j, 0, n) {
      B.v0[((i*n) + j)] = (A0[((i*n) + j)] + 2.000000f)
      B.v1[((i*n) + j)] = (A1[((i*n) + j)]*3.000000f)
    }
  }
}

Describe Reduction with Collaborative Inputs¶

Sometimes, we require multiple inputs to express some reduction operators, and the inputs will collaborate together, e.g. argmax. In the reduction procedure, argmax needs to compare the values of the operands and also needs to keep the index of the operand. It can be expressed with comm_reducer as below:

# x and y are the operands of the reduction; each of them is a tuple of index
# and value.
def fcombine(x, y):
    lhs = tvm.expr.Select((x[1] >= y[1]), x[0], y[0])
    rhs = tvm.expr.Select((x[1] >= y[1]), x[1], y[1])
    return lhs, rhs

# our identity element also needs to be a tuple, so `fidentity` accepts
# two types as inputs.
def fidentity(t0, t1): return tvm.const(-1, t0), tvm.min_value(t1) argmax = tvm.comm_reducer(fcombine, fidentity, name='argmax') # describe the reduction computation m = tvm.var('m') n = tvm.var('n') idx = tvm.placeholder((m, n), name='idx', dtype='int32') val = tvm.placeholder((m, n), name='val', dtype='int32') k = tvm.reduce_axis((0, n), 'k') T0, T1 = tvm.compute((m, ), lambda i: argmax((idx[i, k], val[i, k]), axis=k), name='T') # the generated IR code would be: s = tvm.create_schedule(T0.op) print(tvm.lower(s, [idx, val, T0, T1], simple_mode=True)) Out: produce T { for (i, 0, m) { T.v0[i] = -1 T.v1[i] = -2147483648 for (k, 0, n) { T.v0[i] = tvm_if_then_else((T.v1[i] < val[((i*n) + k)]), idx[((i*n) + k)], T.v0[i]) T.v1[i] = tvm_if_then_else((T.v1[i] < val[((i*n) + k)]), val[((i*n) + k)], T.v1[i]) } } } Note For ones who are not familiar with reduction, please refer to Define General Commutative Reduction Operation. Schedule Operation with Tuple Inputs¶ It is worth mentioning that although you will get multiple outputs with one batch operation, but they can only be scheduled together in terms of operation. n = tvm.var("n") m = tvm.var("m") A0 = tvm.placeholder((m, n), name='A0') B0, B1 = tvm.compute((m, n), lambda i, j: (A0[i, j] + 2, A0[i, j] * 3), name='B') A1 = tvm.placeholder((m, n), name='A1') C = tvm.compute((m, n), lambda i, j: A1[i, j] + B0[i, j], name='C') s = tvm.create_schedule(C.op) s[B0].compute_at(s[C], C.op.axis[0]) # as you can see in the below generated IR code: print(tvm.lower(s, [A0, A1, C], simple_mode=True)) Out: // attr [B.v0] storage_scope = "global" allocate B.v0[float32 * n] // attr [B.v1] storage_scope = "global" allocate B.v1[float32 * n] produce C { for (i, 0, m) { produce B { for (j, 0, n) { B.v0[j] = (A0[((i*n) + j)] + 2.000000f) B.v1[j] = (A0[((i*n) + j)]*3.000000f) } } for (j, 0, n) { C[((i*n) + j)] = (A1[((i*n) + j)] + B.v0[j]) } } } Summary¶ This tutorial introduces the usage of tuple inputs operation. 
- Describe normal batchwise computation. - Describe reduction operation with tuple inputs. - Notice that you can only schedule computation in terms of operation instead of tensor. Total running time of the script: ( 0 minutes 0.015 seconds) Gallery generated by Sphinx-Gallery
https://docs.tvm.ai/tutorials/language/tuple_inputs.html
CC-MAIN-2019-26
refinedweb
714
53.37
If this template helps then use it. If not then just delete and start from scratch. OS (e.g. Win10): Catalina (Mac) PsychoPy version 2020.1.3 Standard Standalone? (y/n) If not then what?: Standalone What are you trying to achieve?: recording using microphone What did you try to make it work?: from psychopy import microphone, sound What specifically went wrong when you tried that?: ImportError: sys.meta_path is None, Python is likely shutting down Exception ignored in: <bound method TextStim.del of <psychopy.visual.text.TextStim object at 0x1215fa320>> Traceback (most recent call last): File “/Applications/PsychoPy3.app/Contents/Resources/lib/python3.6/psychopy/visual/text.py”, line 239, in del File “/Applications/PsychoPy3.app/Contents/Resources/lib/python3.6/pyglet/gl/lib.py”, line 97, in errcheck ImportError: sys.meta_path is None, Python is likely shutting down Experiment ended. Best thibault
https://discourse.psychopy.org/t/pyo-not-found-with-catalina-mac/11691
CC-MAIN-2021-43
refinedweb
144
54.9
The streaming build system What is gulp 3000 curated plugins for streaming file transformations - Simple - By providing only a minimal API surface, gulp is easy to learn and simple to use What's new in 4.0?!What's new in 4 InstallationInstallation Follow our Quick Start guide. RoadmapRoadmap Find out about all our work-in-progress and outstanding issues at. DocumentationDocumentation Check out the Getting Started guide and API docs on our website! Excuse our dust! All other docs will be behind until we get everything updated. Please open an issue if something isn't working. Sample gulpfile.js This file will give you a taste of what gulp does. var gulp = require('gulp'); var less = require('gulp-less'); var babel = require('gulp-babel'); var concat = require('gulp-concat'); var uglify = require('gulp-uglify'); var rename = require('gulp-rename'); var cleanCSS = require('gulp-clean-css'); var del = require('del'); var paths = { styles: { src: 'src/styles/**/*.less', dest: 'assets/styles/' }, scripts: { src: 'src/scripts/**/*.js', dest: 'assets/scripts/' } }; /* Not all tasks need to use streams, a gulpfile is just another node program * and you can use all packages available on npm, but it must return either a * Promise, a Stream or take a callback and call it */ function clean() { // You can use multiple globbing patterns as you would with `gulp.src`, // for example if you are using del 2.0 or above, return its promise return del([ 'assets' ]); } /* * Define our tasks using plain functions */ function styles() { return gulp.src(paths.styles.src) .pipe(less()) .pipe(cleanCSS()) // pass in options to the stream .pipe(rename({ basename: 'main', suffix: '.min' })) .pipe(gulp.dest(paths.styles.dest)); } function scripts() { return gulp.src(paths.scripts.src, { sourcemaps: true }) .pipe(babel()) .pipe(uglify()) .pipe(concat('main.min.js')) .pipe(gulp.dest(paths.scripts.dest)); } function watch() { gulp.watch(paths.scripts.src, scripts); gulp.watch(paths.styles.src, 
styles); } /* * Specify if tasks run in series or parallel using `gulp.series` and `gulp.parallel` */ var build = gulp.series(clean, gulp.parallel(styles, scripts)); /* * You can use CommonJS `exports` module notation to declare tasks */ exports.clean = clean; exports.styles = styles; exports.scripts = scripts; exports.watch = watch; exports.build = build; /* * Define default task that can be called by just running `gulp` from cli */ exports.default = build; Use latest JavaScript version in your gulpfileUse latest JavaScript version in your gulpfile Most new versions of node support most features that Babel provides, except the import/ export syntax. When only that syntax is desired, rename to gulpfile.esm.js, install the [esm][esm-module] module, and skip the Babel portion below. Node already supports a lot of ES2015+ features, but to avoid compatibility problems we suggest to install Babel and rename your gulpfile.js to gulpfile.babel.js. npm install --save-dev @babel/register @babel/core @babel/preset-env Then create a .babelrc file with the preset configuration. { "presets": [ "@babel/preset-env" ] } And here's the same sample from above written in ES2015+. 
import gulp from 'gulp'; import less from 'gulp-less'; import babel from 'gulp-babel'; import concat from 'gulp-concat'; import uglify from 'gulp-uglify'; import rename from 'gulp-rename'; import cleanCSS from 'gulp-clean-css'; import del from 'del'; const paths = { styles: { src: 'src/styles/**/*.less', dest: 'assets/styles/' }, scripts: { src: 'src/scripts/**/*.js', dest: 'assets/scripts/' } }; /* * For small tasks you can export arrow functions */ export const clean = () => del([ 'assets' ]); /* * You can also declare named functions and export them as tasks */ export function styles() { return gulp.src(paths.styles.src) .pipe(less()) .pipe(cleanCSS()) // pass in options to the stream .pipe(rename({ basename: 'main', suffix: '.min' })) .pipe(gulp.dest(paths.styles.dest)); } export function scripts() { return gulp.src(paths.scripts.src, { sourcemaps: true }) .pipe(babel()) .pipe(uglify()) .pipe(concat('main.min.js')) .pipe(gulp.dest(paths.scripts.dest)); } /* * You could even use `export as` to rename exported tasks */ function watchFiles() { gulp.watch(paths.scripts.src, scripts); gulp.watch(paths.styles.src, styles); } export { watchFiles as watch }; const build = gulp.series(clean, gulp.parallel(styles, scripts)); /* * Export a default task */ export default build; Incremental BuildsIncremental Builds You can filter out unchanged files between runs of a task using the gulp.src function's since option and gulp.lastRun: const paths = { ... images: { src: 'src/images/**/*.{jpg,jpeg,png}', dest: 'build/img/' } } function images() { return gulp.src(paths.images.src, {since: gulp.lastRun(images)}) .pipe(imagemin({optimizationLevel: 5})) .pipe(gulp.dest(paths.images.dest)); } function watch() { gulp.watch(paths.images.src, images); } Task run times are saved in memory and are lost when gulp exits. It will only save time during the watch task when running the images task for a second time..
https://libraries.io/bower/gulplocal
CC-MAIN-2019-51
refinedweb
771
51.95
String Palindrome in Java Today we will see a program where we will find String palindrome in Java programming language. We have already learned how to find palindrome of a number in one of my Java article. Now in this article, we are going to learn about a new thing. So let’s begin… What is a String Palindrome? A string palindrome is a string which is the same when we read it either from front direction or backward direction. Some examples are NITIN, NAMAN, MADAM, CIVIC, etc. You will able to find lots of other meaningful words like these which will remain the same even if you read it from backward. In our program what we will do is compare the first and last character of the input string until we reach the half-length of the string. Consider the string “NITIN”. We will compare index 0 with 4, 1 with 3,2 with 2. (2 with 2 can be ignored). So let’s go straight to our Java code. Below is our code: public class MyClass { public static void main(String args[]) { String str="abba"; int ln=str.length(); int flag=1; for(int i=0;i<ln/2;i++) { if(str.charAt(i)!=str.charAt(ln-i-1)) { flag=0; } } if(flag==1) { System.out.println("Palindrome"); } else { System.out.println("Not Palindrome"); } } } Java CharAt function This Java CharAt function returns the element at the index passed as a parameter. This code can work both for even numbered and odd numbered string. Hope this helped you out. I hope you have understood String Palindrome in Java. Have a nice day ahead and happy coding. Also, read: How to find the difference or gap of days between two dates in python Find the next greater number from the same set of digits in C++ How to copy data from one excel sheet to another using python
https://www.codespeedy.com/string-palindrome-in-java/
CC-MAIN-2020-24
refinedweb
318
75.2
Closed Bug 749225 Opened 9 years ago Closed 9 years ago Stack trace: Incorrect line numbers for jsms that include preprocessor directives Categories (Firefox Build System :: General, defect) Tracking (Not tracked) People (Reporter: miker, Unassigned) Details When a jsm contains preprocessor directives it knocks the line numbers out of sync for stack traces. I guess this may also cause issues with choosing the correct line from the debugger. e.g. if (modifiersAttr.match("accel")) #ifdef XP_MACOSX combo.push(keysbundle.GetStringFromName("VK_META")); #else combo.push(keysbundle.GetStringFromName("VK_CONTROL")); #endif if (modifiersAttr.match("shift")) combo.push(keysbundle.GetStringFromName("VK_SHIFT")); will offset the line by 4. Is this something that should be fixed in the JS engine, or something that should be fixed in the preprocessor? IMO this bug should be moved to Core::Build Config and config/Preprocessor.py should be modified to insert blank lines whenever it takes things out so that line numbers are preserved. Agreed. Assignee: general → nobody Component: JavaScript Engine → Build Config See also bug 797325 as to why even when Preprocess is helpful, the stacktraces still aren't. Product: Core → Firefox Build System
https://bugzilla.mozilla.org/show_bug.cgi?id=749225
CC-MAIN-2021-31
refinedweb
186
56.25
I encountered ‘Loss is nan, stopping training’ when training my model with an additional multiheadAttention module. I have checked that when I’m not using the if block, the training is passing without error. Can anyone spot what’s causing the nan in this part of the code? def forward(self, x): x = self.forward_features(x) # [32, 198, 384] if self.add_attn: # This if block is causing the `nan` B, N, C = x.shape # batch_size, sequence_length, embed_dim x_qkv = self.add_forward(x) # This is nn.Linear(self.embed_dim, 3*self.embed_dim) x_qkv = x_qkv.reshape(B, N, 3, C).permute(2, 1, 0, 3) x, _ = self.add_attn(x_qkv[0], x_qkv[1], x_qkv[2]) # This is nn.MultiheadAttention(embed_dim=embed_dim, num_heads=12) x = x.permute(1, 0, 2) x = self.GeM(x, axis=1) x = self.head(x) return x
https://discuss.pytorch.org/t/loss-is-nan-stopping-training-in-multiheadattention/120628
CC-MAIN-2022-33
refinedweb
138
64.88
Debate: Writing for a Residual Income or Writing for Clients? Residual Income or Writing for Clients ? I was having this discussion with someone who is just starting out as an online writer. I debated this topic when I first started out as a writer. Although, I make most of my money with print publications now. I still do a lot of writing for Triond and Hubpages. I therefore, was interested in this particular topic. The person who I was discussing this with was basically saying it is pointless working for clients because they pay so little online. I basically told her that a lot of the low paying work you see online is for inexperienced writers who are just starting out. She was sceptical about this anyway, my view is that if you are just starting out as an online writer you should dip your foot into both pools. Writing for Residual Income There are a number of sites out there that you can join and start earning residual income from today. The idea behind these sites is, you write content and receive a percantage of the adverstising revenue. However, you have to work hard to promote your articles and get as many people to read your articles as possible. To see a good return on your time, you need to put in a lot of work. This might deter some people from actually going down this route because of the fact that you do not see a return on your time straight away. My view on this is, the time you put in now you will be rewarded for in the future. For example, with Hubpages I started writing a couple of years ago, and I did not see any return on my time. I therefore, stopped writing for Hubpages and then came back at the beginning of this year. I am now seeing a good return on the time that I put in. Some Hubbers are making enough money every month passively that they do not need to put in so much time and work into their hubs anymore. Their hubs are basically working for them, if you know what I mean. 
You can continue to earn from your hubs for months, even years to come. However, if you need immediate money then this might not be something that you should rely on for a full time income. You could see it as a way of making extra money and saving for the future. Writing for Clients If you write for clients, you do the work and get paid on completion. Some clients will pay you upfront for the work, or give you a percentage of the contract fee. You can either gain clients through sites like oDesk or Elance. Or, you might even find work on Craiglist however, most experienced online writers will find their clients in other ways; even offline or through agencies. Some build their online presence to a level whereby they are constantly contacted to provide work for clients all over the world. The upside to this is that you can get regular money and make a good living if you are willing to work hard. However, at times where people are just not hiring writers you might need an extra buffer. For example, during a recession you might not receive so many offers for work. Therefore, you might have to look for other ways of making money. If you had worked hard on revenue share sites you might even earn some money passively, and you can therefore, ease up on the client work for a period of time. Conclusion Some people might think that writing for a residual income is a waste of time. Because of the fact that you put so much time and effort into your work and you might not get a major return on your efforts. However, when it comes to writing or revenue share sites if you work hard and write good quality content you will build yourself a long term residual income. Writing for clients can be a stable income if you provide a good and reliable service for your clients. Therefore, in conclusion if you are just starting out as an online content writer, the best thing to do is choose one or two content sites and slowly begin building your writing profile on there. 
You can also offer your services to three or four clients that will provide you with long term, reliable work. I think it's a matter of getting over the hump. If there's any possible way that you can write for residual income, do it! Another factor is the expertise required to make it work writing for residual income. You have to write about the right topics for sure. Great hub! Excellent overview on the pros and cons of writing for residual income and writing for clients. I suppose one is just slow stable income while the other is fast unstable income. Yet, the best option would be to make an effort for both of them. Interesting hub. Getting experience was hubPages, so I can't complain. It gives a person a feel of what could be next. Thanks for your hub! I know exactly what you are writing about. I am still undecided on how much time to put into residual income writing and doing work for others. I've gotten some good income from jobs I've gotten on Elance; I've even gotten some $$ from Craigslist clients. But that can be pretty demanding work and it keeps you on the treadmill if you want to make decent money. I'm now setting up a blog and concentrating on Hubpages to see if I can build up an online presence. Thanks again for this hub, and best of luck to you. Voted up, interesting, useful. Hi Thanks for this hub and such a clear explanation. It makes a lot of sense. I started writing on Hubpages about a year ago and am yet to see any real return (although I have not been a very prolific writer). 9
https://hubpages.com/money/Debate-Writing-for-a-Residual-Income-or-Writing-for-Clients
CC-MAIN-2018-22
refinedweb
1,020
79.7
.NET Tip: Take a Byte out of Strings WEBINAR: On-demand webcast How to Boost Database Development Productivity on Linux, Docker, and Kubernetes with Microsoft SQL Server 2017 REGISTER > If you work much with Streams or Sockets, you are bound come across the need to convert a string to a byte array or to convert a byte array into a string. There is an easy means to accomplish this, but it is not where I would have expected it to be. The methods to do this are found in the Encoding class. The GetBytes() method is used to get a byte array representation of a string and the GetString() method is used to get a string representation of an array of bytes. Here is an example of how to use the methods: string s1 = "This is a test of converting a string to a byte array and back again."; string s2 = ""; byte[] b; b = Encoding.ASCII.GetBytes(s1); s2 = Encoding.ASCII.GetString(b); Debug.Print("Byte array length: {0}", b.Length); Debug.Print("Strings are equal: {0}", s1 == s2); Start with the string to convert, in this case s1. The code then uses GetBytes() to turn the string into a byte array. The byte array is then turned back into a string using GetString(), which is stored in s2. The Debug.Print() statements display the length of the byte array to show that it is the same length as the string and a comparison of s1 and s2 to show that they are the same. The above example uses ASCII encoding. If you are working with Unicode characters, you should use the Unicode version of the methods in the Encoding class as shown below. b = Encoding.Unicode.GetBytes(s1); s2 = Encoding.Unicode.GetString(b); Debug.Print("Byte array length: {0}", b.Length); Debug.Print("Strings are equal: {0}", s1 == s2) You will find that the length of the byte array is now twice as long because each Unicode character requires two bytes of storage. 
There is one more thing that I think could be done to make the code a little cleaner and you won't have to remember that the GetBytes() and GetSTring() methods are in the Encoding class. Create extension methods that take care of the conversion for you. Here are examples of extension methods that add GetBytes() and GetUnicodeBytes() methods to strings as well as GetString() and GetUnicodeString() methods to bytes. public static byte[] GetBytes(this string s) { return (Encoding.ASCII.GetBytes(s)); } public static byte[] GetUnicodeBytes(this string s) { return (Encoding.Unicode.GetBytes(s)); } public static string GetString(this byte[] b) { return (Encoding.ASCII.GetString(b)); } public static string GetUnicodeString(this byte[] b) { return (Encoding.Unicode.GetString(b)); } Using the extension methods instead of the Encoding class directly looks like this: // ASCII Version b = s1.GetBytes(); s2 = b.GetString(); // Unicode Version b = s1.GetUnicodeBytes(); s2 = b.GetUnicodeString(); I hope this helps when you need to work with strings and byte arrays. Remember to take advantage of extension methods when appropriate to help simplify!
https://www.codeguru.com/csharp/.net/net_general/tipstricks/article.php/c15547/NET-Tip-Take-a-Byte-out-of-Strings.htm
CC-MAIN-2017-51
refinedweb
507
64.3
Building Filesystems the Way You Build Web Apps By Ksplice Post Importer on Jul 07, 2010 FUSE is awesome. While most major Linux filesystems (ext3, XFS, ReiserFS, btrfs) are built-in to the Linux kernel, FUSE is a library that lets you instead write filesystems as userspace applications. When something attempts to access the filesystem, those accesses get passed on to the FUSE application, which can then return the filesystem data. It lets you quickly prototype and test filesystems that can run on multiple platforms without writing kernel code. You can easily experiment with strange and unusual interactions between the filesystem and your applications. You can even build filesystems without writing a line of C code. FUSE has a reputation of being used only for toy filesystems (when are you actually going to use flickrfs?), but that's really not fair. FUSE is currently the best way to read NTFS partitions on Linux, how non-GNOME and legacy applications can access files over SFTP, SMB, and other protocols, and the only way to run ZFS on Linux. But because the FUSE API calls separate functions for each system call (i.e. getattr, open, read, etc.), in order to write a useful filesystem you need boilerplate code to translate requests for a particular path into a logical object in your filesystem, and you need to do this in every FUSE API function you implement. Take a page from web apps This is the kind of problem that web development frameworks have also had to solve, since it's been a long time since a URL always mapped directly onto a file on the web server. And while there are a handful of approaches for handling URL dispatch, I've always been a fan of the URL dispatch style popularized by routing in Ruby on Rails, which was later ported to Python as the Routes library. Routes dissociates an application's URL structure from your application's internal organization, so that you can connect arbitrary URLs to arbitrary controllers. 
However, a more common use of Routes involves embedding variables in the Routes configuration, so that you can support a complex and potentially arbitrary set of URLs with a comparatively simple configuration block. For instance, here is the (slightly simplified) Routes configuration from a Pylons web application:

from routes import Mapper

def make_map():
    map = Mapper()
    map.minimization = False

    # The ErrorController route (handles 404/500 error pages); it should
    # likely stay at the top, ensuring it can always be resolved
    map.connect('error/{action}/{id}', controller='error')

    map.connect('/', controller='status', action='index')
    map.connect('/{controller}', action='index')
    map.connect('/{controller}/{action}')
    map.connect('/{controller}/{action}/{id}')

    return map

In this example, {controller}, {action}, and {id} are variables which can match any string within that component. So, for instance, if someone were to access /spend/new within the web application, Routes would find a controller named spend, and would call the new action on that method.

RouteFS: URL routing for filesystems

Just as URLs take their inspiration from the filesystem, we can use the ideas from URL routing in our filesystem. And to make this easy, I created a project called RouteFS. RouteFS ties together FUSE and Routes, and it's great because it lets you specify your filesystem in terms of the filesystem hierarchy instead of in terms of the system calls to access it.

RouteFS was originally developed as a generalized solution to a real problem I faced while working on the Invirt project at MIT. We wanted a series of filesystem entries that were automatically updated when our database changed (specifically, we were using .k5login files to control access to a server), so we used RouteFS to build a filesystem where every filesystem lookup was resolved by a database query, ensuring that our filesystem always stayed up to date.
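To make the matching behavior concrete, here is a toy stdlib-only re-implementation of the variable-embedding idea just described. This is an illustration written for this article, not the real Routes API; the class and function names are invented:

```python
import re

def compile_route(pattern):
    """Turn a pattern like '/{controller}/{action}' into a regex with named groups."""
    regex = re.sub(r'\{(\w+)\}', r'(?P<\1>[^/]+)', pattern)
    return re.compile('^' + regex + '$')

class ToyMapper:
    """A minimal stand-in for routes.Mapper: first matching route wins."""
    def __init__(self):
        self.routes = []

    def connect(self, pattern, **defaults):
        self.routes.append((compile_route(pattern), defaults))

    def match(self, path):
        for regex, defaults in self.routes:
            m = regex.match(path)
            if m:
                result = dict(defaults)
                result.update(m.groupdict())
                return result
        return None

m = ToyMapper()
m.connect('/', controller='status', action='index')
m.connect('/{controller}/{action}')
print(m.match('/spend/new'))  # {'controller': 'spend', 'action': 'new'}
```

A lookup of /spend/new binds controller='spend' and action='new', exactly the dispatch decision the article describes; anything that matches no route returns None.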
Today, however, we're going to be using RouteFS to build the very thing I lampooned FUSE for: toy filesystems. I'll be demonstrating how to build a simple filesystem in less than 60 lines of code. I want to continue the popular theme of exposing Web 2.0 services as filesystems, but I'm also a software engineer at a very Git- and Linux-heavy company. The popular Git repository hosting site Github has an API for interacting with the repositories hosted there, so we'll use the Python bindings for the API to build a Github filesystem, or GithubFS. GithubFS lets you examine the Git repositories on Github, as well as the different branches of those repositories.

Getting started

If you want to follow along, you'll first need to install FUSE itself, along with the Python FUSE bindings - look for a python-fuse or fuse-python package. You'll also need a few third-party Python packages: Routes, RouteFS, and github2. Routes and RouteFS are available from the Python Cheeseshop, so you can install those by running easy_install Routes RouteFS. For github2, you'll need the bleeding edge version, which you can get by running easy_install

Now then, let's start off with the basic shell of a RouteFS filesystem:

#!/usr/bin/python

import routes
import routefs


class GithubFS(routefs.RouteFS):
    def make_map(self):
        m = routes.Mapper()

        return m


if __name__ == '__main__':
    routefs.main(GithubFS)

As with the web application code above, the make_map method of the GithubFS class creates, configures, and returns a Python Routes mapper, which RouteFS uses for dispatching accesses to the filesystem. The routefs.main function takes a RouteFS class and handles instantiating the class and mounting the filesystem.
Populating the filesystem

Now that we have a filesystem, let's put some files in it:

#!/usr/bin/python

import routes
import routefs


class GithubFS(routefs.RouteFS):
    def __init__(self, *args, **kwargs):
        super(GithubFS, self).__init__(*args, **kwargs)

        # Maps user -> [projects]
        self.user_cache = {}

    def make_map(self):
        m = routes.Mapper()

        m.connect('/', controller='list_users')

        return m

    def list_users(self, **kwargs):
        return [user for user, projects in self.user_cache.iteritems() if projects]


if __name__ == '__main__':
    routefs.main(GithubFS)

Here, we add our first Routes mapping, connecting '/', or the root of the filesystem, to the list_users controller, which is just a method on the filesystem's class. The list_users controller returns a list of strings. When the controller that a path maps to returns a list, RouteFS automatically makes that path into a directory. To make a path be a file, you just return a single string containing the file's contents. We'll use the user_cache attribute to keep track of the users that we've seen and their repositories. This will let us auto-populate the root of the filesystem as users get looked up.
Let's add some code to populate that cache:

#!/usr/bin/python

from github2 import client
import routes
import routefs


class GithubFS(routefs.RouteFS):
    def __init__(self, *args, **kwargs):
        super(GithubFS, self).__init__(*args, **kwargs)

        # Maps user -> [projects]
        self.user_cache = {}
        self.github = client.Github()

    def make_map(self):
        m = routes.Mapper()

        m.connect('/', controller='list_users')
        m.connect('/{user}', controller='list_repos')

        return m

    def list_users(self, **kwargs):
        return [user for user, projects in self.user_cache.iteritems() if projects]

    def list_repos(self, user, **kwargs):
        if user not in self.user_cache:
            try:
                self.user_cache[user] = [r.name for r in self.github.repos.list(user)]
            except:
                self.user_cache[user] = None

        return self.user_cache[user]


if __name__ == '__main__':
    routefs.main(GithubFS)

That's enough code that we can start interacting with the filesystem:

opus:~ broder$ ./githubfs /mnt/githubfs
opus:~ broder$ ls

Users and projects and branches, oh my!

You can see a slightly more fleshed-out filesystem on (where else?) Github. GithubFS lets you look at the current SHA-1 for each branch in each repository for a user:

opus:~ broder$ ./githubfs /githubfs
master
opus:~ broder$ cat /mnt/githubfs/ebroder/githubfs/master
cb4fc93ba381842fa0c2b34363d52475c4109852

What next?

Want to see more examples of RouteFS? RouteFS itself includes some example filesystems, and you can see how we used RouteFS within the Invirt project. But most importantly, because RouteFS is open source, you can incorporate it into your own projects. So, what cool tricks can you think of for dynamically generated filesystems?
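The "list means directory, string means file" convention described above can be simulated without FUSE at all. The sketch below is a FUSE-free illustration of that dispatch rule; the function and table names are invented for this example and are not the RouteFS API:

```python
def resolve(controllers, path):
    """Map a path to ('dir'|'file'|'ENOENT', payload) using the RouteFS-style rule."""
    handler = controllers.get(path)
    if handler is None:
        return ('ENOENT', None)
    result = handler()
    if isinstance(result, list):
        return ('dir', result)   # a FUSE readdir() would list these entries
    return ('file', result)      # a FUSE read() would return these contents

# A tiny hand-written "route table" mimicking the GithubFS layout:
controllers = {
    '/': lambda: ['ebroder'],
    '/ebroder': lambda: ['githubfs'],
    '/ebroder/githubfs/master': lambda: 'cb4fc93ba381842fa0c2b34363d52475c4109852',
}

print(resolve(controllers, '/'))  # ('dir', ['ebroder'])
```

The same controller code thus decides both what a directory listing contains and what a file's contents are, which is the core trick RouteFS borrows from web routing.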
https://blogs.oracle.com/ksplice/entry/building_filesystems_the_way_you
CC-MAIN-2015-35
refinedweb
1,351
64.1
Currently, I am writing a Lambda function that triggers when a new S3 bucket is created (under my project). I am using a CloudWatch rule which triggers the Lambda function, and I pass the whole event to the Lambda function as input. What do I do so that my Lambda function can read the bucket's name from the event and assign the name as the value of a string variable? I am using the code below. I wrote it myself, so there might be some mistakes; please correct me where I am lacking. Thanks in advance, and take a look at my code:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = event['s3']['bucket']['name']

CloudTrail events for S3 bucket-level operations have a different format from the other events. Actually, the name of the bucket is within a JSON object called requestParameters. Also, the whole event is encapsulated within the Records array. See the CloudTrail Log Event Reference.
A truncated version of the CloudTrail event for bucket creation is shown here; the code below it might be able to help you out:

"eventSource": "s3.amazonaws.com",
"eventName": "CreateBucket",
"userAgent": "signin.amazonaws.com",
"requestParameters": {
    "CreateBucketConfiguration": {
        "LocationConstraint": "aws-region",
        "xmlns": ""
    },
    "bucketName": "my-awsome-bucket"
}

Therefore, your code could look something like:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        if record['eventName'] == "CreateBucket":
            bucket = record['requestParameters']['bucketName']
            print(bucket)
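The parsing in the answer can be exercised locally without AWS, since it is pure dictionary traversal. The helper below is a standalone sketch of that logic; the sample event is abbreviated and hypothetical (real CloudTrail records carry many more fields):

```python
def created_bucket_names(event):
    """Return bucket names from CreateBucket records in a CloudTrail-style event."""
    names = []
    for record in event.get('Records', []):
        if record.get('eventName') == 'CreateBucket':
            params = record.get('requestParameters') or {}
            name = params.get('bucketName')
            if name:
                names.append(name)
    return names

# Abbreviated sample event for local testing:
sample_event = {
    'Records': [
        {
            'eventSource': 's3.amazonaws.com',
            'eventName': 'CreateBucket',
            'requestParameters': {'bucketName': 'my-awsome-bucket'},
        },
        # Non-creation records are ignored:
        {'eventSource': 's3.amazonaws.com', 'eventName': 'DeleteBucket'},
    ]
}

print(created_bucket_names(sample_event))  # ['my-awsome-bucket']
```

Using .get() with defaults keeps the handler from raising KeyError on events that lack Records or requestParameters, which is a common failure mode when the same Lambda receives more than one event shape.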
https://www.edureka.co/community/5148/want-my-aws-s3-bucket-to-read-name-from-cloudwatch-event
CC-MAIN-2020-16
refinedweb
348
58.18
Almost all of the functionality of the C++ OMPL library is accessible through Python using more or less the same API. Some important differences will be described below. The Python bindings are generated with Py++, which relies on Boost.Python. The bindings are packaged in the ompl module. The main namespaces (ompl::base, ompl::control, ompl::geometric) are available as sub-modules. To quickly get an idea of what classes, functions, etc., are available within each submodule, type something like this at the Python prompt:

Contents:
- Usage of the Python bindings: Good practices
- Important differences between C++ and Python
- Differences between the C++ and Python API's
- A simple example
- Creating boost::function objects from Python functions
- (Re)generating the Python bindings

Usage of the Python bindings: Good practices

Although almost all C++ functionality is exposed to Python, there are some caveats to be aware of:

- By default OMPL often returns a reference to an internal object when the original C++ function or method returns a reference or takes a reference as input. This means that you have to be careful about the scope of variables. A Python object needs to exist as long as at least one OMPL object contains a reference to it. Analogously, you should not try to use Python variables that point to OMPL objects that have already been destroyed.
- C++ threads and Boost.Python don't mix; see the Boost.Python FAQ. If you have multi-threaded C++ code where multiple threads can call the Python interpreter at the same time, then this can lead to crashes. For that reason, we do not have Python bindings for the parallelized RRT and SBL planners, because they can call a Python state validator function at the same time.
- Just because you can create Python classes that derive from C++ classes, this doesn't mean it is a good idea.
You pay a performance penalty each time your code crosses the Python-C++ barrier (objects may need to be copied, locks acquired, etc.). This means that it is an especially bad idea to override low-level classes. For low-level functionality it is best to stick to the built-in OMPL functionality and use just the callback functions (e.g., for state validation and state propagation). It is also highly recommended to use the ompl::geometric::SimpleSetup and ompl::control::SimpleSetup classes rather than the lower-level classes for that same reason.

Important differences between C++ and Python

- There are no templates in Python, so templated C++ classes and functions need to be fully instantiated to allow them to be exposed to Python.
- There are no C-style pointers in Python, and no "new" or "delete" operators. This could be a problem, but can be dealt with mostly by using Boost shared_ptr's. If a C++ function takes pointer input/output parameters, usually a reference to the object is passed in the Python bindings. In other words, you should be able to get and set the current value of an object through its methods. In some cases odd side effects may occur if you pass temporary objects (e.g., function_call(Constructor_call())), so it's advisable to create variables with the appropriate scope.

Differences between the C++ and Python API's

- An STL vector of int's is of type vectorInt in Python (analogously for other types).
- The C++ class State has been renamed AbstractState, while the C++ class ScopedState<> is called State in Python.
- The C++ class ScopedState<RealVectorStateSpace> is called RealVectorState. The ScopedState's for the other pre-defined state spaces have been renamed analogously.
- The C++ class RealVectorStateSpace::StateType has been renamed to RealVectorStateInternal (analogously for the other state space types), to emphasize that an end user should really be using RealVectorState.
- To get a reference to a C++ State stored inside a ScopedState from Python, use the "()" operator:
- The print method (for classes that have one) is mapped to the special Python method __str__, so a C++ call like foo.print(std::cout) becomes print(foo) in Python. Similarly, a C++ call like foo.printSettings(std::cout) becomes print(foo.settings()) in Python.

Many of the Python demo and test programs are direct ports of the corresponding C++ programs. If you compare these programs, the sometimes subtle differences will become more obvious. In the Python programs you will notice that we can create Python classes that derive from C++ classes and pass instances of such classes to C++ functions. Similarly, we can create Python functions (such as state validity checkers or propagate functions) that can be called by C++ code.

A simple example

Below is a simple annotated example. It is available in ompl/py-bindings/demos/RigidBodyPlanning.py.

Creating boost::function objects from Python functions

OMPL relies heavily on boost::function objects for callback functions. To specify a Python function as a callback function, that function needs to be cast to the right function type. The simple example above already showed how to do this for a state validity checker function. If you need to pass extra arguments to a function, you can use the Python partial function like so:

(Re)generating the Python bindings

Attention: See also the tutorial Creating Python bindings for a new planner.

The Python bindings are subdivided into modules, to reflect the main namespaces: ompl::base, ompl::control, and ompl::geometric. The code in the ompl/src/ompl/util directory is available in a submodule as well. Whenever you change the API to OMPL, you will need to update the Python bindings. Updating the bindings is a two-step process. First, the code for the modules needs to be generated. Second, the code needs to be compiled into binary Python modules.
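The snippet that originally followed "like so:" above is missing from this copy. As a generic stand-in (not OMPL's actual code — the checker below is a toy function invented for illustration), here is how functools.partial binds extra arguments so that a multi-argument function fits a single-argument callback slot:

```python
from functools import partial

def is_state_valid(spacing, state):
    """Toy validity checker: reject states within 0.1 of a grid line."""
    return abs(state % spacing) > 0.1

# Bind the extra 'spacing' argument up front; the resulting callable takes
# only 'state', matching the one-argument callback shape an OMPL-style
# setStateValidityChecker would expect.
checker = partial(is_state_valid, 2.0)

print(checker(1.0))   # True: 1.0 is far from a multiple of 2.0
print(checker(2.05))  # False: 2.05 is within 0.1 of 2.0
```

The same pattern works for any OMPL callback that must be cast to a fixed function type: all configuration arguments are frozen by partial, and only the arguments the C++ side supplies remain free.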
Code generation

The code for the Python bindings can be generated by typing "make update_bindings". This creates one header file per module, formed by concatenating all relevant header files for that module. This header file is then parsed by Py++ and the appropriate C++ code is generated. This code uses Boost.Python. Py++ is smart enough to create wrapper classes when necessary, register Python <-> C++ type conversions, and so on. If you only need to update the bindings for one module (say base), you can type "make update_base_bindings". Any diagnostic information from Py++ is stored in a log file in the build directory for each module (pyplusplus_base.log for the module base, and likewise for other modules).

You can remove all generated code by typing "make clean_bindings". This is sometimes necessary if Py++ gets confused about which code needs to be regenerated. If this happens, all the generated code might still compile, but you can get odd crashes or errors at runtime.

For each module the relevant header files are listed in ompl/py-bindings/headers_<modulename>.txt. The order in which the header files are listed is important. A header file should not be included by another header file listed above it. If you have created new header files, you should add the names of these files to the appropriate headers_<modulename>.txt file.

Compiling the Python modules

To compile the Python modules type "make py_ompl" (or simply "make"). If you only want to compile one Python module (say base), type "make py_ompl_base". The modules will appear as libraries in the lib subdirectory in the build directory, but they are also copied to ompl/py-bindings/ompl/<modulename>/_<modulename>.so.

Forcing CMake to do The Right Thing

Every attempt has been made to have CMake correctly identify dependencies and only compile code when necessary.
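The header-ordering rule for headers_<modulename>.txt described above can be sketched in a few lines. This is an illustrative script written for this article, not OMPL's actual build code; file names here are made up for the demo:

```python
import os
import tempfile

def concatenate_headers(listing_path, root):
    """Concatenate header files in exactly the order the listing file gives them."""
    chunks = []
    with open(listing_path) as listing:
        for line in listing:
            name = line.strip()
            if not name:
                continue
            with open(os.path.join(root, name)) as header:
                chunks.append(header.read())
    return '\n'.join(chunks)

# Tiny self-contained demo with fake headers:
root = tempfile.mkdtemp()
with open(os.path.join(root, 'a.h'), 'w') as f:
    f.write('struct A {};')
with open(os.path.join(root, 'b.h'), 'w') as f:
    f.write('struct B { A a; };')  # depends on a.h, so a.h must come first
with open(os.path.join(root, 'headers.txt'), 'w') as f:
    f.write('a.h\nb.h\n')

combined = concatenate_headers(os.path.join(root, 'headers.txt'), root)
```

Because the concatenated file is what Py++ parses, a header that uses a type defined in a later header would fail to parse — which is exactly why the listing order matters.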
If you want to force CMake to regenerate the bindings from scratch, you can type "make clean_bindings", followed by "make update_bindings" again. If, on the other hand, you want to (temporarily) disable the compilation of Python bindings, type:

cmake -D OMPL_BUILD_PYBINDINGS:BOOL=OFF .

in your build directory. You can re-enable them by running this command again, but with OFF changed to ON. Changing these settings can also be done through the CMake GUI.
http://ompl.kavrakilab.org/core/python.html
CC-MAIN-2015-48
refinedweb
1,330
54.12
Introduction: In a previous article I explained Facebook login support for Windows Phone 8.0. This article will show you how to easily integrate Facebook into your Windows Phone Store 8.1 application. This topic contains the following sections:

- Installation of the Facebook SDK
- Linking the app with Facebook
- Working with the Facebook login page
- Posting a status message on the Facebook wall
- Logging out from the Facebook page

Facebook users increasingly rely on their Facebook identity to access apps, play games with friends, share playlists or comment in a forum. As a developer, you may also rely on Facebook Login to tap into the Facebook social graph to enhance your app's experience, enable new scenarios and open up the app to new customers, resulting in better revenue opportunities.

Description: First of all, open Microsoft Visual Studio Express 2013 for Windows and then create a new project of type Blank App (e.g. FaceBookWp8.1).

1.1 Installation of the Facebook SDK: Install the Facebook NuGet package into the solution by starting the Package Manager PowerShell via Tools -> Library Package Manager -> Package Manager Console. Once the PowerShell command prompt is running, type the following command: Install-Package Facebook. This will add the Facebook SDK to the current project, as shown below.

1.2 Linking the app with Facebook: First of all, you need to create a Facebook application on the Facebook developer website. Here is the link to do so. Click the "Add New App" button, enter the Display Name and namespace (optional), and then click 'Create App ID'. Now go to Settings and click "Add Platform". Note the App ID shown above, and be sure to select Windows App as the platform, because in this sample we are connecting a Windows Phone 8.1 Store app. And now the most important step: we need to fill in the Windows Store ID.
There are a couple of ways to get the value to be put in that field. The one used in this sample is the WebAuthenticationBroker class, like this:

Uri _callbackUri = WebAuthenticationBroker.GetCurrentApplicationCallbackUri();

In my case the above statement returns the following URI; you can check this by downloading the sample and looking at the 'FaceBookHelper.cs' class in the 'Helpers' folder of the project.

We copy that URI, and on the Facebook app page ('facebookwptest') we add a new platform and select Windows App, at which point two fields appear. Since we are creating a Windows Phone 8.1 Store app, we place the URI in the Windows Store ID field. (If this were a Windows Phone 8.1 Silverlight app, we would use the other field and a different URI.) In that field you must place the GUID, copying everything from "s-" onward without including the trailing "/", ending up with something like:

Note: As we are creating a Windows Phone Store app, ignore the field talking about Windows Phone and look only at the Windows Store ID.

1.3 Working with the Facebook login page: Before logging in to any social network, note that OAuth is the common authentication method nowadays for apps and websites; in this article I use WebAuthenticationBroker. But we are able to get it working, no worries. First, we need the so-called ContinuationManager. This class brings the user back to the app where the fun began. So create a folder named 'Helpers' and add the following class:

namespace FaceBookWp8._1.Helpers
{
    ...
}

Here the only thing you need to do is to add your app's namespace to it (in my case the namespace is FaceBookWp8._1.Helpers). The next step is to make some modifications to the App.xaml.cs file:

...

Add a CreateRootFrame() method with some little changes to the default behavior.
private void CreateRootFrame()
{
    Frame rootFrame = Window.Current.Content as Frame;

    if (rootFrame == null)
    {
        rootFrame = new Frame();
        SuspensionManager.RegisterFrame(rootFrame, "AppFrame");
        Window.Current.Content = rootFrame;
    }
}

In this sample I added the SuspensionManager class in the 'Helpers' folder. The ContinuationManager handles returning from the authentication flow:

ContinuationManager _continuator = new ContinuationManager();

The suspension system fires the OnSuspending event found in App.xaml.cs, which at the moment looks like this (no need to change anything):

private async void OnSuspending(object sender, SuspendingEventArgs e)
{
    var deferral = e.SuspendingOperation.GetDeferral();
    await SuspensionManager.SaveAsync();
    deferral.Complete();
}

Add the following class in the 'Helpers' folder; it is used to log in to Facebook with the help of WebAuthenticationBroker:

namespace FaceBookWp8._1.Helpers
{
    public class FaceBookHelper
    {
        FacebookClient _fb = new FacebookClient();
        readonly Uri _callbackUri = WebAuthenticationBroker.GetCurrentApplicationCallbackUri();
        readonly Uri _loginUrl;

        private const string FacebookAppId = "xxxxxxxxxxxxxxx"; // Enter your Facebook App ID here
        private const string FacebookPermissions = "user_about_me,read_stream,publish_stream";

        public string AccessToken
        {
            get { return _fb.AccessToken; }
        }

        public FaceBookHelper()
        {
            _loginUrl = _fb.GetLoginUrl(new
            {
                client_id = FacebookAppId,
                redirect_uri = _callbackUri.AbsoluteUri,
                scope = FacebookPermissions,
                display = "popup",
                response_type = "token"
            });
            Debug.WriteLine(_callbackUri); // This is useful for filling in the Windows Store ID on the Facebook website
        }

        private void ValidateAndProccessResult(WebAuthenticationResult result)
        {
            if (result.ResponseStatus == WebAuthenticationStatus.Success)
            {
                var responseUri = new Uri(result.ResponseData.ToString());
                var facebookOAuthResult = _fb.ParseOAuthCallbackUrl(responseUri);

                if (string.IsNullOrWhiteSpace(facebookOAuthResult.Error))
                    _fb.AccessToken = facebookOAuthResult.AccessToken;
                else
                {
                    // Access denied: the user cancelled on the login page
                }
            }
            else if (result.ResponseStatus == WebAuthenticationStatus.ErrorHttp)
            {
                // HTTP error
            }
            else
            {
                _fb.AccessToken = null; // Keep null when the user signs out from Facebook
            }
        }

        public void LoginAndContinue()
        {
            WebAuthenticationBroker.AuthenticateAndContinue(_loginUrl);
        }

        public void ContinueAuthentication(WebAuthenticationBrokerContinuationEventArgs args)
        {
            ValidateAndProccessResult(args.WebAuthenticationResult);
        }
    }
}

Note: Please enter your Facebook App ID in the above code, otherwise you will get an error like the one below.

Now our project hierarchy will look like this. We are almost done; let's build the following UI in the MainPage.xaml page to use the helpers above:

<StackPanel>
    <!--Title-->
    <TextBlock Text="FaceBook Integration in WP8.1:" FontSize="28" Foreground="Gray"/>
    <!--Buttons for Login & Logout-->
    <Button Name="BtnLogin" Content="FaceBook Login" HorizontalAlignment="Stretch" Background="#FF00A9CF" Click="BtnFaceBookLogin_Click"/>
    <Button Visibility="Collapsed" Name="BtnLogout" Content="FaceBook Logout" HorizontalAlignment="Stretch" Background="#FF00A9CF" Click="BtnFaceBookLogout_Click"/>

    <StackPanel Visibility="Collapsed" Name="StckPnlProfile_Layout">
        <!--Display facebook profile info-->
        <TextBlock Text="User Profile :" FontSize="30" TextWrapping="Wrap" Foreground="White"/>
        <Image Stretch="None" x:Name="picProfile"/>
        <TextBlock FontSize="20" Name="TxtUserProfile" TextWrapping="Wrap" Foreground="White"/>
        <!--Post wall-->
        <TextBox Name="TxtStatusMsg" MinHeight="150" TextWrapping="Wrap" Header="Status Message:" FontSize="18" Foreground="Black"/>
        <Button Content="Post Status on FaceBook" HorizontalAlignment="Stretch" Background="#FF00A9CF" Click="BtnFaceBookPost_Click"/>
    </StackPanel>

</StackPanel>

Here in the above XAML code there are four sections: 1) For
displaying the sample title. 2) Buttons for Login and Logout. 3) UI for displaying user profile info after successfully logging on to Facebook. 4) Posting a message to the wall.

In the MainPage.cs file, create the following two global objects for the 'FaceBookHelper.cs' class and FacebookClient:

FaceBookHelper ObjFBHelper = new FaceBookHelper();
FacebookClient fbclient = new FacebookClient();

OK, let's write code for Facebook login in the BtnFaceBookLogin_Click event:

private void BtnFaceBookLogin_Click(object sender, RoutedEventArgs e)
{
    ObjFBHelper.LoginAndContinue();
}

When clicking the Login button in the UI, WebAuthenticationBroker gets the Facebook login URL from the FaceBookHelper constructor, and a screen like the one below appears. The entered Facebook username and password are processed for authentication, and you are then asked for your permissions. Press OK to successfully log on to the Facebook page.

...

To get the user profile image, add the following method:

private void GetUserProfilePicture(string UserID)
{
    string profilePictureUrl = string.Format("{0}/picture?type={1}&access_token={2}", UserID, "square", ObjFBHelper.AccessToken);
    picProfile.Source = new BitmapImage(new Uri(profilePictureUrl));
}

1.4 Posting a status message on the Facebook wall: When the Post Status button is clicked, add the following code in the MainPage.cs file:

private async void BtnFaceBookPost_Click(object sender, RoutedEventArgs e)
{
    var postParams = new
    {
        name = "Facebook Post Testing from App.",
        caption = "WindowsPhone 8.1 FaceBook Integration.",
        link = "",
        description = TxtStatusMsg.Text,
        picture = ""
    };
    try
    {
        dynamic fbPostTaskResult = await fbclient.PostTaskAsync("/me/feed", postParams);
        var responseresult = (IDictionary<string, object>)fbPostTaskResult;
        MessageDialog SuccessMsg = new MessageDialog("Message posted successfully on Facebook wall");
        await SuccessMsg.ShowAsync();
    }
    catch (Exception ex)
    {
        //MessageDialog ErrMsg = new MessageDialog("Error occurred!");
    }
}

After posting
status,we will be found status message on your facebook timeline like below, 1.5 Logout from Facebook Page: When click on Logout button,add following code in MainPage.cs file, - private async void BtnFaceBookLogout_Click(object sender, RoutedEventArgs e) - { - _logoutUrl = fbclient.GetLogoutUrl(new - { - access_token = ObjFBHelper.AccessToken - }); - WebAuthenticationBroker.AuthenticateAndContinue(_logoutUrl); - BtnLogin.Visibility = Visibility.Visible; - BtnLogout.Visibility = Visibility.Collapsed; - } Summary: From this article we have learned "Facebook Integration. Summary:From this article we have learned "Facebook Integration in windowsphone 8.1 application".Hope i wrote this post with my best level,When writing this article, i had taken lot of hard works and making nice presentation to understand the article at beginners level. FeedBack Note: Follow me always at @Subramanyam_B Have a nice day by Subramanyam Raju :) Follow me always at @Subramanyam_BHave a nice day by Subramanyam Raju :) Nice Tutorial about Facebook Integration in windows phone 8.1.... Its very useful for us.. Thanks Subbu This was really helpful! :) I've been struggling to integrate windows login on WP 8.1! And right now my page is stuck at "login success". Can you help me with that? Check once following things: 1)Your facebook app is must be approved. 2)Make sure to fill Windows Store ID from Step 1.2 3)Make sure to enter your correct APP ID in FaceBookHelper.cs file I did all that. When I put anything other than "" in the redirect uri, I get invalid url. What can I do to move back to my app? Hey! I didn't do that Windows store step. Thats where I was going wrong! Thank you so much ! :) This helped me a lot!!! Hey! I found something else. When I login after authorising the app again it says "You have authorized the app already". Is there any way for a already authorized user to not see that page? Thanks for your notice and you are right! 
and i need to work on it..You may also share with me,if you are solved that bug.So that it will be useful for future visitors :) I'm stuck in the same issue. Can you please share the solution for this? Thanks. This might help - Have you been able to solve this issue? I still get the "You have authorized the app already" message. Hey how did u get that windows store id ..? Please help me out Follow step 1.2 Hi I have an issue with login. After login I get a message "Success" and also a warning message "SECURITY WARNING: Please treat the URL above as you would your password and do not share it with anyone." I don't receive the access token too. Can you suggest how to solve this issue. Hi, Thank you for the great tutorial, it's been an amazing help. I have one question, when logging out of a Facebook account using the demo, a generic Facebook "Success" page appears. For a production app it would be necessary to have a cleaner log-out, is it possible to avoid that "Success" page? Thank you Hi, I am doing the windows phone 8.1 silver light app . For this I integrated the facebook . For log in process I used the log in with app method , For testing i created one sample in vs and at the developers.facebook.com , and I have placed the product id from the wmmanifest.xml which is present in place holder , and I am able to login with the sample . Now for live app I got my app id from the store , can you please tell me where to replace this app id , I am confused where to replace this app id . I tried to put this in dev.fb.com and wm.manifest.xml and in extensions tag , by doing this I am getting error message from facebook saying that calling app id not matched calling app id . Any help.. Thanks... You should follow my previous articleFacebook login support with windowsphone 8 silverlight apps. .And in this post i mentioned in step 2. 
yes I followed that manual only but After updating the product id present in the placeholder I am getting error message saying that wmmanifest.xml signature is invalid try with new signature.. How to over come this... there is no need to change the product id in the wmmanifest,xml with the store app id am i correct See in previous article in step 3 Check this two things 1)updating the store product id in the facebook website placeholder 2)Change app id in manifest file , under Extensions If you miss any of above steps,you will be fail I have replaced the store app id in those two places told by you , after executing it I am getting error messgae from facebook saying that calling appid does not match the calling app id. Is it not possible to test the app without publishing? Be aware that the product ID used during development is a randomly assigned placeholder. After your app has been certified, it has a different product ID.In this case, submit your app first to get a product ID assigned, but select the Manual publish option to ensure that you don’t release an app that is not set up correctly on the Facebook Developers site.Once you have your final product ID, make sure you update your WMAppManifest.xml under Extensions yes I remembered those two points , please give me clarity that is it possible to test the app without publishing or not after changing the app id in dev.fb.com and in wmmanifest.xml under extensions. Yes you can test without publishing,but make sure remembering above two steps. does this work after changing the store id in those two places and publishing to the store also . Yes :) thank you so much for your information.. It is very nice talking to you.............. testing in the sense facebook authentication checking... After successfull log in to facebook , this function ContinueWithWebAuthenticationBroker was not getting called. Can you please help on that Same problem here, can you please help? 
Try checking your class interface; do not forget to add "IWebAuthenticationBrokerContinuable". Example:

    public sealed partial class MainPage : Page, IWebAuthenticationBrokerContinuable
    {
        .....

Can we have Google account integration in Windows Phone 8.1?

This is a good tutorial. I am still trying to get the app id, but once I get past that issue I should be able to integrate it into my Windows Phone app.

The login page does not come up. I get this error while debugging: The remote procedure call failed. (Exception from HRESULT: 0x800706BE) at Windows.Security.Authentication.Web.WebAuthenticationBroker.AuthenticateAndContinue(Uri requestUri)

I have replaced the Store app id in the two places you mentioned, but after executing it I get an error message from Facebook saying that the calling app id does not match.

Thanks very much for this great article; this is the stuff that keeps me going through these days.

Hi, thanks for the guide! But I'm stuck on logging in with Facebook: it tells me that the given URL is not permitted by the application configuration, etc. I've set my AppID in the FacebookHelper class... Can you help me? Thanks!!!

Be sure to set the correct Store Id on the Facebook page. In your code, run WebAuthenticationBroker.GetCurrentApplicationCallbackUri() and see what the value is, because you probably have something different in the app than what is registered on Facebook.

Hi, this tutorial works for me, but I have 2 questions:
1. I want to automate the login. I don't want to show the login button, but instead just show the login page.
When I do it in the Page_Loaded event (calling WebAuthenticationBroker.AuthenticateAndContinue(LoginUrl);) I get the following error: The remote procedure call failed. (Exception from HRESULT: 0x800706BE). It works fine when I press the button.
2. This works great when I run it for the first time, but later when I log in it says that I have already accepted this app. Should I have got the AccessToken in some other fashion then?

1. What is the use of the SuspensionManager.cs class?
2. Where is the method in SuspensionManager.cs?

Hi Subbu, I am building an application. My requirement is that I need to retrieve the list of all installed apps in my application. Please help me do this. Is there any API?

Hi Subramanyam Raju, how can I get the friend list, and can you also give me all supported requests and responses to interact with Facebook data? Regards, Manish Makwana

SuspensionManager is a class which is helpful for navigation purposes; it saves and restores the navigation state of the Frame. If you are creating a blank page, we need to add this class to keep the navigation history. For example, suppose I am navigating from page 1 to page 2; pressing the back button will redirect to page 1. Note: please search for the SuspensionManager class, you will get lots of info from MSDN.

How about integrating Twitter for WP8.1? I want to add a status to Twitter through my app for sharing purposes. Is that workable?

I get invalid URL... please help.

Got it... but please tell me how to get the gender of the user.

Hi, I have got a successful response, but when calling the line "_continuator.ContinueWith(args);" we are getting a null response. Could you tell us what mistake we have made?
Hello, how can I integrate the Instagram API in a Windows Phone 8.1 application? Please help me out with that. Thank you.

Did anybody try the logout method above? I get a UserCancel status response in the OnActivation event.

Yes, I have also tried this, but it doesn't work. Can someone help us?

I've actually changed to "winsdkfb", supported by Microsoft, as it seems to be what is used in Windows 10. The problem with logout however still persisted, but it seems that is because of the cookies stored in the application's browser. After a quick search I've found how to delete cookies:

    Windows.Web.Http.Filters.HttpBaseProtocolFilter myFilter = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
    var cookieManager = myFilter.CookieManager;
    HttpCookieCollection myCookieJar = cookieManager.GetCookies(new Uri(""));
    foreach (HttpCookie cookie in myCookieJar)
    {
        cookieManager.DeleteCookie(cookie);
    }

That was a nice tutorial, sir. Can you please tell how to get the user's country and cover photo? Thank you, sir. Can you upload the images we use in this application?

read_stream and publish_stream are deprecated now.

I can't use the "Post status message on Facebook Wall" function. I did write something in the textbox, but when I clicked on the Post status button nothing happened. The other functions, Login and Logout, work; only this function can't be used. Could you please tell me how to make it work? Thank you, sir.

Nice tutorial about Facebook integration in Windows Phone 8.1. It's very useful for us. Thanks Subbu. But I have 2 issues, please help me! First: the layout of the Success page on logout — I want it to return to a login page with empty account name and password textboxes. Second: when I post a status on Facebook, why can't my friend see it?

Do you have any article on LinkedIn integration in Windows Phone 8.1?

Install-Package Facebook is giving me an error: Install-Package : Unable to find package 'Facebook'.

I get this error: "You are using a display type of 'popup' in a large browser window or tab. For a better user experience, show this dialog with our JavaScript SDK without specifying an explicit display type. The SDK will choose the best display type for each environment. Alternatively, set height and width on your window.open() call to properly size this dialog if you have special requirements precluding you from using the SDK. This message is only visible to developers of your application." How can I solve my problem? Please help me.

In the above code, continuator is getting null, so the method below is not called. What actually happens is that whenever I click the login button it goes to the login page and logs in successfully, but the user data is not found, i.e., the above method is not called. I want to get the user data. How can I get it? Please help me.

Did you find why Continuator is null? I have the same problem.

    var continuator = rootFrame.Content as IWebAuthenticationBrokerContinuable;
    if (continuator != null)
    {
        Debug.WriteLine("ContinuationManager::ContinueWith continuator OK > ContinueWithWebAuthenticationBroker");
        continuator.ContinueWithWebAuthenticationBroker(args as WebAuthenticationBrokerContinuationEventArgs);
    }
    else
    {
        Debug.WriteLine("ContinuationManager::ContinueWith continuator NULL");
    }

The continuator value is NULL, so my IWebAuthenticationBrokerContinuable is not called. How can I fix that?

I have the same error of continuator getting a null value. I do not seem to understand why. Please help.

Hi, I had the same error, but I solved it using the CoreApplicationView class which is used in this example. I used the viewActivated event to handle the WebAuthenticationBrokerContinuationEventArgs and it works for me. However, I don't know if this is the ideal solution to handle the suspension state of WinRT apps.

I face the error "An exception of type 'System.NotImplementedException' occurred in AllEvents_fb.exe but was not handled in user code". Help me out... how do I solve this???
Hi, very good tutorial, but I have this error: the name WebAuthenticationBroker does not exist in the current context.

Thanks a lot for sharing this list with us. I was searching for it for a long time.

Hello sir, when I click the Btn_Post button the message is not posted, so please give me a solution.

I always get this error - can't load URL... I don't know how to resolve it, please help me.

It returns a null value. In the 'ContinuationManager' class:

    var continuator = rootFrame.Content as IWebAuthenticationBrokerContinuable;
    if (continuator != null) // 'continuator' variable returns null value, that's why the application terminates

Any solution?

Thanks for sharing. How can we get friends' online status?

I am very much pleased with the contents you have mentioned. I wanted to thank you for this great article. I enjoyed every little bit of it and I will be waiting for the new updates.
http://bsubramanyamraju.blogspot.com/2014/12/windowsphone-store-81-facebook.html
clearerr_unlocked

Check and reset stream status

Description

The clearerr function clears the end-of-file and error indicators for the given stream. The feof function tests the end-of-file indicator for the stream, the ferror function tests the error indicator, and the fileno function examines the stream and returns its integer descriptor.

The clearerr_unlocked, feof_unlocked, ferror_unlocked and fileno_unlocked functions are equivalent to clearerr, feof, ferror and fileno respectively, except that the caller is responsible for locking the stream with flockfile before calling them. These functions may be used to avoid the overhead of locking the stream and to prevent races when multiple threads are operating on the same stream.

Example - Check and reset stream status

The code below attempts to open the text file "fred.txt" in the current directory and read each character in it until the EOF symbol is encountered. The feof function has been used to test for EOF.

    #include <stdio.h>

    int main()
    {
      FILE *in;
      if ((in = fopen("fred.txt", "rt")) != NULL)
      {
        for (char c; !feof(in); fscanf(in, "%c", &c))
          ;  /* read characters until end-of-file */
        fclose(in);
      }
      return 0;
    }
http://www.codecogs.com/library/computing/c/stdio.h/ferror.php?alias=clearerr_unlocked
I know, but my assignment says addTextbook(String id).

    ----jGRASP exec: javac -g Bookstore.java
    Bookstore.java:17: error: variable apparel is already defined in class Bookstore
    private Apparel apparel;
    ^
    Bookstore.java:64: error: no...

    import java.util.List;
    import java.util.ArrayList;
    public class Bookstore {
        //Instance field
        /**
         * @param name
         * @param textbooks
         * @param apparel

    ----jGRASP exec: javac -g Bookstore.java
    Bookstore.java:62: error: <identifier> expected
    public void addTextbook(textbook)
    ^
    1 error
    ----jGRASP wedge2: ...

    public void addTextbook(Textbook textbook) {
        textbooks.add(textbook);
    }

I know how, but I don't know how to write it myself.

Yes, I do.

How do I write the add method and a function for that? My assignment is due tonight. addTextbook(Textbook) adds a new Textbook to the inventory of the bookstore. buyTextbook(String id) removes a Textbook object from the list of textbooks of the Bookstore. If the id does not match...
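Based only on the assignment description above (addTextbook adds a Textbook, buyTextbook removes one by id), a rough sketch could look like this. Note the Textbook class with a String id and the getId accessor are assumptions for illustration, since the real class isn't shown in the thread:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Textbook with an id field, as the assignment implies.
class Textbook {
    private final String id;
    Textbook(String id) { this.id = id; }
    String getId() { return id; }
}

public class Bookstore {
    private final List<Textbook> textbooks = new ArrayList<>();

    // Adds a new Textbook to the inventory of the bookstore.
    public void addTextbook(Textbook textbook) {
        textbooks.add(textbook);
    }

    // Removes the first Textbook whose id matches; returns it, or null if no match.
    public Textbook buyTextbook(String id) {
        for (Textbook t : textbooks) {
            if (t.getId().equals(id)) {
                textbooks.remove(t);
                return t;
            }
        }
        return null;
    }

    public int inventorySize() { return textbooks.size(); }

    public static void main(String[] args) {
        Bookstore store = new Bookstore();
        store.addTextbook(new Textbook("CS101"));
        store.addTextbook(new Textbook("MA201"));
        store.buyTextbook("CS101");
        System.out.println(store.inventorySize());
    }
}
```

The "variable apparel is already defined" compile error in the output above typically means the same field was declared twice in the class, so only one `private Apparel apparel;` declaration should remain.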
import java.util.Scanner; /* * Stacktrace.in * You have a exception , we have a solution */ public class DrawRectange { public static void main(String[] args) { no I like to learn myself and use your help You can do that? write the code for someone esle? All i need is example of loops controls for (int j= 3; j <= height; j++) { } if (i%width) == 3) System.out.print("#####"); or Ok --- Update --- What should I say in if statement for height? Tell me what I should do? or if (width < 1 || width >10 ?
http://www.javaprogrammingforums.com/search.php?s=a69d8fc409ecc94a4ca0cb0a049aa252&searchid=1361346
Why can't I use foldLeft in the following code?

    def concatList[T](xs: List[T], ys: List[T]): List[T] = (xs foldLeft ys)(_::_)

Well, you can:

    scala> def concatList[T](xs: List[T],ys:List[T]) = (xs foldLeft ys)( (a, b) => b :: a )
    concatList: [T](xs: List[T], ys: List[T])List[T]

    scala> concatList(List(1,2,3), List(6,7,8))
    res0: List[Int] = List(3, 2, 1, 6, 7, 8)

Was that the result you were expecting? I don't think so.

First let's look at the signatures of the folds and :: (only a simplification for illustrative purposes, but it fits perfectly in our case). Given a List[T]:

    def ::(v: T): List[T]                     // This is a right associative method, more below
    def foldLeft[R](r: R)(f: (R, T) => R): R
    def foldRight[R](r: R)(f: (T, R) => R): R

Now, applying one argument list in foldLeft we have xs.foldLeft(ys), and unifying the types from our signatures with the foldLeft sample call: List[T] : List[Int], therefore T : Int, and R : List[Int]. Applied to the foldLeft signature, that gives:

    foldLeft[List[Int]](r: List[Int])( f: (List[Int], Int) => List[Int] )

Now, for the usage of ::, a :: b compiles to b.::(a); Scala often refers to this as a right associative method. This is special syntax sugar for methods ending in : and is quite convenient when defining a list: 1 :: 2 :: Nil is like writing Nil.::(2).::(1).

Continuing our instantiation of foldLeft, the function we need to pass has to look like this: (List[Int], Int) => List[Int]. Consider (a, b) => a :: b: if we unify that with the type of f, we get a : List[Int] and b : Int. Comparing that with the signature of a2 :: b2, we get a2 : Int and b2 : List[Int]. For this to compile, a and a2, and likewise b and b2, must each have the same types. Which they don't! Notice that in my example I inverted the arguments, making a match the type of b2 and b match the type of a2.
I will offer yet another version that compiles:

    def concatList[T](xs: List[T], ys: List[T]) = (xs foldLeft ys)( _.::(_) )

To cut the story short, look at the foldRight signature:

    def foldRight[R](r: R)(f: (T, R) => R): R

The arguments are already inverted, so making f = _ :: _ gives us the right types.

Wow, that was a lot of explanation about type inference and I am short on time, but I still owe an explanation of the difference between the meaning of fold left and fold right. For now have a look at these two images in particular. Notice that the arguments to foldl and foldr are inverted: they first take the function and then the initial argument, r in the signatures; and instead of :: for list construction they use just :. Two very small details.
https://codedump.io/share/qQsH0sftLkr1/1/what-39s-the-difference-between-foldright-and-foldleft-in-concat
I upgraded the moralis module to the latest version and added the authenticate option { provider: "walletconnect" }, and I get the error:

    TypeError: MWalletConnectProvider.default is not a constructor

How do I make it work in React?

Hi, can you please elaborate your question? Please follow these guidelines to post on the forum so that we can assist you better: FAQ - Common Issues & How To Get Help. Please paste your code as much as possible, and also screenshots for reference. Thank you.

Thank you. I think Codesandbox will be OK to check the problem and help fix it…

Did you only try running on the Codesandbox? Codesandbox will not have the providers injected by the plugin wallets. So, when I ran the React project and opened it in my browser, the login functionality worked as expected. Please do let me know, looking forward to your response.

I got a similar error in my local React app. Maybe I did not correctly inject the WalletConnect provider. How do you do it in your project? Can you send a screenshot or code, please?

@malik, can you make a pull request showing how to correctly inject the WalletConnect provider in React for Moralis? Maybe just add one more button - "login with WalletConnect". It would be very appreciated.

I am looking into it as we speak. Will get back to you very soon.

Yes, there was an issue with the WalletConnect provider in the Moralis SDK which caused issues when using it with React. We have fixed this bug and it will be pushed in version 0.0.28. Thank you for supporting and pointing these issues out. Really appreciate it. Have a wonderful day, sir.
Hi @malik, thank you for fixing it. Kindly looking forward to the react-moralis module being upgraded to include the WalletConnect provider in the auth method, at least… Do you have any idea when that can happen?

I just created a pull request for react-moralis to add the WalletConnect provider to authenticate(). Can you check it?

Looking at it right now. Thanks.

moralis@0.0.28 is published, so you can install it directly now. Until this PR is merged, you can try to achieve it this way:

    import React, { useState } from "react";
    import ReactDOM from "react-dom";
    import Moralis from "moralis";

    Moralis.initialize("5yzecM1asSKvvuJeiVr7HrssT3ma4hmLRXIalQZx");
    Moralis.serverURL = "";

    const initialUser = Moralis.User.current();

    const App = () => {
      const [user, setUser] = useState(initialUser);

      const onLoginW = async () => {
        // Change made here
        const user = await Moralis.Web3.authenticate({ provider: "walletconnect" });
        setUser(user);
      };

      const onLoginM = async () => {
        const user = await Moralis.authenticate();
        setUser(user);
      };

      const onLogout = () => {
        Moralis.User.logOut();
        setUser(null);
      };

      if (user) {
        return <button onClick={onLogout}>Logout</button>;
      }

      return (
        <>
          <button onClick={onLoginM}>Login Metamask</button>
          <button onClick={onLoginW}>Login WalletConnect</button>
        </>
      );
    };

    ReactDOM.render(
      <React.StrictMode>
        <App />
      </React.StrictMode>,
      document.getElementById("root")
    );

This should get you moving with the functionality. Thank you.

I got 0.0.28 already. It works properly, thanks. I use the react-moralis user provider in the project, but it doesn't provide any method to set the user in React state; there is only setUserData(), which won't help with this.

Could you elaborate more on what you mean? If it's regarding another issue in react-moralis, you can create another thread, as this thread is titled WalletConnect. As always, please provide all the information necessary for us to properly evaluate what you're looking for. Have a great day.

You can now use WalletConnect with the latest release of react-moralis by providing provider: 'walletconnect' to the authenticate and/or enable function.

Thank you. It worked well :)
https://forum.moralis.io/t/walletconnect-provider/1042
Last post 09-20-2007 11:31 PM by chetan.sarode. 11 replies.

I have never gotten an UpdatePanel to work. Obviously everyone else is getting them to work, but not me.
1) I'm running the latest Atlas Beta. I have uninstalled and reinstalled. It's not that.
2) I started with the Atlas web.config and put my previous web.config customization back into it.
3) I have EnablePartialRendering on the ScriptManager set to True.
4) I have AjaxControlToolkit.dll, Microsoft.Web.Extensions.dll, and Microsoft.Web.Preview.dll all in my bin directory.
Every update panel I've tried still does a standard post-back and whole-page refresh. ANY ideas?

Well, I presume that when you say you're using the latest, you mean Ajax.net beta 2. Have you tried starting an Ajax.net web site from Visual Studio? Can you see the UpdatePanel in the VS toolbox?

I've since gotten basic UpdatePanel examples to work, proving that it's not a configuration problem on my part, but only when they're as trivial and useless as the examples on Microsoft's website. Everything that I've tried with real-world complex code doesn't work consistently, if at all, without generating full-page post-backs. One of the simplest examples I can cite is a user control referenced within an update panel, where the user control dynamically adds asp:Button controls to the page. The click events of those controls always generate full-page post-backs from what I've seen. This is just one of several examples I've run into. I'm not trying to be a pessimist here. I really want Atlas to work, but I've had our own Ajax framework implementation with async panels in our application for a year now and had fewer problems with it than with Atlas so far. I was really enthusiastic after all the hype, but the more I see of it, especially in the beta 2 stage now, the less impressed I am with it.
There are a couple of more advanced examples here:

I know it can be frustrating when you are expecting to see a non-flickering page (indicating an async postback) and then the whole page updates. The documentation could use some more information on using user controls with UpdatePanel controls.

Thanks for the reply and assistance. However, while I have it running in a local PC development environment, I can't seem to get it to work over the web when I place it on the IIS 6.0 x64 web server. I used the config provided, with some modifications for our connections, namespaces, etc., and copied the DLLs into the bin folder. Is there something special I need to do with the server, DLLs, config, or anything? Thanks.

Fysh1, I have been running this on IIS 6.0 x64 and the UpdatePanel scenarios work fine. I would suggest that you download the and monitor the requests that are happening. Can you confirm that your app is working on one box and not on another? Then we can try to compare the settings on both boxes to work out the problem. If you are running Beta 2, it might be that the script resource handler is not registered on your production server, and that is causing the ASP.NET AJAX JavaScripts to not be downloaded to your systems and ultimately causing full post-backs. Hope that helps.

Wow, I finally was able to get it to work. It appears some of the info was missing from the config file. Also, I used the MSI to install on the server. It works, but it looks like I will have to use the UpdateProgress portion since there is a slight delay when refreshing the listboxes that are related to other dropdown boxes. Thanks for all the help.

What web.config items were missing? I'm having the same issue and I've compared the web.config to several versions to make sure everything is in.
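For readers hitting the same full-postback symptom, a minimal page sketch of the pattern being discussed may help. This is an illustrative fragment, not the poster's actual page, and the control IDs are invented: the ScriptManager must have partial rendering enabled, and the controls that should post back asynchronously must live inside the UpdatePanel's ContentTemplate (or be registered as its triggers).

```aspx
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" />

<asp:UpdatePanel ID="UpdatePanel1" runat="server">
    <ContentTemplate>
        <!-- Controls here (including dynamically added ones) post back asynchronously -->
        <asp:Label ID="StatusLabel" runat="server" />
        <asp:Button ID="RefreshButton" runat="server" Text="Refresh" OnClick="RefreshButton_Click" />
    </ContentTemplate>
</asp:UpdatePanel>
```

Note that dynamically created buttons generally must be re-created on every postback and added to a container inside the ContentTemplate; buttons created outside the panel, or not re-created, tend to fall back to full-page postbacks like the ones described above.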
http://forums.asp.net/p/1042858/1455936.aspx
Interview with Simon Ritter on Java 9

Bio: Simon Ritter is the Deputy CTO at Azul and previously was a Java Technology Evangelist at Oracle Corporation. Having moved to Oracle as part of the Sun acquisition, he now focuses on the core Java platform and Java for client applications. He also continues to develop demonstrations that push the boundaries of Java for applications like gestural interfaces.

1. [...]Do you just want to introduce yourself and say what you've been doing recently at Azul?

Alex's full question: So hi. I'm here with Simon Ritter, Deputy CTO of Azul Systems at QCon London 2016. Simon, you are well known within the Java community. Do you just want to introduce yourself and say what you've been doing recently at Azul?

Yes. I joined Azul back in November last year as Deputy CTO and really, I describe that as being the understudy to Gil Tene, who's the real CTO. A lot of what I'm doing is what I was doing before in terms of the Java community, so appearing at conferences like this, presenting on Java and really helping developers to understand what's happening in the Java space, because that's what Azul is doing: we're focusing on the JVM, so we want to help developers understand what's happening in the Java space.

2. And Azul has got a lot of different products in the Java space, hasn't it, from servers to the embedded stuff?

Yes. I mean you can really say that we've only got two products, and both of them are JVMs. So on the one hand, we've got Zing, which is our commercial high performance, low latency JVM, and then we've got the Zulu project, which is the build of the OpenJDK that we're doing as a binary distribution, purely built from OpenJDK, and making that available freely so that people can use it as an alternative to other binary distributions.
Alex: Recently, you announced the embedded Java though, which is I think the first time a non-Sun/Oracle company has offered Java on embedded systems.

Yes, strictly speaking, that was on the ARM 32 architecture. Previous to that, we'd already done a build of the embedded version on Intel architecture, so you could run on the Intel Galileo and do things with that. But the big announcement, as you said, at Embedded World a couple of weeks ago, was that we now have support for ARM 32, and that means you can run it on the Raspberry Pi, and if you want to do all those wonderful things with the Raspberry Pi then you can do that.

3. And the Raspberry Pi 3 has come out: I wonder what the benchmarks of the embedded Java on that look like now?

Yes. I must admit, I had a look last week at buying one but I hadn't actually got around to getting one yet: as soon as we get one, we'll start having a look and see what the benchmarks come out as; but it looks like kind of a good specification for developing embedded type applications, yes.

4. Do you think specifically with the Raspberry Pi 3, moving into the ARMv8 64-bit space, that we'll start to see 64-bit applications on Raspberry Pis, or do you think it's mainly going to stay in the 32-bit world for a while?

I think it's probably going to stay in the 32-bit world for a while, simply because you need to think to yourself – what advantage does 64-bit give you in that space, and what would you need to use that for in terms of specific applications? So I think certainly for the time being most things will be 32-bit, and then as Moore's law and advances carry on, then yes, we'll move into the 64-bit space.

5. [...]What do you see is going to be the big bang feature of the Java 9 platform when it comes out?
Alex's full question: So back to its big brother, Java on the desktop and on the servers. Java has experienced a resurgence recently with new releases coming out – Java 7, Java 8 and obviously further with Java 9 – and you're talking at QCon about Java 9 and some of the changes that are coming in here. What do you see is going to be the big bang feature of the Java 9 platform when it comes out?

I mean the real feature, the key feature, has got to be Project Jigsaw: modularity of the Java platform, the ability to modularize applications, so that's really the key thing. There's a few other nice features that are coming along. There's some changes to streams, a few new APIs there. There's the REPL idea so that people can do quick editing and figure out how things work, but Jigsaw is definitely the big thing that's coming in JDK 9.

6. [...]Do you think Jigsaw's going to come in Java 9? If so, when might we see that merging?

Alex's full question: At the moment, I think that Jigsaw development is on a separate branch from the main line Java 9 development. Do you think Jigsaw's going to come in Java 9? If so, when might we see that merging?

Yes. In fact, what I believe from some of the e-mail I've seen recently is that the merge will happen within a couple of months [of March 2016]. I mean I don't know the exact date, but I think it's probably going to be within a couple of months, which is great because then obviously everything comes back into the main build. [Update: this was recorded at QCon London in March 2016, and since then, Jigsaw has been merged into the main Java 9 codebase.]

One of the nice things for us at Azul is that I've been out there talking about JDK 9, that's wonderful, and I've been talking about Project Jigsaw and that's very nice, but the build that we have of JDK 9 as an early access is from the main branch of the OpenJDK source code, so we don't actually have all of the Jigsaw pieces in it.
As soon as the main tree gets Jigsaw merged into it, then we'll be able to provide an early access release of that, which would be great for people to start experimenting with. [Update: Zulu 9.0.0.4 and above have Jigsaw available for early access testing]

7. [...]What do developers really need to think about in terms of modularity and how much designing in a modular fashion might impact?

Alex's full question: So I've been obviously going on about modularity being important for some time, and you've been pushing the agenda forward with Jigsaw as well. What do developers really need to think about in terms of modularity and how much designing in a modular fashion might impact?

The key thing, I think, is understanding that in terms of accessibility, public – which we are used to in Java meaning accessible anywhere in an application – doesn't necessarily mean that anymore. So with a module, you can expose only the classes and packages that you want to expose from that module. So even though you've got a public class with public methods, if you don't expose that as available to other modules that have a dependency on it, then they won't be able to see it. So that means that public won't mean quite the same thing that people are used to in the way that they've been developing code up until now. And so that's something that people are going to have to get used to. I think that's the biggest thing from a language and programming perspective: making that slight adjustment in terms of the understanding of accessibility.

8. How is that separation between modules enforced in Jigsaw?

Through the module system – so basically, when you have a dependency on a module, the JVM will look at that module. It will see what is made public and accessible from that module, and then only those things will be accessible to the other module that's got a dependency on it. So it's enforced by the JVM.
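To make the exports/requires relationship Simon describes concrete, here is a hedged sketch of what the two module descriptors could look like. The module and package names are invented for illustration; the `module`, `exports` and `requires` keywords are the standard Jigsaw syntax.

```java
// module-info.java for a library module: only com.example.api is exported,
// so public classes in com.example.internal stay invisible to other modules.
module com.example.lib {
    exports com.example.api;
}

// module-info.java for a consuming module: it can see com.example.api,
// but com.example.internal is off limits even at runtime.
module com.example.app {
    requires com.example.lib;
}
```

Even a `public` class in `com.example.internal` is inaccessible from `com.example.app` here, which is exactly the change in the meaning of public discussed above.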
Alex: Is it enforced though at runtime or just at compile time – because you can imagine the compile time check would say you can't compile against it, but actually under the covers, through reflection, you can get to it.

No. It's definitely at runtime. It has to be at runtime as well, otherwise, as you say, you could kind of bypass some of these things.

9. Do you think developers are going to adopt modules, or do you think it's going to be something where just the Java language shows the way and we'll see people adopting modules in Java 9, Java 10 and onwards?

No. I think that people will gradually start to adopt modules. I think it makes a lot of sense, and with the ability for people to have this finer-grained control over the way the application works and the design of the application, I think we'll see people gradually adopt it, yes.

Alex: How do people work with it if they don't want to adopt modules at the moment – because there are sort of generic modules that kind of normal application code will fit into?

Oh, yes. I mean if you want to continue using your code without the module system, then everything will appear very much the same way, in that you can have your JAR files and you don't need to recompile those. You can just use them as JAR files and they can appear as if they were modules. So you don't need to convert them to modules in order to use them with JDK 9, so there's a kind of smooth migration path through that.

10. [...]How do some of the Java APIs change? Are those kinds of things going to be hidden in Java 9 or removed in the future?

Alex's full question: How do some of the Java APIs change – I'm thinking like Unsafe obviously being the classical one that everyone raises, but also something like the XML Apache parsers that we used to get hold of for Base64 decoding. Are those kinds of things going to be hidden in Java 9 or removed in the future?
So the way that's being organized is that there was originally an idea to encapsulate all the private APIs, and then things like [sun.]misc.Unsafe would go away completely in JDK 9. That didn't prove to be very popular publicly because a lot of people said, well, we've used these APIs and there's no alternative in terms of the implementation, so what are we supposed to do? So Oracle decided that, okay, we won't encapsulate those APIs straight away. What they would do is that certain APIs where there are alternatives will be encapsulated, but things like sun.misc.Unsafe will be left available so people can still use it. The idea is that in JDK 10 and potentially later, that will be migrated to a public API so that people have the same functionality available but through a public API rather than a private API: at which point either sun.misc.Unsafe will be completely removed or it will be encapsulated so you won't be able to see it.

Alex: And some of the things that are coming in Java 9, like the VarHandles, are really incremental replacements for part of the sun.misc.Unsafe functionality.

That's right, yes. It's the idea of trying to gradually provide the functionality in a way that allows people to move forward in a controlled way without breaking things at any particular point.

11. Your talk today is on Jigsaw and Java 9. Are you going to be educating developers on how to create modules or are you merely sketching the landmarks and the direction in which people need to go?

Yes. It's more of a sketch because there are a lot of details to go into, and so with 50 minutes, what I'm trying to do is keep it at a fairly high level. I'm talking about the changes to the APIs. So the encapsulation around things like sun.misc.Unsafe. I'm talking about what the basics of Jigsaw are in terms of how the module system is designed, dependencies, and exposing APIs and so on.
And then I'll talk a little bit about some of the things that people would need to be aware of when they're developing code or migrating code to Jigsaw, but with only 50 minutes, it's just not enough time to go into a lot of deep detail.

12. [...]I think you have OpenJDK builds going back to JDK 6?

Alex's full question: And coming back to the Zulu project. Zulu's the Azul supported version of OpenJDK; I think you have OpenJDK builds going back to JDK 6?

That's correct.

Alex: And presumably any security vulnerabilities and so on you back-port on there.

That's right.

13. [...]When would someone choose one versus the other?

Alex's full question: How would you compare the Zulu offerings with the Java that you can download from Oracle, for example. When would someone choose one versus the other?

Really, the idea is that we're just building from OpenJDK, so from a platform perspective, if you're looking just for the JDK functionality, everything is there. There are some things like the [Applet] Plug-In and Java Web Start which we don't support, and that is actually from my point of view a good thing because of the security issues, so we don't support those things. But other than that, the platform's identical. If you've got a Java application, it will run without any changes on top of Zulu, so what we're really trying to do is give people an alternative to other JVMs. So if they want to run Java 6 with the back-ported bug fixes in it, they can do that. If they want to run Java 7 with bug fixes back-ported, they can do that. It's about giving people a choice.

14. [...]How do those garbage collectors compare with the commercial offerings that Azul have?

Alex's full question: I'm presuming these versions of Zulu come with the standard HotSpot garbage collectors, the G1 collectors and so on, and ultimately I guess the new one that's coming out in Java 9. How do those garbage collectors compare with the commercial offerings that Azul have?

Yes.
So obviously they are, as you said, the HotSpot versions. So it's either the Concurrent Mark Sweep or it's the G1 collector; all of the HotSpot ones we have in the Zulu builds. In terms of the comparison, what we target with Zing is low latency, so it's the idea of a pause-less garbage collector: no matter how big the heap is, then you're looking at very low pause times because of the way it works. So you would have exactly the same type of footprint from a garbage collection perspective as you would have with HotSpot because it's the same code. Zing is much more targeted at low latency applications where you need that guarantee of not having long pause times.

Alex: Simon Ritter, thank you very much.
https://www.infoq.com/interviews/simon-ritter-java9
CC-MAIN-2018-09
refinedweb
2,693
64.95
context-free-art: Generate art from context-free grammars

How to use

import Art.ContextFree.Definite
import Data.List.NonEmpty

move = Mod [Move (0, -1.8), Scale 0.8]

armN :: Int -> Symbol
armN 0 = move $ Circle 1
armN n = move $ Branch $ Circle 1 :| [Mod [Rotate 10] $ armN (n - 1)]

arm :: Symbol
arm = armN 20

spiral = Branch $ Circle 1 :| [arm, Mod [Rotate 120] arm, Mod [Rotate 240] arm]

The latter produces this graphic:

## Examples

The code for these can be found in the examples/ folder

Properties

Modules

- Art

Downloads

- context-free-art-0.3.0.1.tar.gz [browse] (Cabal source package)
- Package description (as included in the package)

Maintainer's Corner

For package maintainers and hackage trustees
http://hackage.haskell.org/package/context-free-art-0.3.0.1/candidate
Use After Free Exploits for Humans Part 1 – Exploiting MS13-080 on IE8 winxpsp3

A use after free bug is when an application uses memory (usually on the heap) after it has been freed. In various scenarios, attackers can influence the values in that memory, and code at a later point will use it with a broken reference. This is an introductory post to use after free – walking through an exploit. Although there are a million posts about the class of bug, not many are hands on (and this one is). I've been spending some free time over the past month looking into use after free type bugs. I'm a noob in this space so please call out/forgive mistakes.

Setup

Install a windows xpsp3 VM without updates. I got the vulnerable version of IE from this totally legit looking site. When installing, make sure you disconnect the internet connection so it doesn't update, which it will do otherwise.

We begin with this, which should give us a crash that looks something like the following:

(31c.8f0): Access violation - code c0000005 (!!! second chance !!!)
eax=028601c5 ebx=001e8520 ecx=02ff0801 edx=020be80c esi=636397e4 edi=63662c78
eip=63662dca esp=020be7ec
mshtml!CTreeNode::GetInterface+0xb6:
63662dca ff11            call    dword ptr [ecx]  ds:0023:02ff0801=????????

0:008> ub
mshtml!CTreeNode::GetInterface+0xa2:
63662db6 f3a7            repe cmps dword ptr [esi],dword ptr es:[edi]
63662db8 0f84c9f31800    je      mshtml!CTreeNode::GetInterface+0x1e9 (637f2187)
63662dbe 8b03            mov     eax,dword ptr [ebx]

0:008> k
ChildEBP RetAddr
020be810 63662d3a mshtml!CTreeNode::GetInterface+0xb6
020be828 635f3fe4 mshtml!CTreeNode::NodeAddRef+0x24
020be8ac 637fd38f mshtml!CDoc::PumpMessage+0x4a3
020be968 638b9650 mshtml!CDoc::SetMouseCapture+0xe6
020be990 638e5dab mshtml!CElement::setCapture+0x50
...

Based on the crash, this is most likely a use after free where ecx could be a pointer to a table of function pointers (although for me at this point it is difficult to tell the difference between this and a null ptr dereference).
Investigation

Let's take a second to analyze. The first step is to turn on pageheap and user mode stack tracing for iexplore.exe using gflags. pageheap will monitor all heap memory operations, allowing us to see immediately when our application is trying to access the freed memory (it will crash sooner – a good writeup on what's going on is here) and also see some additional info, such as the sizes of the allocations and stack traces involved.

Running the same file, you will get a different crash now.

(40c.a04): Access violation - code c0000005 (!!! second chance !!!)
eax=00000000 ebx=39fae8d0 ecx=335006a8 edx=39fae608 esi=00000000 edi=0d5b6fb0
eip=635f4478 esp=39fae820 ebp=39fae828 iopl=0         nv up ei pl nz na po nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000202
mshtml!CDoc::HasContainerCapture+0x14:
635f4478 8b0f            mov     ecx,dword ptr [edi]  ds:0023:0d5b6fb0=????????

0:008> k
ChildEBP RetAddr
037ce828 635f5887 mshtml!CDoc::HasContainerCapture+0x14
037ce8ac 637fd38f mshtml!CDoc::PumpMessage+0x3e2
037ce968 638b9650 mshtml!CDoc::SetMouseCapture+0xe6
037ce990 638e5dab mshtml!CElement::setCapture+0x50

We are crashing earlier, at the setCapture call from our Javascript, which is trying to reference memory that was freed in the earlier document.write. We can verify this with windbg. Using the info stored by pageheap, we can find the size of the chunk:

0:023> .printf "0x%x", 1000 - edi & 0xFFF
0x50

Because we want to replace a piece of memory 0x50 big, we can modify our script to add the following.
<html>
<script>
function trigger() {
    var id_0 = document.createElement("sup");
    var id_1 = document.createElement("audio");
    document.body.appendChild(id_0);
    document.body.appendChild(id_1);
    id_1.applyElement(id_0);
    id_0.onlosecapture=function(e) {
        document.write("");
        tt = new Array(20);
        for(i = 0; i < tt.length; i++) {
            tt[i] = document.createElement('div');
            tt[i].className = "";
        }
    }
    id_0['outerText']="";
    id_0.setCapture();
    id_1.setCapture();
}
window.onload = function() {
    trigger();
}
</script>
</html>

We will now get a crash that looks like this:

(be0.9e4): Access violation - code c0000005 (!!! second chance !!!)
eax=24242424 ebx=001e83e8 ecx=00000003 edx=00000000 esi=636397e4 edi=63662c78
eip=63662dc0 esp=020be7f8
mshtml!CTreeNode::GetInterface+0xac:
63662dc0 8b08            mov     ecx,dword ptr [eax]  ds:0023:24242424=????????

mshtml!CTreeNode::GetInterface+0xac:
63662dca ff11            call    dword ptr [ecx]
63662dcc 8bf0            mov     esi,eax
63662dce 85f6            test    esi,esi

This is just a few instructions before our call [ecx]. So to get EIP, we need to point eax to a valid value that points to where we want to start executing. We should be able to do this with a heap spray (maybe not ideal, but easy), then a stack pivot to this address where we can execute our ROP. Because we are modifying a metasploit payload, let's just do everything the metasploit way, which I'll cover in the next section.

Metasploit browser Detection, Heap Spray, and ROP

Because we are modifying a metasploit module, let's just use all their builtin stuff and do this the metasploit way. First thing we need to do is detect the browser, which is described here. In the msf module, there were existing parameters to detect windows 7 IE9, so we have to configure it to also accept windows XP with IE8.
'Targets' =>
  [
    [ 'Automatic', {} ],
    [ 'Windows 7 with Office 2007|2010',
      {
        :os_name   => /win/i,
        :ua_name   => HttpClients::IE,
        :ua_ver    => "9.0",
        :os_flavor => "7",
        :office    => /2007|2010/
      }
    ],
    [ 'Windows XP with IE 8',
      {
        :os_name => "Windows XP",
        :ua_name => HttpClients::IE,
        :ua_ver  => "8.0"
      }
    ]
  ],

Heap spraying is the process of throwing a bunch of crap on the heap in a predictable way. Again, metasploit can do this for us. This is described here. We can also get rid of our lfh allocation at the beginning because js_property_spray will take care of it for us. The docs say a reliable address is 0x20302020, so we'll just use that. At this point you should be able to have eip control (eip=0x41414141) using something like this

def get_exploit_html_ie8(cli, target_info)
  #address containing our heap spray is 0x20302020
  spray_addr = "\\u2020\\u2030"
  #size to fill after free is 0x50
  free_fill = spray_addr + "\\u2424" * (((0x50-1)/2)-2)
  %Q|
  <html>
  <script>
  #{js_property_spray}
  tt = new Array(30);
  function trigger() {
    var id_0 = document.createElement("sup");
    var id_1 = document.createElement("audio");
    document.body.appendChild(id_0);
    document.body.appendChild(id_1);
    id_1.applyElement(id_0);
    id_0.onlosecapture=function(e) {
      document.write("");
      for(i = 0; i < tt.length; i++) {
        tt[i] = document.createElement('div');
        tt[i].className ="#{free_fill}";
      }
      var s = unescape("%u4141%u4141%u4242%u4242%u4343%u4343%u4444%u4444%u4545%u4545%u4646%u4646%u4747%u4747"); //this is mangled thanks to wordpress, but is just escaped js strings
      sprayHeap({shellcode:s});
    }
    id_0['outerText']="";
    id_0.setCapture();
    id_1.setCapture();
  }
  window.onload = function() {
    trigger();
  }
  </script>
  </html>
  |
end
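As a side note (none of this is from the original post), the magic numbers used above can be sanity-checked with a few lines of Python: the chunk size windbg reported, why the fill shows up in the crash as eax=24242424, and the length of free_fill.

```python
import struct

# pageheap places allocations at the end of a page, so the distance from
# the faulting pointer to the page boundary is the chunk size
edi = 0x0D5B6FB0                       # faulting pointer from the pageheap crash
chunk = 0x1000 - (edi & 0xFFF)
assert chunk == 0x50

# each JS "\u2424" character is stored as the bytes 24 24 (UTF-16-LE), so the
# first dword of the fill reads back as 0x24242424 -- the eax value in the crash
raw = ("\u2424" * 2).encode("utf-16-le")
assert struct.unpack("<I", raw[:4])[0] == 0x24242424

# free_fill is spray_addr (2 units) plus "\u2424" * (((0x50 - 1) / 2) - 2);
# assuming a 2-byte string terminator, that exactly fills the 0x50 chunk
units = 2 + (((chunk - 1) // 2) - 2)
assert units * 2 + 2 == chunk
```

The terminator assumption in the last check is mine, not the author's; it is just one way the arithmetic works out to 0x50.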
This is not like a stack overflow where we need to do shenanigans, we just put stuff in order on our heap spray and we don't have to worry about null bytes, etc. Nevertheless, metasploit has a builtin RopDB for Windows XP where the addresses are constant, using msvcrt. To connect everything, we just need a stack pivot, meaning we need esp to point to our heap that we control. Right now eax points to our heap of course, so we can look for something that moves eax to esp (there are several ways to do this: mov esp, eax | xchg eax, esp | push eax then pop esp, etc.). Let's just look in msvcrt since we're using that dll to virtualprotect later anyway. I usually use rp++ for this although I had mona loaded so I did search through that (unfortunately not able to find something I could use in msvcrt with mona). Searching for output with rp++ I found:

0x77c3868a: xchg eax, esp ; rcr dword [ebx-0x75], 0xFFFFFFC1 ; pop ebp ; ret

This will work great. Now just put them in the right order, and we have our exploit! I put in a pull request for this into metasploit so you can see the "final" version here, although it hasn't yet been reviewed or accepted at the time I'm writing this. Of course this is largely useless for practical purposes, but it is a pretty good way to learn the basics of use after free IMO. Many vulns affect multiple versions of a browser, so it's a fun exercise to port the bugs to different versions.

Follow-Ups

Some of my favorite work in this space is done by Peter Vreugdenhil at exodusintel. For a more advanced UAF (from someone who knows WAY more about browser exploitation than I do) see. I plan on doing a part II here, perhaps trying to touch on memory leaks and newer browsers. Stay tuned…
https://webstersprodigy.net/category/pwnable/
API

Side note: Python is AWESOME! Reading Automate the Boring Stuff with Python changed my life. Python (with iPython) is my goto language of choice (sorry JavaScript) anytime I want to crunch some data or automate a task.

Install Python Twitter API Library

For the purpose of this article, I am assuming you have Python installed and know how to access it from your terminal. All Macs come with Python pre-installed and it can be easily installed on Windows. I used a library called Python Twitter which can be installed via pip install python-twitter.

Generate Access Token to Authenticate (aka Login) to Twitter API

Once this is done you will need to get:

- Consumer Key (API Key)
- Consumer Secret (API Secret)
- Access Token
- Access Token Secret

You can get all 4 by heading over to. Once there, sign in with your Twitter account and click on "Create New App" button. Fill in required information (note that app name must be unique) and select "Create Your Twitter Application". You will be taken to your application view. There click on "Keys and Access Tokens" tab. Look for the section called Token Actions and click on "Create my Access Token". The page should refresh, and if everything went well you should see both Consumer Key/Secret and Access Token/Secret.

Search with Twitter API

Now that the boring part is done, we can do the fun stuff. Open up your Python console, by running python in your terminal. I highly recommend using iPython, which is a drop-in replacement for the Python console, but way better. You can get it via pip install ipython and then running ipython in your terminal.
In any case, the first thing you'll need to do in your Python terminal is to import the python-twitter library via:

import twitter

Next you'll need to authenticate with Twitter via the following command, making sure to fill in the Consumer and Token information that you obtained earlier:

api = twitter.Api(consumer_key='FILL-ME-IN',
                  consumer_secret='FILL-ME-IN',
                  access_token_key='FILL-ME-IN',
                  access_token_secret='FILL-ME-IN')

You can check that you've authenticated by running:

print(api.VerifyCredentials())

Finally the search can be performed via:

search = api.GetSearch("happy") # Replace happy with your search
for tweet in search:
    print(tweet.id, tweet.text)

Get User Tweets with Twitter API

The python-twitter library has all kinds of helpful methods, which can be seen via help(api). All user tweets are fetched via the GetUserTimeline call; you can see all available options via:

help(api.GetUserTimeline)
Now all of the Tweets can be printed out for inspection: for t in tweets: print(t['id'], t['text']) Enjoy 🙂 Everything worked fine but the last print statement didnt print anythin Now, its working. TypeError: init() got an unexpected keyword argument ‘access_token’ I get the above error when I try to execute the code.Someone help me fix this issue. it should be “access_token_key” Very nice job, thank you for sharing!!!!
https://www.alexkras.com/how-to-get-user-feed-with-twitter-api-and-python/
CC-MAIN-2020-24
refinedweb
664
63.49
Hoessein AbdPython Web Development Techdegree Graduate 16,285 Points Game doesn't run! No syntax errors. Can anyone have a look? I followed Kenneth but whenever i try to run the game i don't get the prompt question but another "treehouse:~/workspace$" line. here's my code: import random def game(): #generate a random number between 1 - 10 secret_num = random.randint(1, 10) guesses = 0 while guesses < 10: try: #get a number guess from the player guess = int(input("Guess a number between 1 and 10: ")) except ValueError: print("{} isn't ta number:".format(guess)) else: #compare guess to secret number if guess == secret_num: print("You got it: the correct number was {}".format(secret_num)) break elif guess < secret_num: print("My number is higher than{}".format(guess)) else: print("My number is lower than{}".format(guess)) guesses += 1 else: print("You didn't get it! My number was{}".format(secret_num)) play_again = input("Do you want to play again? Y/n ") if play_again.lower() != 'n': game() else: print("Thanks for playing, have a good day!") 2 Answers Stuart Wright41,072 Points It's difficult to tell because the first couple of lines of your code aren't formatted correctly, but it looks like your indentation might be off. Your second to last 'else:' statement does not line up with any 'if' statement, which looks like an error. Edit to add: Just in case I'm wrong and your code is correct but just the formatting in the post messed up, I should also add that you need to call your function in order for the game to run. After your function definition, you need to call it using game(). Otherwise you've just defined a function but not told Python to execute the code inside it. Hoessein AbdPython Web Development Techdegree Graduate 16,285 Points Ugh that was really stupid. thanks for your quick reply. Hoessein AbdPython Web Development Techdegree Graduate 16,285 Points Hoessein AbdPython Web Development Techdegree Graduate 16,285 Points formatted my post.
https://teamtreehouse.com/community/game-doesnt-run-no-syntax-errors
CC-MAIN-2020-10
refinedweb
335
66.13