I am very new to C++ and am trying to learn how most people do things.
I have made a class and am wondering how much is too much in a constructor and what is considered bad practice. What I want to do with this class is (just for now) create a socket ready to listen on a server.
Here is how I am currently doing it.
I also don't know if it is typically a good idea to "return" out of a constructor.

Code:
class server_socket {
public:
    server_socket(short int port) {
        fport = port;
        fsock = 0;
        memset(&faddr, 0, sizeof(struct sockaddr_in));
        /* Create the socket */
        create();
        if (!isvalid()) { return; }
        bind(fport);
        if (!isvalid()) { return; }
        listen();
        if (!isvalid()) { return; }
        // After I create the class I also call isvalid() in the function as well
    }
    ~server_socket() {
        close(fsock);
    }
    bool isvalid(void) {
        return (fsock != -1);
    }
    bool send(const char *);
    int recv(char *);
private:
    int fsock;
    short int fport;
    struct sockaddr_in faddr;
    bool create(void);
    bool bind(const int port);
    bool listen(void);
    bool accept(int *fsock);
};
I'm all ears right now because I am trying to write code a standard C++ programmer typically would. Moving from C to C++ has quite a few differences to say the least. Still haven't moved into <string> yet.
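For what it's worth, one common alternative to the early-return pattern above is to throw from the constructor, so a caller can never end up holding a half-initialized object. Here is a hedged sketch of that style; the real socket calls are stubbed out so the snippet stays self-contained (the `create()` body returning 3 is a stand-in for `::socket()`, not real code):

```cpp
#include <cassert>
#include <stdexcept>

// Sketch: throw on failure instead of early-returning, so callers
// never observe a half-built server_socket.
class server_socket {
public:
    explicit server_socket(short port) : fsock(-1), fport(port) {
        fsock = create();
        if (fsock == -1)
            throw std::runtime_error("socket creation failed");
        // bind() and listen() would follow here, each throwing on
        // failure, so no isvalid() checks are needed afterwards
    }
    bool isvalid() const { return fsock != -1; }
private:
    int create() { return 3; }  // stub for the real ::socket() call
    int fsock;
    short fport;
};
```

With this style, the repeated `if (!isvalid()) { return; }` checks disappear from the constructor, and callers construct inside a try/catch instead of checking validity afterwards.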
Here is the question....
Write a base class Worker and derived classes HourlyWorker and SalariedWorker. Every worker has a name and a salary rate. Write a virtual function ComputePay (in hours) that computes the weekly pay for every worker. An hourly worker gets paid the hourly wage for the actual number of hours worked. The hours are at most 40 per week. If it is greater than 40, the worker gets 1.5 times the hourly rate for the excess hours. The salaried worker gets paid the hourly wage for 40 hours, no matter what the actual number of hours is.
And here is the little code I have so far. The int main() was given to me, and I need to write the base worker class and the two derived classes. But I am lost. Thanks for any help!
Code:
#include <iostream>
using namespace std;
class Worker
{
public:
{
virtual compute_Pay(float hours);} //this doesn't like to compile correctly
//hourly and salaried functions
}
}
class hourly: public Worker
{
compute_Pay; //need to add the computer_Pay in here, salaried, and worker.
//and functions that calculate hourly guys with/without overtime
}
class Salaried: public Worker
{
compute_Pay=; //also need to add functions that calculate just salaried guys
};
int main()
{
HourlyWorker a("Sam", 20);
HourlyWorker b("Mary", 15);
SalariedWorker c("Tom", 30);
SalariedWorker d("Pat", 40);
cout << "Hourly worker " << a.get_name() << " earns $" << a.get_salary();
cout << " and worked 20 hours for a pay of $" << a.compute_pay(20) << "\n";
cout << "Hourly worker " << b.get_name() << " earns $" << b.get_salary();
cout << " and worked 50 hours for a pay of $" << b.compute_pay(50) << "\n";
cout << "Salaried worker " << c.get_name() << " earns $" << c.get_salary();
cout << " and worked 20 hours for a pay of $" << c.compute_pay(20) << "\n";
cout << "Salaried worker " << d.get_name() << " earns $" << d.get_salary();
cout << " and worked 50 hours for a pay of $" << d.compute_pay(50) << "\n";
system("pause");
return 0;
} | http://cboard.cprogramming.com/cplusplus-programming/109234-needing-help-cplusplus-program-im-writing-printable-thread.html | CC-MAIN-2014-42 | refinedweb | 308 | 86.2 |
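For reference, here is one possible sketch of the three classes, with method names matched to the calls in the given main() (get_name, get_salary, compute_pay); treat it as an illustration rather than the one right answer:

```cpp
#include <cassert>
#include <string>

// Worker stores a name and an hourly rate; compute_pay is pure virtual
// so each subclass supplies its own weekly-pay rule.
class Worker {
public:
    Worker(const std::string& name, double rate) : name_(name), rate_(rate) {}
    virtual ~Worker() {}
    // weekly pay for a given number of hours worked
    virtual double compute_pay(double hours) const = 0;
    std::string get_name() const { return name_; }
    double get_salary() const { return rate_; }
protected:
    std::string name_;
    double rate_;
};

class HourlyWorker : public Worker {
public:
    using Worker::Worker;  // inherit the (name, rate) constructor
    double compute_pay(double hours) const override {
        if (hours <= 40)
            return hours * rate_;
        return 40 * rate_ + (hours - 40) * 1.5 * rate_;  // overtime at 1.5x
    }
};

class SalariedWorker : public Worker {
public:
    using Worker::Worker;
    double compute_pay(double /*hours*/) const override {
        return 40 * rate_;  // always paid as if 40 hours were worked
    }
};
```

With these definitions the given main() compiles as-is: the virtual call dispatches to the right overtime or flat-40 rule for each worker.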
Step 1: How It Works:
- Red for Anger
- Yellow for Happy
- Pink for Love
- White for Fear
- Green for Envy
- Orange for Surprise
- Blue for Sadness
- "wow"
- "can't believe"
- "unbelievable"
- "O_o".
- You could put it on your desk to get an early warning of something big happening somewhere in the world
- A literal 'mood...
Materials
- Arduino Duemilanove
- Wifly Shield
- Breakaway headers
- 9v battery
- 9v to Barrel Jack Adapter
- 5mm RGB LED
- 3x resistors (2x100 ohm,1x180 ohm)
- Wire
- Small printed circuit board
- USB Cable A to B to connect Arduino to computer
- Rosin-core solder
- Source code
- 1 x (5" x 5" x 0.25") - the top
- 4 * (4.75" x 4.75" x 0.25") - the 4 walls
- 1 x (4.5" x 4.5" x 0.25") - the base
- 1 x (4.5" x 4.5" x 0.125") - the mirror with a 6mm hole drilled in the middle
- 4 x (4.25 x 1" x 0.25") - the 4 inside walls
- Acrylic solvent cement
- Sand paper (to help diffuse the light)
- Soldering iron
- A computer
- Arduino development environment
- A wireless network (802.11b/g)
- Pliers
- Wire stripper
Step
Step 5: Choosing Good Search Terms
Twitter allows you to search for recent tweets that contain particular words or phrases.
You can search for tweets that contain any of a list of phrases by using the "+OR+" conjunction.
For example, here is a search request that might find tweets that express Fear:
GET /search.json?q="i'm+so+scared"+OR+"i'm+really+scared"+OR+"i'm+terrified"+OR+"i'm+really+afraid"+OR+"so+scared+i"&rpp=30&result_type=recent
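The query string above is just URL-encoded phrases joined with +OR+. A hedged sketch of that string handling (in C++, since the device-side sketch is Arduino C++; the retired v1 /search.json endpoint is shown purely for illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Build a /search.json path from a list of phrases: spaces become '+',
// each phrase is wrapped in percent-encoded quotes (%22), and the
// phrases are joined with +OR+.
std::string build_query(const std::vector<std::string>& phrases) {
    std::string q = "/search.json?q=";
    for (std::size_t i = 0; i < phrases.size(); ++i) {
        if (i > 0) q += "+OR+";
        q += "%22";
        for (char c : phrases[i])
            q += (c == ' ') ? '+' : c;
        q += "%22";
    }
    return q + "&rpp=30&result_type=recent";
}
```

A real client would also percent-encode characters like apostrophes; this sketch only shows the quoting and +OR+ joining discussed above.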
I spent a long time finding good search phrases.
The search phrases needed to produce tweets that:
- very often express the desired emotion.
- very rarely express the opposite emotion or no emotion.
Many search phrases that I thought would work, turned out to not work that well when I searched with them.
Smileys have been used with some success to extract whether the sentence is positive or negative, but I didn't find them useful for extracting anything more.
The trouble with smileys is that a smile can mean so many things ;D
It is often used, it seems, as a kind of qualifier for the whole sentence; since people have to compress their thoughts into 140 characters, the meaning can become ambiguous.
The smiley often then acts as a qualifier that:
- 'this is a friendly comment'
- 'don't take this the wrong way'
- 'i am saying hello/goodbye with a smile'
- 'this is almost a joke'
- 'I know I'm being cheeky'
- 'I don't really mean this'
"so scared" or "really scared" is better than just "scared" which returns bad results: for example, "not scared".
Phrases in the first person seemed to produce better results.
Some search phrases give tweets that suggest the author feels the emotion: for example, "i really hate...", often sounds like they really are full of hate or angry, whereas other phrases containing the word "hate" give tweets that do not seem to express much emotion, like "why do you hate..."
Hyperbole is your best friend, ever:
Using phrases with hyperbole produced good results. Tweets with "I'm terrified" or "I'm petrified" in them were generally more fearful sounding than "I'm scared"
Regardless, the approach is still naive, but statistically, from my tests, it does seem to work well.
While testing the code, I did at one point get the horribly ominous "Flashing White" that signifies the world is feeling intense fear, but since I was still testing it all, I did not hide under the table straight away, but instead, threw caution to the winds, and went on to Twitter to see what people were suddenly so fearful about.
The recent tweets containing the Fear search string (see top of page) were largely relating to a large thunderstorm that had just started somewhere near Florida.
If you're interested, here are some of those tweets:
So... it works! ...Well, it needs the numbers tweaking to ignore the world's "tantrums", the short-lived fits of emotional outburst, and be more concerned with larger changes that signify bigger news.
- "Ahhh Thunder I'm so scared of Thunder !!!!! Help some 1"
- "I'm so scared of lightning now. Like I just ran home praying "
- "On our way to Narcosses at @Disney world's Grand Floridian hotel and there's a tropical storm right now. I'm terrified! ..."
- "I'm in my bathroom til the rain stops. I'm terrified of lightning and thunder..."
- "I'm terrified of thunder storms *hides in corner*"
- "I'm terrified of Thunder :("
- "If only I was wit my becky during this thunderstorm cause I'm really scared cause of a bad experience"
Step 6: Download the Code
The four libraries need to be copied into the Arduino library directory and then they can be imported as shown.
WorldMood/WorldMood.pde (see below) should be opened in the Arduino development environment.
You then need to correct the "[your network]" and "[your network password]" fields. eg.
#define network ("mynetwork")
#define password ("mypassword")
Then the sketch (and libraries) should be compiled and uploaded to the Arduino board.
see arduino.cc/en/Hacking/LibraryTutorial
The next 5 programming steps just give an overview of each of the components and include the most noteworthy parts of the source code...
Step 7: Programming Step 1: SPI UART
The featured components of the shield are:
- a Roving Network's RN-131G wireless module
- SC16IS750 SPI-to-UART chip.
The universal asynchronous receiver/transmitter (UART) is a piece of computer hardware that translates data between parallel and serial forms.
- The PC communicates over UART with the Arduino through pins RX and TX
- The Arduino communicates over SPI with the SPI-UART chip on the WiFly shield (SC16IS750 SPI-to-UART chip) though pins 10-13 (CS, MOSI, MISO, SCLK respectively)
- The RN-131G wireless module accesses the network and sends/receives serial data over UART.
The code below is based on a number of sources, but primarily from this tutorial over at sparkfun:
WiFly Wireless Talking SpeakJet Server
Step 8: Programming Step 2: Connecting to a Wireless Network
Step 9: Programming Step 3: Searching Twitter With TCP/IP Port 80
for example:
"Open 80"
will open an HTTP connection to.
Twitter actually requires more of the HTTP protocol than Google. For example, the "Host" field is often required in case there's more than one domain name mapped to the server's IP address, so it can tell which website you actually want.
Twitter also requires a final linefeed and carriage return ("\r\n")
"Host: server\r\n"
"\r\n"
I use search.json rather than search.atom to give results in non-html format, and more easily parsed. (see apiwiki.twitter.com/Twitter-API-Documentation)
Step 10: Programming Step 4: RGB LED
*** update ***
If you find the colours look wrong, try removing the "255 -" from the analogWrite calls.
Thanks to shobley for finding this.
More info at
*** end update ***
Step 11: Programming 5: Computing the World Mood
The important thing is to carefully normalize and smooth the data, and to adjust the thresholds to give the right level of responsiveness and alarm (i.e. it should flash when a headline news story happens, and not when a TV show starts).
Emotion, mood, and temperament
Firstly, the "world's emotion" is calculated by searching twitter for tweets with each of the 7 mood types (love, joy, surprise, anger, fear, envy, sad) .
A measure of "tweets per minute" is used to calculate the current emotion. A higher number of tweets per minute suggests more people are currently feeling that emotion.
Emotions are volatile, so these short-lived emotional states are smoothed over time by using a "fast exponential moving average"
(see en.wikipedia.org/wiki/Moving_average#Exponential_moving_average)
This gives us ratios for the different moods.
Each mood ratio is then compared to a base line, a "slow exponential moving average", that I call the "world temperament".
The mood that has deviated furthest from its baseline temperament value is considered to be the current world mood.
The deviation is measured as a percentage, so, for example, if fear changes from accounting for 5% of tweets to 10% then this is more significant than joy changing from 40% to 45% (They are both a +5% in additive terms, but fear increased by 100% in multiplicative terms.)
Finally, the world temperament values are tweaked slightly in light of this new result. This gives the system a self adjusting property so that the world temperament can very slowly change over time.
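The fast/slow moving-average scheme described above can be sketched in a few lines. The constants here are illustrative placeholders, not the tuned values from WorldMood.pde:

```cpp
#include <cassert>

// The fast EMA follows the current tweets-per-minute ratio for one
// emotion; the slow EMA is its long-run baseline ("temperament"); the
// signal is the multiplicative deviation between the two, so a jump
// from 5% to 10% scores higher than a jump from 40% to 45%.
struct MoodTracker {
    double mood;         // fast exponential moving average
    double temperament;  // slow exponential moving average

    explicit MoodTracker(double initial) : mood(initial), temperament(initial) {}

    void update(double ratio, double fastK = 0.1, double slowK = 0.005) {
        mood        = fastK * ratio + (1.0 - fastK) * mood;
        temperament = slowK * mood  + (1.0 - slowK) * temperament;
    }

    // > 1.0 means this emotion is running above its usual level
    double deviation() const {
        return (temperament > 0.0) ? (mood / temperament) : 1.0;
    }
};
```

The emotion with the largest deviation is then taken as the current world mood and compared against thresholds like the moderateMoodThreshold / extremeMoodThreshold values shown below.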
These values in WorldMood.pde are used to adjust how sensitive the system is to information.
- Do you want it to pick up when people are happy about a sport result or scared about the weather?
- Or would you prefer to only track big events like natural disasters or terrorist attacks?
#define emotionSmoothingFactor (0.1f)
#define moodSmoothingFactor (0.05f)
#define moderateMoodThreshold (2.0f)
#define extremeMoodThreshold (4.0f)
Step 12: Building the Box
Build an acrylic box ala this Instructable:
Step 13: Enjoy!
- Making it multilingual and not just English speaking places.
- Perhaps just associating with a keyword, for example every tweet must contain the word "Obama", then you could gauge public opinion on just that subject.
- Location specific. Perhaps you just care about your town or country. Twitter allows you to use the geocoding to do this.
- Make it tweet what the world mood is so as to complete the circle
- Ability to connect to it from a computer to see what keywords people are so emotive about.
I am very interested to hear any comments, corrections or questions. Please do contact me, if you so wish.
118 Discussions
6 years ago on Introduction
The world has changed a bit since this program was first posted. I finally have my mood light working, but it wasn't always easy..
Reply 4 years ago on Introduction
Excellent update/summary! Interesting that the tweets per second needs to be adjusted; I wonder if it would work on other social media platforms?
thanks for putting the time in to update.
Reply 6 years ago on Introduction
Yes, Arduino, Twitter and the World's Mood have changed a lot since May 2010. I'm afraid this is not an active project of mine, and I rely on great comments like this to keep this 'ible current.
Reply 5 years ago on Introduction.
Reply 5 years ago on Introduction
Also, as of today, the project no longer works.
Twitter has changed its API and now the query returns errors.
Reply 5 years ago on Introduction
Is there a way I can still make it?
It would be a great help if you can suggest a way to do this.
Reply 5 years ago on Introduction
Unfortunately, no. I've not been able to get it to interface with the new API.
Question 1 year ago
hi mate, how much would be the budget for this project
i'm prepping for a national Arduino competition so a rapid reply would be much appreciated :)
love from malaysia
Question 1 year ago on Step 2
May I ask how the Arduino detects the moods on Twitter? It's my first time encountering this kind of project.
2 years ago
I love this, and would love this next to my bed! One thing though. To keep cost down, how hard would it be to program the Arduino to read the information straight from the computer via USB connection?
2 years ago
Sir do you have any codes for the modern twitter mood light?
3 years ago
Really Cool Instructable. I featured this Instructable in one of my collections:
3 years ago
Please keep us updated, good luck!
3 years ago
Can this still be made, or have there been updates to Twitter that will make this accomplishable? Should we use the last change you made?
4 years ago on Introduction
Is there anyone out there who has an idea of making code for aduinos? I'm having a hard time doing my project. I have no idea of what code to use. My project is about a gh-718c mini PIR motion sensor detecting my arm or hand. If the motion sensor detects that my hand is low, the light or LED will dim, and if it detects my hand on a high position, the LED will bright up. Please please. Help please. Thanks for the reply!
Reply 4 years ago on Introduction
By the way, I'm using an arduino uno. Thankyou. Please reply.
Reply 3 years ago
You'll have more luck with a raspberry pi
3 years ago
You're awesome, thanks for doing this!
6 years ago on Step 13
I am not even sure where to start with this comment! Bottom line, I got this running on newer hardware with a newer IDE.
Reply 5 years ago on Step 13
Hey! I tried going to the dropbox link you gave, but the file is missing. Do you still have it up somewhere? Thanks in advance! | https://www.instructables.com/id/Twitter-Mood-Light-The-Worlds-Mood-in-a-Box/ | CC-MAIN-2019-26 | refinedweb | 2,245 | 70.23 |
MIGRATED to
Don't use the using namespace ... or using ... constructs in a header file. It might save you some typing but it also forces the namespaces you use onto every .cpp file that directly or indirectly includes your headers. That could create name clashes with names in some other namespace that the author of the .cpp file wants to use. Use fully qualified names, painful as it might be.
There is one exception. It's sometimes handy to "import" names from one namespace to another. For example, suppose some C++ compiler doesn't provide the std::tr1::auto_ptr template, which is used in qpid. Boost does provide a compatible boost::auto_ptr, but it's in the boost namespace and qpid expects it in std::tr1. No problem, we create our own tr1/memory header file:
#include <boost/memory>

namespace std {
    namespace tr1 {
        using boost::auto_ptr;
    }
}
This makes the boost template available in the standard namespace. (Actually you don't need to do this yourself, boost provides a set of adapter headers for all the tr1 stuff.) | http://wiki.apache.org/qpid/CppTips/NoUsingNamespaceInHeaders?action=diff | CC-MAIN-2016-40 | refinedweb | 177 | 67.04 |
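The same single-name import works for relocating any vendor name into the namespace callers expect. A toy sketch with made-up names (vendor, compat) just to show the mechanics:

```cpp
#include <cassert>

namespace vendor {
    inline int add(int a, int b) { return a + b; }
}

// adapter "header": pull exactly one name into the namespace callers
// expect, instead of a blanket using-directive that would drag in
// everything from vendor
namespace compat {
    using vendor::add;
}
```

Callers write compat::add(...) and never see the vendor namespace; no other vendor names leak into files that include the adapter.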
DUMP(5) UNIX Programmer's Manual DUMP(5)
NAME
     dump, dumpdates - incremental dump format
SYNOPSIS
     #include <sys/types.h>
     #include <sys/inode.h>
     #include <protocols/dumprestore.h>
DESCRIPTION
     Tapes used by dump and restore(8) contain:

          a header record
          two groups of bit map records
          a group of records describing directories
          a group of records describing files

     The format of the header record, and of the first record of each
     description, is given in the include file <protocols/dumprestore.h>.
     The bit map records contain a bit for each inode that was dumped.

     TS_ADDR   A subrecord of a file description.  See c_addr below.

     c_count   The count of characters in c_addr.

     c_addr    An array of characters describing the blocks of the dumped
               file.  A character is zero if the block associated with
               that character was not present on the file system,
               otherwise the character is non-zero.  If the block was not
               present on the file system, no block was dumped; the block
               will be restored as a hole in the file.  If there is not
               sufficient space in this record to describe all of the
               blocks in a file, TS_ADDR records will be scattered through
               the file, each one picking up where the last left off.

     Each volume except the last ends with a tapemark (read as an end of
     file).  The last volume ends with a TS_END record and then the
     tapemark.

     The structure idates describes an entry in the file /etc/dumpdates
     where dump history is kept.  The fields of the structure are:

     id_name   The dumped filesystem is `/dev/id_nam'.

     id_incno  The level number of the dump tape; see dump(8).

     id_ddate  The date of the incremental dump in system format; see
               types(5).
FILES
     /etc/dumpdates
SEE ALSO
     dump(8), restore(8), fs(5), types(5)

MirOS BSD #10-current              April 29, 1991
import numpy as np
from joblib import Parallel, delayed

def parallel_dot(A, B, n_jobs=2):
    """
    Computes A x B using more CPUs.
    This works only when the number of rows of A and n_jobs are even.
    """
    parallelizer = Parallel(n_jobs=n_jobs)
    # this iterator returns the functions to execute for each task
    tasks_iterator = (delayed(np.dot)(A_block, B)
                      for A_block in np.split(A, n_jobs))
    result = parallelizer(tasks_iterator)
    # merging the output of the jobs
    return np.vstack(result)

This function spreads the computation across more processes. The strategy applied to distribute the data is very simple. Each process has the full matrix B and a contiguous block of rows of A, so it can compute a block of rows of A*B. In the end, the results of the processes are stacked to build the final matrix.
Let's compare the parallel version of the algorithm with the sequential one:
A = np.random.randint(0, high=10, size=(1000,1000))
B = np.random.randint(0, high=10, size=(1000,1000))
%time _ = np.dot(A,B)
CPU times: user 13.2 s, sys: 36 ms, total: 13.2 s Wall time: 13.4 s
%time _ = parallel_dot(A,B,n_jobs=2)
CPU times: user 92 ms, sys: 76 ms, total: 168 ms
Wall time: 8.49 s

Wow, we had a speedup of 1.6X, not bad for such a naive algorithm. It's important to notice that the arguments passed as input to the Parallel call are serialized and reallocated in the memory of each worker process. This means that the last time parallel_dot was called, the matrix B was entirely replicated two times in memory. To avoid this problem, we can dump the matrices on the filesystem and pass a reference for the workers to open them as a memory map.
import tempfile
import os
from joblib import load, dump

# saving A and B to a local file for memmapping
temp_folder = tempfile.mkdtemp()
filenameA = os.path.join(temp_folder, 'A.mmap')
dump(A, filenameA)
filenameB = os.path.join(temp_folder, 'B.mmap')
dump(B, filenameB)

Now, when parallel_dot(A_memmap, B_memmap, n_jobs=2) is called with memory-mapped versions of the matrices (obtained with load(filenameA, mmap_mode='r')), both of the processes created will use only a reference to the matrix B.
Turns out there is an easier way to achieve this:
First of all, I definitely do not consider myself to be an expert and there are tutorials which describe this process in more detail (for example:). However, sometimes you simply want to know how to do something without knowing all the ins and outs.
Anyway, I noticed that since Quasar v1 beta was released, a lot of questions or remarks come down to “this was much easier in v0.17”. I also experienced this myself and as annoying as it is that you can’t simply do something, it is the only way to create a proper and maintainable framework. Let’s hope that in the future Quasar will get improved to a point where you can perform everything without programming anything, but until that time arrives we will have to add the functionality ourselves.
So, the great thing about Quasar v1 is that they seemed to have considered every scenario when developing the base components, so by extending the base you can add your own functionality.
For example, there is an Input and a Date component, and as explained in the documentation you can combine the two to get a date input (). Now, I wanted to hide the popup on selection of a date:
This works fine as of itself, but imagine you would need 10 date inputs. That requires 90 lines of code, and in the case you would want to change something in the future, changing the same thing 10 times. So, what do we do? We create a reusable component:
quasar new component QDateInput
If we would copy the code from the codepen into QDateInput.vue, we can import QDateInput.vue as a component in any page in Quasar:
import qDateInput from 'components/QDateInput.vue' export default { name: 'PageDates', components: { qDateInput } }
In the template we can then use it with <q-date-input>:
<template> <q-date-input /> </template>
However, QInput and QDate have a lot of properties which are and cannot be used in this way. To be able to set the underlying properties of QDate and QInput we will have to add the properties to our QDateInput component:
<template>
  <q-input :
    <template v-slot:append>
      <q-icon
        <q-popup-proxy
          <q-date
        </q-popup-proxy>
      </q-icon>
    </template>
  </q-input>
</template>

<script>
import { date as dateUtil } from 'quasar'

export default {
  name: 'QDateInput',
  props: {
    value: { type: String, default: '' },
    error: Boolean,
    errorMessage: String,
    label: String,
    stackLabel: String,
    hint: String,
    hideHint: Boolean,
    filled: Boolean,
    outlined: Boolean,
    borderless: Boolean,
    standout: Boolean,
    bottomSlots: Boolean,
    rounded: Boolean,
    square: Boolean,
    readOnly: Boolean,
    dense: Boolean,
    landscape: Boolean,
    color: String,
    textColor: String,
    dark: Boolean,
    readonly: Boolean,
    disable: Boolean,
    firstDayOfWeek: [String, Number],
    todayBtn: Boolean,
    minimal: Boolean,
    options: [Array, Function],
    events: [Array, Function],
    eventColor: [String, Function]
  },
  computed: {
    localValue: {
      get () { return this.value },
      set (localValue) { this.$emit('input', localValue) }
    },
    formattedDate () {
      return dateUtil.formatDate(this.value, 'DD/MM/YYYY')
    }
  }
}
</script>
By defining the value prop, you are able to use v-model on the component. This gives a two-way binding with the data you use as v-model. Note that in order to set the date, the data is also coupled with v-model inside the component. Directly using value as v-model for QDate will result in an "Avoid mutating a prop directly" error, so instead we have to use a local computed value which takes the value of value and emits an input event on change.

Now we can use the component as follows:
<q-date-input filled hint="Hint" bottom-slots first-day-of-week="1" :events="['2018/11/05', '2018/11/06', '2018/11/09', '2018/11/23']" ... />
So you can use any of the props of either QInput and QDate and it will change them inside the component accordingly, and we added our own custom functionality. Hiding the popup on select is a really simple example, but it proves the point.
“Wait, will I have to do this everytime I need a function that isn’t covered by Quasar out of the box?”
Well, yes, or you let someone else do it. And that is what I think the app extensions will be for.
I found a solution in.
yarn add portal-vue
quasar new boot PortalVue
boot/PortalVue.js:
import PortalVue from 'portal-vue'

export default async ({ Vue }) => {
  Vue.use(PortalVue)
}
In MyLayout.vue:
<q-menu> <q-list <portal-target </q-list> </q-menu>
In any component:
<portal to="menu">
  <q-item
    v-close-menu
    clickable
  >
    <q-item-section>
      <div class="text-h6 text-truncate">
        Title
      </div>
      <div class="text-subtitle2">
        Subtitle
      </div>
    </q-item-section>
  </q-item>
</portal>
I did some searching but can’t really find the solution which I am looking for (might not even be possible maybe):
I’d want to have a button in the toolbar in the main layout which dynamically changes content according to the page which is shown and triggers actions on the shown page. From the topics above I understand that I should define the menu for the whole app and show the right actions by using v-if. However, how does the layout interact with the component in that case?
An example of the layout would be:
<q-layout
  <q-header elevated
    <q-toolbar>
      <q-btn flat round dense
        <q-menu>
          <q-list
            <!-- Dynamic content -->
          </q-list>
        </q-menu>
      </q-btn>
    </q-toolbar>
  </q-header>
  <q-page-container>
    <router-view />
  </q-page-container>
</q-layout>
It’s your lucky day
: I just updated it to v1.0 and as far as I tested everything works. Please check it out and I am open to any feedback
@pavarine said in Quasar v1.0 beta has arrived:
Has anyone already mentioned that u guys are VERY awesome? Thank you Razvan and colaborators!
I’ll have to agree. Quasar 0.x already was the most developer-friendly framework imo, but with v1.0 you guys kicked it to another level. Really impressive.
Quasar also provides a localstorage API:
Hmm, weird. I first noticed it on a Windows 10 laptop with a low resolution. Then I could reproduce it on a Linux machine in the responsive design mode of Firefox.
I’ll try to figure out what the problem is, but there does not seem to be an obvious solution.
I am using Firefox. In Chrome it seems to work fine indeed. I forgot to check the behavior in different browsers than Firefox…
So it is a bug, but is it a bug with Quasar or Firefox?
The limiting width of a QCard seems to be default behaviour. The only way to get it to stretch across the whole width of the screen is by applying col-12, by default it seems to match col-4. | https://forum.quasar-framework.org/user/stefanvh | CC-MAIN-2019-51 | refinedweb | 1,133 | 60.35 |
Thanks for using Syncfusion products.
We would like to inform you that the schedule control can be added to an MVC5 application manually. Please follow the steps below to add the schedule control to an MVC5 application.
1. Create the MVC5 application in Visual Studio.
2. Add the necessary assemblies to the References folder. Refer to the screenshot below.
3. Refer to the assembly details in web.config. To reference the assemblies, add the following code snippet within the "system.web" block.
4. Add the namespaces in the web.config page. To add the namespaces, include the following code snippet within the "system.web" block, under "pages".
5. Then, within the appSettings section of the web.config page, add the following code snippet to block the browser link.
6. Then add the following highlighted code snippet within the "web.config" page under the "Views" folder.
7. Add the necessary scripts and themes to the sample application. Refer to the following screenshot.
8. Reference the scripts in the layout.cshtml page. Refer to the following code.
9. Add the script manager render code in the layout.cshtml page.
10. Then write the code to render the schedule control in the view page, like the below.
11. While running the sample, the schedule control will render like the following.
Please let us know if this helps or if you need any further assistance.
Regards,
Velmurugan
Hi Peter,
We regret the inconvenience caused. Due to an internal server problem, the updated content doesn't display properly; we are checking this now, and once the problem gets solved we will update further details on this.
We appreciate your patience until then.
Regards,
Velmurugan
Phil Steitz commented on MATH-151:
----------------------------------
I understand and agree with your analysis of the IEEE 754 representation, but I would still like to see if there is anything clever that we can do to work around the problem. Could be this is hopeless, but I am bothered by the fact that the previous implementation actually handles this correctly. Sorry I messed up the link in the comment above to the earlier BigDecimal-based impl. That should have been to
In any case, the impl there, modified to handle the special values included in later tests, would be:
public static double round(double x, int scale, int roundingMethod) {
    try {
        return (new BigDecimal(new Double(x).toString())
                .setScale(scale, roundingMethod))
                .doubleValue();
    } catch (NumberFormatException ex) {
        if (Double.isInfinite(x)) {
            return x;
        } else {
            return Double.NaN;
        }
    }
}
Before, it was just
return (new BigDecimal(x).setScale(scale, roundingMethod)).doubleValue();
double x = 39.0d;
x = x + 245d/1000d;
assertEquals(39.25,MathUtils.round(x, 2), 0.0);
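The underlying representability issue is easy to demonstrate in any IEEE 754 implementation; here is a tiny illustration (in C++ rather than Java only for brevity — binary doubles behave identically in both):

```cpp
#include <cassert>

// Neither 0.1, 0.2 nor 0.3 is exactly representable as a binary double,
// and the representation errors do not cancel, so the sum overshoots 0.3.
bool decimal_fractions_are_inexact() {
    return (0.1 + 0.2) != 0.3;
}
```

This is why a round() that goes through a decimal string (Double.toString) can behave differently from one that constructs the BigDecimal directly from the binary value.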
By Umesh Kumhar, Alibaba Cloud Community Blog author.
Argo Workflows is an open-source, container-native workflow engine for orchestrating sequential and parallel jobs on a Kubernetes cluster, which means that, by using Argo, each step of your workflow can be executed as a container. Argo supports multi-step workflows that run as a sequence of tasks, and it captures the dependencies or artifacts between tasks using a directed acyclic graph (DAG).
In this tutorial, you will learn how to install and set up the Argo Workflow engine in your Kubernetes cluster. You will install and configure Argo Workflow, install an artifact repository, configure the controller, and get to know the Argo Workflow user interface.
Before you can begin to follow the steps provided in this tutorial, make sure that you first have the following:
A Kubernetes cluster, with cluster access configured in your kubeconfig file (typically ~/.kube/config).
Argo comes with three main interfaces that you can use to interact with it: the Argo CLI, the Kubernetes API via kubectl (workflows are Kubernetes CRDs), and the Argo UI.
Below I'll show you how you can install each of these interfaces so that you can use any of them to interact with Argo.
You can download the latest Argo Command Line Interface (CLI) version from here. The interface is available with Linux, Windows, along with Darwin and macOS Homebew versions. If your system is Linux as is mine, you can download it using the following command:
curl -sSL -o /usr/bin/argo
chmod +x /usr/bin/argo
To set up the Argo controller in a Kubernetes cluster, you'll want to create a separate namespace for the Argo components. You can do so with the following commands:
kubectl create namespace argo
kubectl apply -n argo -f
Then, to verify that these components were installed successfully, you'll want to also check the status of the controller and UI pods. You can do so with the commands below:
kubectl get pods -n argo
kubectl get services -n argo
Now that we've got all of these interfaces installed, let's continue with the rest of the setup process, which is mainly some basic configuration.
Initially, the default service account of any namespace is restricted to having minimal access. For this tutorial, however, we want the default account to have more privileges so that we can more clearly demonstrate some of the features of Argo Workflow such as Artifacts, outputs, and secrets access.
For this tutorial specifically, you'll want to grant admin privileges to the default service account of the default namespace. You can do it by running the command below:
kubectl create rolebinding namespace-admin --clusterrole=admin --serviceaccount=default:default
Next, we need to decide how to manage our Argo Workflows. There are two methods, and we will take a look at both below.
Argo Workflow is implemented as a Kubernetes CRD (Custom Resource Definition), which means that it can be natively integrated with other Kubernetes services, such as ConfigMaps, secrets, persistent volumes, and role-based access control (RBAC). Consequently, Argo Workflows can also be managed by kubectl. You can use kubectl to run the commands below to submit the hello-world Argo workflow:
kubectl create -f
kubectl get wf
kubectl get wf hello-world-xxx
kubectl get pods --selector=workflows.argoproj.io/workflow=hello-world-xxx --show-all
kubectl logs hello-world-zzz -c main
After you have run the hello-world example, you can find the output when you check the logs for the workflow pod.
The Argo CLI offers many extra features that kubectl does not provide directly, such as YAML validation, parameter passing, retries and resubmits, suspend and resume, and an interface for workflow visualization. Run the Argo commands below to submit the hello-world workflow and get its details and logs.
argo submit --watch
argo list
argo get xxx-workflow-name-xxx
argo logs xxx-pod-name-xxx
Once you have run the above hello-world example, you can find the output when you check the logs of the workflow pod. Next, you can also run the examples below to get a better idea of how Argo works.
argo submit --watch
argo submit --watch
Argo supports both Artifactory and MinIO as artifact repositories. We will go ahead with MinIO for its open-source object storage and its portability. Artifact repositories are very useful for storing logs and final exports, and for reusing them in later stages of an Argo workflow.
Install MinIO into the Kubernetes cluster using the Helm command below:
helm install stable/minio \ --name argo-artifacts \ --set service.type=LoadBalancer \ --set defaultBucket.enabled=true \ --set defaultBucket.name=my-bucket \ --set persistence.enabled=false
When MinIO is installed using Helm charts, it uses the following hard-coded default credentials. These are used to log in to the MinIO user interface.
AccessKey: AKIAIOSFODNN7EXAMPLE
SecretKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
To get the exposed external IP address for the minIO user interface, you'll want to run the below command:
kubectl get service argo-artifacts -o wide
Next, you'll want to log in to the MinIO user interface in a web browser, using the endpoint obtained from the above command on port 9090. After you're logged in, you'll then want to create a bucket named my-bucket in the MinIO interface.
Now you should have installed the MinIO artifact repository in which you can store your workflow artifacts. With that done, you can go on to configure MinIO as an artifact repository in your Argo Workflow, as outlined in the following steps.
We need to modify the Argo workflow-controller ConfigMap to point to the MinIO service (argo-artifacts) and secret (argo-artifacts) to use it as the artifact repository. For this, you can use the command below.
Remember that the MinIO secret is retrieved from the namespace that Argo Workflow uses to run the workflow pod. After you save the changes to the ConfigMap, you can run the workflow and save artifacts to the MinIO bucket.
kubectl edit cm -n argo workflow-controller-configmap
...
data:
  config: |
    artifactRepository:
      s3:
        bucket: my-bucket
        endpoint: argo-artifacts.default:9000
        insecure: true
        # accessKeySecret and secretKeySecret are secret selectors.
        # They reference the k8s secret named 'argo-artifacts',
        # which was created during the minio helm install and whose
        # keys hold the actual minio credentials.
        accessKeySecret:
          name: argo-artifacts
          key: accesskey
        secretKeySecret:
          name: argo-artifacts
          key: secretkey
Run the command below to run the demo workflow that uses the minIO bucket to store artifacts.
argo submit
# or
kubectl create -f
After that, you can log in to the MinIO UI and check the artifacts generated in my-bucket.
The Argo user interface is designed to showcase all your executed workflows. It presents each workflow as a flow diagram, giving you a better visualization of the workflow. There you can monitor all steps of your workflow and fetch their logs, which makes troubleshooting and understanding the workflow easier.
Note: The Argo user interface does not provide a function for creating workflows on its own. Rather, the interface is used for the visualization of already executed workflows.
We have also installed the Argo UI along with the controller. By default, the Argo UI service is not exposed externally; it is restricted to the ClusterIP type. To access the Argo UI, you can use one of the following methods:
kubectl proxy
Next, after you run the command, you can access the application in your browser at this address:
kubectl -n argo port-forward deployment/argo-ui 8001:8001
Then you can access the application in your browser at this address:
kubectl patch svc argo-ui -n argo -p '{"spec": {"type": "LoadBalancer"}}'
You may need to wait a while for this command to take effect; specifically, until an external endpoint is assigned to the Argo UI. Once that's done, you can run the command below to get the endpoint:
kubectl get svc argo-ui -n argo
Note: If you are running an on-premises Kubernetes cluster, the LoadBalancer type will not work. Instead, try using the NodePort type.
The Argo user interface lists all executed workflows.
Next, you can select any executed workflow. You will then see the flow graph for that workflow if it has multiple steps.
Here you can visualize the dependencies between steps and their relations. You can then select the step whose details you want to explore or whose logs you want to fetch.
The Argo interface has a lot of other features and even better visuals that you can explore in the future.
In this tutorial, you saw how to set up the Argo Workflow engine in your Kubernetes cluster, configure it, and use its main interfaces. Note that this tutorial covered an Argo Workflow setup intended for learning and development purposes. For production use, I'd recommend also using role-based access control (RBAC) and RoleBindings to limit the access of the default service account, for security reasons.
When comparing text generated on different platforms, the newlines are different. This recipe normalizes any string to use unix-style newlines.
This code is used in the TestOOB unit testing framework ().
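The recipe's code block was lost in extraction. Based on the discussion below, which quotes the pattern (\r\n|\r|\n), it was essentially the following (the function name is my guess, not the recipe's):

```python
import re

def normalize_newlines(text):
    """Normalize '\r\n' (Windows) and '\r' (old Mac) line endings
    to unix-style '\n'."""
    return re.sub("(\r\n|\r|\n)", "\n", text)

print(repr(normalize_newlines("a\r\nb\rc\n")))  # -> 'a\nb\nc\n'
```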
Discussion
I've tested this on POSIX and Windows. Anyone with an old Mac care to try it? :-)
Speed up by precompiling the regular expression. At the expense of one more line (and the re module plus the regular expression inserted into your namespace), you can get some speed: on my PC, for the contents of a random Python script, it finishes in a third of the time. Of course, this works best if you use this function quite often.
Don't use regular expressions when they're not really needed. It's even better to do two replace calls:
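The two-call version referred to here would look like this; it exploits the fact, noted further down, that '\n' itself needs no replacing:

```python
def normalize_newlines(text):
    # Replace '\r\n' first so the lone-'\r' pass doesn't split it in two.
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(repr(normalize_newlines("a\r\nb\rc\n")))  # -> 'a\nb\nc\n'
```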
The last version is several times faster (of course this also depends on the string you convert).
Good point. When this function shows up in my profiler I'll probably do this.
Until it does, I prefer the greater readability -- in my eyes -- of not precompiling the expression.
The regular expression isn't there for special features. It's there for readability.
Replacing (\r\n|\r|\n) with whatever (I arbitrarily chose '\n') sits in my mind fairly well. And I understand that regex at a single glance.
I agree that using two replaces, noting that '\n' need not be replaced with '\n', is both efficient and clever.
I'll probably stay with the regex, though, because I find it easier to understand, and it isn't a performance hit in my application yet. | http://code.activestate.com/recipes/435882/ | crawl-002 | refinedweb | 270 | 65.42 |
First, references will be added to the CLR assemblies that will be used.
import clr
clr.AddReference('System.Windows.Forms')
Next the names we will use are imported.
from System.Windows.Forms import Application, Form
A class will be created for the Hello World form, with Form as its base class.

class HelloWorldForm(Form):
    def __init__(self):
        self.Text = 'Hello World'
        self.Name = 'Hello World'
The Text attribute of the form sets the title bar's text.
To run the application, we create an instance of the HelloWorldForm.

form = HelloWorldForm()
Application.Run(form)
The Application class provides static methods for tasks such as starting and stopping an application. The Run static method runs the form on the current thread.
> > > I. The prefix has no semantic value: it is indeed syntactic sugar.

However, it is very important to maintain the "principle of least surprise" for users. If a user runs his XSLT stylesheet through a SAX processor and finds that all his "xsl:template" elements have been renamed to "prefix00001:template", he might be very confused indeed.

Note that there is at least one case in which the prefix does matter: XSLT uses the prefix to match declared namespaces in the stylesheet to namespaces in the source document. Now many people have already railed against this violation of the spirit of XML Namespaces 1.0, but there is no arguing that it was the most elegant solution to a difficult problem that the XSLT WG faced in dealing with namespaces. So, in short, though prefixes are not technically part of the document, there are good arguments for including them in the SAX binding.

The best solution to this is education. If the interface documentation clearly states that prefixes are not technically part of the document, hopefully users will avoid mis-using them. This is not ideal, but there's not much better to do given the practical issues involved.

> IMO, it is much better to regenerate a new set of prefixes for the set of
> namespace URIs that are present in an XML document.

Even as a user who knows better about the meaning of prefixes, I would be very annoyed at a processor that did this. I often deal with documents with 4 or more namespaces (this is not too unusual: very common in RDF) and I give my prefixes mnemonic names to help sort things out. I don't want processors renaming them to "p01a3", etc.

--
Uche Ogbuji
FourThought LLC, IT Consultants
uche.ogbuji@fourthought.com  (970)481-0805
Software engineering, project management, Intranets and Extranets
A validator that checks user input against a regular expression.
#include <Wt/WRegExpValidator>
A validator that checks user input against a regular expression.
This validator checks whether user input matches the given (perl-like) regular expression. It checks the complete input; prefix ^ and suffix $ are not needed.
The following perl features are not supported (since client-side validation cannot handle them):
See for a full overview of the supported regular expression syntax. However, if you want client-side validation to work correctly, you will have to limit your regular expressions to those features supported by JavaScript (ECMAScript style regular expressions). See for an overview of the ECMAScript regular expression syntax.
Usage example:
The strings used in this class can be translated by overriding the default values for the following localization keys:
Sets a new regular expression validator that accepts input that matches the given regular expression.
This constructs a validator that matches the perl regular expression expr.
Sets the message to display when the input does not match.
The default value is "Invalid input".
Sets the text to be shown if no match can be found.
This calls setInvalidNoMatchText()
Sets the regular expression for valid input.
Sets the perl regular expression
expr.
Validates the given input.
The input is considered valid only when it is blank for a non-mandatory field, or matches the regular expression.
Reimplemented from Wt::WValidator.
Reimplemented in Wt::WTimeValidator. | https://webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WRegExpValidator.html | CC-MAIN-2021-31 | refinedweb | 235 | 50.12 |
Hi,
I am a first-time user of IntelliJ IDEA CE 15 and I am trying to compile the handlebars plugin from
It cannot find this import:
import com.intellij.lang.javascript.JavascriptLanguage;
Which plugin do I have to install? The readme files don't mention this.
Thanks,
Roland
You must add JAR(s) from JavaScript-plugin (from corresponding IJ installation) to your "IntelliJ Platform SDK".
Thanks for you reply, Yann. That is the same answer as I found on the internet. Unfortunately I have no folder "javascript" or "JavascriptLanguage" in C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 15.0.3\plugins
In Settings - Plugins I find no Javascript, only IntelliLang, which is installed. After "Browse repositories" I find many plugins, which one to install???
Indeed, the JavaScript plugin is only available in IntelliJ IDEA Ultimate Edition, and in WebStorm/PhpStorm.
Thanks. I was afraid of this. Fortunately the import file is used only in test classes. I commented out the code and could compile the plugin.
That's to say, you can't develop a JavaScript plugin with IntelliJ CE? How about if I import that jar which lives in WebStorm's installation path?
26 January 2012 09:49 [Source: ICIS news]
SINGAPORE (ICIS)--
The unit has a nameplate propylene capacity of 250,000-260,000 tonnes/year, but usually produces around 200,000 tonnes/year of the monomer.
The RCC at Balongan had been running at 40-45% of capacity since 16 January because of technical issues.
The ongoing repair works are expected to be completed by 1 February, the source said.
However, he said that the RCC would be shut for scheduled maintenance starting on 1 March for around 20 days.
Because of the upcoming turnaround, Pertamina is looking to import 3-5 propylene spot cargoes for any March arrival but there are no offers so far, | http://www.icis.com/Articles/2012/01/26/9526988/indonesias-pertamina-keeps-rcc-op-rate-at-50-amid.html | CC-MAIN-2013-48 | refinedweb | 116 | 53.71 |
My original plan was it to write about the rules of the C++ Core Guidelines to the regex and chrono library, but besides the subsection title, there is no content available. I already wrote a few posts about time functionality. So I'm done. Today, I fill the gap and write about the regex library.
Okay, here are my rules for regular expressions.
Regular expressions are powerful but also sometimes expensive and complicated machinery to work with text. When the interface of a std::string or the algorithms of the Standard Template Library can do the job, use them.
Okay, but when should you use regular expressions? Here are the typical use-cases.
I hope you noticed it. The operations work on text patterns and not on text.
First, you should use raw strings to write your regular expression.
First of all, for simplicity purposes, I will break the previous rule.
The regular expression for the text C++ is quite ugly: C\\+\\+. You have to use two backslashes for each + sign. First, the + sign is a special character in a regular expression. Second, the backslash is a special character in a string. Therefore one backslash escapes the + sign, the other backslash escapes the backslash. By using a raw string literal, the second backslash is no longer necessary, because the backslash is not interpreted in the string.
The following short example may not convince you.
std::string regExpr("C\\+\\+");
std::string regExprRaw(R"(C\+\+)");
Both strings stand for a regular expression which matches the text C++. In particular, the escaped string "C\\+\\+" is quite ugly to read. R"(Raw String)" delimits a raw string. By the way, regular expressions and path names on Windows such as "C:\temp\newFile.txt" are typical use-cases for raw strings.
Imagine, you want to search for a floating point number in a text, which you identify by the following sequence of signs: Tabulator FloatingPointNumber Tabulator \\DELIMITER. Here is a concrete example for this pattern: "\t5.5\t\\DELIMITER".
The following program uses a regular expression encode in a string and in a raw string to match this pattern.
// regexSearchFloatingPoint.cpp
#include <regex>
#include <iostream>
#include <string>
int main(){
std::cout << std::endl;
std::string text = "A text with floating pointer number \t5.5\t\\DELIMITER and more text.";
std::cout << text << std::endl;
std::cout << std::endl;
std::regex rgx("\\t[0-9]+\\.[0-9]+\\t\\\\DELIMITER"); // (1)
std::regex rgxRaw(R"(\t[0-9]+\.[0-9]+\t\\DELIMITER)"); // (2)
if (std::regex_search(text, rgx)) std::cout << "found with rgx" << std::endl;
if (std::regex_search(text, rgxRaw)) std::cout << "found with rgxRaw" << std::endl;
std::cout << std::endl;
}
The regular expression rgx("\\t[0-9]+\\.[0-9]+\\t\\\\DELIMITER") is pretty ugly. To find n "\"-symbols (line 1), you have to write 2 * n "\"-symbols. In contrast, using a raw string to define the regular expression makes it possible to express the pattern you are looking for directly in the regular expression: rgxRaw(R"(\t[0-9]+\.[0-9]+\t\\DELIMITER)") (line 2). The subexpression [0-9]+\.[0-9]+ of the regular expression stands for a floating point number: at least one digit [0-9]+, followed by a dot \., followed by at least one digit [0-9]+.
Just for completeness, the output of the program.
Honestly, this example was rather simple. Most of the time, you want to analyse your match result.
Using a regular expression typically consists of three steps; this holds for std::regex_search and std::regex_match alike: (1) define the regular expression, (2) store the result of the search, and (3) analyze the result.
Let's see what that means. This time I want to find the first e-mail address in a text. The following regular expression (the RFC 5322 official standard) does not find all e-mail addresses, because they are very irregular.
(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")
@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])
For readability, I made a line break in the regular expression. The first line matches the local part and the second line the domain part of the e-mail address. My program uses a simpler regular expression for matching an e-mail address. It's not perfect, but it will do its job. Additionally, I want to match the local part and the domain part of my e-mail address.
Here we are:
// regexSearchEmail.cpp
#include <regex>
#include <iostream>
#include <string>
int main(){
std::cout << std::endl;
  std::string emailText = "A text with an email address: john.doe@example.com.";
// (1)
std::string regExprStr(R"(([\w.%+-]+)@([\w.-]+\.[a-zA-Z]{2,4}))");
std::regex rgx(regExprStr);
// (2)
std::smatch smatch;
if (std::regex_search(emailText, smatch, rgx)){
// (3)
std::cout << "Text: " << emailText << std::endl;
std::cout << std::endl;
std::cout << "Before the email address: " << smatch.prefix() << std::endl;
std::cout << "After the email address: " << smatch.suffix() << std::endl;
std::cout << std::endl;
std::cout << "Length of email adress: " << smatch.length() << std::endl;
std::cout << std::endl;
std::cout << "Email address: " << smatch[0] << std::endl; // (6)
std::cout << "Local part: " << smatch[1] << std::endl; // (4)
std::cout << "Domain name: " << smatch[2] << std::endl; // (5)
}
std::cout << std::endl;
}
Lines 1, 2, and 3 mark the beginning of the three typical steps of the usage of a regular expression. The regular expression defined in step (1) needs a few additional words.
Here it is: ([\w.%+-]+)@([\w.-]+\.[a-zA-Z]{2,4})
The output of the program shows the detailed analysis.
I'm not done. There is more to write about regular expressions in my next post. I will write about various types of text and iterating through all matches.
Local Variables in Functions
Local variables only ever define names within their scope, and these names do not appear in the surrounding class or module. Consider this function:
def f(x, y, z):
    a = x + y + z
    print "Done!"
    return a
The names x, y and z come to life as the function f is called, pointing to the objects supplied as arguments to f. When f is finished executing, the names x, y and z are forgotten, but the objects they pointed to might still be around, of course. Here, we use a variable a in the function; this also gets forgotten at the end of the function, but the object a pointed to is returned to the caller.
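A small Python 3 sketch of that lifecycle (using Python 3 print syntax rather than the Python 2 style above): the returned object survives, while the local name does not.

```python
def f(x, y, z):
    a = x + y + z
    return a

result = f(1, 2, 3)
print(result)  # -> 6

# The name 'a' was local to f and no longer exists here:
try:
    a
except NameError:
    print("the name 'a' is gone")
```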
In short, you always get local variables in Python unless you use the global keyword. You can always use del to make Python forget names, but this is done automatically at the end of function execution for all names in the local scope of that function.
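And a minimal illustration of the global keyword mentioned above (the names are my own):

```python
counter = 0

def bump():
    # Without this declaration, 'counter += 1' would raise UnboundLocalError,
    # because assignment inside the function makes the name local.
    global counter
    counter += 1

bump()
bump()
print(counter)  # -> 2
```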
CategoryAskingForHelp CategoryAskingForHelpAnswered | https://wiki.python.org/moin/Asking%20for%20Help/How%20do%20I%20use%20local%20variables%20in%20methods%3F?action=print | CC-MAIN-2018-09 | refinedweb | 168 | 67.18 |
Now, we will see how to use the LSTM network to generate Zayn Malik's song lyrics. The dataset, which has a collection of Zayn's song lyrics, can be downloaded from here ().
First, we will import the necessary libraries:
import tensorflow as tf
import numpy as np
Now, we will read our file containing the song lyrics:
with open("Zayn_Lyrics.txt", "r") as f:
    data = f.read()
    data = data.replace('\n', '')
    data = data.lower()
Let's see what we have in our data:
data[:50]
"now i'm on the edge can't find my way it's inside "
Then, we store all the characters in the all_chars ... | https://www.oreilly.com/library/view/hands-on-reinforcement-learning/9781788836524/9084fc18-8040-4528-b939-3455fa652915.xhtml | CC-MAIN-2019-39 | refinedweb | 108 | 72.97 |
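The excerpt breaks off here; the step it describes is typically a vocabulary build like the sketch below (the variable names other than data are assumptions, not from the book):

```python
# Rebuild the 50-character sample shown above so the sketch is self-contained.
data = "now i'm on the edge can't find my way it's inside "

all_chars = list(set(data))  # the distinct characters in the lyrics
char_to_ix = {ch: i for i, ch in enumerate(all_chars)}
ix_to_char = {i: ch for i, ch in enumerate(all_chars)}

print(len(all_chars))  # -> 17
```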
26 August 2010 18:44 [Source: ICIS news]
“Global rosin production dropped to 1.13m short tons last year after it reached a record high production of 1.48m tons in 2007,” said Donald Stauffer, president of Pennsylvania, US-based consulting firm International Development Associates (IDA).
IDA’s newly released 2010 Study of International Rosin Markets reported a 33% decline in Chinese rosin production from 826,000 tons in 2007 to 553,000 tons in 2009.
“Although gum rosin is expected to increase by 10-20% this year, long-term Chinese production is expected to decline because of industrialisation of lands formerly occupied by pine woods and increasing shortage of available labour,” said Stauffer.
Rosin is a natural form of resin that is obtained from pine and other coniferous plants. According to IDA, gum rosin accounted for 67% of the global total rosin production.
Tall oil rosin (TOR) accounted for 32% while wood rosin was only 1%.
Stauffer said TOR supply was also tightening as availability of crude tall oil, the supply source for TOR, has been on the decline because of the closure of pulp mills in the US and Europe.
“At the same time, consumption of rosin and its derivatives, especially for adhesives and printing ink, continues its growth at 2-4%/year. There are currently no visible new sources for rosin production other than in
Hydrocarbon resins would have to supply the market gap, he said.
Hello, readers! In this article, we will learn the universal NumPy Set Operations in Python. So, let us get started! 🙂
Useful Numpy set operations
We’re going over 5 useful numpy set operations in this article.
numpy.unique(array)
numpy.union1d(array,array)
numpy.intersect1d(array,array,assume_unique)
np.setdiff1d(arr1, arr2, assume_unique=True)
np.setxor1d(arr1, arr2, assume_unique=True)
Let’s check these operations individually.
1. Unique values from a NumPy Array
This NumPy set operation helps us find the unique values among the elements of an array. The numpy.unique() function skips all duplicate values and returns only the unique elements of the array.
Syntax:
numpy.unique(array)
Example:
In this example, we have used the unique() function to select and display the unique elements of the array. Thus, it skips the duplicate value 30 and selects it only once.
import numpy as np arr = np.array([30,60,90,30,100]) data = np.unique(arr) print(data)
Output:
[ 30 60 90 100]
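As a small extension of the example above, unique() can also report how often each value occurred, via its return_counts flag:

```python
import numpy as np

arr = np.array([30, 60, 90, 30, 100])
values, counts = np.unique(arr, return_counts=True)

print(values.tolist())  # -> [30, 60, 90, 100]
print(counts.tolist())  # -> [2, 1, 1, 1]
```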
2. Set union operation on NumPy Array
NumPy offers us the universal union1d() function that performs the UNION operation on both arrays. That is, it clubs the values from both arrays and returns them. This process skips duplicate values, including only a single occurrence of each duplicated element in the UNION of the arrays.
Syntax:
numpy.union1d(array,array)
Example:
import numpy as np arr1 = np.array([30,60,90,30,100]) arr2 = np.array([1,2,3,60,30]) data = np.union1d(arr1,arr2) print(data)
Output:
[ 1 2 3 30 60 90 100]
3. Set Intersection operation on NumPy array
The intersect1d() function enables us to perform INTERSECTION operation on the arrays. That is, it selects and represents the common elements from both the arrays.
Syntax:
numpy.intersect1d(array,array,assume_unique)
- assume_unique: If set to True, NumPy assumes both input arrays already contain unique values and skips deduplication, so duplicate common values can appear in the result. If set to False (the default), both arrays are deduplicated first, so each common value appears only once.
Example:
Here, since we have set assume_unique to True, the intersection operation is performed including the duplicate values, i.e. it selects the common values from both arrays including the duplicates of those common elements.
import numpy as np arr1 = np.array([30,60,90,30,100]) arr2 = np.array([1,2,3,60,30]) data = np.intersect1d(arr1, arr2, assume_unique=True) print(data)
Output:
[30 30 60]
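For contrast, leaving assume_unique at its default of False deduplicates both inputs first, so each common value appears only once:

```python
import numpy as np

arr1 = np.array([30, 60, 90, 30, 100])
arr2 = np.array([1, 2, 3, 60, 30])

# Default behaviour: inputs are deduplicated before intersecting.
data = np.intersect1d(arr1, arr2)
print(data.tolist())  # -> [30, 60]
```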
4. Finding uncommon values with NumPy Array
With the setdiff1d() function, we can find and return all the elements of the 1st array that are not present in the 2nd array, according to the parameters passed to the function.
import numpy as np arr1 = np.array([30,60,90,30,100]) arr2 = np.array([1,2,3,60,30]) data = np.setdiff1d(arr1, arr2, assume_unique=True) print(data)
Output:
[ 90 100]
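Note that setdiff1d() is not symmetric; swapping the arguments yields the elements unique to the second array:

```python
import numpy as np

arr1 = np.array([30, 60, 90, 30, 100])
arr2 = np.array([1, 2, 3, 60, 30])

only_in_arr1 = np.setdiff1d(arr1, arr2)
only_in_arr2 = np.setdiff1d(arr2, arr1)

print(only_in_arr1.tolist())  # -> [90, 100]
print(only_in_arr2.tolist())  # -> [1, 2, 3]
```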
5. Symmetric Differences
With the setxor1d() function, we can calculate the symmetric difference between the array elements. That is, it selects and returns all the elements that are not common to both arrays. Thus, it omits all the common values from the arrays and returns the values distinct to each array.
Example:
import numpy as np arr1 = np.array([30,60,90,30,100]) arr2 = np.array([1,2,3,60,30]) data = np.setxor1d(arr1, arr2, assume_unique=True) print(data)
Output:
[ 1 2 3 90 100]
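One way to see this relation numerically: the symmetric difference equals the union minus the intersection.

```python
import numpy as np

arr1 = np.array([30, 60, 90, 30, 100])
arr2 = np.array([1, 2, 3, 60, 30])

sym = np.setxor1d(arr1, arr2)
via_union = np.setdiff1d(np.union1d(arr1, arr2), np.intersect1d(arr1, arr2))

print(sym.tolist())                          # -> [1, 2, 3, 90, 100]
print(bool(np.array_equal(sym, via_union)))  # -> True
```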
Conclusion
By this, we have come to the end of this topic. Feel free to comment below, in case you come across any question. For more such posts related to Python programming, Stay tuned with us.
Till then, Happy Learning!! 🙂 | https://www.askpython.com/python/numpy-set-operations | CC-MAIN-2021-31 | refinedweb | 603 | 51.44 |
Overview
Intro
If you need a quick-and-dirty HTTP server that doesn't need fancy configuration, try some of these one-line HTTP servers in languages like Python, Ruby, and PHP. Some examples include a small script that allows you to embed the server into your program and add more customization. These are not intended for production use.
These examples are good for:
- Temporary test servers
- Development of web apps
- Quickly sharing or moving files over HTTP
- Testing reverse proxies
These servers do not take security in to consideration. Most of them will serve your current directory, unencrypted, with a directory listing, on all public interfaces (0.0.0.0). That means with the default settings they will publicly serve up all your information in plain-text. Examples are provided whenever possible on how to specify the bind interface and port to prevent exposing data publicly. Be aware of the risks when using these.
If you want to setup a production server, I recommend Apache httpd or nginx. I have an Nginx tutorial that covers installation, usage, and basic configuration options.
Production servers should also use encryption (SSL). Check out my OpenSSL self-signed certificate tutorial and my Let's Encrypt SSL certificate tutorial for instructions on how to obtain SSL certificates and use them with a production quality web server like Apache httpd or nginx.
Quick examples
Run these in the shell or command prompt. They will serve the current directory, usually with a directory listing if an index.html is not present. The default ports vary but are typically 8000 for Python or 8080 for Ruby. Most listen on 0.0.0.0.
python3 -m http.server
python2 -m SimpleHTTPServer
php -S 0.0.0.0:8000
ruby -run -e httpd
Python 3
The Python 3 standard library comes with the http.server module. You can invoke the module directly with Python using a single command or you can use the HTTP server within your own Python application. Both examples are demonstrated below. For more information about http.server check out the official documentation at.
# Default listen: 0.0.0.0:8000
# Provides directory listing if no index.html present
python3 -m http.server
# Specify listen host and port information
python3 -m http.server --bind 0.0.0.0 9999
# In 3.7+ you can specify directory too
python3 -m http.server --directory /path/to/serve
To use the http.server in your own program, you can import the http.server and socketserver modules. The handler is a class that takes the incoming TCP data and processes it like an HTTP request. It implements do_GET() and do_POST() methods for example. Python comes with the SimpleHTTPRequestHandler that is used here.
#
import http.server
import socketserver
Handler = http.server.SimpleHTTPRequestHandler
with socketserver.TCPServer(("0.0.0.0", 9999), Handler) as httpd:
httpd.serve_forever()
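The handler class is the place to hook in custom behavior. As a rough sketch (the class name HelloHandler and the canned reply are my own inventions, not part of the standard library), here is a subclass that overrides do_GET() to return a fixed response instead of serving files, bound to port 0 so the OS picks a free port:

```python
import http.server
import socketserver
import threading
import urllib.request

class HelloHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        # Serve a fixed body instead of files from the current directory
        body = b"hello from a custom handler"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port
httpd = socketserver.TCPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

host, port = httpd.server_address
with urllib.request.urlopen(f"http://{host}:{port}/") as resp:
    reply = resp.read().decode()
print(reply)

httpd.shutdown()
httpd.server_close()
```

The same idea works with BaseHTTPRequestHandler if you don't want any of the file-serving behavior at all.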
Python 2
The Python 2 standard library comes with the SimpleHTTPServer module. You can invoke the module directly with Python using a single command or you can use the HTTP server within your own Python application. Both examples are demonstrated below. Read more at.
# Default listen: 0.0.0.0:8000
# Provides directory listing if no index.html present
python2 -m SimpleHTTPServer
# Specify port, but can't change the 0.0.0.0 bind or directory
python2 -m SimpleHTTPServer 8888
To embed the HTTP server inside your own Python 2 application, use the example below.
#
import SimpleHTTPServer
import SocketServer
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("0.0.0.0", 9999), Handler)
httpd.serve_forever()
PHP
The PHP server will serve static files just fine, but it will also process PHP scripts allowing you to create dynamic pages that execute code when they are visited in the browser. The PHP built-in server has a few more features and configuration than the simple Python servers, but it is still not intended for production use.
# Serves current directory, no directory listing
php -S localhost:9999
# Specify path to serve
php -S localhost:9999 -t /path/to/serve
# Print full usage/help info
php -S
Ruby
Ruby comes with the WEBrick module as part of the standard library. It is an HTTP server that has quite a few configurable features. It is probably the most configurable and useful out of all the language examples provided so far. The documentation for WEBrick can be found in the standard library documentation. For example, the current latest version is available at.
This also works with JRuby if you are in a Java environment. Just replace ruby with jruby. Since WEBrick is part of the standard library, it should work out-of-the-box with JRuby.
WEBrick even has the concept of servlets, like in Java. It also supports virtual hosts and even authentication, like HTTP basic auth. Those are out of scope here, but it's worth mentioning how many features WEBrick has compared to all the other examples. WEBrick also supports SSL, but that's not going to be covered here.
# Defaults to 0.0.0.0:8080
# Serves current directory
# Provides directory listing if no index.html
ruby -run -e httpd
# Specify port and directory to serve
ruby -run -e httpd /path/to/serve -p 9999
To use the WEBrick server within your own Ruby application, you need to require the webrick module, instantiate a new HTTPServer, and then call start. Optionally you can capture the interrupt signal (SIGINT) that is sent when a user presses CTRL-C to make it easy to kill the process from the command line.
require 'webrick'
server = WEBrick::HTTPServer.new(
:Port => 9999,
:DocumentRoot => Dir.pwd
)
# Trigger server to stop on CTRL-C (SIGINT interrupt)
# Without this, CTRL-C still works, but
# you get an unclean shutdown
trap('INT') { server.stop }
server.start
You could technically squish it all onto a single line. The -r requires the webrick module and the -e tells Ruby to execute the string provided (the code).

ruby -r webrick -e "server = WEBrick::HTTPServer.new(:Port => 9999, :DocumentRoot => Dir.pwd); trap('INT') { server.shutdown }; server.start"
OpenSSL
You can use OpenSSL to run an HTTP server with SSL. First you will need an SSL certificate. You can obtain one for free from LetsEncrypt, a trusted certificate authority, or generate your own.
For more information on creating self-signed certificates, check out my tutorial Creating self-signed SSL certificates with OpenSSL. For more information about getting a signed certificate from LetsEncrypt, check out my tutorial LetsEncrypt Free SSL Certificate Tutorial.
Here is a simple example of generating your own certificate:
openssl req -newkey rsa:2048 -nodes -keyout privkey.pem -x509 -days 36500 -out certificate.pem
# Or without the interactive prompts, provide values:
openssl \
req \
-newkey rsa:2048 -nodes \
-keyout privkey.pem \
-x509 -days 36500 -out certificate.pem \
-subj "/C=US/ST=NRW/L=Earth/O=CompanyName/OU=IT/CN="
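As a quick sanity check (assuming openssl is on your PATH; the /CN=localhost subject and one-day validity here are placeholder choices of mine), you can generate a throwaway certificate the same way and confirm that the key and certificate actually belong together:

```shell
# Generate a throwaway key and self-signed certificate non-interactively
openssl req -newkey rsa:2048 -nodes -keyout privkey.pem \
    -x509 -days 1 -out certificate.pem -subj "/CN=localhost" 2>/dev/null

# Print the subject of the generated certificate
openssl x509 -in certificate.pem -noout -subject

# Verify that the private key matches the certificate:
# the two moduli must hash to the same value
openssl x509 -noout -modulus -in certificate.pem | openssl md5
openssl rsa  -noout -modulus -in privkey.pem     | openssl md5
```

If the two md5 lines differ, you are pairing a key with the wrong certificate.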
To run the server, provide it the certificate and the port you want to bind to. The -WWW flag tells it to serve the current directory as static files. Swapping that for a -www flag would provide a status page with SSL info only. Swapping it out for a -HTTP flag would be similar to -WWW where it serves static files, but it would also assume that the files being served contained the full HTTP response including headers and all.
# There is no directory index listing, so you must visit a specific file
# e.g.
openssl s_server -key privkey.pem -cert certificate.pem -accept 9999 -WWW
To get information about more options, use openssl s_server --help or man s_server.
Conclusion
If you've read everything, you should be able to run simple HTTP servers that serve a directory. You should also understand how some examples will serve index.html files while others will serve directory listings, or both. Before using them you should understand how binding to localhost is different from binding to 0.0.0.0, and understand that there is no SSL or encryption in these servers. These examples are not intended for production use; they are for development and convenience purposes. Use the one-line commands for quick testing and hosting, or use the built-in libraries to add an HTTP server to your own application.
Referencing Common Values Between Apps/Projects
Date Published: 23 July 2017
A pretty common scenario in building real world business software is the need to share certain pieces of information between multiple projects or applications. Frequently these fall into the category of configuration settings, and might include things like:
- Resource or CDN URLs or base URLs
- Connection Strings
- Public/Private Keys and Tokens
Some of these are more sensitive than others, obviously, and you should definitely strive to avoid storing database credentials in source control. In many cases, different apps shouldn’t be sharing a central database, anyway, as that’s likely to lead to the One Database To Rule Them All antipattern. Leaving aside databases and connection strings, how should you share common pieces of information between projects? There are several patterns you can consider.
In Code
The first pattern is simply to share the data in code. You might have a Constants or Settings class that is literally copied and pasted between projects. Or it might belong to one project that is referenced by another. You could compile it into a DLL that all projects reference. And of course, taking this to its logical next step, you can create a NuGet package that includes this hardcoded value. For example:
public static class CloudSettings
{
    public static string StaticResourcesUrlPrefix { get; } = "";

    // more settings go here
}
The benefit of this approach is that it’s very simple. The values are tracked in source control, which is a good thing if they’re not sensitive (not so good if they’re meant to be secret). Values are easily discovered by developers and can be updated easily in the codebase. However, these settings are probably not visible or configurable by operations staff, and any changes to settings must be done via a deployment, as opposed to something lighter-weight. Code-based values also aren’t as easily changed from one environment to the next, so promoting code from dev to test to stage to prod environments may be more difficult. This can be overcome with conditional logic or precompiler directives, but either of those degrades the simplicity of this approach (its chief advantage).
Even if you’re not hard-coding shared setting values, it can be worthwhile to share a library containing the shared setting keys. This might take the form of just constant values, as described here, or ideally you can use interfaces to describe your settings values in a strongly-typed manner, and use a convention to convert properties on your interfaces into settings keys.
To convert the above bit of code into an interface, just make this small change:
public interface ICloudSettings
{
    string StaticResourcesUrlPrefix { get; }

    // more settings go here
}

public class CloudSettings : ICloudSettings
{
    public string StaticResourcesUrlPrefix { get; } = "";

    // more settings go here
}
Configuration
Probably the most common approach to solving this problem is to use configuration. In this case, you might simply add a key representing the setting in question to your project’s settings file, along with the appropriate value. Once you’ve done this once, it’s pretty easy to copy-paste this same setting into other environment-specific files or other projects’ settings files. This approach works well and offers more flexibility than the hardcoded-in-code approach. Be sure to follow these tips when working with configuration files in .NET, though:
- Apply Interface Segregation to Config Files
- Use Custom Configuration Section Handlers (pre .NET Core)
- Refactor Static Config Access
The biggest downside to the configuration approach is that over time you may end up with a ton of configuration settings, possibly without much cohesion between them. They’re also not quite as easy to update or automate in a cloud environment as something like environment variables, discussed next.
Environment Variables
A third approach is to store settings in environment variables. Environment variables are easy to update when using cloud hosting services or Docker containers. They work well cross-platform and they're very well-supported in .NET Core. The default code templates for ASP.NET Core applications, at least in the 1.x timeframe, use both configuration files and environment variables for app settings. The way it's configured by default, for a given setting key, the app will first check if there is an environment variable. If there is, it uses that value. Otherwise, it falls back to looking in settings file(s) for a value that matches the given key. At some of my clients we have implemented similar systems for .NET 4.6 apps. With this approach, you can also easily vary the behavior based on the environment. For instance, if you want to ensure your production environment uses environment variables, but it's easier for your dev team to use config files, you could have your code throw an exception when the app is running in production and a value isn't found in an environment variable. At dev time, values not in environment variables could fall back to a local config file.
Hybrid Patterns
None of these approaches are exclusive – you can mix and match them to suit your needs. For example, it’s pretty common to combine environment variables with configuration settings, with one falling back to the other. You can take this a step further and specify default values in code, to use when a value is not found in either configuration files or environment variables.
These are design patterns, not absolute solutions. Use your experience to come up with a solution that solves your problems in the simplest way possible. If you’re not sure of the best approach, ask online or enlist the help of an expert. An ounce of bad design prevention is worth months of refactoring and rewriting to fix a poor design decision.
Recommendations
Start with something simple; grow complexity only if/when it becomes necessary:
- Start with a hardcoded string.
- Move that to a constant.
- Move that to a strongly typed settings class.
- Move that to an interface.
- Implement the interface to use config, environment variables, or whatever you need.
Avoid tightly coupling to a particular configuration system.
Avoid static access to any configuration system if it impacts testability. Think about how you might unit test different configuration options at runtime. If it doesn't impact testability, it may be fine, but in general watch out for static cling in your code.
As usual while waiting for the next release - don't forget to check the nightly builds in the forum.
Elsa
I have tried to compile Elsa on mingw32; it compiles smbase right, but the rest fails, mostly because of flex/bison stuff.
mingw32: Elsa has not been ported to mingw32, though smbase has limited support for it.
According to that article: "ANTLR is a parser generator which works on predicated LL(k) grammars. PCCTS is a C++ grammar for ANTLR (actually, a modified version of ANTLR) that supports all C++ features except namespaces. And Elsa." That's from 2001. I don't know if ANTLR's C++ support handles namespaces now. But Elsa can parse them, and it's tested with parsing Mozilla, Qt, ACE, and STL source code.
So: ANTLR C++ can be compiled right now in GCC, mingw32, and MSVC 6 & 7. It's written using only STL code, so it's very portable. Can it parse namespaces?
Elsa seems more complete and better overall, and can be compiled in GCC/cygwin. It's written using portable code also. The fact that it doesn't work out-of-the-box yet on mingw32 is because of the build system (it requires flex, perl, and bison). So far I would choose Elsa if it worked right now on mingw32.
In this post, we'll use JetBrains Rider with a .NET Core Console application.
What is JetBrains Rider?
JetBrains Rider is a cross-platform .NET IDE based on IntelliJ and ReSharper. If you've used WebStorm or customized Visual Studio before, then you'll feel right at home while installing Rider. Everything from the themes to the keyboard maps and extensions will feel familiar to .NET developers.
I chose to take a look at Rider because I wanted an alternative IDE to Visual Studio that helps you write .NET Core applications. Don't confuse this with Visual Studio Code, which we took a look at in the last post, as it's an editor.
Keep in mind that Rider is an early access preview and anything is subject to change.
Install JetBrains Rider
Before we do anything, make sure that you have installed JetBrains Rider. If you haven’t downloaded it before, then you can find it here. I downloaded the EAP version 18. After it is installed, your powerful new IDE presents you with the following menu:
Go ahead and click on New Solution to see the following templates:
Nice! The .NET Core templates come free without doing anything!
Diving In
Select the .NET Core Console Application template and take a look at the files in Solution Explorer:
Notice that the same files are created, just as if you had run the dotnet new console command like we did before. There are a few differences in the Program.cs file that we'll quickly examine:
JetBrains Rider uses the following code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace ConsoleApp
{
    internal class Program
    {
        public static void Main(string[] args)
        {
        }
    }
}
If you create a new console app from the command line then it contains the following code :
using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
As you can see Rider contains a few extra using statements, has a different namespace, marks the Program class as internal and does not have the “Hello World” stub.
Explore the IDE
If you run the app by hitting the play button, then you'll notice the following configuration screen pops up:
Notice that it’s using .NET Core as the framework as discussed in previous blog post and that the output is a .DLL.
Go ahead and hit apply and then run, and you'll see that it runs the following command:
"C:/Program Files/dotnet/dotnet.exe" C:/Users/mbcrump/RiderProjects/ConsoleApp1/ConsoleApp1/bin/Debug/netcoreapp1.0/ConsoleApp1.dll
No surprise here. It called dotnet and passed in the .dll that it had created. We don’t have an output as there is no code.
Tip: If you ever want to know which SDK Rider is using, you can always open a command prompt, navigate to Program Files, and run dotnet --info. I found this helpful as I have multiple SDKs installed.
You can also debug the app by setting your breakpoint, clicking the debug icon, and examining the output window, as shown below:
Wrap-up
I’m really enjoying Rider so far and it should get a lot better as the final release is around the corner. As always, thanks for reading and smash one of those share buttons to give this post some love if you found it helpful. Also, feel free to leave a comment below or follow me on twitter for daily links and tips. | http://michaelcrump.net/part10-aspnetcore/ | CC-MAIN-2017-26 | refinedweb | 579 | 64.3 |
#include <PG_FactoryRegistry.h>
Collaboration diagram for TAO::PG_FactoryRegistry:
[private]
State of the quit process
"FactoryRegistry"
Constructor.
[virtual]
virtual Destructor
Prepare to exit.
Identify this object.
Processing to happen when the ORB's event loop is idle.
alternative init using designated poa
Initialize this object.
Parse command line arguments.
An object reference to this object. Duplicated by the call so it may (and probably should) be assigned to a _var.
Write this factory's IOR to a file
A human-readable string to distinguish this from other Notifiers.
Protect internal state. Mutex should be locked by corba methods, or by external (public) methods before calling implementation methods. Implementation methods should assume the mutex is locked if necessary.
IOR of this object as assigned by poa.
A file to which the factory's IOR should be written.
A name to be used to register the factory with the name service.
The CORBA object id assigned to this object.
The orb
The POA used to activate this object.
Quit on idle flag.
State of the quit process
This objects identity as a CORBA object. | https://www.dre.vanderbilt.edu/Doxygen/5.4.3/html/tao/portablegroup/classTAO_1_1PG__FactoryRegistry.html | CC-MAIN-2022-40 | refinedweb | 182 | 53.58 |
Synopsis
info exists varName
Description

Returns 1 if the variable named varName exists in the current context according to the rules of name resolution, and has been defined by being given a value; returns 0 otherwise. info exists returns 0 for variables that exist but are undefined. This can happen, for example, with trace, because if a trace is set on a nonexistent variable, trace will create that variable but leave it undefined. Although info exists doesn't detect undefined variables, namespace which -variable does. In that case, it's usually a good idea to pass a fully-qualified variable name to namespace which -variable.

Examples:
info exists a    # -> 0
set a 1          # -> 1
info exists a    # -> 1
info exists b    # -> 0
set b(2) 2       # -> 2
info exists b    # -> 1
info exists b(1) # -> 0
info exists b(2) # -> 1
info exists $a   # -> 0

That last command is an example of a common mistake: using $varname instead of just varname. Since the value of $a is 1, the command is asking if a variable named 1 exists.

It can be useful to store the name of a variable in another variable:
foreach var {a b c} { if {[info exists $var]} { puts "$var does indeed exist" } else { puts "$var sadly does not exist" } }
KPV: Another thing to keep in mind is that linking a variable with upvar can do funny things with the existence of variables.
% set a 1
1
% upvar #0 a b
% info exists b
1
% unset a
% info exists b
0
% set a 1
1
% info exists b
1
VI 2003-12-21: I remember reading that info exists is slow on some versions of Tcl on large arrays. Anybody have more info on that?
DKF 2007-12-29: From 8.5 onwards, info exists is byte-compiled.
See Also
- info
- namespace which -variable | http://wiki.tcl.tk/1185 | CC-MAIN-2015-22 | refinedweb | 307 | 57.1 |
I'm working on a SOAP message to send to a client system.
I must use the client's predefined data types, specified in their SOAP definition, to send my message.
There are a lot of arrays and enumerators used, and my code gives the following error:
Error 1 Cannot implicitly convert type 'UpdateRatePs._Default.Rate.AvailAppType' to 'UpdateRatePs.IService.AvailAppType?'.
An explicit conversion exists (are you missing a cast?)
Upon investigation of the error, I found that it is basically telling me that the data types in my code and the web service are different, but I'm not able to figure out how or where the discrepancy lies because they look the same to me.
I have looked everywhere to fix this problem, even posted on a couple of forums, to no avail.
I've looked up everything to do with enumerators, including their use in web services and in arrays, as well as converting enumerators to arrays.
Tried all the examples suggested.
Nothing works.
The error is still the same.
Also tried the Parse and TryParse methods for enum conversions, with no luck.
Here is a portion of my code where the error points:
protected void SendSoapMessage()
{
    Rate.AvailabilityApplicationType val = Rate.AvailAppType.SET;
    ureq.RatePackages[1].Rates[0].AvailAppType = val;
Classes/ Objects defined below from webservice data structure:
public class UpdateRatePs
{
    public string Username;
    public string Password;
    public UpdateRateP[] RatePackages;
}

public class UpdateRateP
{
    public Int64 RatePackageId;
    public Rate[] Rates;
}

public class Rate
{
    public enum AvailAppType { SET, INCREASE, DECREASE };
}
Can somebody please help me fix this. | https://www.daniweb.com/programming/software-development/threads/418603/enumerator-error-in-array-for-web-service | CC-MAIN-2020-29 | refinedweb | 252 | 53.1 |
YPUPDATE(3N) YPUPDATE(3N)
NAME
yp_update - changes NIS information
SYNOPSIS
#include <rpcsvc/ypclnt.h>
yp_update(domain, map, ypop, key, keylen, data, datalen)
char *domain;
char *map;
unsigned ypop;
char *key;
int keylen;
char *data;
int datalen;
DESCRIPTION
yp_update() is used to make changes to the Network Information Service
(NIS) database. The syntax is the same as that of yp_match() (see
ypclnt(3N)) except for the extra parameter ypop which may take on one
of four values. If it is YPOP_CHANGE then the data associated with the
key will be changed to the new value. If the key is not found in the
database, then yp_update() returns YPERR_KEY. If ypop has the value
YPOP_INSERT then the key-value pair will be inserted into the database.
The error YPERR_KEY is returned if the key already exists in the database.
This routine depends upon secure RPC, and will not work unless the net-
work is running secure RPC.
SEE ALSO
ypclnt(3N)

1987                                                           YPUPDATE(3N)
This goes out to all the veteran Ruby on Rails developers: those who have the RESTful scaffold memorized down to their fingertips; those who can write the tests for it using Cucumber with Webrat, Rails integration tests with Webrat, Rails controller tests with RSpec, Test::Unit, or Shoulda, using mocking, stubbing, test spies, or old-school assertion style:
Stop unit testing the RESTful scaffold
Do write an integration test with Webrat (and Cucumber or Rails integration tests or some crazy script using Hubris), but abstract that out as much as you can. It’s the scaffold. You know it works.
You should write a unit test for your RESTful controller as soon as it deviates, and you should test-drive that deviation both in the integration test and the unit test.
Here, some code. First the #index action, non-tested:
def index
  @posts = Post.all
end
It works. No unit test will uncover a bug in this. The integration test may - for example, Post may have no database table yet, or it may not derive from ActiveRecord::Base, or it may redefine all. It is unlikely that a unit test for the controller will fail due to a regression.
So now we’ll make a change: posts should be listed alphabetically.
Feature: Posts

  # ... existing scenarios that test everything go here ...

  Scenario: Viewing posts alphabetically
    Given a post exists with a title of "Abe rents storage space"
    And a post exists with a title of "Aaron opens his garage to friends"
    When I go to the list of posts
    Then I should see the posts sorted alphabetically
Make sure that fails. Then add the unit test for the controller; the very first unit test for this controller. Here I’ll use shoulda and jferris-mocha:
require 'test_helper'

class PostsControllerTest < ActionController::TestCase
  should "alphabeticalize the blog posts on GET to index" do
    Post.stubs(:alphabetical).returns([])
    get :index
    assert_received(Post, :alphabetical) {|expects| expects.with()}
  end
end
That’s all you need because that’s all that has deviated from the RESTful scaffold.
Benefits
- See what deviates from the RESTful scaffold.
- Refactor with ease.
- Spend less time implementing controller unit tests.
The Opposite of Benefits
- Requires your team to know the RESTful scaffold well.
So in summary, do write integration tests all the time, but don’t TATFT. Relax those unit tests. | https://robots.thoughtbot.com/unpopular-developer-5-stop-unit-testing-your-scaffold | CC-MAIN-2015-40 | refinedweb | 382 | 63.19 |
Tuesday, December 12, 2017

Release @ Avanti
I made an upgrade of the Lino Avanti production site. I realized that for most users in Lino Avanti we do not want to have the new SiteSearch feature. So I added a new user role lino.modlib.about.SiteSearcher.
There are still some candidate courses for which a series of unused calendar entries has been generated. But because the course series “Candidates” no longer has a calendar entry type, Lino did not delete them.
A checkdata problem whose
owneris None means that the owner has been deleted. It means that we can safely delete the problem as well. AttributeError: ‘NoneType’ object has no attribute ‘has_conflicting_events’
their checkdata often reports that phonetic words aren’t up-to-date. I tried to understand why. I added a
get_simple_paraneters()to PhoneticWord because I would like to verify on their data that there are no phonetic words at all for these cases.
yield ‘owner_id’ yield ‘owner_type’
That’s how I discovered another bug: cannot use GenericForeignKey as a filter parameter.
Setting the value of a combobox in ExtJS 6¶
In
lino.core.store.ComboStoreField we need to change how a
combobox field is represented in a JSON response:
def value2dict(self, ar, v, d, row): value, text = self.get_value_text(ar, v, row) d[str(self.name)] = text d[str(self.name + constants.CHOICES_HIDDEN_SUFFIX)] = value
into this:
def value2dict(self, ar, v, d, row): value, text = self.get_value_text(ar, v, row) d[str(self.name)] = [{'text': text, 'value': value}] d[str(self.name + constants.CHOICES_DISPLAY_SUFFIX)] = text d[str(self.name + constants.CHOICES_HIDDEN_SUFFIX)] = value
and then define CHOICES_DISPLAY_SUFFIX as
'Display'. And then
we need to set displayField
and valueField | http://luc.lino-framework.org/blog/2017/1212.html | CC-MAIN-2018-05 | refinedweb | 279 | 51.24 |
- Code: Select all
#include <lapacke.h>
int main() {}
It works fine when I compile with:
$ g++ -c test_lapack.cpp
but if I add the c++11 flag:
$ g++ -std=c++11 -c test_lapack.cpp
I get a massive amount of errors (see attachment to this post). I'm guessing this is because things in c++11 got more strict and now things that used to be fine in c++03 or c++98 are now deemed unsafe or deprecated. I don't know if I should be contacting the GNU gcc/g++ people or if this is something in the domain of LAPACK developers, or maybe there is an additional flag I can add that will suppress these errors? | https://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=4315 | CC-MAIN-2015-14 | refinedweb | 119 | 78.79 |
This post was originally published here at
This is going to be one of the series of posts related to Interview topics that I would be writing. In this post, we are going to explore the importance of equals and hashCode methods in Java.
equals() and hashCode() are two methods that are part of every class in Java. Object class is the superclass for all other classes in Java. equals and hashCode are the methods that are part of the Object class
As a general rule,
If two objects are equal according to the implementation, their hashCode should match!
Let us approach understanding this with a scenario, Have you ever noticed what happens when an identical order is placed in Dominos app within a stipulated time by mistake?
Any duplicate orders will be rejected as they suspect it could be due to a mistake.
It's recommended that both equals and hashCode methods need to be overridden to achieve better business decisions. So why its best to override these implementations? Before that, we need to understand what does the base implementation does.
As Java is an Object-oriented programming language, except for primitives everything else is an object in Java.
public boolean equals(Object obj) { return (this == obj); }
The base implementation compares and identifies whether the current object and the passed object are one and the same i.e., they share the same memory location as in the below diagram. Both the objects have the same reference in the heap memory
This method returns the hash value of the given object as an integer value. Within the application lifetime, it's expected that the hashCode value of an object remains the same regardless of the however number of times it got invoked.
Hence if 2 objects are considered equal according to the default equals method then their hashCode value will always be the same.
Coming back to our scenario, according to our requirement, any 2 orders are equals if their attributes (ordering user, order items, quantity, and delivery address) are the same. So let's override the equals method to reflect the same
equals method alone overridden as below
Result when the equal method is overridden
The impact would be visible if the orders are stored in hash-based collections. For instance, consider the below example where orders are maintained in HashSet and we try to perform whether the newly received order is already present with us. This would be a problem as we have not overridden the hashCode method yet
As you can see in the above image, 2 orders even as per our requirements are equal, it's being considered as not equal.
Let's override the hashCode method to include a combination of our attributes (this is one of the ways) for comparison
Now let us compare orders with both equal and hashCode overridden as per our requirement
As we can see, overriding equals and hashCode method results in exactly what we are looking for in case of any type of collections the orders are being stored
So always respect the contract, it's binary either both methods (equals and hashcode) should be overridden or none of both methods.
This will be one among the series of posts that I would be writing which are essential for interviews. So if you are interested in this thread and found it useful, please follow me at my website or dev
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/balasr21/understanding-equals-and-hashcode-in-java-33b8 | CC-MAIN-2021-49 | refinedweb | 573 | 55.07 |
Ok, I'm trying to make a program that takes in a random assortment of points from 0 to 32,767 (for x and y) and it finds any series of 4 or more points which are collinear.
So far my program does exactly that with ONE problem. I first order the array by x values and then traverse through each point using that as a "pivot" point and taking the angle from all the other points to the right of it. I then order these angles so that I can go through an array of angles and pick out all the lines with the same angle (which means all points of these angles are on the same line).
Now this all works great, but the problem is I can't filter out subsets. i.e. if my input is (0,0) (100,100) (200,200) (300,300) and (400,400), I get the output:
(0, 0) (100, 100) (200, 200) (300, 300) (400, 400)
AND
(100, 100) (200, 200) (300, 300) (400, 400)
I can't figure out how to filter out subsets. Here's my main code (all outside classes are properly implemented and just order my arrays and find angles).
import java.util.Arrays; import java.util.Comparator; public class Fast{ public static void main (String[] args){ // rescale coordinates and turn on animation mode StdDraw.setXscale(0, 32768); StdDraw.setYscale(0, 32768); StdDraw.show(0); // read in the input int N = StdIn.readInt(); Point[] p = new Point[N]; Angle[] angle; Comparator<Angle> comp = new MyComparator(); Comparator<Point> pcomp = new ArrayOrder(); for (int i = 0; i < N; i++) { int x = StdIn.readInt(); int y = StdIn.readInt(); p[i] = new Point(x, y); p[i].draw(); } Arrays.sort(p, pcomp); // display to screen StdDraw.show(0); for (int i=0; i<p.length-1; i++){ angle = new Angle[p.length-i-1]; angle = Point.getAngles(p, i); Arrays.sort(angle, comp); for(int x = 2; x < angle.length; x++){ if(angle[x].angle == angle[x-1].angle && angle[x].angle == angle[x-2].angle){ System.out.print(p[i].toString() + " -> "); System.out.print(angle[x-2].p.toString() + " -> "); System.out.print(angle[x-1].p.toString() + " -> "); System.out.print(angle[x].p.toString()); int y = x+1; while (y < angle.length && angle[x].angle == angle[y].angle){ System.out.print(" -> " + angle[y].p.toString()); y++; } x = x+y-1; System.out.println(); } } } } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/7565-problem-printing-out-collinear-lines.html | CC-MAIN-2013-20 | refinedweb | 403 | 67.15 |
In this lesson we will look at how to incorporate the data driven functionality in your script. it is one of most important or you can say mandatory requirement while creating the automation script. In data driven script we pass the input, expected output and other required values which are definite in nature from outside of the script. for an instance Excel. this makes your script more flesible and you can run the same script just by passing different values to it. it increases you functional coverage and provide facility for test application for different datasets.
You can store the data in various forms
- Excel
- Text file
- xml
- Database
- etc.
in this lesson, we will take most commonly used form which is Excel.
Step 1: Design the Excel : first of all you need to design the excelsheet which you will use as datasheet it is most important other wise you will not able to develop the logic to extract values and use the same in the code.
lets say in my code i want to enter title of post.
my code is:
driver.findElement(By.id("title")).clear(); driver.findElement(By.id("title")).sendKeys("This is My Title");
so if i defined any variable for title then i code will look like this..
String postTitle = " This is My Title"; driver.findElement(By.id("title")).clear(); driver.findElement(By.id("title")).sendKeys(postTitle );
but still i cannot pass value from outside of the script and can not make real data driven script. here what i need? I need Variable and Value. Variable name should be fixed inside my script which i can use as many times as i want and need value which i want to send from external sheet.
so i will use following format to design my sheet. here i have used rows to define variables, you can use columns as well.
Step 2: Code to Retrieve the Data from Excel and store it.
here we are going to use two Java APIs.
- JXL : to read and write the excel. you can also use POI but JXL usage is simple than POI so we will use JXL.
- HashMap Class in java. this class is basically store values in key value fashion.
here is code to read the excel and store it HashMap:
import java.io.IOException; import java.util.HashMap; import java.util.Map; import java.util.Properties; import jxl.Cell; import jxl.Sheet; import jxl.Workbook; import jxl.read.biff.BiffException; public class DataReader { public static Map<String, String> data; public static Map testcaseid() throws IOException, BiffException { // create map to store web elements data = new HashMap<String, String>(); Workbook workbook = Workbook.getWorkbook(new java.io.File("File path")); // it should be xls file and not xlsx. Sheet sheet = workbook.getSheet(0); // Sheet index , Sheet1 = 0, Sheet2 =1 etc int rowcount = sheet.getRows(); System.out.println("Datasheet inputs"); for(int i=1;i<rowcount;i++) { Cell ObjectName = sheet.getCell(0, i); Cell ObjectValue = sheet.getCell(1, i); if(ObjectName.getContents()!="") { data.put(ObjectName.getContents(), ObjectValue.getContents()); System.out.println("Input VariableName: "+ObjectName.getContents()+"Variable Value from Excel::"+ ObjectValue.getContents() ); } else break; } return data; }
Above method will return data from excel.
Step 3: How to use this data in test case: Following the code where you can use the above stored values in the code.
// import the class where you have written data extraction code ( code in step 3) like import pagekage name.DataReader; public class Delete { private WebDriver driver; private String baseUrl; public static Map<String, String> data; @Before public void setUp() throws Exception { data = DataReader.testcaseid(); driver = new FirefoxDriver(); baseUrl = ""; driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS); } @Test public void testDelete() throws Exception { driver.findElement(By.id("user_login")).clear(); driver.findElement(By.id("user_login")).sendKeys(data.get("userName")); } }
I hope this will clear you the how to do data driven code. | http://qeworks.com/selenium-lesson-15-make-script-data-driven/ | CC-MAIN-2018-05 | refinedweb | 643 | 60.01 |
This is a nested tabs example project.
This is great if you want tabs with probably a header on top of them. Normally tabs are displayed at the bottom of the appBar. What about if you want to include say an image or some card between the tabbar and appbar. This is what we create.
We will in the process understand around 4 classes that are mandatory for creating tabs:
Let's start.
Well we have a video tutorial as an alternative to this. If you prefer tutorials like this one then it would be good you subscribe to our YouTube channel. Basically we have a TV for programming where do daily tutorials especially android.
This is a flutter nested tabs example. We will have a container on top of our TabBar. That container can of course contain anything like say an image or some text.
Here's the example:
Here's the landscape mode:
Then below it we have our tabs using the DefaultTabController. We will give them a dark them.
Here are the things to note.
You can use this tutorial even if you have never tasted Dart before.
Here are the important concepts to get you started:
1. Importing Packages in Flutter/Dart
But first we need to import some packages and we do that using the
import keyword in dart.
We do that using the
import keyword, similar to java.
You use
import to specify how a namespace from one library is used in the scope of another library.
Remember every Dart app is a library,even if it doesn’t use a library directive.
To import a given library, the only required argument is a
URI specifying the library.
If the library is a built-in/standard one, then the URI has the special
dart: scheme. For other libraries, you can use a file system path or the
package: scheme. The
package: scheme specifies libraries provided by a package manager such as the pub tool.
For in our example here:
import 'package:flutter/material.dart'; import 'package:flutter_date_picker/flutter_date_picker.dart';
Here are the packages we are importing:
2. Classes and Object in Flutter/Dart
In Dart like in most C style programming languages you create a class using the
class keyword. Classes are encapsulation mechanism in Object Oriented Programming languages.
An
object on the other hand is an instance of a class and all classes descend from Object.
To create an object, you can use the
new keyword with a constructor for a class. Constructor names can be either
ClassName or
ClassName.identifier:
var myApp = new MyApp(); // Create a MyApp object using MyApp().
In this case we create three classes:
class MyApp...
class Home...
4. Class Inheritance in Dart
Inheritance is an object oriented pillar, one of the main ones. And Dart definitely supports it fully. Inheritance provides alot of power to programming languages like Dart as it allows one class to derive properties and methods of another.
In Dart to implement inheritance you use the
extends keyword just like in java.
Here are the keywords you may see in this tutorial:
extends- creates a subclass.
super- refers to the superclass or parent class.
Here are examples
class MyApp extends StatelessWidget {..}
class Home extends StatefulWidget {..}
3. Class-Level vs Top Level Functions
Class Level Functions, typically known as methods, are methods confined inside a class and belonging either to the instance of the class or the class itself. Top Level functions are not confined to any class.
Dart supports different function types like:
main()),
Here's our top level main function:
void main() => runApp(MyApp());
4. The main method in Dart
In Dart, like in Java and C#, the entry point to an application is the famous
main() method. When your application starts, the
main() method gets invoked first and when it finishes the app also finishes.
In our case we run our app in the main method.
void main() => runApp(MyApp());
5. Fat Arrow Shorthand Syntax in Dart
Dart is a language that allows you write short concise and compact code. One of the features it provides is the fat arror shorthand syntax.
For functions that contain just one expression, you can use this shorthand syntax.
For example instead of this:
void main() { runApp(MyApp()); }
You can write this:
void main() => runApp(MyApp());
Only an expression and not a statement can appear between the arrow
(
=>) and the semicolon (;).
6. In Dart Functions are First-Class Objects
This means you can pass a function as a parameter to another function.
For example:
showNumber(myNumber) { print(myNumber); } var myNumbers = [1,2,3]; myNumbers.forEach(showNumber); // Pass showNumber as a parameter.
7. In Dart All functions return a value.
Yeah. If no return value is specified, the statement
return null; is implicitly appended to the function body.
8. If and Else in Dart
Dart supports
if statements with optional
else statements.
if (engineRunning()) { jet.fly(); } else if (engineStopped()) { jet.restartEngine(); } else { jet.repairEngine(); }
9. For Loops in Dart Like many languages, in dart you can iterate with the standard for loop.
for (int i = 0; i < galaxies.length; i++) { print(galaxies[i].getName()); }
Closures inside of Dart's for loops capture the value of the index, avoiding a co pitfall found in JavaScript. For example, consider:
var callbacks = []; for (var i = 0; i < 2; i++) { callbacks.add(() => print(i)); } callbacks.forEach((c) => c());
The output is
0 and then
1, as expected. In contrast, the example would print
2 and then
2 in JavaScript.
If the object that you are iterating over is a Collection, you can use the
forEach() method. That is if you don't need to know the current iteration counter:
galaxies.forEach((galaxy) => galaxy.getName());
Collections also support the
for-in form of iteration:
var collection = [0, 1, 2]; for (var planet in planets) { print(planet); }
We are using Dart specifically to write flutter apps. You can target either Android or iOS.
Let's look at some of the APIs we will be using in this project
1. Widget
A
Widget is an abstract class in
flutter package that describes the configuration for an Element.
Widgets are the central class hierarchy in the Flutter framework. A widget is an immutable description of part of a user interface. Widgets can be inflated into elements, which manage the underlying render tree.
We will be implementing two
Widget sub-classes in this class:
This this will allow us override the
build() method hence returning a
Widget object:
class MyApp extends SomeWidget @override Widget build(BuildContext context) { return our_widget; }
The
build() method describes the part of the user interface represented by this widget.
Read more about Widget here.
2. StatelessWidget
A
statelesswidget widget is an abstract class that represents a widget that does not require mutable state.
Read more about StatelessWidget here.
3. BuildContext
A
BuildContext is a handle to the location of a widget in the widget tree.
It's an abstract class that presents a set of methods that can be used from
StatelessWidget.build methods and from methods on
State objects.
Read more about BuildContext here.
5. MaterialApp
MaterialApp is a class that represents an application that uses material design.
Read more about MaterialApp here.
6. ThemeData
ThemeData is a class that holds the color and typography values for a material design theme.
Read more about ThemeData here.
7. DefaultTabController
DefaultTabController is the
TabController for descendant widgets that don't specify one explicitly.
DefaultTabController derives from the StatefulWidget and has a mutable state.
Read more about DefaultTabController here.
8. TabBar
TabBar is a material design widget that displays a horizontal row of tabs.
Read more about TabBar here.
9. TabBarView
TabBarView is a page view that displays the widget which corresponds to the currently selected tab. Typically used in conjunction with a TabBar.
Read more about TabBarView here.
10. Tab
A material design TabBar tab.
Read more about Tab here.
Here are the files we explore:
Flutter supports using shared packages contributed by other developers to the Flutter and Dart ecosystems. This allows you to quickly build your app without having to develop everything from scratch.
We will be adding packages under this file.
pubspec.yamlfile located inside your app folder, and add dependencies under dependencies.
flutter packages get
OR
From Android Studio/IntelliJ: Click 'Packages Get' in the action ribbon at the top of pubspec.yaml From VS Code: Click 'Get Packages' located in right side of the action ribbon at the top of pubspec.yaml
flutter is always our sdk dependency as we use it to develop our ios and android apps.
Here's our pubspec.yaml file:
name: nested_tabbar
Take note we are not using any third party dependency in our app.
This is where we write our flutter code in dart programming language. In Dart it is very common to write several classes in a single file. So this simple file will have three classes.
here's the full
main.dart code:
import 'package:flutter/material.dart'; class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return new MaterialApp( title: 'Mr Nested TabBar', theme: ThemeData(brightness: Brightness.dark), home: Scaffold(body: HomePage())); } } class HomePage extends StatelessWidget { @override Widget build(BuildContext context) { return ListView(children: [ Container( color: Colors.orangeAccent, height: 150.0, child: Center(child: Text('Something'))), DefaultTabController( length: 2, initialIndex: 0, child: Column( children: [ TabBar(tabs: [Tab(text: 'Home'), Tab(text: 'News')]), Container( height: 300.0, child: TabBarView( children: [ Center(child: Text('Home here')), Center(child: Text('News here')), ], )) ], )) ]); } } // Our top level main function void main() => runApp(new MyApp()); //end
Just make sure you device or emulator is running and click the
run button in android studio, it will automatically pick the device and install the app.
Aletrnative you can use the terminal or command prompt. Navigate/Cd over to project root page and type this:
flutter.bat build apk
This will build the APK which you can then drag and install onto your device. The build process in my machine takes around three minutes which is not bad.
You can download full source code below.
Best Regards,
Oclemy. | https://camposha.info/flutter/nested-tabs | CC-MAIN-2019-04 | refinedweb | 1,679 | 67.15 |
On-Boarding H2o.ai and Generic Java Models) using any interface (eg.Python, Flow, R) provided by H2o., she.
Prerequisites¶
Java 1.8
The following Released components:
- Java Client v1.11.0 (java_client-1.11.0.jar)
- Generic Model Runner v2.2.3 (h2o-genericjava-modelrunner-2.2.3.jar)
Preparing to On-Board your H2o or a Generic Java Model¶
- Place Java Client jar in one folder locally. This is the folder from which you intend to run the jar. After the jar runs, the created artifacts will also be available in this folder. You will use some of these artifacts if you are doing Web-based onboarding. We will see this later. Note: the versions of the libraries in the screenshots may be outdated.
- Prepare a supporting folder with the following contents. Items of this folder will be used as input for the java client jar.
It will contain:
-
Models - In case of H2o, your model will be a MOJO zip file. In case of Generic Java, the model will be a jar file.
-
Model runner or Service jar - For H2O rename downloaded h2o-genericjava-modelrunner.jar as per the first section to H2OModelService.jar or to GenericModelService.jar for Java model and Place it in this folder.
-
CSV file used for training the model - Place the csv file (with header having the same column names used for training but without the quotes (“ ”) ) you used for training the model here. This is used for autogenerating the .proto file. If you don’t have the .proto file, you will have to supply the .proto file yourself in the supporting folder. Make sure you name it default.proto.
-
default.proto - This is only needed If you don’t have sample csv data for training, then you will have to provide the proto file yourself. In this case, Java Client cannot autogenerate the .proto file. You will have to supply the .proto file yourself in the supporting folder. Make sure you name it default.proto Also make sure, the default.proto file for the model is in the following format. You need to appropriately replace the data and datatypes under DataFrameRow and Prediction according to your model.syntax = "proto3"; option java_package = "com.google.protobuf"; option java_outer_classname = "DatasetProto"; message DataFrameRow { string sepal_len = 1; string sepal_wid = 2; string petal_len = 3; string petal_wid = 4; } message DataFrame { repeated DataFrameRow rows = 1; } message Prediction { repeated string prediction= 1; } service Model { rpc transform (DataFrame) returns (Prediction); }
-
application.properties file - Mention the port number on which the service exposed by the model will finally run on.server.contextPath=/modelrunner # IF WORKING WITH MODEL CONNECTOR AND COMPOSITE SOLUTION, THE #server.contextPath will be / # NOTE: THIS WILL TAKE AWAY SWAGGER # This is the port number you want to run the service on. User may select a convenient port. server.port=8336 spring.http.multipart.max-file-size=100MB spring.http.multipart.max-request-size=100MB # Linux version # if model_type is Generic Java, then default_model will be /models/model.jar # if model_type is H2o, then the default_model will be /models/Model.zip #default_model=/models/model.jar default_model=/models/Model.zip default_protofile=/models/default.proto logging.file = ./logs/modelrunner.log # The value of model_type can be H or G # if model is Generic java model, then model_type is G. # if model is H2o model, then model_type is H. And the /predict method will use H2O model; otherwise, it will use generic Model # if model_type is not present, then the default is H #model_type=G model_type=H model_config=/models/modelConfig.properties # Linux some properties are specific to java generic models # The plugin_root path has to be outside of ModelRunner root or the code won't work # Default proto java file, classes and jar # DatasetProto.java will be in $plugin_root\src # DatasetProto$*.classes will be in $plugin_root\classes # pbuff.jar will be in $plugin_root\classes plugin_root=/tmp/plugins
-
modelConfig.properties - Add this file only in case of Generic Java model onboarding. This file contains the modelMethod and modelClassName of the model.modelClassName=org.acumos.ml.XModel modelMethod=predict
Create your modeldump.zip file¶
Java Client jar is the executable client jar file.
For Web-based onboarding of H2o models, the parameters to run the client jar are:
- Current Folder path : Full folder path in which Java client jar is placed and run from
-.
For CLI-based onboarding, the parameters to run the client jar are:
- Onboarding server url.
- Pass the authentication API url for onboarding - This API returns jwtToken for authenticated users. e.g http://<hostname>:8090/onboarding-app/v2/auth
-.
- Username of the Portal MarketPlace account.
- Password of the Portal MarketPlace account.
-.
See example below for how to run the client jar and how the modeldump.zip artifact appears after its successful run:
Onboarding to the Acumos Portal¶
- If you used CLI-based onboarding, you don’t need to perform the steps outlined just below. The Java client has done it for you. You will see a message on the terminal that states the model onboarded successfully.
- If you use Web-based onboarding, you must complete the following steps:
- After you run the client, you will see a modeldump.zip file generated in the same folder where we ran the Java Client for.
- Upload this file in the Web based interface (drap and drop). See On-Boarding a Model Using the Portal UI
- You will be able to see a success message in the Web interface. you will be able to see a success method in the Web interface.
The needed TOSCA artifacts and docker images are produced when the model is onboarded to the Portal. You and your teammates.
Addendum : Creating a model in H2o¶
You must have H2o 3.14.0.2 installed on your machine. For instructions on how to install visit the H2o web site:.
H2o provides different interfaces to create models and use H2o for eg. Python, Flow GUI, R, etc. As an example, below we show how to create a model using the Python innterface of H2o and also using the H2o Flow GUI. You can use the other interfaces too which have comparable functions to train a model and download the model in a MOJO format.
Here is a sample H2o iris program that shows how a model can be created and downloaded as a MOJO using the Python interface:
import h2o import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # for jupyter notebook plotting, %matplotlib inline sns.set_context("notebook") h2o.init() # Load data from CSV iris = h2o.import_file(' iris_wheader.csv') Iris data set description ------------------------- 1. sepal length in cm 2. sepal width in cm 3. petal length in cm 4. petal width in cm 5. class: Iris Setosa Iris Versicolour Iris Virginica iris.head() iris.describe() # training parameters training_columns = ['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid'] # response parameter response_column = 'class' # Split data into train and testing train, test = iris.split_frame(ratios=[0.8]) train.describe() test.describe() from h2o.estimators import H2ORandomForestEstimator model = H2ORandomForestEstimator(ntrees=50, max_depth=20, nfolds=10) # Train model model.train(x=training_columns, y=response_column, training_frame=train) print (model) # Model performance performance = model.model_performance(test_data=test) print (performance) # Download the model in MOJO format. Also download the h2o-genmodel.jar file modelfile = model.download_mojo(path="/home/deven/Desktop/", get_genmodel_jar=True) predictions=model.predict(test) predictions
Here is a sample H2o iris example program that shows how a model can be created and downloaded as a MOJO using the H2o Flow GUI. | https://docs.acumos.org/en/athena/AcumosUser/portal-user/portal/onboarding-java-guide.html | CC-MAIN-2019-30 | refinedweb | 1,244 | 51.04 |
Static Method in Java
A static method in Java is a method that is declared with a keyword ‘static’. It is also known as a class method because it belongs to a class rather than an individual instance of a class.
Static methods are called using class name like this className.methodName().
For example:
Student.add(); // Student is the name of class and add is a method.
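To make this concrete, here is a minimal, hypothetical Student class (the body of the add() method is assumed, since the original example shows only the call). It demonstrates that a static method is invoked through the class name, with no object required:

```java
// Hypothetical Student class; only the call Student.add() comes from the text above.
public class Student {
    static int count = 0;

    // A static method, called via the class name.
    static void add() {
        count++;
    }

    public static void main(String[] args) {
        Student.add(); // called using the class name -- no object needed
        Student.add();
        System.out.println(count);
    }
}
```

Running this prints 2, because both calls increment the single shared static variable.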
Declaration of Static method in Java
The syntax to declare a static method is as follows:

Syntax:

access_modifier static return_type methodName()
{
    // Method body.
}
A static method can be declared in different forms: with any access modifier (public, protected, default/package-private, or private) and with any return type.
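As an illustration, the following sketch (class and method names are hypothetical, not from the article) shows several declaration forms side by side:

```java
// Different forms of static method declarations, varying the
// access modifier and the return type.
public class Forms {
    public static void a() { System.out.println("public static"); }
    protected static void b() { System.out.println("protected static"); }
    static void c() { System.out.println("default (package-private) static"); }
    private static int d() { return 42; } // private static with a return value

    public static void main(String[] args) {
        a();
        b();
        c();
        System.out.println(d());
    }
}
```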
Properties of static method in Java
The following properties of a static method are as follows:
1. A static method in a class can directly access other static members of the class. We do not need to create the object of the class for accessing the other static members. It can be called directly within the same class and outside the class using the class name.
2. It cannot access instance (i.e., non-static) members of a class; among the class's fields, only static variables can be accessed inside a static method.
3. We cannot access instance variables inside a static method, but a static variable can be accessed inside an instance method.
4. We cannot declare a static method and instance method with the same signature in the same class hierarchy.
5. When we create a static method in a class, only one copy of the method is created in memory and shared by all objects of the class, whether you create one object or a hundred.
6. A static method is also loaded into the memory before the object creation.
7. The static method is always bound with compile time.
8. It cannot refer to this or super in any way.
9. Static methods can be overloaded in Java but cannot be overridden because they are bound with class, not instance.
10. Static (variable, method, and inner class) are stored in Permanent generation memory (class memory).
Why instance variable is not available to static method?
When we declare a static method in Java, the JVM first executes the static method and then it creates the objects of the class. Since the objects are not available at the time of calling the static method.
Therefore, the instance variables are also not available to a static method. Due to which a static method cannot access an instance variable in the class.
Let’s create an example program to test whether a static method can access an instance variable and static variable or not.
Program source code 1: In this program, we are trying to read and display the instance variable ‘y’ and the static variable ‘x’ of StaticTest class in an instance method ‘display()’ and static method ‘show()’.
package staticMethod; public class StaticTest { // Instance Area. static int x = 20; // static variable int y = 30; // instance variable // Declare an instance method. void display() { // Instance area. So we can directly call instance variable without using object reference variable. System.out.println(x); // Since we can access static member within instance area. Therefore, we can call the static variable directly. System.out.println(y); } // Declare a static method. static void show() { // Static Area. So we can call S.V. directly inside the S.M. System.out.println(x); // System.out.println(y); // compile time error because instance variable cannot access inside S.M. } public static void main(String[] args) { // Create the object of the class. StaticTest st = new StaticTest(); // Call instance method using reference variable st. st.display(); // Call static method. show(); } }
Output: 20 30 20
The static method can be accessed by nullable reference as like
StaticMethod s1 = null;
s1.show();
Program source code 2:
package staticMethod; public class StaticMethod { static int a = 10; void display() { System.out.println("This is an instance method"); } static void show() { System.out.println("This is a Static method"); } public static void main(String[] args) { StaticMethod sm = new StaticMethod(); sm.display(); StaticMethod s = null; s.show(); int c = s.a; System.out.println(c); } }
Output: This is an Instance method This is a Static method 10
How to change value of static variable inside static method?
Static method can access a static variable and can also change the value of it.
Program source code 3: In this example, we will change the value of the static variable inside the static method.
package staticMethod; public class ValueChange { static int a = 10; static int change() { int a = 20; return a; } public static void main(String[] args) { // Call static method using the class name. Since it will return an integer value. So we will store it by using a changeValue variable. int changeValue = ValueChange.change(); System.out.println(changeValue); } }
Output: 20
Program source code 4: In this program, we will calculate the square and cube of a given number by using the static method.
package staticMethod; public class SquareAndCube { static int x = 15; static int y = 20; static int square(int x) { // Here, x is a local variable. int a = x * x; return a; } static int cube(int y){ // Here, y is a local variable. int b = y*y*y; return b; } public static void main(String[] args) { int sq = square(5); int CB = cube(10); System.out.println(sq); System.out.println(cb); } }
Output: 25 1000
Can we use this or super keyword in static method in Java?
In entire core java, this and super keyword is not allowed inside the static method or static area. Let’s see an example program related to it.
Program source code 5:
package staticMethod; public class ThisTest { // Declare the Instance variables. int x = 10; int y = 20; // Declare S.M. with two parameters x and y with data type integer. static void add(int x, int y) { System.out.println(this.x + this.y); // Compile time error. } public static void main(String[] args) { ThisTest.add(20, 30); } }
Output: Exception in thread "main" java.lang.Error: Unresolved compilation problems: Cannot use this in a static context Cannot use this in a static context
How to call static method in Java from another class?
Program source code 6: In this example, we will create a class Student and declare the static methods name, rollNo, and std with the return type.
package staticVariable; public class Student { static String name(String n) { return n; } static int rollNo(int r) { return r; } static int std(int s) { return s; } }
Now create another class StudentTest and call the static method with passing argument values.
public class StudentTest { public static void main(String[] args) { // Call static method using class name and pass the string argument. Since it will return string value. So we will store the value by a variable nameStudent and print the output on the console. String nameStudent = Student.name("Shubh"); // Call and pass the integer value. Since it will return an integer value. So we will store the int value by rollStudent and std. int rollStudent = Student.rollNo(5); int std = Student.std(8); System.out.println("Name of Student: " +nameStudent); System.out.println("Roll no. of Student: " +rollStudent); System.out.println("Standard: " +std); } }
Output: Name of Student: Shubh Roll no. of Student: 5 Standard: 8
Let’s create a program to perform factorial series using a static method.
Program source code 7:
package staticVariable; public class Factorial { static int f = 1; static void fact(int n) { for(int i = n;i>=1;i--) { f = f * i; } } } public class FactorialTest { public static void main(String[] args) { Factorial.fact(4); System.out.println(Factorial.f); } }
Output: 24
Difference between Static method and Instance method
1. A static method is also known as class method whereas the instance method is also known as non-static method.
2. The only static variable can be accessed inside static method whereas, static and instance variables both can be accessed inside the instance method.
3. We do not need to create the object of the class for accessing static method whereas, in the case of an instance method, we need to create the object for access.
4. Class method cannot be overridden whereas, an instance method can be overridden.
5. Memory is allocated only once at the time of class loading whereas, in the case of the instance method, memory is allocated multiple times whenever the method is calling.
Recommended Post for Interview
⇒ Can We override Static method in Java?
Hope that this tutorial has covered almost all the important concepts related to the static method in java with example programs. I hope that you will have understood this topic and enjoyed programming.
Thanks for reading!
Next ⇒ Static block in Java⇐ PrevNext ⇒ | https://www.scientecheasy.com/2018/09/java-static-method.html/ | CC-MAIN-2020-24 | refinedweb | 1,453 | 66.03 |
Writing JavaScript for XHTML
From MDC
Website authors have started to write now XHTML files instead of HTML 4.01 for about 8 years. But alas, almost no XHTML file viewed over the web is served with the correct MIME type, that is, with application/xhtml+xml! This is for one reason due to a certain browser, that is not capable of XHTML as XML. But it is also founded in the experience that the JavaScript, authored carefully for HTML, suddenly breaks in an XML environment.
This article shows some of the reasons alongside with strategies to remedy the problems. It will encourage web authors to use more XML features and make their JavaScripts interoperable with real XHTML applications.
To test the following examples locally, use Firefox's extension switch. Just write an ordinary (X)HTML file and save it once as test.html and once as test.xhtml.
[edit] Problem: Nothing Works
After switching the MIME type suddenly no inline script works anymore. Even the plain old alert() method is gone. The code looks something like this:
<script type="text/javascript"> //<!-- window.alert("Hello World!"); //--> </script>
[edit].
[edit] Problem: The DOM Changed
The central object in the DOM, the document object, is of type HTMLDocument in HTML, whereas it is an XMLDocument in XML files. This has an especially huge impact on methods JavaScript authors are used to in daily work. Take the document.getElementsByTagName method, for example. This is a DOM 1 method, which means, there are no XML namespaces respected. Take a look at this common snippet:
var headings = document.getElementsByTagName("h1"); for( var i = 0; i < headings.length; i++ ) { doSomethingWith( headings[i] ); }
Enter the problem: in XHTML, served as XML, all elements are in the XHTML namespace (remember the xmlns attribute in the html tag?). This means, our plain old DOM 1 method suddenly finds no elements anymore. Bang! Immediately 80% of today's JavaScripts on the web crashed, including our snippet above.
[edit] Solution: Use DOM 2 Methods
The W3C introduced the DOM 2, addressing the needs of distinguishing namespaces. Perhaps you have seen sometimes before a method like document.getElementsByTagNameNS? The difference is the NS part, meaning, it looks for namespaces. How do we use this method? This is straight forward:
var headings = document.getElementsByTagNameNS("","h1"); for( var i = 0; i < headings.length; i++ ) { doSomethingWith( headings[i] ); }
The only difference is the mentioning of the namespace the element is in. Okay, more letters to type, but you can define shorthands. Then, let's take only DOM 2 methods from now on!
But wait! Now, taking a look in our HTML file, the script breaks again! Remember, in HTML the elements are in no namespace at all! So, what we have to do now is writing a wrapper, that determines, if we are dealing with an HTML or an XML file. Check out this piece of code:
Node.prototype.getHTMLByTagName(tagName) { if( document.contentType == "text/html" ) { return this.getElementsByTagName(tagName); } else { return this.getElementsByTagNameNS("",tagName); } }
What does it do? It extends all nodes with a method getHTMLByTagName, that distinguishes between the content type of the document element. But there is an interoperability issue: For IE, you would not only have to take a look at the document.mimeType property instead, but also cannot easily extend Node objects. So, to write a wrapper, that truely distinguishes between XML and HTML on one hand and different browsers on the other hand is a bit more tricky. We let this over to you as a exercise.
NB: The DOM 1 method getElementsByTgName also exists in XML documents. It will find every element of a given name, that is in no namespace at all. For this reason, AJAX's responseXML is often processed with DOM 1 methods without finding any problems. This is because very little XML sent via HTTPRequest bothers with namespaces.
[edit] Problem: My Cookie Won't Be Saved!
We found out already, that the document object in XML files is different from the ones in HTML files. Now we take a look at one property, that is missing in XML files and that we will miss very bad. In XML documents there is no document.cookie. That is, you can write something like
document.cookie = "key=value";
in XML as well, but you will find out, that literally nothing is saved in the cookie storage.
[edit].
[edit].
[edit] Solution: Use DOM Methods
Many people avoided DOM methods because of the typing to create one simple element, when document.write() was completely satisfying. Now you can't do this as easily as before. Use DOM methods to create all of your elements, attributes and other nodes. This is XML proof, as long as you keep the namespace problem in focus (e.g., there is a document.createElementNS method).
Now, not to be inhonest, you can still use strings like in document.write(), but it takes a little more effort. This code shows you, how to do it:.
[edit] Problem: My Favourite JS Library still Breaks
If you use JavaScript libraries like the famous prototype.js or Yahoo's one, there is bad news for you: As long as the developers don't start to apply the points.
[edit] I Read about E4X. Now, This Is Perfect, Isn't It?
As a matter of fact, it isn't. E4X is a new method of using and manipulating XML in JavaScript. But, standardized by ECMA, they forgot to implement an interface to let E4X objects interact with DOM objects our document consists of. So, with every advantage E4X has, without a DOM interface you can't use it productively to manipulate your document.
[edit] Finally: Content Negotiation
Now, how do we decide, when to serve XHTML as XML? We can do this on server side by evaluating the HTTP request header. Every browser sends with its request a list of MIME types it understands. So if the browser tells our server, that it can handle XHTML as XML, that is, the Accept field in the HTTP header
[edit] Further Reading
You will find several useful articles in the developer wiki:
DOM 2 methods you will need are: | http://developer.mozilla.org/en/docs/Writing_JavaScript_for_XHTML | crawl-001 | refinedweb | 1,028 | 66.84 |
Introduction to Inner classes Java
J Java
There are four styles of creating an inner class.
- Static member classes
- Member classes
- Local classes
- Anonymous classes
1. Static Member Classes
Here, the inner class is declared as static. Static inner classes are less used. Following program illustrates.
The inner class SecondOne is declared as static member of outer class FirstOne and includes a static method area(). When we compile the above program we get two .class files as follows.
- FirstOne.class
- FirstOne$SecondOne.class
The .class file of SecondOne inner class is placed within the outer class, FirstOne. The other classes cannot access directly the SecondOne and should go through FirstOne.
FirstOne.SecondOne.area(10);
As the SecondOne is static, it is called with FirstOne directly. The SecondOne scope is within the FirstOne.
2. Member Classes
Here, the inner class is not declared as static. It is declared within the body of the outer class (not inside the method of outer class). Following are the scope rules.
- The outer class members (variables) can be accessed by inner class.
- The inner class members cannot be accessed by outer class.
- Inner class members can be accessed by inner class only.
Following program illustrates the above scope rules.
In the above code, First is outer class and Second is inner class. price is a variable of outer class and rate is a variable of inner class. By the scope rules, Second can access price but First cannot access rate .
When the above code is compiled, two .class are obtained as follows.
- First.class
- First$Second.class
As you can observe, the inner class, Second exists within the context of outer class, First.
First.Second fs = new First( ).new Second();
The Second class object cannot be created directly and must be done through the outer class, First.
The above statement can be modified as follows.
First.Second fs = f1.new Second();
The outer class object is created with the help of inner class object. Finally, inner class can access all the static and non-static fields of outer class (including private fields) and the inner class object cannot be instantiated directly and should be done through outer class object.
3. Local Classes
A local class is simply an inner class but declared within the method of outer class (earlier, the inner class exists directly in the body of the outer class but now exists inside a method). Here the rules of accessibility are, the inner class can access, anyhow, its own members and only the final parameters of the enclosing method .
Following program illustrates.
In the above code, the perimeter() method of inner class Son can access the final parameters of area(), the method of outer class Father. That is, the perimeter() method can access the final variables y and m of area(). But, the inner class can access the price, non-final variable of outer class as it is a field declared in the body of the outer class and not within the method.
When we compile the above program, we get the following classes.
- Father.class
- Father$1Son.class
Observe, it is $1 and not $.
4, Anonymous Inner Class
An anonymous inner class, as the name indicates, does not have a name. As it does not contain a name, it must be instantiated at the time of defining it. This type of anonymous classes is mostly used in event handling mechanism of AWT. Followng snippet of code is used to close the frame window.
In the above code, an anonymous inner class of WindowAdapter is used. The advantage of anonymous inner classes is better readability as all the code exists at the time of declaration only, all at one place. The disadvantage is the code cannot be used elsewhere (no reusability).
Advantages – Inner Classes
Following list gives the advantages of inner classes.
- Better accessibility restrictions. The inner classes are controlled through outer class instances.
- The life of inner class is controlled by outer class. Inner classes can use
composition but through outer class.
- We can make a logical group of classes. If a class is useful particularly to only one another class, the class can be declared as inner class.
- Encapsulation is better implemented. The inner class members are hidden from other classes.
- The maintainability is easier as the code is closer to other class.
My question is that what is the difference between private inner class and non- private inner class. If I can access private and nonprivate class in the same way. for example in the below code line number 11. If I remove private from line number it also work same way.
1. class outer {
2. private class inner
3. {int a=10;
4. private void show()
5. {System.out.println(“a=” + a);}
6. } // end of inner class
7. public static void main(String args[])
8. {
9. System.out.print(“Enter number”);
10. outer obj = new outer();
11. obj.new inner().show();
12. } }
Hello sir, your saying that the inner class can access, anyhow, its own members and only the final parameters of the enclosing method, but there is no error in this program. Run urself.
package innerClasses;
public class Father {
double price = 10.5;
public void area(int x, final int y){
int k = x+y;
final int m = x*y;
class Son{
public void perimeter(){
System.out.println(“x is “+x);
System.out.println(“y is “+y);
System.out.println(“m is “+m);
System.out.println(“k is “+k);
System.out.println(“price is “+price);
}
}
Son s1 = new Son();
s1.perimeter();
}
public static void main(String[] args) {
Father f1 = new Father();
f1.area(10, 20);
}
}
in new version of java inner class can access both the member.
why local inner class acess only final variables
The outer class has got some purpose with the variable values. If the inner class changes them, it is loss to the outer class. For example, the outer class has a “double $rate = 61.55;”. If the inner class changes the $rate, then how the outer class will have data integrity. For this reason, the inner class is allowed to access final variables only.
public class Testis {
public static void main(String[] args) {
new OuterClass(); // line 3
}
}
class OuterClass {
private int x = 9; // line 5
public OuterClass() {
InnerClass inner = new InnerClass(); // line 7
inner.innerMethod();
}
class InnerClass {
public void innerMethod() {
System.out.println(x);
}
}
}
note: Iam getting output 9
but coming to the line 7..the way of instantiating the inner class
OuterClass.InnerClass inner = new Outer.new InnerClass();
but in program it is
InnerClass inner = new InnerClass();
how is it possible
Where exactly you are referring. Write the posting link.
public class outer
{
public static void main(String[] a)
{
static class inner{
}
}
}
//Iam getting compiler error…we can declare inner class as Static..but why iam getting error..
Do like this:
public class outer
{
public static void main(String[] a)
{
class inner{
}
}
}
OR do like this:
public class outer
{
static class inner{
}
public static void main(String[] a)
{
}
}
Sir, the Window Adapter class is an abstract class,so how can we instantiate it?
pls clarify
Internally, they do it through sub classes implementation.
sir can we put main() method in Static member classes, Member classes and Anonymous classes?
There is nothing like “Static member class”, rather “static class” or “member class”.
We cannot keep main method in nested class, or more precisely non-static inner class, because main method being static is not allowed to be kept inside non-static inner class. In order to use main method in inner class, you will need to make inner class as static also.
sir according to java object creation has to done in the main() method only na,then why u are created object of inner class(son) out side the main for Local classes
A Java object can be created anywhere in the class including constructors and methods. See AWT program where you create Button object in constructor.
sir why the inner class is accessing only final variables of outer class why not instance variables in local classes??
Outer class has variables with some meaning and functionality. For example, outer class has $ rate with some value assigned. If inner class changes the $ value, how the outer can use its value. For this reason, inner class can access final variables of outer class.
sir in the member classes program if i placed the variable “rate” in inner class with different value or same value as inner class also then it is working na,is it correct? | https://way2java.com/java-lang/inner-classes/ | CC-MAIN-2017-39 | refinedweb | 1,422 | 67.15 |
It is no real surprise that one of the primary purposes of computer languages is processing lists (indeed, one of the oldest programming languages, Lisp, is a contraction of the term "list processing"). In JavaScript, lists are managed with the Array object. The last few years have seen a significant beefing up of what arrays can do as part of the ECMAScript 6 (ES6) development, to the extent that many programmers aren't even aware of the full capabilities that arrays now offer.
The following is a mixed bag of tricks, focusing both on some of the cooler ES6 code and on some of the more esoteric functional programming tricks of ES5. One thing that both of these improvements do is establish a unifying principle for iterating through lists of items, a problem that's emerged from twenty years of different people implementing what was familiar to them in JavaScript. That's admittedly a testimony to how flexible JavaScript is.
Avoiding the Index
The traditional way of iterating over an array has been to use indexes, such as the following (Listing 1).
var colors = ["red", "orange", "yellow", "green", "blue", "violet"];
for (var index = 0; index < colors.length; index++) {
  var color = colors[index];
  // do something with color
}
Listing 1. Using an index to iterate over an array
The problem with this is that it first requires the declaration and creation of an index variable, and you still have to resolve the value that the array has at that index. It’s also just aesthetically unpleasing – the emphasis is on the index, not the value.
ES6 introduces the of keyword, which lets you iterate to retrieve an object directly (Listing 2)
var colors = ["red", "orange", "yellow", "green", "blue", "violet"];
for (const color of colors) {
  // do something with color
}
Listing 2. Using the of keyword to iterate over an array.
The use of this construction is both shorter and with a much clearer intent, retrieving each color from the list without having to resolve an array position. The const keyword also helps: in a for...of loop each iteration gets its own fresh binding, so color can safely be declared constant, which documents that the loop body never reassigns it and gives the engine more room to optimize.
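As an aside, for...of is not limited to arrays: any ES6 iterable can be walked this way, including strings and Sets. A minimal sketch (the collecting arrays are only there to make the results visible):

```javascript
// for...of works over any iterable, not just arrays.
const letters = [];
for (const ch of "hi") {
  letters.push(ch); // strings iterate character by character
}
console.log(letters); // ["h", "i"]

const unique = [];
for (const n of new Set([1, 2, 2, 3])) {
  unique.push(n); // a Set iterates its distinct values, in insertion order
}
console.log(unique); // [1, 2, 3]
```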
If you've ever had to process web pages to retrieve specific elements, you likely also know that the result of functions such as document.getElementsByClassName() is array-like, but not strictly an array (you have to use the item() method on the resulting collection instead). The ES6 from() function is a static function of the Array object that lets you convert such objects into JavaScript arrays (Listing 3).
var colorNodes = Array.from(document.getElementsByClassName("color"));
for (const colorNode of colorNodes) {
  // do something with colorNode
}
Listing 3. Converting an array-like object into an array.
By the way, how do you know that colorNodes is an array? You use the static Array.isArray() function (Listing 4):
if (Array.isArray(colorNodes)) { console.log("I'm an Array!"); }
Listing 4. Testing for an array.
Array-like objects, on the other hand, will self-identify (using the typeof keyword) as objects, or will return false from the isArray() function. In most cases, so long as an interface exposes the length property, it should be possible to convert it into an array. This can be used to turn a string into an array of characters with a single call (Listing 5).
function strReverse(str) {
  return Array.from(str).reverse().join("");
}
console.log(strReverse("JavaScript"));
> "tpircSavaJ"
Listing 5. Reversing a string
In this example, the strReverse() function uses from() to convert a string into an array of characters, then uses the Array reverse() function to reverse the order, followed by join("") to convert the array back into a string.
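One more detail worth knowing, not shown in the listings above: Array.from() also accepts an optional mapping function as a second argument, applied to each element as the new array is built, which saves a separate map() pass:

```javascript
// Array.from(iterable, mapFn) maps while converting.
var codes = Array.from("ABC", function(ch) {
  return ch.charCodeAt(0); // turn each character into its character code
});
console.log(codes); // [65, 66, 67]
```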
Prepositional Soup: From In to Of
The of keyword is easy to confuse with the in keyword, though they do different things. The of statement, when applied to an array, returns the items of that array in the order of that array (Listing 6).
var colors = ["red", "orange", "yellow", "green", "blue", "violet"];
for (const color of colors) { console.log(color); }
> "red" > "orange" > "yellow" > "green" > "blue" > "violet"
Listing 6. The of keyword returns the values of an array.
The in keyword, on the other hand, returns the index keys of the array (note that these keys are actually strings, not numbers).
var colors = ["red", "orange", "yellow", "green", "blue", "violet"];
for (const colorIndex in colors) { console.log(colorIndex); }
> 0 > 1 > 2 > 3 > 4 > 5
Listing 7. The in keyword returns the keys or indices of an array.
Note that JavaScript arrays, unlike Java arrays, can be sparse. This means that you can have an element a[0] and an element a[5] without having a[1] through a[4]. Iterating by index in this case can cause problems, as the length of the array (here 6) does not correspond to the number of items actually stored in it (Listing 8).
var a = [];
a[0] = 10;
a[5] = 20;
for (var index = 0; index < a.length; index++) { console.log(a[index]); }
// Logs 10, then undefined four times, then 20: a.length is 6,
// but a[1] through a[4] were never assigned.
for (var index in a) { console.log(index + ": " + a[index]); }
// Here, index takes only the values 0 and 5, without any intervening keys.
> 0: 10 > 5: 20
Listing 8. Sparse arrays can give problems for iterated indexes, while in provides better support.
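If you want the index and the value together without choosing between in and of, ES6 arrays also provide an entries() method, which yields [index, value] pairs that can be unpacked with destructuring. A quick sketch:

```javascript
var colors = ["red", "green", "blue"];
var pairs = [];
for (const [index, color] of colors.entries()) {
  pairs.push(index + ": " + color); // index and value in one pass
}
console.log(pairs); // ["0: red", "1: green", "2: blue"]
```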
The Joy of Higher Order Functions
Callbacks have become an indispensable part of Javascript, especially with the popularity of jQuery and related browser frameworks and the node.js server. In a callback, a function is passed as an object to another function. Sometimes these callbacks are invoked by asynchronous operations (such as those used to make asynchronous calls to databases or web services), but they can also be passed to a plethora of Array related iteration functions.
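Underlying all of these callback-driven methods is the fact that JavaScript functions are first-class values: they can be stored in variables and handed to other functions like any other object. A minimal sketch (the function names here are invented for illustration):

```javascript
function greet(name) {
  return "Hello, " + name;
}

// applyTo receives a function as an ordinary argument (a callback)...
function applyTo(fn, value) {
  return fn(value); // ...and invokes it later.
}

console.log(applyTo(greet, "world")); // "Hello, world"
```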
Perhaps the simplest of such callbacks is the one used by forEach(). Its principal argument is the function to be invoked, which is passed both each object and its associated key or index as parameters:
colors.forEach(function(obj,key){console.log(key+": "+obj)})
> 0: red > 1: orange > 2: yellow > 3: green > 4: blue > 5: violet
Listing 9. Invoking a function using the forEach() function.
The forEach() function, while perhaps the most commonly known of the "functional" Array functions, is intended primarily to execute an expression without providing a resulting output. If, on the other hand, you want to chain functions together, a more useful Array function is the map() function, which takes the output of the callback for each item and adds it to a new array. For instance, the following function (Listing 10) takes an array of lower-case words and returns an array where the first letter of each word has been capitalized.
colors.map(function(obj,index){return obj[0].toUpperCase() + obj.substr(1);})
> ["Red", "Orange", "Yellow", "Green", "Blue", "Violet"]
Listing 10. Using the map() function.
Note that this function can be extended to turn a sentence into “title case” where every word is capitalized (listing 11).
function titleCase(str) {
  return str.split(" ")
    .map(function(obj, index) { return obj[0].toUpperCase() + obj.substr(1); })
    .join(" ");
}
var expr = "This is a test of capitalization.";
titleCase(expr);
> "This Is A Test Of Capitalization."
Listing 11. Using array functions to capitalize expressions.
As should be obvious, the split() and join() functions bridge the gap between strings and arrays, where split converts a string into an array using a separator expression, and join() takes an array and joins the strings within the array back together, with a given separator.
What's even cooler about split() is that it can also be used with regular expressions. A common situation that arises with input data is that you may have runs of whitespace between words, and you want to replace each run with a single space. You can use split() and join() (Listing 12) to do precisely that.
var expr = "This   is an example\tof how you clean up text with embedded tabs, \n white  spaces and even carriage \r returns.";
expr.split(/\s+/).join(" ")
> "This is an example of how you clean up text with embedded tabs, white spaces and even carriage returns."
Listing 12. Using arrays to clean up “dirty” text.
While touching on dirty data, another useful “functional” Array function is the filter() function, which will iterate over an array and compare the item against a particular expression. If the expression is true, then the filter() function will pass this along, while if it’s false, nothing gets past. If you have data coming from a database, certain field values may have the value null. Getting rid of these (or some similar “marker” value) can go a long way towards making that data more usable without hiccups (Listing 13).
var data = [10,12,5,9,22,18,null,21,17,null,3,12]; data.filter(function(obj,index){return obj != null})
> [10, 12, 5, 9, 22, 18, 21, 17, 3, 12]
Listing 13. Filter cleans up dirty data.
This notation is pretty typical of JavaScript: powerful but confusingly opaque with all of these callback functions. However, it turns out that there are some new notational forms in ES6 that can make these kinds of functions easier to read. The arrow function construct

(arg1, arg2, …) => expression involving arg1, arg2, …
can be used to make small anonymous functions. Thus,
(obj, index) => (obj != null)
is the same as
function(obj, index) { return (obj != null); }
With this capability, you can write Listing 13 as:
data.filter((obj) => obj != null);
You can even use this notation to create named functions (Listing 14).
const dropNulls = (obj) => (obj != null);
data.filter(dropNulls);
Listing 14. Using predicate notation to define named functions for filters.
The counterpart of filter() is find(), which returns the first item (not all items) for which the predicate is true. For instance, suppose that you wanted to find in an array the first value that is greater than a given threshold. The find() function can do exactly that (Listing 15).
var data = [10, 12, 5, 9, 22, 18, 21, 17, 3, 12]; data.find((item) => item>10)
> 12
Listing 15. Use find to get the first item that satisfies a predicate.
Note that if you wanted to get all items where this condition is true, then simply use a filter (Listing 16).
var data = [10, 12, 5, 9, 22, 18, 21, 17, 3, 12]; data.filter((item) => (item>10))
> [12, 22, 18, 21, 18, 12]
Listing 16. Find retrieves the first value satisfying a predicte, filter, retrieves all of them.
The findIndex() function is related, except that it returns the location of the first match, or -1 if nothing satisfies the predicate.
Did you know that JavaScript Arrays can do map/reduce? This particular process, made famous by Hadoop, involves a two-step process where a dataset (an array) is initially mapped to another “processed” array. This array is then passed to a reducer function, which takes the array and converts it into a single processed entity. You can see this in action in Listing 17, which simply sums up the array values, but shows each of the arguments in the process:
var data = [10, 12, 5, 9, 22, 18, 21, 17, 3, 12]; data.reduce(function(prev,current,index,arr){ console.log(prev+", "+current+", "+index+", "+arr); return prev+current},0);
> 0, 10, 0, 10,12,5,9,22,18,21,17,3,12 > 10, 12, 1, 10,12,5,9,22,18,21,17,3,12 > 22, 5, 2, 10,12,5,9,22,18,21,17,3,12 > 27, 9, 3, 10,12,5,9,22,18,21,17,3,12 > 36, 22, 4, 10,12,5,9,22,18,21,17,3,12 > 58, 18, 5, 10,12,5,9,22,18,21,17,3,12 > 76, 21, 6, 10,12,5,9,22,18,21,17,3,12 > 97, 17, 7, 10,12,5,9,22,18,21,17,3,12 > 114, 3, 8, 10,12,5,9,22,18,21,17,3,12 > 117, 12, 9, 10,12,5,9,22,18,21,17,3,12 > 129
Listing 17. The reduce() function takes an array and processes content into an accumulator.
The first column shows that result of adding the current value to the accumulator (initially set to initializer, the 0 value given as the second argument of the reduce() function), the second columngives the current value itself, the third item is the index, and the final item is the array.
A perhaps more realistic example would be a situation where a tax of 6% is added to each cost, but only for values above $10. Again, you can use predicate maps to simplify things (Listing 18).
var data = [10, 12, 5, 9, 22, 18, 21, 17, 3, 12]; total = data.reduce((prev,current) => (prev + current + ((current>10)?(current-10)*.05:0)),0); console.log("$" + total.toLocaleString("en"))
> "$131.52"
Listing 18. Using the reduce() function to calculate a total with a complex tax.
The .toLocaleString() function is very useful for formatting output to a given linguistic locale, especially for currency. If the argument value “de” is passed to the function, the output would be given as “131,52”, where the comma is used as the decimal delimiter and the period is used for the thousands delimiter.
How Do Your Arrays Stack Up?
When I was in college, cafeterias made use of special boxes that had springs mounted in the bottom and a flat plate on which you could put quite a number of trays. The weight of each tray pushed the spring down just enough to keep the stack of trays level, and when you took a tray off, it rose so that the last tray placed on the stack was always the first tray off. This structure inspired programmer who discovered that there were any number of situations where you might want to save the partial state of something by pushing it down on an array, then when you were done processing, popping the previous item back off. Not surprisingly, such arrays became known as, well, stacks.
JavaScript defines four functions – push (), pop (), shift (), and unshift () respectively. Push() places an item on the end of an array, pop() removes and returns it. You can see this in Listing 19.
var stack = ["a","b","c"]; console.log(stack) > ["a", "b", "c"] stack.push("d") > 4 console.log(stack) > ["a", "b", "c", "d"] stack.pop() "d" console.log(stack) > ["a", "b", "c"] stack.unshift("d") >4 console.log(stack) > ["d", "a", "b", "c"] stack.shift() "d" console.log(stack) > ["a", "b", "c"]
Listing 19. Pushing and popping the stack.
Stack based arrays have a number of uses, but before digging into them, you can extend the Array() functionality with a new function called peek() that will let you retrieve the last item in the array (which is the top of the stack), as shown in Listing 20.
Array.prototype.peek=function(){return this[this.length-1]}; arr = ["a","b","c","d","e"] console.log(arr.peek())
>"e"
Listing 20. Peeking at the stack.
(Note that you can always get the first items in an array with arr[0]).
One of the more common use for stacks is to maintain a “browse” history in a Javascript application. For instance, suppose that you have a data application where render(id) will draw a “page” of content without doing a server-side reload. Listing 21 shows a simple application that keeps a stack which will let you add new items as you continue browsing the stack, but also lets you back up to previously visited “pages”.
<!DOCTYPE html> < <head> <title></title> <style> #container,#list{color:white;} </style> <script> Array.prototype.peek=function(){return this[this.length-1]}; App = { stack:[], color:"white", colorSelect:null, popButton:null, list:null, container:null, init:function(){ App.container = document.getElementById("container"); App.colorSelect = document.getElementById("colorSelect"); App.popButton = document.getElementById("popButton"); App.list = document.getElementById("list"); App.colorSelect.addEventListener("change",App.display); App.popButton.addEventListener("click",App.goBack); return false; }, display:function(){ color = App.colorSelect.value; oldColor = App.color; App.color = color; container.innerHTML=color; document.body.style.backgroundColor=color; App.stack.push(oldColor); App.list.innerHTML = App.stack; }, goBack:function(){ var oldColor = App.stack.pop(); if (oldColor != null){ container.innerHTML=oldColor; document.body.style.backgroundColor=oldColor; App.color=oldColor; App.list.innerHTML = App.stack; } return false; } }; window.addEventListener("load",App.init); </script> </head> <body> <</div> <div> < <White</option> <Red</option> <Green</option> <Blue</option> <Orange</option> <Purple</option> </select> <Pop</button> </div> <</div> </body> </html>
Listing 21. Creating a stack that remembers color states.
When you select an item from the <select> box, it changes the background to the new color, but then pushes the old color onto the stack (shown as the comma separated list). When you pop the item, the stack removes the last item and makes that the active color. This is very similar to the way that a web browser retains its internal history.
Pop!
Contemporary Javascript arrays are far more powerful than they have been in the past. They provide a foundation for higher order manipulation of functions, let you do filtering and searching, and can even turn complex functions (often ones that relied heavily upon recursion) into chained map/reduce operations. This should make arrays and array functions a staple for both text processing and data analysis. Learning how to work with arrays properly can give you a distinct advantage as a Javascript programmer.
One final note: While ES6 is rapidly making its way into most contemporary browsers and environments such as node.js, they may not necessarily be available in older browsers (especially constructs like of and predicates). There are a number of polyfill libraries that will let you get close, most notably the Babel polfill ( ).
Kurt Cagle is the founder and chief ontologist for Semantical, LLC, a smart data company. He has been working with Javascript since 1996. Array of JavaScript Array Tricks
评论 抢沙发 | http://www.shellsec.com/news/16208.html | CC-MAIN-2016-44 | refinedweb | 2,974 | 55.74 |
Setting .type property on <object> element doesn't create an HTML attribute type
Status: RESOLVED FIXED
Component: DOM: Core & HTML
People: Reporter: Martin Honnen; Assigned: jst
Keywords: fixed1.8, regression
Firefox Tracking Flags: (Not tracked)
Attachments: 5 attachments
When trying to set var object = document.createElement('object'); object.type = 'image/x-special'; then the created object element doesn't have a type attribute, that is object.getAttribute('type') yields null This happens with 1.8a trunk builds (tested with Mozilla 1.8a6 (Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8a6) Gecko/20050106) as well as with Firefox 1.0 (tested with Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0) but not with the Mozilla 1.7.5 release. As for the W3C DOM specification about that property type of HTMLObjectElement at <> it doesn't in any way suggest that setting the property should not change the HTML attribute thus it seems a bug that the attribute is not set when the property is set. I will upload a test case that contains a static <object> and then creates two <object> elements with script, trying to set .type on the first while calling .setAttribute('type', ...) on the second. The script then examines all <object> elements in the page and outputs the result of ['type'] and .getAttribute('type'). 
The result with Mozilla, which I think is a bug, is:

Checking [object HTMLObjectElement]; .tagName: OBJECT; .getAttribute("type"): image/x-special; ["type"]: image/x-special
Checking [object HTMLObjectElement]; .tagName: OBJECT; .getAttribute("type"): null; ["type"]: image/x-special
Checking [object HTMLObjectElement]; .tagName: OBJECT; .getAttribute("type"): image/x-special; ["type"]: image/x-special

The correct result should be:

Checking [object HTMLObjectElement]; .tagName: OBJECT; .getAttribute("type"): image/x-special; ["type"]: image/x-special
Checking [object HTMLObjectElement]; .tagName: OBJECT; .getAttribute("type"): image/x-special; ["type"]: image/x-special
Checking [object HTMLObjectElement]; .tagName: OBJECT; .getAttribute("type"): image/x-special; ["type"]: image/x-special

meaning it shouldn't matter whether the object and its type are set statically in HTML, by setting .type, or by calling .setAttribute('type', ...); in all cases getAttribute('type') and the property access .type should give the same result.
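The reflection behavior the reporter expects can be sketched outside the browser with a tiny stand-in object (this is an illustration of the DOM's reflected-attribute pattern, not Gecko's actual implementation):

```javascript
// Minimal stand-in for a DOM element whose "type" property reflects
// the "type" attribute, as HTMLObjectElement is specified to do.
class FakeObjectElement {
  constructor() {
    this.attrs = new Map();
  }
  getAttribute(name) {
    return this.attrs.has(name) ? this.attrs.get(name) : null;
  }
  setAttribute(name, value) {
    this.attrs.set(name, String(value));
  }
  // Reflected property: reads and writes the underlying attribute.
  get type() {
    const value = this.getAttribute("type");
    return value === null ? "" : value;
  }
  set type(value) {
    this.setAttribute("type", value);
  }
}

const obj = new FakeObjectElement();
obj.type = "image/x-special";
console.log(obj.getAttribute("type")); // "image/x-special", not null
```

With reflection in place it makes no difference whether the type is set via the property or via setAttribute(); the bug here was that Mozilla's hacked SetType() broke one direction of that round trip.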
Created attachment 170565 [details] test case, see description in bug report
This is an aviary branch regression (plugin finder). See revision 1.73 of nsHTMLObjectElement.
Assignee: general → jst
Flags: blocking1.8b?
Keywords: aviary-landing
Summary: Setting .type property on dynamically create <object> element doesn't create an HTML attribute type → Setting .type property on <object> element doesn't create an HTML attribute type
Note that <embed> has the same problem, by the way.... jst, is there a reason we're hijacking DOM methods for internal use and changing their behavior in the process?
OS: Windows XP → All
Hardware: PC → All
Why was that patch even needed? mType seems completely unused. In both classes.
And the patch is by none other than jst! What were you smoking? ;-)
mType is used by SetType(). The problem is that the frame calls SetType() on the content node and then wants to use GetType() to get that type back. But it doesn't want to change the DOM (obviously). So SetType got hacked to not change the attr value....
If it desperately needs to do this then it should use some internal interface rather than abusing the nsIDOM ones. Maybe time to revive my nsHTMLSharedObjectElement class...
Too late for 1.8b1, plussing for 1.8b2.
Flags: blocking1.8b?
Flags: blocking1.8b2+
Flags: blocking1.8b-
JST, are you gonna be able to get to this pretty quickly or should we try to get it done in 1.8b3?
moving out to blocking 1.8b3
Flags: blocking1.8b3+
Flags: blocking1.8b2+
Flags: blocking1.8b-
Maybe it's just me, but it seems like this is more what we'd want than having setting object.type set the type attribute. But I bet IE works exactly opposite to that, so maybe we just need to revert to what we had... The reason for the change is that we need to expose the type of the data in the case where it's either not specified in the HTML or where what we get from the server doesn't match what's in the HTML, and in that case it surely does not make sense to go change the HTML type attribute... Maybe now is the time for a 'realType' property that doesn't map directly to the attribute value?
Having a new property that has the server-sent type sounds great to me.
Do we need to expose this in the DOM at all, if so .realType sounds good? Otherwise we could just use nsIContent::SetProperty
The PFS widget needs to know the real type, so we need a new exposed property.
If biesi's creating an nsIImageLoadingContent-like interface for objects, we should just expose it there. Note that this interface need not have classinfo or anything if all we're using it for is PFS.
that will of course only land for 1.9, so if people want a solution for 1.8 something else is needed in the meantime...
No reason the interface, or at least the realType part of it, couldn't land for 1.8.
How about we make .type expose what it exposes today, but setting it also always set the HTML attribute? For that we'd still need a new interface to tell the element about the actual type, but it wouldn't need to be scriptable since the caller that tells the element about the actual type is nsObjectFrame.cpp. That'd be the lowest impact fix for this I believe...
Created attachment 187457 [details] [diff] [review] Make setting .type change the type attr This implements what I talked about in my previous comment.
Attachment #187457 - Flags: superreview?(peterv)
Attachment #187457 - Flags: review?(bugmail)
Whiteboard: needs review
Flags: blocking-aviary1.1+
Whiteboard: needs review → [cb] ready for review? r=bugmail? sr=peterv?
Isn't this fixing just half of the problem? Getting .type will never get the attribute value and will just be an empty string in most cases. I'm not entirely sure what we're trying to do here. If the plugin code just needs to store a type in the content node then IMHO nsIContent::SetProperty is a good solution. Or do we want to make this behave like, for example, myImg.width, where the property always returns the 'real' value, which only sometimes is the same as the attribute? If so, should setting .type really null out mType? And we should probably then return the attribute value if mType isn't set in GetType.
Comment on attachment 187457 [details] [diff] [review] Make setting .type change the type attr why not use nsACString& here, given that both caller and callee would prefer that?
Comment on attachment 187457 [details] [diff] [review] Make setting .type change the type attr r=me it's been suggested to use an nsACString for mType and SetActualType which sounds fine to me.
Attachment #187457 - Flags: review?(bugmail) → review+
Comment on attachment 187457 [details] [diff] [review] Make setting .type change the type attr Yeah, switch to nsACString, also maybe rename mType to mActualType.
Attachment #187457 - Flags: superreview?(peterv) → superreview+
Created attachment 187735 [details] [diff] [review] Final patch that was checked in.
Fixed checked in.
Status: NEW → RESOLVED
Last Resolved: 13 years ago
Resolution: --- → FIXED
Doesn't this still violate the DOM HTML spec? Per that, the .type property should reflect the type attribute on getting no matter what.
Reopening, since we're still not following DOM spec here.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Unless we hear a compelling argument for blocking, let's push this out to 1.8b4.
Flags: blocking1.8b4+
Flags: blocking1.8b3-
Flags: blocking1.8b3+
Whiteboard: [cb] ready for review? r=bugmail? sr=peterv?
would consider a non-risky patch if we can get one in time for b4 but not going to block on this.
Flags: blocking1.8b4+ → blocking1.8b4-
Keywords: regression
Created attachment 193116 [details] [diff] [review] Expose .actualType through nsIPluginElement This restores .type to map directly to the attribute and introduces a new .actualType property that our chrome (or content too for that matter) can QI to to get to the actual type. This way we won't pollute the plugin element namespace except in the case when we're dealing with missing plugins.
Attachment #193116 - Flags: superreview?(bzbarsky)
Attachment #193116 - Flags: review?(bzbarsky)
Comment on attachment 193116 [details] [diff] [review] Expose .actualType through nsIPluginElement

>Index: content/html/content/src/nsHTMLObjectElement.cpp
> nsHTMLObjectElement::SetActualType(const nsACString& aActualType)
> {
>   mActualType = aActualType;

This lets pages set the actual type on a broken plugin, right? Is that desirable? Do we need a security check here? Or perhaps we should make setActualType noscript and leave actualType readonly scriptable?

> //NS_IMPL_STRING_ATTR(nsHTMLObjectElement, Type, type)
> nsHTMLObjectElement::GetType(nsAString& aType)

Why not just remove the manual SetType/GetType impls and uncomment the NS_IMPL_STRING_ATTR?

>Index: content/html/content/src/nsHTMLSharedElement.cpp
> nsHTMLSharedElement::GetType(nsAString& aType)
> {
>- if (!mNodeInfo->Equals(nsHTMLAtoms::embed) || mActualType.IsEmpty()) {
>+ if (!mNodeInfo->Equals(nsHTMLAtoms::embed)) {
>   GetAttr(kNameSpaceID_None, nsHTMLAtoms::type, aType);
> } else {
>-  CopyUTF8toUTF16(mActualType, aType);
>+  aType.Truncate();

This looks like it will always return the empty string for the one case when type matters -- <embed>. Again, why not remove the GetType/SetType here and just uncomment the NS_IMPL_STRING_ATTR?

r+sr=bzbarsky with those fixed.
Attachment #193116 - Flags: superreview?(bzbarsky)
Attachment #193116 - Flags: superreview+
Attachment #193116 - Flags: review?(bzbarsky)
Attachment #193116 - Flags: review+
Created attachment 193510 [details] [diff] [review] Fix that was checked in. Changed all that and checked this patch in.
Attachment #193510 - Flags: superreview+
Attachment #193510 - Flags: review+
Attachment #193510 - Flags: approval1.8b4?
Flags: blocking-aviary1.5+
is this fixed for the branch? If so, can you please add the fixed1.8 keyword?
Yeah, this is all fixed
Status: REOPENED → RESOLVED
Last Resolved: 13 years ago → 13 years ago
Keywords: fixed1.8
Resolution: --- → FIXED
Removing aviary-landing keyword.
Keywords: aviary-landing
It looks like the fix for this was actually wrong. :( In particular, it didn't switch the other consumer that was calling SetType() to using actualType. See bug 430424.
Component: DOM: HTML → DOM: Core & HTML
QA Contact: ian → general | https://bugzilla.mozilla.org/show_bug.cgi?id=277434 | CC-MAIN-2018-05 | refinedweb | 1,706 | 60.41 |
While you can build Q# command-line applications in any IDE, I would suggest using Visual Studio Code (VS Code) as the IDE for your Q# applications. By using VS Code and the QDK Visual Studio Code extension you gain access to richer functionality, and the setup process is simpler for a beginner.
First, install the Microsoft .NET Core SDK; currently it's version 3.1.
The next two steps, installing VS Code and the extension, should be straightforward, so I will not waste your time here.
After you have everything installed, then install the Quantum project templates:
- Go to View -> Command Palette, or cmd+shift+p
- Select Q#: Install project templates
You now have the Quantum Development Kit installed and ready to use in your own applications and libraries. To create your first standalone Q# application:
- Go to View -> Command Palette , or cmd+shift+p
- Select Q#: Create New Project
- Select Standalone console application
- Navigate to the location on the file system where you would like to create the application
After that, VS Code will be reloaded; it might take a few seconds. Then you should see your project folder sitting in the EXPLORER pane. I named my project TESTING123.
Then a bubble pops up at the bottom right corner of the VS Code IDE.
- Click on Open new project… in the bubble that pops up at the bottom right corner once the project has been created.
VS Code will next download and install all the dependencies for you. The following output should be shown in the OUTPUT pane:
Installing C# dependencies...
Platform: darwin, x86_64
Downloading package 'OmniSharp for OSX' (47472 KB).................... Done!
Validating download...
Integrity Check succeeded.
Installing package 'OmniSharp for OSX'
Downloading package '.NET Core Debugger (macOS / x64)' (41962 KB).................... Done!
Validating download...
Integrity Check succeeded.
Installing package '.NET Core Debugger (macOS / x64)'
Downloading package 'Razor Language Server (macOS / x64)' (51065 KB).................... Done!
Installing package 'Razor Language Server (macOS / x64)'
Finished
If you look at the project files, you should see one file ending with the extension .qs: this is your very first quantum code, created by VS Code as a template. The file is called Program.qs, and it is a Q# program that defines a simple operation to print a message to the console.
Go to Terminal -> New Terminal and type dotnet run.
$ dotnet run
And you will see the output like this
Hello quantum world!
The code behind the scenes is located in the Program.qs file:
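For reference, the standalone-application template that the QDK generated at the time looked essentially like this (a reconstruction of the standard template, so minor details such as the operation name may differ):

```qsharp
namespace testing123 {

    open Microsoft.Quantum.Canon;
    open Microsoft.Quantum.Intrinsic;

    @EntryPoint()
    operation HelloQ() : Unit {
        Message("Hello quantum world!");
    }
}
```

The @EntryPoint() attribute marks the operation that dotnet run executes, and Message writes the greeting you saw in the terminal.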
In the code you see namespace, which is the keyword used to declare a scope that contains a set of related objects. My project name is testing123, which is why it appears in the code; yours should show your own project name.
You have learned the fastest way to set up a quantum computing environment on macOS, and "wrote" your first quantum programming code. 🙂
The reference of this article is from
Credit: BecomingHuman By: Bill CX | https://nikolanews.com/quantum-computing-on-macos-with-ms-qdk-setting-up-with-vs-code/ | CC-MAIN-2021-31 | refinedweb | 519 | 64.71 |
By Poonam Lall
parameters not in webform?
By mgonzales3
on
6/22/2004
I compiled the project and for some reason my dropdown lists are not being populated. Any idea?
Question about how the page opens
By neilgould
on
7/16/2004
I went through your articles and they were great. I have this one working, but when I first open the page, the report is fully rendered using the parameters in the report design. Also the toolbar is displayed. Once I enter the parameters on the web page and hit GO, the right data is displayed and the toolbar is off. When I select another category, the report is fully rendered again. Any way around this? Remove the default parameters from the report definition?
Thanks,
Neil
Put it on the Internet
By JasmineRose
on
7/22/2004
I have followed this article and created a .NET application. How can we publish this on the Internet? It works internally.
C# code with Reporting Services
By martlin
on
7/27/2004
Hi Sir,
I downloaded your C# source code for the sample reporting services. I followed your code modification but I got c:\inetpub\wwwroot\webapplication1\reportviewer.cs(81,57): error CS0234: The type or namespace name 'Design' does not exist in the class or namespace 'Microsoft.Samples.ReportingServices' (are you missing an assembly reference?)
Could you please let me know why?
Please send e-mail to martlin8@hotmail.com. I would highly appreciate your help.
Thanks,
Martin
Error in parameters using ReportViewer with string too large
By aap7401
on
7/27/2004
Error in parameters using the ReportViewer when a string is too large.
The page does not display, and I have verified that it fails with very long strings.
similar control for windows form
By sscf
on
8/3/2004
Does anybody know if it's possible to port the ReportViewer component to Windows Forms? I've already had success with the ReportViewer for ASP.NET.
ReportViewer DLL
By rwiethorn
on
8/3/2004
I'm trying to make my own Web app, practicing using the ReportViewer control that is supplied in the sample.
I ran the sample: ReportViewer, and found the 'ReportViewer.dll'
I created a new ASP.Net app in VB., and installed it according to the documentation in the Books On Line "ms-help://MS.RSBOL80.1033/RSAMPLES/htm/rss_sampleapps_v1_7944.htm"
I assigned a value for the Server (), and a value for the report (/SampleReports/Company Sales),per the instructions on: ms-help://MS.RSBOL80.1033/RSAMPLES/htm/rss_sampleapps_v1_7944.htm, from Books on line.
I realize the viewer would be 'static' for a single report, but I'm just starting out.
I got an error: An attempt has been made to use a rendering extension that is not registered for this report server. (rsRenderingExtensionNotFound)
I searched the BOL and MSDN site, but no results. Anyone have any suggestions to get started using the ReportViewer control?
Thanks,
rwiethorn
Print report
By istavnit
on
8/20/2004
Is there any way to automatically print a report when the user clicks a button in a web form that contains the ReportViewer (without showing the Reporting Services toolbar)?
Thank you
Reporting Services and the Report Viewer component - Part II
By jwatsonzia
on
8/30/2004
Hi! I am using the Report Viewer in a VB web application. When I view the web application, the Report Viewer automatically jumps to the top of the page. I have the control actually placed down the page, after the criteria selection for the parameters. Do you know how I can control the placement of the report? Your version doesn't appear to have this problem. Is it a setting? Thank you for the help!
Location Problem
By jwatsonzia
on
9/3/2004
Hi! I have the Report Viewer working. I can even pass in my parameters. However, the report always shows up at the top of my web form no matter where I place it or how I change any of the settings. It is like it is ignoring the actual location. Any ideas? I assume I am just missing a setting somewhere. Help! Thank you!
SetQueryParameter
By jwatsonzia
on
9/13/2004
Hi! I am using this method to call my reports. I have multiple reports that I am passing. I want to modify the SetQueryParameter to check if the Parameter exists in the report first, can I do this? If so, how? I really appreciate any help you can give me. Thank you!
Report Viewer Not Picking Up Parameters
By smolinsd
on
9/16/2004
I can't seem to have my report pick up the parameters that I am passing to it. I downloaded your code and followed your examples, but can't seem to get it to work. Is there anything special you did in Reporting Services, or maybe something else? Thank you for your time.
Drill down not working in web form
By kolansameera
on
9/16/2004
Hi,
I have used the ReportViewer to show the report in my web form (as is explained here). Everything worked out fine, except that my report has drill downs on a group and the drill downs don't seem to work. When I open this report with Report Manager the drill down works like a champ.
I have seen the other postings with a similar problem on the MSDN newsgroup, but they were of no help.
I would appreciate any solution for this problem.
Thanks,
Sam
Drill down not working in browser
By kolansameera
on
9/20/2004
Hi,
I have followed the process explained in this article and used the ReportViewer DLL in a web form to show the report. This report has a drill down on the group, and the details are shown when the toggle/drill down is clicked. The problem here is that the drill down does not work. With Report Manager the toggle (+/-) works like a champ. And there is one more weird behavior: when the drill down is made to open in a new window, or when the report is exported to PDF, the drill down works fine (until the report is refreshed). Is this something to do with sessions?
I have found similar postings by others on Microsoft newsgroups, but couldn't find a solution to this problem.
I would appreciate if you could help me with this.
Thanks,
Sam
Report is asking for credentials
By sreekumar
on
9/28/2004
Hi
I am having a problem while viewing the report through the Report Viewer. It works properly with localhost, but when I use my IP address instead of localhost it asks for a network username and password. How can I remove these credential prompts? I am using this on a web page.
Thanks in advance
Drillthrough/ Drilldown
By rey
on
10/6/2004
Is it possible to set the drillthrough page to be displayed inside the Report Viewer?
Thanks
Rey
Collapsible items in embedded report
By KATO
on
10/7/2004
Hello,
I embedded a report in my web app, but the collapsible items contained in the report do not work.
Any ideas will be greatly appreciated.
Problem with ReportView
By i9796674
on
10/27/2004
Hi,
I have found both articles about reporting services very interesting. In fact I have used most of it in some reports I have recently created. However I have found a small problem that I cannot fix and I assume more people will have had the same one.
My application is using the ReportViewer to display the reports but one of the reports links to another report. The first report is displayed correctly (within the IFRAME) but the second report just opens in a new IE window. I read somewhere that I should set the rc:LinkTarget parameter to the name of my IFRAME but to be honest, I do not know the name of the IFRAME because the IFRAME is generated when I drag and drop the component and does not seem to have a name.
Any ideas would be very helpful.
Thanks in advance
Alonso
Hide Report Manager Navigation
By CSnerd
on
10/30/2004
When I point to my reports server, essentially the Reports Manager web page is displayed in the control. If I just want the toolbar and report to be displayed and nothing else, what do I need to do?
vb version
By xyz_999
on
10/31/2004
Is there a VB version of this code?
security
By ColinR
on
1/4/2005
Thanks, this works on localhost, but what is required to access reports from another PC on the network?
I get as far as the reports page showing the reporting folders, but not the actual report. If I navigate to the report I am prompted for credentials, but there are obviously no params passed.
Good article, but still a question ...
By cedtat
on
1/6/2005
This article (together with the previous one) gave me answers, but I still have a question which is very tricky. There is a limitation on querystring parameters: apparently the ReportViewer component uses the querystring to pass the parameters, so it breaks when there are too many parameters.
Before finding this component, I was trying to find a solution to pass these parameters with a POST method, but wasn't able to do it properly.
Do you have an idea on how to pass the parameters selected on a custom form to Reporting Services without this limitation? And of course I want to use the ReportViewer component, or at least the report viewer on the server...
How to make DrillThrough reports work
By raviloko
on
2/25/2005
Hello Poonam,
I am using a ReportViewer control in one of my ASP.NET web apps, and my reports have a lot of drillthrough links to other reports.
But when I click the drillthrough it opens in a new page. I am trying to confine it to the control on the current page, but so far I have not been successful. Do you know how to use the rc:ReplacementRoot functionality?
Thanks
Ravi | http://www.odetocode.com/Articles/156.aspx | crawl-002 | refinedweb | 1,706 | 64 |
Hi
I am writing a football tournament sim and cannot get the main of my program to deal with the structures which are defined in the header. Here is the relevant code:
header file
#include <math.h>
#include <iostream>
#include <array>
#include <string>

using namespace std;

struct team {
    string name;
    int atk;
    int def;
    int injurys;
    int kSkill;
    int sSkill;
    int lastResult;
};
main cpp file
#include "cup header.h" #include "structure.h" using namespace std; void main (){ teamstructure(); cout<<one.name; system ("pause"); }
thanks for any help you can give, I can post the rest of the code if you need it
Dan | https://www.daniweb.com/programming/software-development/threads/374027/calling-structures | CC-MAIN-2016-50 | refinedweb | 104 | 80.11 |
Xamarin has been popping up on my radar for quite a while now. Of course, I have memories of the pre-history around Novell and Mono and then onto Moonlight and those things and I know a few people over at Xamarin but I’d say that in maybe the past 12 months I’m finding that folks that I wouldn’t expect to know about it (i.e. those that don’t live in a .NET or development world) are increasingly mentioning Xamarin to me.
I should say – this is all credit to Xamarin for the work they do both in the products they build and in their promotion and the community they are building. It’s impressive to see the momentum..
Some of the folks that I know at Xamarin will be laughing at me when I say that I’ve been “getting around” to properly trying out Xamarin for longer than I’d care to admit.
So, today I thought I’d try it out. I thought I’d write a bit of code that I can use across Windows Store, then Windows Phone and then Android. My intention is to do the first 2 pieces without Xamarin and then download and install Xamarin and see what it’s like to get what I’ve already written on Android.
I should point out that I know very little about Android development although I have built the bare bones of an app or two but it’s been very “hello world” type of stuff.
In order to get something going, I’m going to re-use some of the bits that I put together for my “Windows 8/Windows Phone: Building for Both” blog post primarily because I already have those bits and so it means I spend less time getting started. It’s the same (or very similar) code that I used to experiment with the Prism framework back in this post as well but I’m going to move it around a little in this post.
The “app” that came from that post is very simple in just presenting 3 screens.
- Screen 1 – offers a search box and goes off to search flickR for photos that match that search term. tapping on a photo navigates to screen 2.
- Screen 2 – display a list of search results. If the user taps on a photo that causes navigation across to screen 3.
- Screen 3 – display one image with more detail. If the user chooses to, they can save that image to their picture library.
and that’s it. I’m going to try and broadly follow a “Model View ViewModel” approach and I suspect that in doing so I might hit some troubles at the point where I get to Android but that’s part of the fun/challenge. When I used this example to talk about Prism code I ended up with 3 screens which looked something like;
and I’ll probably steal that “UI” and although it would probably be sensible to combine screen 1 and screen 2 I’ll leave them separate here as I have those pieces already.
Getting Started
I kicked off by creating a solution called FlickrCode and dropped two app projects into it (using the blank app template) – one called WindowsApp and another PhoneApp;
I then wanted to have at least one portable class library shared between these 2 projects which I could use to share both implementation and abstractions that are going to be common across both platforms. I chose the “Windows 8+” and “Windows Phone 8” platform options and included .NET 4.5 because that effectively comes for free;
that leaves me to reference the CrossPlatform project from both the other projects;
Building out the Cross Platform Code
Because I’m moving various pieces that I already have into a new place rather than writing from scratch I already know pretty much what I’m going to end up with for this particular app. The structure that ends up inside of my portable class library is as below;
That’s quite a lot of portable code (and note the dependence on System.Net.Http (not Windows.Web.Http)) and it’s broken down into a few areas;
Services
I have 3 services that my code is ultimately going to depend upon.
- A service which knows how to do page->page navigation. Because Windows Phone and Windows 8 apps handle navigation slightly differently, this would be a platform specific service so there’s no implementation in this project.
- A service which knows how to save a photograph into the photos library of the device. Again, this would be a platform specific service so there’s no implementation in this project.
- A service which knows how to get some information from the flickR service online. This service is entirely portable and so there is an implementation in this project.
Those services look like this (in their shape – I’m going to omit the implementation here);
Those interfaces probably speak for themselves even without detailing the parameters/return types. The one that I suspect I’m going to have more trouble with when I get around to Android is the INavigationService interface because it’s built on an assumption of page->page navigation and that such navigation involves some kind of Frame that actually knows how to switch pages. It’s worth saying that in order to be portable, the RegisterFrame() method here takes an object parameter because it’s not possible to make use of a Frame in a portable class library (because it’s not portable
).
It’s questionable as to whether this whole aspect of “navigation” is one that’s actually cross-platform at all here. It just so happens that Windows/Phone work that way. Other platforms may not.
I also built out a little portable base class that I called Locator. I toyed with this for a while but decided that I would take a dependency here on Autofac because that’s available (from Nuget) as a portable IoC container and so I wrote this locator;
using Autofac; using CrossPlatform.ViewModel; using System; namespace CrossPlatform.Service { public class Locator { public Locator() { } public void Initialise(Type navigationServiceImplementation, Type photoServiceImplementation) { ContainerBuilder builder = new ContainerBuilder(); // view models builder.RegisterType<SearchPageViewModel>().AsSelf(); builder.RegisterType<SearchResultsPageViewModel>().AsSelf(); builder.RegisterType<PhotoDetailsPageViewModel>().AsSelf(); builder.RegisterType<SearchResultViewModel>().AsSelf(); // portable service builder.RegisterType<FlickrService>().As<IFlickrService>(); // platform specific services builder.RegisterType(navigationServiceImplementation).As<INavigationService>().InstancePerLifetimeScope(); builder.RegisterType(photoServiceImplementation).As<IPhotoSavingService>(); this._container = builder.Build(); } public T Resolve<T>() { return (this._container.Resolve<T>()); } public object this[string viewModelName] { get { return (this.Resolve(viewModelName)); } } object Resolve(string viewModelName) { var type = Type.GetType(viewModelName); return (this._container.Resolve(type)); } IContainer _container; } }
with the idea here being that this already registers all the view models and the IFlickrService so that’s nice and I ultimately want to drop an instance of this class as a resource in my app.xaml such that Views can use databinding to reach into it and extract their view models with the appropriate dependencies already resolved via Autofac.
The downside here is that something in my code has to call Initialise() on this object but, equally, I have a service (INavigationService) which needs to be passed a reference to a Frame when the application starts up so there’s no easy way to avoid interacting with this Locator at some point in the early life of the app.
Model
All these methods on the services deal in simple types (including Task) with the exception of IFlickrService.SearchAsync which returns a collection of the model class for my app – the details of the photos that it has found from flickR;
which is a fairly simple thing.
Platform
Another simple thing is the SimpleCommand class living in the Platform subfolder which simply implements ICommand;
along with the ViewModelBase class which is sitting in the same folder and provides an implementation of INotifyPropertChanged;
but the implementation makes use of attributes like [CallerMemberName] as per below so I wonder whether that’ll in any way stop this .NET code being portable to Android?
namespace CrossPlatform.Platform { using System; using System.ComponentModel; using System.Runtime.CompilerServices; public abstract class ViewModelBase : INotifyPropertyChanged { public event PropertyChangedEventHandler Property)); } } } }
ViewModels
There’s ultimately 3 pages in this app. The first page simply displays a TextBox (for search text) and a Button (to launch the search) and so the ViewModel that underpins it is not overly complex;
So, nothing too surprising there – a ViewModel that surfaces a SearchTerm and a SearchCommand to be data-bound into the UI. The class also relies on the INavigationService so that when the search term is entered, it can drive navigation to the search results page passing the search term as a navigation parameter.
That search results page has a ViewModel which (on construction) picks up the parameter passed from the previous page, it stores this into its own SearchTerm property for display on the UI and then uses the IFlickrService implementation that it depends upon in order to search flickR for photos. For each photo that’s returned (modelled by the FlickrSearchIterm class) it adds a new ViewModel of a different type (SearchResultViewModel) to a collection that it maintains called SearchResults.
The ViewModel also has a BackCommand which invokes the INavigationService to go backwards in the navigation stack. As I mentioned, the SearchResults property is a collection of SearchResultViewModel which looks as below;
Essentially, this ViewModel is simply wrapping the model classes returned from the IFlickrService and adding one important property – the InvokeCommand which is what would be invoked if a user taps on this particular item in the UI. The implementation of that command uses the INavigationService to navigate to the final page in the app – the photo details page passing the Id of the image in question so that its details can be displayed.
That leads on to the final ViewModel which supports that photo details page;
Once again, at construction time this ViewModel uses the INavigationService to retrieve the Id parameter passed to it. It places that value into the Id property and it then uses the IFlickrService to go and query the Title and the ImageUrl for a larger image than was displayed on the previous page. It drops those values into the Title and ImageUrl properties respectively. There is again a BackCommand which uses the INavigationService and there’s also a SaveCommand which uses the IPhotoSavingService to request that the implementation saves the photo into the photos library.
Windows 8.1 UI
With all that in place it’s time to build a basic Windows 8.1 UI to host these ViewModels and display something on the screen. I used the blank template and, first off, I tweaked my app.xaml to include my Locator class;
<Application x: <Application.Resources> <svc:Locator x: </Application.Resources> </Application>
and I modified the code for the App class such that it Initialises the Locator when the app starts up and also passes the Frame which the start up code creates through to the INavigationService. The standard start-up code from a blank template got modified to include (I missed out the rest of the code););
Naturally, I had to implement both the SimpleNavigationService and PhotoSavingService – those got written;
and then the remainder of the work here is to remember to set access to the Pictures Library in the app manifest and then implement my 3 simple pages;
which data-bind suitable properties on the ViewModels. This is the XAML for the initial SearchPage – not that it finds its ViewModel by that rather clunky use of the Locator;
<Page x: <Grid> <Grid.Background> <ImageBrush ImageSource="ms-appx:///Assets/backdrop.png" Stretch="UniformToFill" /> </Grid.Background> <Grid.RowDefinitions> <RowDefinition Height="100" /> <RowDefinition Height="40" /> <RowDefinition /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="120" /> <ColumnDefinition /> </Grid.ColumnDefinitions> <TextBlock HorizontalAlignment="Left" TextWrapping="Wrap" Text="search" VerticalAlignment="Bottom" FontFamily="Global User Interface" Style="{StaticResource HeaderTextBlockStyle}" Grid. <StackPanel Grid. <TextBox Text="{Binding SearchTerm, Mode=TwoWay}" FontSize="36" Height="64" Margin="0,0,0,24" Background="Black" Foreground="White" /> <Button HorizontalAlignment="Left" VerticalAlignment="Top" Command="{Binding SearchCommand}" Template="{x:Null}"> <Image Source="ms-appx:///Assets/search.png" Width="192" /> </Button> </StackPanel> </Grid> </Page>
and that ultimately (via the ViewModel) causes navigation to the SearchResultsPage.xaml page;
<Page x: <Page.Resources> <DataTemplate x: <Button Template="{x:Null}" Command="{Binding InvokeCommand}"> <Grid Width="250" Height="250"> <Border Background="{StaticResource ListViewItemPlaceholderBackgroundThemeBrush}"> <Image Source="{Binding ImageUrl}" Stretch="UniformToFill" AutomationProperties. </Border> <StackPanel VerticalAlignment="Bottom" Background="{StaticResource ListViewItemOverlayBackgroundThemeBrush}"> <TextBlock Text="{Binding Title}" Foreground="{StaticResource ListViewItemOverlayForegroundThemeBrush}" Style="{ThemeResource TitleTextBlockStyle}" Height="60" Margin="15,0,15,0" /> </StackPanel> </Grid> </Button> </DataTemplate> </Page.Resources> <Grid> <Grid.Background> <ImageBrush ImageSource="ms-appx:///Assets/backdrop.png" Stretch="UniformToFill" /> </Grid.Background> <Grid.RowDefinitions> <RowDefinition Height="140" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <!-- Back button and page title --> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> </Grid.ColumnDef SearchTerm}" Style="{StaticResource HeaderTextBlockStyle}" Grid. </Grid> </Grid> <GridView Grid. </Grid> </Page>
which then navigates to the PhotoDetailsPage.xaml page;
<Page x: <Page.BottomAppBar> <CommandBar> <CommandBar.PrimaryCommands> <AppBarButton Icon="Save" AutomationProperties. </CommandBar.PrimaryCommands> </CommandBar> </Page.BottomAppBar> <Grid> <Grid.Background> <ImageBrush ImageSource="ms-appx:///Assets/backdrop.png" Stretch="UniformToFill" /> </Grid.Background> <Grid.RowDefinitions> <RowDefinition Height="140" /> <RowDefinition Height="*" /> </Grid.RowDef Title}" Style="{StaticResource HeaderTextBlockStyle}" Grid. </Grid> <Image Grid. </Grid> </Page>
and that’s then the entirety of the Windows 8 App (such as it is) – there’s no code-behind the XAML files and the majority of the code is in the portable library;
Windows Phone UI
The process in creating the Windows Phone version of the app is pretty much the same thing as the Windows 8 version of the app. That is;
- Set up my service locator in app.xaml
- Initialise it in app.xaml.cs
- Create 3 pages with very similar data-bound controls as the windows 8 pages.
- Implement the INavigationService and IPhotoSavingService for Windows Phone.
These steps take some time and (4) needs more thought than the other three steps because I actually need to write some code at that point whereas the other steps are already well-understood.
One thing I’d say is that because on Windows Phone you can’t really bind commands to app bar buttons, implementing the photo details page of the app becomes a bit tedious so I made use of the AppBarUtils library just to make my one app bar button bindable to an ICommand.
Also, because the code I wrote for the phone app uses HttpClient in order to implement the IPhotoSavingService and because Windows Phone 8 doesn’t have HttpClient (either the .NET one or the WinRT one) I brought in a reference to that package from Nuget.
At the end of that process, I have a project that’s very similar in structure to the Windows 8.1 project;
and just like the Windows 8.1 project, there’s only a small amount of UI/code in this project – most of the code is coming from the portable class library.
Summing Up
At the end of all that, I’ve got 2 incomplete “apps” drawing most of their implementation from a portable class library.
The code for all that is shared here.
The next steps are for me to attempt to replicate the part of the work that was involved in implementing the Windows8.1/Windows Phone 8 clients for Android.
That involves first installing the Xamarin bits into Visual Studio and then hoping that I can figure out how to build 3 “pages” ( I’m not sure if “paging” is even going to be a valid paradigm ) and the implementation of my INavigationService ( if it’s needed ) and my IPhotoSavingService. That’s assuming that the portable class library I’ve built can be used and referenced from an Android Xamarin project ( who knows? ).
The next steps could take “a while” but I’ll report back on progress in subsequent posts and I’ll try and be more granular about the intermediate steps as I’m breaking new ground ( for me ). | https://mtaulty.com/2014/02/20/m_15110/ | CC-MAIN-2022-05 | refinedweb | 2,673 | 52.19 |
34480/selenium-cant-find-button
Here is the HTML:
<button id="getCoupon" class="getCoupon" onclick="IWant()" style="" data-Get Your Coupon</button>
I have been trying these ways of finding the button:
1. driver.find_element_by_id('getCoupon').click()
2. driver.find_element_by_xpath('//*[@id="getCoupon"]').click()
3. driver.find_element_by_class_name('getCoupon').click()
None of them seemed to work. Any ideas?
Hey @sebastian, here you can try using
driver.findElement(By.linkText("Get Your Coupon")).click();
Hello Nitin, as the Like button on ...READ MORE
The reason you can't locate the item ...READ MORE
In addition to flat out submitting the ...READ MORE
using OpenQA.Selenium.Interactions;
Actions builder = new Actions(driver); ...READ MORE
The problem is that, the end_with method ...READ MORE
2 issues. One thing is, "Catalogues" & ..
OR
Already have an account? Sign in. | https://www.edureka.co/community/34480/selenium-cant-find-button | CC-MAIN-2020-34 | refinedweb | 132 | 63.36 |
Given a linked list with loops, the algorithm returns the beginning of the loop. The next element of a node in a linked list points to nodes that have appeared before it, indicating that the list has loops. Example input: A-B-C-D-E-C(C node appears twice) Output: C analysis: 1. Fast/slow pointer method to determine whether there is A loop in the linked list.
Given a linked list with loops, the algorithm returns the beginning of the loop.
There is the definition of a circular list
The next element of a node in the list points to the node that preceded it, indicating that the list has loops.
The sample
Input: a->, b->, c->, d->, e-> C(C node appears twice)
Output: C
Analysis:
1, fast and slow pointer method to determine whether the linked list has rings
Fast moves forward two steps at a time, slow moves forward one step at a time. If the two Pointers can meet, there will be a ring; otherwise, there will be no ring.
2. Assuming that the head of the linked watch moves K steps to the ring head and slow points to the ring head, fast moves 2*k steps, at which time the two are k steps apart. It can also be considered that the fast pointer follows the slow pointer after m*size-k steps. When the two meet, the slow pointer still has k steps from the ring head. Since we don’t know the specific size of K at this time, but we know that K is the number of steps from the head of the linked watch to the ring head, let fast point to the head of the linked watch. After that, the fast or slow pointer moves back one step at a time, and the place where the two meet is the ring head.
package test; public class FindLoopStart { public Node findLoopStart(Node head){ Node fast = head; Node slow = head; while(fast! =null || fast.next! =null){ fast = fast.next.next; slow = slow.next; if(fast == slow) break; If (fast==null || fast. Next ==null) return null; After the encounter, the slow node takes another k steps to reach the beginning of the ring //. At this point, it does not know the specific value of K, but it knows the number of steps from the beginning of the linked list to the head of the ring //. So, let fast point to the head of the linked list. while(fast ! = slow){ fast = fast.next; slow = slow.next; } return slow; }} | http://www.itworkman.com/145279.html | CC-MAIN-2021-39 | refinedweb | 427 | 87.45 |
Custom indicator, how datas work in init
Hi,
I'm trying to understand how to build a custom indicator. Let's see this one:
import backtrader as bt class DummyDifferenceDivider(bt.Indicator): lines = ('diffdiv',) # output line (array) params = ( ('period', 1), # distance to previous data point ('divfactor', 2.0), # factor to use for the division ) def __init__(self): diff = self.data(0) - self.data(-self.p.period) diffdiv = diff / self.p.divfactor self.lines.diffdiv = diffdiv
(source: by Daniel Rodriguez)
What does this line do?
diff = self.data(0) - self.data(-self.p.period)
Daniel writes it should be self-explanatory, but i don't understand it.
It seems to me that he's subtracting two datafeeds (but this is impossible as I tried to run the code with only one datafeed and it runs). At first, I thought that it was subtracting the closing value of today and today-period, but I don't understand why it's not written as:
diff = self.datas[0].close[0] - self.datas[0].close[-self.p.period]
Thanks for any insights...
@marsario
When doing calculations on the whole line at once, you use round brackets and typically use the formula in the init method. In the line below you are moving the whole line back
-self.p.periodand getting in return
diffwhich is a whole new line.
diff = self.data(0) - self.data(-self.p.period)
In the statement below we use an individual line from the dataline, in this case
closeand we get once piece of data in return, typically used in
next. So in this case, you are subtracting: the value for
self.datas[9].close[0]at the current bar and less
self.datas[0].close[-self.p.period]-self.p.period's ago. This will yield one number, a scalar, and can be used in the current bar.
diff = self.datas[0].close[0] - self.datas[0].close[-self.p.period]
Wow, it's really difficult for me to understand...
I tried a different test. This is the indicator's code:
class hasGrown(bt.Indicator): lines = ('hasgrown',) # output line (array) def __init__(self): self.lines.hasgrown = self.data(0) > self.data(-1)
and this is the strategy:
class buyifgrewStrategy(bt.Strategy): def __init__(self): print("Starting strategy") self.hasgrown = hasGrown() def next(self): if self.hasgrown: if not self.position: calculateSize(self) self.buy(self.data,size=self.size) print("Buying") else: if self.position: self.close(self.data) print("Selling") log(self,f"Today: {self.data.close[0]}. Yesterday: {self.data.close[-1]}")
So, in the indicator I'm checking whether the stock has price has grown yesterday (and if so I buy). If it has decreased I sell.
The code apperently works, but I don't understand why.
What am I comparing here? (open prices? close prices? all of them?)
self.lines.hasgrown = self.data(0) > self.data(-1)
The print output:
... 2021-02-16, Today: 133.19. Yesterday: 135.37 2021-02-17, Today: 130.84. Yesterday: 133.19 2021-02-18, Today: 129.71. Yesterday: 130.84 Buying 2021-02-19, Today: 129.87. Yesterday: 129.71 Selling 2021-02-22, Today: 126.0. Yesterday: 129.87 2021-02-23, Today: 125.86. Yesterday: 126.0 2021-02-24, Today: 125.35. Yesterday: 125.86 2021-02-25, Today: 120.99. Yesterday: 125.35 Buying 2021-02-26, Today: 121.26. Yesterday: 120.99 2021-03-01, Today: 127.79. Yesterday: 121.26 Selling 2021-03-02, Today: 125.12. Yesterday: 127.79 2021-03-03, Today: 122.06. Yesterday: 125.12 2021-03-04, Today: 120.13. Yesterday: 122.06 Buying 2021-03-05, Today: 121.42. Yesterday: 120.13 Strategy is over, final cash: 122076.85999999993
@marsario said in Custom indicator, how datas work in init:
self.lines.hasgrown = self.data(0) > self.data(-1)
Backtrader has a number of shortcut conventions that make life easier, but also more confusing for the uninitiated. When the line is not indicated a default line will be chosen. In your case above, that would be
Your ohlcv data lines are stored in a list called
datas. It's really that simple. So
self.datas[0]is the first ohlcv data line in the list, since in python [0] gets you the first in a list.
self.datas[1]give you the second ohlcv line in that list, and so on. If you omit the
[0]component as in
self.datathen the first ohlcv data line in the list is selected for you.
I personally find these very confusing and opt when I code to use the full and formal written version, eg the following in next:
def next(self): self.datas[0].close[0]
Which translates into the first ohlcv data line, closing value, at the current bar. For comparison,
def next(self): self.datas[2].volume[-4]
This would give me the third ohlcv dataline in my list, retuning the volume, 4 bars ago.
If I wanted to use the whole line in init to create new indicator lines, then I would still use square brackets to select my line, but use round brackes to identify bar position. eg:
def __init__(self): self.newindicatorline = self.datas[1].high(0) -self.datas[2].low(-15)
This would create a new line that would have a value at each bar that is the high of ohlcv line 1 - the low 15 periods ago of ohlcv line 2. If you want to use this new indicator in next:
def next(self): new_ind_current_bar = self.newindicatorline[0] new_ind_last_bar = self.newindicatorline[-1] | https://community.backtrader.com/topic/3564/custom-indicator-how-datas-work-in-init/4?_=1615636587447 | CC-MAIN-2022-33 | refinedweb | 931 | 60.61 |
A signal is a physical quantity that varies with time, space, or other independent variables. In signal processing, a signal is a function that conveys information about a phenomenon [1]. A signal can be 1-dimensional (1D), like audio and ECG signals, 2-dimensional (2D), like images, or of any higher dimension depending on the problem we are dealing with.
Signals in raw form are not always usable; we need to pass them through a filtering stage to ensure a bare minimum of quality for further analysis. So first, let us try to understand signal filtering and its significance.
Signal Filtering
Signal filtering is a primary pre-processing step used in most signal processing applications. The raw signal is not always in a usable form for advanced analysis, i.e., various kinds of noise are present in it. We have to apply a filter to reduce this noise as part of the pre-processing step.
There are many pre-processing steps; one of them is de-noising, which is essential when signals are sampled from the surrounding environment. The moving average filter is one such filter, used to reduce random noise in most time-domain signals.
Now that we have understood the significance of signal filtering, let us understand the moving average filter.
Moving Average Filter
The Moving Average Filter is a Finite Impulse Response (FIR) smoothing filter used to smooth a signal against short-term overshoots or noisy fluctuations while retaining the true signal representation and a sharp step response. It is a simple yet elegant statistical tool for de-noising signals in the time domain.
These filters are a favourite for most Digital Signal Processing (DSP) applications dealing with time-series data. The moving average filter is simple, fast, and shows impressive results by suppressing noise while retaining a sharp step response. This makes it one of the optimal choices for time-domain encoded signals.
The moving average filter is a good smoothing filter in the time domain but a terrible filter in the frequency domain. In applications where only time-domain processing is needed, moving average filters shine, but in applications where information is encoded in the frequency domain (or in both time and frequency), it can be a poor choice.
Types of Moving Average Filter
There are various types of moving average filters, but at a broad level the simple, cumulative, weighted, and exponentially weighted moving average filters form the basic building blocks for most of the other variants. There are many variants, but more or less the fundamental structure boils down to the four core types illustrated in the figure below.
This article will cover most of them at a broad level and show their use cases; we will also look at the variants we used in our own research use case, which boosted our performance metrics.
Now let us deep-dive into each of these types, along with code snippets and some basic mathematical formulation.
Don't get intimidated by the coding and mathematics; I have tried to keep it short and simple.
Simple Moving Average (SMA)
This is one of the simplest forms of moving average filter and is easy to understand and apply to the desired application. The main advantage of the simple moving average is that we don't need elaborate mathematics to understand it, i.e., it can be interpreted from its formula itself.
The drawback of the SMA is that it gives equal weight to all samples within the window, due to which it does not suppress noise as effectively as weighted schemes.
Let's take an example.
Let's say we have an array of numbers \(a_{1}, a_{2}, ..., a_{n}\)
If we take the periodicity (window length) as k, then the average of k consecutive elements starting at position i would be \(SMA_{i} = \frac{a_{i}+a_{i+1}+\cdots+a_{i+k-1}}{k}\)
For easy understanding, let's assume k = 4
\(SMA_{1} = \frac{a_{1}+a_{2}+a_{3}+a_{4 }}{4}\)
\(SMA_{2} = \frac{a_{2}+a_{3}+a_{4}+a_{5 }}{4}\)
\(\vdots\)
\(SMA_{n-k+1} = \frac{a_{n-3}+a_{n-2}+a_{n-1}+a_{n }}{4}\)
So simple moving average result would be
SMA Array = \([SMA_{1}, SMA_{2}, \cdots, SMA_{n-k+1}]\), which contains \(n-k+1\) elements.
Let us implement this simple moving average filter using Python. We will use convolution, convolving the input signal with an all-ones kernel of the given window size.
```python
import numpy as np

def simple_moving_average(signal, window=5):
    # Convolve with a normalised all-ones kernel; mode='same' keeps
    # the output the same length as the input (edges are zero-padded)
    return np.convolve(signal, np.ones(window) / window, mode='same')
```
We will take a simple sine wave, superimpose random noise on it, and demonstrate how effective the simple moving average filter is at reducing noise and restoring the original waveform.
```python
Fs = 8000      # sampling frequency = 8 kHz
f = 5          # signal frequency = 5 Hz
sample = 8000  # number of samples

time = np.arange(sample)
original_signal = np.sin(2 * np.pi * f * time / Fs)  # signal generation
noise = np.random.normal(0, 0.1, original_signal.shape)
new_signal = original_signal + noise
```
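Putting these pieces together (repeated here so the snippet is self-contained), we can quantify the improvement numerically, for example via the mean squared error against the clean signal. The window length of 31 and the fixed random seed are arbitrary choices for illustration:

```python
import numpy as np

def simple_moving_average(signal, window=5):
    return np.convolve(signal, np.ones(window) / window, mode='same')

np.random.seed(0)  # fixed seed so the experiment is repeatable

Fs, f, sample = 8000, 5, 8000
time = np.arange(sample)
original_signal = np.sin(2 * np.pi * f * time / Fs)
new_signal = original_signal + np.random.normal(0, 0.1, original_signal.shape)

filtered = simple_moving_average(new_signal, window=31)

mse_before = np.mean((new_signal - original_signal) ** 2)
mse_after = np.mean((filtered - original_signal) ** 2)
print(mse_before, mse_after)  # the MSE drops substantially after filtering
```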
The following plot shows the effectiveness of the simple moving average filter for random noise reduction:
One characteristic of the SMA is that if the data has a periodic fluctuation, then applying an SMA of that period will eliminate that variation (the average always contains one complete cycle). But a perfectly regular cycle is rarely encountered.
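This elimination property is easy to verify numerically: a sinusoid averaged over exactly one full period sums to zero, so an SMA whose window equals the period removes that component while keeping a slower trend. The 100-sample period and the linear trend below are made up for illustration:

```python
import numpy as np

period = 100                               # samples per cycle
n = np.arange(2000)
trend = 0.001 * n                          # slow trend we want to keep
seasonal = np.sin(2 * np.pi * n / period)  # periodic fluctuation to remove

signal = trend + seasonal
# mode='valid' avoids edge effects; each output point averages one full cycle
smoothed = np.convolve(signal, np.ones(period) / period, mode='valid')

# The seasonal part cancels exactly; only the (window-centred) trend remains
residual = smoothed - 0.001 * (np.arange(smoothed.size) + (period - 1) / 2)
print(np.max(np.abs(residual)))  # numerically zero
```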
Cumulative Moving Average (CMA)
The CMA deviates a bit from the other types in the moving average family, and its usefulness for noise reduction is nil. One benefit of the CMA is that it accounts for all past data while also incorporating the most recent data point, unlike the SMA, which is just an equally weighted average of the data points within a defined sliding window.
The simple moving average has a sliding window of constant size k, contrary to the CMA, in which the window grows larger as time passes during the computation.
\[CMA_{n} = \frac{x_{1}+x_{2}+\cdots+x_{n}}{n}\]
To reduce computational overhead, we use a recursive generalised form:
\[CMA_{n+1} = \frac{x_{n+1}+n\cdot CMA_{n}}{n+1}\]
This is called cumulative since the (n+1)-th term also accounts for the cumulative contribution of the n previous data points while averaging.
```python
import numpy as np

def cumulative_moving_average(signal):
    cma = [signal[0]]  # CMA of the first data point (t = 0) is itself
    for i, x in enumerate(signal[1:], start=1):
        # CMA_{i+1} = (x_{i+1} + i * CMA_i) / (i + 1)
        cma.append((x + i * cma[i - 1]) / (i + 1))
    return cma
```
It is evident that the CMA is not at all useful for reducing noise, since the retrieved signal is nowhere near the original in shape. This filter is the worst choice for noise reduction.
The CMA is quite alien in its output compared to the other family members.
But what we can learn from its formula and concept is that we can assign weights that are not necessarily equal, i.e., play with weights and adjustments, which was a limitation of the SMA. This is where our next topic, the Weighted Moving Average, comes into play.
One of the interesting applications is in the stock market where data stream arrives in an orderly manner, an investor may want the average price of all trades for a particular financial instrument up until the current time. As each new transaction occurs, the average price at the time of the transaction can be calculated for all transactions up to that point using the cumulative average [3].
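As a sketch of that use case, the recursive CMA formula lets the running average be updated in constant time per trade, without storing the full price history. The prices here are hypothetical:

```python
def cma_update(prev_cma, n, new_value):
    """CMA of n+1 values, given the CMA of the first n values."""
    return (new_value + n * prev_cma) / (n + 1)

trades = [101.2, 100.8, 101.5, 102.0, 101.1]  # hypothetical trade prices

running_avg = trades[0]
for n, price in enumerate(trades[1:], start=1):
    running_avg = cma_update(running_avg, n, price)
    print(f"after trade {n + 1}: average = {running_avg:.4f}")

# The final value matches the plain mean of all trades so far
```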
Weighted Moving Average
A weighted moving average works similarly to a simple moving average filter, except that each data point within the window is assigned a relative weight. SMA is an unweighted mean, while WMA uses weights to adjust the relative contribution of each point, which makes it even more useful in applications.

Let's take an example to get a better understanding. Say we have an array of numbers \(x_{1}, x_{2}, \cdots, x_{n}\), and let's keep the periodicity or window length the same as in the previous example, i.e., k = 4.
The general Weighted Average formula is
\[W = \frac{\sum_{i=1}^{n} w_i\cdot x_i}{\sum_{i=1}^{n} w_i}\]
The purpose of the weights is to give more importance to some data points than to others, and the choice of weights depends on the application. One application is in image processing: the coefficients of a filter are nothing but weights, the local patch of pixel values to which the filter is applied serves as the data points, and their combination is a weighted average. A blurring (box) filter, for example, is simply an equally weighted filter.

The characteristics of the weights decide what effect the filter has on the input samples.

These WMA techniques are also used extensively in stock markets, though better variants such as the EWA or EWMA are used more often.
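To make the idea concrete, here is a small sketch of a windowed WMA. The function name and the convention that the last weight multiplies the newest sample are illustrative choices, not from the original text:

```python
import numpy as np

def weighted_moving_average(signal, weights):
    """WMA over a sliding window; weights[-1] multiplies the newest sample."""
    w = np.asarray(weights, dtype=float)
    x = np.asarray(signal, dtype=float)
    k = len(w)
    out = []
    for i in range(k - 1, len(x)):
        window = x[i - k + 1 : i + 1]       # oldest ... newest
        out.append(np.dot(window, w) / w.sum())
    return np.array(out)
```

With equal weights this reduces to the SMA; with unequal weights, e.g. `[1, 3]`, the newest sample dominates each window.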
Exponentially Weighted Average (EWA) or Exponential Weighted Moving Average (EWMA)
An exponentially weighted average gives more weight to recent data points and less to older ones. This preserves the trend while still accounting for a decent portion of the reactive nature of recent data points. Compared with SMA, this filter suppresses noisy components very well, i.e., it is effective at de-noising signals in the time domain. It is a first-order infinite impulse response (IIR) filter.
The general formula is
\[\begin{equation*}EMA_t = \left\{ \begin{array}{ll} x_0 & \quad t = 0 \\ \beta \cdot x_t + (1-\beta) \cdot EMA_{t-1} & \quad t \geq 1 \end{array} \right. \end{equation*}\]
The definition of the beta coefficient changes between applications, but the meaning remains the same: more importance to recent data points and less to previously averaged ones.

The EWMA is a recursive function, and this recursive property leads to the exponentially decaying weights:
\[\begin{equation*}EMA_t = \beta \cdot x_t + (1-\beta)\cdot EMA_{t-1} \end{equation*}\]
\[\begin{equation*}EMA_t = \beta \cdot x_t + (1-\beta)\cdot (\beta \cdot x_{t-1} + (1-\beta)\cdot EMA_{t-2}) \end{equation*}\]
\[\begin{equation*}EMA_t = \beta \cdot x_t + \beta \cdot x_{t-1} - \beta^2 \cdot x_{t-1}+(1-\beta)^2\cdot EMA_{t-2} \end{equation*}\]
\((1-\beta)^k\) will keep on exponentially decaying as shown in the figure below.
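A minimal sketch of the recurrence above (the function name is mine; note that some texts swap the roles of \(\beta\) and \(1-\beta\)):

```python
def exponential_moving_average(signal, beta):
    """EMA_0 = x_0; EMA_t = beta * x_t + (1 - beta) * EMA_{t-1}."""
    ema = [signal[0]]
    for x in signal[1:]:
        ema.append(beta * x + (1 - beta) * ema[-1])
    return ema
```

A small beta gives heavy smoothing (slow reaction to new samples); beta close to 1 tracks the raw signal almost exactly.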
The significance of EWMA (or EMA) in deep learning is known to very few people. Variants of this filter are used in deep learning optimisers for faster convergence to the global minimum. The loss landscape is highly stochastic, especially when the data dimension is enormous; by giving relative importance to the recent update path while making sure the new averaged update doesn't deviate too much from past ones, the optimiser can trace a better path toward the minimum. It maintains momentum while still avoiding overshoots; refer to the figure below.
This type of moving average filter has excellent usage in many other fields like in the stock market where it is considered to be one of the most simple and reliable indicators to do analysis and make decisions. It is also used in many machine learning research.
In one of our research use cases, efficient peak detection in ECG signals, we used a moving average filter to suppress noise, and it considerably boosted peak-detection performance. But we need to be careful about where we use this kind of filter; in particular, where frequency-domain operations are involved, it is better to avoid it.
Comparative Study
Now that we have a good understanding of the moving average filter, we will dive into more analytical aspects. We will compare moving average filters with the Savitzky–Golay (Savgol) filter, a time-domain smoothing filter, and compare their performance.
As we know, moving average filters smooth out the signal. The smoothing effect grows with the window length, i.e., the larger the window (up to a certain limit), the cleaner the signal and the stronger the de-noising. The following figure shows this relationship clearly. We have used Gaussian white noise with sigma = 0.1. From the plot it is clear that 500-point averaging works really well: the signal is much smoother and resembles the original.
Now let's see how effective the moving average filter is when the noise level is higher.
We can see in Fig 11 that a large amount of white noise was added to the signal, yet after applying the moving average filter we can still recover the original signal envelope, i.e., we are able to filter out the noise effectively.
Now let us compare the smoothing effect of the simple moving average filter with that of the Savitzky–Golay filter, also known as the Savgol filter. The Savgol filter increases the precision of the data without much distortion of the signal tendency [4]. It is one of the most widely used signal-smoothing methods.
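A rough sketch of such a comparison, assuming SciPy is available; the signal frequency, noise level, window length, and polynomial order here are illustrative choices, not the ones used for the figures:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)              # 5 Hz sine, our "original" signal
noisy = clean + rng.normal(0, 0.3, t.size)     # add Gaussian white noise

# Simple moving average via convolution with an equal-weight kernel
k = 51
sma = np.convolve(noisy, np.ones(k) / k, mode="same")

# Savitzky-Golay: fits a local polynomial instead of taking a flat average
savgol = savgol_filter(noisy, window_length=51, polyorder=3)
```

Comparing the mean squared error of each filtered signal against `clean` quantifies how much noise each method removes; the Savgol fit tends to preserve peak shapes better than the flat SMA kernel.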
Clearly, we can see how the SMA filter recovers the original signal envelope even with much stronger white noise added. This makes it well suited to digital communication.
We can characterise the noise in the channel as Gaussian white noise, and we can recover the signal shape by smoothing out the signal, i.e., removing that noise. In digital communication it is important to consider the Shannon channel capacity limit [5], since the moving average filter is less effective at low signal-to-noise ratios. Even so, the moving average filter is a basic filtering technique applied in most communication systems to reduce the impact of white noise.
Moving average filters have many potential applications. Since the moving average filter is an excellent time-domain filter but has an inferior frequency-domain response, it is important to consider the application before applying it. In applications dealing with spectral features (frequency-domain operations), it is best to avoid applying moving average filters in the time domain.
I hope you find this blog educative and interesting in theoretical as well as practical aspects.
Cheers to filtering noise ahead! | https://codemonk.in/blog/moving-average-filter/ | CC-MAIN-2022-33 | refinedweb | 2,383 | 51.78 |
ZFS Replication To the Cloud Is Finally Here and It's Fast (arstechnica.com) 150
New submitter kozubik writes: "Jim Salter at Ars Technica..."
rsync and zfs do different things (Score:5, Informative)
rsync synchronises files. ZFS synchronises a file system. Of course it is better to work that way because you can transfer just the changed components of a file. Moving a file just changes a pointer, so send the pointer. That sort of thing.
Opens up another major possibility (Score:2)
ZFS doesn't think in terms of files. It thinks in terms of blocks, and in a redundant z-volume (similar to a RAID array) it distributes those blocks over multiple virtual devices (vdevs) - you can think of them as disks, but they don't have to be. These vdevs can be a disk
Re: (Score:2)
Look into how HDFS works [apache.org] it's the filesystem underlying Hadoop.
Re: (Score:2)
It's more than that - ZFS is basically taking fast snapshots and syncing just the deltas between the latest snapshot and the previous snapshot, which are blocks. Files and pointers don't matter - it's syncing individual changed blocks. You change one letter in a file, it's not syncing the whole file - just the changed block. It's substantially more efficient.
Exactly, and it's why ZFS' transfer speed is so much faster and does not go up with the size of the file (as rsync does), as shown in the article.
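The delta-based workflow these comments describe looks roughly like this on the command line (pool, dataset, and host names are placeholders):

```sh
# First run: full send of an initial snapshot
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backuphost zfs receive backup/data

# Later runs: send only the blocks changed between snapshots
zfs snapshot tank/data@hourly1
zfs send -i tank/data@base tank/data@hourly1 | ssh backuphost zfs receive backup/data
```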
Re: rsync and zfs do different things (Score:1)
Re: (Score:1)
rsync does the same thing (block level transfers). ZFS wins this race because it is the filesystem and keeps track of which blocks are changing. rsync has to read every block, compute a checksum, and communicate that checksum to determine which block(s) need to be transfered. That's an expensive process, and thus why rsync defaults to "whole-file" on local storage. (you should disable that on an SSD.)
VM Replication (Score:4, Interesting)
I was a little unexcited by (although interested in) the article, even by the general speedups until I got to the part about VM replication. This really makes an enormous difference.
ZFS licensing has kept this as a grey area for me, so I've largely kept away from deployment (save for an emergency FreeNAS box I needed in a hurry), but I'd clearly benefit from looking here again. Thanks for the reminder.
Oh, I also appreciate the rsync.net advertisement. Good guys, good service
;-)
Re: (Score:3)
The article did feel like an advertisement.
They offer a VM with lots of a disk space, is that really that special ?
I know of at least one that offers something similar:... [vultr.com]
I guess not at the same scale and with a bandwidth limit.
What I think is kind of funny is how people are surprised that ZFS works well for VM-images.
rsync is meant/optimized for transfering files, not blocks.
ZFS is meant for transfering filesystem blocks, VM-images are blocks too.
So ZFS works better than rsync
Re: (Score:3)
Re: (Score:3)
but what Linux calls "containers" are crappy attempts to containerize.
Not sure what you mean. Jails have been around for a long time, but LXC/LXD containers have almost identical functionality.
container templates...check
filesystem snapshot integration (ZFS, btrfs) with cloning operations...check
resource limits...check
unprivileged containers...check
network isolation...more flexible under LXC than Jails, in my opinion
bind mounts in containers...check
nice management utilities
Re: (Score:2)
Only difference I can see really is that LXC doesn't support nested containers...
It most certainly does. Linux can nest user namespaces to almost any depth.
Re: (Score:2)
Security can't be bolted on after the fact, it must be baked into the design
Re: (Score:2)
The difference is BSD Jails are entirely separate environments with their own unshared kernel datastructures, and the jail communicates with the host via an API. Linux namespaces is just metadata added to shared environments.
I'm sorry, but this notion is completely wrong. A BSD Jail is a forked process (the "jail process"), which calls the "jail" kernel system call and then executes a chroot. The jail syscall serves to attach the "prison" data structure to the "proc" data structure of the jail process, allowing the kernel to identify the process as "jailed" and treat it accordingly. The isolation of the environments is dependent entirely on the kernel recognizing that the process is jailed and putting the appropriate restrictio
Re: (Score:2)
This is how the FreeBSD kernel devs describe BSD Jails. Each jail gets its own kernel network stack, kernel memory allocator, and almost every other kernel data structure. They said this is nearly identical to paravirtualization. Breaking out of a jail requires a kernel flaw in both a system call and the paravirtualization layer.
Think KVM+QEMU, with most of the benefit an
Re: (Score:2)
This is how the FreeBSD kernel devs describe BSD Jails. Each jail gets its own kernel network stack, kernel memory allocator, and almost every other kernel data structure.
What you are describing is VPS (Virtual Private System), not Jails. VPS is the successor to Jails, written to address some of the shortcomings of Jails and make them more useful in situations where you want true virtual environments, rather than just the extra security that Jails has to offer. Incidentally, the mechanisms used to implement VPS in FreeBSD are nearly identical to the mechanisms for implementing containers on linux. Here is the relevant description from the whitepaper (
Re: (Score:2)
Re: (Score:2)
You're able to run as-root / Set-UID binaries with-in them? Nope. LXC emulates this by mapping UID-0 in the container to UID-x on the host via namespaces.
No, that is not correct. Root is root in an lxc container subject to some limitations (ex: making device entries), just like it is with BSD Jails. The mapping that you are referring to is a security mitigation feature, should an attacker manage to break out of the container. If a root-user within the container breaks out of the chroot (containers are essentially chroot with cgroups added in), but are still within the container process (iow, no buffer overflow or similar vulnerability), they will be subject
Re: (Score:2)
From some of the benchmarks in the article it didn't seem like rsync had any strength over syncoid, other than his tool requiring ZFS on both en
Re: (Score:2)
There is no grey area with respect to the licensing. It's CDDL, a free software licence. It's 100% Free.
It might be incompatible with the GPL, but that's a non-issue. The userland tools are fine under this licence. The kernel modules are fine under this licence. Now, it means that the kernel modules aren't going to appear in a kernel release anytime soon, but that in no way makes for any legal problems in using them as loadable modules, today. It works fine from a technical point of view, and it's als
ZFS + Linus is not a GPL violation (Score:2)
Don't let the licensing FUD scare you. Linus has publicly stated that licensing in a case that's a very near equivalent to ZFS' licensing is fine.
The anticipated problem with the license has always been on the Linux side. The license ZFS is released under doesn't in any way prohibit the ZFS code from being used in other places with other licenses (like the *BSD's). There has never been a concern that using ZFS with Linux violates the ZFS license (and thus could bring Oracle's well-fed lawyers down upon y
Charming (Score:2, Insightful)
Who cares, right? As the service itself states, If you're not sure what this means, our product is Not For You.
Ah, there's that welcoming open-source community spirit.
Re:Charming (Score:4, Informative)
there are things in this world that simply aren't meant for participation award winners. so go get offended somewhere else.
if somebody doesn't know what ZFS replication is, their product clearly isn't meant for them. why bother with explanation to a visitor that has no use for the product/service?
the attitude of these ZFS people is still quite welcoming compared to some connectivity providers i've dealt with. e.g. bogons.net will just politely tell you to f*ck off if you don't fully understand what you're purchasing from them (dwdm/cwdm rings).
Re: (Score:3, Informative)
their howtos hold newbies' hands sufficiently. they simply don't provide a free "Oracle ZFS Storage Appliance Administration course", which is what some people seem to expect. it seems i am discussing this with people who haven't even visited their website, so i'll stop here.
It's a subset not usable outside the set (Score:2)
Re: (Score:2)
Snapshotting has been in ZFS from (practically?) the beginning.
This article is about a cloud provider specifically providing a workable service to act as a ZFS snapshot receiver, which before required you to do some serious customization on a general-purpose compute environment like Amazon EC2.
At the prices that rsync.net charges for what it is, this is a pretty compelling off-site solution for my media storage, as it's already on a ZFS pool via FreeNAS.
The filesystem so fast... (Score:1)
Re: (Score:1)
That was ReiserFS, not ZFS.
Re: (Score:1)
Only after the Russian mail-order bride steals the money from your open source "wealth" to fund her new boyfriend's BDSM hobbies. She actually sounded a lot like my ex, the one with the website on breast feeding with nipple rings.
And no, I'm not making *any* of this up.
Rsync could have done this too! (Score:5, Informative)
Reading this article, it seems that this "ZFS replication" is very similar to rsync, with one straightforward addition:
Rsync works on an individual file level. It knows how to synchronized each modified file separately, and does this very efficiently. But if a file was renamed, without any further changes, it doesn't notice this fact, and instead notices the new file and sends it in its entirety. "ZFS replication", on the other hand, works on the filesystem level so it knows about renamed files and can send just the "rename" event instead of the entire content of the file.
So if rsync ran through all the files to try to recognize renamed files (e.g., by file sizes and dates, confirming with a hash), it could basically do the same thing. This wouldn't catch the event of renaming *and also* modifying the same file, but this is rarer than simple movements of files and directories. The benefit would have been that this would work on *any* filesystem, not just of ZFS. Since 99.9% of the users out there do not use ZFS, it makes sense to have this feature in rsync, not ZFS.
Re: (Score:3)
I was wondering what this offers over a (theoretical?) inotify+rsync app.
In the comments at the linked-to Ars article, Jim discusses just this approach.
Basically, and from memory, he determined that it would just be too much work to re-implement something that already works solidly (ZFS) and comes with a huge amount of other features out of the box.
Re:Rsync could have done this too! (Score:5, Insightful)
Re: (Score:2)
The crucial difference is ZFS send is unidirectional and as such is not affected by link latency. rsync needs to go back-and-forth, comparing notes with the other end all the time.
But this is *not* what the article appears to be measuring. He measured that the time to synchronize a changes were nearly identical in rsync and "ZFS replication" - except when it comes to renames.
Re: (Score:3)
Re: (Score:2)
In addition, when it comes to VM hosting in the filesystem, ZFS deduplication can offer a significant space savings by deduping all the common files in the VM images (operating system files).
If you are hosting Windows VMs, this effectively nullifies many gigabytes of storage bloat. This is, of course, a feature of ZFS, and has nothing to do with snapshotting other than the fact that your snapshots will be smaller.
Re: (Score:2)
deduplication takes an insane amount of RAM and is really only useful for static, rarely written datasets; it's strongly recommended against for VM images.
OTOH enabling lz4 compression is recommended - cpu/ram usage is minimal and the compression levels can be quite impressive, plus it can actually improve disk i/o as less data is read/written from disk. I have many VM's with compression enabled, compression usually reduces the image by about 30%
Re: (Score:2)
Depending on what your setup is and what the requirements are, it's fully feasible to have a 'storage server' where all it's RAM is handed over to ZFS for caching and dedup, and you export via NFS to your VM hosting systems on 10GbE. It adds a touch of latency, but if you can host a hundred machines that don't require super low latency and save 90% of the disk space by only having 1 copy of your server OS (for the most part), then you're probably doing better.
It's a viable config depending on what the need
Re: (Score:2)
Renames and changes to large files (VM images were the author's example).
Re: (Score:3)
Yet this is what the article says. Does he really have to measure read time to the millisecond instead of providing an estimate? How fast can your disk system read off 2TB of information, anyway?
"Virtualization keeps getting more and more prevalent, and VMs mean gigantic single files. rsync has a lot of trouble
Re: (Score:2)
Not quite - zfs needs to contact the destination zfs fs to compare with the last snapshot, but that is a very quick process. Once done zfs already knows whats blocks have changed since the last snapshot, whereas rsync has to scan the contents of each file at *both* ends which is where all the time comes in.
Re: (Score:2)
Not quite zfs needs to contact the destination zfs fs to compare with the last snapshot
Ehm, no, sorry. No communication with the destination machine is required while generating an incremental send stream. How can I claim this? Well besides being quite intimate with the ZFS source base (and I can point you to the relevant source files if you so desire), just a quick read through the zfs(1M) manpage will mention this example:
# zfs send pool/fs@a | ssh host zfs receive poolB/received/fs@a
As you are no doubt aware, pipes are by definition unidirectional. There is no way the zfs receive can tal
Re: (Score:2)
Re: (Score:1)
ZFS send/recv works at a very low level using the fundamental infrastructure in ZFS that makes snapshots work. When you send an incremental ZFS snapshot it doesn't have to check anyth
Re: (Score:3)
The rename issue is actually *very* important. It's not likely that you'll have a lot of independent renames, but something very likely is that you rename one directory containing a lot of files - and at that point rsync will send the entire content of that directory again. I actually found myself in the past stopping myself from renaming a directory, just because I knew this will incur a huge slowdown next time I do a backup (using rsync).
Re: (Score:3)
Re: (Score:3)
So if rsync ran through all the files to try to recognize renamed files (e.g., by file sizes and dates, confirming with a hash), it could basically do the same thing.
As a sibling comment points out, rsync does have a mode which handles this. As they don't point out, it is horrendously costly. Making this the default would be a pure idiot move. ZFS has metadata that permits detecting these sort of files, so it is possible to do it cheaply with ZFS.
What is really wanted IMO is for rsync to detect this stuff and use it when ZFS is present.
Re: (Score:2)
ZFS has metadata that permits detecting these sort of files
Side note for your entertainment in case it interests you, the way ZFS actually handles the rename case has nothing to do with trying to follow file name changes. In fact, in order to handle a rename, we don't need to look at the file being renamed at all. The trick is in the fact that directories are files too (albeit special ones) with a defined hash-table structure. ZFS send simply picks up the changes to the respective directories as if they were regular files and transfers those. The changed blocks the
Re: (Score:2)
Side note for your entertainment in case it interests you
It does
The trick is in the fact that directories are files too (albeit special ones) with a defined hash-table structure. ZFS send simply picks up the changes to the respective directories as if they were regular files and transfers those.
That does seem like functionality which rsync could be enhanced to use. At least, it could be used to more rapidly find duplicates when both ends are using ZFS. rsync ain't going away anytime soon.
I am interested in ZFS but will probably wait until a Linux distribution makes it trivial to implement. I am past the point where messing around with filesystems seems fun.
Re: (Score:2)
the scopes of what "zfs send" and "rsync" do are so profoundly different, it's almost silly to compare them. they're at completely different layers of storage stack. when i sync my local filesystem with a remote site (every hour), i sync snapshots, clones, (sub)filesystems while things are mounted and heavily in use. there's also compression and deduplication to consider.
the rsync feature you suggested isn't possible without a complete zfs rewrite or another layer of abstraction. too costly in either case.
Re: (Score:3)
The biggest difference is that ZFS has full knowledge of the state of the file system, rsync on the other side doesn't, it's stateless, it has to start from zero each time and regather the information on each and every run on both sides, which is a really slow and potentially error prone process (i.e. when files change while rsync runs). ZFS knows what's going on in the filesystem and its snapshots the filesystem at a single point in time, so it thus it can be be far quicker and won't produce inconsistencie
Re: (Score:2)
Re: (Score:2)
ZFS replication is for synchronizing file system snapshots. rsync is for syncing some files.
Entirely different purposes even if they seem the same.
ZFS encapsulates the entire storage channel. It is your volume manager all the way to your file system. It knows of every single change that occurs, when and where it occurs and what it changed. Sending a ZFS snapshot gets not only the snapshot being sent, but every one in between. ZFS does deduplication, compression, checksumming, and the snapshots stores ev
Re: (Score:2)
Re: (Score:2)
They definitely are. But it doesn't scale well. The time taken to scan the files and their contents on the source and destination system becomes overwhelming. The largest I've taken it to is a few terabytes, consisting of many thousands of directories each containing thousands of files (scientific imaging data). It ends up taking hours, where with ZFS it would take a few seconds. It also thrashes the discs on both systems as it scans everything, and uses a lot of memory. ZFS does none of these things-
Re: Rsync could have done this too! (Score:2)
Another problem is that rsync has to scan the entire file system, calculate hashes and transfer them and then do the same on the other side before it can transfer the difference.
If you have millions of files and directories that can take significant amount of time. I used to have rsync take a weekend to backup. With ZFS I can do hourly backups.
Re: (Score:2)
Well, sort of....
We switched from rsync to ZFS replication for our production environments and the difference in performance is rather extreme. (and why we made this change)
Medium sized file system, 12 TB and a few hundred million files. Doing a backup with rsync took days, and it was all just tied up in IOPs, even if the number of files changed was rather small. At this scale, it takes more than 24 hours just to get a listing of files.
Switching to ZFS with nightly snapshots and replication dropped backup t
Re: (Score:2)
The other advantage is that ZFS replication, unlike RSYNC, doesn't need to calculate diffs because ZFS it already keeps track of what blocks have changed since the last snapshot. This makes the entire process much faster less resource intensive.
Imagine the following scenario:
You are the sysadmin at a 24x7 company. You have a few hundred user's home directories (shared over NFS or SMB) on a fileserver that needs to be upgraded/replaced for some reason. You are tasked with migrating these home directories
Sheesh, the sheer low quality of TFA (Score:1)
For those who already understand rsync and zfs the article adds nothing new that is of value. 1/3 of the article is telling you what rsync is, which you could fill with lorem ipsum without lowering the next-to-nothing quality of the article. We already fucking know what rsync is. It's been in the man pages for, like, 10+ years. And why do you need a Jedi picture just for that?
Then the useless benchmark, taking another 1/3. No repeatable experiments. No statistics. Only one-shot timings. And the worst thi
Re: (Score:2)
I guess you missed the RESOLVED tag on that.
Re:BTRFS is the future (Score:5, Interesting)
Er, no. Btrfs may one day make feature parity with ZFS, and it may also achive the reliability of ZFS, but it has a long, long, way to go in both areas to get to those points.
The on-disc structures might have been declared "stable", but what does that mean, really? That you'll be able to mount current filesystems on future kernels, yes. That the frozen design was correct and contains no design flaws? No. Personally, I think they froze it way too early. There are a number of fairly fundamental issues with the Btrfs design which compromise its performance (fsync) and integrity (unbalancing, data loss on recovery), and in some cases place arbitrary limits upon things (e.g. the hardlink issue). Some can be mitigated, while others can not. These and other issues are easily found and researched.
Seriously, I've been using Btrfs since very near the beginning for a variety of tasks. But I've been objective about it, rather than a blinkered fanboi. It's an interesting filesystem with some good ideas. But it has
/always/ been a case of "next year it will be stable", and the performance is dire. Progress has been painfully slow, and the bugs I've encountered along the way have been numerous and show-stopping. Maybe it will "get there", but I think your assertion that "once BTFS userland side gets stable" that it will replace ZFS is incredibly naive. It assumes that there are no major issues remaining on the kernel side, and it also assumes that the only thing needing doing on the user side is stability. Based on its history to date, the likelihood of the kernel side being bug-free is close to zero. On the user side the tools are primitive, feature-incomplete and almost completely undocumented, containing little information and no examples. On the ZFS side, the tools are feature complete and are properly documented, with examples, and with whole sets of training material on top of that.
If you needed to make a decision on which to use for a serious deployment, or even just for a smaller scale home NAS, right now if you objectively compare the two, the choice is quite clear, and it's not Btrfs. Based upon the development history of the two, it's unlikely that this will change much in the next few years. Remember also that ZFS development is very active, perhaps even moreso than Btrfs. But who knows, maybe by 2020 Btrfs will surpass it.
Re: (Score:2)
I am using Btrfs on my NAS/firewall/server quite happily and in my experience it's been stable and performant, but overall I agree with you. The tools could be better and there are a lot of idiosyncracies here and there. Personally, I find the fact that Btrfs is terribly fragmentation-prone somewhat of an issue as running defrag on any snapshotted or deduped content will ruin the reflinks and ends up duplicating all the blocks needlessly, thereby eliminating the whole point of using snapshots in the first p
Re: (Score:2)
Re: (Score:1)
ZFS is an ENTERPRISE file system, it will eat all the RAM you give it and get faster with more RAM as it can cache more I/O. It is designed run on a well spec'ed server with a UPS.
Of course you can run it on anything FreeBSD supports and try your luck, it works well even then for most people.
I wonder why Docker doesn't deploy to OpenIndiana (Score:2)
If btrfs has so many issues, I wonder why Docker doesn't have a deployment on Illumos [openindiana.org]. or SmartOS [smartos.org].
I would think that Docker enthusiasm would be damped by a beta filesystem and (the lack of) verifiable security in package content.
Re: (Score:2)
BTRFS is less mature than ZFS, but it has a lot of useful functionality and is in some ways more elegant. For example, the snapshot of a subvolume is a first class filesystem in itself without dependency on it's parent. It's also a lot better about handling replacement of physical volumes underneath it if you have mirroring turned on. In particular, you can arbitrarily increase the size of the filesystem by using a larger replacement or just adding on more drives.
On the other hand, I'm not touching the rai
Re:BTRFS is the future (Score:5, Insightful)
Are you for real AC, or just trolling?
Your Synology "reference" is a classic "appeal to authority", only it's a really bad choice of authority due to its complete lack of any technical detail or substance of any kind. That link is to a marketing page for a company which makes money selling hardware. It's just a few bullet points (snapshotting, checksumming in essence), without any discussion of the actual tradeoffs or comparison with other systems. It's worthless. It's only purpose is to tick a feature box to act as an incentive to purchase their systems; as for the actual performance and reliability of those features--that's the customer's problem. Caveat emptor.
I've done more than casual work and development with Btrfs. For example, from back when I was a Debian developer, here's the original initial support for Btrfs snapshotting in schroot [github.com]. This lets you create virtual environments from Btrfs snapshots, as well as other types such as LVM and overlays. You can then plug this into other tools such as sbuild, and then build the whole of Debian using snapshotted clean build environments. Doing this, Btrfs fails hard around every 18 hours, going read-only. Why? Creating and deleting 18000 snapshots for 8 parallel builds quickly unbalances the filesystem, requiring a manual rebalance. You don't see that unfortunate detail in the Synology fluff page, do you?
You can also get snapshots and decent recovery (albeit without block-level checksums) from LVM and mdraid. In my experience, its recovery behaviour after real hardware failure is vastly more reliable than Btrfs. Simply put, it has always resynched the data without problem, while Btrfs has caused irrecoverable data loss, despite it theoretically being much better. LVM snapshots have very different tradeoffs as well. And on modern Linux with udev, we had to abandon using them due to races in udev/systemd making them randomly fail.
The point I'm making is that the reality of the chosen tradeoffs between performance, reliability and featureset of the different filesystems is a subtle one. You can't reduce it down to "Btrfs is better" or "ZFS is better". That's marketing. But I have spent over seven years pushing Btrfs to its limits, and have found it sorely lacking. It's unacceptable that it unbalances itself to the point of unusability. It's unacceptable that it has led to irrecoverable dataloss on several occasions. It's also unacceptable that in its eight years of existence, none of the developers could be bothered to write any decent documentation. The dataloss was down to bugs, some of which are fixed, but it does leave you in a position of lacking trust in it in the face of such problems. If you compare this with ZFS, while it's not fair to say it has been totally bug free, it has been almost bug free, and the number of dataloss incidents is small. I've yet to encounter any problems with ZFS myself, but I've encountered many serious issues with Btrfs.
Anyone who uses Btrfs or ZFS on a NAS system does so at their own risk after researching the various options and their tradeoffs. Just because a vendor decides to make and market a system using Btrfs does not make that system the best choice. It just means they thought they could make some profit from it.
Re: (Score:2)
To be fair, the race existed in udev prior to the systemd merge as well. When lvremove randomly stops working, it's a bit surprising, and it took a while to pinpoint udev as the culprit keeping the snapshot devices open and preventing their removal. "Helpful" such behaviour is not. We had to move all the debian buildds from using lvm snapshots to unpacking tar files as a result (btrfs being too fragile as mentioned).
Re: (Score:3)
Re: BTRFS is the future (Score:2)
ZFS disk structures were stable a decade ago but frankly the userland is still a bit buggy today, and that's with ten times as many people working on it as btrfs and people knowing full well where the problems are and what needs to be done to fix them. btrfs hasn't gone through that discovery process yet.
Don't assume undone work is easy. I'll be delighted to be proven wrong in five years (I said the same thing five years ago).
ZFS vs BTRFS (Score:3)
Jim Salter writes some great pieces on file systems for Ars Technica.
At the linked article are Related Links. Of particular note is "Atomic Cows and Bit Rot" -- read that if you're interested in modern file systems.
never RAIDZ yourself, but run run run to get some (Score:2)
Yeah, he writes okay pieces, but it kind of annoys me when he throws up blanket advice and then practically trips over himself extolling the opposite.
ZFS: You should use mirror vdevs, not RAIDZ [jrs-s.net]
Guess what? The entire rsync.net service is built on top of RAID-Z3, if I read their promotional portal correctly.
One use case I can see for this is using ZFS to back up Postgres databases. I'm not the only person to think this might be a good idea. A while back, I listened to this talk, which I really enjoyed:
Keith [youtube.com]
Re: (Score:3)
Whereas /. is filled with people such as yourself...
I've been on /. & ars for close to 2 decades & the level of idiot posts is unfortunately much higher here.
Re: (Score:2)
ZFS is nice... but it's just not been stable
By your definition of stable, nothing is stable. ZFS is not perfect, but it is closer to perfect than anything else.
Re: (Score:3)
Without some kind of incremental snapshot, with read-only privileges after the snapshot, straight replication is next to useless if someone does "rm -rf /". And it happens *all the time*.
So... zfs covers that... since it does exactly what you suggest.
Sure, if you can afford to buy 3 times as much disk
What? If you want mirroring or RAID-like qualities, yes, you need to duplicate data; that's true of any mechanism like this... you do realize that's what things like NetApp do too... right, just mirroring or raid?
and roughly 10 times as much network bandwidth as you ever really process with,
... this makes no sense? How does the network come into play here? You're just making random shit up?
ZFS is nice if you can afford one sys-admin/Terabyte of data to try to keep it up to date, but it's just not been stable.
The company I work at rolls over roughly 50tb of data PER DAY, several petabytes worth... in ZFS...
You'll have to pardon me if I
Re: (Score:2)
Fortunately, zfs also supports snapshots, and those can be sent/received as well.
Re: (Score:2)
Without some kind of incremental snapshot, with read-only privileges after the snapshot, straight replication is next to useless if someone does "rm -rf /". And it happens *all the time*.
So, exactly what ZFS provides then... You take periodic snapshots (hourly, daily, weekly, or whatever), then send the deltas between the snapshots to the destination system. You can easily put that in a cron job and have a regular push to a backup system (hey, exactly like what the tool in TFA is doing...). If someone does wipe out all their files, you have the snapshot(s) containing it on both the source and destination system, depending upon your schedule for dropping old snapshots. However you decide
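Sketched as commands, the snapshot-and-send loop described above looks roughly like this (pool names, the backup host, and the timestamps are illustrative, and this is an untested outline rather than a hardened script):

```sh
# From cron: take a timestamped snapshot (hourly, daily, weekly, or whatever).
zfs snapshot tank/data@2015-12-22

# First run only: seed the destination with a full stream.
zfs send tank/data@2015-12-21 | ssh backup zfs receive -F pool/data

# Every later run: send only the delta between the last two snapshots.
zfs send -i tank/data@2015-12-21 tank/data@2015-12-22 | ssh backup zfs receive pool/data
```

Old snapshots on both ends can then be dropped on whatever retention schedule you like, which is what preserves the "rm -rf" protection described above.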
Re: (Score:2)
1. ZFS Snapshotting is incremental, just like NetApp. In fact, it's so 'just like NetApp' that NetApp sued Sun Microsystems over it.
2. You don't know what the hell you're talking about. See #1.
Re: (Score:2)
Having trouble distinguishing between rsync, the tool, and rsync.net, the online service? Having never used either, the distinction was still perfectly clear to me.
"The cloud" (Score:2, Informative)
Anyone else getting tired of this term? All it means is "someone else's computer". All you're doing is renting server space and replicating your data there. There's nothing special about it.
Re: (Score:1)
Yep. 'The Cloud' is just shifting responsibility to someone else, who may or may not be doing a proper job of security or backups. This seems germane [textfiles.com].
Re: (Score:2)
Re: (Score:2)
Hmmm.... good point... perhaps we need "Smart cloud 2.0"
Re: (Score:2)
Anyone else getting tired of this term? All it means is "someone else's computer".
To be fair, that's kind of what it has meant for years. I have a networking textbook that's 15 years old that represents unspecified parts of a network in a network diagram as a cloud shape. So "piece of computer network that I don't care much about the details", e.g. the Internet, has been called "cloud" for a while.
Of course, this is not to be confused with "cloud computing", which has a more precise definition (basically distributed processing, but with on-demand virtual machines instead of physical nodes).
Re: (Score:2)
Never heard of a private cloud then? We run a large virt cluster here & "the cloud" is the most straightforward & friendly way for me to refer to it to the higher ups. "Cloud" is just the same as "cluster", however the former is more widely recognised.
Discount for slashdot folks (Score:1)
We've had a very significant discount for HN readers for years and we'd be happy to extend that to /. readers. Just email and ask.
Really happy to be here - I am not sure why I am labeled as "new submitter" since I have been a slashdot user for... 15 years?
Happy to answer any questions about our service here as well.
Re: (Score:2)
Er, OpenZFS...
ZFS originated within Sun, which was bought by Oracle. Oracle then laid off most (all?) of the ZFS developers, who then went to work for other companies. The current ZFS development is no longer inside Oracle, and nor is it owned by them. They own the copyright on the original CDDL releases. Big deal. Not using it because of the historic association with Oracle would be a little... extreme.
Re: (Score:2)
You do realise that Btrfs originated within Oracle, right? ZFS was merely acquired by them.
Re: (Score:2)
Oh, so in your hatred of Oracle, you're recommending a filesystem project that was started by... Oracle.
Only reason Oracle isn't still the major contributor to btrfs is because they bought Sun and got a complete version of what they were trying to create with btrfs. | https://slashdot.org/story/15/12/22/026209/zfs-replication-to-the-cloud-is-finally-here-and-its-fast | CC-MAIN-2016-40 | refinedweb | 6,284 | 71.14 |
"pygame.draw.polygon" fails to draw polygon on all edges of shape.
As shown in the attached image, when drawing a Pygame polygon (using the pygame.draw.polygon function) the polygon is mostly filled correctly apart from one edge. Switching the drawing function to lines drew all the edges correctly (but had no polygon fill, which I want). To see the problem better, I added code that drew red dots in the Pygame window on the vertices of the polygon edge (with and without fill) and a black alignment square which should have 1 white pixel (gap) between its top left sides and the shape. In the attached image there is also the relevant code (which I have added, simplified, below). It seems quite clear that this is a Pygame problem (as using the lines function worked), so please take a look and see if you can fix it.
Additional case information: Pygame version: 1.9.2a0, Python version: 3.5.1.
Simplified example code (full, to run as a file with Python 3.5.1 and Pygame 1.9.2):
import pygame, sys

pygame.init()
surface = pygame.display.set_mode((335, 80))
pygame.display.set_caption("Demonstration of Pygame bug")
border, rect = {"width": 5}, pygame.Rect((0, 0, 60, 60))  # Defined here for debug.

while True:
    surface.fill((200, 200, 200))
    surface_2 = pygame.Surface((60, 60))
    surface_2.fill((255, 255, 255))  # White is contrasting with black shape.
    for index in range(1):  #2):
        path_data = [({0: 0, 1: rect.width-1}[index], {0: 0, 1: rect.height-1}[index]),
                     (rect.width-1, 0),
                     (rect.width-border["width"], border["width"]-1),
                     ({0: border["width"]-1, 1: rect.width-border["width"]}[index],
                      {0: border["width"]-1, 1: rect.height-border["width"]}[index]),
                     (border["width"]-1, rect.height-border["width"]),
                     (0, rect.height-1)]
        #pygame.draw.lines(surface_2, (0, 0, 0), True, path_data)  # Normal function.
        pygame.draw.polygon(surface_2, (0, 0, 0), path_data)  # Function with bug.
        for dot in path_data:  # Draw vertices for debugging.
            pygame.draw.line(surface_2, (255, 0, 0), dot, dot)
    pygame.draw.polygon(surface_2, (0, 0, 0),
                        [(border["width"]+1, border["width"]+1),
                         (border["width"]*2, border["width"]+1),
                         (border["width"]*2, border["width"]*2),
                         (1+border["width"], border["width"]*2)])
    surface.blit(surface_2, (10, 10))
    pygame.display.update()
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_ESCAPE:
                pygame.quit()
                sys.exit()
Moved to github. | https://bitbucket.org/pygame/pygame/issues/313/pygamedrawpolygon-fails-to-draw-polygon-on | CC-MAIN-2019-09 | refinedweb | 410 | 54.69 |
Using pip to install nltk
A newbie question. I have just installed Pythonista on my new iPad Pro and cannot work out how to run pip.
I would like to be able to use nltk.
Search of the FAQ did not help, and entering pip in the console returns "invalid syntax"
Any pointers welcome - thanks.
- Webmaster4o
Pythonista doesn't include pip. That would violate Apple's app review policies. However, StaSh does include it.
You can install StaSh by pasting
import urllib2; exec urllib2.urlopen('').read()
into the console.
@Webmaster4o is correct, as long as you're using Pythonista 2, not the Pythonista 3 beta (which you can find info about on this forum)
@MichaelC basically after you install stash, force quit Pythonista, then to launch stash you run the newly installed "launch Stash" script, then from the Stash prompt you can do pip
Worked first time! Thanks everyone.
Lots to learn ...... | https://forum.omz-software.com/topic/3102/using-pip-to-install-nltk/5 | CC-MAIN-2021-21 | refinedweb | 153 | 74.69 |
Configuring Maven
Add the following to the file settings.xml (usually in your $HOME/.m2 or your Maven conf directory, for example /usr/local/apache-maven/apache-maven-<version>/conf) so that Maven will allow you to execute Mule plug-ins.
settings.xml
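The snippet referred to here did not survive; what this step conventionally adds is a pluginGroups entry so Maven can resolve the Mule plug-in prefix (treat the exact group id as an assumption if your Mule version differs):

```xml
<settings>
  <pluginGroups>
    <!-- lets Maven resolve goals such as mule-project-archetype:create -->
    <pluginGroup>org.mule.tools</pluginGroup>
  </pluginGroups>
</settings>
```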
Using the Archetype
First, open a command shell and change to the directory where you want to create your project.
Next, you will execute the archetype and generate the code. If this is your first time running this command, Maven will download the archetype for you.
At minimum, you pass in two system parameters:
artifactId: The short name for the project (such as 'myApp'). This must be a single word in lower case, with no spaces, periods, hyphens, etc.
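Put together, the generation step is a single Maven invocation. Only artifactId is documented above; the goal name and the muleVersion parameter below are my assumptions from typical Mule 3.x archetype usage, not something stated in this page:

```sh
mvn mule-project-archetype:create -DartifactId=myApp -DmuleVersion=3.9.0
```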
Example Console Output
The plug-in prompts you to answer several questions about the project you are writing. These may vary according to the options you select. An example of the output is shown below.
Note: OGNL is deprecated in Mule 3.6 and will be removed in Mule 4.0.
After you have answered all the questions, the archetype creates a directory using the project name you specified that contains the new project. A new MULE-README.txt file will be created in the root of your project explaining what files were created.
The Questions Explained
Provide a description of what the project does:
You should provide an accurate description of the project with any high-level details of what you can or cannot do with it. This text will be used where a description of the project is required.
Which version of Mule is this project targeted at?
The version of Mule you want to use for your project. This will default to the archetype version passed in on the command line.
What is the base Java package path for this project?
This should be a Java package path for you project, such as com/mycompany/project. Note that you must use slashes for separators, not periods.
Which Mule transports do you want to include in this project?
A comma-separated list of the transports you plan to use in this project (such as HTTP and VM). This will add the namespaces for those transports to the configuration file. | https://docs.mulesoft.com/mule-user-guide/v/3.9/creating-project-archetypes | CC-MAIN-2018-13 | refinedweb | 369 | 75.3 |
I'm having problems with an assignment in C++. The assignment is this:
Make a program that reads two moments of time, given in hours and minutes. These moments of time are the start and end of a day of work. After that, the hour's pay must be read in. The program is going to estimate the workday in hours and minutes, and estimate the gross wage for a day. An hourly worker gets paid the hourly wage for the actual number of hours worked. The hours are at most 40 per week. If it is greater than 40, the worker gets 1.5 times the hourly rate for each excess hour. The salaried worker gets paid the hourly wage for 40 hours, no matter what the actual number of hours is.
The screen printout is going to tell how long the person has worked, in hours and minutes, and the gross wage.
My code is this:
#include <iostream.h>
main()
{
int Hour1, Hour2, Minute1, Minute2;
double Hour_Pay;
cout << "Key the hour when you start at work: ";
cin >> Hour1;
cout << "Key the minute in that hour you start at work: ";
cin >> Minute1;
cout << "Key the hour when you leave work: ";
cin >> Hour2;
cout << "Key the minute in that hour you leave work: ";
cin >> Minute2;
cout << "Key the hour's pay: ";
cin >> Hour_Pay;
cout << endl;
int Hours = Hour2-Hour1;
int Minutes = Minute1-Minute2;
double Total = (Hours)+(Minutes/60);
double Gross = Total*Hour_Pay;
cout << "You work " << Hours << " hours and " << Minutes << " minutes every day at work."
<< endl
<< "Your gross day's pay is " << Gross << " dollars.";
}
This code is probably not any good at all. I guess I have to do some "variable%variable" but I don't know how or where.
Anyone that does have a suggestion on how to do this assignment? | http://cboard.cprogramming.com/cplusplus-programming/40585-assignment.html | CC-MAIN-2015-35 | refinedweb | 239 | 76.05 |
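One way to structure the two calculations the assignment asks for, sketched as standalone functions (the function names are mine, not part of the assignment; the overtime rule follows the wording above):

```cpp
#include <cassert>  // assert() is handy for quick sanity checks

// Elapsed working time in whole minutes, from start and end clock times.
int elapsedMinutes(int startHour, int startMinute, int endHour, int endMinute)
{
    return (endHour * 60 + endMinute) - (startHour * 60 + startMinute);
}

// Hourly worker: straight time up to 40 hours, 1.5x the rate beyond that.
double hourlyGross(double hoursWorked, double rate)
{
    if (hoursWorked <= 40.0)
        return hoursWorked * rate;
    return 40.0 * rate + (hoursWorked - 40.0) * 1.5 * rate;
}

// Salaried worker: paid for 40 hours no matter the actual hours.
double salariedGross(double rate)
{
    return 40.0 * rate;
}
```

For the printout, split the elapsed total with total / 60 for hours and total % 60 for minutes; that "variable % variable" is exactly where the modulo you mention comes in, and it avoids subtracting the minute fields directly (which goes negative whenever the end minute is smaller than the start minute).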
I get an error when I'd like to load rJava. JDK is installed. (I run R on a CentOS vm (cloudera demo vm cdh3u4))
> library(rJava)
Error : .onLoad failed in loadNamespace() for 'rJava', details:
call: dyn.load(file, DLLpath = DLLpath, ...)
error: unable to load shared object '/home/cloudera/R/x86_64-redhat-linux-gnu-library/2.15/rJava/libs/rJava.so':
libjvm.so: cannot open shared object file: No such file or directory
Error: package/namespace load failed for ‘rJava’
[cloudera@localhost ~]$ java -version
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
Getting rJava to work depends heavily on your computers configuration. The following is working at least on a windows platform. You could try and check, if this will help you on your platform, too.
If you use the 64-bit version, make sure that you do not set JAVA_HOME as an environment variable. If this variable is set, rJava will not work, for whatever reason. You can check if your JAVA_HOME is set inside R with:
Sys.getenv("JAVA_HOME")
If you need to have JAVA_HOME set (e.g. you need it for maven or something else), you could deactivate it within your R-session with the following code before loading rJava:
if (Sys.getenv("JAVA_HOME")!="") Sys.setenv(JAVA_HOME="") library(rJava)
This should do the trick in most cases. Furthermore this will fix the issue Using the rJava package on Win7 64 bit with R, too. I borrowed the idea of unsetting the environment variable from R: rJava package install failing.
using UnityEngine;
using System.Collections;
public class EnemyAI : MonoBehaviour {
public Transform target;
public int moveSpeed;
public int rotationSpeed;
//);
//Move Towards Target
myTransform.position += myTransform.forward * moveSpeed * Time.deltaTime;
}
It's on the line after //Move Towards Target myTransform.position += myTransform.forward * moveSpeed * Time.deltaTime;
I have a problem too; it's also a parsing problem in Unity. Can you help me? I'm a beginner, so I'm not very good in this area. I have the error assets/testing.cs(17,9): error CS8025: Parsing error. This is the code:
using UnityEngine;
using System.Collections;

public class Testing : MonoBehaviour {

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        if (Input.GetKeyUp(KeyCode.Space)) {
            gameObject.transform.up.y = +1;
        }
    }

Thanks in advance, Paul
PS: I'm Dutch so sorry if i have some write errors
Answer by Styn
·
Aug 04, 2011 at 12:37 PM
You don't have a closing bracket for your class! (assuming this is the full code) You should try to provide more details about the error
Ok, i added a closing bracket for my class but then I get an error that says "namespace can only contain types and namespace declarations " on lines 11 and 17 (the void Start and void Update lines)
anyone? Please?
Answer by dibonaj
·
Aug 04, 2011 at 01:53 PM
You need to have closing braces for your class and you need to declare myTransform.
Feature #7149
Constant magic for everyone.
Description:
Adenosine = ChemicalSpecies.new initial_concentration: 5.micromolar
Adenosine.name #=> "Adenosine"
class ChemicalSpecies
constant_magic true
end
and imbue ChemicalSpecies with the same constant magic ability that Class and Struct classes have. Could it be made possible, please?
Updated by nobu (Nobuyoshi Nakada) over 7 years ago
- Status changed from Open to Feedback
What do you expect if the object is assigned to two or more constants?
Updated by prijutme4ty (Ilya Vorontsov) over 7 years ago
May be any hook can be implemented to make it possible in such a way?
on_const_set{|const, obj|
def obj.name
const
end
}
Updated by Anonymous over 7 years ago
nobu (Nobuyoshi Nakada) wrote:
What do you expect if the object is assigned to two or more constants?
Same behavior as with Class and Struct objects. I do not understand the implementation details, but it seems that for these objects constant magic has already been consistently implemented. Actually, the implementation consistency is the main reason why I am begging here for an official solution of this.
Updated by nobu (Nobuyoshi Nakada) over 7 years ago
Do you expect the following?
class Foo
Bar = 42
end
p (6*7).name #=> Foo::Bar
Updated by Anonymous over 7 years ago
Not out of the box, only if the user turns it on:
class Fixnum
constant_magic # or constant_magic( true ); or const_magic(); etc.
end
But, oh, do I feel your point. Saying that naming 42 is stupid is not enough.
Giving objects - any objects, not just fringe cases like Numerics, Symbols,
Vectors etc. - wrong names is a sin, root of evil surrounding the true nature :)
They say the devil himself is perhaps skin alone. I revulse names, unless these
be good hash functions, like those of Tolkien's Ents. But alas, biologists
deserve not the name of their science: they indulge in dissecting and giving
names that obscure rather than identify. Should this be remedied at the language
level? Definitely. But there is no way we would see this done before the end of
the century, if at all.
So, sadly, I have to live in the world I find myself in, refrain from proposing
to prohibit constants altogether, and satisfy myself with proposals that seem to
even foster the evil of deficient synonyms. Going back to naming 42, please note
that the rope to hang oneself with is already there: This behavior is achievable
with present devices of Ruby. I am only begging to make the possible more efficient.
Searching whole namespace gives me goosebumps.
Updated by mame (Yusuke Endoh) over 7 years ago
- Target version set to 2.6
Updated by Anonymous almost 7 years ago
I have put my library in public ( ), so I can now exemplify. After installing the gem:
require 'y_support/name_magic'

class Klass; include NameMagic end

UJAK = Klass.new
Klass.instance_names #=> [:UJAK]
UJAK.name #=> :UJAK
Klass.new name: :ANEC
Klass.instance_names #=> [:UJAK, :ANEC]
Klass.instance( :ANEC ) == Klass::ANEC #=> true
I am too busy using the library to be working on its documentation, sorry.
I use this all the time, in expressions such as
Length = Quantity.standard of: :L
the above creates a Quantity instance named :Length with physical
dimension :L
METRE = Unit.standard of: Length, short: "m"
the above creates a physical unit named :metre (constant assignment
alone is enough to convey the information about the unit name, and
hook is used to downcase :METRE to :metre), so that 1.metre and 1.m
both start working.
more examples in
Updated by marcandre (Marc-Andre Lafortune) almost 7 years ago
- Category set to core
- Status changed from Feedback to Open
- Assignee set to matz (Yukihiro Matsumoto)
I've never needed this, but I could envision a
const_assigned callback. Whenever a constant is assigned to an object, then object.const_assigned("FullyQualified::Name") would be called.
In other words, the "magic" around Modules and Classes could be understood with the "equivalent" Ruby code:
class Module
  def const_assigned(name)
    @name ||= name
  end

  def to_s
    @name || "#<Class:#{object_id}>"
  end
end
I would guess that the runtime impact would be completely negligible.
In the examples given above, ChemicalSpecies could have a similar method. And yes, even (6*7).name could be "Foo::Bar" with:
class Fixnum
  @@reg = {}

  def name
    @@reg[self] || to_s
  end

  def const_assigned(name)
    @@reg[self] ||= name
  end
end
What do you expect if the object is assigned to two or more constants?
That would be entirely up to the implementer of
const_assigned, but
const_assigned would be called multiple times.
Updated by headius (Charles Nutter) almost 7 years ago
const_assigned is not a bad hook at all, imho.
Updated by naruse (Yui NARUSE) over 2 years ago
- Target version deleted (2.6)
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/7149 | CC-MAIN-2020-16 | refinedweb | 786 | 65.01 |
Using Vue with TypeScript
A type system like TypeScript can detect many common errors via static analysis at build time. This reduces the chance of runtime errors in production, and also allows us to more confidently refactor code in large-scale applications. TypeScript also improves developer ergonomics via type-based auto-completion in IDEs.
Vue is written in TypeScript itself and provides first-class TypeScript support. All official Vue packages come with bundled type declarations that should work out-of-the-box.
Project Setup
create-vue, the official project scaffolding tool, offers the options to scaffold a Vite-powered, TypeScript-ready Vue project.
Overview
With a Vite-based setup, the dev server and the bundler are transpilation-only and do not perform any type-checking. This ensures the Vite dev server stays blazing fast even when using TypeScript.
During development, we recommend relying on a good IDE setup for instant feedback on type errors.
- If using SFCs, use the vue-tsc utility for command line type checking and type declaration generation. vue-tsc is a wrapper around tsc, TypeScript's own command line interface. It works largely the same as tsc except that it supports Vue SFCs in addition to TypeScript files.
- vue-tsc currently does not support watch mode, but it is on the roadmap. In the meanwhile, if you prefer having type checking as part of your dev command, check out vite-plugin-checker.
Vue CLI also provides TypeScript support, but is no longer recommended. See notes below.
IDE Support
Visual Studio Code (VSCode) is strongly recommended for its great out-of-the-box support for TypeScript.
Volar is the official VSCode extension that provides TypeScript support inside Vue SFCs, along with many other great features.
TypeScript Vue Plugin is also needed to get type support for *.vue imports in TS files.
WebStorm also provides out-of-the-box support for both TypeScript and Vue. Other JetBrains IDEs support them too, either out of the box or via a free plugin.
Configuring tsconfig.json

Projects scaffolded via create-vue include a pre-configured tsconfig.json. The base config is abstracted in the @vue/tsconfig package. Inside the project, we use Project References to ensure correct types for code running in different environments (e.g. app vs. test).

When configuring tsconfig.json manually, some notable options include:
- compilerOptions.isolatedModules is set to true because Vite uses esbuild for transpiling TypeScript and is subject to single-file transpile limitations.
- If you're using Options API, you need to set compilerOptions.strict to true (or at least enable compilerOptions.noImplicitThis, which is a part of the strict flag) to leverage type checking of this in component options. Otherwise this will be treated as any.
- If you have configured resolver aliases in your build tool, for example in a create-vue project, you need to also configure it for TypeScript via compilerOptions.paths.
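As an illustration of the aliases point, a minimal paths block might look like the following (the @ alias mapping to ./src is an assumption about your build setup, not something fixed by Vue; mirror whatever your bundler actually uses):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./src/*"]
    }
  }
}
```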
See also:
Takeover Mode
This section only applies for VSCode + Volar.
To get Vue SFCs and TypeScript working together, Volar creates a separate TS language service instance patched with Vue-specific support, and uses it in Vue SFCs. At the same time, plain TS files are still handled by VSCode's built-in TS language service, which is why we need TypeScript Vue Plugin to support Vue SFC imports in TS files. This default setup works, but for each project we are running two TS language service instances: one from Volar, one from VSCode's built-in service. This is a bit inefficient and can lead to performance issues in large projects.
Volar provides a feature called "Takeover Mode" to improve performance. In takeover mode, Volar provides support for both Vue and TS files using a single TS language service instance.
To enable Takeover Mode, you need to disable VSCode's built-in TS language service in your project's workspace only by following these steps:
- In your project workspace, bring up the command palette with Ctrl + Shift + P (macOS: Cmd + Shift + P).
- Type built and select "Extensions: Show Built-in Extensions".
- Type typescript in the extension search box (do not remove the @builtin prefix).
- Click the little gear icon of "TypeScript and JavaScript Language Features", and select "Disable (Workspace)".
- Reload the workspace. Takeover mode will be enabled when you open a Vue or TS file.
Note on Vue CLI and ts-loader

In webpack-based setups such as Vue CLI, it is common to perform type checking as part of the module transform pipeline, for example with ts-loader. This, however, isn't a clean solution because the type system needs knowledge of the entire module graph to perform type checks. An individual module's transform step simply is not the right place for the task. It leads to the following problems:

- ts-loader can only type check post-transform code. This doesn't align with the errors we see in IDEs or from vue-tsc, which map directly back to the source code.
- Type checking can be slow. When it is performed in the same thread / process with code transformations, it significantly affects the build speed of the entire application. We already have type checking running right in our IDE in a separate process, so the cost of dev experience slow down simply isn't a good trade-off.

If you are currently using Vue 3 + TypeScript via Vue CLI, we strongly recommend migrating over to Vite. We are also working on CLI options to enable transpile-only TS support, so that you can switch to vue-tsc for type checking.
General Usage Notes
defineComponent()
To let TypeScript properly infer types inside component options, we need to define components with defineComponent():

import { defineComponent } from 'vue'

export default defineComponent({
  // type inference enabled
  props: {
    name: String,
    msg: { type: String, required: true }
  },
  data() {
    return {
      count: 1
    }
  },
  mounted() {
    this.name // type: string | undefined
    this.msg // type: string
    this.count // type: number
  }
})
defineComponent() also supports inferring the props passed to setup() when using Composition API without <script setup>:

import { defineComponent } from 'vue'

export default defineComponent({
  // type inference enabled
  props: {
    message: String
  },
  setup(props) {
    props.message // type: string | undefined
  }
})
See also: type tests for defineComponent
TIP
defineComponent() also enables type inference for components defined in plain JavaScript.
Usage in Single-File Components
To use TypeScript in SFCs, add the lang="ts" attribute to <script> tags. When lang="ts" is present, all template expressions also enjoy stricter type checking.

<script lang="ts">
import { defineComponent } from 'vue'

export default defineComponent({
  data() {
    return {
      count: 1
    }
  }
})
</script>

<template>
  <!-- type checking and auto-completion enabled -->
  {{ count.toFixed(2) }}
</template>
lang="ts" can also be used with
<script setup>:
<script setup // TypeScript enabled import { ref } from 'vue' const count = ref(1) </script> <template> <!-- type checking and auto-completion enabled --> {{ count.toFixed(2) }} </template>
TypeScript in Templates
The
<template> also supports TypeScript in binding expressions when
<script lang="ts"> or
<script setup is used. This is useful in cases where you need to perform type casting in template expressions.
Here's a contrived example:
<script setup let x: string | number = 1 </script> <template> <!-- error because x could be a string --> {{ x.toFixed(2) }} </template>
This can be worked around with an inline type cast:
<script setup let x: string | number = 1 </script> <template> {{ (x as number).toFixed(2) }} </template>
TIP
If using Vue CLI or a webpack-based setup, TypeScript in template expressions requires
vue-loader@^16.8.0. | https://vuejs.org/guide/typescript/overview.html | CC-MAIN-2022-21 | refinedweb | 1,209 | 55.95 |
Subject: Re: [OMPI users] Segfaults w/ both 1.4 and 1.5 on CentOS 6.2/SGE
From: Joshua Baker-LePain (jlb17_at_[hidden])
Date: 2012-03-13 16:54:23
On Tue, 13 Mar 2012 at 7:53pm, Gutierrez, Samuel K wrote
>?
Would it be best to use 1.4.4 specifically, or simply the most recent
1.4.x (which appears to be 1.4.5 at this point)?
> Any more information surrounding your failures in 1.5.4 are greatly
> appreciated.
I'm happy to provide, but what exactly are you looking for? The test code
I'm running is *very* simple:
#include <stdio.h>
#include <mpi.h>
main(int argc, char **argv)
{
int node;
int i, j;
float f;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD, &node);
printf("Hello World from Node %d.\n", node);
for(i=0; i<=1000000000000; i++)
f=i*2.718281828*i+i+i*3.141592654;
MPI_Finalize();
}
And my environment is a pretty standard CentOS-6.2 install.
--
Joshua Baker-LePain
QB3 Shared Cluster Sysadmin
UCSF | http://www.open-mpi.org/community/lists/users/2012/03/18745.php | CC-MAIN-2014-52 | refinedweb | 171 | 72.32 |
First most important resources. The wxPython wiki has pretty much all you need to now to get started. First is the TwoStageCreation process that the XRC implementation uses to create classes. The second is the UsingXmlResources page.
There is also the XRCTutorial. But you should be past that if you’re trying to create custom controls.
Here is a summary of the basic process. First, create your XRC file. Here is a simple sample I generated with DialogBlocks.
wxtest.xrc:
Notice at the top of the file, the wxFrame object has the attribute subclass=”wxtest.MyFrame”. This tells XRC that when instantiating this class, use my subclass of wxFrame instead of wxFrame itself.
Here is the python code to load the frame:
wxtest.py:
import wx from wx import xrc import logging as log log.basicConfig ( format='%(message)s', level=log.DEBUG ) class MyFrame(wx.Frame): def __init__(self): log.debug ( "__init__") f=wx.PreFrame() self.PostCreate(f) self.Bind( wx.EVT_WINDOW_CREATE , self.OnCreate) def OnCreate(self,evt): log.debug ( "OnCreate" ) self.Unbind ( wx.EVT_WINDOW_CREATE ) wx.CallAfter(self.__PostInit) evt.Skip() return True def __PostInit(self): log.debug ( "__PostInit" ) self.Bind ( wx.EVT_BUTTON, self.OnButton, id=xrc.XRCID ( "ID_BUTTON" ) ) def OnButton(self,evt): log.debug ( "Button Pressed" ) if __name__ == '__main__': app=wx.PySimpleApp(redirect=False) res=xrc.XmlResource ( 'wxtest.xrc' ) frame=res.LoadFrame( None, "ID_WXFRAME" ) frame.Show() app.MainLoop()
You might wonder why bind the OnCreate method to the WINDOW_CREATE event instead of simply doing your initialization in the init method. The answer is that 1) the window isn’t really all the way created at the __init__ stage and 2), it doesn’t get all the way created until after the events start being processed. Why use wx.CallAfter and a __PostInit method then? This seems to not be absolutely necessary if you’re creating a frame or panel as part of your main program. If you are creating an XRC control withing an event handler, e.g., you have a button that is used to create a dialog that is defined in an XRC file, the control still isn’t all the way loaded until after the Create Event. I’m not sure why you can’t just use wx.CallAfter in your __init__. I tried that and found it to work, but the documented procedure is to use both OnCreate and PostInit.
Important summary note: Child controls obtained with xrc.XRCCTRL aren’t available to be loaded until the __PostInit method.
Now, having the basic example up and running, it’s time to add a little customization. I found it annoying to write the __init__, OnCreate, and __PostInit methods for each of my custom controls. You can create a parent class to do this.
custxrc.py:
class XrcControl: def __init__(self): log.debug ( "__init__") self.Bind( wx.EVT_WINDOW_CREATE , self.OnCreate) def OnCreate(self,evt): log.debug ( "OnCreate" ) self.Unbind ( wx.EVT_WINDOW_CREATE ) wx.CallAfter(self._PostInit) evt.Skip() return True def _PostInit(self): raise RuntimeError ( "Extend this method." ) class XrcFrame(wx.Frame, XrcControl): def __init__(self): f=wx.PreFrame() self.PostCreate(f) XrcControl.__init__(self)
It’s pretty easy to see how you could create Xrc
class MyFrame(XrcFrame): def __init__(self): XrcFrame.__init__(self) def _PostInit(self): log.debug ( "__PostInit" ) self.Bind ( wx.EVT_BUTTON, self.OnButton, id=xrc.XRCID ( "ID_BUTTON" ) ) def OnButton(self,evt): log.debug ( "Button Pressed" )
Summary:
* You can’t access child elements until OnCreate for controls that are loaded outside of an event handler and PostInit (call it whatever you want) for controls that are loaded from within an event handler.
* You can use Python’s very flexible style to put lots of the code for creating XRC controls into base classes. | https://allmybrain.com/2008/09/06/custom-derived-classes-for-wxpython-xrc-resources/ | CC-MAIN-2020-45 | refinedweb | 618 | 52.76 |
Created on 2019-02-22 14:18 by n8falke, last changed 2020-03-16 16:20 by paul.j3.
Example source:
from argparse import ArgumentParser, SUPPRESS
==============
parser = ArgumentParser()
parser.add_argument('i', nargs='?', type=int, default=SUPPRESS)
args = parser.parse_args([])
==============
results in:
error: argument integer: invalid int value: '==SUPPRESS=='
Expected: args = Namespace()
In Lib/argparse.py:
line 2399 in _get_value: result = type_func(arg_string)
with arg_string = SUPPRESS = '==SUPPRESS=='
called by ... line 1836 in take_action: argument_values = self._get_values(action, argument_strings)
which is done before checking for SUPPRESS in line 1851:
if argument_values is not SUPPRESS:
action(...)
Some more details:
The problem is not the order of assignment in take_action:
Defaults have been set by:
def parse_known_args(self, args=None, namespace=None):
...
# add any action defaults that aren't present
for action in self._actions:
if action.dest is not SUPPRESS:
if not hasattr(namespace, action.dest):
if action.default is not SUPPRESS:
setattr(namespace, action.dest, action.default)
Assignment without argument should not happen, like the example shows:
==============
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument('i', action="count", default=42)
args = parser.parse_args([])
print(repr(args))
==============
Namespace(i=43)
==============
Defaults are handled into two stages.
At the start of parsing defaults are added to the Namespace.
At the end of parsing intact defaults are evaluated with 'type'.
But a nargs='?' positional gets special handling. It matches an empty string, so it is always 'seen'. If its default is not None, that default is put in the Namespace instead of the matching empty list.
It's this special default handling that lets us use a ?-positional in a mutually exclusive group.
I suspect the error arises from this special default handling, but I'll have to look at the code to verify the details.
By defining a custom 'type' function:
def foo(astr):
if astr is argparse.SUPPRESS:
raise KeyError
return astr
I get the full traceback
1831 def take_action(action, argument_strings, option_string=None):
1832 seen_actions.add(action)
-> 1833 argument_values = self._get_values(action, argument_strings)
and in '_get_values' the error is produced when it calls '_get_value' (which runs the 'type' function):
# optional argument produces a default when not present
if not arg_strings and action.nargs == OPTIONAL:
if action.option_strings:
value = action.const
else:
value = action.default
if isinstance(value, str):
--> value = self._get_value(action, value)
self._check_value(action, value)
It identifies this as an OPTIONAL action that has received an empty argument list, and assigns it the action.default.
ZERO_OR_MORE * also gets the action.default, but without a _get_value() call. That default can be SUPPRESSed by the test at the end of take_action.
A couple of fixes come to mind:
- add a SUPPRESS test at the start of take_action
- add a SUPPRESS test to _get_values block I quote above, maybe bypassing the `_get_value` call
There is a unittest case of a suppressed optional positional; it just doesn't also test for a failed type.
class TestDefaultSuppress(ParserTestCase):
"""Test actions with suppressed defaults"""
argument_signatures = [
Sig('foo', nargs='?', default=argparse.SUPPRESS)
I'm inclined go with the second choice, but the alternatives need to be throughly tested.
In the mean time, an 'int' type could be replaced with one that is SUPPRESS knowledgeable:
def bar(astr):
if astr is argparse.SUPPRESS:
return astr
else:
return int(astr)
Note that this use of action.default is different from the normal default handling at the start of parse_known_args (and the end of _parse_known_args). It's specifically for positionals that will always be 'seen' (because an empty argument strings list satisfies their nargs).
Thanks for so fast looking into this.
Good idea to use the workaround with a own conversion function. I'll use this for now.
To see what's happening, I tried a own Action with print in __call__ and a own conversion function with printing. I found following workflow:
1) direct assignment of unconverted default value (if not SUPPRESS, in parse_known_args)
2) conversion of argument string into given type
3) call to Action.__call__ which sets the converted value
4) only in edge cases: Convert default into given type and set in target
When there is no option there is only:
default | arg, narg = '?' | --opt | arg, narg = '*'
-----------------------------------------------------
None | 1), 3) | 1) | 1), 2) with []
SUPPRESS | 2)! | |
str | 1), 2), 3) | 1) | 1), 2)
not str* | 1), 3) | 1), 4) | 1), 2)
*can be int, float or other calss
It gets more complex the deeper I get into the source...
Yes, your second choice has probably less side effects.
I ran into the same issue and looked into the code, and found it more complicated than I thought. The more I went on, more issues occur. I wonder if I should open a new issue, but I will first comment here. If you feel like this should be a new issue, I will open one then. And I apologize in advance for possible vaguenesses in this comment because I modified it several times as I explored the code and found more issues. (also because of my poor English):)
It seems the issue happens only on positional arguments but not optional ones. Empty optional arguments will not call `take_action` and default values are handled and converted after consuming all arguments.
It also leads to inconsistancy between positional and optional arguments behaviour. Positional arguments always go through `take_action`, but optional arguments don't if an argument doesn't appear.
This inconsistancy causes another I think is strange behaviour,
parser = ArgumentParser()
parser.add_argument('i', action='count')
parser.parse_args([])
got
Namespace(i=1)
On the other hand, in `_get_values` function, `_check_value` is called to handle `choices=`, but there is no such guard for optional arguments, which means,
parser = ArgumentParser()
parser.add_argument('-i', nargs='?', type=int, default='2', choices=[1])
parser.parse_args([])
doesn't raise an error.
Besides Paul's two instructive solutions, I think it better to make both sides behave the same. However, I found things seem not that simple.
First, ZERO_OR_MORE, no default value, positional arguments have `required=True` by default, but
parser.add_argument('foo', nargs='*')
parser.parse_args([])
got no problems. So it at least appears not required. (The document says `required` is only for optionals, so I guess it's just a implementation level but not a user level thing)
Second, the last case above gives
Namespace(foo=[])
which seems logically incorrect or at least controversial, because the default is not set and you give no arguments, how does this list come? The document says nothing about the case (it's true it's a very corner one) and it also differs from the optional arguments case which gives
Namespace(foo=None)
A walk around which doesn't change it is possible and I've written a patch fixing it.
And I'm not sure what we usually do if I propose to make them give the same result, is a PEP needed or I just raise a discussion about it? The change might break current code.
This is a complicated issue that needs a lot of thought and testing before we make any changes.
While all Actions have the 'required' attribute, the programmer can only set it for optionals. _get_positional_kwargs() will raise an error if the programmer tries to set it for a positional. For a positional its value is determined by the nargs value.
The distinction between positionals and optionals occurs through out argparse, so we shouldn't put much effort (if any) into making their handling more uniform. (The fundamental distinction between the two is whether the action.option_strings list is empty or not.)
A fundamental difference in parsing is that an optional's Action is called only if the flag is used. A positional's Action is always called.
_parse_known_args starts with a list of positionals
positionals = self._get_positional_actions()
The nested consume_positionals function removes actions from this list.
Earlier versions raised an error at the end of parsing if this list was not empty. In the current version that's been replace by the 'required_actions' test (which tests both positionals and optionals). It was this change over that resulted in the bug/feature that subparsers (a positionals Action) are no longer required (by default).
Positionals with nargs='?' and '*' pose an extra challenge. They are, in a sense, no longer 'required'. But they will always be 'seen' because their nargs is satisfied by an empty list of values. But that would overwrite any 'default' in the Namespace. So there's the added step in _get_values of (re)inserting the action.default. And it's the handling of that 'default' that gives rise to the current issue.
These two positionals can also be used in a mutually_exclusive_group. To handle that 'take_action' has to maintain both the 'seen_actions' set and the 'seen_non_default_actions' set.
I found what appears to be a very similar issue so instead of creating a new issue I will place it here as a comment.
The following code:
==
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('letter', choices=['a', 'b', 'c'], default=argparse.SUPPRESS, nargs='?')
args = parser.parse_args([])
==
results in this error:
==
usage: pok.py [-h] [{a,b,c}]
pok.py: error: argument letter: invalid choice: '==SUPPRESS==' (choose from 'a', 'b', 'c')
==
You are right, this part of the same issue.
_get_value() tests '==SUPPRESS==' both for type and choices. | https://bugs.python.org/issue36078 | CC-MAIN-2020-34 | refinedweb | 1,534 | 58.18 |
Paraboloid Tutorial - Simple Optimization Problem¶
This tutorial will show you how to set up a simple optimization of a paraboloid. You’ll create a paraboloid Component (with analytic derivatives), then put it into a Problem and set up an optimizer Driver to minimize an objective function.
Here is the code that defines the paraboloid and then runs it. You can copy this code into a file, and run it directly.
from __future__ import print_function from openmdao.api import IndepVarComp, Component, Problem, Group class Paraboloid(Component): """ Evaluates the equation f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3 """ def __init__(self): super(Paraboloid, self).__init__() self.add_param('x', val=0.0) self.add_param('y', val=0.0) self.add_output('f_xy', shape=1) def solve_nonlinear(self, params, unknowns, resids): """f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3 """ x = params['x'] y = params['y'] unknowns['f_xy'] = (x-3.0)**2 + x*y + (y+4.0)**2 - 3.0 def linearize(self, params, unknowns, resids): """ Jacobian for our paraboloid.""" x = params['x'] y = params['y'] J = {} J['f_xy', 'x'] = 2.0*x - 6.0 + y J['f_xy', 'y'] = 2.0*y + 8.0 + x return J if __name__ == "__main__":'])
Now we will go through each section and explain how this code works.
Building the component¶
from __future__ import print_function from openmdao.api import IndepVarComp, Component, Problem, Group
We need to import some OpenMDAO classes. We also import the print_function to ensure compatibility between Python 2.x and 3.x. You don’t need the import if you are running in Python 3.x.
class Paraboloid(Component):
OpenMDAO provides a base class, Component, which you should inherit from to build your own components and wrappers for analysis codes. Components can declare three kinds of variables, parameters, outputs and states. A Component operates on its parameters to compute unknowns, which can be explicit outputs or implicit states. For the Paraboloid Component, we will only be using explicit outputs.
def __init__(self): super(Paraboloid, self).__init__() self.add_param('x', val=0.0) self.add_param('y', val=0.0) self.add_output('f_xy', shape=1)
This code defines the input parameters of the Component, x and y, and initializes them to 0.0. These will be design variables which could be used to minimize the output when doing optimization. It also defines the explicit output, f_xy, but only gives it a shape. If shape is 1, the value is initialized to 0.0, a scalar. If shape is any other value, the value of the variable is initialized to numpy.zeros(shape, dtype=float).
def solve_nonlinear(self, params, unknowns, resids): """f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3 Optimal solution (minimum): x = 6.6667; y = -7.3333 """ x = params['x'] y = params['y'] unknowns['f_xy'] = (x-3.0)**2 + x*y + (y+4.0)**2 - 3.0
The solve_nonlinear method is responsible for calculating outputs for a given set of parameters. The parameters are given in the params dictionary that is passed in to this method. Similarly, the outputs are assigned values using the unknowns dictionary that is passed in.
def linearize(self, params, unknowns, resids): """ Jacobian for our paraboloid.""" x = params['x'] y = params['y'] J = {} J['f_xy','x'] = 2.0*x - 6.0 + y J['f_xy','y'] = 2.0*y + 8.0 + x return J
The linearize method is used to compute analytic partial derivatives of the unknowns with respect to params (partial derivatives in OpenMDAO context refer to derivatives for a single component by itself). The returned value, in this case J, should be a dictionary whose keys are tuples of the form (‘unknown’, ‘param’) and whose values are n-d arrays or scalars. Just like for solve_nonlinear, the values for the parameters are accessed using dictionary arguments to the function.
The definition of the Paraboloid Component class is now complete. We will now make use of this class to run a model.
Setting up the model¶
if __name__ == "__main__": top = Problem() root = top.root = Group()
An instance of an OpenMDAO Problem is always the top object for running a model. Each Problem in OpenMDAO must contain a root Group. A Group is a System that contains other Components or Groups.
This code instantiates a Problem object and sets the root to be an empty Group.
root.add('p1', IndepVarComp('x', 3.0)) root.add('p2', IndepVarComp('y', -4.0))
Now it is time to add components to the empty group. IndepVarComp is a Component that provides the source for a variable which we can later give to a Driver as a design variable to control.
We created two IndepVarComps (one for each param on the Paraboloid component), gave them names, and added them to the root Group. The add method takes a name as the first argument, and a Component instance as the second argument. The numbers 3.0 and -4.0 are values chosen for each as starting points for the optimizer.
Note
Take care setting the initial values, as in some cases, various initial points for the optimization will lead to different results.
root.add('p', Paraboloid())
Then we add the paraboloid using the same syntax as before, giving it the name ‘p’.
root.connect('p1.x', 'p.x') root.connect('p2.y', 'p.y')
Then we connect up the outputs of the IndepVarComps to the parameters of the Paraboloid. Notice the dotted naming convention used to refer to variables. So, for example, p1 represents the first IndepVarComp that we created to set the value of x and so we connect that to parameter x of the Paraboloid. Since the Paraboloid is named p and has a parameter x, it is referred to as p.x in the call to the connect method.
Every problem has a Driver and for most situations, we would want to set a Driver for the Problem using code like this
top.driver = SomeDriver()
For this very simple tutorial, we do not need to set a Driver, we will just use the default, built-in driver, which is Driver. ( Driver also serves as the base class for all Drivers. ) Driver is the simplest driver possible, running a Problem once.
top.setup()
Before we can run our model we need to do some setup. This is done using the setup method on the Problem. This method performs all the setup of vector storage, data transfer, etc., necessary to perform calculations. Calling setup is required before running the model.
top.run()
Now we can run the model using the run method of Problem.
print(top['p.f_xy'])
Finally, we print the output of the Paraboloid Component using the dictionary-style method of accessing variables on the problem instance. Putting it all together:'])
The output should look like this:
-15.0
The IndepVarComp component is used to define a source for an unconnected param that we want to use as an independent variable that can be declared as a design variable for a driver. In our case, we want to optimize the Paraboloid model, finding values for ‘x’ and ‘y’ that minimize the output ‘f_xy.’
Sometimes we just want to run our component once to see the result. Similarly, sometimes we have params that will be constant through our optimization, and thus don’t need to be design variables. In either of these cases, the IndepVarComp is not required, and we can build our model while leaving those parameters unconnected. All unconnected params use their default value as the initial value. You can set the values of any unconnected params the same way as any other variables by doing the following:
top = Problem() root = top.root = Group() root.add('p', Paraboloid(), promotes=['x', 'y']) top.setup() # Set values for x and y top['x'] = 5.0 top['y'] = 2.0 top.run() print(top['p.f_xy'])
This can only be done after setup is called. Note that the promoted names ‘x’ and ‘y’ are used.
The new output should look like this:
47.0
Future tutorials will show more complex Problems.
Optimization of the Paraboloid¶
Now that we have the paraboloid model set up, let’s do a simple unconstrained optimization. Let’s find the minimum point on the Paraboloid over the variables x and y. This requires the addition of just a few more lines.
First, we need to import the optimizer.
from openmdao.api import ScipyOptimizer
The main optimizer built into OpenMDAO is a wrapper around Scipy’s minimize function. OpenMDAO supports 9 of the optimizers built into minimize. The ones that will be most frequently used are SLSQP and COBYLA, since they are the only two in the minimize package that support constraints. We will use SLSQP because it supports OpenMDAO-supplied gradients.
top = Problem() root = top.root = Group() # Initial value of x and y set in the IndepVarComp. root.add('p1', IndepVarComp('x', 13.0)) root.add('p2', IndepVarComp('y', -14.0)) root.add('p', Paraboloid()) root.connect('p1.x', 'p.x') root.connect('p2.y', 'p.y') top.driver = ScipyOptimizer() top.driver.options['optimizer'] = 'SLSQP' top.driver.add_desvar('p1.x', lower=-50, upper=50) top.driver.add_desvar('p2.y', lower=-50, upper=50) top.driver.add_objective('p.f_xy') top.setup() # You can also specify initial values post-setup top['p1.x'] = 3.0 top['p2.y'] = -4.0 top.run() print('\n') print('Minimum of %f found at (%f, %f)' % (top['p.f_xy'], top['p.x'], top['p.y']))
Every driver has an options dictionary which contains important settings for the driver. These settings tell ScipyOptimizer which optimization method to use, so here we select ‘SLSQP’. For all optimizers, you can specify a convergence tolerance ‘tol’ and a maximum number of iterations ‘maxiter.’
Next, we select the parameters the optimizer will drive by calling add_param and giving it the IndepVarComp unknowns that we have created. We also set high and low bounds for this problem. It is not required to set these (they will default to -1e99 and 1e99 respectively), but it is generally a good idea.
Finally, we add the objective. You can use any unknown in your model as the objective.
Once we have called setup on the model, we can specify the initial conditions for the design variables just like we did with unconnected params.
Since SLSQP is a gradient based optimizer, OpenMDAO will call the linearize method on the Paraboloid while calculating the total gradient of the objective with respect to the two design variables. This is done automatically.
Finally, we made a change to the print statement so that we can print the objective and the parameters. This time, we get the value by keying into the problem instance (‘top’) with the full variable path to the quantities we want to see. This is equivalent to what was shown in the first tutorial.
Putting this all together, when we run the model, we get output that looks like this (note, the optimizer may print some things before this, depending on settings):
... Minimum of -27.333333 found at (6.666667, -7.333333)
Optimization of the Paraboloid with a Constraint¶
Finally, let’s take this optimization problem and add a constraint to it. Our constraint takes the form of an inequality we want to satisfy: x - y >= 15.
First, we need to add one more import to the beginning of our model.
from openmdao.api import ExecComp
We’ll use an ExecComp to represent our constraint in the model. An ExecComp is a shortcut that lets us easily create a component that defines a simple expression for us.
top = Problem() root = top.root = Group() root.add('p1', IndepVarComp('x', 3.0)) root.add('p2', IndepVarComp('y', -4.0)) root.add('p', Paraboloid()) # Constraint Equation root.add('con', ExecComp('c = x-y')) root.connect('p1.x', 'p.x') root.connect('p2.y', 'p.y') root.connect('p.x', 'con.x') root.connect('p.y', 'con.y') top.driver = ScipyOptimizer() top.driver.options['optimizer'] = 'SLSQP' top.driver.add_desvar('p1.x', lower=-50, upper=50) top.driver.add_desvar('p2.y', lower=-50, upper=50) top.driver.add_objective('p.f_xy') top.driver.add_constraint('con.c', lower=15.0) top.setup() top.run() print('\n') print('Minimum of %f found at (%f, %f)' % (top['p.f_xy'], top['p.x'], top['p.y']))
Here, we added an ExecComp named ‘con’ to represent part of our constraint inequality. Our constraint is “x - y >= 15”, so we have created an ExecComp that will evaluate the expression “x - y” and place that result into the unknown ‘con.c’. To complete the definition of the constraint, we also need to connect our ‘con’ expression to ‘x’ and ‘y’ on the paraboloid.
Finally, we need to tell the driver to use the unknown “con.c” as a constraint using the add_constraint method. This method takes the name of the variable and an “upper” or “lower” bound. Here we give it a lower bound of 15, which completes the inequality constraint “x - y >= 15”.
OpenMDAO also supports the specification of double sided constraints, so if you wanted to constrain x-y to lie on a band between 15 and 16 which is “16 > x-y > 15”, you would just do the following:
top.driver.add_constraint('con.c', lower=15.0, upper=16.0)
So now, putting it all together, we can run the model and get this:
... Minimum of -27.083333 found at (7.166667, -7.833333)
A new optimum is found because the original one was infeasible (i.e., that design point violated the constraint equation). | http://openmdao.readthedocs.io/en/latest/usr-guide/tutorials/paraboloid-tutorial.html | CC-MAIN-2017-17 | refinedweb | 2,250 | 58.79 |
I have the following code which has two interfaces which have two methods of the same name. However each method throws a different type of Exception.
public interface xyz { void abc() throws IOException; } public interface qrs { void abc() throws FileNotFoundException; } public class Implementation implements xyz, qrs { // insert code { /*implementation*/ } }
I know that in inheritance if a subclass method overrides a superclass method, a subclass's throw clause can contain a subset of a superclass's throws clause and that it must not throw more exceptions. However, I am not sure how exceptions are dealt with in interfaces.
For the implementation of the function
abc() in the class
Implementation, can this method throw both of the exceptions or just one? For example, is the following method valid?
public void abc() throws FileNotFoundException, IOException
Any insights are appreciated.
A class that implements an interface must satisfy all of the requirements of that interface. One requirement is a negative requirement--a method must not throw any checked exceptions except those declared with a
throws clause on that interface.
FileNotFoundException is a specific kind (subclass) of
IOException, so if your
Implementation class declares
void abc() throws FileNotFoundException, it satisfies the requirements of both
qrs (which permits only that specific exception) and
xyz (which permits any kind of
IOException). The inverse is not true, however; if it says that it
throws IOException, it doesn't meet the contract of
qrs.
They do not have to throw the exceptions of an interface. Your function may not throw exceptions that were not in the interface.
I am writing an implementation for a powershell clientThis implementation is in java | https://cmsdk.com/java/understanding-exceptions-in-java-with-interfaces.html | CC-MAIN-2020-10 | refinedweb | 271 | 52.09 |
InterSystems has corrected a defect that could lead to invalid backups on Windows platforms. The defect causes upgrades to disable the EnableVSSBackup setting. By default, EnableVSSBackup is enabled (value set to 1) and the upgrade sets its value to 0. Windows VSS backups taken with this setting disabled may contain invalid CACHE.DAT files.
This problem is limited to Windows platforms on the following versions:
- Caché and Ensemble 2018.1.0, 2018.1.1, and 2018.1.2
- HealthShare Health Connect (HSAP) 15.032 on Core versions 2018.1.0, 2018.1.1, and 2018.1.2
- HealthShare 2019.1 (Unified Care Record, Patient Index, Health Insight, Personal Community, and Provider Directory) on Core version 2018.1.2
- HealthShare 2018.1 (Information Exchange, Patient Index, Health Insight, and Personal Community) on Core versions 2018.1.1 or 2018.1.0
The defect only occurs if you are upgrading to a version listed above. Once you have upgraded to an affected version, you must manually enable the setting; otherwise, it will be disabled on future upgrades, even when upgrading to versions containing the correction.
For customers using Windows VSS backups, InterSystems recommends enabling this setting on any 2018.1 instances of Caché or Ensemble. Once you have enabled the setting, future upgrades (including to affected versions) will preserve its value.
You can enable the EnableVSSBackup setting either in the Management Portal or using the cache.cpf file:
In the Management Portal
- On the Startup Settings page (System Administration > Configuration > Additional Settings > Startup), click the Edit link for EnableVSSBackup. This displays the Edit: EnableVSSBackup page.
- On this page, select the EnableVSSBackup checkbox and the click the Save button.
This enables the feature immediately.
Using the cache.cpf File
- In the cache.cpf file in the installation directory, in the [Startup] section, modify the EnableVSSBackup line: EnableVSSBackup=1
- In the Terminal, in the %SYS namespace, activate the CPF changes with the following call: %SYS>set status = ##class(Config.CPF).Activate()
For more information on the Activate class method of the Config.CPF class, see the class reference for your instance of Caché or Ensemble.
For More Information
The correction for this defect is identified as STC3008, which will be included in all future releases. If you have any questions regarding this advisory, please contact the Worldwide Response Center. | https://community.intersystems.com/post/september-19-2019-%E2%80%93-advisory-windows-enablevssbackup-setting-disabled-upgrade | CC-MAIN-2020-05 | refinedweb | 385 | 51.44 |
We have already seen a piece of Swift 4 program while setting up the environment. Let's start once again with the following Hello, World! program created for OS X playground, which includes import Cocoa as shown below −
/* My first program in Swift 4 */ var myString = "Hello, World!" print(myString)
If you create the same program for iOS playground, then it will include import UIKit and the program will look as follows −
import UIKit var myString = "Hello, World!" print(myString)
When we run the above program using an appropriate playground, we will get the following result −
Hello, World!
Let us now see the basic structure of a Swift 4 program, so that it will be easy for you to understand the basic building blocks of the Swift 4 programming language.
You can use the import statement to import any Objective-C framework (or C library) directly into your Swift 4 program. For example, the above import cocoa statement makes all Cocoa libraries, APIs, and runtimes that form the development layer for all of OS X, available in Swift 4.
Cocoa is implemented in Objective-C, which is a superset of C, so it is easy to mix C and even C++ into your Swift 4 applications.
A Swift 4 program consists of various tokens and a token is either a keyword, an identifier, a constant, a string literal, or a symbol. For example, the following Swift 4 statement consists of three tokens −
print("test!") The individual tokens are: print("test!")
Comments are like helping texts in your Swift 4 program. They are ignored by the compiler. Multi-line comments start with /* and terminate with the characters */ as shown below −
/* My first program in Swift 4 */
Multi-line comments can be nested in Swift 4. Following is a valid comment in Swift 4 −
/* My first program in Swift 4 is Hello, World! /* Where as second program is Hello, Swift 4! */ */
Single-line comments are written using // at the beginning of the comment.
// My first program in Swift 4
Swift 4 does not require you to type a semicolon (;) after each statement in your code, though it’s optional; and if you use a semicolon, then the compiler does not complain about it.
However, if you are using multiple statements in the same line, then it is required to use a semicolon as a delimiter, otherwise the compiler will raise a syntax error. You can write the above Hello, World! program as follows −
/* My first program in Swift 4 */ var myString = "Hello, World!"; print(myString)
A Swift 4 identifier is a name used to identify a variable, function, or any other userdefined item. An identifier starts with an alphabet A to Z or a to z or an underscore _ followed by zero or more letters, underscores, and digits (0 to 9).
Swift 4 does not allow special characters such as @, $, and % within identifiers. Swift 4 is a case sensitive programming language. Thus, Manpower and manpower are two different identifiers in Swift 4. Here are some examples of acceptable identifiers −
Azad zara abc move_name a_123 myname50 _temp j a23b9 retVal
To use a reserved word as an identifier, you will need to put a backtick (`) before and after it. For example, class is not a valid identifier, but `class` is valid.
The following keywords are reserved in Swift 4. These reserved words may not be used as constants or variables or any other identifier names, unless they're escaped with backticks −
A line containing only whitespace, possibly with a comment, is known as a blank line, and a Swift 4 compiler totally ignores it.
Whitespace is the term used in Swift 4 to describe blanks, tabs, newline characters, and comments. Whitespaces separate one part of a statement from another and enable the compiler to identify where one element in a statement, such as int, ends and the next element begins. Therefore, in the following statement −
var age
There must be at least one whitespace character (usually a space) between var and age for the compiler to be able to distinguish them. On the other hand, in the following statement −
int fruit = apples + oranges //get the total fruits
No whitespace characters are necessary between fruit and =, or between = and apples, although you are free to include some for better readability.
Space on both side of a operator should be equal, for eg.
int fruit = apples +oranges //is a wrong statement int fruit = apples + oranges //is a Correct statement
A literal is the source code representation of a value of an integer, floating-point number, or string type. The following are examples of literals −
92 // Integer literal 4.24159 // Floating-point literal "Hello, World!" // String literal
To print anything in swift we have ‘ print ‘ keyword.
Print has three different properties.
Items – Items to be printed
Separator – separator between items
Terminator – the value with which line should end, let’s see a example and syntax of same.
print("Items to print", separator: "Value " , terminator: "Value") // E.g. of print statement. print("Value one") // prints "Value one \n" Adds, \n as terminator and " " as separator by default. print("Value one","Value two", separator: " Next Value" , terminator: " End") //prints "Value one Next Value Value two End"
In the above code first print statement adds \n , newline Feed as terminator by default, where as in second print statement we’ve given " End " as terminator, hence it’ll print "End " instead of \n.
We can give our custom separator and terminators according to our requirement. | https://www.tutorialspoint.com/swift/swift_basic_syntax.htm | CC-MAIN-2019-47 | refinedweb | 916 | 60.65 |
echoAR Flutter Plugin
echoAR is a cloud-based 3D-first content management system (CMS) and delivery network (CDN) that provides server-side solutions to help scale augmented and virtual reality (AR/VR) applications. Our 3D-ready cloud platform helps manage & deliver AR/VR content to apps & devices everywhere. This is a plugin which makes it easier to get your assets from echoAR.
:book: Guide
1. Setup the config file
Add echoAR plugin to your
pubspec.yaml.
An example is shown below.
dev_dependencies: echoar_package: "^0.0.1+2"
2. Get package
After setting up the configuration, all that is left to do is run the package.
flutter pub get
Now inside your Dart code you can import it.
import 'package:echoar_package/echoar_package.dart';
:mag: Explore echoAR
Now that you have your echoAR package ready, you can call it and start using echoAR with Flutter. If you don't have an echoAR API key yet, make sure to register for FREE at echoAR.
To access your echoAR project, simply initialize your Flutter echoAR object:
EchoAR(apiKey: "<YOUR-API-KEY>");
Now your object is ready to go! Our example project is avalable for you to keep exploring echoAR Flutter plugin.
:muscle: Support
Feel free to reach out at support@echoAR.xyz or join our support channel on Slack. | https://pub.dev/documentation/echoar_package/latest/ | CC-MAIN-2021-31 | refinedweb | 215 | 66.64 |
Intro: Hand Tracking Mechanical Arm - Pyduino + Leap Motion
Hello!
In this tutorial I will show you guys and gals how to assemble a hand tracking mechanical arm with a Leap Motion Controller, Arduino device, a few motors, some balsa wood, paper clips, hot glue and a little python code. At the end of this you'll hopefully have a mechanical arm which we can control using a Leap Motion Controller. I will walk you guys through an in depth process on how to hook up your Leap Motion controller to get just one servo motor working and hopefully when you see how its done with one motor you can just copy and paste the code around a few times to make it work with 4. Of course I'll help you along the way and don't be shy about asking any questions you have along the way. I aim to provide a sufficient enough description of how this all works so that even if you're not a Python expert you can use the same process I take in my python code and apply it to other languages such that you can communicate with your Arduino device and Leap Motion controller.
This project is composed of several parts that we'll work through separately and then combine at the end.
The Steps
Project Theory + Explanation
Materials List
Setting up Pyduino Library
Setting up the Pyduino Sketch
Setting up just one servo motor (Circuit)
Controlling the servo using Pyduino
Setting up the Leap Motion Controller
Linking the Leap motion Controller to one servo motor
Assembling the arm
Working out the final circuit
Testing, Debugging and Optimizing the code
Step 1: Project Theory + Explanation
The first part, and maybe the most foreign part of this project to some, is the hand tracking. Luckily we do not have to develop the hand tracking software or hardware ourselves. Our good friends at Leap Motion have invented a very nifty controller for doing just what we want. We want to be able to control our arm without touching any sort of controller, and we can do just that with a Leap Motion Controller. If you are unfamiliar with this controller please head over to the Leap Motion site and check out a few of their videos. The Leap Motion controller is composed of 3 infrared LEDs that emit at around 850 nanometers and 2 IR cameras to capture motion. The idea of the controller is that the light from the IR LEDs reflects off our hands and gets captured by the two IR cameras on the device. The device then streams the data it captured to the Leap Motion tracking software on your computer. The tracking software probably uses some sort of parallax effect and other algorithms to reconstruct a 3D representation of what the device sees. The tracking algorithm then interprets the data to determine the orientation and position of your hands and fingers. After the algorithm determines to the best of its ability where your hand is, the data is exported to the API, which is ready for you to interface with and pull data from. The neat thing is the Leap Motion controller can interface with a variety of programming languages, so if you don't know Python or can't decipher my code you can use your own programming language of choice.
We will end up writing a little piece of software that listens for the data from the Leap Motion controller. Once we receive the data we will normalize it by the size of the Leap Motion controller's interaction area so that we can obtain a fraction of how far in the space our hand is. Once we have that fraction we can multiply it by the range of some motor and tell our Arduino to move our servo motor to that position. We will be able to synchronize our hand motion in 3 dimensions to 3 different motors. Moving around each axis will control a different motor. That makes sense because when we move our hand up further away from the Leap Motion controller we want that to move our arm up, which we can make happen.
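To make that scaling concrete, here is a tiny sketch of the math (the function name and the 0 to 180 degree range are assumptions for illustration, not part of the Leap API):

```python
def to_servo_angle(normalized, limit=180):
    """Map a normalized Leap coordinate (0.0 to 1.0) to a servo angle.

    `normalized` is the fraction of the interaction box the hand has
    crossed along one axis; `limit` is the servo's range in degrees.
    """
    normalized = min(max(normalized, 0.0), 1.0)  # clamp inside the box
    return int(normalized * limit)
```

A hand at the far edge of the box (1.0) maps to 180 degrees, and a hand in the middle (0.5) maps to 90.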
If you're familiar with how an Arduino works you know that we have to upload a sketch onto the board before we can make use of the device. In order for us to dynamically interact with the Arduino board we need to write a little piece of software such that we can control our Arduino without having to constantly keep uploading sketches onto it. Luckily I have a little piece of code that will facilitate this process. I wrote two previous instructable tutorials on how to interface your Arduino with Python over serial communication. That's just a fancy way of saying we're going to have our Python code send a little message to our Arduino device, and then our Arduino will interpret the message and perform the sent task. Don't worry, I'll walk you through a basic example of how to set up the Pyduino library on your Arduino device and on your computer so that we can start talking to our Arduino. And if you don't know Python I have a solution for you! The sketch that we upload onto our Arduino device is versatile enough to interface with using any language capable of sending serial messages, so if you don't want to use Python to send messages to our Arduino you don't have to! But you will have to write a little extra code.
Let me direct your attention now to the figure above. Let's define some terminology so you can all understand what I'm talking about later. At the base of our arm we have 2 servo motors; we will call these the Azimuthal motor/angle and the Altitude motor/angle. If you are up to date on your observational astronomy terminology, the Azimuthal angle typically refers to some cardinal direction (N, E, S, W), so this is the motor that will move left and right, and the Altitude angle is the one that starts at the horizon and extends upward to some star. The primary arm is the one attached to the Altitude motor. The secondary arm is attached to what I like to call the Wrist motor, which is attached to the primary arm. Lastly we have The Claw. (insert Toy Story clip here) The claw is responsible for grabbing things and is operated by a motor as well.
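Since the primary and secondary arms form a simple two-link chain, you can sketch where the claw ends up for a given pair of angles. Everything here (the function name, the link lengths, and the convention that both angles are measured from the horizontal) is my own illustration, not part of the build:

```python
import math

def claw_position(alt_deg, wrist_deg, primary=10.0, secondary=8.0):
    """Position of the claw in the vertical plane.

    alt_deg:   altitude angle of the primary arm, degrees from horizontal
    wrist_deg: angle of the secondary arm, degrees from horizontal
    primary/secondary: arm lengths (any consistent unit)
    """
    alt, wrist = math.radians(alt_deg), math.radians(wrist_deg)
    x = primary * math.cos(alt) + secondary * math.cos(wrist)
    y = primary * math.sin(alt) + secondary * math.sin(wrist)
    return (x, y)
```

With the primary arm straight up and the secondary arm level, the claw sits 8 units out and 10 units high, which matches intuition.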
Step 2: Materials List
Get yo mats here!
Materials
Computer with Python installed
Leap Motion Controller
Arduino Uno (or other arduino device)
USB Cable
Breadboard
Lots of wires
2 Parallax Standard Servo Motors
2 Micro Servos (SG-90)
Balsa Wood Pack (I got mine at Michaels, picture attached)
Hot Glue
Paper Clips
Pliers
X-acto Knife
Sand Paper
Most of these supplies you can pick up at a Micro Center shop or online. For building the structure you shouldn't need to go to Home Depot or Lowes, you can pick up most of the supplies at your local craft shop or Michaels.
Balsa wood and hot glue is a stronger combination than you may think. The balsa wood really helps us keep the torque on the servos due to the structure at a minimum compared to using aluminum or denser wood. It also makes the servos more responsive since there is less weight to tug around.
Step 3: Setting Up Pyduino Library
In order for us to be able to move our motors on the fly, we need to be able to tell our Arduino device: "Hey! Move this motor to this angle yo!" This can be done by sending a message through the USB port to our Arduino. The nifty thing about the sketch we'll upload to our Arduino board is that it can be used with any programming language capable of sending serial messages. So if you don't like Python, you can use another language.
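To see just how simple those serial messages are, here is the exact string pyduino builds for a servo write (the helper name is mine; only the message format comes from the library):

```python
def make_servo_command(pin, angle):
    """Build the message pyduino sends over serial: 'WS<pin>:<angle>'.

    'W' means write, 'S' means servo, then the pin number, a colon,
    and the angle in degrees.
    """
    return "WS{}:{}".format(pin, angle)
```

Any language that can write that short ASCII string to a serial port can drive the board the same way.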
Check out the code below: You will want to save the code below to a file called: pyduino.py
You'll want to put that file in the directory you're working in. Take a moment to read over the piece of code and see what it does. There is also a piece of sample code at the bottom of the library showing you how to use it which I'll get into next. If you would like to see a working example of how to use the library to control something other than a servo motor check out my other instructables.
If you do not want to use Python then you'll have to take a moment and transcribe this piece of code to another language. For more information about the pyduino library check out
"""
A library to interface Arduino Servos through serial connection """ import serial
class Arduino(): """ Models an Arduino connection """
def __init__(self, serial_port='/dev/ttyACM0', baud_rate=9600, read_timeout=5): """ Initializes the serial connection to the Arduino board """ self.conn = serial.Serial(serial_port, baud_rate) self.conn.timeout = read_timeout # Timeout for readline() print 'Connection initiated' def servo_write(self, pin_number, digital_value): """ Writes the digital_value on pin_number Internally sends b'WS{pin_number}:{digital_value}' over the serial connection """ command = "WS{}:{}".format(str(pin_number),str(digital_value)).encode() self.conn.write(command)
def close(self): """ To ensure we are properly closing our connection to the Arduino device. """ self.conn.close() print 'Connection to Arduino closed'
Step 4: Setting Up Pyduino Sketch
In order for us to dynamically control our Arduino device we need to upload a sketch onto it so that it can interpret the messages we send it from our Python code. The setup of this sketch is pretty straightforward. The Arduino device will communicate with our computer over the serial port (for more information, see the Arduino Serial reference documentation). The Arduino will check to see if it has any characters available in the serial receive buffer and, if it does, it will begin to parse the incoming message. Upon receiving the whole message the device will then interpret what to do, whether it be a read function, a write function, or setting a pin mode. This sketch is versatile enough to set pin modes, perform reads and writes of digital and analog values, and perform servo writes. Any additional functions will need to be coded in yourself, but for the purposes of this project we do not need to add anything.
Check out the piece of code below. This is what we're going to use to get one servo motor working with the Leap Motion Controller. We want to be able to get one servo working before we go ahead and get all 4 in there. This file is also available on the github page for this instructable at:...
You can go ahead and upload this sketch to your Arduino device. When you upload it, you should get a picture that looks similar to the one above. One thing you might want to write down is the location of your Arduino device on your computer; you can find this at the bottom right of the Arduino program after you upload a sketch. For me, my Arduino is located at: /dev/ttyACM0
/*
 * Sketch to control the pins of Arduino via serial interface
 *
 * Commands implemented with examples:
 *
 * - RD13 -> Reads the Digital input at pin 13
 * - RA4 -> Reads the Analog input at pin 4
 * - WD13:1 -> Writes 1 (HIGH) to digital output pin 13
 * - WA6:125 -> Writes 125 to analog output pin 6 (PWM)
 * - WS2:90 -> Writes angle 90 to the servo on pin 2
 */

#include <Servo.h>

Servo SERVO2;
int SERVO2_PIN = 2; // this is where the servo is attached to on our board

void setup() {
    Serial.begin(9600);     // Serial Port at 9600 baud
    Serial.setTimeout(500); // Instead of the default 1000ms, in order
                            // to speed up the Serial.parseInt()
    SERVO2.attach(SERVO2_PIN);
    SERVO2.write(180);      // reset to original position
}

void loop() {
    // Parse messages of the form WS{pin}:{value} from the serial buffer.
    // (The RD/RA/WD/WA commands listed above are parsed the same way;
    // only the servo write is shown here since that is all pyduino uses.)
    if (Serial.available() > 0) {
        char operation = Serial.read(); // 'W' for write
        char mode = Serial.read();      // 'S' for servo
        int pin_number = Serial.parseInt();
        int value = 0;
        if (Serial.read() == ':') {
            value = Serial.parseInt();  // the angle to move to
        }
        if (operation == 'W' && mode == 'S' && pin_number == SERVO2_PIN) {
            SERVO2.write(value);
        }
    }
}

Step 5: Setting Up Just One Servo Motor (Circuit)
Alright, so if you had the chance to look over the Arduino sketch in the previous step you may have noticed that we predefined a pin for our servo motor, Pin 2. Go ahead and set up your Arduino with any one of your servo motors following the circuit schematic above. If you're using an SG-90 micro servo like I am, you may have noticed that the wire colors do not match those above. Typically the darker colored wire will be the ground, so for me that wire is brown. Power will always be the middle wire, which leaves us with just one wire left that we wire into one of our digital pins. We'll use that digital pin to tell the servo where to move.
Alright, now that our circuit is set up, it's time to get this motor moving using Python!
Step 6: Controlling the Servo Using Pyduino
Make sure your Arduino device is plugged into your computer. Before we get our Leap Motion controller set up, we want to make sure we can successfully control one servo with our Pyduino library. We're going to follow an example similar to the sweep example on the Arduino site, just with different pin wirings. Our Python code is pretty simple: we're going to import the pyduino library so that we can establish a serial connection to our Arduino device, and then it's as simple as using 2 lines of code to move our servo.
Save the piece of code below to a file called one_servo_test.py and make sure it is in the same directory as your pyduino.py file.
And then you can run it through the terminal by typing $ python one_servo_test.py
Don't worry, the program will lag a little bit in the beginning while it establishes a connection to your Arduino device, and then you should see your servo start to move! The code should be documented enough for you to figure out everything that is going on. If you're unable to establish a connection to your Arduino device you'll receive an error that says something like: [serial.serialutil.SerialException: could not open port /dev/ttyACM0: [Errno 2] No such file or directory: '/dev/ttyACM0']. In that case you'll need to declare your Arduino device in the code using a different serial port. The serial port for your Arduino device can be found at the bottom right of the Arduino software after you upload a sketch to your board. Another complication that you may run into: if your servo is not wired up to PIN #2 on your board, then you'll need to either change your wiring or change the Arduino sketch we uploaded onto our board; check the code documentation for how and where to do that.
from pyduino import *
import time

if __name__ == '__main__':
    # if your Arduino is running on a serial port other than '/dev/ttyACM0'
    # declare: a = Arduino(serial_port='/dev/tty...')
    a = Arduino()

    # sleep to ensure ample time for computer to make serial connection
    time.sleep(3)

    # declare the pin our servo is attached to;
    # make sure this matches the line of one_servo.ino
    # that says: int SERVO2_PIN = 2;
    PIN = 2

    try:
        for i in range(0, 1000):
            if i % 2 == 0:
                print '170'
                # move servo on PIN to an angle of 170 deg
                a.servo_write(PIN, 170)
            else:
                print '10'
                # move servo on PIN to an angle of 10 deg
                a.servo_write(PIN, 10)

            time.sleep(1)

    except KeyboardInterrupt:
        # reset position of servo to 90 deg and close connection
        a.servo_write(PIN, 90)
        a.close()
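If you hit the SerialException mentioned above and aren't sure which port your board is on, a quick way to list candidate devices on Linux is a sketch like this (the function name and glob patterns are assumptions for typical setups; on a Mac, look for /dev/tty.usbmodem* instead):

```python
import glob

def find_arduino_ports():
    """Guess likely Arduino serial devices on a Linux machine.

    Assumes the usual /dev/ttyACM* (Uno and similar boards) and
    /dev/ttyUSB* (serial adapters/clones) device naming.
    """
    return sorted(glob.glob('/dev/ttyACM*') + glob.glob('/dev/ttyUSB*'))
```

Whatever this prints with your board plugged in is the value to pass as serial_port when you construct the Arduino object.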
Step 7: Setting Up the Leap Motion Controller
If this is your first time using the Leap Motion controller, I'll show you how to set up the files we need to be able to use the controller effectively. I am running all of this code from an Ubuntu machine, so for those Windows and Mac users out there your experience may be slightly different. I'll guide you through installing the development kit from the Leap Motion developer site, and then hopefully when you plug your hardware in it will work. Afterwards we can start to make our controller do custom things like control our servo motor! P.S. If you already have the Leap Motion SDK installed, go to the LIBRARY SECTION of this step to learn how to set up our first project directory.
Head over to the Leap Motion developer downloads page. You'll need to sign in before you can download the SDK for your machine. For Linux: you will have to untar the SDK using a command like "tar -xvzf LeapDeveloperKit...." After extracting the dev kit, read the "README" file for instructions on how to install the Linux dependencies for Leap, and then check its instructions on how to install the SDK afterwards. If you're not running on a Linux machine, read the "README.txt" file; it will have instructions on how to install the software for Windows and Mac.
Depending on which version of the API you're going to use, check out... for all the available platforms. The API for each language is well documented. Take a moment to read through the API of your choice a little bit so you can get a better understanding of how the Leap Motion controller and software work together. Hopefully you were able to install everything correctly. As your attorney, I advise you to test your installation now before continuing. Plugging your Leap Motion device into your computer will lead to the OFF picture above. Simply running the visualizer while the controller is off will result in no output from the device and show no visuals. To combat this issue we need to start the "leapd" service. On a Linux machine this can be done by typing "sudo leapd" into your terminal and letting that process run. After you do that you should see output that says something like: WebSocket server started, Leap Motion Controller detected, and Firmware is up to date. If you do not see those outputs, take a moment to consult the almighty Google about any issues you're having. I have to admit that I had some troubles when I first installed this controller and was able to find answers on Google and Stack Overflow. After you get the leapd service running, your Leap device will turn on and you should be able to see the 3 LEDs light up (see the ON image above). You should now be ready to use the Visualizer. When we're testing and creating code we want to make sure the leapd service is always running, otherwise we can't use our device.
LIBRARY SECTION
Before we can make a piece of code we'll need to set up the libraries for our project. To see which libraries you need to copy from the SDK tar ball above check out this link:... (You'll need a different link if you're not using the Python API) For our python code we'll need to go to the directory: LeapDeveloperKit_*_linux/LeapSDK/lib/ to find the libraries. Afterwards you'll need to copy the Leap.py file and the two shared object files in the x64 or x86 directory to whichever directory your other pieces of code are in.
In your working directory you should now have the following files
one_servo.ino       # arduino sketch for serial interfacing with one servo
one_servo_test.py   # python code to test one servo w/o leapmotion
pyduino.py          # serial interfacing library for arduino
LeapLib/            # directory with Leap Motion libraries
    Leap.py         # these are from the Leap SDK
    LeapPython.so
    libLeap.so
In the next step we'll set up our first Leap Motion script to stream data from the device and then use the data to move our one servo motor.
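If you'd rather not hard-code the relative "LeapLib/" path every time you import Leap, a small helper like this can prepend the library directory to Python's search path (the function name is mine, not part of the SDK):

```python
import os
import sys

def add_leap_lib_path(base=None):
    """Prepend the LeapLib/ directory (relative to base, or to the
    current working directory) to sys.path so `import Leap` works."""
    base = base or os.getcwd()
    lib = os.path.join(base, "LeapLib")
    if lib not in sys.path:
        sys.path.insert(0, lib)
    return lib
```

Call it once at the top of a script, before importing Leap.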
Step 8: Linking the Leap Motion Controller to One Servo Motor
Now that you hopefully have your Leap Motion device up and running we are going to create our first program to stream data from the device and then we're going to link it up to our servo motor with the pyduino library. Any time we run a script we need to have the leapd service running. To know that this service is running you should be able to see the 3 illuminated LEDs on your Leap device. (See ON image in previous step)
Our first Leap script will be composed of several key components. There is a controller class which connects to the Leap Device and a listener class which will get the data from the tracking software. For an in depth tutorial on how to set up a simple Leap script check out:...
I'll give a brief overview of what the piece of code below does and how it works. In the main part of the code we initialize an instance of our listener and controller classes; afterwards we add the custom listener to our controller. The important part of the script is what occurs within the listener. The listener is a class that will obtain data from the Leap device and decide what to do with the information being streamed. The listener has the ability to obtain 60 frames a second from the device; however, that is a little too much information for what we want. You're certainly welcome to try and use the 60 frames a second to control your Arduino, but keep in mind it takes a small amount of time to send a message to the Arduino device and have it respond. To fix this issue we're only going to work with 10 frames a second instead of 60. The listener has a few basic functions: On Initialize, On Connect, On Disconnect, On Exit, On Frame. They're all pretty self-explanatory from the names, but let's go through them one by one and explain how we're going to use them to control our servo.
On Initialize - Here we want to declare our Arduino and establish a serial connection to it
On Connect - We can enable some gestures for the controller, no need to do anything with the Arduino here
On Disconnect - Nothing really here.
On Exit - We are going to reset the servo position and close the connection to our Arduino device
On Frame - The first thing to do is get the time, because we only want to act on a frame every 100 ms or so (10 frames a second). Once we obtain the frame from the controller it will tell us which hand is in the frame and what its position is. From this we can extract finger and joint positions as well. We are going to use the normalized position of our hand in the interaction box and multiply that by the range of our motor. For instance, if our hand is all the way left, the normalized position will be 0 and so the angle on our servo will be 0 as well. For more insight you can read up on the interaction box in the Leap API documentation:... Another useful link to look at is the one for the frame, because the frame stores all our information about the hand and the positions of everything. To see what information is accessible within our frame check out this link:...
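The timing logic in On Frame can be isolated into a tiny class to make the idea clearer. This is just a sketch of the same 100 ms gate, not part of the project files:

```python
import time

class FrameThrottle:
    """Out of the ~60 frames per second the Leap delivers, only let
    one through every `interval` seconds so the Arduino can keep up."""

    def __init__(self, interval=0.1):
        self.interval = interval
        self.oldtime = time.time()

    def ready(self):
        """Return True if enough time has passed to process a frame."""
        now = time.time()
        if now - self.oldtime > self.interval:
            self.oldtime = now  # update the old time
            return True
        return False            # keep waiting
```

Inside on_frame you would simply wrap the servo-write logic in `if throttle.ready(): ...`.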
We are going to use the same Arduino circuit that we created earlier. If you happened to change the servo pin to something other than 2, you will need to modify the SERVO_PIN value near the top of the code below. If you are unable to get the Arduino part working, make sure the serial connection is on the correct port (the Arduino() declaration in the listener's initialization). The self.a.servo_write(...) calls are what control the servo motor. When you run the code, take some time to move your hand around and see what gets output: the position of your hand will be printed in terms of its normalized position in the interaction box. Before you run this code make sure that your leapd service is running and that your Arduino is plugged into your computer.
my_first_leap.py
# Simple Leap Motion program to track the position of your hand and move one servo

# import the libraries where the Leap Motion SDK is
import sys
sys.path.insert(0, "LeapLib/")

import Leap, thread, time
from Leap import CircleGesture, KeyTapGesture, ScreenTapGesture, SwipeGesture
from pyduino import *


class SampleListener(Leap.Listener):
    oldtime = time.time()
    newtime = time.time()

    # FIXME if servo is not attached to pin 2
    SERVO_PIN = 2          # Azimuthal servo motor pin
    AZIMUTHAL_LIMIT = 180  # we want our motor to go between 0 and 180 deg

    def on_init(self, controller):
        # declare our Arduino and establish the serial connection;
        # if yours is not on '/dev/ttyACM0', pass serial_port= here
        self.a = Arduino()
        time.sleep(3)  # give the serial connection time to establish
        print "Initialized"

    def on_connect(self, controller):
        print "Connected"
        # Enable gestures
        controller.enable_gesture(Leap.Gesture.TYPE_CIRCLE)
        controller.enable_gesture(Leap.Gesture.TYPE_KEY_TAP)
        controller.enable_gesture(Leap.Gesture.TYPE_SCREEN_TAP)
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)

    def on_disconnect(self, controller):
        print "Disconnected"

    def on_exit(self, controller):
        # Reset servo position when you stop the program
        self.a.servo_write(self.SERVO_PIN, 90)
        self.a.close()
        print "Exited"

    def on_frame(self, controller):
        # we only want to get the position of the hand every so often
        self.newtime = time.time()
        if self.newtime - self.oldtime > 0.1:  # if difference between times is 100ms

            # Get the most recent frame and report some basic information
            frame = controller.frame()
            interaction_box = frame.interaction_box

            for hand in frame.hands:
                handType = "Left hand" if hand.is_left else "Right hand"
                # normalized position of the palm inside the interaction box
                normalized_point = interaction_box.normalize_point(hand.palm_position, True)
                print " %s, id %d, x-position: %s" % (handType, hand.id, normalized_point.x)
                print " %s, id %d, y-position: %s" % (handType, hand.id, normalized_point.y)
                print " %s, id %d, z-position: %s" % (handType, hand.id, normalized_point.z)

                # FIXME depending on orientation of servo motor
                # if motor is upright, Leap device will register a 0 degree
                # angle if the hand is all the way to the left
                XPOS_servo = abs(self.AZIMUTHAL_LIMIT - normalized_point.x * self.AZIMUTHAL_LIMIT)
                print " Servo Angle = %d" % (int(XPOS_servo))
                # write the value to the servo on the Arduino
                self.a.servo_write(self.SERVO_PIN, int(XPOS_servo))

            # update the old time
            self.oldtime = self.newtime
        else:
            pass  # keep advancing in time until 100 milliseconds have passed


def main():
    # Create a sample listener and controller
    listener = SampleListener()
    controller = Leap.Controller()

    # Have the sample listener receive events from the controller
    controller.add_listener(listener)

    # Keep this process running until Enter is pressed
    print "Press Enter to quit..."
    try:
        sys.stdin.readline()
    except KeyboardInterrupt:
        pass
    finally:
        # Remove the sample listener when done
        controller.remove_listener(listener)


if __name__ == "__main__":
    main()
Step 9: Assembling the Arm
Hopefully you can see how we're going to set up the rest of the project from our little example with moving just one servo. If you got the one servo working with the Leap Motion controller we pretty much have the rest of the project in the bag. All we need to do is set the ranges for our other servos and then tell the Arduino to move those servos and that's pretty much the rest of the coding that's required.
Unfortunately I do not have solid instructions on how to assemble the arm because it was a trial and error process for myself and I think it will be a good puzzle for you guys too :) I'll do my best to explain the design and show pictures of each arm but it will be up to you guys to do the assembly. I used only Balsa wood, hot glue and paper clips to assemble my arm. You certainly aren't restricted to creating it how I have, feel free to use your own judgement on this one. Some of the servo motors are screwed into the balsa wood for extra support, I use screws that are about as big as the ones you would use to screw an arm into one of your servo motors.
Step 10: Working Out the Final Circuit
The final circuit looks a little complicated but it's really not. We only have a total of 6 pins attached to our Arduino device when we use the breadboard. You will have 4 digital pins (2,3,4,5) occupied, plus a power and a ground one. On the breadboard you'll see 4 main rows of wires; each row corresponds to a single servo. The power for each servo motor is attached to the power column on the breadboard, which is wired to the 5V out on the Arduino. Remember to put the breadboard close to the arm so the wires connected to the servos are long enough to reach. I trust you guys to be able to wire in 4 servos, it's not rocket science :p
Before you can use the Leap Motion to control all four servos we need to upload a new sketch onto our Arduino device that will let us do that. At the moment our current sketch can only support one motor. Check out the sketch below, if you are not planning on using pins 2,3,4,5 to control your servo motors then you will need to change lines 30,31,32,33....
/*
 * Sketch to control the servo pins of Arduino via serial interface
 */

#include <Servo.h>

Servo SERVO2;
Servo SERVO3;
Servo SERVO4;

int SERVO2_PIN = 2;  // azimuth angle
int SERVO3_PIN = 3;  // altitude angle
int SERVO4_PIN = 4;  // wrist angle

void setup() {
    Serial.begin(9600);      // Serial Port at 9600 baud
    Serial.setTimeout(500);  // Instead of the default 1000ms, in order
                             // to speed up the Serial.parseInt()
    SERVO2.attach(SERVO2_PIN);
    SERVO2.write(180);  // reset to original position

    SERVO3.attach(SERVO3_PIN);
    SERVO3.write(90);   // reset to original position
    // servo3 limit 0-85

    SERVO4.attach(SERVO4_PIN);
    SERVO4.write(90);   // reset to original position
}

void loop() {
    ...
    } else if (pin_number == SERVO3_PIN) {
        SERVO3.write(servo_value);
        delay(10);
    } else if (pin_number == SERVO4_PIN) {
        SERVO4.write(servo_value);
        delay(10);
    }
    ...
}

Step 11: Testing+Debugging the Code
Alright, hopefully you have successfully assembled an arm or at least have four servos connected to your Arduino device which are ready to test. We're going to set the code up much like how we set it up with just one servo. Getting the claw to work will involve a little more than what we did earlier, but it's still not too hard. For the claw, all we're going to do is calculate a normalized distance between our pointer finger and thumb. To find the distance between the two points we will need to know a little vector arithmetic: we take the difference between the vector pointing to our thumb and the vector pointing to our pointer finger, then calculate the distance/magnitude by finding the 2-norm, which you can sort of think of as a 3D Pythagorean theorem. To see how this is done check out lines 91 to 107 in the code below. If you have any comments about the code feel free to ask, but it should be fairly straightforward if you've been able to follow along so far.
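The fingertip-distance idea can be sketched independently of the Leap API; the function name and the 100 mm scale here are my own illustration of the same math, not code from the project:

```python
import math

def norm_distance(thumb_tip, index_tip, scale=100.0):
    """Euclidean distance between two fingertip positions (x, y, z),
    crudely normalized to [0, 1] by dividing by `scale` millimetres."""
    dx, dy, dz = (a - b for a, b in zip(thumb_tip, index_tip))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)  # the 2-norm, a "3D Pythagorean theorem"
    return min(r / scale, 1.0)                  # clamp, since fingers can be > 100 mm apart

# Fingertips 60 mm apart along x -> 0.6; far-apart fingers clamp to 1.0
print(norm_distance((0, 0, 0), (60, 0, 0)))   # 0.6
print(norm_distance((0, 0, 0), (200, 0, 0)))  # 1.0
```

The clamped value can then be mapped straight onto the claw servo's angle range, exactly as the full listing below does with dist_norm.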
If you can't get the code to work make sure you have your leapd service running and your arduino device connected to your computer. You can also try to add in the control for the four servos one servo at a time if you got the code in the previous step to work.
Optimizing the code
There are a few tweaks to make, like the normalization limit for your pointer and thumb fingers as well as the ranges on the servos. I set the ranges on the servos based on how I attached the motors to the arm, because I didn't want the motors moving the arm into the ground or into the Arduino device itself. You can find the limit for the normalization between the thumb and pointer finger on line 104. The ranges for the motors can be found on lines 118, 119 and 120.
The code can also be found here:...
# Simple Leap motion program to track the position of your hand
# import the libraries where the LeapMotionSDK is
import sys
sys.path.insert(0, "LeapLib/")

import Leap, thread, time
from Leap import CircleGesture, KeyTapGesture, ScreenTapGesture, SwipeGesture
from pyduino import *
import numpy as np

class SampleListener(Leap.Listener):
    finger_names = ['Thumb', 'Index', 'Middle', 'Ring', 'Pinky']
    bone_names = ['Metacarpal', 'Proximal', 'Intermediate', 'Distal']
    state_names = ['STATE_INVALID', 'STATE_START', 'STATE_UPDATE', 'STATE_END']
    oldtime = time.time()
    newtime = time.time()

    ...
        # Define Pins for Arduino Servos
        self.PIN2 = 2  # Azimuthal
        self.PIN3 = 3  # Altitude
        self.PIN4 = 4  # Wrist
        self.PIN5 = 5  # Claw
        self.previous_angles = [0, 0, 0, 0]

        # allow time to make connection
        time.sleep(1)

    def on_exit(self, controller):
        time.sleep(1)
        # Reset arduino when you stop program
        self.a.servo_write(self.PIN2, 90)   # Az
        self.a.servo_write(self.PIN3, 0)    # Alt
        self.a.servo_write(self.PIN4, 100)  # Wrist
        self.a.servo_write(self.PIN5, 70)   # Claw
        self.a.close()
        print "Exited"

    def on_frame(self, controller):
        # we only want to get the position of the hand every so often
        self.newtime = time.time()
        if self.newtime - self.oldtime > 0.1:  # every 10 ms get a frame
            # Get the most recent frame and report some basic information
            frame = controller.frame()
            interaction_box = frame.interaction_box
            ...
            self.XPOS = normalized_point.x
            self.YPOS = normalized_point.y
            self.ZPOS = normalized_point.z
            print " %s, id %d, x-position: %s" % (handType, hand.id, int(self.XPOS * 180))
            print " %s, id %d, y-position: %s" % (handType, hand.id, int(self.YPOS * 85))
            print " %s, id %d, z-position: %s" % (handType, hand.id, int(self.ZPOS * 180))

            print 'my fingers =', len(hand.fingers)
            if len(hand.fingers) >= 2:
                x1 = hand.fingers[0].tip_position[0]
                y1 = hand.fingers[0].tip_position[1]
                z1 = hand.fingers[0].tip_position[2]
                # calc distance between two fingers
                x2 = hand.fingers[1].tip_position[0]
                y2 = hand.fingers[1].tip_position[1]
                z2 = hand.fingers[1].tip_position[2]
                # calc 2norm for difference between vector to thumb and pointer finger
                r = ((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)**0.5

                # perform a crude normalization
                dist_norm = r / 100.  # may need to change the 100 to something else

                print 'Finger Tip Distance = ', dist_norm
                # not really a normalized position, sometimes can be more than 1
                if dist_norm >= 1:
                    dist_norm = 1
                #for finger in hand.fingers:
                #    print finger, ' - ', finger.tip_position

            # determine motors - adjust angle ranges here too
            XPOS_servo = abs(145 - self.XPOS * 145)  # 0-Azimuth
            YPOS_servo = abs(85 - self.YPOS * 85)    # 1-Altitude
            ZPOS_servo = 35 + 135 * self.ZPOS        # Wrist Angle
            # write the value to servo on arduino
            self.a.servo_write(self.PIN2, int(XPOS_servo))  # Azimuth
            self.a.servo_write(self.PIN3, int(YPOS_servo))  # Altitude
            self.a.servo_write(self.PIN4, int(ZPOS_servo))  # Wrist

            # claw range
            CLAW_SERVO = abs(90 - dist_norm * 70)
            print 'Claw Angle =', CLAW_SERVO
            self.a.servo_write(self.PIN5, int(CLAW_SERVO))

            # update the old time
            self.oldtime = self.newtime

            # update previous values
            self.previous_angles[0] = XPOS_servo
            self.previous_angles[1] = YPOS_servo
            self.previous_angles[2] = ZPOS_servo
            self.previous_angles[3] = CLAW_SERVO
        else:
            pass  # keep advancing in time until 1 second has passed
14 Discussions
Question 7 weeks ago
Sir, thank you for your tutorial. It really helped me. May I ask for your help? I followed your tutorial to make this project. It works perfectly, but I have some difficulties in adding some extras:
1. I want to add an OLED LCD or a 16x2 LCD to read each servo's position (X, Y, Z). I put the standard code in void loop inside your main Arduino code, e.g. "LCD.print(value_to_write);". After I uploaded it, the value showed up on screen, but only the Y position, and the servos started to glitch and couldn't sync with my hand.
2. I want to add two motors for a wheel, and to differentiate between arm and wheel gestures. I am confused about how to add a different gesture in Python in order to control the motors.
Pls help me to figure out. Thx a lot Sir for sharing this. You're awesome!!!
9 months ago
Could you please explain the use of the following lines:
XPOS_servo = abs(145-self.XPOS*145) # 0-Azimuth
YPOS_servo = abs(85-self.YPOS*85) # 1-Altitude
ZPOS_servo = 35+135*self.ZPOS # Wrist Angle
1 year ago
Hi! Awesome Project!
Any idea about this?:
Traceback (most recent call last):
File "my_first_leap.py", line 77, in on_frame
XPOS_servo = abs(AZIMUTHAL_LIMIT-normalized_point.x*AZIMUTHAL_LIMIT)
NameError: global name 'AZIMUTHAL_LIMIT' is not defined
Thanks in advance!
Reply 1 year ago
Hey,
I would check to make sure that variable name is declared before you use it in the code.
Just do a Ctrl+F for "AZIMUTHAL_LIMIT" and see if you can find it in the code; otherwise, declare the variable somewhere with a value of 180.
Cheers!
2 years ago
Help me please
Reply 2 years ago
Hi,
I am not too familiar with developing this type of thing on Windows. I think the DLL for the Leap Motion library needs to be compiled in a special way. I would recommend starting at the Leap Motion SDK page for official instructions on how to install their SDK for Windows. I was developing my project on Linux, which is slightly different.
Reply 2 years ago
i made it
thanks for your support
3 years ago on Introduction
I appreciate your help with the proper licensing and references. The source code for the project has now been changed to reflect the comments and to reduce the amount of extraneous information being given to reader.
3 years ago on Introduction
Hi,
I'm the original author of the pyduino library. As it is free software, I don't have any problem with you incorporating it into your project, or enhancing it, but:
1) It is licensed as GNU GPLv2, any derivative (as your work) should be equally licensed. I do not see the license with your source code. This is very important.
2) You should give attribution to the original codebase on which you have based your work, mention and link it
3) As you are using GitHub, the preferred way would be to fork my repository so that this could be traceable and even thinking in contributing back to the original library (I accept pull requests, of course, this is Free Software)
Cheers,
3 years ago on Introduction
Hi! Nice and cool tutorial.
Just a question: did you, by chance, take the pyduino code from here? It looks pretty much the same, even the comments in the code :)
Reply 3 years ago on Introduction
Ah yes that is the old version of library. That version does not support servo control.
3 years ago on Introduction
<3! Very cool. This should totally win the Move It contest!
3 years ago on Introduction
Thanks for the tutorial. I finally know how to program Arduino in Python. Although I will use the Arduino Language in most of my projects.
3 years ago on Introduction
Really impressive stuff! Thank you for sharing this! | https://www.instructables.com/id/Hand-Tracking-Mechanical-Arm-Pyduino-Leap-Motion/ | CC-MAIN-2018-47 | refinedweb | 6,285 | 69.72 |
06 December 2012 18:59 [Source: ICIS news]
TORONTO (ICIS)--Talison Lithium said on Thursday that it has agreed to a Canadian dollar (C$) 7.50/share takeover offer from Tianqi.
Talison is headquartered in
Tianqi’s cash offer, valuing Talison's equity at about C$848m ($848m), was superior to the C$6.50/share takeover offer by US-based specialty chemicals firm Rockwood in August, Talison said.
"This price represents an attractive premium for security holders relative to the price under the Rockwood proposal and reflects positively on Talison’s position in the global lithium market," board chairman Peter Robinson said.
However, under its agreement with Tianqi, Talison is granting Rockwood five business days to match Tianqi's offer.
Rockwood media officials were not immediately available for comment. The company had said last month it would not raise its offer.
Talison's shares were up 6.6% at C$7.32/share at 12:55 hours on the Toronto Stock Exchange.
($1 = C | http://www.icis.com/Articles/2012/12/06/9621944/talison-lithium-agrees-to-c848m-takeover-bid-from-chinas-tianqi.html | CC-MAIN-2015-06 | refinedweb | 166 | 57.98 |
This is the fourth part of the FlareOn 6 CTF WriteUp series.
4 - Dnschess
The challenge reads
Some suspicious network traffic led us to this unauthorized chess program running on an Ubuntu desktop. This appears to be the work of cyberspace computer hackers. You'll need to make the right moves to solve this one. Good luck!
We have three files - ChessUI, ChessAI.so and capture.pcap. The first two are ELF binaries compiled for Linux x64. Running ChessUI on an Ubuntu 18.04 system we are greeted with a Chess game.
Our opponent DeepFLARE is the black side and waits for our move. Let's make a move. We move the knight from B1 to A3.
DeepFLARE resigns immediately without making a move. Trying other first moves doesn't change the outcome. Let's have a look at the PCAP.
The PCAP in its entirety consists of DNS requests along with their responses. There are DNS A queries for domain names which have the form of
<name>-<pos1>-<pos2>.game-of-thrones.flare-on.com where

- name is the name of a chess piece
- pos1 and pos2 are two positions on the chess board
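The naming scheme can be sketched as a tiny helper (the function name is mine, not something from the challenge binaries):

```python
def move_to_query(piece, src, dst):
    """Encode a chess move as the DNS name that gets looked up."""
    return "{}-{}-{}.game-of-thrones.flare-on.com".format(piece, src, dst)

print(move_to_query("rook", "c3", "c6"))
# rook-c3-c6.game-of-thrones.flare-on.com
```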
Corresponding to these DNS queries, we have responses as well.
However, we get an NXDOMAIN response when we try to look up the names on our system.
$ nslookup rook-c3-c6.game-of-thrones.flare-on.com 1.1.1.1
Server:   1.1.1.1
Address:  1.1.1.1#53

** server can't find rook-c3-c6.game-of-thrones.flare-on.com: NXDOMAIN
Analyzing ChessUI
ChessUI as it's name suggest must be responsible for the game GUI. Let's analyze the binary in Ghidra. Make sure that "Fixup Unresolved External Symbols" is unchecked in ELF loading options.
The presence of function names starting with gtk implies that the GUI was developed using the GIMP Toolkit framework. The third parameter to
g_signal_connect_data takes the address of a callback handler function
FUN_00103ab0.
Going through the decompiled code of FUN_00103ab0 we can notice that it loads ChessAI.so and obtains the addresses of the symbols getAiName, getAiGreeting and getNextMove, as shown in Figure 6.
Among the three functions, getNextMove looks interesting. Let's check out its code in ChessAI.so.
Analyzing ChessAI.so
Near the beginning there is a call to gethostbyname. This function can be used to obtain the IPv4 address of a domain name, and calling it results in a DNS query like the ones we saw in the PCAP. gethostbyname returns a hostent structure filled with the requested info.
struct hostent {
    char  *h_name;       /* official name of host */
    char **h_aliases;    /* alias list */
    int    h_addrtype;   /* host address type */
    int    h_length;     /* length of address */
    char **h_addr_list;  /* list of addresses */
};
For now, let's try to hook gethostbyname using the LD_PRELOAD technique. We define our own version of gethostbyname which will simply print its argument.
// Compile using
// gcc -Wall -fPIC -shared -o gethostbyname gethostbyname.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct hostent *gethostbyname(char *name) {
    printf("[+] %s\n", name);
    return NULL;
}
Let us inject the library using LD_PRELOAD and run ChessUI. When we make a move such as B1-A3, we get the following output on the terminal.
$ LD_PRELOAD=./gethostbyname ./ChessUI
[+] knight-b1-a3.game-of-thrones.flare-on.com
Essentially the application makes a DNS request for a name which contains the information about our move. The IP address returned as a response to this request must contain the move DeepFLARE should make.
The value returned from gethostbyname is a pointer to a hostent structure, and h_addr_list is a double pointer to the octets in the IP address. A few checks are done on these octets. uParm is a counter starting from 0 which increases by 1 for each turn. If the returned IP address is of the form W.X.Y.Z, then:

- W must be 127
- the last bit of Z must be zero (Z must be even)
- Y & 0xf, i.e. the last 4 bits of Y, must equal the counter value
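Restating those checks as a Python sketch (the function name is mine; turn stands in for the uParm counter from the decompiled code):

```python
def is_valid_response(ip, turn):
    """Check a DNS answer W.X.Y.Z against the rules the AI enforces.

    turn is the 0-based move counter (uParm in the decompiled code).
    """
    w, x, y, z = (int(octet) for octet in ip.split('.'))
    return w == 127 and (z & 1) == 0 and (y & 0xf) == turn

# 127.53.176.56 is the response expected for the first move (turn 0):
print(is_valid_response('127.53.176.56', 0))   # True
print(is_valid_response('127.215.177.38', 0))  # False - its Y & 0xf is 1, not 0
```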
If all of the conditions are satisfied, it XORs some data from DAT_00102020 with the octet Y, and the result is stored to DAT_00104060. The data at 102020 looks to be an array of encrypted bytes.
The following is the list of IP addresses obtained from the DNS responses which have the first octet equal to 127 and the last octet even.
This can be further be sorted in ascending order according to the value of
octet[2] & 0xf
>>> ip_list = [...]
>>> ip_list.sort(key=lambda x: int(x.split('.')[2]) & 0xf)
>>> ip_list
[...]
This is the order of DNS responses the game expects to receive. Note that the legality of our move is not checked client-side within the game, so it is possible to make any move as long as the DNS response is correct.

However, for completeness let's make our moves in the proper order too. Corresponding to the sorted list of IP addresses, we have this list of DNS requests, which indicates the moves we should make.
╔═════╦══════════════════╦═══════════════════════════════════════════╗
║ No. ║ IP               ║ Domain Name                               ║
╠═════╬══════════════════╬═══════════════════════════════════════════╣
║ 1   ║ 127.53.176.56    ║ pawn-d2                                   ║
║ 2   ║ 127.215.177.38   ║ pawn-c2                                   ║
║ 3   ║ 127.159.162.42   ║ knight-b1                                 ║
║ 4   ║ 127.182.147.24   ║ pawn-e2                                   ║
║ 5   ║ 127.252.212.90   ║ knight-g                                  ║
║ 6   ║ 127.217.37.102   ║ bishop-c1                                 ║
║ 7   ║ 127.89.38.84     ║ bishop-f1-e                               ║
║ 8   ║ 127.230.231.104  ║ bishop-e                                  ║
║ 9   ║ 127.108.24.10    ║ bishop-f4                                 ║
║ 10  ║ 127.34.217.88    ║ pawn-e4                                   ║
║ 11  ║ 127.25.74.92     ║ bishop-f3                                 ║
║ 12  ║ 127.49.59.14     ║ bishop-c6-a                               ║
║ 13  ║ 127.200.76.108   ║ pawn-e5                                   ║
║ 14  ║ 127.99.253.122   ║ queen-d1                                  ║
║ 15  ║ 127.141.14.174   ║ queen-h5-f7.game-of-thrones.flare-on.com  ║
╚═════╩══════════════════╩═══════════════════════════════════════════╝
For example, our first move will be to move the pawn from d2 to d4. Now all that is left is to modify gethostbyname.c so that it also returns the responses in the correct order.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

unsigned char ip_list[16][4] = { /* ... */ };

struct hostent {
    char  *h_name;       /* official name of host */
    char **h_aliases;    /* alias list */
    int    h_addrtype;   /* host address type */
    int    h_length;     /* length of address */
    char **h_addr_list;  /* list of addresses */
} _hostent;

int idx = 0;
void *addr_list[] = {NULL, NULL};

struct hostent *gethostbyname(char *name) {
    addr_list[0] = &ip_list[idx++];
    _hostent.h_addr_list = addr_list;
    return &_hostent;
}
We can compile and LD_PRELOAD it the same way.

$ gcc -Wall -fPIC -shared -o gethostbyname gethostbyname.c
$ LD_PRELOAD=./gethostbyname ./ChessUI
Winning the game
Let's play the game executing the moves in the order specified in the table.
Playing out all 15 moves we win and reach the stage as shown in Figure 15. The flag is also printed.
FLAG:
LooksLikeYouLockedUpTheLookupZ@flare-on.com | https://blog.attify.com/flare-on-6-ctf-writeup-part4/ | CC-MAIN-2021-04 | refinedweb | 1,111 | 77.53 |
Register for an upcoming free, live webcast or browse our on-demand archive of past events.
There are no upcoming webcasts at this time. Please check back soon.
Webcasts are made available as a video shortly following each live event.

| June 29, 2016
Join Amit Saha, author of Doing Math with Python, in this hands-on webcast, and learn how to use Python to solve calculus problems, make sense of numbers with graphs and statistics, do symbolic math with SymPy, and perform basic machine-learning tasks... | June 21, 2016
Viktor Farcic gets you started on your Docker journey, addressing Docker's challenges and outlining the steps you need to take to create a fully automated Jenkins pipeline that continuously builds, tests, and deploys microservices into a Docker Swarm....
By Patrick Wolf | May 17, 2016
Patrick Wolf offers an overview of what's new in Jenkins 2.0, demonstrates how to configure traditional jobs and orchestrate delivery pipelines, and discusses where Jenkins is going in the future.
By Evan Sparks | May 17, 2016
You'll learn the KeystoneML programming model, how to work with KeystoneML to construct new pipelines, how salient aspects of the KeystoneML optimizer work, and how KeystoneML achieves high performance and scalable model training while maintaining a ...
By David Crespo, E. Dunham, Hadi Hariri, Ashley McNamara | May 10, 2016
In an online conference inspired by the Emerging Languages track at the upcoming O'Reilly Open Source Convention (May 16–19 in Austin, TX), you'll get the lowdown on four relative newcomers—Kotlin, Rust, Elm, and Go.
By.
By Sean Leach | February 02, 2016
This talk will introduce unique API caching methods to address the common performance and scalability challenges experienced by providers of RESTful APIs, and also discuss how cached APIs can protect against the latest security threats to web infrastructure...
By. Matt Richardson, Shawn Wallace | ...
By Emily Xie | January 12, 2016
Coding: Art or Craft? explores how we think about the act of programming through metaphor.
By Steven Lott | January 12, 2016
In this hands-on workshop led by Steven Lott, author of 'Python for Secret Agents', 'Functional Python Programming' and 'Mastering Object-oriented Python' you will learn what Namespaces are, all the places we use them, using the three built-in namespace...
By.
By Arun Gupta | September 29, 2015
This talk will provide a quick introduction to Docker.
By Abraham Marin-Perez | September 22, 2015
This webcast presents ways in which CD can be scaled up so you can keep growing your application without sacrificing quality..
By Scott Davis | August 26, 2015
In this hands-on webcast, Scott Davis (author/presenter of Architecture of the MEAN Stack ) will give a code-first example of using MongoDB, ExpressJS, AngularJS, and NodeJS together to build modern, 21st century web applications..
By Patrick Catanzariti | July 22, 2015
Patrick will provide an overview of what's possible with recent tech including Arduinos, Particle Cores (previously known as Spark Core), Tessel, controlling Android via on{X}, voice recognition and AI using Wit.ai.
By.
Chiba development has been discontinued since 2009. Joern and Lars went on to develop a better XForms implementation in the betterFORM project:
Beta 2 of the XForms 1.1 engine Chiba 3.0 is out passing 86% of the normative tests. Dozens of improvements have been made as well as added support for dynamic loading of subforms.
Chiba Web 3.0.0b1 is out. Completely renewed with new JavaScript layer, improved XForms 1.1 support, localisation, XPath 2.0 and more.
Chiba Project Page:
Download:
Chiba, the modular XForms processor has released a new Core version featuring migration to Saxon for XPath 2.0 support, many XForms 1.1 functions as well as further new XForms 1.1 features.
This release further improves the server-side XForms implementation Chiba Web regarding XForms conformance, improved stability and configurability. As Chiba Core the Web processor now also uses Maven 2 as buildsystem and introduces automated web-testing.
The Chiba project released version 1.4.0 of its W3C XForms processor which fixes a bunch of detail issues and bugs. Further the build system has been migrated from Ant to Maven 2.
Chiba Web, the server-side XForms processor has released a new version featuring a ServletFilter architecture instead of the older Servlet-centered approach. Besides this a lot of detail improvements have been made. Chiba Web now uses Saxon (and therefore provides XSLT 2.0) instead of Xalan.
The server-side XForms processor Chiba Web 2.0.0 has been released and comes with full Ajax support. A live Demo is available from the homepage.
A new Chiba Core has been released. It fixes a lot of bugs and issues regarding XForms 1.0 Second Edition and Errata and contains a bulk of improvements to the code base.
Chiba Web now provides a full AJAX interface in addition to the traditional non-scripted mode. While processing is done on server-side the user-experience is comparable to a full client-side implementation. The AJAX mode supports Firefox/Mozilla, IE6 and Safari 2.
Chiba Web 2 is a complete new generation and this release shouldn't be considered production-ready. RC1 serves the purpose of gathering potential outstanding issues and preparing the final production release.
Contains corrections to fix the vast majority of the XForms errata and moves Chiba to XForms 1.0 Second Edition. Important areas of the source have been refactored for simplification and some performance improvements (100% in some areas). Many new tests have been introduced to ensure the new behaviour and some of the long-standing bugs have been fixed.
This is the first release of the Chiba XForms processor integrated within a servlet. Formerly this has been the main release package but the core has been extracted and is used by several different integrations now. Future releases of this package will be independent from core releases.
This release mainly established the new modularization and changed the build and distribution strategy. Chiba now only contains the core XForms implementation along with all generic resources needed/useful for any XForms processing environment, either client- or server-side. The main changes therefore are related to the build file. The ChibaAdapter interface has been refactored to condense some of the experiences with the different integrations. This process is still not complete, though.
This is the first release of the Chiba XForms processor as a separate package. The servlet integration will be available as a separate download from now on. Along with a lot of detail fixes and enhancements the first XForms 1.1 features have been implemented.
Chiba has released version 1.0 of its XForms processing engine. This is the first production release and considered stable. Chiba is now coming in different flavours either client- or server-side on every java-enabled platform, thus making the start to distributed XForms processing.
Chicoon is the integration of the Chiba XForms processor into Apache Cocoon. Chicoon was updated to use latest Chiba code and is the first W3C XForms conformant processor for Cocoon.
This near-production-quality release of the W3C XForms-conformant processor 'Chiba' fixes some smaller problems in the codebase, provides further cleanups and javadoc as well as adds some new Connector features such as directory-browsing and context submission. Improvements on itemset and select controls have also been done.
A preview of Convex, the XForms applet client has been rolled out. It provides the full Chiba functionality inside an offline, browser-based client. While generally browser-independent this first version is tailored for use in IE 6. Chiba provides an implementation of the W3C XForms standard, thereby delivering generic, xml-based form-processing for the web.
This release further improves Chiba's conformance to the XForms Spec, namely the full support of the Submission module, improved namespace handling in XPathes, basic support for the Range control as well as refinements and extensions to Upload handling. The XMLRPC connector package has been extended and now provides a URIResolver/SubmissionHandler pair and a sample server. As always some tests have been added and dead parts have been removed.
The new version closes some important gaps on the way to full XForms 1.0 conformance. New features include full XML Schema validation, full Submission protocol support, further improvements in the UI Generation and more.
The Chiba project has released a new version of its W3C XForms 1.0 implementation. Many details improvements in event processing and XSLT now make it suitable for either Client- or Server-side XForms processing. Many new unit-tests have been added to improve quality and stability of the features.
The Chiba project has released Version 0.9.5 of its W3C XForms implementation. It features various detail refinements as well as upload and EXSLT support.
The Chiba project has released version 0.9.4 of its Java W3C XForms implementation. It adds some detail functionality and some new Connectors and well as a bunch of unit-tests.
Chiba, the server side XForms processor has released version 0.9.3. The API and UI generation have been refined and it contains a lot of fixes and detail improvements.
Chiba 0.9.2, a server-side XForms implementation, is out. This release features adaptations to the XForms PR, support for XML Base, an improved API and many fixes.
This LINQ tutorial for beginners and professionals will guide you through learning LINQ with C#, with some real-time examples. Please feel free to ask questions; I will keep updating this tutorial with answers to your queries.
LINQ (Language-Integrated Query) is a very powerful query language introduced with .NET 3.5 and Visual Studio 2008. We can use LINQ with C# or Visual Basic to query different data sources like SQL, MySQL, Oracle, etc.

LINQ stands for Language Integrated Query. LINQ offers easy data access from databases, in-memory objects, XML documents and many other data sources.
In this tutorial you will learn how to work with LINQ step by step using C#; we explain all frequently used LINQ queries with examples.

Before learning LINQ, some basic knowledge of .NET, C# and Visual Studio will be helpful, though it is not required. If you are completely new to Visual Studio application development, you can still start, and you will slowly get familiar with the .NET development environment.

We have designed this LINQ tutorial for professional developers who want to learn and implement LINQ in the data access layer of their .NET projects. A basic understanding of writing SQL queries will be an added advantage, though it is not necessary.
While developing applications, as developers we have always encountered problems querying data in an easy way; we had to learn multiple technologies like SQL and web services just to make a single data call.

Now, with the help of LINQ, we can work with data directly without depending on any other technology, and LINQ can also easily be integrated with all of those earlier technologies.
using System.Linq; using System.Data.Linq;
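To give a first taste (this example is mine, not part of the tutorial's data-access samples), here is a minimal LINQ query over an in-memory array, shown in both query syntax and the equivalent method syntax:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // An in-memory data source (hypothetical sample data)
        int[] numbers = { 5, 12, 8, 21, 3 };

        // Query syntax: select the numbers greater than 7, in ascending order
        var query = from n in numbers
                    where n > 7
                    orderby n
                    select n;

        // Equivalent method syntax
        var sameQuery = numbers.Where(n => n > 7).OrderBy(n => n);

        Console.WriteLine(string.Join(", ", query));      // 8, 12, 21
        Console.WriteLine(string.Join(", ", sameQuery));  // 8, 12, 21
    }
}
```

The same two styles work unchanged against other LINQ providers (LINQ to SQL, Entity Framework, LINQ to XML), which is what makes the single query syntax so useful across data sources.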
Learn more about the query expression overview, Language Integrated Query and its new features, and how LINQ can help you work with different data sources using ADO.NET and Entity Framework.
I have deployed the watson-developer-cloud/personality-insights-python module to Bluemix and created an app in Bluemix. The link for my app is running absolutely fine. However, when I want to invoke the /v2/profile API with a POST request, I am getting an error. Here is the code I used in Python.
import requests
import json

payload = {'id': 'my-id',
           'userid': 'id-here',
           'sourceid': 'twitter',
           'contenttype': 'text/plain',
           'language': 'en',
           'content': 'text to analyse goes here'}
input_data = json.dumps(payload)

r = requests.post("",
                  auth=("username", "password"),
                  headers={"content-type": "application/json"},
                  data=input_data)
print(r.content)
I keep getting this error:
b'{"help": "", "error": "The number of words 1 is less than the minimum number of words required for analysis: 100", "code": 400}'
If I change the URL to leave out v2, then I get this error:
b'{"code": 400, "error": "No text provided"}'
Please help me on this.
Answer by German Attanasio Ruiz (4960) | Jun 07, 2015 at 02:51 PM
According to the API documentation, the Personality Insights service expects a json structure like:
{ "contentItems": [ { "id": "", "userid": "", "sourceid": "", "created": "int", "updated": "int", "contenttype": "", "charset": "", "language": "", "content": "", "parentid": "", "reply": false, "forward": false } ] }
Looking at your code, you are not wrapping the JSON in a contentItems object.
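As an illustration, here is a minimal sketch of wrapping the question's original payload in a contentItems array (the field values are the placeholders from the question, not real data):

```python
import json

# Placeholder values taken from the question, not real content
content_item = {
    'id': 'my-id',
    'userid': 'id-here',
    'sourceid': 'twitter',
    'contenttype': 'text/plain',
    'language': 'en',
    'content': 'text to analyse goes here',
}

# The profile endpoint expects the items wrapped in a contentItems array
input_data = json.dumps({'contentItems': [content_item]})
print(input_data)
```

The resulting string can then be posted as the data argument, with the content-type header set to application/json.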
On the other hand, the service supports text/plain and text/html, so if you want to analyze plain text you can do:
r = requests.post("",
                  auth=("username", "password"),
                  headers={"content-type": "text/plain"},
                  data="TEXT TO ANALYZE")
print(r.content)
Finally, Are you exposing?
Thanks for the answer, it helped. But that link () was a dummy; it won't work.
Answer by hbadenes (473) | Jun 08, 2015 at 07:39 AM
You have posted this question and got a reply on StackOverflow as well. As a general recommendation, please do not cross-post your questions. As you can see, you got unrelated responses, and future users will only find one of the threads when searching for this issue...
Nevertheless, here is the answer to your question. If you need more details, continue commenting on the stackoverflow page (since it got the complete answer first).
Eager fetching is the ability to efficiently load subclass data and related
objects along with the base instances being queried. Typically, OpenJPA has to
make a trip to the database whenever a relation is loaded, or when you first
access data that is mapped to a table other than the least-derived superclass
table. If you perform a query that returns 100
Person
objects, and then you have to retrieve the
Address for
each person, OpenJPA may make as many as 101 queries (the initial query, plus
one for the address of each person returned). Or if some of the
Person instances turn out to be
Employees,
where
Employee has additional data in its own joined
table, OpenJPA once again might need to make extra database trips to access the
additional employee data. With eager fetching, OpenJPA can reduce these cases to
a single query.
Eager fetching only affects relations in the active fetch groups, and is limited by the declared maximum fetch depth and field recursion depth (see Section 6, “ Fetch Groups ”). In other words, relations that would not normally be loaded immediately when retrieving an object or accessing a field are not affected by eager fetching. In our example above, the address of each person would only be eagerly fetched if the query were configured to include the address field or its fetch group, or if the address were in the default fetch group. This allows you to control exactly which fields are eagerly fetched in different situations. Similarly, queries that exclude subclasses aren't affected by eager subclass fetching, described below.
Eager fetching has three modes:
none: No eager fetching is performed. Related objects are
always loaded in an independent select statement. No joined subclass data is
loaded unless it is in the table(s) for the base type being queried. Unjoined
subclass data is loaded using separate select statements rather than a SQL UNION
operation.
join: In this mode, OpenJPA joins to to-one relations in the
configured fetch groups. If OpenJPA is loading data for a single instance, then
OpenJPA will also join to any collection field in the configured fetch groups.
When loading data for multiple instances, though, (such as when executing a
Query) OpenJPA will not join to collections by default.
Instead, OpenJPA defaults to
parallel mode for collections,
as described below. You can force OpenJPA to use a join rather than parallel mode
for a collection field using the metadata extension described in
Section 9.2.1, “
Eager Fetch Mode
”.
Under
join mode, OpenJPA uses a left outer join (or inner
join, if the relation's field metadata declares the relation non-nullable) to
select the related data along with the data for the target objects. This process
works recursively for to-one joins, so that if
Person has
an
Address, and
Address has a
TelephoneNumber, and the fetch groups are configured
correctly, OpenJPA might issue a single select that joins across the tables for
all three classes. To-many joins can not recursively spawn other to-many joins,
but they can spawn recursive to-one joins.
Under the
join subclass fetch mode, subclass data in joined
tables is selected by outer joining to all possible subclass tables of the type
being queried. As you'll see below, subclass data fetching is configured
separately from relation fetching, and can be disabled for specific classes.
Some databases may not support outer joins. Also, OpenJPA can not use
outer joins if you have set the
DBDictionary's
JoinSyntax to
traditional. See Section 6, “
Setting the SQL Join Syntax
”.
parallel: Under this mode, OpenJPA selects to-one relations
and joined collections as outlined in the
join mode
description above. Unjoined collection fields, however, are eagerly fetched
using a separate select statement for each collection, executed in parallel with
the select statement for the target objects. The parallel selects use the
WHERE conditions from the primary select, but add their own
joins to reach the related data. Thus, if you perform a query that returns 100
Company objects, where each company has a list of
Employee objects and
Department
objects, OpenJPA will make 3 queries. The first will select the company objects,
the second will select the employees for those companies, and the third will
select the departments for the same companies. Just as for joins, this process
can be recursively applied to the objects in the relations being eagerly
fetched. Continuing our example, if the
Employee class
had a list of
Projects in one of the fetch groups being
loaded, OpenJPA would execute a single additional select in parallel to load the
projects of all employees of the matching companies.
Using an additional select to load each collection avoids transferring more data than necessary from the database to the application. If eager joins were used instead of parallel select statements, each collection added to the configured fetch groups would cause the amount of data being transferred to rise dangerously, to the point that you could easily overwhelm the network.
Polymorphic to-one relations to table-per-class mappings use parallel eager fetching because proper joins are impossible. You can force other to-one relations to use parallel rather than join mode eager fetching using the metadata extension described in Section 9.2.1, “ Eager Fetch Mode ”.
Parallel subclass fetch mode only applies to queries on joined inheritance hierarchies. Rather than outer-joining to subclass tables, OpenJPA will issue the query separately for each subclass. In all other situations, parallel subclass fetch mode acts just like join mode in regards to vertically-mapped subclasses.
When OpenJPA knows that it is selecting for a single object only, it never uses
parallel mode, because the additional selects can be made
lazily just as efficiently. This mode only increases efficiency over
join mode when multiple objects with eager relations are being loaded,
or when multiple selects might be faster than joining to all possible
subclasses.
You can control OpenJPA's default eager fetch mode through the
openjpa.jdbc.EagerFetchMode and
openjpa.jdbc.SubclassFetchMode configuration properties. Set
each of these properties to one of the mode names described in the previous
section:
none, join, parallel. If left unset, the eager
fetch mode defaults to
parallel and the subclass fetch mode
defaults to
join. These are generally the most robust and
performant strategies.
You can easily override the default fetch modes at runtime for any lookup or query through OpenJPA's fetch configuration APIs. See Chapter 9, Runtime Extensions for details.
Example 5.22. Setting the Default Eager Fetch Mode
<property name="openjpa.jdbc.EagerFetchMode" value="parallel"/> <property name="openjpa.jdbc.SubclassFetchMode" value="join"/>
Example 5.23. Setting the Eager Fetch Mode at Runtime
import org.apache.openjpa.persistence.*; import org.apache.openjpa.persistence.jdbc.*; ... Query q = em.createQuery("select p from Person p where p.address.state = 'TX'"); OpenJPAQuery kq = OpenJPAPersistence.cast(q); JDBCFetchPlan fetch = (JDBCFetchPlan) kq.getFetchPlan(); fetch.setEagerFetchMode(JDBCFetchPlan.EAGER_PARALLEL); fetch.setSubclassFetchMode(JDBCFetchPlan.EAGER_JOIN); List results = q.getResultList();
You can specify a default subclass fetch mode for an individual class with the
metadata extension described in Section 9.1.1, “
Subclass Fetch Mode
”.
Note, however, that you cannot "upgrade" the runtime fetch mode with your class
setting. If the runtime fetch mode is
none, no eager
subclass data fetching will take place, regardless of your metadata setting.
This applies to the eager fetch mode metadata extension as well (see
Section 9.2.1, “
Eager Fetch Mode
”). You can use this extension to
disable eager fetching on a field or to declare that a collection would rather
use joins than parallel selects or vice versa. But an extension value of
join won't cause any eager joining if the fetch
configuration's setting is
none.
There are several important points that you should consider when using eager fetching:
When you are using
parallel eager fetch mode and you have
large result sets enabled (see Section 9, “
Large Result Sets
”)
or you place a range on a query, OpenJPA performs the needed parallel selects on
one page of results at a time. For example, suppose your
FetchBatchSize is set to 20, and you perform a large result set query
on a class that has collection fields in the configured fetch groups. OpenJPA
will immediately cache the first
20 results of the query
using
join mode eager fetching only. Then, it will issue the
extra selects needed to eager fetch your collection fields according to
parallel mode. Each select will use a SQL
IN
clause (or multiple
OR clauses if your class has a
compound primary key) to limit the selected collection elements to those owned
by the 20 cached results.
Once you iterate past the first 20 results, OpenJPA will cache the next 20 and again issue any needed extra selects for collection fields, and so on. This pattern ensures that you get the benefits of eager fetching without bringing more data into memory than anticipated.
Once OpenJPA eager-joins into a class, it cannot issue any further eager to-many joins or parallel selects from that class in the same query. To-one joins, however, can recurse to any level.
Using a to-many join makes it impossible to determine the number of instances the result set contains without traversing the entire set. This is because each result object might be represented by multiple rows. Thus, queries with a range specification or queries configured for lazy result set traversal automatically turn off eager to-many joining.
OpenJPA cannot eagerly join to polymorphic relations to non-leaf classes in a table-per-class inheritance hierarchy. You can work around this restriction using the mapping extensions described in Section 9.2.2, “ Nonpolymorphic ”. | http://openjpa.apache.org/builds/1.0.4/apache-openjpa-1.0.4/docs/manual/ref_guide_perfpack_eager.html | CC-MAIN-2013-48 | refinedweb | 1,610 | 52.7 |
Can you tell me an exact location of Linux kernel driver under Linux file system? Where to find all available modules under Linux operating systems?
Q. How do I list all open files for a Linux or UNIX process using command line options?
Q. Can you tell me more about dot-files that shell and many UNIX command reads?
Q. How do I list or find the smallest directories or files in the current directory under Linux or UNIX like operating system?
A. No single command exists for this task. However, by using shell pipes and a combination of other commands, one can produce the desired result.
Task: Display list of smallest files
You need to use the ls command and pass the options -l (long format), -S (sort by file size), and -r (reverse order), so that the smallest files are listed first. Enter:
$ ls -lSr
To display only the first few entries, pipe the output to the head command:
$ ls -lSr | head
$ ls -lSr | head -5
Output:
-rw-r--r-- 1 root root 0 May 29 07:08 Muttrc.local -rw-r--r-- 1 root root 0 Jan 12 2000 motd -rw-r--r-- 1 root root 0 Jan 12 2000 exports -rw-r--r-- 1 root root 0 Nov 28 2006 environment -rw-rw-r-- 1 root disk 0 Aug 7 2006 dumpdates -rw-r--r-- 1 root root 0 Jul 10 08:50 cron.deny -rw------- 1 root root 1 Aug 23 2006 at.deny lrwxrwxrwx 1 root root 7 Jul 10 08:50 rc -> rc.d/rc lrwxrwxrwx 1 root root 10 Jul 10 08:50 rc6.d -> rc.d/rc6.d lrwxrwxrwx 1 root root 10 Jul 10 08:50 rc5.d -> rc.d/rc5.d lrwxrwxrwx 1 root root 10 Jul 10 08:50 rc4.d -> rc.d/rc4.d lrwxrwxrwx 1 root root 10 Jul 10 08:50 rc3.d -> rc.d/rc3.d lrwxrwxrwx 1 root root 10 Jul 10 08:50 rc2.d -> rc.d/rc2.d
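As an aside (not part of the original tip), the same "smallest files first" listing can be sketched in Python using only the standard library; smallest_files is a made-up helper name for illustration:

```python
import os
import tempfile

def smallest_files(path=".", count=5):
    """Return (size, name) pairs for the smallest regular files in path."""
    entries = []
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isfile(full):
            entries.append((os.path.getsize(full), name))
    return sorted(entries)[:count]

# demo on a throwaway directory with three files of known sizes
demo = tempfile.mkdtemp()
for name, size in [("big.txt", 300), ("mid.txt", 20), ("tiny.txt", 1)]:
    with open(os.path.join(demo, name), "w") as f:
        f.write("x" * size)

for size, name in smallest_files(demo):
    print(size, name)
```

Sorting the (size, name) tuples orders by size first, which mirrors the smallest-first output of ls -lSr.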
Task: Display list of smallest directories
You need to use the du command with the -S option (so each directory's size excludes its subdirectories), and pipe its output to the sort command to sort it numerically:
$ du -S . | sort -n
$ du -S . | sort -n | head -10
Output:
du -S . | sort -n | head -10
4       ./lighttpd/ssl
4       ./monit.d
8       ./acpi
8       ./acpi/actions
8       ./alchemist
8       ./alchemist/namespace
8       ./alternatives
8       ./desktop-profiles
8       ./dev.d
8       ./dev.d/default
Read the man page of ls, sort and du for more options:
man ls
man du
man sort | http://www.cyberciti.biz/faq/tag/ls-command/page/3/ | CC-MAIN-2016-40 | refinedweb | 448 | 72.97 |
GHC/FAQ
Please feel free to add stuff here.
This page is rather long. We've started to add some sub-headings, but would welcome your help in making it better organised.
1 GHC on particular platforms
1.2 GHC on Linux
1.2 Ctrl-C doesn't work on Windows
When running GHC under a Cygwin shell on Windows, Ctrl-C sometimes doesn't work. A workaround is to use Ctrl-Break instead. Another workaround is to use the rlwrap program (a Cygwin package is available) to invoke ghci: in addition to proper Ctrl-C handling, you also get emacs (or vi) key bindings and command history across sessions.
1.4.3 How do I link Haskell with C++ code compiled by Visual Studio?
1.4.3.1 Prerequisites
It is assumed that the reader is familiar with the Haskell Foreign function interface (FFI), and is able to compile Haskell programs with GHC and C++ programs with Visual Studio.
When Haskell becomes able to use Visual C++ as a backend (see [1]), we would not need to go via a DLL anymore. Instead, we would simply list all source files (Haskell and C++) on the command line of GHC.
1.4.3.3 Invoking a Haskell DLL from a C++ executable
- Make a Haskell DLL as explained in [2]
- Make a module definition file, such as
LIBRARY Adder
EXPORTS
adder
- Create an import library using Visual Studio's lib.exe:
lib /DEF:adder.def /OUT:adder.lib
- Link the C++ program against the import library.
1.4.3.4 Invoking a C++ DLL from a Haskell executable
- Make a DLL project in Visual Studio. It will create a .vcproj and .sln files for you. Add your C++ source files to this project.
- Create a .def file for your DLL. It might look like
LIBRARY MyDLL
EXPORTS
function1
function2
where function1 and function2 are the names of the C++ functions that you want to invoke from Haskell (there can be more of them, of course), MyDLL is the name of your DLL.
1.4.4 GHCi hangs after "Loading package base ... linking ... done."
On a small number of systems GHCi will fail to start, hanging after loading the base package. The "Prelude>" prompt is never reached.
This is believed to be due to a bug in Windows thought to affect tablet PCs, although the details are not fully understood.
A workaround is to open a command prompt, enter "chcp 28591" - if this hangs hit Ctrl-C - and then run "ghci". Some users may find this hotfix useful.
1.5 Why isn't GHC available for .NET?
It would make a lot of sense to give GHC a .NET back end.
1.6 GHC on Mac OS X
1.6.1 Linking with ghc produces ld: Undefined symbols: _sprintf$LDBLStub ...
This happens on a PowerPC Mac OS X 10.4 if gcc-3.3 is the default compiler and you try to compile with a ghc that has been built with gcc-4.0. For example:
$ cat t2.hs
module Main where
main = putStr ("t2: Hello trac 1066 2007-Feb-17 19.48\n")
$ gcc --version
gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1819)
...
$ ghc --make t2.hs
[1 of 1] Compiling Main ( t2.hs, t2.o )
Linking t2 ...
ld: Undefined symbols:
_sprintf$LDBLStub
_fprintf$LDBLStub
_vfprintf$LDBLStub
_sscanf$LDBLStub
$
To correct this, set the default compiler to gcc-4.0 (sudo gcc_select 4.0) or include linking options -lHSrts -lSystemStubs in that order on the ghc command:
$ ghc --make t2.hs -lHSrts -lSystemStubs
[1 of 1] Skipping Main ( t2.hs, t2.o )
Linking t2 ...
$
The command
for l in <ghc installation directory>/lib/ghc-<ghc version>/*.a; do nm $l 2>&1 | if grep LDBLStub 1>/dev/null; then echo $l; fi; done
prints the list of libraries that may be needed instead of or in addition to -lHSrts before -lSystemStubs on the ghc command. For example:
$ for l in /Users/thorkilnaur/tn/install/ghc-HEAD-for-1066-20070211_1657/lib/ghc-6.7.20070209/*.a; do nm $l 2>&1 | if grep LDBLStub 1>/dev/null; then echo $l; fi; done
/Users/thorkilnaur/tn/install/ghc-HEAD-for-1066-20070211_1657/lib/ghc-6.7.20070209/libHSX11_cbits.a
/Users/thorkilnaur/tn/install/ghc-HEAD-for-1066-20070211_1657/lib/ghc-6.7.20070209/libHSrts.a
/Users/thorkilnaur/tn/install/ghc-HEAD-for-1066-20070211_1657/lib/ghc-6.7.20070209/libHSrts_debug.a
/Users/thorkilnaur/tn/install/ghc-HEAD-for-1066-20070211_1657/lib/ghc-6.7.20070209/libHSrts_p.a
/Users/thorkilnaur/tn/install/ghc-HEAD-for-1066-20070211_1657/lib/ghc-6.7.20070209/libHSrts_thr.a
/Users/thorkilnaur/tn/install/ghc-HEAD-for-1066-20070211_1657/lib/ghc-6.7.20070209/libHSrts_thr_debug.a
/Users/thorkilnaur/tn/install/ghc-HEAD-for-1066-20070211_1657/lib/ghc-6.7.20070209/libHSrts_thr_p.a
$
[3] has additional details.
1.6.2 Linking with a C++ library gives: Undefined symbols: __Unwind_Resume
You need to pass the -fexceptions to the linker. Use -optl -fexceptions.
2 Running GHC
2.1 GHC doesn't like filenames containing +.
Indeed not. You could change + to p or plus.
2.2.
One huge slowdown is also working on a remote filesystem, e.g., NFS. Preferably, work on a local machine.
4.4 Why doesn't "x=1" work at the ghci prompt?
Type
let x = 1
instead.
From IRC: "But in general, it's tricky to define functions interactively. You can do simple stuff easily enough: "let f x = x * x" or whatever; but for anything other than a simple one-liner, I usually stick it into a file and then load it with ghci."
5.4 How do I call a C procedure from a Haskell program?
First, you'll want to keep open the GHC user manual section on foreign function calling, and the Haskell FFI addendum.
Now, let's assume you have this C program ffi.c, which writes THE answer to a file:
#include <stdlib.h>
#include <stdio.h>

void write_answer(char *userfilename)
{
    FILE *userfile;
    userfile = fopen(userfilename, "w");
    fprintf(userfile, "42");
    fclose(userfile);
}
You also need a header file ffi.h.
void write_answer(char *userfilename);
Next Step: Write the according Haskell program to include the function write_answer in your Haskell code:
{-# INCLUDE <ffi.h> #-}
{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Foreign
import Foreign.C.String

foreign import ccall "ffi.h write_answer"
    cwrite_answer :: CString -> IO ()

write_answer :: String -> IO ()
write_answer s = do
    s' <- newCString s
    cwrite_answer s'

main = write_answer "ffi.dat"
Now we get to compiling (assume that /tmp/ffi/ is the current path).
cc -fPIC -c ffi.c
ar rc libffi.a ffi.o
ghc -lffi -L/tmp/ffi --make Main
And the resulting executable should write the file.
The -fPIC parameter to the c compiler is not strictly necessary. But the result will help us in the next step which is to dynamically link the library for use in GHCi.
5.5 How do I compile my C program to use in GHCi?
Suppose you got your c-program compiled (with -fPIC parameter) as described above. If you try to load your file Main.hs in GHCi you get an error similar to this:
Loading object (dynamic) ffi ... failed.
Dynamic linker error message was:
libffi.so: cannot open shared object file: No such file or directory
Whilst trying to load: (dynamic) ffi
What you need is a shared library. To get it you compile once more:
cc -shared -o libffi.so ffi.o
And now it all works fine:
$ ghci -lffi -L/tmp/ffi Main.hs
GHCi, version 6.8.2: :? for help
Loading package base ... linking ... done.
Loading object (dynamic) ffi ... done
final link ... done
Ok, modules loaded: Main.
Prelude Main> write_answer "test"
Prelude Main> :! cat test
42
Prelude Main>
6 Input/Output
Now v is allocated as a thunk. (Of course, that might be well worth it if e is an expensive expression.)
Instead GHC does "opportunistic CSE". If you have
let x = e in .... let y = e in ....
then it'll discard the duplicate binding. This can still cause space leaks but it guarantees never to create a new thunk, and it turns out to be very useful in practice.
Bottom line: if you care about sharing, do it yourself using let or where.
8.2 Why doesn't GHC use shared libraries?
My program is failing with head [], or an array bounds error, or some other random error, and I have no idea how to find the bug. Can you help?
Try the GHCi Debugger, in particular look at the section on "Debugging Exceptions".
Alternatively, compile your program with -prof -auto-all (make sure you have the profiling libraries installed), and run it with +RTS -xc -RTS to get a 'stack trace'.
GHC doesn't ship with support for parallel execution; that support is provided separately by the GPH project.
8.6 When is it safe to use unsafe functions such as.
8.9 How can I make GHC always use some extra gcc or linker option?
If you want to *always* use an extra option then you can edit the package configuration for the 'rts' or 'base' package since these packages are used by every program that you compile with GHC. You might want to do this if you had installed something that ghc needs but into a non-standard directory, thus requiring special compilation or linking options.
All you need to do is to dump out the configuration into a human readable form, edit it and re-register the modified package configuration. The exact commands to do that are below, but first here are the fields in the file that you might want to modify:
- include-dirs: directories to search for .h files
- library-dirs: directories to search for libraries
- extra-libraries: extra C libs to link with
- cc-options: extra flags to pass to gcc when compiling C code or assembly
- ld-options: extra flags to pass to gcc when linking
To edit the rts package (or base) configuration, just do:
- ghc-pkg describe rts > rts.package.conf
- edit rts.package.conf with your favourite text editor
- ghc-pkg update rts.package.conf
On Unix systems, some options can also be set with environment variables such as LIBRARY_PATH and CPATH. | http://haskell.org/haskellwiki/GHC:FAQ | crawl-001 | refinedweb | 1,733 | 67.25 |
Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites. Web scraping software may access the World Wide Web directly using the Hypertext Transfer Protocol, or through a web browser.
What is Web Scraping?
Web Scraping (also termed Screen Scraping, Web Data Extraction, Web Harvesting, etc.) is a technique for extracting data from websites. Where a person would need hours to copy the data by hand, web scraping software will perform the same task within a fraction of the time.
That being said, the actual code for web scraping is pretty simple.
Steps for Web Scraping using Python
Step 1: Find the URL you want to scrape.
One of my favorite things to scrape the web for is speech data.
Before you try to start scraping a site, it’s a good idea to check the rules of the website first. The scraping rules can be found in the robots.txt file, which can be found by adding a /robots.txt path to the main domain of the site.
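For example, Python's standard library can evaluate those rules programmatically with urllib.robotparser. Here is a self-contained sketch that parses a made-up robots.txt inline; against a live site you would call set_url("https://the-site/robots.txt") and read() instead:

```python
import urllib.robotparser

# A made-up robots.txt, parsed inline so the sketch needs no network access
sample = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(sample)

# can_fetch tells us whether the given user agent may crawl a path
print(rp.can_fetch("*", "/products/page.html"))   # True  (allowed path)
print(rp.can_fetch("*", "/private/secret.html"))  # False (disallowed path)
```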
Step 2: Identify the structure of the sites HTML
Once you have found a site that you can scrape, you can use chrome developer tools to inspect the site’s HTML structure. This is important, because more than likely, you’ll want to scrape data from certain HTML elements, or elements with specific classes or IDs. With the inspect tool, you can quickly identify which elements you need to target.
Step 3: Install Beautiful Soup and Requests
There are other packages and frameworks, like Scrapy. But Beautiful Soup allows you to parse the HTML in a beautiful way, so that’s what I’m going to use. With Beautiful Soup, you’ll also need to install a Request library, which will fetch the URL content.
If you aren’t familiar with it, the Beautiful Soup documentation has a lot of great examples to help get you started as well.
To install these two packages, you can simply use the pip installer.
$ pip install requests
and then
$ pip install beautifulsoup4
Step 4: Web Scraping Code
Finally, we can start writing some code. Here’s how I structured mine:
from bs4 import BeautifulSoup
import requests
# Here, we're just importing both Beautiful Soup and the Requests library

page_link = 'the_url_you_want_to_scrape.scrape_it_real_good.com'
# this is the url that we've already determined is safe and legal to scrape from.

page_response = requests.get(page_link, timeout=5)
# here, we fetch the content from the url, using the requests library

page_content = BeautifulSoup(page_response.content, "html.parser")
# we use the html parser to parse the url content and store it in a variable.

textContent = []
for i in range(0, 20):
    paragraphs = page_content.find_all("p")[i].text
    textContent.append(paragraphs)
In my use case, I want to store the speech data I mentioned earlier, so in this example, I loop through the first 20 paragraph elements and append their text to a list.
Step 5: Isolating the results:
In the line of code above:
paragraphs = page_content.find_all("p")[i].text
This basically finds all of the <p> elements in the HTML. the .text allows us to select only the text from inside all the <p> elements. The difference is that without the .text, our return would probably look like this:
<p class="paragraph" id="7E33CH" >Lorem Ipsum is unattractive, both inside and out. I fully understand why its former users left it for something else. They made a good decision. You know, it really doesn't matter what you write as long as you have got a young, <a href="linktoacoolsite.com">and</a> beautiful, piece of text. I think my strongest asset maybe by far is my temperament. I have a placeholding temperament. Lorem Ipsum's father was with Lee Harvey Oswald prior to Oswald's being, you know, shot.</p>
This can be a little messy, and so filtering the results using the Beautiful Soup .textallows us to get a cleaner return, which might look more like this:
Lorem Ipsum is unattractive, both inside and out. I fully understand why its former users left it for something else. They made a good decision. You know, it really doesn't matter what you write as long as you have got a young, and beautiful, piece of text. I think my strongest asset maybe by far is my temperament. I have a placeholding temperament. Lorem Ipsum's father was with Lee Harvey Oswald prior to Oswald's being, you know, shot.
Beautiful Soup also has a host of other ways to simplify and navigate the data structure:
soup.title
# <title>Returns title tags and the content between the tags</title>

soup.title.string
# u'Returns the content inside a title tag as a string'

soup.p
# <p class="title"><b>This returns everything inside the paragraph tag</b></p>

soup.p['class']
# u'className' (this returns the class name of the element)

soup.a
# <a class="link" href="" id="link1">This would return the first matching anchor tag</a>

# Or, we could use find_all, and return all the matching anchor tags
soup.find_all('a')
# [<a class="link" href="" id="link1">link2</a>,
#  <a class="link" href="" id="link2">like3</a>,
#  <a class="link" href="" id="link3">Link1</a>]

soup.find(id="link3")
# <a class="link" href="" id="link3">This returns just the matching element by ID</a>
There are a lot of other great ways to search, filter and isolate the results you want from the HTML. You can also be more specific, finding an element with a specific class or attribute:
soup.findAll('div',attrs={"class":"cool_paragraph"})
This would fine all the <div> elements with the class “cool_paragraph”.
Should I scrape the web in the first place?
An alternative to web scraping is using an API, if one is available. Obviously, in many cases, this isn’t an option, but API’s do provide faster and often more reliable data. Here are a few great APIs. Some APIs also provide more content than what would be available through web scraping.
Be Polite
Web scraping can also overload a server, if you are making a large amount of requests, and scraping large amounts of data. As I mentioned earlier, it’s a good idea, before you start, to check the robots.txt before scraping.
Another good way to be polite when scraping is to be completely transparent, and even notify people to let them know you’re going to crawl their site, why you are doing it, and what you are using the data for. One way to do this, and highly recommended, is to use a user agent. You can import a user agent library in python by pip installing the user_agent library..
Your user agent can provide information like a link to more information about the scraper you’re using. Your page about the scraper can and should include the information about what you’re using it for, what IP address you are crawling from, and possibly a way to contact you if your bot causes any problems.
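As a minimal illustration (the bot name, page URL, and contact address below are hypothetical placeholders), here is how such an identifying User-Agent header can be attached to a request using only the standard library; with the requests library you would pass the same dict via the headers= argument:

```python
import urllib.request

# Hypothetical identifying values; replace with your own bot page and contact
user_agent = "my-research-scraper/1.0 (+https://example.com/about-my-bot)"

req = urllib.request.Request(
    "https://example.com/page-to-scrape",
    headers={"User-Agent": user_agent},
)

# The header is attached to the request object; urlopen(req) would send it
print(req.get_header("User-agent"))
```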
The point is that web scraping can cause problems, and we don’t want to cause problems. More than likely, at some point we will probably make mistakes that might affect a website. I think the golden rule is to just be open and honest in communicating with webmasters. If you respond to complaints quickly, you should be fine. Sometimes you may realize that your scraper has caused a problem even before the site you’re scraping realizes it. In that case, it’s an even better idea to make the first contact and basically say, Hey, sorry about that. Here’s what I did, and I fixed it.
Conclusion:
I realize this really just scratches the surface of web scraping. And my intention isn’t to go into a ton of detail here. Web scraping is actually pretty easy to get started. But doing it the right way takes a little more time and effort. Also, I’m still fairly new at this, and am by no means an expert on anything, and appreciate any feedback or tips!
In this article, we'll see how to implement web scraping with Python.
Is Web Scraping Legal?
Talking about whether web scraping is legal or not, some websites allow web scraping and some don’t. To know whether a website allows web scraping or not, you can look at the website’s “/robots.txt” file. You can find this file by appending “/robots.txt” to the URL that you want to scrape. For this example, I am scraping the Flipkart website. So, to see the “/robots.txt” file, the URL is.
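As a tiny illustration, appending the /robots.txt path to a site's main domain can be done with urllib.parse.urljoin (using the Flipkart domain mentioned above):

```python
from urllib.parse import urljoin

base = "https://www.flipkart.com"
robots_url = urljoin(base, "/robots.txt")
print(robots_url)  # https://www.flipkart.com/robots.txt
```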
import bs4
import urllib.request as url

userInput = input("Enter movie name : ")
userInput1 = userInput.split()
movieName = '+'.join(userInput1)

http = url.urlopen("" + movieName)
source = bs4.BeautifulSoup(http, 'lxml')

td = source.find('td', class_='result_text')
a = td.find('a')
# print(a['href'])
href = a['href']
newUrl = "" + href
http = url.urlopen(newUrl)
source = bs4.BeautifulSoup(http, 'lxml')

div = source.find('div', class_='title_wrapper')
# print(div.text)
data = div.text.replace("\n", "")
# print(data.split())
data = data.split()
data = ' '.join(data)
print(data)

summary = source.find('div', class_='summary_text')
print(summary.text.strip())

links = source.findAll('a', class_='quicklink')
# print(links)
url2 = "" + links[2]['href']
http = url.urlopen(url2)
source = bs4.BeautifulSoup(http, 'lxml')

titles = source.findAll('a', class_='title')
for item in titles:
    print(item.text)
Technote (troubleshooting)
Problem(Abstract)
Cross join error even though relations are defined in the database view in Framework Manager.
Symptom
Report Studio:
QE-DEF-0103 Cross joins are not permitted
Cause
Cross join error occurs when cross product is disallowed in Framework Manager (FM), and Report Studio, and there are no relationships detected between the query subjects used in the report.
In Framework Manager, the database view contains defined relationships. In this case, instead of creating query subjects from the model to populate the business view, the business view query subjects were created from the data source. Since the relationships defined in the model do not exist in the data source, cross join error is raised when these query subjects are used in a report.
Resolving the problem
From Framework Manager, verify which query subjects are being created from the data source, and recreate them from the model.

Steps:

To determine if a query subject is created from the data source or the model:
1. Right click on a query subject in the business view.
2. Edit definition.
3. If the definition of the query subject contains SQL, then the query subject has been created by referencing the data source. For example:
select
    item 1,
    item 2, ...
from
    data source.table
To recreate the query subject to reference the model:
1. Backup your model.
2. Delete the datasource query subject from the business view.
3. Right-click on the business view namespace, then click Create, then click New Query Subject.
4. Select From Model.
5. Drag the appropriate items from the database layer.
Related information
KB 1004459: How do you allow Cross Product joins in a report?
Historical Number
1023486 | https://www-304.ibm.com/support/docview.wss?uid=swg21340336 | CC-MAIN-2015-48 | refinedweb | 277 | 65.52 |
Common handling of the error case for RResult<T> (T != void) and RResult<void>
RResultBase captures a possible runtime error that might have occurred. If the RResultBase leaves the scope unchecked, it will throw an exception. RResultBase should only be allocated on the stack, which is enforced by deleting the new operator. RResultBase is movable but not copyable to avoid throwing multiple exceptions about the same failure.
Definition at line 135 of file RError.hxx.
#include <ROOT/RError.hxx>
Definition at line 143 of file RError.hxx.
Definition at line 51 of file RError.cxx.
Used by the RResult<T> bool operator.
Definition at line 146 of file RError.hxx.
Definition at line 159 of file RError.hxx.
Throws an RException with fError.
Definition at line 69 of file RError.cxx.
This is the nullptr for an RResult representing success.
Definition at line 138 of file RError.hxx.
Switches to true once the user of an RResult object checks the object status.
Definition at line 140 of file RError.hxx. | https://root.cern.ch/doc/v622/classROOT_1_1Experimental_1_1Internal_1_1RResultBase.html | CC-MAIN-2021-21 | refinedweb | 170 | 70.6 |
Siarhei Siamashka <siarhei.siamashka at gmail.com> writes:

> Hello All,
>
> ARMv6 has instructions for swapping bytes, a patch for using them is
> attached. A minor performance improvement can be observed on MP3
> decoding (benchmarked with mplayer).

[...]

> Except for 'flashsv' which fails to work with A32_BITSTREAM_READER properly,
> ffmpeg regression tests passed (using Nokia N800, chrooted into
> debian EABI rootfs) with this bswap patch applied and also with an older
> dsputil patch which would be also very nice to get committed:
>
> Index: libavutil/bswap.h
> ===================================================================
> --- libavutil/bswap.h   (revision 10281)
> +++ libavutil/bswap.h   (working copy)
> @@ -47,6 +47,8 @@
>                "0" (x));
>  #elif defined(ARCH_SH4)
>      __asm__("swap.b %0,%0":"=r"(x):"0"(x));
> +#elif defined(ARCH_ARM) && defined(HAVE_ARMV6)

HAVE_ARMV6 implies ARCH_ARM so only the former needs to be tested.

> +    __asm__("rev16 %0,%0":"=r"(x):"0"(x));
>  #else
>      x= (x>>8) | (x<<8);
>  #endif
> @@ -72,6 +74,8 @@
>          "swap.w %0,%0\n"
>          "swap.b %0,%0\n"
>          :"=r"(x):"0"(x));
> +#elif defined(ARCH_ARM) && defined(HAVE_ARMV6)
> +    __asm__("rev %0,%0":"=r"(x):"0"(x));
>  #elif defined(ARCH_ARM)
>      uint32_t t;
>      __asm__ (
>

Patch is OK as such. However, I don't like the way inline assembler is
being sprinkled across various files. I'd prefer we cleaned that up
before adding more of it.

--
Måns Rullgård
mans at mansr.com
Asked by:
How do namespaces work in C#?
Question
Hi,
I'm new to C# and .NET Core Framework. I'm trying to change the file structure of my project. I created a 'Models' Folder and followed convention similar to the 'Controllers' folder that already exists setting the namespace to 'project.FolderName' but when I add this namespace and move the file my controller file cant recognize the class anymore. I've also tried importing the class like this
using User.Models
using System;
using System.Data.Entity;

namespace project.Models
{
    public class User
    {
        public string email { get; set; }
        public string username { get; set; }
        public string password { get; set; }
    }
}
What am I missing? When I leave the class in the root of the project with the namespace just as my project name, it works.
Monday, July 20, 2020 5:03 AM
All replies
Hi Arvind16,
Thank you for posting here.
I cannot reproduce this error.
After I modify the namespace, as long as I add a using directive for that namespace wherever the class is used, the class works no matter where its file is placed.
Edit:
You show above:
using User.Models
Is your project called User?
If so, you need to use the User class like this:
Models.User user = new Models.User();
If not, could you please show us the error message?

Monday, July 20, 2020 8:57 AM
Hello,
The following may help to understand about namespaces. The project focuses on Entity Framework Core, a folder Models for each table in a database, a Class folder for worker classes, a Context folder for DbContext.
The image below (right-click it and open it in a new browser window to see the color coding) shows how a class in the Class folder references classes in the Models folder and the DbContext in the Contexts folder.

If I comment out using North.Contexts;, the compiler can no longer find NorthwindContext.

Adding the directive back, we are working again.
So now in the last line Visual Studio knows where to find NorthwindContext. Does this make sense?
The project is,

Monday, July 20, 2020 10:31 AM
using project.Models;
It should be a statement at the top of the controller class if you want to access any class in the Models folder from the controller class. PublishingCompany is the name of the ASP.NET MVC project. The Models folder is a folder in PublishingCompany that contains IAuthorDM and AuthorDM, and there must be a using PublishingCompany.Models statement in the AuthorController to use the AuthorDM class.
ASP.NET MVC issues can be discussed at the ASP.NET forums.
using Microsoft.AspNetCore.Mvc;
using PublishingCompany.Models;

namespace PublishingCompany.Controllers
{
    public class AuthorController : Controller
    {
        private IAuthorDM adm;

        public AuthorController(IAuthorDM authorDM)
        {
            adm = authorDM;
        }

        public IActionResult Index()
        {
            return View(adm.GetAll());
        }

        public IActionResult Detail(int id = 0)
        {
            return id == 0 ? null : View(adm.Find(id));
        }

        public IActionResult Create()
        {
            return View(adm.Add());
        }

        [HttpPost]
        public ActionResult Create(AuthorVM.Author author, string submit)
        {
            if (submit == "Cancel")
                return RedirectToAction("Index");
            if (!ModelState.IsValid)
                return View(author);
            adm.Add(author);
            return RedirectToAction("Index");
        }

        public ActionResult Edit(int id = 0)
        {
            return id == 0 ? null : View(adm.Update(id));
        }

        [HttpPost]
        public ActionResult Edit(AuthorVM.Author author, string submit)
        {
            if (submit == "Cancel")
                return RedirectToAction("Index");
            if (!ModelState.IsValid)
                return View(author);
            adm.Update(author);
            return RedirectToAction("Index");
        }

        public IActionResult Delete(int id = 0)
        {
            if (id > 0)
                adm.Delete(id);
            return RedirectToAction("Index");
        }

        public ActionResult Cancel()
        {
            return RedirectToAction("Index", "Home");
        }
    }
}
using System.Linq;
using ServiceLayer;
using Entities;

namespace PublishingCompany.Models
{
    public class AuthorDM : IAuthorDM
    {
        private IAuthorSvc svc;

        public AuthorDM(IAuthorSvc authorSvc)
        {
            svc = authorSvc;
        }

        public AuthorVM GetAll()
        {
            var vm = new AuthorVM();
            var dtos = svc.GetAll().ToList();
            vm.Authors.AddRange(dtos.Select(dto => new AuthorVM.Author()
            {
                AuthorID = dto.AuthorId,
                FirstName = dto.FirstName,
                LastName = dto.LastName
            }).ToList());
            return vm;
        }

        public AuthorVM.Author Find(int id)
        {
            var dto = svc.Find(id);
            var author = new AuthorVM.Author
            {
                AuthorID = dto.AuthorId,
                FirstName = dto.FirstName,
                LastName = dto.LastName
            };
            return author;
        }

        public AuthorVM.Author Add()
        {
            return new AuthorVM.Author();
        }

        public void Add(AuthorVM.Author author)
        {
            var dto = new DtoAuthor
            {
                FirstName = author.FirstName,
                LastName = author.LastName
            };
            svc.Add(dto);
        }

        public AuthorVM.Author Update(int id)
        {
            var dto = Find(id);
            var author = new AuthorVM.Author
            {
                AuthorID = dto.AuthorID,
                FirstName = dto.FirstName,
                LastName = dto.LastName
            };
            return author;
        }

        public void Update(AuthorVM.Author author)
        {
            var dto = new DtoAuthor
            {
                AuthorId = author.AuthorID,
                FirstName = author.FirstName,
                LastName = author.LastName
            };
            svc.Update(dto);
        }

        public void Delete(int id)
        {
            var dto = new DtoId { Id = id };
            svc.Delete(dto);
        }
    }
}
namespace PublishingCompany.Models
{
    public interface IAuthorDM
    {
        AuthorVM GetAll();
        AuthorVM.Author Find(int id);
        AuthorVM.Author Add();
        void Add(AuthorVM.Author author);
        AuthorVM.Author Update(int id);
        void Update(AuthorVM.Author author);
        void Delete(int id);
    }
}
Monday, July 20, 2020 4:18 PM
Hi,
Has your issue been resolved?
If so, please click on the "Mark as answer" option of the reply that solved your question.

July 30, 2020 8:54 AM
Extension Methods
Extension methods are methods that add functionality to .NET types or types that you have defined. For example, you can extend the System.String class by adding a Reverse method that reverses the string. The following program shows an example of an extension method.
using System; namespace ExtensionMethodsDemo { public static class StringExtender { public static string Reverse(this string myString) { char[] reverse = new char[myString.Length]; for (int i = myString.Length - 1, j = 0; i >= 0; i--, j++) { reverse[j] = myString[i]; } string returnString = new string(reverse); return returnString; } } public class Program { public static void Main() { string myMessage = "This message will be reversed."; Console.WriteLine(myMessage); Console.WriteLine(myMessage.Reverse()); } } }
Example 1 – Extention Methods Demo
This message will be reversed. .desrever eb lliw egassem sihT
To define an extension method, you first need to create a static class that will hold the extension method. You must import the necessary namespace for the type that you want to extend. Inside the static class, we defined the extension method. Extension methods must also be static, can have any return type, and can possess any number of parameters just like any function. But extension methods follow a different syntax.
public static returnType methodName(this type name, param1 ... paramN) { method codes... }
As you can see inside the parameter list, we first used the this keyword, then the type (such as string) that we want to extend, followed by an instance name. This will hold the instance of that type so that we can use it inside our method for manipulation. We then follow it with a comma-separated list of parameters. The extension method we defined in Example 1 needs no parameters. You can now create strings in your program and use the Reverse method. Note that extension methods are useful if you want to extend predefined types and classes from the .NET Framework. You can also use them on your custom class, but it would be better to just add the method to the class itself.
Inside the method, we declared a character array with the same length as the string message so that we can assemble the reversed sequence of characters. We used a for loop that iterates the character array from the beginning and the message from the last position: the last character of the string is assigned to the first position of the character array, the second-to-last character of the string to the second position, and so on. Since the return type of the extension method is string, we need to convert the character array back to a string. We do this by creating a new instance of the string class using the constructor that accepts a character array. We then return the reversed string back to the caller.
To give you an example of an extension method that has parameters, let's modify our extension method in Example 1 so that it accepts a boolean value that determines whether to retain the casing of the string message.
using System; namespace ExtensionMethodsDemo2 { public static class StringExtender { public static string Reverse(this string myString, bool retainCase) { char[] reverse = myString.ToCharArray(); for (int i = myString.Length - 1, j = 0; i >= 0; i--, j++) { reverse[j] = myString[i]; } string returnString = new string(reverse); if (retainCase == false) { returnString = returnString.ToLower(); } return returnString; } } public class Program { public static void Main() { string myMessage = "THIS MESSAGE WILL BE REVERSED."; Console.WriteLine(myMessage); Console.WriteLine(myMessage.Reverse(false)); } } }
Example 2 – Parameterized Extention Methods
THIS MESSAGE WILL BE REVERSED. .desrever eb lliw egassem siht
We added a boolean parameter that determines whether the reversed message retains the casing of the original message, and an if statement that uses this parameter to decide. The ToLower() method of the System.String class converts all the characters into their lowercase representation. The converted string is then assigned to the variable that is returned by the extension method.
Hi guys,
so this is really confusing me. I’ve been having success using the script for the light sensor (it’s a grove module) when only running that on it’s own.
However, if I break the code into a larger program it doesn’t want to function.
I’ve placed the items in the same corresponding order - pre-setup, setup and loop. But still to no avail.
Single code that works, based off the library;
#include <Wire.h> #include <Digital_Light_TSL2561.h> void setup() { Wire.begin(); Serial.begin(9600); TSL2561.init(); } void loop() { Serial.print("The Light value is: "); Serial.println(TSL2561.readVisibleLux()); delay(1000); }
The above works. But not if split into a larger program (which is attached as it’s too large to include the code here).
Please help
Thanks guys
light_sensor_problem.ino (9.06 KB) | https://forum.arduino.cc/t/tsl2561-light-sensor-not-firing-up/563027 | CC-MAIN-2022-40 | refinedweb | 139 | 70.7 |
[Note: my last post announced my new book, Modern Java Recipes, is now available from O’Reilly publishers in Early Release form. As a sample, I included a discussion of the
Predicate interface, one of the new functional interfaces in the
java.util.function package. In this post, I highlight constructor references, which are discussed in another recipe in the book.]
Problem
You want to instantiate an object using a method reference as part of a stream pipeline.
Solution
Use the
new keyword as part of a method reference.
Discussion
When people talk about the new syntax added to Java 8, they mention lambda expressions, method references, and streams. For example, say you had a list of people and you wanted to convert it to a list of names. One way to do so would be:
[sourcecode language=”java”]
List<String> names = people.stream()
.map(person -> person.getName()) // lambda expression
.collect(Collectors.toList());
[/sourcecode]
In other words, take a list of
Person instances, turn it into a stream, map each one to a
String by invoking the
getName() method on it, and
collect them back into a
List.
That works, but most developers would use a method reference instead:
[sourcecode language=”java”]
List<String> names = people.stream()
.map(Person::getName) // method reference
.collect(Collectors.toList());
[/sourcecode]
The method reference is slightly shorter, and makes clear that the only thing being done to each person is transforming it using the
getName method. Lambda expressions can be far more complicated and versatile. Method references are simple.
What if you want to go the other way? What if you have a list of strings and you want to create a list of
Person references from it? In that case you can use a method reference again, but this time using the keyword
new. That’s a constructor reference, which I would like to illustrate here.
First, here is the
Person class, which is just about the simplest Plain Old Java Object (POJO) imaginable. All it does is wrap a simple string attribute called
name.
[sourcecode language=”java”]
public class Person {
private String name;
public Person() {} // default constructor
public Person(String name) {
this.name = name;
}
public String getName() { return name; }
public void setName(String name) { this.name = name; }
    // ... equals, hashCode, toString methods ...
}
[/sourcecode]
Given a list of string names, I can map them to
Person instances using the one-argument constructor.
[sourcecode language=”java”]
List<String> names =
Arrays.asList("Grace Hopper", "Barbara Liskov", "Ada Lovelace",
"Karen Spärck Jones");
List<Person> people =
names.stream()
.map(name -> new Person(name)) // lambda expression
.collect(Collectors.toList());
[/sourcecode]
Now instead of using the lambda expression that invokes the one-argument constructor directly, I can use a constructor reference instead.
[sourcecode language=”java”]
names.stream()
.map(Person::new) // Constructor reference
.collect(Collectors.toList());
[/sourcecode]
Like all lambda expression or method references, context is everything. The
map method is invoked on a stream of strings, so the
Person::new reference is invoked on each string in the stream. The compiler recognizes that the
Person class has a constructor that takes a single string, so it calls it. The default constructor is ignored.
Copy Constructors
To make things more interesting, I’ll add two additional constructors: a “copy constructor” that takes a
Person argument, and one that takes a variable argument list of strings.
[sourcecode language=”java”]
public Person(Person p) { // copy constructor
this.name = p.name;
}
public Person(String... names) { // varargs constructor
this.name = Arrays.stream(names)
.collect(Collectors.joining(" "));
}
[/sourcecode]
The copy constructor makes a new
Person from an existing
Person instance. Say I defined a person, then used that person in a stream without mapping it, and then converted back into a collection. Would I still have the same person?
[sourcecode language=”java”]
Person before = new Person("Grace Hopper");
List<Person> people = Stream.of(before)
.collect(Collectors.toList());
Person after = people.get(0);
assertTrue(before == after); // exact same object
before.setName("Grace Murray Hopper"); // Change name using ‘before’
assertEquals("Grace Murray Hopper", after.getName()); // Same in ‘after’
[/sourcecode]
The point is, if I have a reference to Admiral Hopper before the stream operations and I didn’t map her to another object, I still have the same reference afterwards.
Using a copy constructor I can break that connection.
[sourcecode language=”java”]
people = Stream.of(before)
.map(Person::new) // use copy constructor
.collect(Collectors.toList());
after = people.get(0);
assertFalse(before == after); // different objects
assertEquals(before, after); // but equivalent
before.setName("Rear Admiral Dr. Grace Murray Hopper"); // Change using ‘before’
assertFalse(before.equals(after)); // No longer the same in ‘after’
[/sourcecode]
This time, when invoking the map method, the context is a stream of
Person instances. Therefore the
Person::new syntax invokes the constructor that takes a
Person and returns a new, but equivalent, instance. I’ve broken the connection between the
before reference and the
after reference.
(Btw, I mean no disrespect by treating Admiral Hopper as an object. I have no doubt she could still kick my a**, and she passed away in 1992.)
Varargs Constructors
The varargs constructor is invoked by the client by passing zero or more string arguments separated by commas. Inside the constructor, the
names variable is treated like
String[], a string array. The static
stream method on the
Arrays class is used to convert that into a stream, which is then turned into a single string by calling the
collect method, whose argument comes from the convenient
joining(String delimiter) method in the
Collectors class.
How does that get invoked? Java includes a
split method on
String that takes a delimiter and returns a
String array:
[sourcecode language=”java”]
String[] split(String delimiter)
[/sourcecode]
Since the variable argument list is equivalent to an array, I can use that method to invoke the varargs constructor.
[sourcecode language=”java”]
names.stream() // Stream<String>
.map(name -> name.split(" ")) // Stream<String[]>
.map(Person::new) // Stream<Person> using String... ctor
.collect(Collectors.toList());
[/sourcecode]
This time I map the strings to string arrays before invoking the constructor. Note that this is one of those times where I can’t use a method reference, because there’s no way using a method reference to supply the delimiter argument. I have to use the lambda expression instead.
Since the context after the first map is now a stream of string arrays, the
Person::new constructor reference now uses the varargs constructor. If I add a print statement to that constructor:
[sourcecode language=”java”]
System.out.println("Varargs ctor, names=" + Arrays.asList(names));
[/sourcecode]
I can then see it in action:
Varargs ctor, names=[Grace, Hopper] Varargs ctor, names=[Barbara, Liskov] Varargs ctor, names=[Ada, Lovelace] Varargs ctor, names=[Karen, Spärck, Jones]
Arrays
With constructor references, not only can you create instances, you can even create arrays. Say instead of returning a
List, I wanted to return an array of person,
Person[]. The
Stream class has a method called, naturally enough,
toArray.
[sourcecode language=”java”]
<A> A[] toArray(IntFunction<A[]> generator)
[/sourcecode]
This method uses
A to represent the generic type of the array returned containing the elements of the stream, which is created using the provided generator function. The cool part is that a constructor reference can be used for that, too.
[sourcecode language=”java”]
names.stream()
.map(Person::new) // Person constructor ref
.toArray(Person[]::new); // Person[] constructor ref
[/sourcecode]
The returned value is a
Person[] with all the stream elements now included as Person references.
Constructor references are just method references by another name, using the word new to invoke a constructor. Which constructor is determined by the context, as usual. This technique gives a lot of flexibility when processing streams. | https://kousenit.org/2017/03/ | CC-MAIN-2020-16 | refinedweb | 1,274 | 57.06 |
Fate Dice Intervals
Last month I checked my Fate dice for biases. One of the things I did was plot an interval for the 4dF outcomes (-4 through 4) we expect from a fair set of dice 99% of the time. In this post I will look at four different methods of computing those regions. While writing this post, I came upon a NIST handbook page that covers the same topic; check it out too!
As per usual, you can find the Jupyter notebook used to perform these calculations and make the plots here (rendered on Github).
Normal Approximation
One of the simplest ways of determining how often we expect an outcome to appear is to assume that the distribution of results is Gaussian.1 If the outcome has a probability P, and the dice are thrown N times, then the range of expected results is:
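Written out (this is the standard normal approximation to a binomial count, using the symbols defined just below):

$$ M_{\pm} = N P \pm z \sqrt{N P (1 - P)} $$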
Where z is the correct value for the interval (2.58 for 99%) and the two M values are the lower (for minus) and upper (for plus) bounds on the region.
Using this approximation yields values that are close to exact, with the exception that they allow negative counts for rare outcomes. The values (the negative outcomes -4 through -1 are removed, because the distribution is symmetric) are:
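As a quick check, here are the bounds for the zero outcome (probability 19/81 on 4dF, from the dice weights used in the simulation code later in the post) over the 522 throws used in the post's example:

```python
from math import sqrt

def normal_interval(n, p, z=2.58):
    """Normal-approximation bounds on the count of an outcome in n throws."""
    mean = n * p
    half = z * sqrt(n * p * (1 - p))
    return mean - half, mean + half

lo, hi = normal_interval(522, 19 / 81)
print(round(lo, 1), round(hi, 1))  # roughly 97.5 and 147.4
```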
Wilson Score Interval
The Wilson score interval gives a better result than the normal approximation, but at the expense of a slightly more complicated formula. Unlike the normal approximation, the Wilson interval is asymmetric and can not go below 0. It is defined as:
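One common way to write it, scaled to counts so it is directly comparable with the normal approximation (this exact form is an assumption on my part; the post's notebook has the authoritative version):

$$ M_{\pm} = \frac{N}{1 + z^{2}/N}\left( P + \frac{z^{2}}{2N} \pm z \sqrt{\frac{P(1-P)}{N} + \frac{z^{2}}{4N^{2}}} \right) $$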
Plugging in the numbers yields:
Monte Carlo Simulation
The previous two methods were quick to calculate, but only returned approximate results. One way to determine the exact intervals is to simulate rolling the dice. This is often easy to implement, but is slow due to the high trial count required.
The following code (which can be found in the notebook) will “roll” 4dF N times per trial, and perform 10,000 trials:
from random import choices

import numpy as np


def simulate_rolls(n, trials=10000):
    """
    Simulate rolling 4dF N times and calculate the expectation intervals.
    """
    # The possible values we can select, the weights for each,
    # and a histogram binning to let us count them quickly
    values = [-4, -3, -2, -1, 0, 1, 2, 3, 4]
    bins = [-4.5, -3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5, 4.5]
    weights = [1, 4, 10, 16, 19, 16, 10, 4, 1]

    results = [[], [], [], [], [], [], [], [], []]

    # Perform a trial
    for _ in range(trials):
        # We select all n rolls "at once" using a weighted choice function
        rolls = choices(values, weights=weights, k=n)
        counts = np.histogram(rolls, bins=bins)[0]

        # Add the results to the global result
        for i, count in enumerate(counts):
            results[i].append(count)

    return results
After generating the trials, the intervals are computed by looking at the 0.5 percentile and the 99.5 percentile for each possible 4dF outcome. The results are:
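That percentile step can be sketched as follows, assuming `results` is the per-outcome list of simulated counts returned by `simulate_rolls` above:

```python
import numpy as np

def percentile_intervals(results):
    """99% interval per outcome: the 0.5th and 99.5th percentile of counts."""
    return [(np.percentile(r, 0.5), np.percentile(r, 99.5)) for r in results]
```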
Binomial Probability
Simulating the rolls is guaranteed to produce the right result, but it takes a lot of time to run. For a simple case like rolling dice, we can calculate the intervals exactly using a little knowledge of probability. This is the method I used in my previous post because it is fast and exact.
The interval indicates the expected results a fair set of dice would roll 99% of the time, but that is exactly what probabilities gives as well! The interval is therefore just the set of rolls that make up 99% of the cumulative probability, centered around the most likely value for each outcome. Equivalently, we can find the set of rolls that make up the very unlikely 1%, which will come (approximately equally) from both tails of the distribution. That is, integrate from the low side (which is a sum, since the bins are discrete) until the cumulative probability is 0.5%, and then repeat for the high side. The stopping points are the correct lower and upper bounds.
Here is an example image showing this process for the probability distribution of the number of zeroes rolled if the dice are thrown 522 times. The red parts of the histogram are the results of the two integrals, each containing about 0.5% of the probability, and the grey lines mark the lower and upper bounds at 98 and 148.
Each of the bins in the plot has probability given by:
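That probability is the binomial probability mass function:

$$ \Pr(M) = \binom{N}{M} P^{M} (1 - P)^{N - M} $$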
Where P is the probability of rolling the outcome on one roll and M is the number of times the outcome happens in N throws.
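This tail-integration can be sketched with only the standard library. The 19/81 below is the probability of rolling 0 on 4dF (from the weights in the simulation code), and 522 matches the example above:

```python
from math import comb

def binom_pmf(m, n, p):
    """Probability of exactly m occurrences in n throws."""
    return comb(n, m) * p**m * (1 - p)**(n - m)

def exact_interval(n, p, coverage=0.99):
    """Trim roughly half the leftover probability off each tail."""
    tail = (1 - coverage) / 2
    lo, cum = 0, 0.0
    while cum + binom_pmf(lo, n, p) <= tail:  # integrate up from the low side
        cum += binom_pmf(lo, n, p)
        lo += 1
    hi, cum = n, 0.0
    while cum + binom_pmf(hi, n, p) <= tail:  # integrate down from the high side
        cum += binom_pmf(hi, n, p)
        hi -= 1
    return lo, hi

# Zeroes when rolling 4dF 522 times: P(0) = 19/81
print(exact_interval(522, 19 / 81))  # bounds near the 98 and 148 shown in the plot
```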
Applying this process to all the possible outcomes gives the following results:
Comparison
Tables are great when you want exact numbers, but it is much easier to compare the various methods using a plot. The following plot shows the predictions from each of the four methods for the outcomes 0 through 4. The negative outcomes (-4 through -1) are omitted because the distributions are symmetric.
The Monte Carlo method and the estimate using the binomial probability agree exactly, as expected. The naive variance method agrees well for the first few values, but begins predicting lower intervals as the value increases, finally ending with a nonsense negative count. The Wilson interval is consistently higher than the other values, and this discrepancy increases as the value of the roll increases.
The central limit theorem can be used to justify this approximation, but as you can see in the plot in the binomial section, even for large N the distribution is not a perfect Gaussian. ↩ | https://alexgude.com/blog/fate-dice-intervals/ | CC-MAIN-2019-35 | refinedweb | 920 | 57.91 |
I am trying to make a program for my AP Computer Science A Class. I wrote the following code that is supposed to allow a user to enter two numbers and then have them guess a number inbetween that number range.
So far, it works 90% of the time, but once in a while it enters numbers either higher or lower than the set number range.
Can somebody help me figure out how to fix this problem?
Thank you.
import java.util.Scanner; public class GuessingGameV1 { public static void main(String[] args) { int rNum = 0; int counter = 0; int guess = 0; Scanner in = new Scanner(System.in); System.out.print("Choose the first number in the range: "); int firstNumber = in.nextInt(); System.out.print("Choose the last number in the range: "); int lastNumber = in.nextInt(); for(int i=firstNumber; i < lastNumber; i++ ){ rNum = (int)(Math.random()*lastNumber); } while (guess != rNum) { System.out.print("Enter your guess: "); guess = in.nextInt(); if( rNum < guess ) { counter ++; System.out.println("Guess too high."); } else if( rNum == guess ) { System.out.println("You're right!"); System.out.println("It took you " + counter + " guesses!"); } else { counter ++; System.out.println("Guess too low."); } } } } | https://www.daniweb.com/programming/software-development/threads/491309/java-number-range | CC-MAIN-2017-39 | refinedweb | 196 | 61.22 |
expo-font allows loading fonts from the web and using them in React Native components. See more detailed usage information in the Using Custom Fonts guide.
expo install expo-font
To use this in a bare React Native app, follow the installation instructions.
import * as Font from 'expo-font';
@font-face block in a shared style sheet for fonts. No CSS is needed to use this method.
fontFamilys to FontSources. After loading the font you can use the key in the fontFamily style prop of a Text element.
await loadAsync({
  // Load a font `Montserrat` from a static resource
  Montserrat: require('./assets/fonts/Montserrat.ttf'),

  // Any string can be used as the fontFamily name. Here we use an object to provide more control
  'Montserrat-SemiBold': {
    uri: require('./assets/fonts/Montserrat-SemiBold.ttf'),
    fontDisplay: FontDisplay.FALLBACK,
  },
});

// Use the font with the fontFamily property
return <Text style={{ fontFamily: 'Montserrat' }}>Montserrat</Text>;
try/catch/finally to ensure the app continues if the font fails to load.
fontFamily has finished loading.
true if the font has fully loaded.
fontFamily is still being loaded.
true if the font is still loading.
FontDisplay.AUTO. Even though setting the
fontDisplay does nothing on native platforms, the default behavior emulates
FontDisplay.SWAP on flagship devices like iOS, Samsung, Pixel, etc. Default functionality varies on OnePlus devices. In the browser this value is set in the generated
@font-face CSS block and not as a style property, meaning you cannot dynamically change this value based on the element it's used in.
AUTO: (Default on web) The font display strategy is defined by the user agent or platform. This generally defaults to the text being invisible until the font is loaded. Good for buttons or banners that require a specific treatment.
SWAP: Fallback text is rendered immediately with a default font while the desired font is loaded. This is good for making the content appear to load instantly and is usally preferred.
BLOCK: The text will be invisible until the font has loaded. If the font fails to load then nothing will appear - it's best to turn this off when debugging missing text.
FALLBACK: Splits the behavior between
SWAPand
BLOCK. There will be a 100ms timeout where the text with a custom font is invisible, after that the text will either swap to the styled text or it'll show the unstyled text and continue to load the custom font. This is good for buttons that need a custom font but should also be quickly available to screen-readers.
OPTIONAL: This works almost identically to
FALLBACK, the only difference is that the browser will decide to load the font based on slow connection speed or critical resource demand.
enum FontDisplay { AUTO = 'auto', BLOCK = 'block', SWAP = 'swap', FALLBACK = 'fallback', OPTIONAL = 'optional', } await loadAsync({ roboto: { uri: require('./roboto.ttf'), // Only effects web display: FontDisplay.SWAP, }, });
loadAsync. Optionally on web you can define a
displayvalue which sets the
font-displayproperty for a given typeface in the browser.
type FontResource = { uri: string | number; display?: FontDisplay; };
loadAsync()function. A font source can be a URI, a module ID, or an Expo Asset.
type FontSource = string | number | Asset | FontResource; | https://docs.expo.io/versions/v36.0.0/sdk/font/ | CC-MAIN-2020-16 | refinedweb | 509 | 57.98 |
The following form allows you to view linux man pages.
Standard C Library (libc, -lc)
#include <signal.h>
int
sighold(int sig);
int
sigignore(int sig);
int
xsi_sigpause(int sigmask);
int
sigrelse(int sig);
void (*)(int)
sigset(int, void (*disp)(int));
int
sigpause(int sigmask); han-
dler; dispo-
sition remains unchanged. If sigset() is used, and disp is not equal to
SIG_HOLD, sig is removed from the signal mask of the calling process.
The sighold() function adds sig to the signal mask of the calling pro-
cess..
kill(2), sigaction(2), sigblock(2), sigprocmask(2), sigsuspend(2),
sigvec(2).
The sigpause() function appeared in 4.2BSD and has been deprecated. All
other functions appeared in FreeBSD 8.1 and were deprecated before being
implemented.
webmaster@linuxguruz.com | http://www.linuxguruz.com/man-pages/sigrelse/ | CC-MAIN-2018-05 | refinedweb | 127 | 68.87 |
pypy / pypy / doc / release-1.3.0.rst
PyPy 1.3: Stabilization
Hello.
We're please to announce release of PyPy 1.3. This release has two major improvements. First of all, we stabilized the JIT compiler since 1.2 release, answered user issues, fixed bugs, and generally improved speed.
We're also pleased to announce alpha support for loading CPython extension modules written in C. While the main purpose of this release is increased stability, this feature is in alpha stage and it is not yet suited for production environments.
Highlights of this release
We introduced support for CPython extension modules written in C. As of now, this support is in alpha, and it's very unlikely unaltered C extensions will work out of the box, due to missing functions or refcounting details. The support is disable by default, so you have to do:
import cpyext
before trying to import any .so file. Also, libraries are source-compatible and not binary-compatible. That means you need to recompile binaries, using for example:
python setup.py build
Details may vary, depending on your build system. Make sure you include the above line at the beginning of setup.py or put it in your PYTHONSTARTUP.
This is alpha feature. It'll likely segfault. You have been warned!
JIT bugfixes. A lot of bugs reported for the JIT have been fixed, and its stability greatly improved since 1.2 release.
Various small improvements have been added to the JIT code, as well as a great speedup of compiling time.
Cheers, Maciej Fijalkowski, Armin Rigo, Alex Gaynor, Amaury Forgeot d'Arc and the PyPy team | https://bitbucket.org/pypy/pypy/src/0f1e91da6cb2/pypy/doc/release-1.3.0.rst?at=default | CC-MAIN-2014-15 | refinedweb | 272 | 68.67 |
102297/how-to-prompt-for-user-input-and-read-command-line-arguments
To read user input you can try the cmd module for easily creating a mini-command line interpreter (with help texts and autocompletion) and raw_input (input for Python 3+) for reading a line of text from the user.
text = raw_input("prompt") # Python 2
text = input("prompt") # Python 3
Command-line inputs are in sys.argv. Try this in your script:
import sys
print (sys.argv)
There are two modules for parsing command-line options: (deprecated since Python 2.7, use argparse instead) and get opt. If you just want to input files to your script, behold the power of file input
The canonical solution in the standard library ...READ MORE
Please use this code.
if len(sys.argv) == 2:
first_log ...READ MORE
Solution is add parameter values to pivot, then add reset_index for column ...READ MORE
suppose you have a string with a ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
Hi, @There,
Try this:
Rollback pip to an older ...READ MORE
Hi @There,
As the path is fixed, use ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/102297/how-to-prompt-for-user-input-and-read-command-line-arguments | CC-MAIN-2021-21 | refinedweb | 214 | 68.57 |
Python code in one module gains access to the code in another module by the process of importing it. The import statement is the most common way of invoking the import machinery, but it is not the only way. Functions such as importlib.import_module() and built-in __import__() can also be used to invoke the import machinery.
The import statement combines two operations; loader, which the import machinery then invokes to load the module and create the corresponding module object.). If __file__ is set, it may also be appropriate to set the __cached__ attribute which is the path to any compiled version of the code (e.g. byte-compiled file). The file does not need to exist to set this attribute; the path can simply point to whether the compiled file would exist (see PEP 3147).
- __loader__ and that loader has a module_repr() method, call it with a single argument, which is the module object. The value returned is used as the module’s repr.
- If an exception occurs in module_repr(), the exception is caught and discarded, and the calculation of the module’s repr continues as if module_repr() did not exist.
- namespace loader automatically sets __path__ correctly for the namespace package.
As mentioned previously, Python comes with several default meta path finders. One of these, called the path based finder,_module()_module() method as described previously. When the path argument to find_module()_module() loader, which is then used to load the module.
In order to support imports of modules and initialized packages and also to contribute portions to namespace packages, path entry finders must implement the find_loader() method.. Instead path entry finders should implement the find_loader() method as described above. If it exists on_module().
Footnotes | http://www.wingware.com/psupport/python-manual/3.3/reference/import.html | CC-MAIN-2016-40 | refinedweb | 286 | 54.93 |
You only need two Text widgets, placed together, with maybe a separator between them so the user could adjust widths.
from time to time, I tend to write such kind of tasks, but I have different approach to usage of Tk from within Perl.
if you wish, I can extract proper GUI building parts from my recent GUI program to show how your could do the task.
as for another part of your question - efficiency of Text widgets - it is rather efficient.
No worries about editing several thousand of lines in each.
Regards,
Vadim
One thing to investigate is wordwrapping. My app just uses the default behavior where text just disappears off the screen if column width is too small. I seem to remember a post that mentioned some trouble getting word wrap to work - so I'd definitely investigate that issue with a little prototype first - make sure your major feature works before getting too far into this.
You should also be aware that the data storage is a bit strange. TableMatrix uses a hash with keys like: "0,3" or "45,14" to indicate row 0, col 3, etc. That means that inserting a row is a hassle. I didn't do any inserts, but I did do sorts. To sort, on some combination of columns, I wound up just copying to an AoA, sort and then copy back. As it turned out even with 80,000 hash keys, the performance of this was noticeable, but "ok". I don't know what to recommend, but I'd spend some more time looking for something that would allow easier insert operations.
There is a number of tk widgets to deal with task, and tablelist widget is one of them (another alternatives could be treeview from tile package and tktable).
use strict;
use Tcl::Tk;
my $int = new Tcl::Tk;
my $mw = $int->mainwindow;
# the tablelist widget is described at
#.
+html
$int->pkg_require('tablelist');
$int->Eval('namespace import tablelist::*');
my $w_t = $mw->Scrolled('tablelist',-columns =>[0, "First Column", 0,
+"Another column"],
-stretch =>'all', -background =>'white')->pack(-fill=>'both',
+ -expand=>1, -side=>'top');
$w_t->columnconfigure(0, -editable=>1);
$w_t->columnconfigure(1, -editable=>1);
$w_t->insert('end', ["row $_;", "$_ bla bla"]) for 'a'..'zzz';
$int->MainLoop;
. | http://www.perlmonks.org/?node_id=886535 | CC-MAIN-2016-18 | refinedweb | 375 | 69.01 |
You can subscribe to this list here.
Showing
6
results of 6
okay... i've googled for about an hour and still don't seem my answer...
I am using "arrows" to plot a time line that has
|-------------|
This kind of style. But since the time line is long i would also like
to include labels in my data that would show the start -duration - end
of my data in better precision that i can eyeball it looking at the
xlabel...
so that i might get something that looks like :
1.121 (2.1) 3.221
|-----------------------------|
# ------------------------------[ snip ]
---------------------------------
#! /usr/bin/env python
"""arrows.py -- a plot of the durations
Usage: $ python arrows.py data.txt
"""
import sys
import Gnuplot
def main():
# First let's get our data form an external file and stuff it in a
list
[filename] = sys.argv[1:]
all = []
for l in open(filename).readlines():
l = l.strip().split()
all.append( [ float(l[0]), float(l[1]), float(l[2]), int(l[3]) ] )
# Split the data into four datasets by stream
# Each data set will be drawn in 'vector' style with a different
color
allData = [] # This will contain the four datasets
for stream in [1, 2, 3, 4]:
# Select the raw data for the stream
rawData = [ item for item in all if item[3] == stream ]
# Make a Gnuplot.Data to contain the raw data
data = Gnuplot.Data(rawData,
using=(1, 3, 2, '(0)'), # This gives 1-based index into the
# data for x, y, xdelta, ydelta
# ydelta is a literal 0
with='vectors arrowstyle %s' % stream) # This sets the
style of the dataset styles are defined below
#title='Voice %s' % stream) # This names the dataset for
the legend
allData.append(data)
# Set up general plot parameters
g = Gnuplot.Gnuplot(debug=1)
g.title('Overview') # (optional)
g.xlabel('Time')
g.ylabel('Event')
g('set grid')
#g('set xtics (10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120,
130, 140)')
g('set ytics (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)')
# These lines create four vector styles that have bars at each end
# The only difference between the styles is the line type they use,
# which determines the color
# Shamelessly cribbed from the gnuplot arrowstyle demo
g('set style line 1 lt 1 lw 2') # Set a line style for the vectors
g('set style line 2 lt 2 lw 2')
g('set style line 3 lt 3 lw 2')
g('set style line 4 lt 4 lw 2')
# Set an arrow style for the vectors
g('set style arrow 1 heads size screen 0.008,90 ls 1')
g('set style arrow 2 heads size screen 0.008,90 ls 2')
g('set style arrow 3 heads size screen 0.008,90 ls 3')
g('set style arrow 4 heads size screen 0.008,90 ls 4')
#g('set key outside box') # Include a legend; put it outside the
graph
# This actually does the plot
# The * makes it treat the elements of allData as individual
function arguments
# It is the same as g.plot(allData[0], allData[1], allData[2],
allData[3]
g.plot(*allData)
if __name__ == '__main__':
main()
# data here:
1.0 52.98151020408163 1 1
53.98151020408163 360.0 2 2
153.98151020408164 100.0 3 3
254.61807619047627 60.0 4 4
316.94841360544211 60.0 5 1
378.66877120181414 30.0 6 2
409.28659297052155 30.0 7 3
440.74881179138322 24.0 8 4
465.75744104308387 60.0 9 1
496.08856235827682 60.0 10 2
557.0976548752833 60.0 11 3
Hi Michael,
While as an occasional user I cannot offer to be maintainer,
I can offer to help a new maintainer test numpy support.
Also, I suggest posting your query to the SciPy list,
as my impression is that a numer of Gnuplot.py users
lurk there.
Cheers,
Alan Isaac
Hello,
It should be obvious to everybody by now that I haven't had the time or
energy to invest in Gnuplot.py for quite some time now. When I
established the project, I used it almost daily. But since I started my
current job five years ago, I haven't used Gnuplot.py more than a
handful of times. And now that I am involved in another open-source
project (cvs2svn), I want to spend my limited available time on that
project.
For a long time this wasn't much of a problem, because Gnuplot.py did
most of the things people needed and didn't require much maintenance.
(Not that I can't think of nice new features to add, but...)
But now there are a few things that really need to be done:
- Support for numarray. This is a relatively small job but very important.
- Get Gnuplot.py listed on (The Python Cheese
- Updated the documentation and website to point at the new Subversion
repository rather than the old CVS one.
- Look over and possibly integrate some patches that have been submitted
to the mailing list over the past months and years.
- Make a new release. It's been ages, and a few things have changed.
Sooooo.....
Is there anybody out there who can program in Python and would be
interested in taking over the maintenance of Gnuplot.py? I'd be happy
to help you get up to speed and answer occasional questions.
Yours,
Michael
Hello.
I've been having trouble using multiple replot commands, ie:
g.plot(Gnuplot.Data(dataset[0]))
for i in xrange(1,len(dataset)):
g.replot(Gnuplot.Data(dataset[i]))
After about the 40th row in dataset, I start getting errors like this:
gnuplot> 1961992.5 0.0 2.47496008873
^
line 31995: invalid command
gnuplot> e
(if inline == 1) and sometimes about bad filenames (seems to be system
dependent) if inline = 0. Eventually the plot seems to come out correctly,
but I can't verify that all the rows are actually being plotted. Is this a
bug, or am I going about this all wrong? I want to overlay a bunch (~100)
of different lines on the same plot, and then possibly an averaged line in
dark black on top of that.
I can put sample code and plots up somewhere for reference...
Oh, this is on Debian Stable (but the bad filename errors tend to crop up on
a Gentoo box, which I don't have administrative access to).
Thanks for your help.
John
--
*************************
John Parejko
Department of Physics and Astronomy
215 895-2786
Drexel University
Philadelphia, PA
**************************
Adriaan . wrote:
>?
This has been discussed on the mailing list a couple of times before.
Please search the archive.
The bottom line is that there is no built-in support in Gnuplot.py for
the "fit" function, but it shouldn't be much effort to invoke it
explicitly and read the output back into Python.
Michael
Hi all,?
Cheers,
Adriaan | http://sourceforge.net/p/gnuplot-py/mailman/gnuplot-py-users/?viewmonth=200604 | CC-MAIN-2015-06 | refinedweb | 1,144 | 73.68 |
i need to creat a themometer that can show the temperature in degrees C and in Degrees F depending on what the user puts in.
i am very new with clases and i am wondering if i am on the right tracks with this begining code?
if anyone can give me a few pointers that will help then it would be very helpfull.if anyone can give me a few pointers that will help then it would be very helpfull.Code://CLASSES #include <iostream> using namespace std; class C_Themometer { private: float mDegreesC, mDegreesF; public: int SetDegreesC(float DegreesC, float DegreesF) { mDegreesC = DegreesC; mDegreesF = DegreesF; } void SetDegreesF() { } int GetDegreesC(float& DegreesC, float& DegreesF) { cout <<"Please enter a temperature"; return mDegreesC = (9.0/5.0 * mDegreesC) + (32.0); } void GetDegreesF() { } }; void main() { C_Themometer Deg1, Deg2; int mDegreesC; int mDegreesF; system("pause"); } | http://cboard.cprogramming.com/cplusplus-programming/84869-thermometer-class.html | CC-MAIN-2014-41 | refinedweb | 139 | 61.26 |
Wing Tips: Renaming Symbols and Attributes in Python Code with Wing Pro's Refactoring Tool
In an earlier Wing Tip we looked at using multiple selections as a way to edit several parts of code at once. Wing Pro's Refactoring tool provides another more focused way to edit multiple parts of code at once, even if the relevant code is found in multiple files.
What is Refactoring Anyway?
Refactoring is the process of changing code in a way that does not alter its functionality, in order to better organize the code or make it easier to read and maintain. A round of refactoring is often appropriate before working on code that has become a bit crufty over time.
IDEs like Wing Pro can help with this process by automating some of the operations commonly made during refactoring, including renaming symbols or attributes, moving symbols around, collecting code into a new function or method, and so forth.
Renaming Symbols and Attributes
Rename refactoring is often used to make code more readable by selecting clearer or more appropriate names. It may also be used to change a method on a class from __Private form, which in Python can only be accessed from code in the class itself, to a form that can be called from code outside of the class. For example:
Renaming method "__SetPosition" to "_SetPosition" with refactoring, so it can be used from outside of the class
Renaming Modules and Packages
Rename refactoring may also be used on whole modules or packages, by renaming any use of the module or package name. Wing Pro will rename the associated disk files and directories and track the change in the active revision control system, if any.
Renaming module "urlutils" to "urlops" with refactoring
Like-Named Symbols and Symbol Identity
Wing Pro's rename refactoring uses static source analysis of your code to determine which symbols are actually the same symbol. For example, in the following code there are two distinct symbols called name, one in the scope show_name and another in the scope process_name:
def show_name(name=None): if name is not None: print(name) def process_name(name): show = enter_name(name) if show: show_name(name=name)
Renaming name in the first function should only affect that scope, and any code that is passing the argument by name, as in the following example:
Refactoring to rename only one of two distinct but like-named symbols "name"
Uncertain Symbol Identity
In some cases, Wing Pro cannot determine for certain that a like-named symbol is actually the same symbol as the one you are renaming. In the following example, a missing import statement prevents Wing from determining that the instance of name in the file testanother.py is definitely the same symbol:
Renaming "name" finds an uncertain match, where a missing import prevents analysis from establishing the symbol's identity
When this occurs, Wing marks the potential match with a ? and won't rename it unless you check the checkbox next to it. Items can be visited in the editor by clicking on them in the Refactoring tool.
If you find Wing is failing to identify many symbols with certainty, you may want to check that your configured Python Path in Project Properties is allowing Wing to trace down the modules that you import in your code. You should see code warning indicators on imports that cannot be resolved.
In some other cases, adding type hints may also help Wing's static analysis of your code.
Wing Pro also provides a number of other refactoring operations that we'll eventually go through here in Wing Tips. For more information, take a look at Refactoring in the product manual.
That's it for now! We'll be back next week with more Wing Tips for Wing Python IDE.
Share this article: | https://wingware.com/blog/refactor-rename | CC-MAIN-2022-33 | refinedweb | 637 | 53.85 |
Details
Description
Distributed IDF is a valuable enhancement for distributed search across non-uniform shards. This issue tracks the proposed implementation of an API to support this functionality in Solr.
Issue Links
- is related to
-
-
- relates to
LUCENE-3555 Add support for distributed stats
- Closed
Activity
What about this approach: ?
I'm not sure what approach you are referring to. Following the terminology in that thread, this implementation follows the approach where there is a single merged big idf map at the master, and it's sent out to slaves on each query. However, when exactly this merging and sending happens is implementation-specific - in the ExactDFSource it happens on every query, but I hope the API can support other scenarios as well.
I didn't look a the patch, but from your comments it looks like you already have that "1 merged big idf map", which is really what I was aiming at, so that's good!
I was just thinking that this map (file) would be periodically updated and pushed to slaves, so that slaves can compute the global IDF locally instead of any kind of extra requests.
I believe the API that I propose would support such implementation as well. Please note that it's usually not feasible to compute and distribute the complete IDF table for all terms - you would have to replicate a union of all term dictionaries across the cluster. In practice, you limit the amount of information by various means, e.g. only distributing data related to the current request (this implementation) or reducing the frequency of updates (e.g. LRU caching), or approximating global DF with a constant for frequent terms (where the contribution of their IDF to the score would be negligible anyway).
Wich should be the value of the parameter shard.purpose to enable or disable the exact version of global IDF?
Shard.purpose is set by a concrete implementation of the DFSource, so I guess your question is "how to turn ExactDFSource on/off"? If that's the case, then put this in your solrconfig.xml:
<globalIDF class="org.apache.solr.search.ExactDFCache"/>
Updated patch, contains also:
- LRU-based cache that optimizes requests using cached values of docFreq for known terms
- unit tests
Was looking into this a little offline with Mark, who noticed that some queries were not being rewritten, and would thus throw an exception during weighting.
It looks like the issue is this: rewrite() doesn't work for function queries (there is no propagation mechanism to go through value sources). This is a problem when real queries are embedded in function queries.
Solr Function queries do have a mechanism to weight (via ValueSource.createWeight()).
QueryValueSource does "Weight w = q.weight(searcher);" and that implementation of weight
calls "Query query = searcher.rewrite(this);"
This patch calls rewrite explicitly (which does nothing for embedded queries), and then when using the DFSource implementation of searcher, rewrite does nothing, and hence the embedded query is never rewritten and the subsequent createWeight() throws an exception.
Rewrite not working through function query is not the end of the problems either... there is also stuff like extractTerms.
There is also the issue of Lucene changing rapidly... and the difficulty of adding new methods to ValueSource and making sure that all implementations correctly propagate them through to sub ValueSources. Perhaps one idea is to use a visitor pattern to decouple tree traversal with the operations being performed.
My solr version is 1.4. I patched it but failed.
SolrCache<String, Integer> cache = perShardCache.get(shard); it suggests that "The type SolrCache is not generic; it cannot be parameterized with arguments <String, Integer>"
The SolrCache is a interface: public interface SolrCache extends SolrInfoMBean
patching file src/common/org/apache/solr/common/params/ShardParams.java
patching file src/java/org/apache/solr/core/SolrConfig.java
Hunk #1 succeeded at 30 with fuzz 2 (offset 2 lines).
Hunk #2 FAILED at 197.
1 out of 2 hunks FAILED – saving rejects to file src/java/org/apache/solr/core/
SolrConfig.java.rej
patching file src/java/org/apache/solr/core/SolrCore.java
Hunk #5 succeeded at 821 (offset 3 lines).
patching file src/java/org/apache/solr/handler/component/QueryComponent.java
Hunk #1 succeeded at 40 with fuzz 2 (offset -2 lines).
Hunk #6 succeeded at 302 (offset 13 lines).
Hunk #7 succeeded at 324 with fuzz 2 (offset 12 lines).
Hunk #8 succeeded at 343 (offset 21 lines).
Hunk #9 succeeded at 367 (offset 21 lines).
Hunk #10 succeeded at 423 (offset 28 lines).
patching file src/java/org/apache/solr/handler/component/SearchHandler.java
patching file src/java/org/apache/solr/handler/component/ShardRequest.java
Hunk #1 FAILED at 37.
1 out of 1 hunk FAILED – saving rejects to file src/java/org/apache/solr/handle
r/component/ShardRequest.java.rej
patching file src/java/org/apache/solr/search/DFCache.java
patching file src/java/org/apache/solr/search/DFSource.java
patching file src/java/org/apache/solr/search/DefaultDFCache.java
patching file src/java/org/apache/solr/search/ExactDFCache.java
patching file src/java/org/apache/solr/search/LRUDFCache.java
patching file src/java/org/apache/solr/search/SolrIndexSearcher.java
Hunk #1 succeeded at 77 (offset 3 lines).
Hunk #2 succeeded at 149 (offset 3 lines).
Hunk #3 succeeded at 699 (offset 46 lines).
Hunk #4 succeeded at 927 (offset 59 lines).
Hunk #5 succeeded at 1041 (offset 59 lines).
Hunk #6 succeeded at 1190 with fuzz 1 (offset 180 lines).
Hunk #7 FAILED at 1276.
Hunk #8 FAILED at 1311.
Hunk #9 succeeded at 1608 (offset 104 lines).
Hunk #10 succeeded at 1716 (offset 113 lines).
Hunk #11 succeeded at 1774 (offset 113 lines).
2 out of 11 hunks FAILED – saving rejects to file src/java/org/apache/solr/sear
ch/SolrIndexSearcher.java.rej
patching file src/java/org/apache/solr/util/SolrPluginUtils.java
can't find file to patch at input line 1206
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--------------------------
Regarding the comment "Perhaps one idea is to use a visitor pattern to decouple tree traversal with the operations being performed." can you please explain where to implement the Listener/visitor. I had a quick look at the patch and it seems to me that the main functionality is in trunk/src/java/org/apache/solr/search/SolrIndexSearcher.java and the rest is more caching concerns, right?
Recently I updated this patch to trunk and got rid of the threadlocal usage and Query rewriting that was the reason we had to pull this from trunk long ago - then I attempted to override stats on IndexSearcher with global stats - this is when I realized that had no affect on scoring anymore - this will now be addressed
LUCENE-3555. Unfortunately, I didn't pay attention and lost that code. It's unfortunate, because it would have been a nice head start on this issue - I think we may want to make other changes/improvements, but would have been a start with something working. It was a half pain to do since the patch has to be manually applied, but perhaps doing it a second time is faster...
Correction: i got rid of the rewrite that was added for the multi searcher type behavior - I hadn't solved the issue of rewrite to get the terms to retrieve stats for - that patch was not yet going to work with multiterm queries.
Although, actually I'm not even sure if that rewrite is really a problem - I almost don't think it will tickle the same issue as the rewrite that was happening before the search. I didn't have a chance to test it or look into it in depth or anything yet though.
I found this work hidden away in my eclipse workspace! It still has the thread local stuff - either I had only thought of what I was going to do to remove it, or this was not the latest work, but either way, it starts us from a trunk applyable patch, which is much better. There is still a fair amount to do at minimum to switch to using the new scoring stats. I started some really simple moves towards this (super baby step) and so things dont compile at the moment. Patch should be clean though.
Patch updated to trunk (rev. 1232110). I refactored the code and changed the names of new classes to better reflect the fact that we work with complex stats and not primitive freqs. Included unit tests are passing.
Is this something that can be added to branch_3x? With high fuzz and ignore whitespace, the patch applies, but then fails to compile. It also fails to compile when I set fuzz to zero, pay attention to whitespace, and manually fix the patch rejects. I couldn't figure out how to fix the problems.
Is this something that can be added to branch_3x?
Not without porting - Lucene / Solr API-s have changed significantly, and this patch uses some low-level API-s that are different between trunk and 3x.
Haven't had time to look this over that closely, but this did jump out at me:
+public class CollectionStats {
+ public String field;
+ public int maxDoc;
+ public int docCount;
Shouldn't we be using longs here so we can support more than 2B docs?
Yeah, I was curious about this too. However, this is how CollectionStatistics is defined in Lucene, so it's something that we have to change in Lucene too.
This is a diff from my best approximation of applying the trunk patch to 3x. It doesn't compile, but it will probably save someone some time.
However, this is how CollectionStatistics is defined in Lucene, so it's something that we have to change in Lucene too.
TermStatistics too. Lets open a separate issue for this.
Patch updated to use long types, and properly handle -1's in freqs.
Thanks Andrzej: I think it will be nice that all of lucene's scoring algorithms can work in distributed mode.
Just one question about the patch: in StatsUtil I can't tell if termFromString matches termToString?
termToString seems to base64 encode the term text (a good idea, since terms can be binary), but I don't
see the corresponding decode in termFromString (there is an XXX: comment though).
Hmm, indeed... I must have switched to toString() for debugging (its easier to eyeball an ascii string than a base64 string
). This should use base64 throughout. I'll prepare a patch shortly.
(BTW, I'm aware that passing around blobs of base64 inside SolrParams is ugly. I'm open to suggestions how to handle this better).
(BTW, I'm aware that passing around blobs of base64 inside SolrParams is ugly. I'm open to suggestions how to handle this better).
I'd prefer non-base64 at the Solr transport level (e.g. termStats=how,now,brown,cow). It will be both smaller, and much easier to debug other things.
Although Lucene can technically index arbitrary binary now, Solr does not use that anywhere (and won't for 4.0). It would take a good amount of infrastructure work all over to truly allow that. If/when we allow arbitrary binary terms, it should be relatively easy to extend the syntax we pick today to allow selectively base64 encoded terms.
There are already a number of places in Solr where we use StrUtil.join (a comma separated list of strings) to specify a list of terms (both in distrib faceting and distrib search for example).
Although Lucene can technically index arbitrary binary now, Solr does not use that anywhere (and won't for 4.0).
Thats not actually true. Collation uses it already.
Thats not actually true. Collation uses it already.
Hmmm, that's normally just for sorting though. I wonder if that works with distributed search today?
Anyway, we have a schema - that can allow us to do what makes sense depending on the field (i.e. only use base64 or \x?? for fields where there will be non-character terms)
Its also used for locale-sensitive range queries (and of course termquery etc works too, but thats not interesting).
\x or %xx escaping could be ok, I guess - it's safe, and in most cases it's still readable, unlike base64.
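A sketch of how such a %xx scheme could look - purely illustrative, not the encoding Solr ended up with, and the class/method names here are made up: printable ASCII passes through untouched (so common terms stay readable in logs), while the list separator, the escape character itself, and any non-printable byte become %XX, which keeps arbitrary binary terms safe in a comma-separated list.

```java
import java.nio.charset.StandardCharsets;

public class TermEscape {
    // Escape a term so it can travel safely in a comma-separated list:
    // printable ASCII except ',' and '%' passes through; everything else
    // becomes %XX per byte (so arbitrary binary terms survive).
    static String escape(byte[] term) {
        StringBuilder sb = new StringBuilder();
        for (byte b : term) {
            int c = b & 0xFF;
            if (c > 0x20 && c < 0x7F && c != ',' && c != '%') {
                sb.append((char) c);
            } else {
                sb.append('%').append(String.format("%02X", c));
            }
        }
        return sb.toString();
    }

    static byte[] unescape(String s) {
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '%') {
                out.write(Integer.parseInt(s.substring(i + 1, i + 3), 16));
                i += 2;
            } else {
                out.write(c);
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        String term = "brown,cow";
        String esc = escape(term.getBytes(StandardCharsets.UTF_8));
        System.out.println(esc); // brown%2Ccow - still mostly readable
        System.out.println(new String(unescape(esc), StandardCharsets.UTF_8).equals(term)); // true
    }
}
```

The round trip is lossless, which is the whole point: debuggability of base-10/ASCII terms without giving up on binary terms later.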
Its also used for locale-sensitive range queries
Given that range queries (and other multi-term queries) are constant scoring and may contain many terms, hopefully we avoid requesting term stats for these?
hopefully we avoid requesting term stats for these?
There is no provision for this yet in the current patch.
There is no provision for this yet in the current patch.
There is nothing different from a MTQ generated BQ than a huge BQ a solr user submits.
In my opinion instead of saying "screw scoring certain types of queries", this stuff should
be done by InExact implementations (and maybe that should be the default, fine). e.g. a nice
heuristic could look at the local stats and say: sure there are 100 terms but 50 are low-freq,
lets assume additive constant C for those, batch the other terms into e.g. 5 ranges and only request
stats on 5 "surrogate" terms representative of those groups.
Just make sure any heuristic is always added to what is surely present locally, e.g. distributed
docfreq is always >= local docfreq. Then no scoring algorithms will break.
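The invariant above (distributed docFreq always >= local docFreq) can be made concrete with a toy merge function. This is a hypothetical sketch, not the actual StatsCache code; `additiveC` stands in for the "assume additive constant C for low-freq terms" heuristic mentioned earlier.

```java
public class StatsMerge {
    // Hypothetical merge of per-shard document frequencies for one term.
    // Exact mode sums the shard counts; an approximate mode may estimate,
    // but the result is clamped so it is never below what this shard
    // already knows locally - otherwise scoring formulas can misbehave
    // (e.g. negative or infinite IDF).
    static long mergeDocFreq(long localDf, long[] otherShardDfs, boolean exact, long additiveC) {
        long merged = localDf;
        if (exact) {
            for (long df : otherShardDfs) merged += df;
        } else {
            merged = localDf + additiveC * otherShardDfs.length; // crude estimate
        }
        return Math.max(merged, localDf); // invariant: global >= local
    }

    public static void main(String[] args) {
        System.out.println(mergeDocFreq(138, new long[]{146}, true, 0));   // 284
        System.out.println(mergeDocFreq(138, new long[]{146}, false, 10)); // 148
    }
}
```

Whatever heuristic is used, the final clamp is the part that keeps all similarity implementations safe.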
There is nothing different from a MTQ generated BQ than a huge BQ a solr user submits.
Multi-term queries like range query, prefix query, etc, do not depend on term stats, and can consist of millions of terms. It's a waste to attempt to return term stats for them (estimated or not).
It would also be a shame to use estimates rather than exact numbers for what will be the common case (i.e. when there's really only a couple of terms you need stats for):
+title:"blue whale" +title_whole:[a TO g}
or
+title:"blue whale" +date:[2001-01-01 TO 2010-01-01}
Ideally, we wouldn't even do a rewrite in order to collect terms - rewrite itself has gotten much more expensive in some circumstances (i.e. iterating the first 350 terms to determine what style of rewrite should be used)
Multi-term queries like range query, prefix query, etc, do not depend on term stats, and can consist of millions of terms.
No, they cannot.
it can't be millions of terms because a million exceeds the
boolean max clause count, in which it will always use a filter.
Ideally, we wouldn't even do a rewrite in order to collect terms
You don't have to, Lucene's test case (ShardSearchingTestBase) doesn't do an extra rewrite to collect terms.
@Override
public Query rewrite(Query original) throws IOException {
  final Query rewritten = super.rewrite(original);
  final Set<Term> terms = new HashSet<Term>();
  rewritten.extractTerms(terms);
  // Make a single request to remote nodes for term
  // stats:
  ...
  return rewritten;
}
- rewrite itself has gotten much more expensive in some circumstances (i.e. iterating the first 350 terms to determine what style of rewrite should be used)
Got any benchmarks to back this up with?
Its incorrect to say rewrite has gotten more expensive? More expensive than what?
Its the opposite: its actually much faster when rewriting to boolean queries in 4.0 because it always works per-segment.
it can't be millions of terms because a million exceeds the boolean max clause count, in which it will always use a filter.
So depending on exactly how many terms the range query covers, extractTerms may or may not return any.
So extractTerms() may return 300 terms the first time, and then after someone adds some docs to the index it may suddenly return 0.
This just strengthens the case that we should be consistent and just always ignore the terms from these MTQs.
Its incorrect to say rewrite has gotten more expensive? More expensive than what?
But back to the original question - I still see no reason to request/return/cache terms/stats from these multi-term queries when by definition they should not change the results of the request.
Just set in Solr the rewrite mode of MTQ to CONSTANT_SCORE_FILTER_REWRITE - done. There is no discussion needed and no custom RangeQuery in Solr.
Just set in Solr the rewrite mode of MTQ to CONSTANT_SCORE_FILTER_REWRITE - done.
Right - I was considering the best way to do this (passing that info around solr about when to use what method).
It solves both issues - relatively expensive rewrites that are not needed, and ignoring the MTQ terms.
But back to the original question - I still see no reason to request/return/cache terms/stats from these multi-term queries when by definition they should not change the results of the request.
My original point (forgetting about the specifics of MTQ, how things are being scored, or anything) is still that its a general case of Query that can have lots of Terms.
So if there are concerns about "lots of terms", I still think its worth considering having some
limits on how many Terms would be exchanged. Maybe BooleanQuery's max clause count is already good
enough, but another way to do it would be to have an approximate implementation that approximates
when the term count for a query gets too high.
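One way such a limit could look - hypothetical names, not an API that exists in Solr: after rewriting the query and extracting its terms, only request remote stats when the term count is small enough, and fall back to local stats otherwise (large MTQ-style expansions are constant-scoring anyway, so exact global stats would be wasted work there).

```java
import java.util.*;

public class TermStatsLimiter {
    // Hypothetical guard: decide which extracted terms are worth a
    // remote stats round-trip. An empty result means "score with
    // local stats only".
    static List<String> termsToFetch(Set<String> extracted, int maxTerms) {
        if (extracted.isEmpty() || extracted.size() > maxTerms) {
            return Collections.emptyList(); // use local stats only
        }
        return new ArrayList<>(extracted);
    }

    public static void main(String[] args) {
        Set<String> small = new HashSet<>(Arrays.asList("blue", "whale"));
        Set<String> huge = new HashSet<>();
        for (int i = 0; i < 2000; i++) huge.add("t" + i);
        System.out.println(termsToFetch(small, 1024).size()); // 2
        System.out.println(termsToFetch(huge, 1024).size());  // 0
    }
}
```

BooleanQuery's max clause count already provides a natural ceiling, but an explicit cutoff like this would make the behavior consistent regardless of which rewrite method a query happened to pick.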
Any progress to report or does anyone have a patch that is updated for trunk?
Updated patch to build for rev: 1447516 (Mon, 18 Feb 2013)
All tests seem to pass.
Nice. I mentioned this to AB not too long ago, but I'm of the mind to simply commit this. It will default to off, and we can continue to work on it.
So unless someone steps in, I'll commit what Markus has put up.
Markus, have you tried this out at all beyond the unit tests - eg on a cluster?
No, not yet. Please let me do some real tests, there must be issues, the patch is over a year old!
It doesn't really seem to work; we're seeing lots of NPEs, and if a response comes through, IDF is not consistent for all terms. Most requests return one of the NPEs below. Sometimes it works, and then the second request just fails.
java.lang.NullPointerException
    at org.apache.solr.search.stats.ExactStatsCache.sendGlobalStats(LRUStatsCache.java:202)
    at org.apache.solr.handler.component.QueryComponent.createMainQuery(QueryComponent.java:783)
    at org.apache.solr.handler.component.QueryComponent.regularDistributedProcess(QueryComponent.java:618)
    at...
java.lang.NullPointerException
    at org.apache.solr.search.stats.LRUStatsCache.sendGlobalStats(LRUStatsCache.java:228)
    at org.apache.solr.handler.component.QueryComponent.createMainQuery(QueryComponent.java:783)
    at org.apache.solr.handler.component.QueryComponent.regularDistributedProcess(QueryComponent.java:618)
    at...
We also see this one from time to time; it looks like this is thrown if there are `no servers hosting shard`:
java.lang.NullPointerException
    at org.apache.solr.search.stats.LRUStatsCache.mergeToGlobalStats(LRUStatsCache.java:112)
    at org.apache.solr.handler.component.QueryComponent.updateStats(QueryComponent.java:743)
    at org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:659)
    at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:634)
    at ..
It also imposes a huge performance penalty with both LRUStatsCache and ExactStatsCache: if you're used to 40ms response times, you'll see the average jump to 2 seconds with very frequent 5-second spikes. Performance stays poor if logging is disabled.
The logs are also swamped with logs like:
2013-02-20 11:54:48,091 WARN [search.stats.LRUStatsCache] - [http-8080-exec-5] - : ## Missing global colStats info: <FIELD>, using local
2013-02-20 11:54:48,091 WARN [search.stats.LRUStatsCache] - [http-8080-exec-5] - : ## Missing global termStats info: <FIELD>:<TERM>, using local
Both StatsCacheImpls behave like this. Each query logs lines like the ones above. Maybe performance is poor because it tries to look up terms every time, but I'm not sure yet.
Finally something crazy i'd like to share
-Infinity = (MATCH) sum of: -Infinity = (MATCH) max plus 0.35 times others of: -Infinity = (MATCH) weight(content_nl:amsterdam^1.6 in 449) [], result of: -Infinity = score(doc=449,freq=1.0 = termFreq=1.0 ), product of: 1.6 = boost -Infinity = idf(docFreq=29800090, docCount=-1) 1.0 = tfNorm, computed from: 1.0 = termFreq=1.0 1.2 = parameter k1 0.0 = parameter b (norms omitted for field)
If someone happens to recognize the issues above, i'm all ears
Hmm, that makes it look like the current tests for this must be pretty weak then.
Things have changed a lot in the past 13 months and i haven't figured it all out yet. I'll try to make sense out of it but some expert opinion and trial on the patch and all would be more than helpful. Is Andrzej not around?
Updated patch for trunk:
Last Changed Rev: 1488431
Last Changed Date: 2013-06-01 01:42:51 +0200 (Sat, 01 Jun 2013)
is this patch currently working in 5.0?
No, it does not work at all. I did spend some time on it but had other things to do. In the end i removed my (not working) changes and uploaded a patch that at least compiles against the revision of that time.
Ok, i updated the patch for today's trunk and it actually works now with ExactStatsCache. We now have correct DF for distributed queries.
I removed the perReaderTermContext in ExactStatsCache; this cached the TermContext for new terms. This was a problem because caching it this way meant that any second term got the same DF as the first.
I also added a local boolean to SolrIndexSearcher's collectionStatistics() and termStatistics() to force it to return only local scores. This is a nasty hack to prevent it from returning the other shard's DF. Without this, DF will increase for every other request, in the end it will crash the systems because the number gets too high.
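The feedback loop described above can be reproduced in a toy model (hypothetical classes, not Solr's actual code): if a shard answers a stats request with its cached global number instead of its purely local count, every round of merging sums already-merged numbers and DF roughly doubles per request.

```java
public class DfInflation {
    long localDf;
    long cachedGlobalDf = -1;

    DfInflation(long localDf) { this.localDf = localDf; }

    // Buggy: the shard reports whatever it currently believes globally.
    long reportBuggy() { return cachedGlobalDf >= 0 ? cachedGlobalDf : localDf; }
    // Fixed: a stats request always gets the purely local count.
    long reportLocal() { return localDf; }

    public static void main(String[] args) {
        DfInflation a = new DfInflation(138), b = new DfInflation(146);
        for (int round = 1; round <= 3; round++) {
            long g = a.reportBuggy() + b.reportBuggy(); // aggregator sums shard answers
            a.cachedGlobalDf = g;
            b.cachedGlobalDf = g;
            System.out.println("round " + round + ": global df = " + g);
        }
        // prints 284, then 568, then 1136 - df doubles on every request
        System.out.println("fixed: " + (a.reportLocal() + b.reportLocal())); // 284, always
    }
}
```

This is why forcing collectionStatistics()/termStatistics() to return local-only numbers when serving a stats request matters: the aggregation must always start from raw local counts.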
Also, the warning ## Missing global termStats info: " + term + ", using local should perhaps not be a warning at all. This gets emitted also for fields not having those terms. The check in returnLocalStats doesn't add terms for docFreq == 0.
Add <globalStats class="org.apache.solr.search.stats.ExactStatsCache"/> to your solrconfig in the config section to make it work.
Please check my patch and let's fix this issue so we hopefully can get distributed IDF in Solr 4.7.
I'm looking at a couple of the test fails before I go to bed tonight:
[junit4] Tests with failures:
[junit4] - org.apache.solr.handler.component.QueryElevationComponentTest.testGroupedQuery
[junit4] - org.apache.solr.TestDistributedSearch.testDistribSearch
[junit4] - org.apache.solr.search.stats.TestLRUStatsCache.testDistribSearch
[junit4] - org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basicWithGroupSortEqualToSort
[junit4] - org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_withTotalGroupCount
[junit4] - org.apache.solr.TestGroupingSearch.testGroupingGroupSortingScore_basic
[junit4] - org.apache.solr.search.stats.TestExactStatsCache.testDistribSearch
[junit4] - org.apache.solr.update.AddBlockUpdateTest.testXML
[junit4] - org.apache.solr.update.AddBlockUpdateTest.testSolrJXML
I did not do a thorough review or anything, but here is a patch...
- I cleaned up a lot of things.
- I fixed the things that needed to be fixed for the tests to pass.
- I got precommit passing (though I may have added back in a nocommit after).
Anyway, tests seem to pass for me.
More cleanup in this patch.
The config you need to use to turn this on is now:
<statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
It needs to go in the top level config section.
The thread local still scares me ... need to look closer at that.
I've got two main concerns - the thread local and it looks like the statscache is not thread safe but shared across threads.
The threadlocal is concerning because you can have thousands of threads and each will cache how many stats? I wish we could do something better.
This patch removes the new thread local by piggy-backing on the existing thread local Solr uses for a request (which is already nicely cleaned up per request).
I also attempted to make ExactStatsCache thread safe, but the whole design there needs a review I think.
LRUStatsCache is certainly still not thread safe and needs to be fixed.
Markus Jelsma, how was performance with your most recent patch compared to what you first reported?
Whoops - attached the wrong patch this morning. Anyway, here is a new one.
- Attempted to make LRUStatsCache thread safe.
- Lots of little cleanup / little improvements
It is much faster now, even usable. But i haven't tried it in a larger cluster yet.
Last patch was doubled - pasted twice I guess. Here is a clean one.
Here is my latest work I was playing around with the other night. A lot more cleanup, removed a bunch of dupe code, etc.
I think things are fairly reasonable now given the current design.
I do think we want to look at the design to make sure it's going to work well with the caching impls that we will want to add.
Updated to latest trunk.
Cleaned code duplicates. Fixed org.apache.solr.search.stats.TestLRUStatsCache, added test for org.apache.solr.search.stats.ExactSharedStatsCache.
Fixed javadocs.
Hi Vitaly, are you sure it still works? I tried your and a few older patches again, but docCounts are no longer summed across the cluster. The GET_STATS query is executed though.
Two node test cluster:
384841 [qtp1175813699-17] INFO org.apache.solr.core.SolrCore – [collection1] webapp=/solr path=/select params={distrib=false&debug=track&wt=javabin&requestPurpose=GET_TERM_STATS&version=2&rows=10&debugQuery=false&shard.url=} status=0 QTime=1
384848 [qtp1175813699-17]=138 status=0 QTime=1
384863 [qtp1175813699-17] INFO org.apache.solr.core.SolrCore – [collection1] webapp=/solr path=/select params={ids=} status=0 QTime=7
384870 [qtp1175813699-13] INFO org.apache.solr.core.SolrCore – [collection1] webapp=/solr path=/select params={debugQuery=true&q=wiki} rid=-collection1-1394444039677-12 hits=284 status=0 QTime=33
380242 [qtp1175813699-16] INFO org.apache.solr.core.SolrCore – [collection1] webapp=/solr path=/select params={distrib=false&debug=track&wt=javabin&requestPurpose=GET_TERM_STATS&version=2&rows=10&debugQuery=false&shard.url=} status=0 QTime=0
380249 [qtp1175813699-16]=146 status=0 QTime=2
380263 [qtp1175813699-16] INFO org.apache.solr.core.SolrCore – [collection1] webapp=/solr path=/select params={ids=ördinaten,} status=0 QTime=6
But i get these scores:
12.8123455 = (MATCH) weight(content_nl:wiki in 18636) [], result of:
  12.8123455 = score(doc=18636,freq=33.0 = termFreq=33.0), product of:
    6.0355678 = idf(docFreq=138, docCount=57897)
    2.122807 = tfNorm, computed from:
      33.0 = termFreq=33.0
      1.2 = parameter k1
      0.0 = parameter b (norms omitted for field)
12.558066 = (MATCH) weight(content_nl:wiki in 60634) [], result of:
  12.558066 = score(doc=60634,freq=25.0 = termFreq=25.0), product of:
    5.982207 = idf(docFreq=146, docCount=58059)
    2.0992365 = tfNorm, computed from:
      25.0 = termFreq=25.0
      1.2 = parameter k1
      0.0 = parameter b (norms omitted for field)
Did it work for you?
I tried your and few older patches again but docCounts are no longer the sum of the cluster size.
Do you see what is missing in the tests to catch this?
No, but i think this happened when the QueryCommand code
public StatsSource getStatsSource() {
  return statsSource;
}

public QueryCommand setStatsSource(StatsSource dfSource) {
  this.statsSource = dfSource;
  return this;
}
got removed.
- Fixed global stats distribution
- Added an assert on query explain (docNum, weight and idf should be the same in distributed tests); this assert is valid on the 2nd query only, since global stats are merged at the end of the 1st query.
Move issue to Solr 4.9.
I'd created a reviewboard request to look and compare the last few patches. Thought I'd share that here.
I've uploaded and updated patch that applies to current trunk but has a failing TestLRUStatsCache at the review board.
I'm trying to get it to work but Vitaliy Zhovtyuk, can you have a look at it too if you have time?
java.lang.AssertionError:
Expected :0.7176591
Actual   :0.10904001
    at __randomizedtesting.SeedInfo.seed([C08012850AFBE274:41669C9D7DA48248]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.failNotEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:128)
    at org.junit.Assert.assertEquals(Assert.java:147)
    at org.apache.solr.search.stats.TestLRUStatsCache.checkResponse(TestLRUStatsCache.java:55)
    at org.apache.solr.search.stats.TestDefaultStatsCache.dfQuery(TestDefaultStatsCache.java:102)
    at org.apache.solr.search.stats.TestDefaultStatsCache.doTest(TestDefaultStatsCache.java:67)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:875)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Wrong patch was attached on 1.04.2014.
Updated previous changes to current trunk.
TestDefaultStatsCache, TestExactSharedStatsCache, TestExactStatsCache, TestLRUStatsCache are passing.
Thanks for updating the patch Vitaliy Zhovtyuk.
The tests pass now. I'm looking at the updated patch.
Updated patch.
Updated patch with minor changes. I've also benchmarked this test on my machine with 150k Jeopardy questions dataset over 2 shards with a replication factor of 1. The times aren't off on that one.
It'd be good if someone else can also look at it else I'd like to brush it a little more, document it and commit.
Unless there are objections in the next few days, I think we should get this in now. This would not be enabled by default i.e. LocalStatsCache impl would be used anyways.
I plan on committing this sometime over the weekend.
Final patch. Nothing really changed form the last one but just updated it to be from the latest version of trunk. Will commit this tomorrow morning.
Commit 1647253 from Anshum Gupta in branch 'dev/trunk'
[ ]
SOLR-1632: Distributed IDF, finally.
Thanks to everyone who's contributed on this one! The list is long
I've committed this to trunk, if all stays well, will commit it into 5x later in the (coming) week.
WhoooHooooo!
The commit is too large to digest easily. I assume this is on by default? Can it be enabled and disabled?
I will likely be using this once it's available, but do we have any idea what the performance impact is?
This isn't switched on by default as it certainly comes at some cost (there are no free lunches, remember?)
It can be switched on by specifying what implementation you want via top-level solrconfig setting or System property e.g. here is how you can set it to use ExactStatsCache implementation (non-cached):
<statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
About the performance impact, I tested it on my machine (which is not really a great thing to do as there's barely any possibility of network issues here) for about 6mn (real and mocked up Jeopardy questions dataset) docs and regular queries and the performance impact was barely noticeable.
I still need to document this (which I'll add to the ref guide once this makes it into 5x) and I suppose things would be easier to understand for the end user then.
We should get some results across real machines, but I also turned my micro bench work onto this. I didn't confirm that the settings are actually taking affect, or review the latest work, but I ran the benchmark twice, once with LocalStatsCache and once with ExactStatsCache.
<statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
<statsCache class="org.apache.solr.search.stats.LocalStatsCache"/>
The test uses two machines, one to create and send the docs/queries, another to run the Solr JVMs. I ran a query test using a ton of wikipedia data across 6 jvm instances, 6 shards, no replication. I indexed a ton of docs, and then used a bunch of threads and bunch of CloudSolrServer's to pound in some queries. Performance appeared nearly identical.
Right, I saw similar behavior on my tests. I think the impact really would be when there's a ton of query terms across multiple shards that actually use the network.
This isn't switched on by default as it certainly comes at some cost
What would be really nice is to enable this on a per-request basis. Perhaps via "globalStats=true"
We can open up a new issue if it's difficult enough...
Commit 1648428 from Anshum Gupta in branch 'dev/branches/branch_5x'
[ ]
SOLR-1632: Distributed IDF, finally. (merge from trunk)
Yonik Seeley: I did give it a thought but it would be tricky to support something like stats=<implementation> for each request. We could however have something like 'stats=local' or 'stats=global' where in the later case, it uses the implementation specified in the config. But yes, we could evaluate that more.
Marking as resolved.
Closing the issue.
LUCENE-6758 removed part of the test of this issue:
--- lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/stats/TestDefaultStatsCache.java 2015/09/09 03:13:44 1701894
+++ lucene/dev/trunk/solr/core/src/test/org/apache/solr/search/stats/TestDefaultStatsCache.java 2015/09/09 03:16:15 1701895
@@ -79,10 +79,6 @@
     if (clients.size() == 1) {
       // only one shard
       assertEquals(controlScore, shardScore);
-    } else {
-      assertTrue("control:" + controlScore.floatValue() + " shard:"
-          + shardScore.floatValue(),
-          controlScore.floatValue() > shardScore.floatValue());
-    }
   }
Was it testing something important, and can it be replaced with something else?
I think the check should be modified from controlScore.floatValue() > shardScore.floatValue() to controlScore.floatValue() >= shardScore.floatValue().
I understand the motivation here that once a term starts getting 'rare' the score will be higher as the stats are just from the individual shards.
The first part of the test doesn't seem to be triggering this though:
del("*:*");
for (int i = 0; i < clients.size(); i++) {
  int shard = i + 1;
  for (int j = 0; j <= i; j++) {
    index_specific(i, id, docId++, "a_t", "one two three", "shard_i", shard);
  }
}
Initial implementation. This supports the current global IDF (i.e. none), and an exact version of global IDF that requires one additional request per query to obtain per-shard stats.
The design should already be flexible enough to implement LRU caching of docFreqs, and ultimately to implement other methods for global IDF calculation (e.g. based on estimation or re-ranking).
#include <wx/event.h>
This class is not used by the event handlers by itself, but is a base class for other event classes (such as wxBookCtrlEvent).
It (or an object of a derived class) is sent when the control's state is being changed and allows the program to wxNotifyEvent::Veto() this change if it wants to prevent it from happening.
Constructor (used internally by wxWidgets only).
Prevents the change announced by this event from happening.
It is in general a good idea to notify the user about the reasons for vetoing the change because otherwise the application's behaviour (which just refuses to do what the user wants) might be quite surprising.
GUI display a file/email/help in a viewport with paging. More...
#include "config.h"
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
Go to the source code of this file.
GUI display a file/email/help in a viewport with paging.
Flags for mutt_pager(), e.g. MUTT_SHOWFLAT.
Definition at line 51 of file lib.h.
Flags, e.g. NT_PAGER_ACCOUNT.
Definition at line 167 of file lib.h.
Determine the behaviour of the Pager.
Definition at line 127 of file lib.h.
Display an email, attachment, or help, in a window.
This pager is actually not so simple as it once was. But it will be again. Currently it operates in 3 modes:
Definition at line 2360 of file dlg_pager.c.
Display some page-able text to the user (help or attachment)
Definition at line 120 of file do.
Create the Windows for the Pager panel.
Definition at line 121 of file ppanel.c.
Create a new Pager Window (list of Emails)
Definition at line 241 of file pager.c.
Reset the pager's viewing position.
Definition at line 1995 of file dlg_pager.c. | https://neomutt.org/code/pager_2lib_8h.html | CC-MAIN-2021-39 | refinedweb | 185 | 71.92 |