text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Hello, I suppose this topic has been widely discussed, but i’ve not found exactly what i need. I thought validate_associations would in fact validate the associations of my models, but it simply verifies that the fields of the object are valid. So i need to create some custom validations to do I found one solution: validate :obj_must_exist def obj_must_exist errors.add(:obj_id, "must point to an existing obj") if obj_id && obj.nil? end But now i need it in a helper or something like that, because i’ve to reuse the same validation several times. I created a file in config/initializers called validations.rb and put this: ActiveRecord::Base.class_eval do def self.validates_object_exists(obj, obj_id) obj.errors.add(obj_id, “must point to an existing #{obj}”) if obj_id || obj.nil? end end But it’s just not working… So, what’s the “rails way” to do that? Thank you.
https://www.ruby-forum.com/t/about-custom-validations/177168
CC-MAIN-2021-49
refinedweb
151
50.63
Zou Yi, No. 8 middle school, Hengyang City, Hunan Province Many problems have a constant state from which to evolve and transfer, resulting in sub problems with the same properties and smaller scale. So we start with this, analyze the constant state and weight, and then see how to transfer to produce the solution of another sub problem For example, the following question: Median problem A nonnegative integer sequence a with length N is given_ i. For all 1 < = k < = (N + 1) / 2, output the median of the first 1, 3, 5. Input The first line is a positive integer n, which represents the sequence length. N<=10^5 Line 2 contains N integers a_ i (-10^9 < = A_i < = 10^9) Output As shown in the question. Input data 1 7 5 7 8 2 3 1 9 Output data 1 5 7 5 5 Solution: The essence of this question is to find the median of some numbers. If you only find the median once, it's very simple. Just do it quickly. The time complexity is O(N*Log2N). Of course, if you understand fast scheduling deeply enough, you can write O(N) But this problem is to find the median for many times, and it is the median of the first 1, 3, 5, 7, 9... Numbers. This is obviously an arithmetic sequence with a tolerance of 2. After analysis, we can get 1: For an ascending sequence, if 2 numbers are inserted or reduced each time, the position of the median will only be offset by one position or remain unchanged. Taking reduction as an example, if two numbers are reduced, one on the left and one on the right of the median position, the median position remains unchanged; If both are on the left of the median position, the median will move one bit to the right, otherwise, it will move one bit to the left. 2: Because this question gives all the numbers in advance, it belongs to offline inquiry. Therefore, for the original sequence, after a quick row, we can know its position and then its weight according to the definition of median. 3: According to the constant quantity derived from the previous article, we analyze the next problem to be solved: find the median of the first five numbers. Obviously, we only need to remove the last two numbers from the sorted sequence. The consequences are analyzed in Article 1. Of course, at this time, we should be able to quickly find the position of the removed two numbers in the ascending sequence, and carry out corresponding processing according to the relationship between their position and the median position. 4: To get the corresponding results, we must remove the previous two numbers from the ascending sequence. It is not difficult to find that this problem needs to constantly delete elements, so we use the linked list structure for processing. 5: Continue to perform the above steps to obtain all solutions, and then output them The code is as follows: #include<bits/stdc++.h> using namespace std; const int N=1e5+7; int ans[N],pos[N],tot; struct node { int x,id; }a[N]; struct list { int l,r,v; }l[N]; void insert(int p,int v) { tot++,l[tot].v=v; l[tot].l=p,l[tot].r=l[p].r; l[l[p].r].l=tot,l[p].r=tot; } void erase(int p) { l[l[p].l].r=l[p].r; l[l[p].r].l=l[p].l; } bool cmp(node x,node y) { if(x.x==y.x) return x.id<y.id; return x.x<y.x; } int main() { int n,now,flag=0; scanf("%d",&n);now=n/2+1; for(int i=1;i<=n;i++) scanf("%d",&a[i].x),a[i].id=i; ans[1]=a[1].x; sort(a+1,a+n+1,cmp); for(int i=1;i<=n;i++) { insert(tot,a[i].x); pos[a[i].id]=tot; } for(int i=n;i>=1;i--) { if(i%2==1) { if(flag>0) now=l[now].r; if(flag<0) now=l[now].l; flag=0; ans[i]=l[now].v; } if(pos[i]<now) flag++; if(pos[i]>now) flag--; erase(pos[i]); } for(int i=1;i<=n;i+=2) printf("%d\n",ans[i]); return 0; } Example 2: Ksum Give you an array of positive integers with a length of N, so this sequence has n(n+1)/2 sub segments. Now find the sum of the n(n+1)/2 sub segments and sort them in descending order. What is the number of the first K. Input The first line contains two integers n and K. The next line contains n positive integers representing the array. ai≤10^9 k≤n(n+1)/2, n≤100000,k≤100000 Output Output K numbers, representing the first k numbers after descending order, separated by spaces Input data 1 3 4 1 3 4 Output data 1 8 7 4 4 Through analysis, it is not difficult to find out 1: The maximum value of the whole sequence is obviously the sum of all numbers, that is, the number interval [1..N] 2: Now we want to get the sum of the interval with the second largest weight. Obviously, we should move the left end point 1 of interval [1..N] to the right by 1 bit, or the right end point N to the left by 1 bit, resulting from the two sub intervals [1..N-1] and [2..N]. But which of the two weights is greater? It's okay. Just throw it in the pile. 3: Then continue to derive the two derived sub intervals [1..N-1] and [2..N], and it will be found that the interval [2..N-1] will be derived twice. We can forcibly remove one and keep the other. So it is stipulated that for a certain interval [L, R] the left boundary is allowed to move to the right only when R=N. Finally, the following derivatives can be obtained: The code is as follows: #include<cstdio> #include<iostream> #include<algorithm> #include<queue> using namespace std; struct node { int l,r; long long s; } t,z; int n,k,a[100010]; long long s; priority_queue<node> q; bool operator <(node a,node b) { return a.s<b.s; } int main() { scanf("%d%d",&n,&k); for(int i=1; i<=n; i++) { scanf("%d",&a[i]); s+=a[i]; } t.s=s; t.l=1; t.r=n; q.push(t); for(int i=1; i<=k; i++) { t=q.top(); q.pop(); printf("%lld ",t.s); z.l=t.l; z.r=t.r-1; z.s=t.s-a[t.r]; q.push(z); if(t.r==n) { z.l=t.l+1; z.r=t.r; z.s=t.s-a[t.l]; q.push(z); } } return 0; } Neighbor search Given A sequence A of length n, the numbers in A are different. For each number Ai in A, find: min|Ai − Aj|, where 1 ≤ J < I And let the above formula take the minimum value of j (recorded as Pi). If the minimum value point is not unique, select the one that makes Aj smaller. Input format In the first line, enter the integer n, representing the length of the sequence. On the second line, enter n integers A1... An, representing the specific values of the sequence, separated by spaces. Output format Output a total of n-1 lines, each line outputs two integers, separated by spaces. Respectively represent the corresponding values of min|Ai − Aj| and Pi when i takes 2~n. Data range n≤10^5,|Ai|≤10^9 Input example: 3 1 5 3 Output example: 4 1 / / for 5, the smallest absolute value on its left and the difference is 1, and the difference is 4 2 1 / / for 3, the smallest absolute value on its left and the difference between it is 5, and the difference is 2 Solution: The simplest way to solve this problem is to violently enumerate each number I, and then enumerate the numbers on the left, i.e. 1 to i-1 The simulation can be carried out according to the meaning of the problem, but the time complexity is obviously O(N^2). For this practice, it ensures the constraint of location relationship. If we consider it from another angle, first consider the constraint of minimum difference. Obviously, for a sequence, after sorting it, the number with the smallest difference must be its predecessor and successor. Of course, the first number in the sequence has only a successor and no precursor. The last number in the sequence has only a precursor and no successor. So we can arrange the sequence in ascending order. For example: for the input sequence 3 2 9 5 4 After sorting, we get 2 3 4 5 9 Press down to ensure the position constraint, so we find the last number 4 in the input sequence because its position number is the largest. It appears in the third position in the sequence 2 3 4 5 9. At this time, the 3 on the left and 5 on the right are smaller than it in the original sequence, which meets the requirements of the question. So the difference is calculated. Next, we delete 4 from the sequence 2 3 4 5 9 to get 2 3 5 9. Then process the penultimate number 5 input in the original sequence At this point, the problem becomes a smaller problem with the same nature. That is, the number 5 has the largest original position number in the sequence 2 3 5 9. The code is as follows: #include<bits/stdc++.h> using namespace std; const int N=1e5+2; int n,i,lx,rx,l[N],r[N],p[N]; struct str { int x,i; } a[N],ans[N]; bool cmp(str x,str y) { return x.x<y.x; } int main() { scanf("%d",&n); for(i=1;i<=n;i++) scanf("%d",&a[i].x),a[i].i=i; a[0].x=-3e9; a[n+1].x=3e9; sort(a+1,a+n+1,cmp); for(i=1;i<=n;i++) l[i]=i-1,r[i]=i+1,p[a[i].i]=i; for(i=n;i;i--) { lx=abs(a[l[p[i]]].x-a[p[i]].x); rx=abs(a[r[p[i]]].x-a[p[i]].x); if(lx<=rx) ans[i]=(str){lx,a[l[p[i]]].i}; else ans[i]=(str){rx,a[r[p[i]]].i}; l[r[p[i]]]=l[p[i]]; r[l[p[i]]]=r[p[i]]; } for(i=2;i<=n;i++) printf("%d %d\n",ans[i].x,ans[i].i); return 0; } There are other methods to solve the above problems, which will not be repeated here, but the thinking mode of this paper is reflected in many computer algorithms, such as dijkstra finding the shortest path, knapsack algorithm and so on. In fact, the algorithm is constantly starting from a starting state to transfer the state. Just this transfer process, we can make it faster through many methods and means..
https://programmer.help/blogs/starting-from-the-constant-state-the-unknown-state-is-solved.html
CC-MAIN-2021-49
refinedweb
1,886
63.09
Hi! I have imported an LED blinking code to get started with mbed online, but I am getting the following error: Error: Fatal error: C3903U: Argument 'Cortex-M4.fp.sp' not permitted for option 'cpu'. Any ideas on how to get around this? The following is the code I've imported from the "demo_program" file in mbed: #include "mbed.h" DigitalOut myled(LED1); DigitalOut newled(LED2); DigitalOut newled(LED3); DigitalOut newerled(LED4); int main() { while(1) { myled = 1; myled2 = 1; newled = 1; newerled = 1; wait(0.5); myled = 0; myled2 = 0; newled = 0; newerled = 0; wait(0.5); } }# Regards, Muana Kasongo Hello Muana, We do not provide mbed support of any kind. I will recommend you to write this matter in the mbed community: . Although I will recommend to you to start using our own IDE MCUXpresso IDE|Eclipse-based Integrated Development Environment (IDE) | NXP . Here you can download and start building your own projects. Welcome | MCUXpresso SDK Builder, Following this link you would be able to search for your board examples to give you a head start on the development. Have a great day, Fab. -------------------------------------------------------------------------------
https://community.nxp.com/thread/527194
CC-MAIN-2020-24
refinedweb
186
65.52
Immutable class/object is the one whose value cannot be modified. For example, Strings are immutable in Java i.e. once you create a String value in Java you cannot modify it. Even if you try to modify, an intermediate String is created with the modified value and is assigned to the original literal. Whenever you need to create an object which cannot be changed after initialization you can define an immutable object. There are no specific rules to create immutable objects, the idea is to restrict the access of the fields of a class after initialization. Following Java program demonstrates the creation of a final class. Here, we have two instance variables name and age, except the constructor you cannot assign values to them. final public class Student { private final String name; private final int age; public Student(String name, int age){ this.name = name; this.age = age; } public String getName() { return this.name; } public int getAge() { return this.age; } public static void main(String[] args){ Student std = new Student("Krishna", 29); System.out.println(std.getName()); System.out.println(std.getAge()); } } Krishna 29 No, it is not mandatory to have all properties final to create an immutable object. In immutable objects you should not allow users to modify the variables of the class. You can do this just by making variables private and not providing setter methods to modify them. public class Sample{ String name; int age; public Sample(){ this.name = name; this.age = age; } public String getName(){ return this.name; } public int getAge(){ return this.age; } }
https://www.tutorialspoint.com/do-all-properties-of-an-immutable-object-need-to-be-final-in-java
CC-MAIN-2021-43
refinedweb
259
59.4
I remember myself a tweet from Scott Hanselman a couple of months ago, where he asked us to check out a small application he wrote. As one of the first testers of his new "thing", I was interested but there were no documentation or libs available yet. A couple of days a go, I read a blogpost from Maarten Balliauw about SignalR. (Maarten's blog). Because I allready heard about SignalR a couple of months ago, I had to develop a very small application to test how it really works. 1. Install SignalR package You also have to update the jquery package to 1.6.2 or higher 2. Create a chat class public class Chat { private readonly static Lazy _instance = new Lazy(() => new Chat()); public static Chat Instance { get { return _instance.Value; } } } 3. Create a (chat)hub Create a class ChatHub which will inherit from Hub [HubName("chatHub")] public class ChatHub : Hub { private readonly int TimeoutInSeconds = 30; private readonly Chat _chat; public ChatHub() : this(Chat.Instance) { } public ChatHub(Chat chat) { _chat = chat; } } 4. Javascript connection var chatHubClient = $.connection.chatHub; // Start the connection $.connection.hub.start(function () { chatHubClient.join('@Model.Name'); }); 5. ModelDefine a callback method on the server in the Chathub class"); } When a user post's data to the server, the page will be reloaded. When the user hits the F5/refresh button, a message appears, which tells us, that we´ve allready sended the data to the server. And asks us if we want to sent it again. If the user clicks "yes", we have to catch that, we don´t want this in our application - check that the user can only post something every 2 minutes. - save the posted data in viewstate and check if something changed - ... Or we can check if the user hit the refresh button. I haven't found a way to check this in Javascript, which will work in all browser. Or our solution is to check it in code. Add the following code, for example in a basepage. Private _refreshState As Boolean Private _isRefresh As Boolean Private _terminateRaisePostback As Boolean Private Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load If Page.IsPostBack AndAlso _isRefresh Then Response.Redirect(HttpContext.Current.Request.Url.ToString(), False) HttpContext.Current.ApplicationInstance.CompleteRequest() _terminateRaisePostback = True End If End Sub Protected Overrides Sub LoadViewState(ByVal savedState As Object) Dim AllStates As Object() = CType(savedState, Object()) MyBase.LoadViewState(AllStates(0)) _refreshState = Boolean.Parse(CStr(AllStates(1))) _isRefresh = _refreshState.Equals(CBool(Session("__ISREFRESH"))) End Sub Protected Overrides Function SaveViewState() As Object Session("__ISREFRESH") = _refreshState Dim AllStates() As Object = New Object(2) {} AllStates(0) = MyBase.SaveViewState AllStates(1) = Not (_refreshState) Return AllStates End Function Protected Overrides Sub RaisePostBackEvent(ByVal sourceControl As System.Web.UI.IPostBackEventHandler, ByVal eventArgument As String) If Not _terminateRaisePostback Then MyBase.RaisePostBackEvent(sourceControl, eventArgument) End If End Sub A couple of years ago, I used 'something' to encrypt the connectionstring on a production server. But I never wrote anything down about it. And in the other projects I did later, I never had deploy to the production server. Yesterday, I had to use it again but I couldn't find it. Why? Because I never wrote about it. This blogpost is copy of Chirag Darji post about the encryption of the connectionstring in web.config. All credits to him!. <connectionstrings> <add name="cn1" connectionstring="Server=DB SERVER; database=TestDatabase; uid=UID; pwd=PWD;"> <> </CipherData> </EncryptedKey> </KeyInfo> <CipherData> > </CipherData> </EncryptedData> </connectionStrings> Fig – (2) Encrypted connection string section You do not have to write any code to decrypt this connection string in your application, dotnet automatically decrypts it. So if you write following code you can see plaintext connection string.. Source Just to remember: when you want a new guid: Dim g As New Guid() --> the value of g is empty Just call: Dim g As Guid = Guid.NewGuid() and everything works fine! Last, I needed a function to add paging. The first problem I had, was how to this. A couple of seconds later, I allready had a solution to this problem: Linq Skip and take will do the trick. list<t>.Skip(_howManyRecordsDoYouWantToSkip).Take(_howManyRecordsDoYouWantToTake) This won't crash if you pass one of both statements. It will just return the result. source.Skip(startRowIndex).Take(pageSize) I've created an extension method to use this function. VB Module LinqHelpers <System.Runtime.CompilerServices.Extension()> _ Public Function Page(Of TSource)(ByVal source As IEnumerable(Of TSource), ByVal startRowIndex As Integer, ByVal pageSize As Integer) As IEnumerable(Of TSource) Return source.Skip(startRowIndex).Take(pageSize) End Function End Module C# public static class LinqHelpers { public static IEnumerable Page(this IEnumerable source, int startRowIndex, int pageSize) { return source.Skip(startRowIndex).Take(pageSize); } } Change IEnumerable to IQueryable if you want to use Linq to SQL. Take care about the traffic, if you use large lists and add paging at runtime. You'd better select the rows you want to show! (IQueryable) Last saturday I participated the Code Retreat organised by Agileminds.be There were 6 code retreats at the same moment arround the world. In Belgium, UK, Spain, USA and 2 in Romania. We all had the same challenge, to implement the Conway's Game of Life. By writing beautiful code, using TDD . For me, TDD is a new thing. I allready read about it, but thinking in a TDD way is really different and difficult. I met interesting people, and it was great to skype with the other Code Retreat around the world. We had people working the .Net framework (C# and vb.net), Java, Python, Ruby and it was great to work/test in other frameworks. In the future I will try to apply some of the best practices defined by TDD, but it will be hard. Thanks to Adrian Bolboaca, Erik Talboom and the other participants to introduce me in TDD. Hope to see you again at one of the following Code Retreat. You can check Code Retreat on twitter: #coderetreat Or on Some pictures of the event in Belgium: yfrog.com/user/talboomerik/photos The next community day event will be on 23rd of June. I will attend this event. Spread the word about this free event and subscribe! For more information:
http://geekswithblogs.net/jeroenb/Default.aspx
CC-MAIN-2014-41
refinedweb
1,041
58.48
Red Hat Bugzilla – Bug 14666 missing symlinks in kernel-headers causes compiles to fail Last modified: 2008-05-01 11:37:57 EDT In the process of upgrading the "kernel-headers" RPM from v2.2.14-5.0 to v2.2.16-3, two symlinks which were present in the previous version are now absent in the current version. Without these symlinks, certain compiles fail. Specifically, any C code which includes the header file "/usr/include/sys/param.h" will fail to compile. The missing symlinks in-question are the following: /usr/include/asm -> ../src/linux/include/asm /usr/include/linux -> ../src/linux/include/linux Here is a contrived code example (which will fail to build): #include <stdio.h> #include <sys/param.h> int main() { printf("Hello world\n"); } PS: What's the point of the "Component Text" field in the bugzilla page if I can't submit a component that isn't in the list? I tried submitting "kernel-headers", but I got rejected and had to type the whole Description over again. Pain in my ass... I can not reproduce this bug. Installing "kernel-headers-2.2.16-i386.rpm" on my machine results in correct symlinks. Furthermore, a quick check of the rpm SPEC file shows the following commands being executed: cd /usr/include rm -f linux asm ln -snf ../src/linux/include/linux linux ln -snf ../src/linux/include/asm asm Were you running as root when you installed the rpm? I was running as "root" when I installed the RPM. However, unless you made a typo in your reply to me, it looks like you installed a different RPM than I did. I installed the RPM "kernel-headers-2.2.16-3.i386.rpm", whereas you're telling me that you installed an RPM by the name "kernel-headers-2.2.16-i386.rpm" (it's missing the "-3" suffix to the "2.2.16" version number). And when I do an "rpm -qlp" on the file, the two symlinks in-question are missing from the list. As far as I'm aware, this is the latest binary kernel RPM file. I got it off of the tux.org mirror, but I think it was also the latest according to a search I did on the RedHat updates page. I'll check again to see if another update was slipped by since I last checked. Are you sure you're using the latest RPM? You are right about the typo. However, I still get correct behavior from this RPM. Please execute the command: rpm -U -vv --force kernel-headers-2.2.16-3.i386.rpm and attach the last 50 or so lines of output to this bug report, along with the output of a 'uname -a' command. Created attachment 1613 [details] gzipped tar of text files containing command output from rpm command-lines Okay, I attached a gzipped-tar of command output that would help debug the problem. However, there were two snags. Firstly, in order for it to be useful, I gave more than just the last 50 lines of the rpm command output. Also, the command you told me to use failed due to dependency problems, so I had to install both "kernel-headers" and "kernel-source" on the same command-line. The output that you included shows that the symlinks were created on your machine. From the file typescript.3.txt: ... D: running preinstall script (if any) kernel-headers-2.2.16-3 GZDIO: 467 reads, 3817852 total bytes in 0.009 secs D: running postinstall scripts (if any) + cd /usr/src + rm -f linux + ln -snf linux-2.2.16 linux + cd /usr/include + rm -f linux asm + ln -snf ../src/linux/include/linux linux + ln -snf ../src/linux/include/asm asm ... Okay, I didn't notice that earlier. But that's on lines 1498 and 1499. Later on in "transcript.3.txt", on lines 2870 and 2871, the symlinks get removed again. It looks like there's an "order-of-operations" problem going on here. Perhaps the symlinks want to be created in the "postuninstall" stage. ... D: file: /usr/src/linux-2.2.14/README.kernel-sources action: remove D: file: /usr/src/linux-2.2.14 action: remove cannot remove /usr/src/linux-2.2.14 - directory not empty D: file: /usr/include/linux action: remove D: file: /usr/include/asm action: remove D: file: /boot/kernel.h action: remove D: running postuninstall script (if any) + [ -L /usr/src/linux ] ++ ls -l /usr/src/linux ++ awk { print $11 } + [ linux-2.2.16 = linux-2.2.14 ] + exit 0 D: removing database entry ... It looks like you forgot to enter a comment explaining why the bug is resolved. Is a new RPM being released with the bug fix? _____ Please go to this URL: This file list does not show the symlinks: /usr/include/asm /usr/include/linux
https://bugzilla.redhat.com/show_bug.cgi?id=14666
CC-MAIN-2017-04
refinedweb
819
68.36
Hello, On 16/06/05, Jonathan Carlson <Jonathan.Carlson@...> wrote: > I just read about a feature in BeanShell and I'm hoping it can be done > as easily in Groovy. Sure > You can write a generic Decorator/Proxy by writing a method called > invoke(methodName, args[]) Any non-implemented method call gets > redirected to this method. It is kind of like the Java Dynamic Proxy > interface, but with far less hassle and limitations. > > Is this in Groovy? If not, could it easily be added? It's been there since the beginning of Groovy, but is probably "under-documented" In fact, you've used that feature each time you've used Groovy builders or POGOs (Plain Old Groovy Objects). All classes written in Groovy extend GroovyObjectSupport: Notice the invokeMethod() method? Let's take an example: class Foo { void methodOne() { println "invoked methodOne()" } int methodTwo(bar) { println "invoked methodTwo(${bar})"; return 255 } def invokeMethod(String methodName, Object params) { println "invoked ${methodName}()" } static void main(args) { def f = new Foo() f.methodOne() def i = f.methodTwo("bar") f.inexistantMethod() } } Execute it, and you'll notice that standard methods are called, and if a method doesn't exist, it's routed to the invokeMethod() method. I guess that's what you were looking for? Many objects in Groovy implement that interface (GroovyObject) or extend that class (GroovyObjectSupport), like Expandos, builders, or Proxy, etc... -- Guillaume Laforge
http://article.gmane.org/gmane.comp.lang.groovy.user/4253
crawl-002
refinedweb
231
63.8
Make your spider multi-threaded. Project description MSpider A Multi-threaded Spider wrapper that could make your spider multi-threaded easily, helping you crawl website faster. :zap: Note that this is for python3 only. Install MSpider could be easily installed using pip: pip install mspider Quick Start Automatically create a MSpider cdto the folder you’d like to create a MSpiderin terminal or cmd, then type genspider <your spider name>, such as: $ genspider test A file test.pythat contains a MSpideris created successfully if seeing the following information. create a spider named test. Open the spider file test.py. Find self.source = []in line 14, and replacing it by the sources (usually a list of urls) you’d like to handle by the spider, such as: self.source = ['', ''] Each element of the self.sourceis called src_item, and the index of src_itemis called index. Find the function basic_func, where you could define your spider function, such as: def basic_func(self, index, src_item): url = src_item res = self.sess.get(url) html = res.content.decode('utf-8') # deal with the html # save the extracted information Run the spider to start crawling. $ python3 test.py You just input the number of source items handled by each thread (BATCH SIZE) in the terminal or cmd, then return it,. Mannually create a MSpider Standard import the MSpider. from mspider.spider import MSpider Define the function of your single threaded spider. Note that this function must has two parameters. index: the index of source item src_item: the source item you are going to deal with in this function, which is usually an url or anything you need to process, such as a tuple like (name, url). def spi_func(index, src_item): name, url = src_item res = mspider.sess.get(url) html = res.content.decode('utf-8') # deal with the html # save the extracted information Now comes the key part. Create an instance of MSpiderand pass it your spider function and sources you’d crawl. sources = [('github', ''), ('baidu', '')] mspider = MSpider(spi_func, sources) Start to crawl! mspider.crawl() Then you will see the following information in your terminal or cmd. You just input the BATCH SIZE,. Usages The mspider package has three main modules, pp, mtd and spider pphas a class of ProxyPool, which helps you get the proxy IP pool from xici free IPs. Note that there are few free IPs could work, so try not to use this module. If you’d like to use proxy IP for your spider, this code may be helpful for you to write your own proxy pool. mtdhas two classes, Crawlerand Downloader Crawlerhelps you make your spider multi-threaded. Downloaderhelps you download things multi-threadedly as long as you pass your urls in the form of list(zip(names, urls))in it. spiderhas the class of MSpider, which uses the Crawlerin module mtd, and has some basic configurations of Crawler, so this is a easier way to turn your spider into a multi-threaded spider. Usage of pp.ProxyPool from mspider.pp import ProxyPool pool = ProxyPool() # Once an instance of ProxyPool is initialized, # it will has an attribute named ip_list, which # has a list of IPs crawled from xici free IPs. print(pool.ip_list) """ {'http': ['', '', '', '', '', '', '', '', '', ''], 'https': ['', '', '']} """ # Randomly choose an IP protocol = "http" # or "https" ip = pool.random_choose_ip(protocol) print(ip) """ '' """ # Update the IP list pool.get_ip_list() pool.check_all_ip() # Request an url using proxy by 'GET' url = "" res = pool.open_url(url) print(res.status_code) """ 200 """ # Request an url using post by 'POST' url = "" data = {'key':'value'} res = pool.post(url, data) print(res.status_code) """ 200 """ Usage of mtd.Downloader from mspider.mtd import Downloader # Prepare source data that need download names = ['a', 'b', 'c'] urls = ['', '', ''] source = list(zip(names, urls)) # Download them! dl = Downloader(source) dl.download(out_folder='test', engine='wget') """ [INFO]: 3 urls in total. [INPUT]: BATCH SIZE: 1 [INFO]: Open threads: 100%|███████████████| 3/3 [00:00<00:00, 3167.90it/s] [INFO]: Task done. [INFO]: The task costs 0.3324 sec. [INFO]: 0 urls failed. """ Usage of spider.MSpider See this in Quick Start. License Licensed under the MIT License. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/mspider/0.2.3/
CC-MAIN-2022-33
refinedweb
701
67.96
freopen(), freopen64() Reopen a stream Synopsis: #include <stdio.h> FILE* freopen( const char* filename, const char* mode, FILE* fp ); FILE* freopen64( const char* filename, const char* mode, FILE* fp ); Since: BlackBerry 10.0.0 Arguments: Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The freopen() and freopen64() functions close the open stream fp, open the file specified by filename, and associate its stream with fp. TheF - The underlying file descriptor is invalid or doesn't support the requested mode change. - EBADFSYS - While attempting to open the named file, either the file itself or a component of the filenameINTR - The fre is no memory for FILE structure. - ENOSPC - The directory or filesystem that would contain the new file can't be extended. - ENOSYS - The freopen() function isn't implemented for the filesystem specified in filename. - ENOTDIR - A component of the filename prefix isn't a directory. - ENXIO -. Examples: ; } Classification: freopen() is ANSI, POSIX 1003.1; freopen64() is Large-file support Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/f/freopen.html
CC-MAIN-2019-35
refinedweb
193
58.38
Archives ScrewTurn Wiki Kind of a crazy name for a piece of software (in this politically correct world, the use of "screw" doesn't go over very well with some management) but a really great example of Open Source in action. I was hunting around for a wiki for our development documentation and standards. My first thought was SharePoint but we're not rolled out yet to 2007 and I didn't want to bank on that yet. I wasn't quite sure what I was looking for, but needed a wiki that did basic features and had a few "must-have" features (like AD integration and content approval). A great site for checking out and comparing wikis is WikiMatrix. This site lets you compare all of the wiki software packages out there and even includes a wizard to step you through what you're looking for (OSS, platform, history, etc.) and gives you a nice side-by-side comparison page (much like comparing cars) to help you select a package. First I took a look at FlexWiki which was fairly popular and easy to setup. I had set it up on my laptop before as I was toying around with using a wiki as my home page. FlexWiki was simple and more importantly (for me anyways) it was C# and windows based so if I wanted to extend it, play around, write extensions, etc. then that would be bonus. Flex is nice and if you don't look at anything else, probably suites the purpose (although CSS-style customization seems to be pretty complex). While I was leaning towards C# type wikis, I knew that the best and most mature ones were PHP/MySQL based (like the one Wikipedia runs on, MediaWiki). However I just didn't want to introduce another stack of technology at my client just for the purpose of documentation. Finally I stumbled across ScrewTurn Wiki. Like Flex, it was easy to setup and like my favorite blogging software (dasBlog) it could be file based so you could just set it up and go. I installed ScrewTurn and messed around with it and it worked well. We handed the duties of really digging into it over to a co-op student we have for the summer and he's really gone to town with it. AD integration was added (it was always there, I just didn't enable it) and he's found some plugins and even written some code to extend it. What's very cool about ScrewTurn is that the common pages are written in C# and live as .cs files on your server. You just edit them and override methods, introduce new ones, whatever. New functionality without having to recompile assemblies or anything (everything is just JIT'd on the fly). Anyways, ScrewTurn looks like a very good IIS based wiki if that's your thing. I find it more mature than Flex, written in C# 2.0 and has a lot of great features. Like I said, if you have a LAMP environment in your world then you might want to look at something like MediaWiki but for a Microsoft world, ScrewTurn is da bomb. The plugin support is great and I'm hoping that the community will step up and build new plugins for the system so it can grow into something special. So you might want to give ScrewTurn a try if you're looking for a simple documentation system for your team. ReSharper 3.0 Beta Available Great stuff from the guys that make the cool tools, JetBrains anounces the release of their latest beta version of ReSharper 3.0. This release includes refactoring for C# and Visual Basic.NET now. The C# side has been beefed up so it gives you some code suggestions that you may or may not choose to implement. Also with this release is XML and XAML support (handy when working with Silverlight now), a neat feature called "Go to Symbol" navigation which I'm prefering over "Go to Declaration", a smart TODO list, and a reworked Unit Test Runner (although I still prefer TestDriven.NET). You can grab the beta from here. I'll see if I can find some time and put together some screenshots or (gasp) a webcast on the features as talking about them is rather boring. Enjoy!. Whoever Has the Most Toys, Wins Or do they? In the past couple of years, Google, Microsoft, and Yahoo have been buying up little niche products and technologies and absorbing them into their collectives like mad little scientists working in a lab. It reads like a whos-who of the IT world. Feedburner - Very recent purchase and something many of us bloggers use. Good move Google! Blogger - Awesome concept but I think it went by the wayside as Blogger became the MySpace for blogger-wanna-bes. Still some good bloggers use it, but I don't think it's panned out the way Google wanted it to. Picasa - Neat desktop app that I tried out a few times and well done. Hopefully they'll do something with this in the near future. Youtube - Biggest acquisition that I know of (who has this much money besides Microsoft?) but with thousands of videos being pulled every day by Viacom or someone else threatening to sue, I wonder what the future holds. DoubleClick - Have no idea what this is all about as DoubleClick is the evil of the Internet. Maybe they bought it to kill it off (doubt it). Yahoo Flickr - Probably the best photo site out there, many new features being added all the time and nobody nearly as interesting as these guys out there in this space. Konfabulator - Never really caught on and too many people compared it to the Mac desktop (which already had this capability OOTB). Windows Gadgets tries to be like this but again not a huge community swelling up around this. del.icio.us - Next to Flickr, one of the best buys for Yahoo and the best social bookmarking system out there. Microsoft Connectix - Huge boost in the virtual space, although I think it still trails behind VMWare Groove - What put Ray Ozzie on the map is now part of Office 2007 and still growing. Winternals - Best move MS made in a long time, awesome tools from these guys who know the inner workings of Windows better than Microsoft in some cases FolderShare - Great peer-to-peer file sharing system, but hasn't really taken off has it? There's a bunch more but I didn't want to get too obscure here. There's a very cool graph here that will show you the acquisitions and timelines. Who's Left? And here's the hot ticket items these days that are still blowing in the wind. It's anyone's guess who goes up on the block next and who walks away with the prize. Facebook - Whoever gets this gets gold at 100,000 new members a day (!). My money is on MS to pull out the checkbook any day now. Digg - Kevin Rose, who's already probably laughing his way to the bank will cash in big time on this if someone grabs it. Maybe Google to offset the Yahoo del.icio.us purchase? Slashdot - Yeah, like anyone would want this except to hear Cowboy Neal talk about himself (don't worry, Slashdotters don't read my blog - I hope) Any others? (SharePointKicks... yeah I wish) Maybe it's a good, maybe it's bad, my question is who will end up with the most toys? Or maybe once all the little ducks are bought up the three top dogs will duke it out with one winner walking away. UFC at the Enterprise level kids. Should be a fun match.). Dude, where's my alarm clock? I just realized, after running Windows XP for 5+ years, that there's no built-in alarm clock. I needed one as I was just dozing on the couch and couldn't be bothered to go to the bedroom and I had no other way to wake up. I figured I would just use XP. I mean, it must have an alarm clock after all. Nope. Nothing that I can find. Had to download a cheap freebie. Does Windows Vista have one? Does nobody really need one except for me? Hmmm... maybe another WPF weekend project I could do to pass the time. An API for my crack addiction I'm addicted to crack. That crack is called Facebook. At first it was a silly thing. A social networking site with very little geek factor. It's fun to connect with old friends, make new ones, and generally keep on top of where people are and what they're doing. However I felt empty. A site like Facebook is just ripe for tearing into it and presenting and using the information you want the way you want to. The REST-like access to it seemed kind of klunky and you had to log in via a web page to obtain a session (there's a bit of a hack to do an infinite session, but it's just that, a hack). So I wasn't too interested in what it could provide. Now my crack addiction has a proper API and a developer toolkit. Finally I can actually do something with my addicition rather than just admire it. The toolkit requires a developer key (which you can get from facebook for free) and the .NET 2.0 framework. You can grab the tookit here. There's also a developer wiki you can checkout with lots of QuickStarts, videos, walkthroughs, tutorials, and discussions. Is it just me, or is everything here very MS centric? Maybe MS should just buy Facebook (as everyone else is buying everything else out there) and call it a day. Of course they would have to rewrite it since it seems to run in PHP, but with dynamic languages and the .NET framework in the pipeline it could probably just be converted on the fly. I'm still waiting for my invite to come through for Popfly, but in the meantime this will keep me happy as I write up some cool new Silverlight/Facebook apps on SharePoint. Yeah, nothing like mashing up all kinds of new stuff together to see how it works.. Woohoo! Finally... This makes the ASP.NET Weblogs upgrade and me not being at DevTeach all that much better. Sleep, I knew you well... Scrum for SharePoint Agile teams are all about co-location and communication. We have a wall where tasks are posted. The wall is life. It is the source of truth. From the wall, the ScrumMaster (me generally) enters in the hours remaining for tasks and updates some backend system (in our case, VSTS with the Scrum For Team System templates). There are many tools out there to do Scrum, XP, etc. and keep track of your items. I think I did a round up of the tools out there but I missed one. SharePoint. Yup, my two favorite topics, SharePoint and Agile, come together. A friend pointed me to an article on Redmond Developer News (a new feed I didn't even know about and one that looks pretty cool) by David Christiansen called Building a Virtual Bullpen with Microsoft SharePoint. Basically he walks you through creating a digital bullpen, complete with product backlogs and sprint backlogs all powered by SharePoint. And easy to do, with a few custom views and all standard web parts and controls. I remember Scott Hanselmen mentioning that they used SharePoint for Scrum awhile back on an episode of Hanselminutes. He said it worked well for them. I've setup clients using standard out-of-the-box lists to track Product Backlog items and such. The only thing 2003 won't give you are burndown charts. With Excel Services, a little bit of magic, and MOSS 2007 behind the scenes this now becomes a simple reality. Check out the article to get your virtual bullpen setup and drop me a line if you need a hand (or just want to share with the rest of the class). What the heck happened to ASP.NET Weblogs? I see (or guess) that ASP.NET Weblogs (where this blog is hosted) upgraded to a newer version of Community Server but boy it doesn't look good. Besides the change in the control panel and things, the look is pretty different on my blog, the sidebar has a few broken things now that I had to remove, but most importantly isn't showing any content. Hopefully they'll have this fixed soon. Normally they announce major upgrades and such, but I guess you get what you pay for (free) so I can't complain too much. Update: Seems a lot of people are complaining about the upgrade. Things are a little messed up here as the CSS has changed. I use the stock Marvin3 from the old .Text blog but it changed (or something around it) so there's additional white space and padding everywhere on the site. Other blogs that are using custom skins/css are really messed up. I noticed Frans' tags are just plain ugly and unreadable. In addition JavaScript is disabled for the sidebar so I had to remove a few links I had and uploading images is disabled (or some kind of security problem is afoot). A couple of other problems were that the tag filtering doesn't seem to be working. On Weblogs, if we tag entries with certain tags they show up on the main page. Now it seems everything is getting up there. I caught a comment by Rob Howard on another blog saying that emails had been sent out regarding the upgrade, but only 115 went out then suddenly stopped, as if thousands of processes suddenly cried out in terror and were suddenly silenced. What a mess. A Scrum by any other name... I'm not getting it. I'm seeing a lot of posts about "Feature Driven Development" (or FDD for short) but I'm just not getting it. All I see is Scrum with different terminology. I was reading the Igloo Boy's blog where he's off at DevTeach 2007 (man I'm so jealous, Montreal in the summer time with geeks) and he posted his review of a FDD session with Joel Semeniuk and I just don't see the bru-ha-ha about FDD. Definition FDD is defined as a process defined and proven to deliver frequent, tangible, working results repeatedly. In other words, what we try to achieve when using Scrum in software development. Characteristics FDD characteristics include minimum overhead and disruption, Delivers frequent, tangible, working results, Emphasizes quality at each step, Highly iterative. Again, Scrum on all fronts. Features FDD centers around working on features (Product Backlog Items in Scrum) which have a naming convention like: <action> the <result> <by|for|of|to> a/an <object> Like user stories where: As a/an <role> I would like to <action> so that <business benefit> Feature Sets FDD Feature Sets is a grouping of features that are combined in a business sense. In Scrum we've called those Themes. So am I way off base here or are we just putting lipstick on a pig? Are we just packaging up Scrum with a different name in order to sell it better? Wikipedia lists FDD as an iterative and incremental software development process and a member of the Agile methods for software delivery (which includes Scrum, XP, etc.). There are differences here between Scrum and FDD, like reports being more detailed than a burndown chart (however for me, a burndown chart was more than enough information to know where we were and where we're headed). Practices include Domain Object Modelling (DDD?) and teams centered around Fetures, but again this is just (to me) just Scrum organized a certain way. I would hazard to say I already do FDD because to me it's all about the domain and business value. Or maybe this is a more refined take on Scrum. Scrum with some more rigor around focusing on the goal? A rose by any other name... I must be missing something here. Read it, live it, love it! If you're struggling with getting in touch to deliver what your customers really want, try this. To me, this is what Agile is all about. Print out the big version of this (available here), put it up on your wall (in your face) and read it every morning before you start. Really. Have you tried out Planning Poker? I'm a big fan of the Planning Poker technique for estimating. It basically is a process where everyone in the room gets together with cards and estimates effort for user stories. Each card has a number on it (a modified fibinacci sequence) of 0, 1, 2, 3, 5, 8, 13, 20, 40, and 100. Then everyone reveals their estimate for a given story at the same time. Any estimates on the fringe are challenged and justified and an estimate is arrived, then the process is repeated for the next user story. Mike Cohn and the Mountain Goat Software people have put together a fabulous website to solve a problem with planning poker, that is the one of remote users. It doesn't help planning poker if the users are off in another city so planning poker, the site solves that. You create an account (free and only requires 4 fields of information) and log in. Then someone creates a game and everyone participates online. Its a great way of doing this, and you can export the results to HTML and CSV at the end of the session. There's even a 2 minute timer that you can invoke once you've discussed enough about a story and a ready to estimate. Some people have even used it internally by everyone bringing their laptops to the session rather than even using physical cards. So check out Planning Poker for yourself as it might be useful in your next planning session. Here are some screenshots that walk you though creating user stories and estimating them with the planning poker site. When you log into the site you can view the active or complete games. Complete games can be re-opened if you need to do them across a few days: To create a new game, click on the link and fill in the name and description. If you have stories already ready to estimate, you can paste them into the field here from a spreadsheet. The first row should contain the field names. To add a story for estimating, just enter it in the form As a/an <role>, I would like to <function> so that <business value>. There's also room for notes that might be something you want to capture but make it light, this isn't the place for requirements gathering details here. Once you've added a story, the estimating game begins. Select a card from the screen for that story. Then you can accept or play again with that estimate. Your estimate shows up along with others (if they're logged into your game). If you were wrong with your original estimate or there's debate on something and you really do agree it's larger/smaller, click play again and select a different estimate. When all the estimates are done and the game is complete you can view all of the estimates online. Finally if you want to take this information elsewhere, you can export it HTML for viewing/publishing/display or to a CSV file for importing into another tool. Note that Planning Poker doesn't work very well under IE7 on Windows XP but the guys are working on it. I flipped over to use Firefox for the screenshots and use FF when I do my sessions using the tool. Feeds not working on weblogs.asp.net Not sure what the problem is but if you subscribe to my feed (via my feedburner url,) you may have noticed that the feeds around here haven't been updated. In fact there's no new feed since May 1st. The feedburner feed is stagnant but what's more disturbing is that feedburner is working correctly, it's the source of the feed from Community Server and weblogs.asp.net that isn't updating. I checked the private feed (sans feedburner) and it also shows May 1st as the last post. I sent a note off to the weblogs.asp.net guys but haven't heard back. I'm posting this here in the hopes that someone on weblogs.asp.net is seeing the same problem (and it's not just me) and maybe something gets done about it. Maybe I forgot to pay the rent on my site? ;) Update: I've been clicking on other peoples RSS link and not seeing items from the past week for many blogs. So either the feeds are not getting through on my end (DNS problem? I doubt it) or something is messed up on weblogs.asp.net/Community Server. Source Code Posted for SharePoint Forums Web Part. Change Sets Restored on CodePlex for Tree Surgeon For those that have been playing along, CodePlex suffered a bit of a hiccup awhile ago and some data was lost. The Tree Surgeon project was one of the casualities (along with some of my other projects). The CodePlex team got the work items restored, but the change sets and source tree was lost. Luckily we had two backups, the zip file and I had several local copies of it on various hard drives and backups. I've rebuilt the latest change set on Code Plex so you can hook up and grab the source if you're a member of the team to work on things (or just grab the latest change sets as we get more work done). I'm just spending some time tonight to update other projects and get new or updated source code uploaded to CodePlex so watch for more announcements over the next few days. Thanks! Generic List Converter Snippet A useful little class that you might find you need someday. It converts a weakly-typed list (like an IList of items) into a strongly typed one based on the type you feed it. It's super simple to use. For example let's say you get back an IList<People> from some service (a data service, web service, etc.) but really need it to be a List<People>. You can just use this class to convert it for you. I know, silly little example but just something for your code snippets that you can squirrel away for a rainy day. 1 public class ListToGenericListConverter<T> 2 { 3 /// <summary> 4 /// Converts a non-typed collection into a strongly typed collection. This will fail if 5 /// the non-typed collection contains anything that cannot be casted to type of T. 6 /// </summary> 7 /// <param name="listOfObjects">A <see cref="ICollection"/> of objects that will 8 /// be converted to a strongly typed collection.</param> 9 /// <returns>Always returns a valid collection - never returns null.</returns> 10 public List<T> ConvertToGenericList(IList listOfObjects) 11 { 12 ArrayList notStronglyTypedList = new ArrayList(listOfObjects); 13 return new List<T>(notStronglyTypedList.ToArray(typeof(T)) as T[]); 14 } 15 } 6 Months of Sprints - A Visual Record I thought I would start off the week by airing my dirty laundry, that laundry being one of the projects I'm Scrum Master and Architect on. It's been 6 months of month long iterations and the project is currently on hold as we shifted resources around to a fewer "higher" priority ones. I'm looking back at the iterations and the burndown charts that we pulled out (via Scrum for Team System). It's not pretty but it's real and it's interesting to see the progress (or sometimes lack of it) along the way. Here we go... Sprint 1 The sprint starts and we're not off to a bad start. In fact, it's one of the better sprints we had, even if it was the first. A little bit of churn the first few days but that's normal. In fact it was a collision between myself and the PM who decided to enter all the tasks again his way. After a few days we got that part straightended out and the rest of the sprint seemed to go pretty well. I was pretty happy so far. Sprint 2 Another sprint that didn't go too badly. The team (myself and one developer) had some momentum and we were moving along at a nice pace. However technical debt built up already and we ended the sprint with about 40 hours of work left. Still overall I was pretty happy and we seemed to have got our stride. We also picked up a new team member so there was that integration that had to happen, but it worked well for the team. Sprint 3 Third sprint, 3 months into the project and we were moving along. Sometime around the middle of the sprint we were going like gangbusters and realized we were going to end early. That's the big dip around the 17-20th of November 2006. Once we got back on the Monday (the 20th) we decided to add more work to the sprint, otherwise we were going to be twiddling our thumbs for 2 weeks. It worked out well for this sprint as we finished without too much overhead (about 12 hours but some of that was BA or QA work which didn't directly affect the project). Sprint 4 Ugh. This is ugly but bonus points for the first person to know why (other than who was on the team). The main cause for the burndown to go flatline here is the Christmas. Yup, with people on holidays and not really wanting to end the sprint in early January right when everyone got back, we decided to push the sprint out a little to make up for the lost time over the Christmas season. In addition to this, the first week of this sprint one of the main developers came down with the flu and was out of commission for almost a whole week. That crippled us. By the 22nd or 23rd of January we decided we had to drop a whack of scope from the sprint (which is the sudden drop at the end you see) and we would have to make it up the next sprint, somehow. Even with that adjustment we were still running about 225 hours over at the end of the sprint. Not a good place to be to start your next iteration. Sprint 5 Doesn't look good for the team that was doing so well. This sprint started off with a couple of hundred hours of deferred backlog items, then ballooned up with more technical debt and decomposition of new tasks. The team was larger now but we obviously took on more than we could chew. In fact I remember going in saying that but I was shot down by certain PTB that said "it'll work itself out". Don't believe them! If your burndown charts are looking like this the first week in (and you can tell that the first week in) you'll certain to not succeed on an iteration. Hands down. I like to take a CSI approach to iterations, let the facts show what's going on not peoples opinion. If your burndown is burning up, you need to make adjustments and not "ride it out" because unless you have magical coding elves come in late at night (and I'm not talking about geeky coders who like to keep odd hours) then you're not going to make it, and it's pretty obvious. Sprint 6 This sprint was just a death march. 800 hours of work which included 80 hours for a task we outsourced to a different group (which really turned into 200 hours of work as that person just didn't turn in any kind of idea for how long it would take) and probably 200 hours of technical debt that's been building for 4 months. We actually got a lot done this sprint, about 200 hours worth of work which isn't bad for 3 developers, 1 QA, and 1 BA but it looks bad here. This is how we ended the project until it went stealth. No, we didn't shut the project down because the progress was horrible. As I said, it slipped down the priority chain and we, as an organization, felt it was better to staff a project with 4-6 developers and bring it home rather than 2-3 developers keeping it on life support. Hopefully this reality trip was fun for you and might seed a few things in your own iterations. Overall a few things to keep in mind on your own projects following Scrum: - No matter what tool you use, try to get some kind of burndown out of the progress (even if it's being drawn on a whiteboard). It's invaluable to know early on in a sprint what is going on and where things are headed. - If you finish a sprint with backlog items, make sure you're killing them off the first thing next sprint. Don't let them linger. - Likewise on technical debt, consider it like real debt. The longer you don't pay it down, the more interest and less principle you end up paying and it will cost you several times over. - If you're watching your sprint and by the end of the first week (say on a 2-3 week iteration) you're heading uphill, put some feelers out for why. Don't just admire the problem and hope it will go away. It might be okay to start a sprint not knowing what your tasks are (I don't agree with this but reality sometimes doesn't afford you this) but if you're still adding tasks mid-sprint and you're already not looking like you're going to finish, don't. It doesn't take a genius to figure out that if you can't finish what you're got on your plate you shouldn't be going back to the buffet. - Be the team. Your team is responsible for the progress of the sprint, not one individual so you succeed as a team and fail as a team. Don't let one individual dictate what is right or wrong in the world. As a team, if the sprint is going out of control, fix it. If a PM says "don't worry" when you can see the iceberg coming, don't sit back and wait for it to hit, steer clear because you know it's coming. Welcome to Magrathea Our 11th episode of the best damn podcast in the greater Calgary area, Plumbers @ Work, is now online. In this episode, we talk about Silverlight, the Calgary Code Camp, Silverlight, GoDaddy refunds, Silverlight, Rhino Mocks, Silverlight, the Entity Framework, Silverlight, and Halo 2. We finally wrap up the show by talking about Silverlight. You can view the details and links for this podcast here or directly download the podcast into your favorite ad-ridden, battery draining, lightweight, podcast player here. Magrathea? Maybe I should have called this post "09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0" just to get in the game. INVEST in your stories with SMART tasks An oldie but a goodie that I thought I would share with the class. For user stories: Independent Stories are easiest to work with if they are independent. That is, we'd like them to not overlap in concept, and we'd like to be able to schedule and implement them in any order. Negotiable A good story is negotiable. It is not an explicit contract for features; rather, details will be co-created by the customer and programmer during development. A good story captures the essence, not the details. Over time, the story. Testable A good story is testable. Writing a story card carries an implicit promise: "I understand what I want well enough that I could write a test for it." . For tasks:. Reconsituting Domain Collections with NHibernate We ran into a problem using NHibernate to persist our domain. Here's an example of a domain object; an Order class with a collection of OrderLine objects to represent the line in each order placed in the system. In the system we want to be able to check if an order exists or not so we use an anonymous delegate as a predicate on the OrderLine collection: 7 class Order 8 { 9 private IList<OrderLine> Lines; 10 11 public Order() 12 { 13 Lines = new List<OrderLine>(); 14 } 15 16 public bool DoesOrderExist(string OrderNumber) 17 { 18 return ((List<OrderLine>)Lines).Exists( 19 delegate(OrderLine line) 20 { 21 if (line.OrderNumber == OrderNumber) 22 return true; 23 return false; 24 } 25 ); 26 } 27 } 28 29 class OrderLine 30 { 31 public string OrderNumber; 32 public int Quantity; 33 public string Item; 34 public double Cost; 35 } This is all fine and dandy but when we reconsistute the object from the back-end data store using NHibernate, it blows its head off with an exception saying it can't cast the collection to a list. Internally NHibernate creates a PersistantBag object (which implements IList) but can't be directly cast to a List, so we can't use our predicate. There's a quick fix we came up with which is to modify the DoesOrderExist method to look like this instead: 16 public bool DoesOrderExist(string OrderNumber) 17 { 18 List<OrderLine> list = new List<OrderLine>(Lines); 19 return (list.Exists( 20 delegate(OrderLine line) 21 { 22 if (line.OrderNumber == OrderNumber) 23 return true; 24 return false; 25 } 26 ); 27 } This feel dirty and smells like a hack to me. Rebuilding the list from the original one when we want to find something? Sure, we could do this and cache it (so we're not recreating it every time) but that just seems ugly. Any other ideas about how to keep our predicates intact when reconsituting collections? GoDaddy and their crazy accounting system I got an email from GoDaddy today, where most of my domains are hosted, about a reduction of ICANN fees. I must say that GoDaddy is absolutely brilliant in crediting me the overpayment I've made as a result of the reduction of the fees: Dear Bil Simser,* -- $.15 has been placed into your Go Daddy® account with this customer number: 9999999. Your in-store credit will be applied to your purchases at GoDaddy.com® until it's gone or for up to 12 months, whichever comes sooner. If you have any questions, please contact a customer service representative at 480-505-8877. As always, thank you for being a Go Daddy customer. Sincerely, Bob Parsons CEO and Founder GoDaddy.com Wow. 15 cents for all my domains. What should I buy with this winfall first? A new laptop? A 42" LCD TV? I understand that it's the law and without sending this note out, I would probably be complaining about them stealing my $0.15 but it's akin to a bank sending you a cheque for $0.02 interest and is somewhat funny (even if others think its not)
http://weblogs.asp.net/bsimser/archive/2007/05
CC-MAIN-2015-18
refinedweb
5,968
70.63
On (01/07/08 18:04), Alexander Beregalov didst pronounce:> 2008/7/1 Mel Gorman <mel@csn.ul.ie>:> > I still have no useful reaction to this. According to Christoph Hellwig,> > this lockup has been appearing since lockdep was introduced but for some> > reason is easier to trigger now. It bisected to the two-zonelist changes> > but it still looks like a red herring as I cannot see how reclaim has> > changed significantly as a result of that patch.> > Do you wait reaction from me? Can I help?> As I mentioned, the lockup does not happen when lockdep is disabled.> Sorry for the slow response Alexander.This bug is likely fixed by commit 494de90098784b8e2797598cefdd34188884ec2ewhich will be visible publicly later when maintenance on master.kernel.orgfinishes. I included it below for convenience.The lockdep warning still exists but it is a false positive and should berelatively hard to trigger again. It would be nice to have confirmationof this.commit 494de90098784b8e2797598cefdd34188884ec2eAuthor: Mel Gorman <mel@csn.ul.ie>Date: Thu Jul 3 05:27:51 2008 +0100 Do not overwrite nr_zones on !NUMA when initialising zlcache_ptr The non-NUMA case of build_zonelist_cache() would initialize the zlcache_ptr for both node_zonelists[] to NULL. Which is problematic, since non-NUMA only has a single node_zonelists[] entry, and trying to zero the non-existent second one just overwrote the nr_zones field instead. As kswapd uses this value to determine what reclaim work is necessary, the result is that kswapd never reclaims. This causes processes to stall frequently in low-memory situations as they always direct reclaim. This patch initialises zlcache_ptr correctly. Signed-off-by: Mel Gorman <mel@csn.ul.ie> Tested-by: Dan Williams <dan.j.williams@intel.com> [ Simplified patch a bit ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>--- mm/page_alloc.c | 1 - 1 files changed, 0 insertions(+), 1 deletions(-)diff --git a/mm/page_alloc.c b/mm/page_alloc.cindex 2f55295..f32fae3 100644--- a/mm/page_alloc.c+++ b/mm/page_alloc.c@@ -2328,7 +2328,6 @@ static void build_zonelists(pg_data_t *pgdat) static void build_zonelist_cache(pg_data_t *pgdat) { pgdat->node_zonelists[0].zlcache_ptr = NULL;- pgdat->node_zonelists[1].zlcache_ptr = NULL; } #endif /* CONFIG_NUMA */
http://lkml.org/lkml/2008/7/3/315
CC-MAIN-2016-36
refinedweb
353
52.15
For example I have a poorly documented library. I have an object from it and I want to know what are the types of the arguments certain method accepts. In IPython I can run In [28]: tdb.getData? Signature: tdb.getData(time, point_coords, sinterp=0, tinterp=0, data_set='isotropic1024coarse', getFunction='getVelocity', make_modulo=False) Docstring: <no docstring> File: ~/.local/lib/python3.5/site-packages/pyJHTDB/libJHTDB.py Type: method point_coords Usually, functions in Python accept arguments of any type, so you cannot define what type it expects. Still, the function probably does make some implicit assumptions about the received object. Take this function for example: def is_long(x): return len(x) > 1000 What type of argument x does this function accept? Any type, as long as it has length defined. So, it can take a string, or a list, or a dict, or any custom object you create, as long as it implements __len__. But it won't take an integer. is_long('abcd') # ok is_long([1, 2, 3, 4]) # ok is_long(11) # not ok To answer the question: How can you tell what assumtions the function makes? help(funcname)) get_value.
https://codedump.io/share/Aa32pPCODRS0/1/how-to-determine-function-parameter-type-in-python
CC-MAIN-2018-26
refinedweb
189
56.15
? You can either launch it from python via sublime.run_command or you can create a key binding or create a menu entry. There's a tutorial linked from the root documentation page going into more details. I've reviewed those resources, but I still cannot figure it out. I don't have a view with which to work, so I'm confused as to how I can invoke the plugin that from the Python console built into Sublime. The tutorial shows to run the command as follows: view.run_command('hello') I've also tried using: sublime.run_command('hello') I've tried creating a key binding in my user file: { "keys": "super+shift+h"], "command": "hello" } ] All I'm trying to do at the moment is to just print something to the console that doesn't use the import sublime, sublimeplugin class HelloCommand(sublime_plugin.ApplicationCommand): def run(self, args): print "Hello" Ultimately, I'd like to make a plugin that will prompt me for a URL that will go fetch that file based on specific parameters like drive mappings. But to get there, I need to understand how to do basic stuff like this. It is sublime_plugin. Also check the Console (Ctrl- ' (apostrophe)) for any error message. Bah! I should have known it was some mundane typo. Thanks!
https://forum.sublimetext.com/t/how-are-applicationcommands-invoked-and-other-questions/7736/4
CC-MAIN-2016-18
refinedweb
217
64.51
import itertools class Solution(object): def threeSum(self, nums): res = [] n = len(nums) nums.sort() # O(n^2) for m in xrange(n): # middle l = 0 # left r = n - 1 # right while 0 <= l < r <= n - 1: if m == l: l += 1 continue elif m == r: r -= 1 continue sum = nums[l] + nums[m] + nums[r] if sum == 0: res.append(sorted([nums[l], nums[m], nums[r]])) l += 1 r -= 1 elif sum > 0: r -= 1 elif sum < 0: l += 1 # remove all duplicate result, and this is the reason causing slow res.sort() return list(u for u, _ in itertools.groupby(res)) My very easy understand solution with very slow runtime Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/28227/my-very-easy-understand-solution-with-very-slow-runtime
CC-MAIN-2017-39
refinedweb
130
66.07
ASP.NET Core - Setup MVC In this chapter, we will set up the MVC framework in our FirstAppDemo application. We will proceed by building a web application on top of the ASP.NET Core, and more specifically, the ASP.NET Core MVC framework. We can technically build an entire application using only middleware, but ASP.NET Core MVC gives us the features that we can use to easily create HTML pages and HTTP-based APIs. To setup MVC framework in our empty project, follow these steps − Install the Microsoft.AspNet.Mvc package, which gives us access to the assemblies and classes provided by the framework. Once the package is installed, we need to register all of the services that ASP.NET MVC requires at runtime. We will do this inside the ConfigureServices method. Finally, we need to add middleware for ASP.NET MVC to receive requests. Essentially this piece of middleware takes an HTTP request and tries to direct that request to a C# class that we will write. Step 1 − Let us go to the NuGet package manager by right-clicking on the Manage NuGet Packages. Install the Microsoft.AspNet.Mvc package, which gives us access to the assemblies and classes provided by the framework. Step 2 − Once the Microsoft.AspNet.Mvc package is installed, we need to register all the services that ASP.NET Core MVC requires at runtime. We will do this with the ConfigureServices method. We will also add a simple controller and we will see some output from that controller. Let us add a new folder to this project and call it Controllers. In this folder, we can place multiple controllers as shown below in the Solution Explorer. Now right-click on the Controllers folder and select on the Add → Class menu option. Step 3 − Here we want to add a simple C# class, and call this class HomeController and then Click on the Add button as in the above screenshot. This will be our default page. Step 4 − Let us define a single public method that returns a string and call that method Index as shown in the following program. using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; namespace FirstAppdemo.Controllers { public class HomeController { public string Index() { return "Hello, World! this message is from Home Controller..."; } } } Step 5 − When you go to the root of the website, you want to see the controller response. As of now, we will be serving our index.html file. Let us go into the root of the website and delete index.html. We want the controller to respond instead of index.html file. Step 6 − Now go to the Configure method in the Startup class and add the UseMvcWithDefaultRoute piece of middleware. Step 7 − Now refresh the application at the root of the website. You will encounter a 500 error. The error says that the framework was unable to find the required ASP.NET Core MVC services. The ASP.NET Core Framework itself is made up of different small components that have very focused responsibilities. For example, there is a component that has to locate and instantiate the controller. That component needs to be in the service collection for ASP.NET Core MVC to function correctly. Step 8 − In addition to adding the NuGet package and the middleware, we also need to add the AddMvc service in the ConfigureServices. Here is the complete implementation of the Startup class. using Microsoft.AspNet.Builder; using Microsoft.AspNet.Hosting; using Microsoft.AspNet.Http; using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Configuration; namespace FirstAppDemo { public class Startup { public Startup() { var builder = new ConfigurationBuilder() .AddJsonFile("AppSettings.json"); Configuration = builder.Build(); } public IConfiguration Configuration { get; set; } //) { app.UseIISPlatformHandler(); app.UseDeveloperExceptionPage(); app.UseRuntimeInfoPage(); app.UseFileServer(); app.UseMvcWithDefaultRoute(); app.Run(async (context) => { var msg = Configuration["message"]; await context.Response.WriteAsync(msg); }); } // Entry point for the application. public static void Main(string[] args) => WebApplication.Run<Startup>(args); } } Step 9 − Save the Startup.cs file and go to the browser and refresh it. You will now receive a response from our home controller.
https://www.tutorialspoint.com/asp.net_core/asp.net_core_setup_mvc.htm
CC-MAIN-2018-51
refinedweb
680
61.33
Schema handshake problem between server and unity client Hi developers, Did you guys try to use colyseus continuously develop some game? Just like you publish game A and later game B, but use the same server to hold these two games. I meat a problem. I develop game A and publish, example, a write codes about AGameRoom extends Room and AGameState extends Schema, and then I publish game A. Then I write codes about BGameRoom extends Room and BGameState extends Schema, i add the code to the same server and restart. The different game player will enter different game room and use different game logic. Then the game A unity client will crash at the Schema's implicit method 'handshake', because the server side has AGameState and BGameState, but game A client only has AGameState. How can I solve this? I read the schema source code, and I found context may help. Did anyone has the same problem like me? Thanks. - endel administrator last edited by endel Hi @jiabuda, I'm glad you're enjoying it and using it for your second game! You're right about the Context. By default the @type()annotation is going to use a "global context". By having all structures inside the global context, they're all going to be synched during handshake. I'm restructuring most of @colyseus/schemalately, and I'm going to add a utility to help with this, as follows: import { type, Context, DefinitionType } from "@colyseus/schema"; function createContext() { const context = new Context(); return function (definition: DefinitionType) { return type(definition, context); } } You can use this function to generate a special @type()that is going to bind to a specific context, so you can use a different one for each of your games. Example: // "game1.ts" export const type = createContext(); // other file import { type } from "./game1"; class State extends Schema { @type("string") // ... your definitions as usual } This utility is going to be available as Context.create()on a next version! Hope this helps. Cheers! Thanks endel, I use the same way as you provide above. And the u3d colleague already annotate the code inside handshake to prevent schema check. Cause we check the same code in colyseus.js, we found js type client did not check the schema. I think the schema should bind with specific room but not a context? How do you think about it. And another problem, when A game and B game share the same schema, that would be a problem. gameA use PlayeInfo Schema in ARoomSchema and gameB use PlayerInfo too in BRoomSchema. That would be a situation, the PlayerInfo could only implement one specific context but not two.
https://discuss.colyseus.io/topic/390/schema-handshake-problem-between-server-and-unity-client
CC-MAIN-2020-40
refinedweb
442
65.01
Crow C++ webserver software #define CROW_MAIN, unfortunately following CrowCpp/Crow#110 which originally described the problem seems to compile just fine. any ideas? #define CROW_MAINwith my PR. What exactly are you trying to test? I"ve tested inclusion in multiple source files and I've tested putting the crow include inside a header which itself is included in multiple source files (which basically is the same thirg as the first one) and both worked. // test.hpp #pragma once #include <crow.h> #include <string> inline static void start_crow_server(unsigned int port) { crow::SimpleApp app; CROW_ROUTE(app, "/") ([port]() { return "Hello World " + std::to_string(port); }); app.port(port).multithreaded().run(); } void test(); // a.cpp #include <test.hpp> #include <thread> int main(int argc, char** argv) { std::thread t([]() { test(); }); start_crow_server(8002); t.join(); return 0; } // b.cpp #include <test.hpp> void test() { start_crow_server(8001); } #define CROW_MAINinside the header (as expected) or leave it out (as expected). Only if I put it in one cpp file and one only it works (as expected). masterrather than a version number (e.g. blueprints.mdand routes.md), once a release is made the idea is to change those to the version number. This is because I don't want people to be confused by a v0.4in the doc when it doesn't actually exist anywhere else ## Disclaimerregion? to broken webpages (at least for me). //if there is a redirection with a partial URL, treat the URL as a route. std::string location = res.get_header_value("Location"); if (!location.empty() && location.find("://", 0) == std::string::npos) { #ifdef CROW_ENABLE_SSL location.insert(0, "https://" + req_.get_header_value("Host")); #else location.insert(0, "http://" + req_.get_header_value("Host")); #endif res.set_header("location", location); }
https://gitter.im/crowfork/community?at=619ab502b5ba9e5a11cade67
CC-MAIN-2022-27
refinedweb
280
53.58
Last week was the annual MVP Summit on Microsoft’s Redmond campus. We laughed, we cried, we shared stories around the campfire, and we even made s’mores. Ok, I’m stretching it a bit about the last part, but we had a good time introducing the MVPs to some of the cool technologies you saw at Connect() yesterday, and some that are still in the works for 2017. As part of the MVP Summit event, we hosted a hackathon to explore some of the new features and allow attendees to write code along with Microsoft engineers and publish that content as an open source project. We shared the details of some of these projects with the supervising program managers covering Visual Studio, ASP.NET, and the .NET framework. Those folks were impressed with the work that was accomplished, and now we want to share these accomplishments with you. This is what a quick day’s worth of work can accomplish when working with your friends. - Shaun Luttin wrote a console application in F# that plays a card trick. Source code at: - Rainer Stropek created a docker image to fully automate the deployment and running of a Minecraft server with bindings to allow interactions with the server using .NET Core. Rainer summarized his experience and the docker image on his blog - Tanaka Takayoshi wrote an extension command called “add” for the dotnet command-line interface. The Add command helps format new classes properly with namespace and initial class declaration code when you are working outside of Visual Studio. Tanaka’s project is on GitHub. - Tomáš Herceg wrote an extension for Visual Studio 2017 that supports development with the DotVVM framework for ASP.NET. DotVVM is a front-end framework that dramatically simplifies the amount of code you need to write in order to create useful web UI experiences. His project can be found on GitHub at: See the animated gif below for a sample of how DotVVM can be coded in Visual Studio 2017: - The ASP.NET Monsters wrote Pugzor, a drop-in replacement for the Razor view engine using the “Pug” JavaScript library as the parser and renderer. It can be added side-by-side with Razor in your project and enabled with one line of code. If you have Pug templates (previously called Jade) these now work as-are inside ASP.NET Core MVC. The ASP.NET Monsters are: Simon Timms, David Paquette and James Chambers - Alex Sorkoletov wrote an addin for Xamarin Studio that helps to clean up unused using statements and sort them alphabetically on every save. The project can be found at: - Remo Jansen put together an extension for Visual Studio Code to display class diagrams for TypeScript. The extension is in alpha, but looks very promising on his GitHub project page. - Giancarlo Lelli put together an extension to help deploy front-end customizations for Dynamics 365 directly from Visual Studio. It uses the TFS Client API to detect any changes in you workspace and check in everything on your behalf. It is able to handle conflicts that prevents you to overwrite the work of other colleagues. The extension keeps the same folder structure you have in your solution explorer inside the CRM. It also supports adding the auto add of new web resources to a specific CRM solution. This extension uses the VS output window to provide feedback during the whole publish process. The project can be found on its GitHub page. - Simone Chiaretta wrote an extension for the dotnet command-line tool to manage the properties in .NET Core projects based on MSBuild. It allows setting and removing the version number, the supported runtimes and the target framework (and more properties are being added soon). And it also lists all the properties in the project file. You can extend your .NET CLI with his NuGet package or grab the source code from GitHub. He’s written a blog post with more details as well. - Nico Vermeir wrote an amazing little extension that enables the Surface Dial to help run the Visual Studio debugger. He wrote a blog post about it and published his source code on GitHub. - David Gardiner wrote a Roslyn Analyzer that provides tips and best practice recommendations when authoring extensions for Visual Studio. Source code is on GitHub. - Cecilia Wirén wrote an extension for Visual Studio that allows you to add a folder on disk as a solution folder, preserving all files in the folder. Cecilia’s code can be found on GitHub - Terje Sandstrom updated the NUnit 3 adapter to support Visual Studio 2017. - Ben Adams made the Kestrel web server for ASP.NET Core 8% faster while sitting in with some of the ASP.NET Core folks. Summary We had an amazing time working together, pushing each other to develop and build more cool things that could be used with Visual Studio 2015, 2017, Code, and Xamarin Studio. Stepping away from the event, and reading about these cool projects inspires me to write more code, and I hope it does the same for you. Would you be interested in participating in a hackathon with MVPs or Microsoft staff? Let us know in the comments below Join the conversationAdd Comment Sounds like you guys had a lot of fun. I think it would be great to pair with some of the MVPs or Microsoft staff. Please let me know when and where it would take place and I’ll do my best to be there. It looks like the animated gif of my VS Code extension is not working for some reason. A preview is also available at Thanks for sharing. Yeomon. The Jamaican version of Yeoman. 😀 Joker What?! No mention of getting VB working on .NET Core? Feeling kind of left out. 😉 c#
https://blogs.msdn.microsoft.com/webdev/2016/11/22/mvp-hackathon-2016/
CC-MAIN-2017-04
refinedweb
965
71.65
). Note: the power consumption on the table refers to the ESP8266 as a standalone chip. If you’re using a development board, they have passive components that use more current. 89 thoughts on “ESP8266 Deep Sleep with Arduino IDE (NodeMCU)” This is best information from you. Thank you RNT. Many thanks from Russia 🙂 You’re welcome 🙂 Great articles. Thanks for sharing! Steve Hi Steve. Thanks 🙂 Hai nice tutoring. I have questions.Can I use deep sleep for 5 minute? Hi. Yes, you can use deep sleep for 5 minutes. Regards, Sara It seems like this should have been stated clearly near the beginning of the article. A bit misleading for some people. Hi Dave. Yes, you are right. I added a note at the beginning of the tutorial. Regards, Sara 300 μA isn’t it too much? It should be near 20 μA. Probably not accurate multimeter or fake chip You are talking about the ESP8266 as a standalone chip consuming 20 μA. However development boards have passive components that use more current… 300µA is consumed by the 10k resistor,replacing by a 220k drops current to 30µA Hi Sara. Is there any similar way that will allow me to wake up? The problem is I have only 3 seconds for the reed-switch and I need to connect and send web request. So not enough time even with static IP. Thank you in advance. Majo Hi Majo. Instead of deep sleep, you can add an auto power off circuit to your ESP8266. The ESP8266 will only cut power after the task is executed. You can follow these projects: Regards, Sara Ah, I just spotted it. I don’t have any ESP-01s so I didn’t know they had a “power on LED”. That’s why your minimum current is ~ 300uA. A red LED’s forward voltage is ~ 1.8V, so (3.3V – 1.8V) / 4700 (the current limiting resistor) = 313uA. It looks like it’s at the limit of your meter’s range, so 333uA (including the 20uA) is shown as 0.3mA. My Fluke has lower ranges. 🙂 One of the other Espressif docs showed the 20uA Deep Sleep current as being at 3.6V and I was running my D1 Mini a hair over 3V, so 16uA for me. can you show the exact circuit you use to have a 16uA? I’ve see also that there is a ‘sleep’ contact behind the d1 mini, what is it for ? Thanks! If you want to play with some other low-power modes, the Low-Power demo is released, It includes the elusive Timed/Forced Light Sleep that took *lots* of googling to figure out how to engage. Forced Light Sleep only sleeps at ~360 to 380uA, but it recovers with an interrupt in less than 5ms, compared to the 120ms+ that Deep Sleep takes to boot, and your program doesn’t have to start from scratch afterwards. Rui and Sara, Loved the video. However, you missed an important use case that I have been struggling with. I have a Feather that I want to put into deepsleep with a switch. The switch is too be used to turn both on and off. In the above ESP.deepSleep(0) code, the switch is only used to turn back on. While you might say just use the EN-GND pin (), I don’t know of a way to run code before the power is turned off, which is needed. Can you provide any suggestions? Is there any documentation I can look at. I have had no success in finding anything in my own searches. Thanks in advance. Hello, I have a very important decision to take based in the wake up from deep sleep. If ESP32, or ESP8266 can wake up from deep sleep based on RX pin status, then i use ESP. If not i have to use a regular transceiver. What is all about. I have an RFID reader connected to an NodeMcu – ESP8266 RX pin. When i read a card, i send the numbers of the card to a second ESP. I must use Deep sleep for ESP board as it is powered by batteries. I must wake up the ESP only when i do a read with the RFID reader, and this transmit the ID of the card to ESP RX pin. Between that, full deep sleep. I do not need RTC, no Wifi, no nothing. Just the ESP sleeping. Please help me on this. Thank you Hi. I answered your question on the other post: Regards, Sara Thank you Sara great post. ESP.deepSleep(0) is misleading as the ESP8266 cannot sleep for more than ~3.5h at the time. I think you should mention that. Great article otherwise! See thingpulse.com/max-deep-sleep-for-esp8266/ for details. Ha, that would have been my question 🙂 As I understand it, deepsleep(0) puts the DSP8266 to sleep forever or until a reset. Your reference discusses deepsleep max which is not the same. Managed to solve? I have the same problem Sara, this is a great article, thx. Do you have some advice how to handle deepsleep with connecting Bluetooth devices? Hi. Unfortunately, we don’t have any tutorials covering that subject. Regards, Sara Have you considered a tutorial on waking the ESP8266 from deepsleep(0) using the DS3231 Alarm (SQW)? Hi You put a lot of effort into these tutorials – thank you. I’ve got a Wemos ESP8266 board with an integrated 18650 battery. I want to use it to monitor the oil boiler (input and output temperature and burn time) which has no mains nearby other than the demand. I’d like to be able to put it into deep sleep to be woken up when it gets power through the micro USB charging input (ie when the demand mains power is back on). My only thought is to use a transistor to draw the RST down to ground when power is applied. Some research and fiddly soldering to get to the signal i need and the RST pin which is not on the header. Any advice would be appreciated. Thanks Maybe i misunderstand your requirements but I think you should be able to to use the transistor to connect and disconnect the GND of the ESP. That way it just powers off when there is no charge on the demand. And it powers back up when the demand switches on. Ofcourse you do need to put it to sleep when done so uses less power.I guess it’s not needed, but If you want the ESP to reset itself periodically during the demand is on. Then you could can solder GPIO16 to reset. Lauren – Thanks for replying. I really want the unit to be running all the time and reporting instantly when the burner is turned on or off. Also i want the temperatures reported every minute or so. Reporting is over WiFi to an MQTT broker and Home Assistant. When the boiler is off for a while temperature reporting can be at a greater interval. I was thinking I could wake up every minute to report temperature but wanted the appliance of power to also wake the unit up to give more accuracy to burn timings (which I will equate to oil consumption in Home Assistant). The battery will keep the thing going for over a day if i use modem sleep but I’m not sure if the charge will be sufficient in the summer when the boiler is only used for hot water. Still considering my options. My board has an LED on GPIO16 and it does not wake the unit up from deep sleep (I’ve just found this out). The LED does comes on at the end of the sleep period, but when connected to the RST it goes off again but does not restart the sketch. Maybe I’ll just let the battery run out when no meaningful measurements can be taken and the unit will restart pretty much as soon as the power is applied. Just doesn’t seem very professional. Aha yes that makes sense. Having both the timer and a external wakeup is exactly what i wasn’t able to get working either. In my most recent project i used an ATTINY13 for that purpose. I programmed the Attiny to wakeup when it gets an interrupt (in my case from a PIR Sensor) and i also programmed a watchdog so it always wakes up after 2 minutes of sleep. When the Attiny wakes up i put a GPIO to a transistor HIGH and then connect the ESP GND that way. I forward the input of the Interrupt PIN via a GPIO to the ESP so it knows if the PIR (Could be your “demand” input) was high or LOW. First step in ESP is to put a GPIO from ESP to ATTINY to HIGH so it knows the ESP is awake. After the sketch of the ESP is done, just before deepsleep (not sure if it’s really needed) i put that GPIO to LOW so ATTINY knows it can go to sleep and therefor it will also disconnect the GND of the ESP again. Attiny is more efficient in Deepsleep than ESP so it’s also a battery saver. I’ve done some testing and coding using light sleep (which I can waken with other GPIOs). Running it consumes about 50mA and in light sleep less than 10mA. So with the right duty cycle I get the average current about 10mA. The battery will keep it going for 12 days. It charges at about 0.5A so it only needs to be powered for 30 minutes each day (250mAh). I should get away with that in the summer. Now to code for real! Nothing else to do during lockdown! This is a great article. I wanted to know if the code will allow for a wake up to occur after a set amount of time and for an external button to wake up the esp8266 from sleep. In short, I want to be able to have the esp8266 periodically report that it is living as well as be able to be woken (maybe reporting its battery strength) and via the external button. I also wanted to do that. My trial code told me it was one or the other but not both. I’d be interested still if there is a way. Is it possible to simultaneously connect the button and gpio16 with rst to end deep sleep with a timer or a button? I would like to use this setup for knowing when my mailbox is opened. The problem is that in such a setup, the mailbox is closed most of the time. If a reed sensor is used, we need to know when the reed sensor has its magnet taken away from it as opposed to when a magnet is brought near the reed sensor. Can you provide guidance on how this can be done? Hhmm, don’t these reed switches come in normally-closed or normally-open configuration? these one have both amazon.nl/EsportsMJJ-stks-Schakelaar-Magnetische-Switches/dp/B075F3KG2K/ Hi Everyone, Could someone please point me to a reference regarding the ‘e6’ for microseconds? I don’t know whether it’s an ‘esp’ thing or Arduino IDE or ???, but I can’t find out. It’s just because I want to be able to pass (parse?) XX as a parameter to a function with switch() statements. — I could simply use longhand microseconds – but I am curious. Alternatively, how does one pass ’20e6′ as a usable parameter value? TIA, Chris I don’t know what you are referring to but it’s probably the usual notation for x 10^6. ie 20e6 is 20,000,000. So if microseconds, that’s 20 seconds. Aaaah! Of course, it’s that E ! I should have realised. I did have a deep memory of E… but after 30 or so General Anesthetics in my deep past, had misplaced it … @Marcel: Thanks for the link. Now I can sleep better 😉 In the article above: [qoute]To put the ESP8266 in deep sleep, you use ESP.deepsleep(uS) and pass as argument the sleep time in microseconds. In this case, 30e6 corresponds to 30000000 microseconds which is equal to 30 seconds.[/quote] I’m simply trying to find the original specs of the “e6” reference. I’m guessing it’s an Espressive thing, but where…? But instead of 30e6, … would not be better to use micros() or, delayMicroseconds(30); … or, is the ESP.deepSleep() function fussy? Hi everyone. I used this toturial and I have strange problem! After the code uploded I recive this text from serial monitor ets Jan 8 2013,rst cause:2, boot mode:(3,6) load 0x4010f000, len 3584, room 16 tail 0 chksum 0xb0 csum 0xb0 v2843a5ac ~ld m)!⸮⸮%⸮⸮Y ‘I⸮⸮q⸮P⸮⸮@ʎ⸮⸮%[email protected]$VQ⸮ in 74880 baudRate! any suggestion? Hi team, using the example code on an ESP8266 in VScode IDE I notice every time the processor wakup it stucks into boot process. Moving ESP.deepSleep() into loop() dosn’t help. Anny suggestions how to get out of the blocking situation? #include <Arduino.h> void setup() { Serial1.begin(115200); Serial1.setTimeout(2000); while(!Serial1) { } //Serial.printf(“\n\nCompiled from: %s at: %s %s”, FILE, DATE, TIME); //Serial.println(“\nESP8266_deep_sleep-timer-controlled”); Serial1.println(“\n\nback from sleep mode, normal performing 5 sec”); delay(5000); // do whatever you want here Serial1.println(“now going into deep sleep mode “); ESP.deepSleep(5e6); } void loop() { // ESP.deepSleep(5e6); yield(); } Thankyou Hi, Thank you for the advice. I went through the 8266 / BME280 / Thingspeak weather station and it drains a lot of battery, even with longer intervals. So I tried to insert the deep sleep command at the end in the sketch, moving the void loop after it, and removing the original delay time. I only works for one or two times and and then stop. Am I doing anything wrong? #include <ESP8266WiFi.h> #include “ThingSpeak.h” #include <Adafruit_BME280.h> #include <Adafruit_Sensor.h> #include <Wire.h> const char* ssid = “xxxxxxx”; const char* password = “xxxxx”; WiFiClient client; unsigned long myChannelNumber = xxxxxxxxx; const char * myWriteAPIKey = “xxxxxxxxxxxx”; unsigned long lastTime = 0; unsigned long timerDelay = 1800000; float temperatureC; float humidity; float pressure; float volt; float bateria; String myStatus = “”; Adafruit_BME280 bme; //BME280 connect to ESP8266 I2C (GPIO 4 = SDA, GPIO 5 = SCL) void initBME(){ if (!bme.begin(0x76)) { Serial.println(“Could not find a valid BME280 sensor, check wiring!”); while (1); } } void setup() { Serial.begin(115200); initBME(); WiFi.mode(WIFI_STA); ThingSpeak.begin(client); if(WiFi.status() != WL_CONNECTED){ Serial.print("Attempting to connect"); while(WiFi.status() != WL_CONNECTED){ WiFi.begin(ssid, password); delay(5000); } Serial.println("\nConnected."); } temperatureC = 1.034 * bme.readTemperature() - 2.8112; //calibrada Serial.print("Temperature (ºC): "); Serial.println(temperatureC); humidity = 0.95 * bme.readHumidity() + 7.26; //verificar calibracao Serial.print("Humidity (%): "); Serial.println(humidity); pressure = bme.readPressure() / 100.0F; Serial.print("Pressure (hPa): "); Serial.println(pressure); volt = analogRead(0); bateria = volt * 0.0029492188 * 1.3576; Serial.print("Bateria (V): "); Serial.println(bateria); ThingSpeak.setField(1, temperatureC); ThingSpeak.setField(2, humidity); ThingSpeak.setField(3, pressure); ThingSpeak.setField(4, bateria); // Write to ThingSpeak. There are up to 8 fields in a channel, allowing you to store up to 8 different // pieces of information in a channel. Here, we write to field 1. int x = ThingSpeak.writeFields(myChannelNumber, myWriteAPIKey); if(x == 200){ Serial.println("Channel update successful."); } else{ Serial.println("Problem updating channel. HTTP error code " + String(x)); } Serial.println(“going into deep sleep mode”); ESP.deepSleep(6e8); } void loop() { } Hi. What happens exactly when it stops working? Regards, Sara Hi Sara, thank you for your response! The string of funny symbols appears, as usual, followed by the message “Attempting to connect”, and then stops completely. Hi. I think this discussion is about that issue: Try one of the suggestions and see if that solves the problem. I hope this helps. Regards, Sara Great article! Followed it to the letter, and everything is working as needed. Except for when I power up the NodeMCU initially, it needs a push of the physical RST button and then everything runs and cycles correctly. The problem is, that the NodeMCU and the DHT22 are in a 3d printed case without access to the reset button. I’m using a wire from D0 (16) to RST pins and using the timer (not a push button). Any ideas what is causing that initial boot not doing anything (nothing on the serial monitor either) until pushing the RST button? Or a workaround? Thanks! Please disregard my previous comment. Once I unplugged it from my PC USB port and plugged into regular wall outlet, it started working once starting without having to hit the reset button. Strange! Hi ! Thx for this awesome tutorial ! I do have a question though… I built the circuit for the external wake up with a node MCU board and a button as an interrupt. It does work but not exactly as I’d like it to. The board actually wakes from sleep when the button is released, not when it is pressed ? My need is to be able to detect when water reaches a certain level (flood alert) and I was thinking of just putting 2 leads next to each other and hoping to wake the node MCU when the water connects these two leads (that would be when the button is pressed in on my board) but the way this works seems it would only wake up after the water level has dropped. Has anyone done anything of the sort before ? Thx for any hints ! Hi, In my last project with Wemos D1 mini, I use deep sleep with wake up pin connected to reset, using the function ESP.deepSleep(xxxe6)… Everything works fine except one thing: during the restart, the digital outputs go to 1.5v for a moment, even if they are not controlled by the software. It can activate the mosfet I will connect to one of these outputs. I put in the setup () digitalWrite (x, LOW), but I have not solved. The only way I find is to connect a 10uF capacitor…but I don’t like this solution because I will use the output with PWM analog write.. Do you know this behavior? Do you have any idea how to fix it? Hello Sara, In the tutorial you mention that RST has to be connected to GPio16 AFTER uploading the example sketch. I missed that line the first time experimenting. I guess that for the ESP-01 this would mean: ‘soldering after uploading’. Maybe you can expain the reason somewhat more. A WARNING: There are batches of ESP8266 chips for sale that do not recover from deepSleep(). The example below is from the CheckFlashConfig sketch with the sleep command added: Serial.println(“Going to deep sleep.\n”); ESP.deepSleep(10000000); This is what you see after the first boot: … removed a few lines… Flash ide size: 4194304 bytes Flash ide speed: 40000000 Hz Flash ide mode: DOUT Flash Chip configuration ok. Going to deep sleep. … ten seconds later it should boot but … ets Jan 8 2013,rst cause:5, boot mode:(3,6) ets_main.c There is nothing one can do about it, and the issue has been confirmed by others. I have been using plain 8266 chips (not DEV boards). Everything else seems to work more or less, so it may be just a bad batch. Thanks for sharing. I have a routine that makes my ESP8266 (wemos D1 mini) going to deep sleep afetr checkinh wifi spots. It works well, but now I can’t upload any new sketch from Arduino IDE : esptool.FatalError: Failed to connect to ESP8266: Timed out waiting for packet header Do you know any way to correct this ? Hi. If the board is sleeping, you wont be able to upload new code. You need to press the RST button and upload code right after to catch it awake. Regards, Sara Hello! I am trying to change this code to work for a vibration sensor instead of a button. I’ve copied the code but I am still having issues. I am following the schematic as close as I can, which means I’ve plugged in the sensor’s VCC cable to the RST slot instead of GPIO 16. How can I make this work? Do I need a signal inverter?
https://randomnerdtutorials.com/esp8266-deep-sleep-with-arduino-ide/?replytocom=731957
CC-MAIN-2022-27
refinedweb
3,425
75.61
For some reason, I can't get the images to crop and display correctly, even though I think my script makes sense logically. My code is posted below. You can make an image around the resolution of 300x2000 or so to use with this to see the problem I am having. Attached is my practice image that is rough, but works for now. My code starts printing outside of the area that I want it to show (outside the showsize variable) and I can't figure out why. Any help with this problem would be much appreciated. It seems that the crops don't cut the images short enough, but all the information that I found about it make me think my script should be working just fine. I've tried to annotate my code to explain what's going on. from Tkinter import * from PIL import Image, ImageTk def main(): root = Tk() root.title = ("Slot Machine") canvas = Canvas(root, width=1500, height=800) canvas.pack() im = Image.open("colors.png") wheelw = im.size[0] #wide of source image wheelh = im.size[1] #height of source image showsize = 400 #amount of source image to show at a time - part of 'wheel' you can see speed = 3 #spin speed of wheel bx1 = 250 #Box 1 x - where the box will appear on the canvas by = 250 #box 1 y numberofspins = 100 #spin a few times through before stopping cycle_period = 0 #amount of pause between each frame for spintimes in range(0,numberofspins): for y in range(wheelh,showsize,-speed): #spin to end of image, from bottom to top cropped = im.crop((0, y-showsize, wheelw, y)) #crop which part of wheel is seen tk_im = ImageTk.PhotoImage(cropped) canvas.create_image(bx1, by, image=tk_im) #display image canvas.update() # This refreshes the drawing on the canvas. canvas.after(cycle_period) # This makes execution pause for y in range (speed,showsize,speed): #add 2nd image to make spin loop cropped1 = im.crop((0, 0, wheelw, showsize-y)) #img crop 1 cropped2 = im.crop((0, wheelh - y, wheelw, wheelh)) #img crop 2 tk_im1 = ImageTk.PhotoImage(cropped1) tk_im2 = ImageTk.PhotoImage(cropped2) canvas.create_image(bx1, by, image=tk_im2) ##THIS IS WHERE THE PROBLEM IS.. canvas.create_image(bx1, by + y, image=tk_im1) ##PROBLEM #For some reason these 2 lines are overdrawing where they should be. as y increases, the cropped img size should decrease, but doesn't canvas.update() # This refreshes the drawing on the canvas canvas.after(cycle_period) # This makes execution pause root.mainloop() if __name__ == '__main__': main()
http://www.dreamincode.net/forums/topic/283518-cropping-and-displaying-images-using-pil-and-tkinter-problem/
CC-MAIN-2018-13
refinedweb
420
74.69
Date: Nov 11, 2012 8:35 PM Author: kirby urner Subject: Re: How teaching factors rather than multiplicand & multiplier<br> confuses kids! On Sun, Nov 11, 2012 at 5:19 PM, Robert Hansen <bob@rsccore.com> wrote: > > On Nov 11, 2012, at 8:01 PM, kirby urner <kirby.urner@gmail.com> wrote: > > Actually my thinking is quite self-consistent and disciplined > according to David Feinstein, Applied Mathematician. He likes that > about me. > > > Religions are consistent. Crazy people are consistent. Consistency is not a > sufficient criteria for formal reasoning. By chance, did this Feinstein > fellow say something like "Well, at least you are consistent."? > Anyway, consistency isn't always a great virtue, if it prevents "changing channels" i.e. hopping from one namespace to another wherein key terms may appear superficially the same but mean something quite different. Religions are not necessarily self-consistent in my view. Christianity is full of contradictions, as is the Bible. Pro-slavery and anti-slavery rhetoricians / theologians / pulpit-pastors both resorted to the Bible to bolster their respective cases, cherry picking choice quotes. I have no idea why you would take the position that religions are consistent. Seems like sloppy thinking to me. > > I teach logical thinking for a living one could say. To hundreds. > > > Formal reasoning is different than formal logic, if that is what you are > saying. > More I teach problem solving. Little puzzles. Takes skills and abilities very like what we encourage in math class to solve them. There's quite a bit of boolean algebra involved, also string manipulation. One of the problems is to implement exponentiation based on multiplication as a "composing" of two functions e.g. If f(g(x)) is what (f * g)(x) means, then write code to implement (f ** n)(x). That's one of the exercises. If all you learned in school were numeric operations and no string manipulation, then you might not have that "head start". Remedial regexps if you wanna to yer Bible camp. How else are you gonna count how many thees and thous. > I think my rhetoric is really top notch, one of the fastest horses on > the track (some days). > > > How would you rate the rhetoric of some of the best con men? I'll stick with > formal reasoning. > I'm honored if you rank me with among the best con men. That would include stage magicians and other illusionists. When you're willingly conned (suspended disbelief) it's entertainment, and is even possibly ethical. It's a con when you're setting something up, an illusion, with the intent to trick, topple or counter a foe -- a concept in business intelligence. "A con for con" is how some people (and companies) play. I have yet to consider you much of a formal reasoner though. I've been consistent in my assessment ("sloppy"), e.g.: But that's OK. I think you've joined a forum with many logical thinkers and some of it will rub off (has already rubbed off). You're a little smarter now, it seems to me, than when you first appeared. math-teach is a healthy playground for people from many walks of life. Kirby > Bob Hansen
http://mathforum.org/kb/plaintext.jspa?messageID=7921964
CC-MAIN-2016-40
refinedweb
532
67.96
. If you are new to Pygame consider reading first the article A Fast Introduction to Pygame. The video below shows the simulation in action. Background To make 3D simulations, first of all, we must describe the objects in 3D space. The most common way of describing 3D objects is through the use of a 3D cartesian coordinate system. We will use the system shown in the figure below, where Y increases going up, X increases going right, and Z increases going into the screen. This is called a left handed system. An alternative system, is the right handed system where Z increases in the opposite direction. Next, with points defined in 3D cartesian coordinate system, we can apply transformations such as translation, rotation, and scaling. Note that unlike 2D rotations that occur in one plane, rotations in 3D space occur along an arbitrary axis. Below I present the formulas for 3D rotation along the X, Y, and Z axes. Rotation along X: y' = y*cos(a) - z*sin(a) z' = y*sin(a) + z*cos(a) x' = x Rotation along Y: z' = z*cos(a) - x*sin(a) x' = z*sin(a) + x*cos(a) y' = y Rotation along Z: x' = x*cos(a) - y*sin(a) y' = x*sin(a) + y*cos(a) z' = z When we are finished applying the transformations, we must draw the 3D object into the screen. But, since the screen is 2D we will have to convert the 3D coordinates to 2D coordinates. This operation is called 3D projection, and there are many ways of doing it. I will cover just the common one that is the perspective projection. Below I present the formula. 3D Perspective Projection x' = x * fov / (z + viewer_distance) + half_screen_width y' = -y * fov / (z + viewer_distance) + half_screen_height z' -> for now, z is useless In the formula, fov is a constant value that defines the “field of vision”. I like to use values between 128 and 256. view_distance is the distance from the object to the viewer. With all the points converted to 2D space, we can finally draw the object using Pygame 2D drawing functions. The Code I present below the code to simulate the rotation of 8 points/vertices of a cube. To start with, let’s define the Point3D class to represent points in 3D space: import sys, math, pygame, 1) First,we have the constructor that initializes instances of the class. Next, we have methods to rotate a point around X, Y, and Z axes. These functions return the rotated point. Finally, we have the function that projects a point from 3D to 2D space. Next, we define the Simulation class that will do the rest of the job: class Simulation: def __init__(self, win_width = 640, win_height = 480): pygame.init() self.screen = pygame.display.set_mode((win_width, win_height)) pygame.display.set_caption("Simulation of 3D Point Rotation ()")) ] self.angleX, self.angleY, self.angleZ = 0, 0, 0 def run(self): while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() self.clock.tick(50) self.screen.fill((0,0,0)) for v in self.vertices: # Rotate the point around X axis, then around Y axis, and finally around Z axis. r = v.rotateX(self.angleX).rotateY(self.angleY).rotateZ(self.angleZ) # Transform the point from 3D to 2D p = r.project(self.screen.get_width(), self.screen.get_height(), 256, 4) x, y = int(p.x), int(p.y) self.screen.fill((255,255,255),(x,y,2,2)) self.angleX += 1 self.angleY += 1 self.angleZ += 1 pygame.display.flip() if __name__ == "__main__": Simulation().run() When this class is instantiated, it starts with initializing Pygame, and then setting up a window. Next, we create a clock object to lock the frame rate of the animation. Then, we define 8 points in 3D space. After that, we define the variables that will hold angles of rotation around each axis. The run() function runs the main loop where the points are rotated and drawn. First, it checks if a QUIT event is pending, and if so quits the application. Then, it locks the frame rate to 50 FPS, and after that clears the screen. Then, it traverses the vertex list, rotating each one around X, Y, and Z axes. Then, it projects the point, and draws it into the screen. After traversing the vertex list, finally, it increases the angle variables, and then updates the screen. The last line of code runs the simulation. Conclusion With a little bit of 3d computer graphics theory, we were able to make a simple 3d simulation using Pygame. In the next tutorial we will expand on the theory and code from this tutorial to make a rotating wireframe cube. To stay updated, make sure you subscribe to this blog, follow us on twitter, or follow us on facebook. If you’re running Python via IDLE, display window hangs on exit unless you do pygame.QUIT per… def run(self): while 1: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.QUIT sys.exit() Hello Nik! You are right. I will update the code to include pygame.quit() Attention that in your snippet you used pygame.QUIT when it should be pygame.quit() Thank you for pointing out this fact. Drat, it ate the white-space… ps: could you mention how you’re drawing boxes without any rectangle command ?? The statement below draws a 2×2 white rectangle: self.screen.fill((255,255,255),(x,y,2,2)) (255, 255, 255) is an RGB tuple defining WHITE color. (x, y, 2, 2) defines the rect where (x,y) are the coordinates of the upper left corner and (2,2) are width and height of the rect respectively. Thank you !! under def rotateX if you change it to 90 it will display all six sides, where as now it only displays 5 rad = angle * math.pi / 90 Thanks! I make simple 3d demo based on your code, running on microcontroller Hi, I want to create any object of any shape using wireframe.can you help me? I am particularly grateful for this tutorial, and seeing as you seem to have done the projection function in no more than a few lines, i am curious as to whether you could make a slightly more in-depth tutorial on this particular procedure. Just a quick note of thanks. I have spent the last three days trying to figure out to code a 3D Perspective Projection and this solution is so elegant and easy to follow. Nothing else comes close. Best wishes, Philip.
http://codentronix.com/2011/04/20/simulation-of-3d-point-rotation-with-python-and-pygame/
CC-MAIN-2017-09
refinedweb
1,098
66.33
On 11/30/2009 8:12 AM, markolopa wrote: > Hi, > > On 18 Sep, 10:36, "markol... at gmail.com"<markol... at gmail.com> wrote: >> On Sep 11, 7:36 pm, Johan Grönqvist<johan.gronqv... at gmail.com> wrote: >>> I find several places in my code where I would like tohavea variable >>> scope that is smaller than the enclosing function/class/module definition. >> >> This is one of the single major frustrations I have with Python and an >> important source of bugs for me. Here is a typical situation > > Here is another bug that I just got. Twenty minutes lost to find it... > > class ValueColumn(AbstractColumn): > def __init__(self, name, header, domain_names): > if type(domain_names) != tuple: > raise ValueError('a tuple of domain names must be given') > for name in domain_names: > if type(name) != str: > raise ValueError('a tuple of domain names must be > given') > self.domain_names = domain_names > super(ValueColumn, self).__init__(name, header) > > The upper class was initialized with the wrong name, because the for > loop to check > domain_names used "name" which is also the argument to be passed. > > If is an old thread but I am reopening to present real situation where > this Python > "feature" bothers me... > here is another bug you might have if python have an "even-more-local" scope: while True: s = raw_input("enter something: ") if s not in ('q', 'quit', 'exit'): break print s if the while block has become its own namespace; print s would generate NameError. It is one or the other, you will have problem caused by "namespace too small" or "namespace too big". Neither is better than the other, so python's two-level name resolution (global and local level) is the simplest one, is the better one.
https://mail.python.org/pipermail/python-list/2009-November/559814.html
CC-MAIN-2017-04
refinedweb
285
62.58
Opened 2 years ago Last modified 1 week ago Allow limit_choices_to to test for and invoke callable value arguments. This would allow filtering on dynamically determined values. It enhances the possibilities for customising the admin interface. This may be related to Ticket 2193 (sounds similar at any rate). For example: def assigned_tasks(): return get_assigned_tasks_id_list(blah, blah) class TimeRecord(models.Model): task = models.ForeignKey(Task, limit_choices_to = {'id__in': assigned_tasks}) test callable(value) in db/models/query.py I agree with the concept, but the patch is incorrect -- the check for callable() should be made only on the limit_choices_to values, not in the code that parses *every* query. Could you have another look? Removing [patch] from the subject, because the patch is invalid. I took another look - I only have limited familiarity with the code. The value of limit_choices_to is passed to complex_filter() in three places: django/contrib/admin/views/main.py django/db/models/manipulators.py django/db/models/fields/__init__.py I believe that calls to complex_filter() are the only places limit_choices_to is actually used. The limit_choices_to is always obtained via referencing some fragment such as .rel.limit_choices_to The "rel" class needs a method (hopefully in some base class) that evaluates its' limit_choices_to and returns a new hashmap: def get_limit_choices_to(self): limiters = {} if self.limit_choices_to: for id in self.limit_choices_to: value = self.limit_choices_to[id] if callable(value): value = value() limiters[id] = value return limiters The existing direct references to ref.limit_choices_to need to change to ref.get_limit_choices_to() Does this sound like I'm on the right track? I'm proceeding on these lines for now. Patch to allow evaluation of callables in limit_choices_to I attached a new patch that attempts to implement callables in limit_choices_to without involving query.py. I've put [patch] back in the header. The new patch is larger - I'm open to suggestions on how it might be more simply achieved. I like the idea (touched on in #1891) of getting rid of limit_choices_to in favor of a choices that accepts QuerySets? or callables. i think that callables should be passed a reference to self, at least if they accept it -- think of the following model: class Mother(models.Model): firstborn = models.ForeignKey('Child', limit_choices_to={'mother':lambda me: me}) class Child(models.Model): mother = models.ForeignKey('Mother', related_name='children') We discussed this in Chicago and I think the plan is to allow some special methods to generate the choices. class SomeOtherModel(models.Model): name = models.CharField(max_length=255) class MyModel(models.Model): test = models.ForeignKey(SomeOtherModel): def choices_for_test(self): """Hook for providing custom choices. Returns a queryset or an iterable of choices tuples.""" return SomeOtherModel.objects.all() Is anyone still working on this? I've been seeking this functionality so I wrote a patch that actually implements jkocherhanss way of dealing with this. By defining a function in your model called choices_forFIELDNAME you can limit the choices using data from the current row... The patch also allows for using QuerySets? for the choices attribute for any field... I am still working on getting it to work with edit_inline, but it works perfectly with normal models. This patch is still a bit unpredictable with edit_inline objects, but works with normal relationships. Accidentally included some debug print statements in the previous patch...whoops Patch needs work since it does not work with edit inlines. patch for newforms-admin branch. Just one line is different. Still doesn't work with inlines What's the problem with inlines? I've been trying to figure this out all day and I can't seem to figure out what's going on. It seem to be calling the choices_for* on out-of-date instances. The only thing I can think of is that the code in models/base.py isn't being called again, so there is an old choices_for* method attached to the field. I think the problem that appears with inlines is in attaching this method to the model's field (see content of patch). The choices_for__somefield method should not be a Field method it should be a model method (which is what it looks like anyways). This business of attaching it to the field needs to go away and the choices_for__somefield method needs to be called to filter out the choices for somefield within the context of some instance of the model (the self that is passed in). This problem goes away if you always create an instance of the class that has the limited-choice field in it before calling the choices method. Then the choices choices_for__somefield gets reattached to the field and when you ask the field what it's choices are (via get_choices method) you get what you expect...but this is not how inlines work right now, where an instance of the inlined object is not always instantiated before looking at the field's choices, hence you will get the choices of the last-instantiated child object! But re-laod the page again and it works. Only an instance of the model you are editing is instantiated (the non-inlined model) right away, which is why this hack works when editing the inlined model in non-in-lined mode (editing the child directly rather than through the parent). David's approach looks to be correct, however related fields don't actually have a limit_choices_to parameter, so I think we should combine this and #1891 and remove limit_choices_to entirely, and allow callables and querysets for choices, as this does. Do yo know if this patch is going to be applied to trunk in any momment? By Edgewall Software.
http://code.djangoproject.com/ticket/2445
crawl-001
refinedweb
931
57.67
SortField Since: BlackBerry 10.0.0 #include <bb/pim/calendar/SortField> To link against this class, add the following line to your .pro file: LIBS += -lbbpim The SortField class includes possible fields that can be used to sort calendar events. When you retrieve events using functions in the CalendarService class (such as CalendarService::events()), you provide an EventSearchParameters object that includes information about the results that you want. You can set a SortField in the EventSearchParameters that describes what fields you want to use to sort the search results. CalendarService, EventSearchParameters Overview Public Types Index Public Types An enumeration of possible sort fields for calendar events. BlackBerry 10.0.0 - Undefined 0 Indicates an undefined sort field. - Guid 1 Indicates that events are sorted by guid.Since: BlackBerry 10.0.0 - Subject 2 Indicates that events are sorted by subject.Since: BlackBerry 10.0.0 - Location 3 Indicates that events are sorted by location.Since: BlackBerry 10.0.0 - StartTime 4 Indicates that events are sorted by start time.Since: BlackBerry 10.0.0 - EndTime 5 Indicates that events are sorted by end time.Since: BlackBerry 10.0.0 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/cascades/bb__pim__calendar__sortfield.html
CC-MAIN-2015-48
refinedweb
205
62.24
Learn Java in a day Learn Java in a day  ... structure. Initially learn to develop a class in java. To meet this purpose we..., "Learn java in a day" tutorial will definitely create your Easiest way to learn Java There are various ways to learn Java language but the easiest way to learn..., video tutorials and lectures that help beginners learn in Java. This tutorials.... In these courses topics like wrapper class, get example, array list, linked list Get first day of week Get first day of week In this section, we will learn how to get the first day of ..._package>java FirstDayOfWeek Day of week: 7 Sunday Learn Java in 21 days Anyone can learn Java in 21 days, all they need is dedication and concentration. Java Guide at RoseIndia can help you learn Java in 21 days. The course... of the class, method, keyword, function and other important parts of a Java Class class. In the above example the sq is the object of square class and rect... Class, Object and Methods Class : Whatever we can see in this world all the Where to learn java programming language fast and easily. New to programming Learn Java In A Day Master Java Tutorials...Where to learn java programming language I am beginner in Java and want to learn Java and become master of the Java programming language? Where Day Format Example Day Format Example This example shows how to format day using Format class. In this program we use a pattern of special characters to day format. Description of the code Learn Java - Learn Java Quickly Learn Java - Learn Java Quickly  ... by the Java compiler into byte code that are stored in .class file. The JVM..., TextPad etc. Java program is a class with a main method in it. The main method Java nested class example Java nested class example Give me any example of Nested Class. Nested Class: Class defined within another class is called nested class... class. Example: public class NestedClass{ private String outer = "Outer learn learn i need info about where i type the java's applet and awt programs,and how to compile and run them.please give me answer Best way to learn Java Best way to learn Java is through online learning because you can learn... or example and then move forward. There are Java experts available 24X7 to guide..., explaining Java features, Java keywords, methods, class and how to use them the problem visit to : Calendar Design the class Calender so that the program can print a calendar for any month starting Junuary 1,1500.Note that the day Learn Java online can learn at his/her pace. Learning Java online is not difficult because every... and web-applications. JRE includes JVM (Java Virtual Machine) Java class library...Learning Java is now as easy as never before. Many websites today provide JDBC Training, Learn JDBC yourself programming language of java then fist learn our following two tutorials: Learn Java in a Day and Master... JDBC Connection Pooling Accessing Database using Java and JDBC Learn in 24 hours find it easy to understand the example. To learn Java in 24 hours, you must clear... you understand the concepts behind Java class, method, keyword, function, etc... systems. Java is based on class, method and objects. What is Learn Hibernate programming with Examples () Example - Learn how to delete the entity using Hibernate. Learn more...Learn the important concepts of Hibernate programming language... to use the Hibernate in real life project. Here you will learn the concepts Class C:\roseindia>java Classtest It's a example of class... class. Now we will learn, how to define a class having methods, objects... of the class "Classtest". Here is the Code of the Example Java Wrapper Class Example Java Wrapper Class Example In this section we will read about how to use... to objects of that class. In Java there are eight primitive data types and each data... about how to use the wrapper classes in Java. In this example we will create Learn java Learn java Hi, I am absolute beginner in Java programming Language. Can anyone tell me how I can learn: a) Basics of Java b) Advance Java c) Java frameworks and anything which is important. Thanks Servlet tutorials: Learn Servlet Easily Servlet is a Java programming language class, which helps to improve server... applications, without the performance limitations of CGI programs. Moreover, Java... and stays in memory. Java applet makes the tasks to be performed faster How to learn Java? to learn Java language than there are two ways you can do that. One is the classroom... training. A young developer who is seeking a career in Java can learn the language..., class, function, method, keyword, etc. there is to learn. Every topic Java Abstract Class Example Java Abstract Class Example In this section we will read about the Abstract class. Abstract class in Java is a class which is created for abstracting... this example into two parts, in first part we will create an abstract class and we Learn Servlet Life Cycle Learn Servlet Life Cycle Servlet is a Java programming language class... for service and the Servlet releases all the resources related to it. Java Virtual... collection. For complete Servlet Life Cycle Example: Click here Class Cast Exception Example in java Class Cast Exception Example in java We are going to describe class cast exception in java. The class cast exception is a converting one data type into another data type. A Class cast exception is a thrown by java. In this example How to create a class in java How to create a class in java I am a beginner in programming and tried to learn how to do programming in Java. Friends please explain how can I create a class in Java Working with java Collections class Working with java Collections class  ... with Collections class of java.util package. Actually java collections framework... of Collections class. For example- add element, copy elements, reverse data Class Average Program ; This is a simple program of Java class. In this tutorial we will learn how to use java program for displaying average value. The java instances of the class, represent classes and interfaces in a running Java application Java Calendar Example The following Java Calendar Example will discuss java.util.Calender class... of the Calendar class is used to display various values. Code of Java Calendar program... of the calendar class returns a Date object. It is then passed to the println Java Stack Example Java Stack Example In this section you will learn about Stack class ... the item in the stack and how far it is from top of stack Example : A Java code how stack is used in java. import java.util.*; public class StackDemo Learn Java Programming in it, Learn Java programming online with tutorials and simple examples that explains every single code. The online learning of Java begins with simple example... of the programmer wants to learn this language quickly. The basic of Java tutorials with out class - Java Beginners with out class can we write a program with out a class in core java?if yes,give example Java RandomAccessFile Example Java RandomAccessFile Example Here you will learn about the RandomAccessFile class. In the given code, we have used BufferedWriter class to write even number... of file. Example: import java.io.*; class RandomAccessFileExample { public Learn the Java 8 and master the new features of JDK 8 JDK 8 - Java 8 Tutorial and examples: Learn JDK 8 with the help of examples... Expression in Java 8 How to use the Java 8 consumer class... many articles and example of JDK 8. JDK 8 is still under development Example to show class exception in java Example to show class exception in java In this Tutorial we are describing the example to show the use of class exception in java .Exceptions are the condition  Bigdecimal class example Java Bigdecimal class example Java bigdecimal class comes under java.math library. Class... increased the usage of this class in building Business applications. Java Vector Example in java Vector Example in java  .... In this example we are using seven methods of a Vector class. add(Object o): It adds...;} } Output of this example is given below: C:\Java Tutorial>javac Java Hello World code example Java Hello World code example Hi, Here is my code of Hello World program in Java: public class HelloWorld { public static void main..."); } } Thanks Deepak Kumar Hi, Learn Java java inner class - Java Beginners java inner class What is the difference between the nested class and inner class? Explain with suitable example. Hi Friend, Differences: 1)Nested class is declared inside another class with the static keyword Object class in java Object class in java want an example and with an explaination about equals() method in object class? Is equals() is similar question on class - Java Beginners question on class A class can act as subclass itself? if yes give me one example if no give me one example java-wrapper class java-wrapper class dear people, may i know how to convert primitive data type into corresponding object using wrapper class with example please!!! public class IntToIntegerExample { public static void Java BufferedWriter example and strings. It write text to a character-output stream. Given below example will give you a clear idea : Example : import java.io.*; public class... In this section, you will learn how to write to a file using BufferedWriter. USER DEFINED CLASS Create a java program for a class named MyDate that contains data members and a constructor that meet the criteria in the following list. This class is used... Create a java program for the class MyDate, and call the new class MyDate2 Java 8 consumer class(interface) example How to use the Java 8 consumer class(interface)? In this example program I...); } } //Consumer implementation example class ConsumerObject implements...("Value is:"+t); } } Read more tutorials and example of Java 8 Java-Abstract Class - Java Beginners Java-Abstract Class Can a abstract class have a constructor ?When would the constructor in the abstract class be called ? Please give me with good example code..Please implement the example with code. THanks in advance.  java class string - Java Beginners java class string Write a program that reads three strings... in it. Example: Input: abcdef sdfertw er... : import java.io.*; public class ReadString { public static void main Real Time Example Interface and Abstract Class - Java Beginners Real Time Example Interface and Abstract Class Hi Friends, can u give me Real Time example for interface and abstract class.(With Banking Example Java class Java class What is the purpose of the Runtime class Java Bigdecimal ; Java BigDecimal compareTo example BigDecimal class compareTo() method...; Java BigInteger long Example below illustrates the bigdecimal class...; Java Bigdecimal class example Java bigdecimal class comes under java.math Class SALE - Java Beginners Class SALE A company sale 10 items at different rates and details... of each item at the end of the week c) Average sale per day Hi.... import java.io.*; import java.util.*; public class SALE { public static void Java FileInputStream Example FileInputStream Java has Two types of streams- Byte & Characters. For reading and writing binary data, byte stream is incorporated. The InputStream abstract class... of the InputStream abstract class. The FileInputStream is used to read data Defining a class - Java Beginners . For example, 197 is a Keith number since it generates the sequence 1, 9, 7, , , , , . We are required to design a class named ?Keith? having the following data Java Get Class Location Java Get Class Location In this section, you will learn how to get the class location. For this, we have used the classes URL and ClassLoader. The class thread class - Java Beginners execution NOTE : In this example use of wait ( ) , notify... the following code: class Incrementor extends Thread{ int cnt1 = 0; boolean...; } } class Decrementor extends Thread{ int cnt2 = 100; int cnt1 Get Example . Java get Next Day In the given example, we have used the class... In the example given below we will learn how to get name of the particular class... java.net.InetAddress class. Java Example program to get java class java class write a java program to display a msg "window closing" when the user attempts to close the window.(a) create a window class (b) create frame within the window class (c) extends window adapter class to display the msg
http://www.roseindia.net/tutorialhelp/comment/89571
CC-MAIN-2015-06
refinedweb
2,077
56.15
This guide shows how to set up a new Google Kubernetes Engine cluster with Cloud Run for Anthos on Google Cloud enabled. Because you can use either the Cloud Console or the gcloud command line, the instructions cover both of these. If you are enabling Cloud Run on an already existing cluster, refer to Enabling Cloud Run for Anthos on Google Cloud on existing clusters Note that enabling Cloud Run for Anthos on Google Cloud installs Istio and Knative Serving into the cluster to connect and manage your stateless workloads. Prerequisites. Setting up gcloud Although you can use either the Cloud Console or the gcloud command line to use Cloud Run for Anthos on Google Cloud, you may need to use the gcloud command line for some tasks. To set up the gcloud command line for Cloud Run for Anthos on Google Cloud: Install and initialize the Cloud SDK. You should set your default project setting for gcloudto the one you just created: gcloud config set project PROJECT-ID Replace PROJECT-ID with the project ID of the project you created. Set zoneto kubectlcommand-line tool: gcloud components install kubectl Creating a cluster with Cloud Run enabled These instructions create a cluster with this configuration: - Cloud Run for Anthos on Google Cloud enabled - Kubernetes version: see Available GKE versions - Nodes with 2 vCPU These are the recommended settings for a new cluster. You can use either the gcloud command line or the console to create a cluster. Click the appropriate tab for instructions. Console To create a cluster and enable it for Cloud Run for Anthos on Google Cloud: Go to the Google Kubernetes Engine page in the Cloud Console: Go to Google Kubernetes Engine Click Create cluster to open the Create a Kubernetes cluster page. Select the Standard cluster template, and set the following values in the template: - Enter the name you want for your cluster. - Choose either Zonal or regional for the location type: either will work with Cloud Run. Configuring gcloud for cluster and platform After you create the cluster, - Set your default platform to gke. - Optionally set defaults for cluster name, and cluster location to avoid subsequent prompts for these when you use the command line. - Get credentials that allow the gcloud command line to access your cluster. To set defaults: Set the default platform to gke, set your default cluster and cluster location, and then get credentials as follows: gcloud config set run/platform gke gcloud config set run/cluster CLUSTER gcloud config set run/cluster_location ZONE gcloud container clusters get-credentials CLUSTER Replace - CLUSTER with the name of the cluster - ZONE with the location of the cluster. Kubernetes clusters come with a namespace named default. For information on namespaces, and why you might want to create and use a namespace other than default, refer to namespace in the Kubernetes documentation. To create a new namespace, run: kubectl create namespace NAMESPACE Replace NAMESPACE with the Namespace you want to create. If you created a new namespace in the previous step, and want to use it rather than the defaultnamespace, set that new namespace as the one to be used by default when you invoke the gcloud command line: gcloud config set run/namespace NAMESPACE Enabling HTTPS and custom domains If you want to use HTTPS and custom domains that apply to the cluster, refer to Enabling HTTPS and automatic TLS certs and mapping custom domains.. From the Cloud Run for Anthos dropdown, select Disable. Click Save.
https://cloud.google.com/run/docs/gke/setup?hl=sv
CC-MAIN-2020-10
refinedweb
581
52.23
:confused:If there exists a class lock and from this class different thread objects or different objects & there corresponding threads are created, may they run simultaneously?:?: e.g. public class Smiley extends Thread { public void run() { while(true) { try { synchronized(Smiley.class) { System.out.print(":"); sleep(100); System.out.print("-"); sleep(100); System.out.print(")"); sleep(100); } } catch (InterruptedException e) { e.printStackTrace(); } } } public static void main() { new Smiley().start(); new Smiley().start(); } } here in spite of instanciation of two separate thread objects (i.e. these are not common shared resource), is it due to having the common class lock which is preventing the other thread at synchronization?:icon_exclaim: :icon_idea:With or without static methods, at the time of running, how much tasks are performed during class loading and what are left for the instances or objects? How these are related? Do they all happen at runtime? For which the JDK and JVM become liable? ......... :X
https://www.daniweb.com/programming/software-development/threads/361048/thread-synchronization-monitor-lock
CC-MAIN-2018-39
refinedweb
155
60.92
Switch-Case in pure Python Project description switchcase implements a simple Switch-Case construct in Pure Python. Under the hood, the switch function works by simply returning a length-1 list containing a matching function. The entire implementation is 3 lines long: from operator import eq def switch(value, comp=eq): return [lambda match: comp(match, value)] Basic Usage >>> from switchcase import switch >>> def func(x): ... for case in switch(x): ... if case(0): ... print("x was 0") ... break ... if case(1): ... print("x was 1") ... break ... else: ... print("x was unmatched") >>> func(0) "x was 0" >>> func(1) "x was 1" >>> func(2) "x was unmatched" Custom Comparisons By default, switch uses operator.eq to compare the value passed to switch and the values subsequently passed to case. You can override this behavior by passing a comparator function to switch as a second argument. >>> import re >>> from switchcase import switch >>> def f(x): ... out = [] ... for case in switch(x, comp=re.match): ... if case("foo_bar"): ... out.append(0) ... break ... if case("foo_.*"): ... out.append(1) ... if case(".*_bar"): ... out.append(2) ... return out >>> f("foo_bar") [0] >>> f("foo_notbar") [1] >>> f("notfoo_bar") [2] >>> f("foo____bar") [1, 2] Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/switchcase/
CC-MAIN-2020-24
refinedweb
226
65.01
Most GUI users also expect that when a control, in particular a button, has the focus, they can activate it by pressing the ENTER key. With Swing applications, however, this may not always be the case; in fact, pressing the ENTER key may sometimes activate a button that the user does not want to be activated. This is due to a change that was made to the Metal look-and-feel in Java 1.2. This is the default look-and-feel for Swing apps, and until version 1.2 it behaved the way most users expect it to, with the ENTER key activating a focused JButton. Since version 1.2 though, pressing ENTER will not activate a focused JButton in a Swing app using the Metal look-and-feel. Instead, the user must press the spacebar to activate the focused JButton. Suppose a user is presented with a Yes/No JOptionPane dialog that asks, "Do you want to delete all of your files?" If the user tabs the focus over to the No button and presses ENTER, the Yes buttonthe default button for the JOptionPanewill be activated instead. This simple static method alters a JButton passed to it so that pressing the ENTER key activates the button when it has focus: import java.awt.event.KeyEvent; import javax.swing.JButton; import javax.swing.JComponent; import javax.swing.KeyStroke; public class EnterButton extends JButton { public EnterButton(String name){ super(name); super.registerKeyboardAction( super.getActionForKeyStroke( KeyStroke.getKeyStroke(KeyEvent.VK_SPACE, 0, false)), KeyStroke.getKeyStroke(KeyEvent.VK_ENTER, 0, false), JComponent.WHEN_FOCUSED); super.registerKeyboardAction( super.getActionForKeyStroke( KeyStroke.getKeyStroke(KeyEvent.VK_SPACE, 0, true)), KeyStroke.getKeyStroke(KeyEvent.VK_ENTER, 0, true), JComponent.WHEN_FOCUSED); } } Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. Your name/nickname Your email WebSite Subject (Maximum characters: 1200). You have 1200 characters left.
http://www.devx.com/DevX/Tip/31605
CC-MAIN-2016-18
refinedweb
306
59.7
The Entry widget is a standard Tkinter widget used to enter or display a single line of text. When to use the Entry Widget The entry widget is used to enter text strings. This widget allows the user to enter one line of text, in a single font. To enter multiple lines of text, use the Text widget. Patterns # To add entry text to the widget, use the insert method. To replace the current text, you can call delete before you insert the new text. e = Entry(master) e.pack() e.delete(0, END) e.insert(0, "a default value") To fetch the current entry text, use the get method: s = e.get() You can also bind the entry widget to a StringVar instance, and set or get the entry text via that variable: v = StringVar() e = Entry(master, textvariable=v) e.pack() v.set("a default value") s = v.get() This example creates an Entry widget, and a Button that prints the current contents: from Tkinter import * master = Tk() e = Entry(master) e.pack() e.focus_set() def callback(): print e.get() b = Button(master, text="get", width=10, command=callback) b.pack() mainloop() e = Entry(master, width=50) e.pack() text = e.get() def makeentry(parent, caption, width=None, **options): Label(parent, text=caption).pack(side=LEFT) entry = Entry(parent, **options) if width: entry.config(width=width) entry.pack(side=LEFT) return entry user = makeentry(parent, "User name:", 10) password = makeentry(parent, "Password:", 10, show="*") content = StringVar() entry = Entry(parent, text=caption, textvariable=content) text = content.get() content.set(text) FIXME: More patterns to be added. In newer versions, the Entry widget supports custom events. Document them, and add examples showing how to bind them. Add ValidateEntry subclass as an example? Concepts Indexes The Entry widget allows you to specify character positions in a number of ways: - Numerical indexes - ANCHOR - END - INSERT - Mouse coordinates (“@x”) Numerical indexes work just like Python list indexes. The characters in the string are numbered from 0 and upwards. You specify ranges just like you slice lists in Python: for example, (0, 5) corresponds to the first five characters in the entry widget. ANCHOR (or the string “anchor”) corresponds to the start of the selection, if any. You can use the select_from method to change this from the program. END (or “end”) corresponds to the position just after the last character in the entry widget. The range (0, END) corresponds to all characters in the widget. INSERT (or “insert”) corresponds to the current position of the text cursor. You can use the icursor method to change this from the program. Finally, you can use the mouse position for the index, using the following syntax: "@%d" % x where x is given in pixels relative to the left edge of the entry widget. Reference # -).
http://effbot.org/tkinterbook/entry.htm
CC-MAIN-2017-04
refinedweb
468
58.58
LONDON (ICIS)--European ethyl tertiary butyl ether (ETBE) prices reached record highs for the year on the back of increased methyl tertiary butyl ether (MTBE) trades, ICIS data showed on Thursday.?xml:namespace> Outright ETBE prices were assessed at $1,313-1,343/tonne FOB (free on board) ARA (Amsterdam-Rotterdam-Antwerp). The last time ETBE outright prices were over $1,300/tonne was on 11 July last year, when they hit a high of $1,316/tonne. Increased gasoline consumption, particularly from the Mediterranean, continues to drive higher demand for blending components, sources noted. Producers see very strong offtake from consumers on term contracts, and production economics remain attractive owing to low feedstock fuel ethanol prices. MTBE prices continue to firm on higher upstream energy costs, owing to the ongoing dispute in Iraq.
http://www.icis.com/resources/news/2014/06/26/9795876/europe-etbe-hits-12-month-highs-on-firming-mtbe-prices/
CC-MAIN-2016-30
refinedweb
134
59.43
So you want to migrate from CVSTrac to GitTrac? You're crazy, of course, but that's never stopped anyone. CVS to GIT The first and most obvious thing about this is that you're implicitly migrating from CVS to GIT. This is the point where you may want to rethink the whole idea. Still with us? Prep Your Code You've got a pile of code under your CVS respository, in a bunch of different modules. In CVS, these modules are treated all as independent trees. With GIT, a single repository is one tree. Single Repository Mode What this means is that git-cvsimport expects to be handed a single CVS module, and if you want everything in one place, you need to create that single CVS module (no, you can't do it with "-a" or "&" tricks in CVSROOT/modules). That's what I wanted, so it needed a little fancy footwork to make it happen: mkdir /tmp/repo && cd /tmp/repo mkdir code cp -R <path/to/cvsrepository> code mv code/CVSROOT . Note that this is the place to do any other repository cleanups you need. Also note that we can get away with this only because git-cvsimport ignores the contents of CVSROOT/history. Multiple Repository Mode If, on the other hand, you want to keep the concept of individual modules, you would git-cvsimport each module into its own distinct GIT repository, then point GitTrac at the directory containing all those repositories. You could also have a separate GitTrac database instance for each GIT repository/CVS module. Move Your Code This should be the easy part. cd ~ git-cvsimport -d /tmp/repo -k -m -v -i code At the end of this, if all goes well, you've got a directory called .git. That's now your GIT repository. Check Your Code We're experienced software developers, so we're obviously not going to trust git-cvsimport to not completely destroy our codebase. We need to check it out and have a look at things. mkdir working cd working git-clone <path/to/.git> code You've just checked out the equivalent to the last CVS HEAD into code/. Poke around. Run things like git-log on files and make sure you've got a history. Make sure your tags and branches are all there ( git-show-branch helps here). See if all your code builds. Fix Your CVS Dependencies When you use CVS for too long, you inevitably start to make a lot of assumptions. You distribute ChangeLog's generated from CVS commit messages. You assume magic keywords in code (like $Id$) mean something. You may even assume you know which branch you're working on by looking at CVS/Tag. Really hardcore CVS users might even put those assumptions into Makefiles and build scripts. By "you", I really mean "me". But I didn't come up with all those ideas myself, and if you read the same web sites I do then you're probably in the same amount of trouble that I am. Unfortunately, I don't have the answers here. You need to evaluate whether you really need the same functions, and come up with your own solution. Other things you may have done with CVS include exploiting some of the server-based scripting capabilities such as CVSROOT/commitinto, CVSROOT/loginfo, etc. These sorts of things will obviously not work the same way, if at all, in a distributed SCM like GIT. Publish Your Repository The GIT repository you've created is what's called a private repository. Normally, GIT projects are published to the world as public (or bare) repositories. git clone --bare working/code code.git You need to place this code.git tree somewhere other users can access it. You also need to put it somewhere GitTrac can access it directly (through the filesystem). You were probably planning on doing this anyways, right? There's more than one way to publish a GIT project. I chose to use git-deamon and the GIT protocol for publishing, but there's more than one way to do it. touch code.git/git-daemon-export-ok git-daemon was configured to run from inetd.conf using: git stream tcp nowait cpb /usr/bin/git-daemon git-daemon --export-all --inetd --verbose --export-all --base-path=/home/cpb/git /home/cpb/git and we fetch code.git into a working copy via: git clone git://localhost/code.git Of course, git-daemon doesn't allow you to publish changes that way. git push ssh://localhost/home/cpb/git/code.git master GIT For CVS Users If you're the only user of your codebase, you can do whatever you want. However, larger organizations may have to cater to less "nimble" developers and other users. There's some useful documentation for migrating CVS users to GIT. For the truly backwards users, a CVS server emulator is available which should allow existing users to continue using CVS with minimal disruption and retraining. CVSTrac To GitTrac Setting Up GitTrac First, you need a GitTrac configuration. We assume you've already got a gittrac executable. In that case, we start by initializing the database: gittrac init /path/to/database code Start the gittrac server. Just temporarily, we've got work to do later: gittrac server 8008 /path/to/database code Browse to, login as setup, and configure the GIT repository. In my example, we're pointing it at /home/cpb/git/code.git. Then import the repository. I normally like to drop back to the command-line and use: gittrac update /path/to/database code but you may also just browse to the timeline or something. Have a look around, browse the file tree, look at checkins, and generally make sure you're happy with what you're seeing. If something got broken, you really want to fix it before going any further. Import CVSTrac Data If you were starting from scratch, you're already finished. The rest of us, however, have years of CVSTrac history which we don't want to lose, and ideally we'd like to have it all in the same place. Ergo, we need to move all that CVSTrac data to GitTrac. Many things won't have any issues. Wiki pages and attachments, for the most part, can be transferred directly. The user database and things like custom markups and tools are no problem. Tickets and check-ins are going to give you trouble. The trick is that CVS and GIT have entirely different concepts of what a commit looks like. Then, when we run this stuff through CVSTrac and GitTrac, we try to number everything as discrete checkins. CVSTrac tries to merge distinct commits into seemingly atomic operations, but if you've used it in a non-trivial environment you know that falls apart periodically. So we've got two instances of *Trac containing more or less the same data, but the numbering scheme for the check-ins may be wildly different. And since things like tickets are cross-referenced with specific check-in numbers, a naive database copy from CVSTrac to GitTrac simple will not work. Fortunately, this is something we can tackle with just a decent understanding of how the CVSTrac database works and an unreadable perl script. cvs-to-gittrac.pl is my attempt at an automatic CVSTrac to GitTrac conversion tool. There's nothing too magical about it and you could certainly adjust for a SvnTrac migration. The idea is roughly as follows: - create a (many-to-one) mapping of CVSTrac checkin's to GitTrac checkin's. - do the same thing with milestones. This isn't so easy since GitTrac and CVSTrac have different ideas of how to represent tags and such, so it may not be entirely complete. - copy over the database tables as-is, except the ones GitTrac would have created from the repository (CHNG, FILE, FILECHNG). - fix check-in cross-references. Some tables are easy, such as INSPECT and XREF. Some, such as the contents of Wiki markup, are really hard since we need to use regular expressions on text. Fix Your CVS Dependencies If your CVSTrac installation was relying on CVS-specific tools, you'll need to remove them or switch to GIT equivalents. You'll also need to update reports. For example, if you've got a report which lists CVS branches, it's probably not going to work the same way with GitTrac. GIT revision numbers are longer, so if you're listing CVS file revisions in reports you may have layout issues. Fix This Wiki Page There's no way I've documented every possible problem with this process. If there's something you've encountered which doesn't work or if you've seen better shortcuts, please fix the documentation. Attachments: - cvs-to-gittrac.pl 8512 bytes added by cpb on 2007-Jun-03 17:21:55 UTC. CVSTrac to GitTrac migration script MD5 sum 3de6301e618105ebdaa259b1d2191e9e
http://www.cvstrac.org/cvstrac/wiki?p=GitTracMigration
CC-MAIN-2019-18
refinedweb
1,498
64.91
Novell.. One of the most commonly-suggested ideas was a "send a link" feature for subscribers. Using this feature, a subscriber could generate a link which would enable a non-subscriber to access an article which is still behind the subscription gate. The idea would be to let our readers spread limited access to subscription content, thus helping to hook more readers. We will probably implement this idea, though the specific shape of it remains to be worked out. Stay tuned. Other promotional approaches are being looked at and tried out. Ad campaigns run on That Big Search Engine have been disappointing so far, though we have not yet given up on that approach. What seems more effective is targeted trial subscription offers; a trial offer sent to the GnuCash and KMyMoney lists (so they could read the recent Grumpy Editor article) got quite a few takers. LWN does not need a reputation for spamming developer lists, however, so much care will have to be taken with this approach. The idea of extending the subscription period did not inspire a great many replies, one way or another. We may try a modest extension (to two weeks, perhaps), maybe in conjunction with the "send a link" feature. A few people have asked for a higher-priced subscription option or the ability to simply make donations. We may eventually add the higher level, though we expect that the uptake - which would be necessarily less than we see now for the "project leader" level - would be relatively small. There will not be a donation option added, however. Those of you who were with us when we first decided to try subscriptions will remember that we went through a major hassle with our credit card merchant bank. Donations are a red flag which, it seems, creates major anxiety in merchant bank risk management departments. Our current bank has proved to be far more rational than the one we had back then, but the ability to accept credit cards is our lifeline, and we cannot do things (like accepting donations) which put it at risk. We do have a couple of options for anybody who would like to send more money LWN's way: (1) buy a gift certificate for a friend, or (2) buy a text ad promoting your favorite free software project. A few users have suggested that the site could use a redesign to give it a more professional look. No doubt that is true, and a site makeover has been on the "to do" list for some time. Any such redesign, when it happens, will preserve the core philosophy of the current site: LWN is about high-quality text without a lot of distracting decorative material. So there is no need to worry that we'll be going to a frame-based, flash-encrusted, image-heavy presentation in the future. Thanks to all of you for your support and feedback. LWN has truly been blessed with the best group of readers we could ever have hoped for. Page editor: Jonathan Corbet Security Thunderbird has had spam filtering for some time. Your editor has never given it a full test, however. Happily, an ideal resource exists for this purpose: your editor's 4000-spam-per-day mail stream. A quick config file tweak directed a copy of this stream, unfiltered, into Thunderbird to see how it would react. The bayesian filter built into Thunderbird turns out to be a quick learner. After 100 messages or so, it was busily marking most messages itself. The speed with which it learns tempts the user to turn on automatic spam-canning of marked mail early in the process; it is such a delight to see that stuff simply disappear. Training a SpamAssassin filter takes quite a bit longer. Unfortunately, the Thunderbird filter appears to learn too quickly, with the result that false positives become a problem. As long as Thunderbird is not configured to automatically refile spam, the false positives can be corrected with, one assumes, an appropriate tweaking of the filter. Once spams have been diverted, however, there appears to be no way to tell Thunderbird that it made a mistake. So new Thunderbird users would be well advised to look over its spam classification decisions for some time before empowering it to refile mail automatically. SpamAssassin's more conservative approach may well turn out to be better for people who cannot afford to lose mail. Happily, Thunderbird 1.5 includes an option which causes it to defer to SpamAssassin on filtering. Thus, the system administrator can use SpamAssassin to add headers to mail, and individual users can have Thunderbird act on those headers if desired. A truly new feature in 1.5 is phishing detection. A few simple rules have been added to detect phishy links; essentially, a message will be flagged if a URL contains a numeric IP address or the link text contains an address which fails to match the link destination. In these cases, clicking on a suspect link will result in a dialog explaining the situation and asking if the user wishes to proceed. Thunderbird will also mark such messages with a line saying "Mail/News thinks this message might be an email scam." This capability is a step in the right direction, but it has some obvious shortcomings. It failed to detect a number of random phishes found in your editor's mailbox. The "this might be junk" message also overrides the phishing warning; arguably the scam warning should take priority. The real risk, though, is that users might think that, if Thunderbird does not flag a message, it must be legitimate. Remember, these are people who fall for phishing scams in the first place. The best way to avoid that possibility would be to improve the detection of phishing messages. One wonders if the bayesian filter could be trained to this purpose as well as detecting spam. There is also ample opportunity for cooperation with anti-phishing groups which maintain lists of known phishing sites - though one would have to be careful to preserve a user's privacy when checking links. Quibbles aside, Thunderbird 1.5 is a step in the right direction toward a more secure email environment. More work clearly remains to be done - but that is likely to always be the case. Meanwhile, tools which help to reduce the spam and phishing problems can only be a good thing. New vulnerabilities) Updated vulnerabilities Marc Stern reported an off-by-one overflow in the mod_ssl CRL verification callback. In order to exploit this issue the Apache server would need to be configured to use a malicious certificate revocation list (CRL). A DoS attack against the server when run with "verb 0" and without "tls-auth" when a client connection to the server fails certificate verification, the OpenSSL error queue is not properly flushed. This could result in another unrelated client instance on the server seeing the error and responding to it, resulting in a disconnection of the unrelated client. A DoS attack against the server by an authenticated client that sends a packet which fails to decrypt on the server, the OpenSSL error queue was not properly flushed. This could result in another unrelated client instance on the server seeing the error and responding to it, resulting in a disconnection of the unrelated client. A DoS attack against the server by an authenticated client is possible in "dev tap" ethernet bridging mode where a malicious client could theoretically flood the server with packets appearing to come from hundreds of thousands of different MAC addresses, resulting in the OpenVPN process exhausting system virtual memory. If two or more client machines tried to connect to the server at the same time via TCP, using the same client certificate, a race condition could crash the server if --duplicate-cn is not enabled on the server.) Page editor: Jonathan Corbet Kernel development Brief items The current 2.6 prepatch is 2.6.14-rc4, announced by Linus on October 10. This will be, he says, the last -rc release before 2.6.14 comes out. It contains mostly fixes, but there's also some driver updates, a new Megaraid SAS driver, and a new gfp_t type which has caused a prototype change for many internal functions which perform memory allocations (see below). The details may be found in the long-format changelog. There have been no -mm releases since 2.6.14-rc2-mm2 came out on September 29. Kernel development news For those who are interested in the many projects underway in the networking subsystem, a visit to the new linux-net wiki may be in order. Visitors cannot help being struck by the amount of work which is going on in this area. A while back, the __nocast attribute was added to catch these mistakes. This attribute simply says that automatic type coercion should not be applied; it is used by the sparse utility. A more complete solution is on the way, now, in the form of a new gfp_t type. The patch defining this type, and changing several kernel interfaces, was posted by Al Viro and merged just before 2.6.14-rc4 came out. There are several more patches in the series, but they have evidently been put on hold for now. The patches are surprisingly large and intrusive; it turns out that quite a few kernel functions accept GFP flags as arguments. For all that, the actual code generated does not change, and the code, as seen by gcc, changes very little. Once the patch set is complete, however, it will allow comprehensive type checking of GFP flag arguments, catching a whole class of potential bugs before they bite anybody. The next step in the implementation of that purpose is the hard drive protection patch recently posted by Jon Escombe. This patch adds two new callbacks to the block request queue which drivers can provide: typedef int (issue_protect_fn) (request_queue_t *); typedef int (issue_unprotect_fn) (request_queue_t *); If the driver provides these functions, the request queue, as seen in sysfs, will contain a new protect attribute. If a value is written to that attribute, the block system will interpret it as an integer number of seconds. The issue_protect_fn() will be called, and the request queue will be plugged for the indicated number of seconds. When that time expires, issue_unprotect_fn() will be called and the queue will be restarted.. The idea seems reasonable, but block maintainer Jens Axboe has turned down the patch for now. Says Jens: The number of request queue callbacks is indeed large. Some of them have little to do with drivers (there's one which is called whenever disk activity happens, for example; it can be used to flash a keyboard LED in the absence of a hardware disk activity light), but others, such as the ones discussed here, are direct requests to the underlying block driver. The use of callbacks seems a little redundant in this situation, given that the request queue is, fundamentally, a mechanism for conveying commands to block drivers. The right solution might thus be to use the request queue to carry commands beyond those requesting the movement of blocks to and from the drive. To an extent, the request queue is already used this way. Packet commands, ATA task file commands, and power management commands can be fed to drivers through the queue. In each case, the flags field of struct request is used to indicate that something special is being requested. The use of flags in this way is getting a little unwieldy, however, leading to the consideration of a new approach. That approach, as seen in a patch held by Jens, is to add a new field (cmd_type) to struct request which indicates the type of command embodied by each request. Currently-anticipated types include packet commands, sense requests, power management commands, flush requests, driver-specific special requests, and Linux-specific, generic requests. Oh, and the occasional request to move a disk block in one direction or the other. The addition of cmd_type turns struct request into a generic carrier of commands to a disk drive. With this mechanism in place, the "brace yourself, we're falling!" message becomes just another Linux-specific block request type. When such an event happens, the kernel need only place one of those messages on the queue - preferably at the head of the queue - and call the driver's request() function. The driver can then prepare the drive for the coming catastrophe and plug the queue itself. No additional callbacks required. This approach does involve some significant changes to the block layer, however, and would include a driver API change. So it is not likely to take a quick path into the kernel. The hard drive protection mechanism, which will require the new API, thus looks likely to wait in line for a while yet. The current kernel readahead implementation uses a window 128KB in length. When readahead seems appropriate, the kernel will speculatively bring in the next 128KB of file data. If the application continues to read sequentially through that data, the next 128KB chunk will be brought in when the application is part-way through the first one. This implementation works, but Wu Fengguang thinks that it can be made better. In particular, Wu thinks that the fixed readahead window size should, instead, adapt to both the application's behavior and the global state of the system. His adaptive readahead patch is an implementation of this thought. It is a work of daunting complexity, but the core ideas are reasonably straightforward. The adaptive readahead patch tries to balance two constraints: readahead should be performed aggressively, but not to the point that the system starts thrashing or readahead pages get recycled before the application uses them. Every time a readahead decision is to be made for a specific file, the adaptive code looks at how much memory is available for readahead and how quickly the application has been working through the file. If memory is tight, or if the disk holding the file is congested, readahead will not be performed at all. The code also looks at the pressure on the inactive page lists and tries to figure out whether any readahead pages are in danger of falling off that list and being reclaimed. In that situation, the readahead pages will be moved back up the list, keeping them in memory for a bit longer. This "rescue" operation helps to keep previous readahead work from being wasted; since it is only performed when the application consumes data from the file, it will not happen if the reading process has stalled entirely. But, when the application is working through the data, it will get another chance to benefit from readahead which has already been performed. No more readahead will be started in that situation, however. If, instead, the application is making use of its readahead pages and the memory is available, the readahead window can grow up to 1MB. For streaming media or data processing applications which work their way sequentially through large files, this enlarged window can lead to significant performance gains. In fact, Wu claims results which are "pretty optimistic." They include a 20-100% improvement for applications doing parallel reads, and the ability to run 800 1KB/sec simultaneous streams on a 64MB system without thrashing. The page cache hit rate is claimed to be 91%, which is quite good. The adaptive readahead patch might, thus, be a worthwhile addition to the Linux memory management subsystem. There has been little discussion (none, actually) of the patch on the list, however. Complicated patches working in an obscure corner of memory management do not receive the same level of review as, say, new filesystems, it would seem. In any case, a patch of this nature will require a good deal of testing before it can be considered for any sort of merge. So, while adaptive readahead may indeed make its way into the mainline, it's not something to expect to see in the very near future. Patches and updates Kernel trees Architecture-specific Core kernel code Development tools Device drivers Documentation Filesystems and block I/O Memory management Networking Security-related Miscellaneous Distributions News and Editorials As of. New Releases Full Story (comments: none) Full Story (comments: 7) Distribution News Full Story (comments: 1) The Board of Directors of Software in the Public Interest, Inc. will hold its quarterly meeting on Tuesday, October 18, 2005, at 19:00 UTC in #spi on irc.oftc.net. The public is welcome at all SPI meetings. Bill Allombert covers the Debian menu transition, part 2. Distribution Newsletters Minor distribution updates Ark Linux 2005.2 rc3 was released Package updates Fedora Core 3 updates: libwpd (fix import that causes glitches on export), nut (update to 2.0.2), mc (bug fixes), udev (fix issues with recent kernel updates), wget (update to 1.10.1), xpdf (apply upstream patch to fix resize/redraw bug). Distribution reviews Page editor: Rebecca Sobol Development October. System Applications Audio Projects Clusters and Grids Database Software Printing Web Site Development Desktop Applications Calendar Software Desktop Environments Electronics Financial Applications Games GUI Packages Interoperability Mail Clients Multimedia Music Applications Office Suites Science Web Browsers Languages and Tools Caml Java Lisp PHP Python Ruby Scheme Tcl/Tk Bug Trackers Page editor: Forrest Cook Linux in the news Recommended Reading Trade Shows and Conferences The SCO Problem Linux Adoption Legal Interviews Resources Reviews Announcements Non-Commercial announcements Commercial announcements Full Story (comments: 2) New Books Contests and Awards Upcoming Events Web sites Audio and Video programs Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/154768/bigpage
CC-MAIN-2013-48
refinedweb
2,951
59.94
Interfaces and inner classes provide more sophisticated ways to organize and control the objects in your system. C++, for example, does not contain such mechanisms, although the clever programmer may simulate them. The fact that they exist in Java indicates that they were considered important enough to provide direct support through language keywords. Feedback In Chapter 7 you learned about the abstract keyword, which allows you to create one or more methods in a class that have no definitionsyou provide part of the interface without providing a corresponding implementation, which is created by inheritors. The interface keyword produces a completely abstract class, one that provides no implementation at all. Youll learn thatit knows about and can communicate with the surrounding classand that the kind of code you can write with inner classes is more elegant and clear, although it is a new concept to most. It takes some time to become comfortable with design using inner classes. Feedback. Feedback An interface says, This is what all classes that implement this particular interface will look like. Thus, any code that uses a particular interface knows what methods might be called for that interface, and thats all. So the interface is used to establish a protocol between classes. (Some object-oriented programming languages have a keyword called protocol to do the same thing.) Feedback To create an interface, use the interface keyword instead of the class keyword. Like a class, you can add the public keyword before the interface keyword (but only if that interface is defined in a file of the same name) or leave it off to give package access, so that it is only usable within the same package. Feedback To make a class that conforms to a particular interface (or group of interfaces), use the implements keyword, which says, The interface is what it looks like, but now Im going to say how it works. Other than that, it looks like inheritance. The diagram for the instrument example shows this: You can see from the Woodwind and Brass classes that once youve implemented an interface, that implementation becomes an ordinary class that can be extended in the regular way. Feedback You can choose to explicitly declare the method declarations in an interface as public, but they are public even if you dont say it. So when you implement an interface, the methods from the interface must be defined as public. Otherwise, they would default to package access, and youd be reducing the accessibility of a method during inheritance, which is not allowed by the Java compiler. Feedback You can see this in the modified version of the Instrument example. Note that every method in the interface is strictly a declaration, which is the only thing the compiler allows. In addition, none of the methods in Instrument are declared as public, but theyre automatically public anyway: //: c08:music5:Music5.java // Interfaces. package c08.music5; import com.bruceeckel.simpletest.*; import c07.music.Note; interface Instrument { // Compile-time constant: int I = 5; // static & final // Cannot have method definitions: void play(Note n); // Automatically public String what(); void adjust(); } class Wind implements Instrument { public void play(Note n) { System.out.println("Wind.play() " + n); } public String what() { return "Wind"; } public void adjust() {} } class Percussion implements Instrument { public void play(Note n) { System.out.println("Percussion.play() " + n); } public String what() { return "Percussion"; } public void adjust() {} } class Stringed implements rest of the code works the same. It doesnt matter if you are upcasting to a regular class called Instrument, an abstract class called Instrument, or to an interface called Instrument. The behavior is the same. In fact, you can see in the tune( ) method that there isnt any evidence about whether Instrument is a regular class, an abstract class, or an interface. This is the intent: Each approach gives the programmer different control over the way objects are created and used. Feedback The interface isnt simply a more pure form of abstract class. It has a higher purpose than that. Because an interface has no implementation at allthat is, there is no storage associated with an interfacethe are:.) Feedback any particular effort on the part of the programmer. Feedback Keep in mind that the core reason for interfaces is shown in the preceding methods. Feedback You can easily add new method declarations to an interface by(); }. Feedback.). Feedback The method getD( ) produces a further quandary concerning the private interface: Its //:ll see in a while that this isnt the only difference. Feedback( ).: Feedback In the following examples, the previous code will be modified to use: Feedback. Feedback. Feedback. Feedback. Feedback..) Feedback //: cant just pass a reference to an enclosing object. In addition, you must use the syntax Feedback enclosingClassReference.super(); inside the constructor. This provides the necessary reference, and the program will then compile. Feedback What: Feedback //:. Feedback This example shows that there isnt any extra inner class magic going on when you inherit from the outer class. The two inner classes are completely separate entities, each in their own namespace. However, its still possible to explicitly inherit from the inner class: Feedback //:. Feedback: Feedback //:. Feedback The only reason to make a local inner class rather than an anonymous inner class is if you need to make more than one object of that class. Feedback: Feedback). Feedback.)ll ordinarily have some kind of guidance from the nature of the problem about whether to use a single class or an inner class. But without any other constraints, the approach in the preceding example doesnt really make much difference from an implementation standpoint. Both of them work. Feedback." package c08;()); } } ///:~"); } { private static Test monitor = new Test();();). 14, the Java Swing library is a control framework that elegantly solves the GUI problem and that heavily uses inner classes. Feedback. Feedback. Feedback This is where inner classes come into play. They allow two things: Consider a particular implementation of the control framework designed to control greenhouse functions.[39] in the action( ) implementation. Feedback. Feedback. Feedback Restart is given an array of Event objects that it adds to the controller. Since Restart( ) is just another Event object, you can also add a Restart object within Restart.action( ) so that the system regularly restarts itself. Feedback The following class configures the system by creating a GreenhouseControls object and adding various kinds of Event objects. This is an example of the Command design pattern: Feedback //:). Feedback. Feedback Interfaces. Feedback. Feedback Solutions.
http://www.faqs.org/docs/think_java/TIJ310.htm
CC-MAIN-2019-18
refinedweb
1,071
53.81
On 11/04/2018 11:09, Christophe LEROY wrote: > > > Le 11/04/2018 à 11:03, Laurent Dufour a écrit : >> >> >> On 11/04/2018 10:58, Christophe LEROY wrote: >>> >>> >>> Le 11/04/2018 à 10:03, Laurent Dufour a écrit : >>>> Remove the additional define HAVE_PTE_SPECIAL and rely directly on >>>> CONFIG_ARCH_HAS_PTE_SPECIAL. >>>> >>>> There is no functional change introduced by this patch >>>> >>>> Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com> >>>> --- >>>> mm/memory.c | 19 ++++++++----------- >>>> 1 file changed, 8 insertions(+), 11 deletions(-) >>>> >>>> diff --git a/mm/memory.c b/mm/memory.c >>>> index 96910c625daa..7f7dc7b2a341 100644 >>>> --- a/mm/memory.c >>>> +++ b/mm/memory.c >>>> @@ -817,17 +817,12 @@ static void print_bad_pte(struct vm_area_struct *vma, >>>> unsigned long addr, >>>> * PFNMAP mappings in order to support COWable mappings. >>>> * >>>> */ >>>> -#ifdef CONFIG_ARCH_HAS_PTE_SPECIAL >>>> -# define HAVE_PTE_SPECIAL 1 >>>> -#else >>>> -# define HAVE_PTE_SPECIAL 0 >>>> -#endif >>>> struct page *_vm_normal_page(struct vm_area_struct *vma, unsigned long >>>> addr, >>>> pte_t pte, bool with_public_device) >>>> { >>>> unsigned long pfn = pte_pfn(pte); >>>> - if (HAVE_PTE_SPECIAL) { >>>> + if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) { >>>> if (likely(!pte_special(pte))) >>>> goto check_pfn; >>>> if (vma->vm_ops && vma->vm_ops->find_special_page) >>>> @@ -862,7 +857,7 @@ struct page *_vm_normal_page(struct vm_area_struct >>>> *vma, >>>> unsigned long addr, >>>> return NULL; >>>> } >>>> - /* !HAVE_PTE_SPECIAL case follows: */ >>>> + /* !CONFIG_ARCH_HAS_PTE_SPECIAL case follows: */ >>>> if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) { >>>> if (vma->vm_flags & VM_MIXEDMAP) { >>>> @@ -881,7 +876,8 @@ struct page *_vm_normal_page(struct vm_area_struct >>>> *vma, >>>> unsigned long addr, >>>> if (is_zero_pfn(pfn)) >>>> return NULL; >>>> -check_pfn: >>>> + >>>> +check_pfn: __maybe_unused >>> >>> See below >>> >>>> if (unlikely(pfn > highest_memmap_pfn)) { >>>> print_bad_pte(vma, addr, pte, NULL); >>>> return NULL; >>>> @@ -891,7 +887,7 @@ struct page *_vm_normal_page(struct vm_area_struct >>>> *vma, >>>> unsigned long addr, >>>> * NOTE! We still have PageReserved() pages in the page tables. >>>> * eg. VDSO mappings can cause them to exist. >>>> */ >>>> -out: >>>> +out: __maybe_unused >>> >>> Why do you need that change ? >>> >>> There is no reason for the compiler to complain. It would complain if the >>> goto >>> was within a #ifdef, but all the purpose of using IS_ENABLED() is to allow >>> the >>> compiler to properly handle all possible cases. That's all the force of >>> IS_ENABLED() compared to ifdefs, and that the reason why they are >>> plebicited, >>> ref Linux Codying style for a detailed explanation. >> >> Fair enough. >> >> Should I submit a v4 just to remove these so ugly __maybe_unused ? >> > > Most likely, unless the mm maintainer agrees to remove them by himself when > applying your patch ? That was my point. Andrew, should I send a v4 or could you wipe the 2 __maybe_unsued when applying the patch ? Thanks, Laurent.
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1661675.html
CC-MAIN-2018-43
refinedweb
389
63.29
#include <FXIconList.h> #include <FXIconList.h> Inheritance diagram for FX::FXIconItem: See also: NULL [inline] Construct new item with given text, icons, and user-data. [virtual] Destroy item and free icons if owned. Change item's text label. Return item's text label. FALSE Change item's big icon, deleting the old icon if it was owned. Return item's big icon. Change item's mini icon, deleting the old icon if it was owned. Return item's mini icon. Change item's user data. Get item's user data. Make item draw as focused. Return true if item has focus. Select item. Return true if this item is selected. Enable or disable item. Return true if this item is enabled. Make item draggable. Return true if this item is draggable. Return width of item as drawn in list. Return height of item as drawn in list. Create server-side resources. Detach server-side resources. Destroy server-side resources. Save to stream. Reimplemented from FX::FXObject. Load from stream. [friend] Reimplemented in FX::FXFileItem.
http://www.fox-toolkit.org/ref16/classFX_1_1FXIconItem.html
CC-MAIN-2017-43
refinedweb
174
74.05
Install Precompiled Dlib on Raspberry Pi Dlib is an open-source library that provides machine learning algorithms and tools for solving classification, regression, and clustering problems. This tutorial demonstrates how to install precompiled Dlib on Raspberry Pi. Debian package We have created Debian package ( .deb) that contains precompiled Dlib 19.22 binaries for Raspberry Pi 3 Model A+/B+ and Raspberry Pi 4 Model B. Binaries are compatible with Raspberry Pi OS Buster (32-bit). We have created a release on GitHub repository and uploaded dlib.deb package. Dlib was built with the following features: - NEON optimization - Linked with OpenBLAS library - Python 2 and Python 3 bindings Tested using Raspberry Pi 4 Model B (8 GB). Install Dlib Connect to Raspberry Pi via SSH. Download the .deb package from releases page of the repository using the following command: wget Once the download is complete, run the command to install Dlib: sudo apt install -y ./dlib.deb The .deb package is no longer necessary, you can remove it: rm -rf dlib.deb Testing Dlib (C++) To test program compilation install GNU C++ compiler: sudo apt install -y g++ Download a test image from the Internet: wget -O test.jpg Create a main.cpp file: nano main.cpp Add the following code when a file has been opened: #include <dlib/image_processing/frontal_face_detector.h> #include <dlib/image_io.h> using namespace dlib; int main() { array2d<rgb_pixel> img; load_image(img, "test.jpg"); frontal_face_detector detector = get_frontal_face_detector(); std::vector<rectangle> bboxes = detector(img); for (unsigned int i = 0; i < bboxes.size(); i++) { draw_rectangle(img, bboxes[i], rgb_pixel(0, 255, 0), 3); } save_jpeg(img, "result.jpg"); return 0; } The code is used to detect faces in an image and draw a bounding box around the detected faces. Final image is saved to a file. Execute the following command to compile a code: g++ main.cpp -o test -O3 -Wno-psabi -ldlib -lopenblas Run a program: ./test Here is result: Testing Dlib (Python) The Python bindings of DLib doesn’t provide draw_rectangle or similar function for drawing rectangles on an image. For this purpose we can install precompiled OpenCV. After you install OpenCV, create a main.py file: nano main.py Add the following code: import dlib import cv2 img = dlib.load_rgb_image('test.jpg') detector = dlib.get_frontal_face_detector() bboxes = detector(img, 1) for b in bboxes: cv2.rectangle(img, (b.left(), b.top()), (b.right(), b.bottom()), (0, 255, 0), 2) dlib.save_image(img, 'result.jpg') Execute a script using the Python 3: python3 main.py Or you can use Python 2: python main.py Python code is doing the same process as C++ code. Uninstall Dlib If you want to completely remove Dlib and related dependencies, run the following command: sudo apt purge --autoremove -y dlib
https://lindevs.com/install-precompiled-dlib-on-raspberry-pi/
CC-MAIN-2021-31
refinedweb
457
59.6
Agenda See also: IRC log <trackbot> Date: 03 December 2008 <scribe> scribe: Gregory_Rosmaita <scribe> ScribeNick: oedipus Agenda Planning Tracker: <ShaneM> bluetooth problems mbe omp Agenda Planning Tracker: <ShaneM> having difficulties with headset RM: brief update on CURIE syntax - had transition call yesterday, got ok to transition, still paperwork to be done, should move forward RM: XML Events 2 haven't done; DOM discussed yesterday - will include action 40 in status update for HTC ... features document? SM: none are completed unless marked "pending review" RM: - policy statement on migration and inclusion SM: started to do, but could decide where to put RM: isn't there another action for that ... close action 11 ... - changes in Mime document ... substantive note from Tina - SM: would rather do when tina here RM: if no tina at meeting, please respond on-list ... action 18 should be closed ... - should be closed RESOLUTION: Actions 11, 13 and 18 are closed RM: action 13 might be worth some discussion SM: delegated it; will ping and update RM: action 15 - not have separate implements module - fold into XHTML2 - is that correct? SM: made note to that effect in action ... leave action until finished RM: action item from GJR - GJR: pf said: "The word "similar" was inserted to satisfy general requirements for HTML processing, since the Role module includes low-level processing specifics, which can't be ported to HTML5; therefore, in order to enable ARIA in HTML5 it is necessary to define low-level DOM parsing whilst still accepting same content, with same accessibility result. Of course, if one is using XHTML2 to author a document, then that author would and SHOULD use the Role Module RM: all ARIA attributes can be used without prefix; defined for us and in XHTML vocab ... for ARIA terms there is no namespaced vocabulary GJR: agree with RM, think that PF punted SM: don't know what is going to be in HTML5 GJR: HTML5 e.t.a. is 2012 at earliest RM: carry on and ignore -- given WAI everything asked for; shouldn't waste our cycles on this ... de facto implementation of HTML5 by developers ... happy to consider item complete <mib_jqd0sf> ops SM: during LC review, can submit formal objection because should be using Role attribute PF helped define GJR: would support that RM: any other actions finished? RM: waiting for comments GJR: i18n issues? <Tina> I'll send my comments on the ACCESS module by Monday SM: comment from forms; Steven brought up in Forms WG - John Boyer (chair) supposed to send us note; can live with it if multiple IDs in XForms and XHTML2 synced; thing XForms comment closed RM: can close XForms comment ... action 35 is complete SM: everything regards Access closed out; implementation report and disposition of comments all ready; need to wait until CURIEs reaches CR for this to reach CR because references CURIE RM: same with Role? SM: yes, not certain if SP has sent in transition request for the 3 RM: haven't seen them ... resolved to request CR Transition [] RM: discussed last week - going back to DOM2 - have to inspect DOM2 spec - anyone done that? SM: no GJR: no RM: need to get done for XHTML 1.0 SE and 1.1 PER will point to that SM: issue: XML 1.0 Fifth Edition became a rec yesterday ... changes rules / definition of ID - changes what chars are legal in ID; historically have just transitioned to current version of XML (any recs we put out use current XML edition), but what are rammifications of changing @id to make more inclusive - with our documents, some point to fourth edition, some to fifth RM: reads errata for fourth edition <mgylling> Before the fifth edition, XML 1.0 was explicitly based on Unicode 2.0. As of the fifth edition, it is based on Unicode 5.0.0 or later. This effectively allows not only characters used today, but also characters that will be used tomorrow. RM: due to unicode changes? SM: previously malformed documents now ok; invalid documents now valid -- don't understand RM: main characters has changed <mgylling> MG: there is a blog entry from James Clark explaining why he thinks fifth edition broken - was controversial SM: jame's blog is exactly what i thought/concluded ... good news (sort of) - always made dated references to 1.0 (reference edition numbers); we are dependent upon namespaces, and they are not referenced ... don't understand rammifications, but they keep me awake at night RM: if stay with Fourth Edition, and say that those in Fifth Edition are ok, but a SUB-SET of those in Fourth Edition SM: hope change is forward compatible RM: should leave the pointer alone for XML Fourth Edition ... if get through PR review and asked why not Fifth, we say "prove to us won't cause problems" SM: reasonable GJR: plus 1 <alessio> +1 RM: keep status quo: publish our specs pointing to XML 1.0 Fourth Edition, until becomes an issue, if becomes and issue <Roland> SM: major change is unicode related RM: same mistakes i had found SM: fixed broken internal links ... compatibility guidelines: i know what problem i was trying to solve with sentence in question: remind validation people at W3C that shouldn't validate against this RM: could be useful to remind of constraints <alessio> for tina and all... I write a note (in italian) on IWA Italy's blog related to tina's article: SM: don't like suggested wording: is a non-sequitor ... "TOPIC: XHTML MIME type: Status? Tina's comment: ""It contains no absolute requirements, and should NEVER be used as the basis for creating conformance nor validation rules of any sort. Period."" RM: constraint over and above language definition; could write style guide ... replace paragraph with something less dogmatic SM: accept suggestion for example 3 - don't use P RM: "A.4. Embedded Style Sheets and Scripts ... didn't we originally say to avoid inline style and scripts? SM: she's attempting to make more assertive, i believe ... trying to explain why suggested trick for embedding works ... we can explain that RM: make clear that is explanation; don't have to if don't want to SM: A.5 - generic advice; i think has to do with XML versus HTML "This sounds like generic advice for writing markup, rather then something relevant to the differences between XHTML and HTML. I could be mistaken and would welcome pointers to the relevant parts of the specifications if so." RM: might be useful if each of these assertions in A.5 are linked SM: they are RM: don't show in ToC SM: no don't show in ToC RM: linebreak attribute values SM: in XML attribute values are ... MG: whitespace neutralized? SM: yes RM: isn't that part of rationale? ensure on single line isn't bad advice SM: don't remember why did in first place - tina wants rationale - thought had to do with whitespace normalization <mgylling> If the attribute type is not CDATA, then the XML processor MUST further process the normalized attribute value by discarding any leading and trailing space (#x20) characters, and by replacing sequences of space (#x20) characters by a single space (#x20) character. MG: section 3.3.3 of XML spec ... depends on type of attribute; if not CDATA discusses discarding leading and trailing space RM: option for collapse as well? MG: 3.3.3 says replace XML def of whitespace by single space only; linebreaks "normalized" to single space, leading or trailing SM: section 2.1.1 on end of line handling ... end of lines normalized even if inside attribute value ... turns linefeeds into spaces RM: read section 3.3.3 and 2.1.1 and best to avoid those situations since don't know what non-XML parsers would do ... A.11 - "Perhaps an example showing how to convert to lower case before checking would help clarify this for some people?" SM: do ensure that attribute names ... are case insensitive ... can show people how to call to lower RM: ok SM: A.25 - i know answer and will send it to her ... A.26. - "to justify removing accessiblity feature..." -- we aren't removing, we are telling people not to do it -- same problem as NOSCRIPT RM: deal with NOSCRIPT in whatever answer you send to tina SM: Example Document concerns: good point about style element (no bad stuff to escape) - rather than remove CDATA markers, should put bad stuff in ... final comment - grouping selector - RM: because HTML and BODY elements are identical, can define style once using "html,body { }" <Roland> html, body {background-color: #e4e5e9; } RM: list, not heirarchy SM: right ... have bunch of changes to make - between last publication and now, pub rules have changed for Notes - additional reqs on Note we need to satisfy; will make process changes along with changes stemming from tina and our discussion of it RM: use of ABBR or ACRONYM GJR: have proposed INIT (initialism) SM: will fold in WG's response to Tina's comments into Mime today along with other pub-related stuff RM: question on "do we need nl?" - motivation, wanted navigation, but maybe use "nav" as a section - more than list - complete block, like a section GJR: similar to Role/ARIA concept of "nav" RM: yes, big major area, not just detail, but block of navigational options ... look at way NAV is defined when return to question: ... would nav obviate need for NL via specialized container SM: had action to send out conversation starter RM: ol role="nav" versus nl - will reply and through into mix that this is bigger question: NAV as structural element; will kick off conversation by replying to shane's note <Tina> I have a half-finished reply to Shane's conversation starter SM: point i was trying to make is have diff mechanisms to satisfy diff needs; should think about needs GJR: positive "yes!" reaction to shane's post ADJOURN This is scribe.perl Revision: 1.133 of Date: 2008/01/18 18:48:51 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/XML Events/XML Events 2/ Succeeded: s/class/collapse/ Succeeded: s/low tech crap/having difficulties with headset/ Found Scribe: Gregory_Rosmaita Found ScribeNick: oedipus Default Present: Gregory_Rosmaita, Roland, ShaneM, Markus_Gylling, Alessio Present: Gregory_Rosmaita Roland ShaneM Markus_Gylling Alessio Tina_on_IRC Regrets: Mark_Birbeck Steven Agenda: Found Date: 03 Dec 2008 Guessing minutes URL: People with action items: WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output]
http://www.w3.org/2008/12/03-xhtml-minutes.html
CC-MAIN-2016-22
refinedweb
1,759
59.03
Microsoft provides a host of project template wizards for creating your initial projects. These include template wizards for creating a Console Application, a Windows Service, a Windows Form Application, a Control Library and much more. So what do these project wizards consist of? The table below illustrates the list of files included in a typical Project Wizard Template. Lets examine one of the project template wizards in detail in order to understand how they work. In an effort to learn how the template functions, we will examine the CSharpWinService Wizard. The CSharpEx.vsdir file that contains a list of CSharp template projects is located in C:\Program Files\Microsoft Visual Studio .NET 2003\VC#\CSharpProjects and The CSharpWinService.vsz file for this project can be found there as well. Below is the line in the CSharpEx.vsdir file that contains the information about the Windows Service Wizard: CSharpWinService.vsz|{FAE04EC1-301F-11d3-BF4B-00C04F79EFBC}|#2349|80|#2350|{FAE04EC1-301F-11d3-BF4B-00C04F79EFBC}|4556| |WindowsService Table 1: Components of a Wizard Template Each field in the VSDir file is delimited by the | character. The first field is required and tells the name of the visual studio wizard parameter file(or the vsz file). The next field is an optional GUID that points to a resource file for the wizard. The last GUID appearing in the file points to a dll containing an icon for the project template (this can also be a full path to a dll). The field appearing directly after the icon dll is a resource id to the icon inside the dll. The last field is a default name for the project and appears in the dialog when the project template wizard launches. Lets take a look now at the vsz file shown below: VSWIZARD 7.0 Wizard=VsWizard.VsWizardEngine Param="WIZARD_NAME = CSharpWindowsServiceWiz" Param="WIZARD_UI = FALSE" Param="PROJECT_TYPE = CSPROJ" This file takes the form of the old INI file (you may recall this from way back in the days of Windows 3.1). The second line indicates the program ID of the wizard engine to use. This program ID is read from the Windows registry and associated with a program with a Dispatch interface. The default wizard engine is the one shown in this file. You can use your own wizard simply by changing the dispatch program id that represents your custom dll. Well show you how to do this later in the article. The last 3 lines are parameters for the engine. Below are descriptions of some parameters in this file used by the default Wizard Engine. Table 2: VSWizard Parameter Options So to reiterate the vsz and vsdir file go into C:\Program Files\Microsoft Visual Studio .NET 2003\VC#\CSharpProjects. The vsdir contains the COM Dispatch program id of which wizard engine to use. The vsz file contains the name of the folder of where to locate the templates and the scripts (indicated by the WIZARD_NAME variable). The scripts are located in \Microsoft Visual Studio .NET 2003\VC#\VC#Wizards\[WIZARD_NAME]\Scripts\1033 and by default the script is default.js. The Java Script in the script folder called default.js extends the behavior in the common.js file and allows you to perform custom functionality for your wizard as well as respond to events thrown by the wizard. Below the default java script provided by Microsoft responds to the OnFinish event of the VSWizard (Ive added some comments to help the reader understand what is going on): Listing 1: The OnFinish event handler from default.js function OnFinish(selProj, selObj){var oldSuppressUIValue = true;try{oldSuppressUIValue = dte.SuppressUI;// Get the Project path from the wizardvar strProjectPath = wizard.FindSymbol("PROJECT_PATH"); // Get the Project name from the wizard var strProjectName = wizard.FindSymbol("PROJECT_NAME"); // create a safe name from the project name (this call is in// common.js) var strSafeProjectName = CreateSafeName(strProjectName); // Add an additional symbol called SAFE_PROJECT_NAME and associate it// with the safe project name we created wizard.AddSymbol("SAFE_PROJECT_NAME", strSafeProjectName);var bEmptyProject = 0; //wizard.FindSymbol("EMPTY_PROJECT"); // Create the CSharpPRoject pasing in the Project Name, the Project// Path, and the default project name (this call is in common.js) var proj = CreateCSharpProject(strProjectName, strProjectPath, "DefaultWinExe.csproj"); // Create the template information file containing the list of template files. var InfFile = CreateInfFile();if (!bEmptyProject){// Add references specific to Windows ServicesAddReferencesForWinService(proj); // Render each template and add Files specific to the CSharp Project that are listed in the template inf file. AddFilesToCSharpProject(proj, strProjectName, strProjectPath,InfFile, false);}// Save the projectproj.Save();}catch(e){if( e.description.length > 0 )SetErrorInfo(e); return e.number;}finally{dte.SuppressUI = oldSuppressUIValue;if( InfFile )InfFile.Delete();} } Upon completion, the wizard calls the OnFinish Event to fill in the templates and generate the project. You can modify OnFinish to suit the needs of your particular project. You can also modify the template files themselves so they contain specifics for details of the files in your project. Below is a portion of a the template file file1.cs in the CSharp Windows Service Wizards template folder: Listing 2: Part of a template for creating a class using System;using System.Collections;using System.ComponentModel;using System.Data;using System.Diagnostics;using System.ServiceProcess; namespace [!output SAFE_NAMESPACE_NAME]{public class [!output SAFE_CLASS_NAME] : System.ServiceProcess.ServiceBase{/// <summary> /// Required designer variable./// </summary>private System.ComponentModel.Container components = null;public [!output SAFE_CLASS_NAME](){// This call is required by the Windows.Forms Component Designer.InitializeComponent();// TODO: Add any initialization after the InitComponent call}} Listing 2: Code from common.js that renders the template if So now you get the idea of how the java script and the template works together in the default wizard. As we stated before, it's possible to bypass all of the java scripting and create the wizard completely in C#. The way to do this is simply to create our own automation object in .NET that inherits the IDTWizard dispatch interface. The IDTWizard is an interface with exactly one method, Execute. Overriding this method allows us to execute the wizard in any way you wish and passes in parameters that help us in accomplishing the task of wizarding the IDE. The steps to creating our wizard are simple. First create a simple dll library in .NET using the New->Project menu and choosing class library. Figure 1: Choosing the project for the custom Wizard dll In order to use the Windows Forms and EnvDTE assemblies, you will need to manually add these references. To add the references, right click on your solution in the solution explorer and choose Add Reference. Then find the references and add them to your project. Figure 2: Choosing the references that are not already part of the library Next, replace class1 with the following code. The class below implements the IDTWizard interface. Notice it contains a program dispatch id attribute above the class definition that allows the vsz file to recognize the wizard: Listing 3 Class that implements the template wizard functionality using Figure 3(a): Invoking regasm through the IDE Build Property Also, I noticed that the wizard dll assembly has to either be in the local path with the devenv.exe (the absolute path is C:\Program Files\Microsoft Visual Studio .NET 2003\Common7\IDE\devenv.exe) or you need to make MyWizard1.dll a strong named assembly that is placed in the GAC(anotherwords a shared assembly). Otherwise the development environment has no clue where to find the wizard. (You would think it wouldnt matter since we registered it in the registry, but with .NET, what really gets registered by regasm is the mscoree.dll with the absolute name of the assembly and public key token). In the Visual Studio IDE, I changed the output path of my project to point to the IDE directory so the assembly is kept local and so I didn't have to keep copying the file all the time: Figure 3(b): Project Properties window with Output Path changed Also the name of the wizard engine needs to be updated in the vsz file shown below: VSWIZARD 7.0Wizard=VSWizard.VSCustomWizardEngineParam="WIZARD_NAME = CSharpCustomServiceWiz"Param="WIZARD_UI = FALSE"Param="PROJECT_TYPE = CSPROJ" As we stated before, you'll also need to list the CSharpCustomServiceWiz in the vsdir file with the other CSharp Project Templates so the IDE can find the vsz file. You don't need scripts in this case, because all of the functionality is handled by the custom engine. Thats all there is to plugging your Wizard into the framework. Now you can add any code you wish to the Execute method to create a project. You can use the Application object parameter passed into Execute to help build your project. The Application object is the root automation object of the IDE and can be unboxed with the extensibility (_DTE) class to be used to manipulate the environment. For example: Say we wanted to create a solution from a solution project template and save it to the disk. Also we want to provide our own wizard interface and not the one provided by the VSWizard. Inside our wizard project, we can simply create a dialog based on a Window Form and then prompt for the information our wizard needs. Then well take advantage of the Extensibility interface to create our solution. Our WizardPage dialog is created simply by adding a Windows Form to our Wizard Project. The Dialog is shown below: Figure 4: Custom Wizard User Interface Dialog The dialog is a simple one that accepts the name of the project and the directory we wish to place the project solution. The directory input even comes with a browser button to pick a directory for our project This directory button actually invokes the SHBrowseForFolder in the shell32.dll API, but this is a topic for a completely new article, so we wont get into it here. Once weve gotten our input from the "Wizard", we can create the solution. The code for creating the solution is shown below: Listing 4: Extensibility code in C# to create a Project public In our example, we bypass the template method provided by the wizard and created these files with our own rendering method. The way we do this is read the template into a file stream, use the replace routines of the string class to populate the template substitution values, and write the string back out to our new C# Class file. As you can see the .NET library and C# language give you a great deal of power and flexibility in how you create your wizards. Below is the code that renders the template in C#: Listing 5: C# Code used to render the template from a project name The extensibility object has many good features for customizing your wizard. Some ideas may be to put special menu items in the IDE specific to your project or create your own project item wizards. It essence you can create project template wizards that not only customize the project itself, but the entire Visual Studio environment! Creating Custom Project template Wizards Using C# and .NET More AI...(GEP) Gene Expression Programming in C# and .NET i have been making an application where i need to write a code to create a new WPF Application project using the C# console application anyhelp will be appreciated
http://www.c-sharpcorner.com/uploadfile/mgold/customwizard11292005004112am/customwizard.aspx
crawl-003
refinedweb
1,884
55.24
Table of Contents Mastering ASP.NET with C# Introduction Part I - Basic Web Programming Chapter 1 - Behind the Scenes — Chapter 2 - HTML Basics Chapter 3 Part II - How Web Applications Work - Brief Guide to Dynamic Web Applications III - Accessing Data with ASP.NET Chapter 12 - Introduction to Relational Databases and SQL Chapter 13 - Introduction to ADO.NET Chapter 14 - Accessing Data Chapter 15 - Using XML in Web Applications Part IV - C# Web Applications Chapter 16 - Introduction to C# Web Applications Chapter 17 - State Maintenance and Cacheing Chapter 18 - Controlling Access and Monitoring Chapter 19 - Planning Applications Part V - AdvancedVisual C# Web Applications Chapter 20 - Leveraging Browser Clients Chapter 21 - Web Services Chapter 22 - Web Services, COM Components, and the SOAP Toolkit Chapter 23 - Build Your Own Web Controls Chapter 24 - Efficiency and Scalability Afterword Part VI - Appendices Appendix A - Quick HTML Reference Appendix B - JScript 5.5 Reference Index List of Figures List of Tables List of Listings List of Sidebars Mastering ASP.NET, Nancy Riddiough Indexer: Ted Laux Cover Designer: Design Site Cover Illustrator: Design Site Copyright © 2002 SYBEX Inc., 1151 Mar; ina Village Parkway, Alameda, CA 94501. World rights reserved. standalone103165 ISBN: 0-7821-2989.. Internet screen shot(s) using Microsoft Internet Explorer 6 reprinted by permission from Microsoft Corporation. stand-alone product.release I dedicate this book to my friend Brenda Lewis, who cares not at all about its contents, but has nurtured its author since near childhood, and to my wife, Janet, who has— yet again— had the patience to endure a book's creation. Acknowledgments I would like to acknowledge the considerable talents of the editorial staff at Sybex, who have been both patient and thorough, particularly Richard Mills, Tom Cirtin, Erica Yee, Gene Redding, Denise Santoro Lincoln, and Mike Gunderloy, and the many, often unrewarded people who spend time answering questions in technical newsgroups. You do make a difference. Introduction For the past 20 years, programming efforts have alternated between servers and clients. From mainframe batch processing to stand-alone applications to client-server to Internet, the focus of development shifts back and forth according to the current hardware, software, and communications model available. From teletypes to terminals, mainframes to minicomputers to modern servers, desktops to laptops to handheld devices, hard-wired direct connections to private networks to the Internet, programmers have concentrated their efforts either on improving the user interface or building the backend systems that serve data to the devices that run the user interface. During the 1980s and early 1990s, the rapid evolution of microcomputers forced developers' attention toward the latter, which is why today's computer buyers enjoy high-resolution, deep-color displays; sound and voice capabilities; fast processors; a surfeit of data storage options; cheap memory; and powerful, graphical, interactive operating systems. The rapid improvement in microcomputers caused a corresponding fragmentation of data; people worked with individual files on their own computers. Interestingly, that very fragmentation led to a rapid corresponding rise in networking capabilities, because businesses needed workers to be able to share information— and they also needed centralized, secure control of that information. Those needs drove the development of client-server computing, which couples the rich graphical user interface and fast processing of microcomputers with fast centralized databases. Unfortunately, client-server computing, as it was initially conceived, caused several problems. The "fat" client programs were difficult to deploy, install, maintain, and upgrade. What companies needed was a different kind of client application: one that could accept data and application code from the centralized servers but display and let users interact with that data as with the desktop applications they had come to expect. The advent of the World Wide Web and browser technology seemed to promise an answer. In the past several years, we've seen the resurrection of the "thin" client— typically a browser or small executable that retrieves data on demand from a central server much as mainframe terminals did back in the early days of computing. While the new thin clients have much more functionality than their mainframe-terminal counterparts did, they're still not completely satisfying to a public used to the richness of commercial applications such as Microsoft Office, Quicken, and thousands of custom clientserver applications. However, despite these shortcomings, browsers running HTML-based front-ends have changed the world. People and businesses are growing increasingly dependent on location irrelevance. They want to be able to reach any server, anywhere, anytime— and they're well on the road to realizing that desire. Location irrelevance trumps ease-of-use, so browsers and other remote clients are now ubiquitous. Unfortunately, browsers haven't completely replaced the rich desktop client applications. They leave many people feeling as if they've been transported a couple of decades into the past. Browsers work extremely well when delivering static data, such as reports, documents, and images, but considerably less well when they're forced into client-server, form-driven, data-entry roles. The smooth, point-andclick page transitions you experience when browsing the Web often stumble when the application suddenly requires you to enter data. I believe .NET has the capability to change the situation. With the .NET framework, it's possible to create more interactive and responsive centrally located software. At the same time, .NET improves the tools and simplifies the process for building rich clients. Finally, it bridges the two by making it extremely easy to provide both rich and thin clients (remember, you can't be too rich or too thin) with centrally located and managed data, meaning your users can have their familiar graphical controls and behavior, and you can manage the application centrally by having it dynamically update on demand. What's in This Book? This is a book of exploration (mine) as much as it is a book of explication. Microsoft's .NET framework is extremely well designed for such a large and complex entity— but it is both large and complex. The biggest problem I faced during the writing of this book wasn't what to include, but what to leave out, and that is a severe problem. There's so much material I would have liked to include, but time, space, the dramatic changes in the .NET framework and Visual Studio during the early portions of the writing and my own still-immature knowledge of the .NET framework prevented that. The driving force behind this book was the idea that .NET provides a completely new model for building Web applications, as well as two brand-new languages for doing so (C# and VB.NET) and an expanded version of server-side JScript (JScript.NET). For those of you who may be former VB programmers switching to C#, let me get something out of the way. In my opinion, VB.NET is a brand-new language whose only connection to "classic" VB (all earlier versions) is a name and some shared syntax. Other than those elements, everything else has changed. However, you'll find that C# is much closer to the spirit of VB than any other language that uses C-like syntax, and that Visual Studio .NET (VS.NET) makes using C# very straightforward. In fact, after using VB for many years, I came to detest having to code in case-sensitive languages, but due to the Intellisense technology in VS.NET, I haven't been bothered by that at all (yes, C# is a case-sensitive language). If you've been building Web applications already, using any technology, you're way ahead of the average programmer, because you already understand how the Web works. Microsoft has made a huge— probably very successful— effort in Visual Studio and ASP.NET applications to hide how the Web works. Consequently, I've spent a considerable amount of time in this book trying to explain how ASP.NET applications make it so easy. In some ways, ASP.NET and C# are like classic VB— they make it easy to build moderate size, inefficient Web programs in much the same way that VB made it easy to build moderate size, inefficient Windows programs. You see, while Visual Studio .NET and the .NET framework change Web programming, the Web itself hasn't changed one iota due to .NET; it's still the same page-oriented, stateless communication mechanism it's always been. It's easy to forget that when you're building Web applications with C#. I think the biggest danger for Web programmers using .NET is that it does successfully hide complexity behind a rich programming model. However, complexity doesn't disappear just because it's been strained through the colander of Visual Studio. It's still there, hiding in the closet waiting to bite you when you're not looking. Fortunately, .NET not only makes formerly complex tasks easier, but it also gives you the capability to open the closet, grab complexity by the ear, and drag it into the light, where you can see it clearly. After working with .NET for nearly a year during the writing of this book, I'm thoroughly convinced that .NET and similar systems constitute a great improvement in programming. Although you don't absolutely have to have Visual Studio to build the projects in this book, you'll be thoroughly dissatisfied with the book if you don't have Visual Studio. Although Visual Studio combines most Web technology development into a single interface and assists and simplifies writing HTML and other file formats, the litany of technologies you need to know to be a complete Web programmer is still long, and none of them are simple. They are as follows: C# The language you use to build classes, retrieve and manipulate data, and handle events. Hypertext Markup Language (HTML) A formatting/layout language you use to design the user interface. Cascading Style Sheets (CSS) A robust, extensible, and hierarchical method for specifying the visual styles applied to page objects. JavaScript/JScript/ECMAScript A programming language you use to manipulate page objects within a client browser. JScript is Microsoft's proprietary version of ECMAScript. The name JavaScript was initially introduced by Netscape. Note Don't confuse client-side JScript with Microsoft's new JScript.NET language. JScript is to JScript.NET as C# is to C++— the syntax is similar but the languages are different. Extensible Markup Language (XML) A general-purpose markup language used throughout Visual Studio and .NET as a way to hold and manipulate data retrieved from a database; a format for specifying application configuration information; a way to persist data and objects; and a generic data container for passing messages, objects, and data from one component or tier to another. Extensible Stylesheet Language (for Transformations) (XSL/XSLT) An XML vocabulary created for the exclusive purpose of transforming XML documents from one state to another. That state can be from XML to XML, from XML to HTML, from XML to text, or from XML to any other form. XML Schema (XSD) An XML vocabulary created for the exclusive purpose of transforming XML documents from one state to another. That can be XML to XML, XML to HTML, XML to text, XML to PDF documents, or XML to anything else. Document Object Model (DOM) A model for manipulating objects created in a document tree structure. The document can be either XML or HTML. For example, you can use the .NET XML namespace classes to manipulate objects stored within an XML document, whereas you typically use JavaScript to manipulate the DOM objects that make up an HTML page. In some cases, you may even need to use the older COM-based MSXML parser to manipulate XML stored as data islands in Internet Explorer (IE). That parser also exposes DOM objects and methods, although they're slightly different than those in .NET. Dynamic HTML (DHTML) A name for the technology of manipulating objects created in the browser and responding to events raised by those objects or initiated by a user. DHTML-enabled browsers, such as IE and Netscape, let you specify the position, content, and display characteristics of every object within the page. In other words, DHTML lets you take an otherwise static HTML display and make it nearly as responsive as a stand-alone Windows application. In Microsoft's previous Web programming systems (WebClasses in VB 6 and ASP with Visual InterDev), you still had to be able to write raw HTML. Although this version of Visual Studio makes a brave attempt at eliminating the need to know HTML, it hasn't succeeded entirely. Therefore, I've included a short tutorial on HTML because you'll need to know a minimum amount to be able to create C# Web applications. If you've been using FrontPage or Dreamweaver in an effort to avoid learning how to code raw HTML, I recommend that you study the tutorial thoroughly, because unless you're completely comfortable with writing HTML using a text editor, you will have a very hard time writing HTML indirectly using a programming language— and doing so is a requirement for many Web applications. Who Should Read This Book? This book is aimed squarely at beginning Web programmers who are minimally familiar with C# and the .NET framework. You don't have to be an experienced C# programmer to read this book by any means, but you shouldn't be a rank beginner, either. There's neither time nor space to explain the C# language or the frameworkitself other than as it relates to ASP.NET and Web programming. If you've taken an introductory C# programming course, built a couple of C# windows or console applications, or even read through a C#-specific programming book, you won't have much trouble with the code in this book. Beyond a little C#, you don't have to know anything about the Internet, intranets, browsers, HTML, JavaScript, VBScript, XML, XSLT, the DOM, or any other Web-related technology to read this book. This is a beginner book. What you will find here is a thorough basic explanation of the principles of Web programming with C# and ASP.NET and a bit of exposure to each of the other Web technologies you'll need to build robust, scalable Web applications with C#. Why Did I Write This Book? I wrote this book because I'm fascinated with the processes of programming and writing. I've written two other Web programming books: one on WebClass programming with Visual Basic 6, Visual Basic Developer's Guide to ASP and IIS (Sybex, 1999) and one titled Mastering Active Server Pages 3 (Sybex, 2000). Both books sold reasonably well, but that's not why I wrote them, nor is that why I wrote this one. The act of writing this book gave me both a reason and an excuse to explore the technology more broadly than if I had approached .NET simply as a tool to create Web applications— and that broad exploration provided a corresponding breadth and depth of information about the topic that I suspect is nearly impossible to obtain any other way. As I firmly believe that .NET and similar environments are the future of programming, I wanted to evangelize that belief as well as give myself an excuse to work with this technology from the first beta version through the final release. I like learning computer languages. I've been programming for over 20 years now and programming for the Web since before classic ASP became available. Along the way, I've learned and worked with a large number of computer languages. While I am in no way an expert in any programming language or technology and don't pretend to be, I do have extensive experience with Visual Basic, databases, Web programming, XML, XSLT, and the other technologies discussed in this book. My scholastic background is in science and biology, music, computer-based training (CBT), interactive video training (IVT), and most recently, Web-based training (WBT), database applications, and general purpose human resources (HR) Web-based applications. I was a premed student before deciding not to work in the medical field; instead, I worked at the Knoxville, Tennessee, zoo for several years, where I eventually became the head keeper of reptiles under curator John Arnett, working with (at that time) the tenth largest reptile collection in the world. But the strands of my herpetological curiosity eventually wore thin on the sharp edges of poor pay. My musical interests called, and I went back to college as a music major, studying piano and music theory. I first became involved with computers in 1979 when I was an undergraduate piano student at the University of Tennessee and discovered Dr. Donald Pederson's music theory computer lab full of brandnew Apple II microcomputers with— believe it or not— 8K of main memory. Back then, small was not only beautiful— it was imperative. My first program of substance taught people how to recognize and write musical chords— one facet of a class generally known as music theory. That work sparked a fascination with computing that continues to this day. After completing a master's degree in music theory, I attended the University of Illinois to work on a doctorate in secondary education. The university was the site of the first important computer teaching system, called PLATO. As a research assistant, I worked with Dr. Esther Steinberg, author of Teaching Computers to Teach, investigating the relative importance of various interface features for beginning versus expert computer users. After graduating, I worked for InterCom, Inc. building computer-based training programs and HR applications for about 12 years. Toward the end of that time, I began writing technical articles, the first of which were for Fawcette's Visual Basic Programmer's Journal and XML Magazine, and then I began writing books for Sybex. Since 2000, I've worked briefly for the Playwear division of VF Corporation, one of the world's largest clothing manufacturers, and now work for DevX, Inc. (), initially as a Web developer and now as the Executive Editor, where I write, commission, and edit Web-related programming articles in all Web-related technologies. What Will You Learn? This book shows you how to use C# and the ASP.NET framework in a specific way— by using codebehind classes to build Web applications. In classic ASP, you could mix executable code and HTML in the same file. You can still do that in ASP.NET, but the technology described in this book is more like VB6 WebClasses, which used HTML templates in conjunction with a compiled VB-generated DLL. The DLL code could access the HTML templates to "fill" them with data, thus creating a very clean separation between the user interface (the HTML) and the code. Code-behind classes in C# follow that same logic but are considerably easier to use. At the simplest level, you create an HTML template, called a Web Form, that contains the user interface elements. From the Web Form, you reference the code in a class in the code-behind file; finally, you program the contents of the HTML elements from the C# class. Like WebClasses, separating the code that activates the HTML templates from the templates themselves gives you a much cleaner separation. For example, it's very easy, after you have a defined set of user-interface elements, to let HTML designers build an interface and modify that interface by adding static elements or changing the positions and/or the lookand-feel of those elements without interfering with the way the page works. Similarly, you can reuse the user-interface templates, filling them with different data or copying them from one application to the next without having to rebuild the interface. For these reasons, C# Web applications using the ASP.NET framework and code-behind classes are the base technology used in this book. I've devoted roughly half the book to explaining how to use and explore Web Forms, but as I've already mentioned, there are several ancillary technologies that you either must know, such as HTML and CSS, to build Web applications, or should know, or at least be aware of, such as database access with ADO.NET, Web services, caching data, writing components and services, XML, and transforming XML documents with XSLT. How to Read This Book Those who are truly Web beginners should profit from reading the first few chapters of the book, which discusses how the Web works, and has a short HTML tutorial. In contrast, those who already know HTML and CSS or who have classic ASP programming experience can skip sections covering technologies they already know without any problems. Don't treat this book as a reference— it's not. It's a narrative exploration. As you progress through the book, you'll build a relatively large Web application and several related applications in which each individual chapter containing code becomes a subdirectory of the main project. There's no overarching plan to the application; it doesn't "do" anything other than provide a framework for exploration. When you're finished, you'll have a set of Web Forms, as well as some other .NET features such as User Controls, Composite Controls, and Web Services that contain the basic functionality you'll need to build similar features into your applications. Although you can install the sample code from the Sybex website at, I don't recommend you use the book that way. Instead, you should manually type in the code for each chapter. Copy the sample code if you get stuck or encounter problems or errors you can't solve. Along the way, you'll probably find shortcuts and better ways to solve a problem, and you'll discover your own way of working. You'll probably notice some changes in the book code as you go through it as well, where the code to accomplish something— a loop for example— changes during the course of the book. In some cases, those changes are intentional; there are many ways to solve problems, and I've included different examples in the code. There's not always a single most efficient method or the perfect syntax. Some people prefer one syntax; some another. In other cases, the changing code reflects my own changing and growing experience with the .NET framework and the C# language. In still others, the framework itself grew and changed while this book was being written. What's Not in This Book? This book is an exploration of a very specific technology— ASP.NET Web Forms using C# code-behind classes, and it's aimed squarely at the beginning Web developer. The code isn't always fully formed— it's not meant to be copied and reused in production applications; it's designed to teach you how .NET works, so you can build and debug your own production-quality code. Most of the code was written with specific learning points in mind. You shouldn't expect a comprehensive listing of methods and properties. There are a few such lists, but not many. You can find those in the online .NET framework and Visual Studio documentation and in other books. The amount of material that's not in this book would fill many other books— and probably already does. I've concentrated on the basics: building Web applications intended for browser clients. Even with that limitation, however, I have had to omit many interesting and pertinent topics. For example, if you're looking for advanced DataGrid-handling techniques or pointers on how to build commercial custom controls, you won't find it here. If you're looking for a book on using .NET for e-commerce or help with your Web design, this book isn't it. If you are seeking information on how to internationalize your Web application or deliver applications to mobile devices or you want a fully developed reusable application example, look elsewhere. If you want to know how to integrate other Microsoft .NET technologies, such as Passport and MyServices, this book doesn't tell you how. But if you want to explore .NET Web Forms from the code-behind class viewpoint, I hope you'll find this book both interesting and informative. Part I: Basic Web Programming Chapter List Chapter 1: Behind the Scenes: How Web Applications Work Chapter 2: TML Basics Chapter 3: Brief Guide to Dynamic Web Applications Chapter 1: Behind the Scenes — Applications Work How Web Overview Before you can understand much about what a C# application can do, you need to understand what happens with Web requests in general. Because a Web application is often a combination of simple informational HTML pages and more complex dynamic pages, you should understand how the server fulfills requests that don't require code. A considerable amount of background negotiation and data transfer occurs even before the user's request reaches your code. A Web application is inherently split between at least two tiers— the client and the server. The purpose of this chapter is to give you a clearer understanding of how the client and the server communicate. Additionally, you will learn how C# integrates into this communication process and what it can do to help you write Web applications. How Web Requests Work A Web request requires two components, a Web server and a client. The client is (currently) most often a browser, but it could be another type of program, such as a spider (a program that walks Web links, gathering information) or an agent (a program tasked with finding specific information, often using search engines), a standard executable application, a wireless handheld device, or a request from a chip embedded in an appliance, such as a refrigerator. In this book, you'll focus mostly but not exclusively on browser clients; therefore, you can think of the words "browser" and "client" as essentially the same thing for most of the book. I'll make it a point to warn you when the terms are not interchangeable. The server and the browser are usually on separate computers, but that's not a requirement. You can use a browser to request pages from a Web server running on the same computer— in fact, that's probably the setup you'll use to run most of the examples in this book on your development machine. The point is this: Whether the Web server and the browser are on the same computer or on opposite sides of the world, the request works almost exactly the same way. Both the server and the client use a defined protocol to communicate with each other. A protocol is simply an agreed-upon method for initiating a communications session, passing information back and forth, and terminating the session. Several protocols are used for Web communications; the most common are Hypertext Transfer Protocol (HTTP), used for Web page requests; Secure Hypertext Transfer Protocol (HTTPS), used for encrypted Web page requests; File Transfer Protocol (FTP), used to transfer binary file data; and Network News Transfer Protocol (NNTP), used for newsgroups. Regardless of the protocol used, Web requests piggyback on top of an underlying network protocol called Transmission Control Protocol/Internet Protocol (TCP/IP), which is a global communications standard that determines the basic rules two computers follow to exchange information. The server computer patiently waits, doing nothing, until a request arrives to initialize communication. In a Web application, the client always gets to send the initialization to begin a session— the server can only respond. You'll find that this can be a source of frustration if you are used to writing stand-alone programs. Session initialization consists of a defined series of bytes. The byte content isn't important— the only important thing is that both computers recognize the byte series as an initialization. When the server receives an initialization request, it acknowledges the transmission by returning another series of bytes to the client. The conversation between the two computers continues in this back-and-forth manner. If computers spoke in words, you might imagine the conversation being conducted as follows: Client Hello? Server Hello. I speak English. Client I speak English, too. Server What do you want? Client I want the file /mySite/myFiles/file1.htm. Server That file has moved to /mySite/oldFiles/file1.htm. Client Sorry. Goodbye. Server Goodbye. Client Hello? Server Hello. I speak English. Client I speak English, too. Server What do you want? Client I want the file /mySite/oldFiles/file1.htm. Server Here's some information about that file. Client Thanks; please send the data. Server Starting data transmission, sending packet 1, sending packet 2, sending packet 3… Client I got packet 1, packet 2 has errors, I got packet 3, I got packet 4. Server Resending packet 2. The conversation continues until the transmission is complete. Server All packets sent. Client All packets received in good condition. Goodbye. Server Goodbye. TCP/IP is only one of many computer communication protocols, but due to the popularity of the Internet, it has become ubiquitous. You won't need to know much more than that about TCP/IP to use it— the underlying protocol is almost entirely transparent. However, you do need to know a little about how one machine finds another machine to initiate a communications session. How a Client Requests Content When you type a request into a browser address bar or click a hyperlink, the browser packages the request and sends an important portion of the URL, called the domain name to a naming server, normally called a DNS server, typically located at your Internet Service Provider (ISP). The naming server maintains a database of names, each of which is associated with an IP address. Computers don't understand words very well, so the naming server translates the requested address into a number. The text name you see in the link or the address bar is actually a human-friendly version of an IP address. The IP address is a set of four numbers between 0 and 255, separated by periods: for example, 204.285.113.34. Each 3-digit grouping is called an "octet." Each IP address uniquely identifies a single computer. If the first naming server doesn't have the requested address in its database, it forwards the request to a naming server further up the hierarchy. Eventually, if no naming server can translate the requested name to an IP address, the request reaches one of the powerful naming servers that maintain master lists of all the publicly registered IP addresses. If no naming server can translate the address, the failed response travels back through the naming server hierarchy until it reaches your browser. At that point, you'll see an error message. If the naming server finds an entry for the IP address of the request, it caches the request so that it won't have to contact higher-level naming servers for the next request to the same server. The cache times out after a period of time called the Time to Live (TTL), so if the next request exceeds the TTL, the naming server may have to contact a higher-level server anyway, depending on when the next request arrives. The naming server returns the IP address to the browser, which uses the IP address to contact the Web server associated with the address. Many Web pages contain references to other files that the Web server must provide for the page to be complete; however, the browser can request only one file at a time. For example, images referenced in a Web page require a separate request for each image. Thus, the process of displaying a Web page usually consists of a series of short conversations between the browser and the server. Typically, the browser receives the main page, parses it for other required file references, and then begins to display the main page while requesting the referenced files. That's why you often see image "placeholders" while a page is loading. The main page contains references to other files that contain the images, but the main page does not contain the images themselves. How the Web Server Responds— Preparation From the Web server's point of view, each conversation is a brand-new contact. By default, a Web server services requests on a first-come, first-served basis. Web servers don't "remember" any specific browser from one request to another. Modern browsers and servers use version 1.1 of HTTP, which implements keep-alive connections. As you would expect, that means that the connection itself, once made, can be kept active over a series of requests, rather than the server and client needing to go through the IP lookup and initialization steps for each file. Despite keep-alive HTTP connections, each file sent still requires a separate request and response cycle. Parts of a URL The line that you type into the browser address field to request a file is called a Uniform Resource Locator (URL). The server performs a standard procedure to service each request. First, it parses the request by separating the requested URL into its component parts. Forward slashes, colons, periods, question marks, and ampersands— all called delimiters— make it easy to separate the parts. Each part has a specific function. Here's a sample URL request: The following list shows the name and function of each part of the sample URL. http Protocol. Tells the server which protocol it should use to respond to the request. Domain name. This part of the URL translates to the IP address. The domain itself consists of several parts separated by periods: the host name, www; the enterprise domain name, microsoft; and the top-level Internet domain name, com. There are several other top-level Internet domain names, including org (organization), gov (government), and net (network). 80 Port number. A Web server has many ports. Each designates a place where the server "listens" for communications. A port number simply designates one of those specific locations (there are 65,537 possible ports). Over time, the use of specific port numbers has become standardized. For example, I used 80 as the port number in the example, because that's the standard (and default) HTTP port number, but you can have the server listen for requests on any port. CSharpASP Virtual directory. The server translates this name into a physical path on a hard drive. A virtual directory is a shorthand name, a "pointer" that references a physical directory. The name of the virtual and physical directories need not be the same. One way to define virtual directories is through the Web server's administrative interface. Another way to create virtual directories is by creating a new Web application or Web service project in VS.NET. For example, VS.NET creates a virtual directory for you whenever you create a new Web application or a Web service project. default.htm Filename. The server will return the contents of the file. If the file were recognized as executable via the Web server (such as an ASP file) rather than an HTML file, the server would execute the program contained in the file and return the results rather than returning the file contents. If the file is not recognized, the server offers to download the file. ? (Question Mark) Separator. The question mark separates the file request from additional parameters sent with the request. The example URL contains two parameters: Page=1 and Para=2. Page Parameter name. Programs you write, such as ASP pages, can read the parameters and use them to supply information. = (Equals Sign) Separator. The equals sign separates a parameter name from the parameter value. 1 Parameter value. The parameter named Page has a value of 1. Note that the browser sends all parameter values as string data. A string is a series of characters: A word is a string, a sentence is a string, a random sequence of numbers and letters is a string— text in any form is a string. Your programs are free to interpret strings that contain only numeric characters as numbers, but to be safe, you should cast or change them to numeric form. & (Ampersand) Separator. The ampersand separates parameter=value pairs. Para=2 Parameter and value. A second parameter and value. Server Translates the Path You don't make Web requests with "real" or physical paths; instead, you request pages using a virtual path. After parsing the URL, the server translates the virtual path to a physical pathname. For example, the virtual directory in the URL is myPath. The myPath virtual directory maps to a local directory such as c:\inetpub\wwwroot\CSharpASP\myFile.asp or to a network Universal Naming Convention (UNC) name such as \\someServer\somePath\CSharpASP\myFile.asp. Server Checks for the Resource The server checks for the requested file. If it doesn't exist, the server returns an error message— usually HTTP 404 -- File Not Found. You've probably seen this error message while browsing the Web; if not, you're luckier than I am. Server Checks Permissions After locating the resource, the server checks to see if the requesting account has sufficient permission to access the resource. By default, Internet Information Server (IIS) Web requests use a special guest account called IUSR_Machinename, where Machinename is the name of the server computer. You'll often hear this called the "anonymous" account, because the server has no way of knowing any real account information for the requesting user. For ASP.NET pages, IIS uses the SYSTEM account or another guest account named aspnet_wp_account (ASPNET) by default. For example, if the user has requested a file for which that account has no read permission, the server returns an error message, usually HTTP 403 -- Access Denied. The actual error text depends on the exact error generated. For example, there are several sublevels for 403 error messages. You can find a complete list of error messages in the IIS Default Web Site Property dialog. Web servers provide default error messages but usually allow you to customize them. By default, IIS reads error message text from the HTML files in your %SystemRoot%\ help\common\ directory, where the variable % SystemRoot% stands for the name of your NT directory, usually named winnt. How the Web Server Responds— Fulfillment Graphics files, Word documents, HTML files, ASP files, executable files, CGI scripts— how does the server know how to process the requested file? Actually, servers differentiate file types in a couple of different ways. Internet Information Server (IIS) differentiates file types based on file extensions (such as .asp, .htm, .exe, and so on) just like Windows Explorer. When you double-click a file or icon in Windows Explorer, it looks up the file extension in the Registry, a special database that holds system and application information. The Registry contains one entry for each registered file extension. Each extension has an associated file type entry. Each file type entry, in turn, has an associated executable file or file handler. The server strips the file extension from the filename, looks up the associated program, and launches that program to return the file. IIS follows the same series of steps to determine how to respond to requests. Other Web servers also use file extensions to determine how to process a file request, but they don't use Registry associations. Instead, they use an independent list of file extension–to–program associations. The entries in these lists are called MIME types, which stands for Multipurpose Internet Mail Extension, because e-mail programs needed to know the type of content included with messages. Each MIME type— just like the Registry associations— is associated with a specific action or program. The Web server searches the list for an entry that matches the file extension of the requested file. Most Web servers handle unmatched file extensions by offering to download the file to your computer. Some servers also provide a default action if you request a URL that doesn't contain a filename. In this case, most servers try to return one of a list of default filenames— usually a file called either default.htm or index.htm. You may be able to configure the default filename(s) for your Web server (you can with IIS), either globally for all virtual directories on that server or for each individual virtual directory on that server. The server can begin streaming the response back to the client as it generates the response or it can buffer the entire response and send it all at once when the response is complete. There are two parts to the response: the response header and the response body. The response header contains information about the type of response. Among other things, the response header can contain the following: n A response code n The MIME type of the response n The date and time after which the response is no longer valid n A redirection URL n Any cookie values that the server wants to store on the client Cookies are text strings that the browser saves in memory or on the client computer's hard drive. The cookie may last for the duration of the browser session or it may last until a specified expiration date. The browser sends cookies associated with a site back to the server with each subsequent request to that site. Note There's a lot of hype in the media about cookies. Some people have been so intimidated by these scare tactics that they use their browser settings to "turn off cookies." That means the browser will not accept the cookies, which can have a major impact on your site because you must have some way to associate an individual browser session with values stored on the server tier in your application. While methods exist for making the association without using cookies, they're not nearly as convenient, nor do they persist between browser sessions. What the Client Does with the Response The client, usually a browser, needs to know the type of content with which the server has respond-ed. The client reads the MIME type header to determine the content type. For most requests, the MIME type header is either text/html or an image type such as image/gif, but it might also be a word processing file, a video or audio file, an animation, or any other type of file. Browsers, like servers, use Registry values and MIME type lists to determine how to display the file. For standard HTML and image files, browsers use a built-in display engine. For other file types, browsers call upon the services of helper applications or plug-ins, such as RealPlayer, or Microsoft Office applications that can display the information. The browser assigns all or part of its window area as a "canvas" onto which the helper program or plug-in "paints" its content. When the response body consists of HTML, the browser parses the file to separate markup from content. It then uses the markup to determine how to lay out the content on-screen. Modern HTML files may contain several different types of content in addition to markup, text, and images; browsers handle each one differently. Among the most common additional content types are the following: Cascading Style Sheets These are text files in a specific format that contain directives about how to format the content of an HTML file. Modern browsers use Cascading Style Sheet (CSS) styles to assign fonts, colors, borders, visibility, positioning, and other formatting information to elements on the page. CSS styles can be contained within a tag, can be placed in a separate area within an HTML page, or can exist in a completely separate file that the browser requests after it parses the main page but before it renders the content on the screen. Script All modern browsers can execute JavaScript, although they don't always execute it the same way. The term JavaScript applies specifically to script written in Netscape's JavaScript scripting language, but two close variants— Microsoft's JScript scripting language and the ECMA-262 specification (ECMAScript)— have essentially the same syntax and support an almost identical command set. Note Note that the JScript scripting language is distinct from JScript.NET— another, much more robust version of JScript that Microsoft released as an add-on to Visual Studio.NET. In addition to JScript, Internet Explorer supports VBScript, which is a subset of Visual Basic for Applications, which, in turn, is a subset of Microsoft's Visual Basic (pre-VB.NET) language. Note You can find the complete ECMA-262 specification at. ActiveX Components or Java Applets These small programs execute on the client rather than the server. ActiveX components run only in Internet Explorer on Windows platforms (roughly 60 percent of the total market, when this book was written), whereas Java applets run on almost all browsers and platforms. XML Extensible Markup Language (XML) is similar to HTML— both consist of tags and content. That's not surprising, because both are derived from Standard Generalized Markup Language (SGML). HTML tags describe how to display the content and, to a limited degree, the function of the content. XML tags describe what the content is. In other words, HTML is primarily a formatting and display language, whereas XML is a contentdescription language. The two languages complement each other well. XML was first used in IE 4 for channels, a relatively unsuccessful technology that let people subscribe to information from various sites. IE4 had a channel bar to help people manage their channel subscriptions. With IE 5, Microsoft dropped channels but extended the browser's understanding of and facility with XML so that today you can use it to provide data "islands" in HTML files. You can also deliver a combination of XML and XSL/XSLT (a rules language written in XML that's similar in purpose to Cascading Style Sheets but more powerful) to generate the HTML code on the client. The XML/XSL combination lets you offload processing from the server, thus improving your site's scalability. Netscape 6 offers a different and— for display purposes— more modern type of support for XML. Netscape's parsing engine can combine XML and CSS style sheets to format XML directly for viewing. Unfortunately, Netscape doesn't directly support XSLT transformations, so you're limited to displaying the data in your XML documents without intermediate processing. Introducing Dynamic Web Pages The client-to-server-to-client process I've just described is important because it happens each time your client contacts the server to get some data. That's distinctly different from the stand-alone or clientserver model you may be familiar with already. Because the server and the client don't really know anything about one another, for each interaction, you must send, initialize, or restore the appropriate values to maintain the continuity of your application. As a simple example, suppose you have a secured site with a login form. In a standard application, after the user has logged in successfully, that's the only authentication you need to perform. The fact that the user logged in successfully means that he's authenticated for the duration of the application. In contrast, when you log in to a Web site secured by only a login and password, the server must reauthenticate you for each subsequent request. That may be a simple task, but it must be performed for every request in the application. In fact, that's one of the reasons dynamic applications became popular. In a site that allows anonymous connections (like most public Web sites), you can authenticate users only if you can compare the login/password values entered by the user with the "real" copies stored on the server. While HTML is an adequate layout language for most purposes, it isn't a programming language. It takes code to authenticate users. Another reason that dynamic pages became popular is because of the ever-changing nature of information. Static pages are all very well for articles, scholarly papers, books, and images— in general, for information that rarely changes. But static pages are simply inadequate to capture employee and contact lists, calendar information, news feeds, sports scores— in general, the type of data you interact with every day. The data changes far too often to maintain successfully in static pages. Besides, you don't always want to look at that data the same way. I realize I'm preaching to the choir here— you wouldn't have bought this book if you weren't aware that dynamic pages have power that static HTML pages can't match. But it's useful to note that even dynamic data usually has a predictable rate of change— something I'll discuss later in the context of caching. How Does the Server Separate Code from Content? In classic Active Server Pages (ASP), you could mix code and content by placing special code tags (<% %>) around the code or by writing script blocks, where the code appeared between <script> and </script> tags. Classic ASP uses an .asp filename extension. When the server receives a request for an ASP file, it recognizes— via the extension associations— that responding to the request requires the ASP processor. Therefore, the server passes the request to the ASP engine, which parses the file to differentiate the code tag content from the markup content. The ASP engine processes the code, merges the results with any HTML in the page, and sends the result to the client. ASP.NET goes through a similar process, but the file extension for ASP.NET files is .aspx rather than .asp. You can still mix code and content in exactly the same way, although now you can (and usually should) place code in a separate file, called a code-behind class, because doing so provides a cleaner separation between display code and application code and makes it easier to reuse both. In ASP.NET, you can write code in all three places— in code-behind classes and also within code tags and script blocks in your HTML files. Nevertheless, the ASP.NET engine still must parse the HTML file for code tags. How and When Does the Server Process Code? The ASP.NET engine itself is an Internet Server Application Programming Interface (ISAPI) application. ISAPI applications are DLLs that load into the server's address space, so they're very fast. Different ISAPI applications handle different types of requests. You can create ISAPI applications for special file extensions, such as .asp or .aspx, or to perform special operations on standard file types such as HTML and XML. There are two types of ISAPI applications: extensions and filters. The ASP.NET engine is an ISAPI extension. An ISAPI extension replaces or augments the standard IIS response. Extensions load on demand when the server receives a request with a file extension associated with the ISAPI extension DLL. In contrast, ISAPI filters load with IIS and notify the server about the set of filter event notifications that they handle. IIS raises an event notification (handled by the filter) whenever a filter event of that type occurs. Note You can't create ISAPI applications with C#— or indeed in managed code— although you can create them in Visual Studio.NET using unmanaged C++ and the Active Template Library (ATL). However, you can override the default HttpApplication implementation to provide many of the benefits of ISAPI applications using C#. ASP.NET pages bypass the standard IIS response procedure if they contain code tags or are associated with a code-behind class. If your ASPX file contains no code, the ASP.NET engine recognizes this when it finishes parsing the page. For pages that contain no code, the ASP.NET engine short-circuits its own response, and the standard server process resumes. With IIS 5 (ASP version 3.0), classic ASP pages began short-circuiting for pages that contained no code. Therefore, ASP and ASPX pages that contain no code are only slightly slower than standard HTML pages. How Do Clients Act with Dynamic Server Pages? How do clients act with dynamic server pages? The short answer is this: They act no differently than with any other request. Remember, the client and the server know very little about one another. In fact, the client is usually entirely ignorant of the server other than knowing its address, whereas the server needs to know enough about the client to provide an appropriate response. Beginning Web programmers are often confused about how clients respond to static versus dynamic page requests. The point to remember is that, to the client, there's no difference between requesting a dynamic page and requesting a static page. For example, to the client there's no difference between requesting an ASPX file and requesting an HTML file. Remember, the client interprets the response based on the MIME type header values— and there are no special MIME types for dynamically generated files. MIME type headers are identical whether the response was generated dynamically or read from a static file. When Is HTML Not Enough? I mentioned several different types of MIME type responses earlier in this chapter. These types are important because, by itself, HTML is simply not very powerful. Fortunately, you're getting into Web programming at the right time. Browsers are past their infancy (versions 2 and 3), through toddlerhood (version 4), and making progress toward becoming application delivery platforms. While they're not yet as capable as Windows Forms, they've come a long way in the past five years and are now capable of manipulating both HTML and XML information in powerful ways. All of these changes have occurred because HTML is a layout language. HTML is not a styling language; therefore, CSS became popular. HTML is not a graphics description or manipulation language; therefore, the Document Object Model (DOM) arose to let you manipulate the appearance and position of objects on the screen. HTML is not a good language for transporting or describing generalized data; therefore, XML is rapidly becoming an integral part of the modern browser's toolset. Finally and, for this book, most importantly, HTML is not a programming language. You must have a programming language to perform validity checks and logical operations. Modern browsers are partway there; they (mostly) support scripting languages. In Internet Explorer 5x and, to a lesser degree, Netscape 6x, all these technologies have become intertwined. You can work with XML through CSS or XSL/XSLT. You can use the DOM to change CSS styles and alter the appearance of objects dynamically. You can respond to some user events with CSS directly (like changing the cursor shape), and you can respond to or ignore almost all user events through script. What C# Can Do Since you're about to commit yourself to programming the latest server-side technology for creating dynamic Web applications, you should know what C# can do. Surprisingly, when you break Web programming down into its constituent parts, there's very little difference between Web programming and standard applications programming. Make If/Then Decisions If/Then decisions are the crux of all programming. C# can make decisions based on known criteria. For example, depending on whether a user is logged in as an administrator, a supervisor, or a line worker, C# can select the appropriate permission levels and responses. Using decision-making code, C# can deliver some parts of a file but not others, include or exclude entire files, or create brand-new content tailored to a specific individual at a specific point in time. Process Information from Clients As soon as you create an application, you'll need to process information from clients. For example, when a user fills out a form, you'll need to validate the information, possibly store it for future reference, and respond to the user. With C#, you have complete access to all the information that clients send, and you have complete control over the content of the server's response. You can use your existing programming knowledge to perform the validation, persist data to disk, and format a response. Beyond giving you the programming language to do these tasks, C# Web applications provide a great deal of assistance. C# Web applications use the ASP.NET framework to help you validate user input. For example, you can place controls on the screen that can ensure that a required field contains a value, and automatically check whether that value is valid. C# Web applications provide objects that simplify disk and database operations and let you work easily with XML, XSLT, and collections of values. With C#, you can write server-side code that behaves as if it were client-side script. In other words, you can write code that resides on the server but responds to client-side events in centralized code rather than in less powerful and difficult-to-debug client-side script. ASP.NET helps you maintain data for individual users through the Session object, reduce the load on your server through caching, and maintain a consistent visual state by automatically restoring the values of input controls across round trips to the server. Access Data and Files In most applications, you need to read or store permanent data. In contrast to previous versions of ASP, ASP.NET uses the .NET framework to provide very powerful file access. For example, many business applications receive data, usually overnight, from a mainframe or database server. Typically, programmers write special scheduled programs to read or parse and massage the new data files into a form suitable for the application. Often, major business disruptions occur when something happens so that the data files are late or never appear. Similarly, have you ever written a program that created a file and later tried to access it only to find that the user had deleted or moved the file in the interim? I know— you're sure to have written defensive code so that your program could recover or at least exit gracefully, right? Many applications would be much easier to write and maintain if the program itself could interoperate with the file system to receive a notification whenever the contents of a specific directory changed. For example, if you could write code that started a data import process whenever data arrived from the mainframe, you could avoid writing timing loops that check for the appearance of a file or scheduling applications that run even though the data may not be available. Similarly, if you could receive a notification before a user deleted that critical file, you could not only avoid having to write the defensive code but also prevent the problem from occurring in the first place! You'll find that you can perform these types of tasks much easier using C# than you could in earlier versions of any programming language. You'll find that the most common file and database operations are simpler (although wordier) in C#. For example, one of the more common operations is to display the results of a database query in an HTML table. With VBScript or JScript code in a classic ASP application, you had to loop through the set of records returned by the query and format the values into a table yourself. In C#, you can retrieve a dataset and use a Repeater control to perform the tedious looping operation. Format Responses Using XML, CSS, XSLT, and HTML As I said earlier, you have complete control of the response returned by your application. Until recently, Web applications programmers needed to worry only about the browser and version used by the application's clients, but now an explosion of other Web client types has complicated things. Handheld devices, dedicated Internet access hardware, pagers, Web-enabled telephones, and an ever-increasing number of standard applications are raising the formatting requirements beyond the capability of humans to keep up. In the past, for most pages with simple HTML and scripting needs, you could usually get away with two or three versions of a page— one for complete idiot browsers without any DHTML or scripting ability, one for Netscape 4, and one for IE 4 and higher. But as the number and type of clients expand, creating hand-formatted HTML pages for each new type of client becomes a less and less viable and palatable option. Fortunately, the wide and growing availability of CSS and XML is a step in the right direction. Using CSS styles, you can often adjust a page to accommodate different resolutions, color depth, and availability. But CSS styles only affect the display characteristics of content— you can't adjust the content itself for different devices using CSS alone. However, through a combination of XML, CSS, and XSLT, you can have the best of both worlds. XML files hold the data, XSLT filters the data according to the client type, and CSS styles control the way the filtered data appears on the client's screen. Visual Studio helps you create all these file types, and C# lets you manipulate them programmatically. The end result is HTML tailored to a client's specific display requirements. Launch and Communicate with .NET and COM+ Objects For the past year or two, the most scalable model for ASP has been to use ASP pages as little more than HTML files that could launch COM components hosted in Microsoft Transaction Server (MTS) or in COM+ applications. Microsoft termed this model Windows DNA. If you've been building applications using that model, you'll find that little has changed except that it's now much easier to install, move, rename, and version components. Of course, that's not such a small change. Until .NET, you had to use C++ or Delphi to create free-threaded COM objects suitable for use in Web applications. (To be completely honest, some people did write code that let VB use multiple threads, but it wasn't a pretty sight, nor was it a task for programmers with typical skills.) Multithreading may not seem like such a big deal if you've been writing stand-alone applications. After all, most stand-alone and client-server applications don't need multithreading. However, in the Web world, it is a big deal. Web applications almost always deal with multiple simultaneous users, so for .NET to be a language as suitable for Web applications as Java, it had to gain multithreading capabilities. Many classic ASP programmers migrated from classic VB, and so they naturally tended to use that language to generate components. Unfortunately, VB5/6–generated DLLs were apartment threaded. Without going into detail, this meant that Web applications couldn't store objects written using VB5/6 across requests without causing serious performance issues. C#-generated objects are inherently free threaded, so your Web applications can store objects you create with C# across requests safely. Of course, you still have to deal with the problems caused by multiple threads using your objects simultaneously, but you can mark specific code sections as critical, thus serializing access to those sections. But that's a different story. C# also lets you access legacy COM DLLs, so you can use existing binary code without rewriting it in a .NET language. There's some debate over exactly how long you'll be able to do this. Personally, I think you have several years' grace to upgrade your COM DLLs to .NET. To use an existing COM DLL in .NET, you "import" the type library. One way to do this is by using the TlbImp.exe utility, which creates a "wrapper" for the class interface through which you can call the methods and properties of the class. Of course, there's a slight performance penalty for using a wrapper for anything, but that's often acceptable when the alternative is rewriting existing and tested code. You can just as easily go in the opposite direction and export .NET assemblies for use with unmanaged C++, VB5/6, Delphi, or any COM-compliant language. To do that, you use the TlbExp.exe utility. This utility creates a type library but doesn't register it. Although TlbExp is easier to remember (it's the opposite of TlbImp), another utility, called RegAsm.exe, can both register and create a type library at the same time. Use the /tlb flag with RegAsm.exe to tell the utility to create the type library file. You can also use RegAsm.exe to create a REG (registration) file rather than actually registering the classes in your assembly, which is useful when you're creating setup programs to install application code on another machine. Advantages of C# in Web Applications C# is an extremely powerful tool for building applications for the Windows platform (and maybe someday soon for other operating systems as well). But it's certainly not the only tool for building applications. There's very little C# can do that older languages can't do if you're willing to delve deeply enough into the API or write enough code. However, by providing built-in support for certain kinds of applications, for memory management, and for object-oriented development, C# greatly reduces the effort involved in building them. Web Services A Web service is nothing more than a Web interface to objects that run on the server. Wait, you say, isn't that the same as Distributed COM (DCOM)? Not exactly, but it's similar. DCOM lets your applications launch and use remote applications and DLLs as if they were running on the local machine. It does this by creating proxy "stubs" on both sides of the transaction. DCOM wraps up the function, subroutine, method, or property call from your local application, along with any accompanying parameters, and forwards them over the network to a receiving stub on the server. The server stub unwraps the values, launches the object or application (if necessary), and makes the call, passing the parameters. The reverse operation occurs with return values. DCOM uses a highly efficient binary wrapper to send the data over the network. DCOM was created in an era when remote calls came from machines that resided on a hard-wired proprietary network. As companies began to use the public Internet for business purposes, the network was no longer proprietary; instead, DCOM calls had to cross the boundary between the public network and the private corporate network. However, letting binary data cross that boundary is inherently dangerous because you can't know what the data will do. For example, the data may contain viral programs. Therefore, companies also put up firewalls that prevent binary data from crossing the boundary. Text data, like HTML, can cross the boundary unobstructed, but binary data cannot. Unfortunately, that had the side effect of preventing DCOM from operating easily through the firewall, because the firewalls are generally unable to differentiate between potentially unsafe public binary data and perfectly safe DCOM binary data. Web services solve that problem. Web services perform exactly the same tasks as DCOM— they let you use remote objects. However, they typically use a different system, called the Simple Object Access Protocol (SOAP), to wrap up the call and parameter data. SOAP is a text file format. It uses XML to simplify the syntax for identifying the various types of data values needed to make generic remote calls. Because SOAP is a text file, it can cross firewall boundaries. However, SOAP is not a requirement for making remote calls; it's simply a standardized and therefore convenient method for doing so. In other words, you're perfectly free to write your own remoting wrapper— but if you do that, you'll need to create your own translation functions as well. C# and Visual Studio have extensive support for SOAP. In fact, using SOAP in C# is transparent; the .NET framework takes care of all the value translation and transport issues, leaving you free to concentrate on building the applications themselves. The process for building a Web service is extremely similar to the process for building a COM DLL— or for that matter, writing any other .NET code, because all you need to do to expose a method or an entire class as a Web service is add attributes— bits of metadata that contain information about the code. The biggest problem with Web services and SOAP is performance; it's simply not as efficient to translate values to and from a text representation as it is to translate them to and from a binary format like those used by DCOM and CORBA. Nevertheless, in a dangerous world, SOAP is a necessary evil, and I think you'll be pleasantly surprised by how fast Web services work. While the actual performance difference is certainly measurable, the perceived performance difference is negligible unless you're performing a long series of remote calls within a loop (and you should avoid that with any remote technology). Thin-Client Applications (Web Forms) C# works in concert with ASP.NET to let you build Web Form–based applications. A Web Form, as you'll see in Chapters 4, "Introduction to ASP.NET," and 5, "Introduction to Web Forms," is an HTML form integrated with C# (or any of the multitude of .NET languages sure to appear soon) code. If you're familiar with Active Server Pages (ASP), JavaServer Pages (JSP), or PHP Hypertext Processor (PHP), you'll quickly feel comfortable with C# Web applications and Web Forms. If you haven't written Web applications using one of these technologies, you're lucky to be entering the Web application field now rather than earlier, because C# makes building Web applications similar to building Windows applications. You build Web Forms by dragging and dropping controls onto a form design surface. After placing a control, you can double-click it to add code to respond to the control's events. Web Forms support Web analogs of most of the familiar Windows controls such as text controls, labels, panel controls, and list boxes. They even support invisible controls such as timers. The convenience of Web Forms aside, you're still building browser-based or thin-client applications, so you can expect to lose some of the functionality that you get with Windows clients. However (and I think this is the most important change you'll see with .NET), you're no longer limited to thin-client Web applications. By combining Windows clients with Web services, you can build rich-client applications almost as easily. In fact, the technology makes it simple to build both types of applications— and serve them both with a common centralized code base. Rich-Client Applications (Windows Forms) It may seem odd that I've included Windows Forms applications in a book about building Web applications, but I can assure you that it won't seem odd by the time you finish the book. The distinction between rich-client and thin-client applications is diminishing rapidly. As browsers add features, they get fatter, and as Windows Forms applications gain networking capability, they become more capable of consuming Web-based services. The result is that the only real decision to be made between a Web Form and a Windows Forms application is whether you can easily deliver the Windows Forms application code to the client base or if you must rely on the functionality of whatever browser or "user agent" is already installed on the client machines. You'll build both types of applications in this book. You'll see the differences in application design and distribution, and then you can decide for yourself. Summary You've seen that clients communicate with the Web server in short transactional bursts. Client requests are typically made anonymously, so you must plan and code for security and authentication if your application deals with sensitive data. Between requests, the server "forgets" about the client, so unless you force the client to pass a cookie or some other identifying token for each request, the server assumes the client is brand new. Web applications use these identifying tokens to associate data values with individual browsers or (with secured sites) individual users. The strategy you select for maintaining these data values across requests is called "state maintenance," and it's the single most difficult problem in building Web applications. C# helps simplify the process of building Web applications through Web Forms, Web services, robust networking abilities, and tight integration with ASP.NET, which provides the infrastructure for servicing Web requests. Despite the existence of Visual Studio's Web Form editor, there's still an advantage to learning the underlying language used to create Web Forms— HTML. Fortunately, as a programmer accustomed to memorizing complex code operations, you'll find that HTML is straightforward and simple. You can learn the basics of HTML in about half an hour. In Chapter 2, "HTML Basics," you'll get my half-hour tour of HTML, which should be sufficient for you to understand the HTML code you'll see in the rest of this book. If you already know HTML, you can browse through this as a review or simply skip it and begin reading again at Chapter 3, "Brief Guide to Dynamic Web Applications." Chapter 2: HTML Basics Overview This chapter contains a half-hour tour to teach you the basics of the Hypertext Markup Language (HTML) structure and editing. If you already know HTML, you can probably skip this chapter and move directly to Chapter 3, "Brief Guide to Dynamic Web Applications." If you're not already comfortable with HTML, you should read this chapter and practice creating HTML files using the included files as a starting point. You should feel reasonably comfortable with HTML before you begin creating C# Web applications. HTML is a simple idea that, like many simple ideas, you can apply, combine, and extend to build very complex structures. What Is HTML? HTML is a markup language, although the original intent was to create a content description language. It contains commands that, like a word processor, tell the computer— in a very loose sense— what the content of the document is. For example, using HTML, you can tell the computer that a document contains a paragraph, a bulleted list, a table, or an image. The HTML rendering engine is responsible for actually displaying the text and images on the screen. The difference between HTML and word processors is that word processors work with proprietary formats. Because they're proprietary, one word processor usually can't read another word processor's native file format directly. Instead, word processors use special programs called import/export filters to translate one file format to another. In contrast, HTML is an open, worldwide standard. If you create a file using the commands available in version 3.2, it can be displayed on almost any browser running on almost any computer with any operating system— anywhere in the world. The latest version of HTML, version 4.0, works on about 90 percent of the browsers currently in use. HTML is a small subset of a much more full-featured markup language called Standard Generalized Markup Language (SGML). SGML has been under development for about 15 years and contains many desirable features that HTML lacks, but it is also complex to implement. This complexity makes it both difficult to create and difficult to display properly. HTML was developed as an SGML subset to provide a lightweight standard for displaying text and images over a slow dial-up connection— the World Wide Web. Originally, HTML had very few features— it has grown considerably in the past few years. Nevertheless, you can still learn the core command set for HTML in just a few hours. HTML contains only two kinds of information: markup, which consists of all the text contained between angle brackets (<>), and content, which is all the text not contained between angle brackets. The difference between the two is that browsers don't display markup; instead, markup contains the information that tells the browser how to display the content. For example, this HTML: <html> <head><title></title></head> <body> </body> </html> is a perfectly valid HTML file. You can save that set of commands as a file, navigate to it in your browser, and display the file without errors— but you won't see anything, because the file doesn't contain any content. All the text in the file is markup. In contrast, a file with the following content contains no markup: This is a file with no markup Although most browsers will display the contents of a file with no markup, it is not a valid HTML file. The individual parts of the markup between the brackets are tags, sometimes called commands. There are two types of tags— start tags and end tags, and they usually appear in pairs (although they may be widely separated in the file). The single difference is that the end tag begins with a forward slash, for instance </html>. Other than the forward slash, start tags and end tags are identical. What Does HTML Do? HTML lets you create semistructured documents. The heading commands separate and categorize sections of your documents. HTML also has rudimentary commands to format and display text, display images, accept input from users, and send information to a server for back-end processing. In addition, it lets you create special areas of text or images that, when clicked, jump— or hyperlink— from one HTML file to another, thus creating an interlinked series of pages. The series of pages you create via hyperlinks is a program; however, it isn't a program like the ones you'll learn to create in this book because a series of pages has no intelligence and makes no decisions. All the functionality resides in the tag set selected by the HTML author (people whose primary task is creating HTML documents are called authors, not programmers). A series of pages linked together in a single directory or set of directories is called a site, or a Web site. Despite the lack of decision-making capability, a Web site serves two extremely useful purposes: n It provides a way for non-programmers to create attractive sites full of useful information. (Of course, it also provides a way for people to create unattractive sites full of useless information, but I won't pursue that.) n In conjunction with the Internet, Web sites make that information available globally. Why Is HTML Important? Until HTML, it wasn't so easy to create screens full of information containing both text and graphics that anyone could read using any operating system. In fact, there was no easy way to display anything without either writing a program yourself or using a presentation program such as PowerPoint. This limitation meant that the output was available only to other people using the same operating system and the same program— often only to those using the same version of the program. HTML is important because it provided millions of people with access to information online that they could not or would not have seen any other way. HTML was the first easy method for non-programmers to display text and images on-screen without limiting the audience to those who own or have access to the same program (or a viewer) that the author used to create the content. In a sense, browsers are universal content viewers, and HTML is a universal file format. In fact, HTML and plain text were the only universal file formats until recently; however, we have now added XML, which solves many problems with representing information that plain text and HTML do not address. The Limitations of HTML Despite its popularity, its availability, and the fact that it is a universal file format, HTML has some serious limitations as a way of creating structured documents, as a layout language, and as a file format. First, plain HTML has no way to specify the exact position of content on a page, whether horizontally, vertically, or along the z-axis, which controls the "layer" in which objects appear. Second, HTML, as I've said already, is not a programming language; it has no decision-making capabilities. Third, HTML is a fixed markup language. In other words, the tags are predefined, and you can't make up your own. The World Wide Web Consortium, a standards body more commonly known as the W3C, defines the tags that make up HTML. Unless the W3C extends the standard, the tag set never changes. This is both good and bad. It's good because most browsers can display most HTML. It's also bad, because the limited command set encourages— no, forces— companies to build proprietary extensions to perform more advanced functions. Many of the useful concepts available in HTML today, such as forms, tables, scripts, frames, and Cascading Style Sheets (CSS), began as proprietary extensions but were later adopted and standardized by the (W3C) (see for more information). These extensions eventually became common usage, forcing the W3C to reevaluate and update the HTML standard. Through this extension and revisions process, many once-proprietary extensions have now become part of the standard HTML command set. Because of this, HTML has gone through several standard versions, the latest being HTML 4.01. Syntax: Tags and Attributes A valid HTML file has only a few requirements. Look at the following example: <html> <head> <title>Hello World</title> </head> <body>Hello World </body> </html> This example contains both tags and content. A tag is text enclosed in angle brackets (<>). If you look at the file in a browser, you'll see that it looks similar to Figure 2.1. Figure 2.1: Hello World file (HelloWorld.htm) The HelloWorld.htm file is a short— but complete— HTML file. All HTML files begin with an <html> tag and end with a </html> tag (read "end html" or "close html"). Between those two tags are other tags as well as content, so <html> tags can contain other tags. Tags that contain other tags are called, appropriately enough, containing tags, or more properly, block elements. I'll use the term block elements in this book to mean a tag that can contain other tags. Note that the <head></head> tag is also a block element; among other things, it contains a <title></title> tag. HTML tags have two parts— a start tag and an end tag. Although not all browsers require you to write the end tag in all cases, you should immediately get into the habit of doing so. As you move into XML (and you probably will want to move into XML at some point), the end tags are required in all cases. At this point, I'm going to stop writing both the start and end tags in the text every time I refer to a tag. For example, rather than writing <head></head> every time I need to refer to that tag, I'll just write <head>. You can assume that the end-head tag is present. Note HTML files are text files. They contain only two types of items: commands (also called tags or markup) and content. You can edit an HTML file with any text editor. I tend to use Notepad for small, quick edits and an HTML-aware text editor for larger files, such as the Visual Studio HTML editor, HomeSite, FrontPage, or Dreamweaver, because those editors color-code tags, insert end tags automatically, provide predictive syntax help via IntelliSense or tag/attribute lists, and provide many other helpful editing features. What Is a Tag? You can think of tags in several ways, depending on your interest in the subject matter. For example, one way to think of a tag is as an embedded command. The tag marks a portion of text for special treatment by the browser. That treatment may be anything from "make the next character bold" to "treat the following lines as code." Another way to think of tags is as containers for hidden information. The browser doesn't display information inside the tags. In fact, if the browser doesn't understand the tag type, it ignores it altogether, which is extremely convenient if you need a place to hold information that you don't want the browser to display on-screen. Yet a third way to think about tags is as objects. A <p> tag, for example, contains a single paragraph. A paragraph has properties— an indent level, a word or character count, a style— I'm sure you have run across programs that treat paragraphs as objects when using a word processor. What Is an End Tag? The end tag simply marks the end of the portion of the document influenced by the tag. Computers aren't very smart— once you turn on bold text, it's on until you explicitly turn it off. Just a warning: Most browsers will allow you to skip some of the most common end tags, but take my advice and don't skip them. In the future, you're likely to want to convert some of those documents to XML— and in XML, the end tags are required. Why Does HTML Look Like <THIS>? The bracketed commands used in HTML have a long history. HTML inherited its syntax from SGML, but that's not the only use for bracketed commands. I first saw them used in XyWrite in the late 1980s. XyWrite was a word processor that was most popular with journalists precisely because it used HTMLlike embedded commands. The reason it was so popular is bound up in bits and bytes, but it's an interesting story, so bear with me. Each character you type on a computer is associated with a specific number. There are several different sets of these numbers for different computer systems, but the most common, even today, is called ASCII (American Standard Code for Information Interchange). For example, the ASCII value of a capital A is 65, the value of a space is 32, and the value of a zero is 48. The computer doesn't represent numbers as you do— it performs binary arithmetic. For historical reasons, most modern microcomputers work with bits in multiples of eight. Each set of 8 bits is called a byte— and a byte can hold 256 unique values, enough for the alphabet, the numbers and punctuation, some control characters, some accented characters, and a few lines suitable for drawing simple images. All the visible characters have a value below 128. Most file types, including word processors of that time, use the upper range of characters as embedded commands. For example, a file format might use 157 as a marker for the beginning of a paragraph and 158 as the marker for the end of the paragraph. The reason for this is that files were much smaller if commands could be limited to one or two characters— and those characters weren't used in most text. You have to remember that at that time, memory was expensive and in limited supply. In contrast, the smallest possible XyWrite command was three characters long, and many people thought that was a waste of space. Back to the story… Reporters were among the first to use electronic computer communications to send files over the telephone system. Early versions of the communications programs could use only seven of the bits for content— the last bit was a stop bit. It turned out that they couldn't use programs that needed the upper range of characters because they would lose the formatting if they transmitted the file electronically. But because XyWrite used the bracketed commands, which used common characters that fit into 7 bits, it was possible to transmit both the text and the formatting for XyWrite files. So XyWrite made its mark by being the first word processor to use bracketed commands. OK, enough stories. The real reason HTML uses the bracketed commands is much less interesting— they were already present in SGML, they were easy for people to read and write, and they were also relatively easy for a program to parse— which means to separate into its component parts. Attribute Syntax Tags can contain one main command and an unlimited number of associated values, called attributes. Each attribute has a name and a value. You must separate the attribute from the command or any preceding attribute value with white space. White space includes spaces, tabs, and carriage return/line feed characters. The browser ignores this white space (except when it doesn't exist). White space, to a browser, is another form of command typically called a delimiter. A delimiter is any character or sequence of characters that separates one item from another. Using white space as a delimiter is completely natural because that's what we use between words. Different types of delimiters mean different things. For example, in addition to using white space between words, we also use periods between sentences. In HTML, angle brackets separate tags, white space separates attributes, and an equals sign separates the name of an attribute from its value. Similarly, HTML uses quotes to delimit the value because an attribute value might contain any of the other delimiters: white space, equals signs, or angle brackets. Here are some examples: <font face="Arial" size=12> The <font> tag has two attributes— face and size, each of which has a value. Not all values are that simple. Consider this tag: <input type="hidden" name="txtPara" value="He was a codeslinger, lean and nervous, with quick hands that could type or shoot with equal accuracy."> Once again, not all browsers require the quotes around every attribute value; and once again, even though they aren't required, you should school yourself to enter them every time. I can assure you that failing to enter the quotes around attribute values will cause problems for you at some point in your .NET programming career. Here are two versions of an HTML tag, one with and one without quotes: <input type="text" value="This is my value."> <input type=text value=This is my value> In a browser, these tags show up as text input controls— the equivalent of the familiar single-line text boxes from Windows applications. The first input control will contain the text "This is my value." But the second version will contain only the word "This." That's because, without the quotes, the browser has to fall back on the next delimiter to mark the end of the attribute value. In this case, the next delimiter is a space. The browser then ignores the next three words in the sentence, is, my, and value, because they aren't recognizable keywords and aren't properly formed attributes either— they don't have an equals sign or a value. You may use either single or double quotes to delimit the attribute value; in other words, both of the following are equally valid: <script language='VBScript'> <script language="VBScript"> You can embed quotes in a value three ways: n Switch the outer enclosing quotes to the opposite type; for example, value="Mary's socks", or value='The word is "important"'. n Enter each inner quote twice: 'value=Bill''s cat'. n Use an entity— characters that substitute for characters that are otherwise not allowed. There are some special entities; for example, the entity for a quote character is the six characters ". But you can display any character— including Unicode characters— in most browsers using an entity that consists of an ampersand followed by a number sign (&#), the decimal value of the character you want to display, and a trailing semicolon. You can use hexadecimal values instead by placing an x after the number sign. So the entity value for a single-quote character (ASCII 39) is ', or using a hex value, '. Therefore, yet another way to embed a single quote is "value='Bill's cat'" or using a hexadecimal value, "value='Bill's cat'". More HTML Syntax Attribute values have the most involved syntax. The other syntax rules for HTML are straightforward. White Space Is Optional Unless you specifically include tags to force the browser to include the white space, the browser will ignore it. The sentences "Welcome to Visual C# and ASP.Net!" and "Welcome to Visual C# and ASP.NET!" both print exactly the same way on-screen when rendered by a browser. Case Is Irrelevant HTML parsers ignore case, so you can write tags in either uppercase (<FONT>) or lowercase (<font>). Having said that, you should try to be consistent (yes, case is relevant in XML). There are two advantages to using lowercase. First, the W3C standardized lowercase tag commands for an XML-compatible version of HTML, called XHTML. Second, lowercase requires fewer keystrokes. Compatibility aside, choose either uppercase or lowercase for tags and practice writing them consistently. I typically write tags in lowercase, but I admit I'm not completely consistent about case. The Order of Tags Is Important An enclosing tag must completely enclose any inner tags. For example, <font size=12><b>This is bold</font></b> is an invalid HTML syntax because you must close the bold <b> tag before the <font> tag. The proper way to write the tags is <font size=12><b>This is bold</b></font> . These simple rules will help you write perfect HTML every time. Write the Ending Tag when You Write the Beginning Tag For example, don't write <html> then expect to remember to type the end </html> tag later. Write them both at the same time, then insert the content between the tags. Write Tags in Lowercase They're easier to type. Use Templates Templates are prewritten files into which you place content. Templates save a lot of time because they already contain the required tags. Indent Enclosed Tags Set the tab or indent levels in your editor to a small value— I find three spaces works well. That's enough to make the indents readily apparent, but not so much that longer lines scroll off the page. Use Comments Liberally A comment, in HTML, is text enclosed in a tag that begins with a left angle bracket, includes an exclamation point and two dashes, and ends with two dashes and a right angle bracket: <!--This is a comment-->. Comments help you understand the content and layout of a file. They can also help separate sections visually. Browsers don't render comments, so you can use them whenever you like. Creating a Simple Page You should usually start a new file with an HTML template. The most basic HTML template contains only the required tags. You fill in the content as needed. Type the following listing into your HTML editor, then save it as template.html. <html> <head> <title><!-- Title --></title> </head> <body> <!-- Your content here --> </body> </html> You'll use that file a great deal. If you're using a dedicated HTML editor, it probably loaded a similar file as soon as you selected New from the File menu. Add a title between the title tags. Replace the comment <!-- Title --> with the title "HTML Is Easy." Move past the first <body> tag and add your content in place of the comment <!-- Your content here -->. The completed file should look similar to Listing 2.1. Listing 2.1: HTML Is Easy (ch2-1.htm) <html> <head> <title>HTML Is Easy</title> </head> <body> <h1 align="center">HTML Is Easy</h1> <p>Although HTML has about 100 different tags, you'll quickly find that you use only a few of them. The most useful tags are the paragraph tag--the tag that encloses this paragraph; the <b>bold</b> tag; the <i>italics</i> tag (most commonly seen in Microsoft products as the <strong> strong</strong> tag and the <em>emphasis</em> tag; the heading tags; and the most useful of all--the table tags, used to produce formatted tables, both with and without borders.</p> <!--<p> </p>--> <table align="center" border="1" width="50%"> <thead> <tr> <th align="center">Product</th> <th align="center">Price</th> </tr> <tr> <td align="left">Cap</td> <td align="right">$14.50</td> </tr> <tr> <td align="left">Boots</td> <td align="right">$49.99</td> </tr> </table> </body> </html> After you have entered the listing, save it as a file, and then view it in your browser. To do that, type file://<drive><path><filename> where drive is the drive letter where you saved the file, path is the directory or hierarchy of directories, and filename is the actual name of the file. In your browser, the page should look similar to Figure 2.2. Figure 2.2: HTML is easy. (ch2-1.htm) When you view Listing 2.1 in a browser, you should notice several features. The title appears in the title bar of the browser window, not in the body of the document. That's because the title isn't properly part of the document at all— it's part of the header, which is why the <title> tag is inside the <head> tag. If you entered the text exactly as it's written, you should notice that the line breaks in the listing and the line breaks in the browser are different. Each line you entered (although you can't see it) ends with two characters called a return and a line feed (ASCII characters 13 and 10). The browser treats these as white space and ignores them. If you aren't willing to let the browser break lines for you, you'll need to break the lines explicitly yourself, using a <br> or break tag. Note The <br> tag is one of several exceptions to the rule that you must always enter an end tag. The end break tag </br> is not required (although you can enter it if you like). However, even though your pages work fine without the end tags, get in the habit of writing them so that your pages will be XHTML and XML compatible. Another interesting feature is that the line breaks are relative to the area of the screen into which the browser renders content, called the client area of the window. Resize your browser window and watch what happens to the text. As you change the width of the browser window, the browser rerenders the text, changing the line breaks so that the text still fits inside the window— the text listing just gets longer. Note What font did your browser render the paragraph text in? A serif font, such as Times New Roman, or a non-serif font such as Arial? What's the point size? As an HTML page designer, you should bear in mind that the default font face and the default point size are determined by the user through browser preference settings, not by your document. Both the default font face and the default size are user selectable in both Netscape and Internet Explorer (IE). If you want the text to appear in a specific face or size, you must include the appropriate font tags or (better) CSS styles. Even then, the results depend on exactly what the end-users have on their computers. Next, look at the <h1> tag. It has an attribute called align="center", which forces the browser to display the content in the center of the page. There's another way to align content on the page. You could just as easily have written the following: <center> <h1>HTML Is Easy</h1> </center> In the browser, that construction would look identical. You would still see that type of syntax, although most HTML editors align each element separately. The <center></center> syntax is most useful when you want to force the alignment of many consecutive elements. In HTML 4.0, the <center> tag is deprecated but has been marked as shorthand for <div align="center">. Again, you should use the newer syntax except when required for older browsers.
https://ar.b-ok.org/book/488959/37abcd
CC-MAIN-2020-24
refinedweb
16,684
61.46
One, based. > Anytime I put an assertion into my code, it’s a tacit acknowledgment that I don’t have complete trust that the property being asserted actually holds. That is not what assert() means to me. When I write assert(), it means that I, the writer, do absolutely believe that the condition is always true and that you, the reader, ought to believe it as well. If I have doubts about the truth of the condition, I’ll write something like: if( NEVER(condition) ){ … remedial action… } The NEVER() macro is so defined as to be a pass-through for release builds, but calls abort() for debug builds if the condition is true. An assert() is an executable comment. The fact that it is executable gives it more weight because it is self-verifying. When I see a comment like “/* param is never NULL */” I am much less likely to believe that comment than when I see “assert( param!=0 )”. My level of trust in the assert() depends on how well-tested is the software, but is almost always greater than my level of trust in the comment. Used in this way, assert() is a very powerful documentation mechanism that helps to keep complex software maintainable over the long term. That is not how I view the use of assert(). Basically, if the condition is not true, stop before irreparable damage occurs or the system cannot recover in any meaningful way. The idea of an ‘executable comment’ doesn’t make sense to me. Any code, not used for the direct purpose of the system, is just another failure point. Hi Richard, if the assertion is guaranteed to be true, I guess I don’t understand why you would bother making it executable, instead of just leaving a comment? In many cases an English comment would be a lot more readable. I agree more with Pat: an assertion is not only documentation but also part of a defense-in-depth strategy against things going wrong, whether it is a logic error, some sort of unrepeatable state corruption, a compiler bug, or whatever. Of course I agree that an assertion is never used for detecting things that can legitimately go wrong in sane executions of the program. assert is not defense in depth because it is compiled out of release builds which will encounter malicious input. It is a way to write pre- and post-conditions or even contracts that are used to debug software to catch/pinpoint/isolate errors as soon as possible instead of letting bad data affect state somewhere down the line and then not know how we got the incorrect/invalid state in the first place. > Any code, not used for the direct purpose of the system, is just another failure point. Agreed and for this reason I use the (non-default) behavior of disabling assert() for release builds. (Mostly. In applications where assert()s are less frequent, where the application is less well-tested, and where the assert()s do not impose a performance penalty I will sometimes be lazy and leave them in releases.) You obviously cannot achieve 100% MC/DC if you have assert() enabled in your code. An assert() is a statement of an invariant. The more invariants you know about a block of code or subroutine, the better you are able to reason about that block or subroutine. In a large and complex system, it is impractical to keep the state of the entire system in mind at all times. You are much more productive to break the system down into manageable pieces – pieces small enough to construct informal proofs of correctness Assert() helps with this by constraining the internal interfaces. Assert() is also useful for constraining internal interfaces such that future changes do not cause subtle and difficult-to-detect errors and/or vulnerabilities. You can also state invariants in comments, and for clarity you probably should. But I typically do not trust comments that are not backed up by assert()s as the comments can be and often are incorrect. If I see an assert() then I know both the programmer’s intent and also that the intent was fulfilled (assuming the code is well-tested). I was starting to write out some examples to illustrate my use of assert(), which I previously thought to be the commonly-held view. But it seems like the comment section for EIA is not the right forum for such details. I think I need to write up a separate post on my own site. I have a note to do that, and will add a follow-up here if and when I achieve that goal. That some of the best practitioners in the field do not view assert() as I do is rather alarming to me. I have previously given no mind to golang since, while initially reviewing the documentation, I came across the statement that they explicitly disallow assert() in that language. “What kind of crazy, misguided nonsense is this…” I thought, and read no further, dismissing golang as unsuitable for serious development work. But if you look at assert() as a safety-rope and not a statement of eternal truth, then I can kind of see the golang developers’ point of view. So I have the action to try to write up my view of assert() (which I unabashedly claim to be the “correct” view :-)) complete with lots of examples, and provide a link as a follow-up. Surely we are all in agreement that assert() should never be used to validate an external input. > Anytime I put an assertion into my code, it’s a tacit acknowledgment that I don’t have complete trust that the property being asserted actually holds. I wouldn’t phrase it this way, because it looks like an invitation to write less assert() in an attempt to fake the certainty of a condition. In my view, a condition is what’s required for next piece of code to work properly. If it is expected to be always true, that’s exactly the right moment to write an assert(). Otherwise, this is the domain of error trapping and management. The assert is one of the best “self-documented piece of code” I can think of. It can, and should, reduce the need for some code comments. A real code comment can come on top of that, to give context, on why this condition is supposed to be true, and why it is required, when that’s useful. But in many case, it’s not even necessary. assert(ptr!=NULL); is often clear enough. assert() become vital as software size and age grows, with new generation of programmers giving their share to the code base. An assert might be considered a “superfluous statement of truth” for a single programmer working on its own small code base. After all, it might be completely obvious that this ptr is necessarily !=NULL. But in a different space and time, another programmer (which might be the same one, just a few years older) will come and modify _another_ part of the code, breaking this condition inadvertently. Without an assert, this can degenerate into a monstrous debug scenario, as the condition might trigger non-trivial side effects, sometimes invisible long after being triggered. The assert() will catch this change of condition much sooner, and help solve the situation much quicker, helping velocity. Which means, I feel on Richard’s side on assert() usage. I would usually just keep it for me and not add to the noise, but somehow, I felt the urge to state it. I guess I believe it’s important. I agree with the defense-in-depth view of asserts. For this reason they are not compiled out in my release builds and I have a test to ensure that this stays true. Compiling them out would mean test/debug and release builds having different semantics, which I don’t find acceptable. You can imagine what I think about the semantics of Python’s assert! From an industry perspective on “how other CS instructors are dealing with these issues” (of understanding when and how to deploy particular defensive measures), internal training courses I co-developed try to situate detailed coding advice in a broader security engineering context. It includes threat modeling as a general practice, with a specific set of advice for our lines of business. We also frame security efforts as part of normal engineering work, so subject to the same kinds of tradeoffs practitioners make all the time, and approachable using the same families of tools (automated testing, code review, automated deployment and rollback, usability testing, defect tracking, etc.) that are already established for regular work. Of course there are things which are more specialized, and the adversarial context is important for the way we think about security problems, but this is setting the tone. In structuring the material in this way, we hope to make it more generally valuable than a more rigid list of recommendations. The additional engineering context is meant to make the advice more applicable to novel situations, and to prompt people to think about security engineering as something where they have a voice – not just receiving directives. Specifically to defensive measures, the material includes prompts for people to think more like an attacker. This starts with the threat modeling, where people are asked to go through the systems they work with at an architectural level, figure out avenues of attack, assess the consequences of attacks, and consider various countermeasures. These sessions often bring out findings relating to trust boundaries in the way you describe. At a more advanced or specialized level we have course offerings which involve hands-on penetration testing, which is a really useful way to foster an understanding of the kinds of attacks which are possible, as well as exploring non-obvious parts of the attack surface. Therefore, at the code level, some of the discussion above about assertions ends up being conditioned (haha) on how the code in question is being developed and run, and what it does. There are certainly specific practices and gotchas around assert() in particular which are good to know. But people should also be able to negotiate through the context of what the assert is trying to prevent; how likely it is; how bad it would be if it happened; what combination of language features, unit tests, integration tests, smoke tests, etc., are appropriate to use; what we expect to happen in the abort case (is something logged? is there an alert? who gets the alert and what are they meant to do? what does the normal user see while this is happening?); and so on. Richard, I look forward to your writeup! I do indeed hope that assertions represent eternal truths, but I also have seen that hope dashed too many times by factors beyond my (previous) understanding. Yann, I suspect we largely agree (as I think Richard and I mostly do). The distinctions here are subtle ones. Alex, thank you for returning us to the topic that I had hoped to discuss :). The internal courses you describe sound really interesting, I would love to learn more. The teaching scenario you describe, where we strongly encourage people to think both like an attacker and a defender, is a really nice way of putting it. That is what I also try to do. I’m teaching a Security (which, this being me, means some Anderson readings + lotsa static and dynamic analysis for “good” and “evil”) class for the first time. I think an important distinction to get across is whether mistrust is due to: – 1. error only or – 2. an adversary In the first case, probability may help you out; in the second, it does not matter, an adversary drives P(trust violated) to 1. Mistaking the relatively rare case 1 for case 2 is a concept to drive home. I learned about “assert” as a beginning programmer from Kernighan and Plauger’s books “The Elements of Programming Style” and “Software Tools” (not sure which, or both) back in the day. Obviously nobody is infallible including them, but they advocated asserts as “defense in depth” and leaving them enabled in release builds, using the comparison that disabling them in release builds was like wearing a parachute on the ground but taking it off once your plane was in the air. They said their asserts tripped many times during development and kept them from going too far wrong. That said, I don’t understand how C has survived into the present day, when we could have so much more sanity and static safety. I haven’t tried Rust yet but I’ve played with Ada a little, and it seems preferable to C in a lot of ways. > If you fail to recognize and properly fortify an > important trust boundary, it is very likely that > someone else will recognize it and then exploit it. I think I disagree with “very,” and my uncertainty is part of the problem. In my extremely limited pre-Snowden experience using static analysis for security concerns, a glaring gap was the one between a bug and an externally-exploitable vulnerability. We didn’t have a good way to rank the bugs, and Snowden’s leaks suggest that we needed to worry about the many not-very-likely ones as well as the few very-likely exploitables. (I’m taking it for granted that “Fix All The Bugs” is a slogan rather than a plan.) Perhaps one of the most important trust boundaries is between code and data. We used to think that commingling code and data was good. Early computers (I’m thinking of the IBM 701) had no index instructions and so array operations had to be accomplished using self-altering code, and everybody thought that was super-cool because it reduced the number of vacuum tubes. In the 70s and 80s everybody was raging about how great Lisp was, because it made no distinction between code and data. Javascript has eval() because less than 20 years ago everybody thought that was a great idea, though now we know better and hence disable eval() using CSP. I spend a lot of time doing SQL. An SQL statement is code – it is a miniature program that gets “compiled” by the query planner and then run to generate the answer. But many SQL statements are constructed at run-time using application data. This commingling of code and data often results in SQL injection attacks, which are still one of the leading exploits on modern systems. The mixing of code and data is a exceedingly powerful idea. It forms the essential core of Godel’s incompleteness theorem, to name but one prominent example. It is a seductive and elegant idea that can easily lure in unwary. So perhaps the trust boundary between code and data should be given special emphasis when talking about security? Thanks for the comments, folks, this is all good material! Efficiency and safety could both be improved by having a means of telling the compiler that execution must be trapped and aborted if a condition doesn’t hold when this point is reached, but execution may at the compiler’s convenience be aborted at almost any time the compiler can determine that it will inevitably reach some kind of an abort-and-trap. Granting such freedom to a compiler could greatly reduce the run-time cost associated with such assertions by not only allowing them to be hoisted out of loops, but also by allowing a compiler that has generated code to trap if an assertion is violated to then exploit the fact that the assertion holds within succeeding code. For example, given: for (int i=0; i<size; i++) { action_which_cannot_normally_exit(); __EAGER_ASSERT(i = b && (a-b) < 1000000); return (a-b)*123 + c; } a compiler could at its leisure either perform all of the computations using 64-bit values or trap if the parameters are out of range and then perform the multiply using 32-bit values. Precise trapping for error conditions is expensive, but allowing traps to be imprecise can make them much cheaper. A language which automatically traps dangerous conditions imprecisely may be able to generate more efficient code than one which requires that all dangerous conditions be checked manually.
https://blog.regehr.org/archives/1576
CC-MAIN-2019-18
refinedweb
2,733
58.42
80985/how-to-expire-session-due-to-inactivity-in-django Hello @kartik, Expire the session on browser close with the SESSION_EXPIRE_AT_BROWSER_CLOSE setting. Then set a timestamp in the session on every request like so. request.session['last_activity'] = datetime.now() and add a middleware to detect if the session is expired. something like this should handle the whole process... from datetime import datetime from django.http import HttpResponseRedirect class SessionExpiredMiddleware: def process_request(request): last_activity = request.session['last_activity'] now = datetime.now() if (now - last_activity).minutes > 10: # Do logout / expire session # and then... return HttpResponseRedirect("LOGIN_PAGE_URL") if not request.is_ajax(): # don't set this for ajax requests or else your # expired session checks will keep the session from # expiring :) request.session['last_activity'] = now Then you just have to make some urls and views to return relevant data to the ajax calls regarding the session expiry. when the user opts to "renew" the session, so to speak, all you have to do is set requeset.session['last_activity'] to the current time again Hope this helps!! Thank you!! You should use django-pjax which is built exactly for ...READ MORE This actually gives you the property names ...READ MORE In case someone googles here searching for ...READ MORE If you need to use something similar .. do this via Ajax. ...READ MORE Hii, Django does not support free group by ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/80985/how-to-expire-session-due-to-inactivity-in-django
CC-MAIN-2022-05
refinedweb
250
52.26
- Which marketing medium (TV, radio, print, online ads) returns maximum return (ROI)? - How much to spend on marketing activities to increase sales by some percent (15%)? - Predict sales in future from investment spent on marketing activities - Identifying Key drivers of sales (including marketing mediums, price, competition, weather and macro-economic factors) - How to optimize marketing spend? - Is online marketing medium better than offline? Imagine you are a chief strategist and your role requires you to come up with strategies that can boost company's revenue growth and profitability. With the use of insights from marketing mix model, you know sales can be increased by 50 million dollars for every 5 million dollars you spend on advertising. MMM would also help you to determine how much to spend on each advertising medium to get maximum return on investment (ROI). Types of Marketing MediumsLet's break it into two parts - offline and online. - Print Media : Newspaper, Magazine - TV - Radio - Out-of-home (OOH) Advertising like Billboards, ads in public places. - Direct Mail like catalogs, letters - Telemarketing - Below The Line Promotions like free product samples or vouchers - Sponsorship - Search Engine Marketing like Content Marketing, Backlink building etc. - Pay per Click, Pay per Impression - Social Media Marketing (Facebook, YouTube, Instagram, LinkedIn Ads) - Affiliate Marketing Data Required for Marketing Mix ModelingTo build robust and accurate MMM model, we need data of different types. See the details below. Product Datarefers to product and sub-product information along with price and no of units sold / unsold. It is required to understand various aspects related to product - Whether it is a new product, which product category the product falls into? Is it the largest selling product? At what rate sales of this product is growing? What's the price for each product in the category? Promotion Dataincludes details about days when promotion or offers were active and the offer type like free delivery, cash back etc. Advertising DataIt is used to measure how effective advertisement is running. In TV Advertising, Gross Rating Point (GRP) is a method which measures total audience exposure to advertisement. You can also include other traditional methods such as Newspapers and Magazines and Radio. You can include SPENDS which is dollar value spent on these methods. Digital marketing is popular these days and a large percentage of population can access internet now which opens up a whole new world for advertising. We can use the variables like Digital Spend, Affiliation Marketing Spend, Content Marketing Spend, Sponsorship Spend and Social Media Spend to capture online media investment information. SeasonalityIncrease in sales due to seasonality can make calculation of identifying top drivers of sales biased. We would not be able to assess the impact of promotion on sales. For example, sales in woolen jackets will be higher in winter season than summer. We also need to incorporate holidays and major events planned in a particular month. For example, sales during Christmas season is generally high than average sales. Geographical DataIt refers to location of retail store. It can be city, state or postal code of store. Macroeconomic DataSales can also be affected by macro economic factors like inflation, unemployment rate, GDP etc. Companies generally report negative growth rate in sales during recession. We need to incorporate these factors in our model so that it understands recession and cyclical effects. SalesIt is not possible to build MMM without sales variable. Sales can be in volume in units as well as revenue($) - Sales, product and promotion data are generally stored internally within company's relational databases management system. - Advertising data is either managed by internal marketing team or through external marketing agency. - Macroeconomic data can be extracted through websites like World Bank, IMF and Economagic. Data Preparation The level of data can be decided depending on the implementation of promotion campaign. Ideally data should NOT be prepared at daily level as daily sales data generally has too much variation which leads to poor accuracy. We generally use weekly time period and aggregate data at weekly level as promotions are live from Monday to Sunday.Another question : Do we have data at product level which means product information in each invoice? See the example below. If you observe the above table, you would see transaction data at daily level. We can transform it to weekly level without losing any required information. Transformed data contains number of distinct customers, no of transactions, total sales of different products in each week.If you observe the above table, you would see transaction data at daily level. We can transform it to weekly level without losing any required information. Transformed data contains number of distinct customers, no of transactions, total sales of different products in each week. +----------------+------------------+-------------+--------------+-------+ | Transaction ID | Transaction Date | Customer ID | Product Code | Sales | +----------------+------------------+-------------+--------------+-------+ | 1 | 24-09-2019 | 12 | AAA | 824 | | 1 | 24-09-2019 | 12 | AAB | 809 | +----------------+------------------+-------------+--------------+-------+ +------+--------------------+--------------+-----------+-----------+ | Week | Distinct Customers | Transactions | Sales AAA | Sales AAB | +------+--------------------+--------------+-----------+-----------+ Data Exploration and TransformationBefore building a model, we as analysts need to perform various quality checks. It's a crucial stage of building a model. If your data is not prepared and handled correctly, model development would be of less use. Accuracy and robustness of model can also be deteriorated badly if you compromise data cleaning and transformation. Importing DataIf you have data in CSV or Excel format, you need to load data into R/Python. After importing data, you need to ensure whole data have been loaded correctly by checking the number of rows and columns. Length of character variables can be truncated while importing. Missing ValuesIt is important to check the number of missing values in each variable. Many statistical algorithms are not immune to missing values. They simply remove the rows when missing values exist. It can cause a loss of a good amount of data depending on percentage of missing values. Sometimes missing values are specially coded as '999'. We also need to understand the reason behind missing data. Is it a genuine missing because it is not applicable to some customers? Or customers or respondents did not provide the information? Missing values can be treated by replacing them with mean, median or mode value. We can also remove missing values if it constitutes a very small percentage of data. Moving average is also a common technique to impute missing values when data is in time series format. Outlier Valuesare referred to as extreme values (in plain English). For example, customer having 150 years age is an outlier. Sometimes data are entered incorrectly into the system. We can handle outliers by percentile capping at 99th or 95th percentile depending on the distribution. Outliers can also be removed if percentage of these kind of observations in the dataset is very small. Univariate and Bivariate Analysisare ways to perform data exploration. Univariate analysis refers to examine the distribution of a single variable. It can be done by calculating mean, median, mode, standard deviation and percentiles. Bivariate analysis refers to checking the association between two variables. For example, inspecting relationship between TV GRPs and Sales. AdstockThe concept behind Adstock is that advertising has been shown to have an effect extending several periods after you see it first time. In other words, an advertisement from a previous period may have some effect of an advertisement in the current period. It is also called advertising carryover. It's a very old concept (first used in 1980s) when TV was the main medium of advertisement. It is not restricted to TV only and can be applied to online, radio and print media as well. Let's understand through example. Suppose you are watching your favorite TV show. During commercial breaks, you see an ad of perfume brand 'X'. You would not buy this perfume immediately after the commercial break. Let's say you see the ad of the same brand 'X' a couple of times in next few days. It would increase awareness to a new level and there is a high chance that you would purchase perfume of this brand (if you need it). If you would not have seen the advertisement again after first time, you would not be able to recall the brand easily. This is the decay effect of Adstock. This decay is reduced by new advertising exposure. Ads shown on TV are generally remembered for longer than online ads At = Tt + λAt −1 + λAt −2 Here t =1,...,nWhere At is the Adstock at time t. Tt is the value of the advertising variable at time t and λ is the 'retention' or lag weight parameter and can be interpreted as percentage of effectively remembered ad contact from previous week plus contacts from current week. λ lies between 0 and 1. Let's learn via example. Adstock rate = 0.5 TV GRP in week 1 = 115 TV GRP in week 2 = 120 Adstock in week 2 = 120 + 0.5*115 = 177.5 Techniques used in Marketing Mix ModelingThe common statistics techniques used in MMM are linear and non-linear regression techniques. Multiple Linear Regression Sales = β0 + β1*X1 + β2*X2 + .... + βn*Xn Xi are independent variables (or predictors), βi can be interpreted as change in sales corresponding to unit change in predictor. Log-Linear Regression LN(Sales) = β0 + β1*X1 + β2*X2 + .... + βn*Xn LN is natural log. Xi are independent variables (or predictors), βi can be interpreted as % change in sales corresponding to unit change in predictor. Log-Log Regression LN(Sales) = β0 + β1*LN(X1) + β2*LN(X2) + .... + βn*LN(Xn) βi can be interpreted as % change in sales corresponding to 1% change in predictor. ElasticityElasticity answer this question "What marketing activities to pull or push and the corresponding impact on sales". For example a price elasticity of -1.9 means that when price is increased by 1%, sales will be reduced by 1.9 percent keeping all other factors being constant. Similarly we can calculate elasticity of TV, radio and online advertisement. When you have simple linear regression model, you can calculate elasticity using the formula below - In the case of non-linear regression models, the above defined elasticity formula needs to be tweaked according to the equation. Refer the table below. Model PerformanceModel performance of MMM can be checked like we do in any linear regression model. Common model performance metrics are R-Squared, Adjusted R-Squared, Mean Absolute Percentage Error (MAPE) etc. Marketing Mix Modeling using SAS, Python and RIn the program below, I have shown how to implement basic MMM model using SAS, R and Python. This model takes takes into account three variables which are price and exposure on two different advertising mediums (let's say TV and online). You can download dataset from this link R Code # Read CSV File df <- read.csv("Furniture.csv") adstock <- function(data, rate) { return(as.numeric(filter(x=data, filter=rate, method="recursive"))) } x1 <- adstock(df$Advt1, 0.5) x2 <- adstock(df$Advt2, 0.5) # Model mod <- lm(formula=Sales ~ x1 + x2 + Price.Furniture,data=df) summary(mod) # Price Elasticity (PE <-as.numeric(mod$coefficients["Price.Furniture"] * mean(df$Price.Furniture)/mean(df$Sales))) Python Code import pandas as pd df = pd.read_csv("C:/Users/DELL/Documents/Furniture.csv") #Adstock import numpy as np def adstock(data, rate): tt = np.empty(len(data)) tt[0] = data[0] for i in range(1, len(data)): tt[i] = data[i] + tt[i-1] * rate return tt x1 = adstock(df.Advt1, 0.5) x2 = adstock(df.Advt2, 0.5) #Linear Regression Model from sklearn.linear_model import LinearRegression x = pd.DataFrame({"x1":x1, "x2":x2, "Price": df["Price.Furniture"]}) y = df.iloc[:,0] lm = LinearRegression() lm = lm.fit(x,y) coefficients = pd.concat([pd.DataFrame(x.columns),pd.DataFrame(np.transpose(lm.coef_))], axis = 1) lm.intercept_ #Elasticity coefficients.iloc[2,1] * np.mean(x.Price) / np.mean(y) SAS Code FILENAME PROBLY TEMP; PROC HTTP URL="" METHOD="GET" OUT=PROBLY; RUN; OPTIONS VALIDVARNAME=ANY; PROC IMPORT FILE=PROBLY OUT=WORK.MMM (rename='Price.Furniture'n=Price) REPLACE DBMS=CSV; RUN; %let adstock = 0.5; data Adstock; set MMM; format x1 x2 10.2; if _N_ = 1 then do; x1 = Advt1; x2 = Advt2; retain x1 x2; end; else do; x1 = sum(Advt1, x1*&adstock.); x2 = sum(Advt2, x2*&adstock.); end; run; ods output ParameterEstimates = coefficients ; proc reg data=adstock; model sales = x1 x2 Price; run; proc sql; select estimate into: beta from coefficients where Variable = 'Price'; select mean(sales) into: meansales from MMM; select mean(Price) into: meanprice from MMM; quit; %let elasticity = %sysevalf(&beta. *(&meanprice./&meansales.)); %put &elasticity.; The above code returns elasticity value of -1.796211 which means increase in price of furniture by 1% will decrease the sales of furniture by 1.79%. Love your work. Do you have SAS code for MMM? Thanks. Ethan Yes I have added in the article. Thanks! Could you also address how to use "proc mixed" and "proc GAM" in MMM given that the lower geographic data are available? Thanks. Good question.. But, more interesting how to get positive coefficients for all marketing drivers? @Deepanshu- Nice post, very concise. I dont understand one thing though- Mix models differ from simple Linear models in that they allow both fixed and random effects. Where do you capture these in your equation. Those are mixed models - a statistical term. The post is about marketing mix models, that is modeling of how marketing mix impacts sales Excellent job done. Thanks. What a marketing touch you have put in this long detailed article! Marketing mix modeling concept and features you disclosed were really a knowledgeable and helpful read. The Introduction, use of it, big marketing world, effective chart, types, required, date preparation and transformation, used techniques and using SAS- every points was incredibly rich with high valued data and analysis. It's indeed a cool stuff. Love your super talented job and presentation. Much respect. Keep it up please. good post share it nice article This comment has been removed by the author. Hey let me say thanks for this brilliant quality concept. I just landed on this recourse world and amazed with what has been explained about MMM. You really spread huge effective data and analysis which must be able to draw anybody's prompt attention. Every item was full of authentic and realistic information and you can't imagine how much beneficial read it is! I appreciate your time and thoughts. Finally this guidelines can be really very useful for sure. This comment has been removed by a blog administrator. This comment has been removed by a blog administrator. Just would like to say that it was a fascinating read for me and perhaps one of the best experience regarding the marketing mix modeling concepts. The steps and thoughts you revealed pointless to state that were the adoring and age-worth thinking. Including the brilliant introduction all the data and clarification were proving the thoughtful and genius expression to me. This type of quality work always be able to draw the rapt attention of any audience. Hi! How do you calculate the variable contributions in log-linear and log-log models? It's easy in case of additive one. Excellent post , does any body have data needed for Marketing mix model? Hi! Fascinating article! Wonder why did you multiply the coefficient with np.mean(x.Price) / np.mean(y). I assume this way you can predict the percentage change right? Without it, it's if you decrease price by 1 USD Sales would increase by 10.0713 units. hey thank you for this post. it is really helpfull to me bu ı have question. ı am using adaboostregressior which function ı should be using for elasticity ? Thanks. So wonderful job done. Hi Deepanshu, you really gifted a analytical and resourceful content regarding marketing development issue. This was definitely a cool read and experience to have me into your long details. The mix modeling data you delivered step by step like the uses of MMM, types, required data, data sources, exploration and transformation- evey point was so effective and thoughtful. I like the way you discovered all the module. Anyway, my browsing story today was to hunt some data related to saavi ordering system an AU based leading company that performs as a wholesale business solution for the business owners to manage their orders from the customers and fortunately it also ensures mobile ordering apps that helps the customers to connect the account instantly. Hope this information will be helpful to many business guys here. Very useful information shared. Thank you. Hi, can anyone elaborate on calculating saturation curves? Hi Deepanshu, Great article for starting and having an idea of the MMM. I have a question on Media Mix Model. I have weekly level data on affiliates,banner, social media ( lets see they are called vehicles ), most of them have a granularity for example under banner we have display, video etc. Many times they are higly correlated . How do we keep variables which have high VIF or do we really keep them in Model ?Is Transformation of those a best practise ? Business every quater would look for planning and they may want to plan all those vehicles .Let me know your thoughts.
https://www.listendata.com/2019/09/marketing-mix-modeling.html
CC-MAIN-2021-31
refinedweb
2,865
57.67
Full Disclosure mailing list archives It seems as if our backdoor was found so we figured we cant sell this in the ac1db1tch3z CANVAS pack (PhosphoricAc1d Exploit pack). P.S. Since it took months and months for the community to find the system() exploit, we still have a more complicated zerday unrealircd hack module. Please inquire when our website is finished. Brought to you by Ac1dB1tch3z: still using system() like it was 1992AD, and still owning everyone with it. Thanks. ------------------------------------------------------------------------ $ stat ABunreal.py File: `ABunreal.py' Size: 830 Blocks: 8 IO Block: 4096 regular file Device: fd02h/64770d Inode: 16891994 Links: 1 Access: (0777/-rwxrwxrwx) Uid: ( 1003/ ag) Gid: ( 1010/ ag) Access: 2010-04-05 14:26:14.000000000 -0400 Modify: 2009-11-10 00:04:33.000000000 -0500 Change: 2010-04-05 14:26:59.000000000 -0400 ------------------------------------------------------------------------ #!/usr/bin/env python # Ac1db1tch3z 09 import sys import socket import struct def injectcode(host, port, command): host1 = host port1 = int(port) cmd = command print "!# () #@! Ac1db1tch3z is just Unreal # () !#%%\n" print "- Attacking %s on port %d"%(host1,port1) print "- sending command: %s"%cmd packet = "AB" +";"+ cmd + ";"+"\n" s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.connect((host1, port1)) except socket.error: print "No connection..." return 0 s.sendall(packet) blah = s.recv(5000) print blah s.close() if __name__ == "__main__": if len(sys.argv) == 1: print "Usage:", sys.argv[0], "<target host> <target port> <command>" print sys.exit(1) else: injectcode(sys.argv[1],sys.argv[2],sys.argv[3]) _______________________________________________ Full-Disclosure - We believe in it. Charter: Hosted and sponsored by Secunia - By Date By Thread
http://seclists.org/fulldisclosure/2010/Jun/303
CC-MAIN-2014-35
refinedweb
266
70.6
28 May 2009 11:25 [Source: ICIS news] LONDON (ICIS News)--The European Commission is expected to call for an extension of antidumping and anti-subsidy duties on biodiesel imports from the ?xml:namespace> “Our case is strong…there are no claims from the Many producers and organisations linked to biodiesel in the EU said they were confident the proposals would be accepted, based on the fact that the temporary tariffs were put into effect in the first place. The commission - the executive arm of the EU, which oversees trade policy - was presenting its proposal to a committee of trade experts who were expected to reach a decision towards the end of Thursday or on Friday. The decision must then be endorsed by EU ministers next month. If the proposal is accepted then ‘definitive’ or permanent tariffs would be put in place for up to five years. Temporary duties on the imports of subsidised biodiesel, known as B99, were imposed in March following a probe prompted by a complaint from EU producers. A spokesman for the European Commission declined
http://www.icis.com/Articles/2009/05/28/9219915/ec-expected-to-extend-import-tariffs-on-subsidised-us-biodiesel.html
CC-MAIN-2015-06
refinedweb
179
54.66
Do I get a msg when I resize a dialog? On 22/01/2016 at 11:01, xxxxxxxx wrote: Hey guys, so here is my question: I have a dialog with a GeUserArea wich is deisplaying an image that is rendered with RenderDocument(). But I don't want, that my image rerenders when I am changin the size of my Dialog window (wich is currently happening every time I resize my Dialog). So I wanted to ask if I get a msg when I resize my dialog to catch this msg. Hope everything is clear. greetings, neon On 22/01/2016 at 14:57, xxxxxxxx wrote: Using this method in your GeDialog class will send a message if the users changes the size of the dialog. def Message(self, msg, result) : if msg.GetId() == c4d.BFM_ADJUSTSIZE: print("Changing Dialog Dimensions") return gui.GeDialog.Message(self, msg, result) -ScottA On 23/01/2016 at 08:07, xxxxxxxx wrote: Hi ScottA, thank you for your reply, it is working ^^ grettings, neon
https://plugincafe.maxon.net/topic/9317/12428_do-i-get-a-msg-when-i-resize-a-dialog
CC-MAIN-2019-39
refinedweb
169
71.24
Various things cause an Exception and make a program bomb out in the middle. For debugging purposes it is very useful top know exactly where this happens. Xamarin Studio gives feedback, but it takes some understanding to extract the useful information. This program below, traceback/traceback.cs takes user input and is designed to bomb out with bad input in both an obvious and in a subtle way. using System; namespace IntroCS { class Traceback // allows several kinds of run-time error traceback { static void Main() { DoRemainder (); DoRemainder (); DoSTwice (); DoSTwice (); } static void DoRemainder() { int num = UIF.PromptInt("Enter integer: "); int divisor = UIF.PromptInt("Enter divisor: "); int r = Remainder (num, divisor); Console.WriteLine ("Remainder is " + r); } static int Remainder(int n, int m) { Console.WriteLine ("About to calc remainder..."); return n % m; } /// Get data, print remainder results static void DoSTwice() { string s = UIF.PromptLine("Enter string: "); ShowTwice (s); } static void ShowTwice(string s) { Console.WriteLine ("About to print string..."); Console.WriteLine ("string: " + s + "\nTwice: {0}", s+s); } } } The remainder part works fine if the user enters integers other than 0, but it will blow up if the user enters 0 for the divisor. The program was also consciously designed so that there are nested function calls: For example Main calls DoRemainder (in two places) which calls Remainder. You can run it, first entering reasonable numbers like 13 and 5. Then at the request for two more numbers try entering 3 and 0. You get an error traceback something like the following. There are several very long lines. I have added numbers to reference different ones: Unhelpfully it repeats about the same data twice, so I chopped off the part after “[ERROR]”. Let us look at this a line at a time: Remainderfunction. Main. See the ”:19” at the end of this long line: Check that the call to Remainderdid come from line 19 in DoRemainder. Main. We made the first call to DoRemainderso it had non-zero parameters and worked fine, and the problem came in the second call, from what line? 10. In summary, look at the general description of the error in the second line, and then follow the chain of functions in which the error appeared by look at the line numbers after the colons at the ends of the long “at ...” lines. In this case, we passed bad data to a function in the program source. Now let us look at what happens if we pass bad data to a C# library function: Run the program again giving non-zero integers twice, to successfully pass the DoRemainder calls. The next prompt is for a string. Enter a short word like “hi”. You should see output with hi” and then “hihi”. This looks like a silly but simple sequence showing any string twice. Not so fast. The DoSTwice function gets called again, asking for another string. Try the exact string “set {5}”. What a torrent! The traceback output is way longer. We will cut out some again, number lines, and explain: Discussion: WriteLine- certainly used, and does involve formatting. ShowTwice. Now it makes sense to look at the program line numbers: 38 - that is the more complcated call to WriteLine. ShowTwicecalled from program line 33 in DoSTwice. Main. We cannot explain the line number here (10) for sure. It is either from a bug in Xamarin or it may be that the compiler does some optimization that messes up line numbers slightly - be warned, this does happen. So to understand, we need to look at the data actually passed to Console.WriteLine in program line 38. If you used the data that I gave, s is "set {5}" and the expressions in the two string parameters evaluate so the statement is in effect Console.WriteLine("string: set {5} \nTwice: {0}", "set {5}set{5}"). If you now look back at the description of the error, it talks about a bad index, and you see that the "{5}" embedded in the format string, so it is interpreted to be looking for a parameter with index 5, and there is none. This particular example is a cautionary tale about embedding an arbitrary string in a format string. Format strings actually form an embedded language inside of C#, which is interpreted at runtime, not compile time. It turns out that there are major security issues with such embeddings in other circumstances. For example embedding unfiltered user text in SQL queries is a major source of network intrusion. This is still true after many years, though the exploit, SQL injection, is well known! Line numbers are not tremendous helpful if the error line has a very involved calculation, and you do not know what part is messed up. If you find that there is an error on such a line, it pays to split the statement up and have a number of separate assignment statements (on different lines), and then see what part triggers the error. Though we did use the error traceback for finding errors in traceback.cs, we also threw in the most basic way of tracing errors: we have several print statements that just indicate where the execution is. Can you find them? Print statements are particularly useful to put in key places when the program does not bomb out, but just produces the wrong answer. They can indicate location and current, possibly wrong values. You do want to remove the print statements before the final version! Still they can be very handy during development. They are also useful with involved statements. You can split up a complicated statement, making multiple assignments for important pieces, and print out the intermediate results. What other kind of runtime error could be forced in the traceback.cs program? The program uses UIF which in not bulletproof if you enter a response that is not planned for (unlike the UI class coming later). Cause a runtime exception running traceback.cs, triggered in a call to a UIF function. Look carefully at the error traceback, and make sure you thoroughly understand it.
http://books.cs.luc.edu/introcs-csharp/functions/traceback.html
CC-MAIN-2019-09
refinedweb
1,007
64.81
>>?" finally??? (Score:3) What the hell was the i960 then? Meatloaf? Re:finally??? (Score:4, Insightful) Re: (Score:3) Re: (Score:2) I thought the Paragon [wikipedia.org] used the i860-- a different, later chip. Re: (Score:2, Funny) Oh hell no. He'll do anything for love, but being an AMD aficiando, he won't do that. Re:finally??? (Score:5, Informative) Intel has had several RISC chips on the market at various times; the i960, the i860, even ARM designs (XScale). TFA doesn't say Intel is going to be bringing out RISC technology, though, just that it's "taking aim" at markets that are still RISC strongholds: Re: (Score:2) TFA doesn't say Intel is going to be bringing out RISC technology, though, just that it's "taking aim" at markets that are still RISC strongholds: Yeah, this is more about adding fault tolerance features than it is about anything that would qualify as RISC. ...and z/Architecture (Score:2) Actually Re: (Score:2) x86 compatible? (Score:2) That, as well as the i860 too (which was even earlier than i960, but used in the Intel Paragon supercomputer). And this new CPU - is it x86 compatible? Or are we about to see a new instruction set? Even aside from those, Intel had rights to the DEC Alpha once it made its settlement w/ DEC. That was still #1 in performance when Compaq/HP killed it. If this new CPU is going to be incompatible w/ x86, I don't think it has any more of a future than the Itanic, much less EM64. HP was out of its mind to kil, S Re: (Score:2) the actual Intel instruction set is quite inelegant, both from a programmer standpoint multi Re: (Score:2) Lack of FMA support.. Relatively starved for registers, although since it's not a load/store arch (another issue, imho) that matters less than it does in, say, ARM. There are also implementation issues (lack of a directory cache makes scalability suck), but architecturally, it's a pretty standard and slightly boring CISC. I don't quite understand all the hate it gets - it does tend to be slower than Power or z, and doesn' Re: (Score:2) Re: (Score:2) Why? When transistors were expensive, fixed-length instructions made some sense on die (although they tend to inflate system memory needs), but transistors are extraordinarily cheap today. Instruction decode is such a small part of a modern processor die, and so fast, that it makes no difference. Sure, the world would be aesthetically more appealing if the 68000 had won the microprocessor war rather than the 8086, but the performance difference at this stage of evolution would be infinitesimal. Re: (Score:2) I was always under the impression that the 68k vs 8086 architecture produced far less heat for the same throughput. If that was true then, and is still true, then current processors could be consuming less power under a different architecture and doing the same work. Given that my cell phone's ARM chip is more powerful than my old PC, and heats up far less no matter how much I gab on it might give some credence to the concept. Re: (Score:2) The thing is, times have changed, and you have to look back at the real-world issues, not just at low level "small" applications. The more complex things become, the less the CISC vs. RISC argument matters, especially when internally, CISC instructions get broken down into RISC-type instructions anyway. So, if you are doing something really complex, a well-written application done with CISC instructions won't be any better or worse than if you did the same thing under RISC. It is like the old idea that Re: (Score:2) Variable-length instructions are also kind of annoying. Annoying to some, but useful in practice: [wikipedia.org] Re: (Score:2) The SSE extensions are ugly, if you're including that in the category of x86. Why? x87 is definitely ugly, but sse? Lack of FMA support.. Like this? [wikipedia.org] Relatively starved for registers, although since it's not a load/store arch (another issue, imho) that matters less than it does in, say, ARM. x86-64 improves on this There are also implementation issues (lack of a directory cache makes scalability suck), but architecturally, it's a pretty standard and slightly boring CISC. I don't quite understand all the hate it gets - it does tend to be slower than Power or z, and doesn't scale well, but the problems are implementation problems, not architectural ones. Problem is Intel has a lot of money. So even if Power or Alpha is 'better', Intel has the money to make it better (in general) than the competition (see Apple dropping the PPC because IBM couldn't make a mobile G5, amongst other things) Re: (Score:2) Re: (Score:2) Yes, I read it, I was just pointing out it's going to be there (hopefully) Re: (Score:2) Because multiplications by a constant that's but an entry in a list having a couple of powers of two are all the rage these days. Re:Itaniums is **NOT** RISC (Score:4, Insightful) As far as x86-64 goes, isn't that mainly because AMD trotted out a 64bit processor that was backwards compatible with 32bit programs and whomped Intel's 64bit processors which required specially compiled programs to work with? Re: (Score:3) Re: (Score:2) They STILL refuse to call it AMD64, which is what AMD calls the architecture AMD called it x86-64. People called it AMD64 because IA64 was used for Itanium. AMD64 is misleading, since x86-64 is a relatively small set of tweaks to x86, yet it gives all of the credit (or, perhaps, blame) to AMD. Calling it x86-64 is vendor neutral and descriptive. Re: (Score:2) x87 is definitely ugly x86 with it's stack based apparoach is certainly ugly. But (and here's a big but) it works internally at 80 bit for free which was fantastic. With careful coding on could write very effective and accurate single precision floating point code (or get better precision with doubles) essentially at no cost. It also supported loading and saving to memory of long-doubles so one could have hardware assisted super precision floating point numbers if needed. That was all very nice, but required Re: (Score:2) One might argue that the whole concept of (general) registers is an ugly hack to get around limited or nonexistent cache controllers in old processors. It certainly isn't "elegant" by any stretch of imagination to divide general storage into two separate namespaces, and it also wastes memory with what are basically explicit cache control commands (load/store). Also, d Re: (Score:2) There are new processors without cache too. RISC isn't just for high end systems. Most of the lower power chips for embedded market are RISC based, and this includes a wide variety of ARM CPUs. Even when you do have a cache you are often at the range of power where you don't want a very complicated instruction decoder because you're not building a top of the line PC. The point of RISC is to keep the entire machine design simple and straight forward and uniform, not just instruction decoding; the more sp Re: (Score:2) One might argue that the whole concept of (general) registers is an ugly hack to get around limited or nonexistent cache controllers in old processors. One might, but one could also argue that they're just another part of the memory heirachy (registers/L1/L2/L3/RAM/disk/stone-tablets). Registers usually require fewer cycles to access than even L1 cache, and can also do several more fast things, like parallel access of the two operands of some opcode and read-modify-write in a single cycle. Of course, many p Why we hate x86 (Score:4, Insightful) multiply is an almost unforgivable sin, but overall, I rather like it. What are the things you would like to see changed? We need specifics to have an interesting discussion. :) Limited number of registers Instructions that require certain registers or a certain subset of the registers No three register operations. This impacts pipelining because it is not possible not overwrite one of the source registers. Variable instruction length makes decode a headache Lots of really bad stuff that isn't used much by modern code by still must be maintained for compatiblity: segments, 286 protection, IO instructions, etc. I've wondered sometime what attitudes would be if a more likable contemporary instruction set had won. VAX and 68000, for instance, are much more palatable to program but they have performance flaws that are probably worse than x86. Re: (Score:2) X86-64, with register renaming 16 is more than enough. AMD did a lot of research before settling on 16, more added significantly to complexity but on increased average program executing speed by low single digit percentages. Variable instruction length makes decode a headache Meh, who cares, the whole decoder stage is a couple percent of the non-cache transistor budget. It mattered more back in the PPro era when it was a significant amount of the budget but today it's peanuts Re: (Score:2) Meh, who cares, the whole decoder stage is a couple percent of the non-cache transistor budget. On the high end, the processor has a massive slew of very fast FPU and integer execution units, and a whole bunch of hardware dedicated to getting the absolute best use out of them possible (the out of order unit). The compute hardware tends to be very well utilised and the flops per Watt are actually rather good for a general purpose CPU. In that case, the decoder has little effect. On the low end, it is a very d Re: (Score:2) AMD did a lot of research before settling on 16, more added significantly to complexity but on increased average program executing speed by low single digit percentages. This is not constant, it depends a lot on the language. For a more dynamic language, like Lisp or JavaScript, more registers give you a significant benefit. For C, 16 is usually more than enough. Re: (Score:2) I wonder about this one. Adding 3 register instruction support also means adding an additional set of read ports to the register file. Is it better to execute more instructions in parallel at a higher clock rate or have 3 register instructions? Re: (Score:2) I manage a high-performance library that contains, among others, a SIMD abstraction layer, not unlike Framewave or Accelerate (but better, of course ;)) The SSE/AVX variants are clearly the most annoying to support, and are not really orthogonal at all. The PowerPC and NEON variants have much more straightforward implementations. Re:Itaniums is **NOT** RISC (Score:5, Informative) The x86 architecture is horribly unorthogonal. Each register in the basic set has it's own special purpose which are required by some instruction or other, thus no register is general purpose. The instruction set is clearly CISC with variable instruction size, multiple ways to do the same operation, etc. So many instructions operate directly on memory instead of being a load-store architecture with a lot of registers. It was designed to not take up a lot of program space as opposed to being efficient to decode and execute. It's really not that elegant compared to even other CISC chips of it's era (68000 for example). Ie, you've got the EAX "accumulator", EBX base register, ECX counter register, EDX for division, SI source index, DI destination index, etc. The closest to a general purpose data register is EAX, and EBX is sort of like a general purpose address register, but there aren't any pure general purpose registers that can be used for anything. And so your programs tend to spend a lot of time shuffling stuff into the register that's needed or using a memory location directly as an operand. But that make sense since the x86 instruction set was more an evolution than a design. Start with 4004 (first microprocessor), go to 4040, 8008, 8080, 8085, then finally 8086. Along the way every new CPU was vaguely compatible (either very similar instructions, or you could write a program to convert existing code to the new CPU). Along that evolution the instruction set grew. It was important in the 8080 era to save program space since RAM was expensive. Without a cache it meant that instruction fetching was just as expensive as fetching a memory operand. The more complex instruction sets meant that most CPUs along this line were microcoded, but the performance hit from that wasn't so big since most of these early chips weren't meant to be speed demons but were for low cost designs (low cost relative to the big computers anyway). Microcode meant you could add a new instruction easily without a lot of design overhead. The snag is that along the way RAM got cheaper and the need for performance become the key feature. But Intel adapted because in the Pentium and later these chips really are RISC under the hood. They convert the x86 instructions on the fly into a something that's a step up from microcode which are much more suitable for a pipelined or superscalar architecture. So basically everyone uses RISC these days, it would be foolish not to. But Intel is a prisoner of it's own design. It can't change the instruction set without breaking compatibility. Every time it has a better architecture it's a flop because that's not PC compatible and they're competing with others for the same product space. Re: (Score:2) I'd like to add to your comment that the x86 front end, although hideously ugly compared to say, the 68k mentioned above, acts basically as an instruction compression engine. So you have all the advantages of dense CISC-y instructions with a powerful RISC engine under the hood. Memory is still expensive and very small --> is a cache huge? is it cheap? No, and no. CISC-style instructions pack more easily into those tiny spaces, making cache misses less often and less expensive. RISC didn't win. CISC di Re: (Score:2) Actually that's 4G address space in the original 68000. The address registers were fully populated with 32 bits with the very first 68k. Only 24 address lines were actually connected (er, 23, was something odd with the odd addresses if I recall correctly), or 20 address lines in the 68008. Motorola (and Commodore, but NOT Apple) documentation said not to use the upper 8 bits of the address registers as they would one day be connected to address lines. Lo and behold, the 68020 came out, and it had a full 32 VLIW != RISC (Score:2) Itanium is not RISC in any sense of the word. It's pretty much the exact opposite of RISC - instead of using small, simple operations, it uses massive, complex instructions, often ones that produce multiple effects (most words produce three logical instructions). (Note for the acronym-deficient: RISC == "Reduced Instruction Set Computing", VLIW == "Very Long Instruction Word") Re: (Score:2) a VLIW does multiple instructions in parallel, but each of these are usually pretty small and simple. Intel not going after RISC? (Score:3) Ehhh? The summary seems a little cockeyed. Does anyone on /. really believe this is the first time Intel is using "the R-word'? Intel has been positioning its chips against RISC for ages. Yes, in the past it was using Itanium as its "high end" chip, because it was more directly competitive with IBM's and Sun's offerings (and it probably had bigger margins). But here's an article from 2004 [infoworld.com] which claims "Intel markets the [Itanium] chip as a replacement for RISC processors from companies like Sun and IBM" -- pretty much exactly what the summary is claiming is "a first" here. If anything, Intel has chosen not to throw around a lot of rhetoric about x86/x64 as a replacement for RISC servers out of deference to its partners. Back in 2007, you will recall, Sun started marketing x86 servers [infoworld.com] in addition to its RISC product line. How would it look if Intel went around claiming x86 was a replacement for Sparc servers? Intel left it to Sun's marketing to clarify where it saw its x86-based products in comparison to Sparc. Similarly, around the same time HP was putting out x86 and Itanium servers -- Intel wasn't going to muddy the waters there, certainly. On the other hand, Red Hat and Dell would certainly talk about Linux servers (read: x86) as replacements for proprietary Unix servers (read: RISC). So it's certainly not like this is the first time anyone floated the idea, and it's certainly not like Intel has backed off from competing with RISC at any point in the past, no matter which component gets positioned against RISC chips. RISC was born as RITC (Score:2) "RISC" and "freedom" are two of the most bent out of shape words in the computer science lexicon. When RMS designed "freedom" a new API, he fired off a scripting command to his global botnet s/freedom/free_as_in_beer/gggggggggggg/! but he missed the last "g" and it's been confusion ever since. RISC actually meant Reduced Implementation Team Computing. In practice it meant "this is very cool, but we are way behind the big boys, but maybe we can catch up through a policy of extreme simplification clothed in Hmmm... (Score:3) Are most of the Big Serious Iron RISC/*NIXes available from only a single vendor, often one with rather predatory pricing philosophies? Yeah, arguably so. However, x86-with-Serious-RISC-level-RAS-features isn't exactly a vibrant competitive market... It's pretty much Intel and, um, *crickets*... The low end of x86 actually has a number of weirdo 3rd parties, in addition to the big two, the middle of the market is a duopoly, but a pretty feisty one; but x86 high enough to compete with the classical serious RISC stuff on its own ground(as opposed to on the grounds of architectural changes that favor big clusters of expendable servers) is basically a single-shop thing. AMD has some pretty decent x86 servers; but Intel is the one bringing the itanium RAS stuff down to their Xeons. Arguably, the lower end of RISC is substantially more competitive than that of x86: there are some huge number of ARM licencees, a whole bunch of random MIPS stuff floating around, and so forth. Only the middle-performance area, which is an effective duopoly(VIA? right...), but a pretty cutthroat one, where most people find their price/performance sweet spot, really makes x86 look like a competitive market at all... Re: (Score:2) Who in this day and age has predatory pricing? Re: (Score:2) IBM... IBM.... Oh and IBM.... Too bad that their top-end equipment is rather nice.... Re: (Score:2) Re: (Score:2) I'd define that more as malign parasite pricing. IBM is happy to price it high enough to make you feel it in your budget, but not high enough to negate the value of their products to your business. Pay no attention to the man behind the curtain (Score:5, Informative) Remember all those slow, complex, cumbersome instructions from the 80x86, they're still around, just moved to microcode while all the simple stuff is implemented using the same techniques pioneered by RISC designers. But since this is a server, you're probably running x64 code, which was designed to be much more RISC like in the first place. So, I guess the real message is "Replace your non-Intel based RISC systems with Intel based RISC systems. But wait, don't answer yet! As an added bonus, Intel chips have extra hardware added so they can run all your old x86/CISC code too, that way we can pretend they're not RISC systems based on the AMD designed x64 instruction set." Re: (Score:2) Maybe that should be Intel's "next big thing". A Xeon that just supports the x64 instruction set drop real mode, drop segments, drop 286, drop the I/O instructions and make a pure 64bit ISA. Re: (Score:2) Great idea. Make it like a real RISC CPU, without all the x86 backwards compatibility addons. What a concept. Of course, then Intel couldn't claim "it'll run all you legacy software", and they might even have to admit it's a RISC design. And where would that leave them? Re: (Score:2) real mode, I/O instructions, etc. can't possibly take up that much of the transistor budget. Especially not when they can cram several cores + 30 MB of cache on one die. Re: (Score:2) Wasted space is wasted space. Most of that code has been moved into microcode but why even bother with it all? Yes your Xeon will not run DOS apps but who cares. Where they really need to do this is on the Atom line. Re:Pay no attention to the man behind the curtain (Score:5, Informative) You do know that x64 has a simplified instruction set, simplified addressing modes, larger registers, a larger logical register file, and a much larger physical register file with register renaming, right? It still supports the full x86 instruction set when running in "legacy mode", but in "long mode", it only supports a subset of instructions, and supports only 16, 32, and 64 bit registers and operands (no 8 bit support), and standardizes the instruction lengths to provide better memory alignment, and simplified instruction processing. And in either mode, all the instructions are converted to one or more macro/micro-ops before running on the "real" RISC core. You knew all that, right? Of course you did. Re: (Score:2) I don't remember hearing about this part... what significant chunk of instructions was removed? Re: (Score:2) Re: (Score:2) Re: (Score:3) Actually, while those extra gates do take up die space, they're probably fully power gated, drawing no power and producing no heat when in "long mode". How much die space is probably small, remember a 486 only had around 1M transistors, including it's cache. Even if there are 10M transistors dedicated to maintaining compatibility in a modern CPU, that's ~1% of a modern CPU. x64 mode already breaks backwards compatibility with quite a bit of x86 code, particularly x86 code that isn't 32-bit code. Anything wri Re: (Score:2) From what I've gathered they also have a form of "soft depreciation" where obsolete instructions are implemented in microcode, meaning the code still runs but much slower and a smart compiler wouldn't use those instructions anymore. That's pretty effective without breaking compatibility left and right. Re: (Score:2) As far as I know, all instructions are implemented in microcode... aside from in 6502s. Re:Pay no attention to the man behind the curtain (Score:5, Informative) IBM mainframes are z/Architecture machines, and they are certainly not RISC. z/Architecture has about 1000 opcodes, including things like 'Square Root' and 'Perform Cryptographic Operation' and 'Convert Unicode to UTF-8'. Re: (Score:2) More exactly, 894 opcodes, of which 3/4 are implemented in hardware. That's a bit less than 700 "classic" CISC opcodes. Those are the figures for the newest z/Architecture CPU, the z10 microporcessor. Re: (Score:2) All recent IBM computers, from what I understand, are based on Power7. Or am I mistaken? [wikipedia.org] It's just spin (Score:2) The 64-bit x86 machines have been eating away at IBM's, HP's, and Sun's market share for years. Partnered with a good Linux distribution and VMWare, they're more than capable of taking on "the big boys." Oracle/Sun has been resting on their laurels for far too long. Time will tell whether Oracle manages to plug the holes in that sinking ship. HP's Itanium boxen have never had significant market share. That leaves IBM. And IBM doesn't sell you just a POWER based system -- they sell you the whole suite Re: (Score:2) Re: (Score:2) and POWER7 does seem to kick ass too, no? Wow, what a terrible article (Score:2) First off, Intel went RISC in 1995 with the PentiumPro, the ISA is CISC, but the uISA is RISC. (Semantics. Bite me.) Second, Itanium is VLIW, not RISC. Third, who cares? Sun and IBM are phoning-it-in with this market, just look at the ISSCC proceedings for the past decade. I'm surprised Intel is even bothering. Is the market that big? Will it grow their bottom line? Anyone? Re: (Score:2) There was an article over at arstechnica [arstechnica.com] looking into why Itanium is still around. Apparently the Itanium market is worth $4 billion. Not exactly chump change. Re: (Score:2) x86 isn't RISC if they decode microcode into smaller RISC like operations; an internal RISC. The outside instructions must be RISC; how they pull those off internally is not really part of it. Its a black box. You do realize that even IBM's POWER chips (the final bastion of "RISC") decode instructions into uops too, right? So, are you willing to concede that POWER isn't RISC? Hard to take the story seriously (Score:3, Insightful) We live in a post-RISC world. Nearly every modern processor's "core" use the major innovations of a RISC chip. The size of the instruction set is of little importance; many so-called "RISC" architectures (such as Power) have a larger instruction set than the "CISC" x86_64. The main issue that spawned the development of RISC (that instruction sets were getting so large and unwieldy that instruction decode would take the lion's share of a die's transistors) turned out to be less of a problem than anticipated. At the time, many CISC chips (VAX in particular) were implementing high-level programming features in the architecture's assembly language. Nearly all of us have decided that efficient compilers have made a high-level, expressive assembly language unnecessary. Another factor is that modern processors are superscalar, with multiple execution pipelines per core - one instruction decoder then feeds several pipelines, which further reduces the relative size of the instruction decode. However, modern chips do implement (at least internally), other "core" ideals of the RISC processor: - Numerous registers - Load/Store memory access - Multi-stage Pipelines - One instruction per clock tick (ie. keep the complexity of an instruction down to what can execute in one tick - if something takes more than one tick, break it down into smaller pieces). The one thing that the so-called "RISC" chips have historically been known for is dependability: The machines that use them don't crash. This requires more than just a good CPU: It requires good hardware in general, and a good operating system. The "RISC" vendors - such as Sun (now Oracle), IBM, HP and SGI, control the quality of the entire system - from the electrical components, to the chassis, to the airflow in the chassis. Even the datacenter's abilities (power, cooling capacity, airflow) are specified. There are a lot of things that go into making a system that's mission-critical, and the CPU is a small part of the equation (and usually is the least troublesome). Putting an CPU on a motherboard doesn't give me guarantees about airflow, power reliability, I/O stability and speed, vibration tolerance, nonblocking I/O, and reliability - to say nothing about core OS stability. Intel isn't interested in doing anything other than selling chips. Unless Intel is willing to take upon themselves a whole-system approach - covering everything from the chassis, cooling and airflow, power supply, motherboard, and core operating system - they'll never play in the league. Making a mission-critical system is left to others who use Intel's chips, such as HP's high-end Itanium line, and SGI's Altix and Altix UV systems (using Itanium and x86_64). Re: (Score:3) That's not really true. The lack of high-end features in x86 CPUs was the weak link in getting reliable servers for some time. And when those features started being added, they appeared in servers almost immediately. Even now Xeons lag significantly behind proprietary CPUs, and Intel is just once again on a marketing push to claim every increm x86 are RISC since P6 (Score:5, Informative) When the PentiumPro came along (the first P6 processor) it used internal RISC architecture, and all Intel x86 cores from that time to today stilldecode the x86 instructions in what intel calls r-ops (risc operations) and then it processes them. Nevertheless the part where Intel says "The days of IT organizations being forced to deploy expensive, closed RISC architectures" it is a lie. You can get the UltraSPARC-T2 Verilog code to make those chips yourself and hte code is GPL. You can't do that with any Intel processor. So Intel processors are the really "closed" processor. It is true that RISC processor are more expensive, but it has nothing to do with "closed" It's not the CPU, it's the whole product. (Score:2) Sometimes I need to scale vertically and not horizontally. There are times when you need a single chassis with 200+ cores and 8TB of ram and hundreds of PCIe slots for IO. You can take my pSeries [ibm.com] from my cold dead hands. Intel solutions are getting there [hp.com] with 80 cores and 2TB of RAM. However, when it comes to moving IO, nothing beats big iron. Re: (Score:2) Re: (Score:2) Re: (Score:2) So if one woman can produce one baby in 9 months, does that mean if you assign 9 women to the job you'll get one baby delivered in one month? There are lots of workloads that are inherently single threaded (and probably always will be). If you've got a bigger, faster, more powerful CPU (or vertically scalable server, which fast shared memory and super fast I/O), that'll be a better fit for those sorts of workloads. IBM zEnterprise mainframes are the preeminent examples of the type, and they're selling extre Re: (Score:2) Another reason the z10 sells well is native BCD calculations, meaning that in some tasks coupled with their massive I/O, they are so much faster than Intel/AMD offerings that you'd need AT LEAST 10-15 times more Intel/AMD hardware, with the requisite floorspace, networking, power cabling, cooling and UPS's for all that to compare merely on the theoretical side. In practice, it can get even worse, since the tasks don't parallellize well. Yawn.. (Score:2) Anyone buying POWER or SPARC is a lost cause anyway. Sure Intel might gain a few sales, but frankly the RISC volumes are pretty small and a huge number of them are "stuck" because they have existing applications that they are unwilling/unable to port to an alternative. Or the IT guys are religious zealots. This is the same reason you find AS400s/i5, Nonstops, OpenVMS, zos, etc machines running in data centers the world over. Its not because those OS's or the hardware actually provide some huge benefit that Re: (Score:2) Re: (Score:2) POWER and PowerPC are two different things. Maybe you meant "nothing runs Mac OS 9 like PowerPC"? Or "nothing can hold up Steve Jobs plans like PowerPC"? Re: (Score:2) You're just showing how little you know. When it comes to for example IBM's mainframes, for the jobs where they are used, they massively outperform any Intel/AMD cluster both in raw performance and in operational costs over the years. Re: (Score:2) That is what IBM tells you, try generating your own numbers for once instead of spouting the ones the IBM sales guy tells you. Sure, some of those machines have very high raw performance numbers... But a very large percentage of the installs actually partition that expensive machine up into a dozen or so smaller system images. Which of course negates a lot of the argument about operational costs because the majority of long term operational costs is related to the number of system images you are maintaining. Re: (Score:3) Actually, it is the numbers we generated on our own that I'm running. For the project I worked on, a single loaded mainframe outperformed the Altix, off-the-shelf Dell cluster and a couple of other solutions the client looked at. Hardware support for BCD and the massive external I/O. As for partitioning, in secure environments, the low overhead and the ease with which you can do it on IBM's mainframe reduces the operational costs. The biggest operational cost over the years is floorspace+cooling+power, and th Good news everyone! (Score:2) " days of IT organizations being forced to deploy expensive, closed RISC architectures for mission-critical applications are nearing an end" Indeed, the days of IT organizations being forced to deploy expensive, closed, sorta-RISC is upon us! Happy days! Re: (Score:2) Re: (Score:2) it's always just a few years from evicting the RISC and mainframe architectures from their niches, no matter when you ask. I think it's pretty damn close to evicting RISC today -- or at least, putting it into a niche, when I'd hardly have called RISC/Unix a "niche market" ten or more years ago. Mainframes are definitely a niche, but where they exist they are well entrenched. Re: (Score:2) Re: (Score:2) Re: (Score:3) It's definitely a RISC processor set... the problem with the Itanium was the EPIC instruction set. A complete waste of time, as the compiler is asked to generalize decisions about the thread and multi-core state of the machine during program compilation. I mean... who the hell thought that was a good idea? It makes for a nice benchmark, but a terrible architecture. Bring us back the Alpha chip... make it a 64 core monster. Re: (Score:2) not to defend itanium, but by not foisting it on the compiler, you foist it onto an interpreter running on the CPU. Although the interpreter was wasteful enough, it had no opportunity to usefully work around the kind of dependence shown by: mov xyz, %eax add %eax, %ebx sub %ebx, %ecx or %rcx, %edx It could only insert bubbles until the each op finished. That was the crazy solution to the CPU:Memory speed imbalance. Multi cor Re: (Score:2) Why does it need bubbles? Can't an X86 keep its other ALUs busy simultaneously doing other instructions nearby that sequence using standard register renaming and opcode reordering techniques? At any rate, from what I've read it's the branch prediction that really bottlenecks performance with today's deep pipelines. The advanced runtime branch prediction in the latest CPUs (which can see and react to the actual data at hand) just plain outperforms static compile-time branch analysis.. The x86 trend,. Yes, that sequence of instructions would have to executed sequentially whether for EPIC, Power, or x86, and compilers for any architecture know that they need to expose the maximum amount of ILP to the processor. However only compilers for EPIC need to know the latency of every operation, the number of each type of functional unit, and any slot restrictions that may apply, so that the VLIW instructions can be assembled optimally. Because only by doing so can the ILP be exploited. Otherwise, like in the ex Re: (Score:2) 125W is a gaming CPU nowadays. Re: (Score:2) 125W is a gaming CPU nowadays. An i5-2500 at stock speeds takes about 60W at full load. But yeah, if you buy AMD then all bets are off. Re: (Score:2) Re:Are they also gonna shut down the gibson? (Score:4, Funny) RISC architecture is gonna change everything! I'm still waiting for the P6 chip. Triple the speed of the Pentium. With a PCI bus, too. Re: (Score:3) What are you up to with all that power? I hope you're not planning to hack a Gibson... Re: (Score:2) Or more to the point: why organizations are picking RISCs at all? Either Intel or author of RTFA is missing the point. Most organizations use RISC based systems which come as part of the business critical solutions. Hardware rarely accounts for 10% of the deal. Software licenses, deployment, testing and long term support are where the real money are. Unless Intel introduces an architecture which it commits to support for at least one decade, I do not see a thing changing on corporate landscape. The pro Re: (Score:2) GPUs are routinely 256 bit for both integer and floating point instructions. The Cell processor of Sony PS3 fame has seven 128 bit SPEs (Synergistic Processing Elements), which are controlled by a 64 bit PPE (PowerPC Processing Element). Re: (Score:2) Nah, it's just Intel admitting they lost the mobile market to ARM and the value-for-money market to AMD, so all they have left is the ricer and more-money-than-sense market.
http://hardware.slashdot.org/story/11/09/19/2131214/Intels-RISC-y-Business
CC-MAIN-2014-42
refinedweb
6,281
61.06
In this article, we will learn how to Convert an Image to ASCII art using Python. We will see the implementation in Python. Check out the Python series . Like it and share it, if you find it useful! First we understand, What is ASCII? ASCII, abbreviated from American Standard Code for Information Interchange. It is a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices. Libraries Used Pywhatkit Module PyWhatKit is a Python library with various helpful features. It is an easy to use library which does not requires you to do some additional setup. PyWhatKit is a Python library with different accommodating components. It is a simple to utilize library which doesn’t expect you to do some extra arrangements. This module help us to changing any picture over to ASCII craftsmanship utilizing python This module has lots of other cool features as well. If you wish to know more about it, you can refer to pywhatkit Module Documentation. Now that you are familiar with ASCII basics and have acquired basic knowledge of pywhatkit module. In order to access the Python library, you need to install it into your Python environment pip install pywhatkit Time to Code! Now, we will import pywhatkit package in our python script. Use the following command to do so. Code language: JavaScript (javascript)Code language: JavaScript (javascript) import pywhatkit as kt Now that we have imported the library using the command import pywhatkit as kt, let’s proceed. Let’s display a welcome message. we will make use of Code language: PHP (php)Code language: PHP (php) print("Let's convert images to ASCII art!") Now, let’s capture the image you wish to convert to ASCII art and let’s store it in source_path. This is the image I am going to use as demo here. The image depicts a Robot. Make sure to give the correct path here. If the python script could not locate an image with the given name at the mentioned path, it will result into an error. Next, create a placeholder to store the output. As its an ASCII file, make sure to use the correct extension, i.e. .text and let’s store it in target_path. Code language: JavaScript (javascript)Code language: JavaScript (javascript) source_path = 'robo.png' target_path = 'demo_ascii_art.text' Finally, let’s call image_to_ascii_art method. Code language: CSS (css)Code language: CSS (css) kt.image_to_ascii_art(source_path, target_path) NOTE: The output file will be generated in the same folder where the python script resides if no alternate path is mentioned. Let’s have a look at the output file and its comparison with the actual image file. Let’s have a look at another example. I am making use of my image as demo here. Post successful run, a new text file will get created and will look something like this With these steps, we have successfully Converted an image to an ASCII art using python. That’s it! Thank you for reading, I would love to connect with you at LinkedIn. Do share your valuable suggestions, I appreciate your honest feedback!
https://techarge.in/convert-an-image-to-ascii-art-using-python/
CC-MAIN-2021-49
refinedweb
523
67.15
I have just completed my first program in c++.I am an ultra newbie,have only been doing c& c++ for 12 days,so please forgive me if my program is shit.Anyways i would like the experts on this forum to evaluate my work and give me tips and hints to make myself a better programmer. Moschops gave me a very useful way to search for strings,and although it worked on it's own i couldn't figure it out ,most probably because it's my 2nd day in c++. I would like to know an analogue of system("cls); since the String print i used an my first screen clearing does not reset the cursor at the top left of the screen.Also i find my program to be repetitive.I am hoping that you could help me with that Here is my code #include <iostream> #include <conio.h> #include <fstream> #include <stdio.h> #include <cstdlib> #include <stdio.h> char DRL[2]; int a; int b; void INTRO(); void INPUT1(); void FILE_WRITING(); void OSCHECK(); void WRIT2USB(); void TERMINATUS(); using namespace std; int main() { INTRO(); INPUT1(); FILE_WRITING(); OSCHECK(); WRIT2USB(); TERMINATUS(); } void INTRO() { cout<<"\n\n\n"; cout << "\t\t\t Make Your USB Device Bootable" << endl; cout<<"\t\t\t\t Coded By Sawal\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"; cout<<"Press Any Key To Continue"; cin.get(); cout << string(53, '\n'); } void INPUT1() { cout<<"\n\n Please Input The Drive Letter Of Your Input Device \n"; cout<<"Please make sure that you add a colon "; cin>>DRL; cout<<DRL; } void FILE_WRITING() { ofstream f1; f1.open("c:\\batch.bat"); f1<<"@echo off\n"; f1<<"diskpart /s c:\\script.txt\n"; f1.close(); ofstream sc; sc.open("c:\\script.txt"); sc<<"list disk"; sc.close(); system("c:\\batch.bat"); cout<<"Please enter the disk number of your USB device above\n"; cout<<"It can be found under the heading DISK ### \n"; cin>>a; remove("c:\\batch.bat"); remove("c:\\script.txt"); ofstream sc1; sc1.open("c:\\script.txt"); sc1<<"Select Disk "<<a<<"\n"; sc1<<"Clean \n"; sc1<<"Create Partition Primary \n"; sc1<<"Select Partition 1 \n"; sc1<<"Active\n"; sc1<<"Format FS=NTFS \n"; sc1<<"Assign Letter "<<DRL<<"\n"; sc1<<"exit"; sc1.close(); } void OSCHECK() { system("cls"); cout<<"Press[1] for windows 7 32 bit bootcode\n\n\n Press[2] for Windos 7 64 bit bootcode \n"; cin>>b; } void WRIT2USB() { { system("diskpart.exe /s c:\\script.txt"); remove("c:\\script.txt"); ofstream ds; ds.open("work.bat"); ds<<"@echo off \n" ; ds<<"bootsect.dll -x -y BSBIN.hex\n"; ds.close(); system("work.bat"); remove("work.bat"); } switch(b) { Case1: cout<<"Writing 32 bit bootcode"; ofstream ps; ps.open("work32.bat"); ps<<"cd bsbin\\BS32\n"; ps<<"bootsect.exe /nt60 "<<DRL; ps.close(); system("work32.bat"); remove("work32.bat"); break; goto IMUM; Case2: cout<<"Writing 64 bit bootcode \n"; ofstream gs; gs.open("work64.bat"); gs<<"cd bsbin\\BS64\n"; gs<<"bootsect.exe /nt60 "<<DRL; gs.close(); system("work64.bat"); remove("work64.bat"); break; goto IMUM; } { IMUM: ofstream rs; rs.open("del.bat"); rs<<"@echo off"; rs<<"cd bsbin\\bin64 \n"; rs<<"del *.* /q \n"; rs<<"cd.. \n"; rs<<"rd bin64 \n"; rs<<"cd bin32 \n"; rs<<"del *.* \n"; rs<<"rd bin32 \n"; rs<<"cd .. \n"; rs<<"rd bsbin \n"; rs<<"del del.bat \n"; rs.close(); system("del.bat"); } } void TERMINATUS() { system("cls"); cout<<"writing bootcode finished"; } Your feedback will help me make myself a better programmer Just as a footnote, This file will delete every file in your current directory.
https://www.daniweb.com/programming/software-development/threads/392911/evaluation-of-my-c-programwin-7-bootcode-writer
CC-MAIN-2015-48
refinedweb
608
67.76
Five tips to move your project to Kubernetes Here are five tips to help you move your projects to Kubernetes with learnings from the OpenFaaS community over the past 12 months. The following is compatible with Kubernetes 1.8.x. and is being used with OpenFaaS - Serverless Functions Made Simple. Disclaimer: the Kubernetes API is something which changes frequently and you should always refer to the official documentation for the latest information. 1. Put everything in Docker It might sound obvious but the first step is to create a Dockerfile for every component that runs as a separate process. You may have already done this in which case you have a head-start. If you haven't started this yet then make sure you use multi-stage builds for each component. A multi-stage build makes use of two separate Docker images for the build-time and run-time components of your code. A base image may be the Go SDK for example which is used to build binaries and the final stage will be a minimal Linux user-space like Alpine Linux. We copy the binary over into the final stage, install any packages like CA certificates and then set the entry-point. This means that your final is smaller and won't contain unused packages. Here's an example of a multi-stage build in Go for the OpenFaaS API gateway component. You will also notice some other practices: - Uses a non-root user for runtime - Names the build stages such as build - Specifies the architecture of the build i.e. linux - Uses specific version tags i.e. 3.6- if you use latestthen it can lead to unexpected behaviour FROM golang:1.9.4 as build WORKDIR /go/src/github.com/openfaas/faas/gateway COPY . . RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o gateway . FROM alpine:3.6 RUN addgroup -S app \ && adduser -S -g app app WORKDIR /home/app EXPOSE 8080 ENV http_proxy "" ENV https_proxy "" COPY --from=build /go/src/github.com/openfaas/faas/gateway/gateway . COPY assets assets RUN chown -R app:app ./ USER app CMD ["./gateway"] Note: If you want to use OpenShift (a distribution of Kubernetes) then you need to ensure that all of your Docker images are running as a non-root user. 1.1 Get Kubernetes You'll need Kubernetes available on your laptop or development machine. Read my blog post on Docker for Mac which covers all the most popular options for working with Kubernetes locally. If you've worked with Docker before then you may be used to hearing about containers. In Kubernetes terminology you rarely work directly with a container, but with an abstraction called a Pod. A Pod is a group of one-to-many containers which are scheduled and deployed together and get direct access to each other over the loopback address 127.0.0.1. An example of where the Pod abstraction becomes useful is where you may have an existing legacy application without TLS/SSL which is deployed in a Pod along with Nginx or another web-server that is configured with TLS. The benefit is that multiple containers can be deployed together to extend functionality without having to make breaking changes. 2. Create YAML files Once you have a set of Dockerfiles and images your next step is to write YAML files in the Kubernetes format which the cluster can read to deploy and maintain your project's configuration. These are different from Docker Compose files and can be difficult to get right at first. My advice is to find some examples in the documentation or other projects and try to follow the style and approach. The good news it that it does get easier with experience. Every Docker image should be defined in a Deployment which specifies the containers to run and any additional resources it may need. A Deployment will create and maintain a Pod to run your code and if the Pod exits it will be restarted for you. You will also need a Service for each component which you want to access over HTTP or TCP. It is possible to to have multiple Kubernetes definitions within a single file by separating them with --- and a new line, but prevailing opinion suggests we should spread our definitions over many YAML files - one for each API object in the cluster. An example may be: - gateway-svc.yml - for the Service - gateway-dep.yml - for the Deployment If all of your files are in the same directory then you can apply all the files in one step with kubectl apply -f ./yaml/ for instance. When working with additional operating systems or architectures such as the Raspberry Pi - we find it useful to separate those definitions into a new folder such as yaml_arm. Here is a simple example of a Deployment for NATS Streaming which is a lightweight streaming platform for distributing work: apiVersion: apps/v1beta1 kind: Deployment metadata: name: nats namespace: openfaas spec: replicas: 1 template: metadata: labels: app: nats spec: containers: - name: nats image: nats-streaming:0.6.0 imagePullPolicy: Always ports: - containerPort: 4222 protocol: TCP - containerPort: 8222 protocol: TCP command: ["/nats-streaming-server"] args: - --store - memory - --cluster_id - faas-cluster A deployment can also state how many replicas or instances of the service to create at start-up time. apiVersion: v1 kind: Service metadata: name: nats namespace: openfaas labels: app: nats spec: type: ClusterIP ports: - port: 4222 protocol: TCP targetPort: 4222 selector: app: nats Services provide a mechanism to balance requests between all the replicas of your Deployments. In the example above we have one replica of NATS Streaming but if we had more they would all have unique IP addresses and tracking those would be problematic. The advantage of using a Service is that it has a stable IP address and DNS entry which can be used to access one of the replicas at any time. Services are not directly mapped to Deployments, but are mapped to labels. In the example above the Service is looking for a label of app=nats. Labels can be added or removed from Deployments (and other API objects) at runtime making it easy to redirect traffic in your cluster. This can help enable A/B testing or rolling deployments. The best way to learn about the Kubernetes-specific YAML format is to look up an API object in the documentation where you will find examples that can be used with YAML or via kubectl. Find out more about the various API objects here: 2.1 helm Helm describes itself as a package manager for Kubernetes. From my perspective it has two primary functions: - To distribute your application (in a Chart) Once you are ready to distribute your project's YAML files you can bundle them up and submit them to the Helm repository so that other people can find your application and install it with a single command. Charts can also be versioned and can specify dependencies on other Charts. Here are three example charts: OpenFaaS, Kakfa or Minio. Helm supports in-line templates written in Go, which means you can move common configuration into a single file. So if you have released a new set of Docker images and need to perform some updates - you only have to do that in one place. You can also write conditional statements so that flags can be used with the helm command to turn on different features at deployment time. This is how we define a Docker image version using regular YAML: image: functions/gateway:0.7.5 With Helm's templates we can do this: image: {{ .Values.images.gateway }} Then in a separate file we can define the value for "images.gateway". The other thing helm allows us to do is to use conditional statements - this is useful when supporting multiple architectures or features. This example shows how to apply either a ClusterIP or a NodePort which are two different options for exposing a service in a cluster. A NodePort exposes the service outside of the cluster so you may want to control when that happens with a flag. If we were using regular YAML files then that would have meant maintaining two sets of configuration files. spec: type: {{ .Values.serviceType }} ports: - port: 8080 protocol: TCP targetPort: 8080 {{- if contains "NodePort" .Values.serviceType }} nodePort: 31112 {{- end }} In the example "serviceType" refers to ClusterIP or NodePort and then we have a second conditional statement which conditionally applies a nodePort element to the YAML. 3. Make use of ConfigMaps In Kubernetes you can mount your configuration files into the cluster as a ConfigMap. ConfigMaps are better than "bind-mounting" because the configuration data is replicated across the cluster making it more robust. When data is bind-mounted from a host then it has to be deployed onto that host ahead of time and synchronised. Both options are much better than building config files directly into a Docker image since they are much easier to update. A ConfigMap can be created ad-hoc via the kubectl tool or through a YAML file. Once the ConfigMap is created in the cluster it can then be attached or mounted into a container/Pod. Here's an example of how to define a ConfigMap for Prometheus: kind: ConfigMap apiVersion: v1 metadata: labels: app: prometheus name: prometheus-config namespace: openfaas data: prometheus.yml: | scrape_configs: - job_name: 'prometheus' scrape_interval: 5s static_configs: - targets: ['localhost:9090'] You can then attach it to a Deployment or Pod: volumeMounts: - mountPath: /etc/prometheus/prometheus.yml name: prometheus-config subPath: prometheus.yml volumes: - name: prometheus-config configMap: name: prometheus-config items: - key: prometheus.yml path: prometheus.yml mode: 0644 See the full example here: ConfigMap Prometheus config Read more in the docs: 4. Use secure secrets In order to keep your passwords, API keys and tokens safe you should make use of Kubernetes' secrets management mechanisms. If you're already making use of ConfigMaps then the good news it that secrets work in almost exactly the same way: - Define the secret in the cluster - Attach the secret to a Deployment/Pod via a mount The other type of secrets you need to use is when you want to pull an image in from a private Docker image repository. This is called an ImagePullSecret and you can find out more here. You can read more about how to create and manage secrets in the Kubernetes docs: 5. Implement health-checks Kubernetes supports health-checks in the form of liveness and readiness checking. We need these mechanisms to make our cluster self-healing and resilient to failure. They work through a probe which either runs a command within the Pod or calls into a pre-defined HTTP endpoint. A liveness check can show whether application is running. With OpenFaaS functions we create a lock file of /tmp/.lock when the function starts. If we detect an unhealthy state we can remove this file and Kubernetes will re-schedule the function for us. Another common pattern is to add a new HTTP route like /_/healthz. The route of /_/ is used by convention because it is unlikely to clash with existing routes for your project. If you enable a readiness check then Kubernetes will only send traffic to containers once that criteria has passed. A readiness check can be set to run on a periodic basis and is different from a health-check. A container could be healthy but under too much load - in which case it could report as "not ready" and Kubernetes would stop sending traffic until resolved. You can read more in the docs: Wrapping up In this article, we've listed some of the key things to do when bringing a project over to Kubernetes. These include: - Creating good Docker images - Writing good Kubernetes manifests (YAML files) - Using ConfigMaps to decouple tunable settings from your code - Using Secrets to protect sensitive data such as API keys - Using liveness and readiness probes to implement resiliency and self-healing For further reading I'm including a comparison of Docker Swarm and Kubernetes and a guide for setting up a cluster fast. Compare Kubernetes with Docker and Swarm and get a good overview of the tooling from the CLI to networking to the component parts If you want to get up and running with Kubernetes on a regular VM or cloud host - this is probably the quickest way to get a development cluster up and running. Follow me on Twitter @alexellisuk for more. Five tips to move your project to Kubernetes - @kubeweekly @kubernetesio @Docker— Alex Ellis (@alexellisuk) March 19, 2018 Acknowledgemnts: Thanks to Nigel Poulton for proof-reading and reviewing the post.
http://brianyang.com/five-tips-to-move-your-project-to-kubernetes/
CC-MAIN-2018-22
refinedweb
2,114
59.84
Hello. I am trying to "git format-patch" for allegro5 and i have encounter a smallproblem with trailing white space. My editor is set to remove the trailingwhite space and i was so sure( Oh how wrong i was ! ) that every other editorin the world does it. Well who the hell needs a bunch of white space characterson an empty line.And now when i formated the patch i noticed that the files i was working withhad this white space trailing on almost every empty line. The patch is now plaguedwith lines like this: Not only is the patch unnecessary large but it is also harder to look troughand see what was really done. Does allegro have any kind of guidelines regarding white space trailing in the code.Should it be removed? Or should it be kept as the original author wrote it? I am asking because it will be a real pain to return the white space in the filesto original state. My GIT-KUNG-FU is not that good to remove this kind of problem( if it is possible).And my patch looks ugly. Your best bet is to reapply your changes to a fresh source checkout with a different editor that doesn't eat white space. My Website! | EAGLE GUI Library Demos | My Deviant Art Gallery | Spiraloid Preview | A4 FontMaker | Skyline! (Missile Defense) Elizabeth Warren for President 2020! | Modern Allegro 4 and 5 binaries | Allegro 5 compile guideKing Piccolo will make it so We prefer the whitespace changes to be their own commits (although we rarely do them unless the whitespace is really bad). It's not particularly hard to split up a commit though: First, undo the commit (this'll keep the changes): git reset HEAD~1 Then use the interactive add mode. git add -p <optionally put the files you meant to change here> That'll put you in a mode where you'll go through every diff hunk. Press y if you want to stage it (i.e. the changes you really meant to do), or n if not. After that's done you can do: git commit -m "Your commit message" After that's done you'll still have your whitespace changes... I'd probably avoid committing them, but you can if you want. If that gives you too much trouble, then don't worry too much we can do the split for you as well. "For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro] ^ Excellent post SiegeLord. Interactive add is probably the best option here. Edgar, I'm getting old too, but seriously learn something new and master such an important tool. Git can save you so much fuss if you just take a weekend to learn it. There's even a free/libre "book" available online to walk you through most of it. Beyond that, ask in #git on freenode any question and they'll steer you in the right direction and hold your hand if you need it. And when we're present we can help you in #allegro too. On a side note: if you are viewing a patch with a Git command and the patch is full of white-space only noise you can add a -w option to suppress it. It won't help with merging such a patch, but it can at least help for reviewing it. UIs for viewing Git history should support such an option too (or should be complained to if it doesn I do use git, but I haven't really had the need for its more advanced features. I rarely need any more than git branch and git merge. If git can somehow magically remove whitespace changes from a commit that's cool and its news to me. Thank you all. I will try to get rid of the white space changes on my own for now. Improving my GIT-KUNG-FU along the way.It will be propably a real pain, but on the other hand, a valuable experience. Will post the result when i am satisfied with them. You may be able to hand edit the patch and literally remove whole swaths of ws changes. Ok. I have managed to get rid of the white space changes and i have compiled apatch preview so that you can see what i am trying to do and if it is worthto continue( if it is actually desired to make such changes ). PATCH HERE I have noticed a little differences between some bitmap drawing functions so i made those changes: 1:al_draw_scaled_bitmap and al_draw_scaled_rotated_bitmap where one takesabsolute destination bitmap size and one takes scale value. Made them both take scale value and to compensate for the change i addedanother functions named al_draw_stretched_bitmap and al_draw_stretched_rotated_bitmapthat takes absolute destination bitmap size. 2:al_draw_tinted_bitmap_region and al_draw_tinted_scaled_rotated_bitmap_region hadthe tint argument in different order. Corrected that. 3:Now only region drawing functions use source bitmap position and dimensions arguments. 4:Made a function for every sane combination of Tinted, Scaled, Stretched and Rotated drawing.Also made those functions with region variants. 5:Added documentation for all available functions. 6:Adjusted examples to use the new or changed drawing functions. To compile the allegro with this preview patch you have to disableWANT_DEMO and WANT_TESTS in cmake for now.Still trying to figure out how the tests works for now. Take a look and share your opinions please. Woah man, are you from the future? {"name":"611906","src":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/7\/2\/72e00ab6eb7bf3b17b698d64d215eedd.png","w":1920,"h":1080,"tn":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/7\/2\/72e00ab6eb7bf3b17b698d64d215eedd"} I'm looking at your patch now... A tiny bit of work has been done on the patch again. The demos and tests now compiles too.Also managed to get the test_bitmaps2.ini to pass all the tests.But the test_bitmaps.ini is much bigger beast and i still fully do not understandwhole test_driver.c so it fails some of the tests.A few tips from someone who knows what is going on there would be really helpfull. New patch preview can be found here. @Edgar:Seems like all that white space trailing in the patch also messed up my TTD ( a Time Travel Device, not a Tranport Tycoon Deluxe )settings and the patch ended in the wrong time Seems there are options for dealing with white space changes ; The merge mechanism (git merge and git pull commands) allows the backend merge strategies to be chosen with -s option. Some strategies can also take their own options, which can be passed by giving -X<option> arguments to git merge and/or git pull. ... ignore-space-changeignore-all-spaceignore. Just to be clear - items 1 & 2 in your list are breaking changes to the Allegro API, are they? Yes, they are. Unfortunately. But then i could just name the new functions something like: al_blit_bitmapal_blit_tinted_bitmapal_blit_stretched_rotated_bitmap_region and so on to keep the current al_draw_bitmap routines untouched. If we're going to make sweeping changes, I have some suggestions : Make naming and parameters consistent (its hard enough to remember the parameters as it is) : See how all the drawing routines start with the destination? And how the region drawing routines start with the source rectangle and then the destination? After that, it's just a matter of adding the parameter named by the function to the argument list before the flags parameter. Everything lines up, the parameters are accepted in the order given by the naming of the function, and it just makes things so much easier. *The bonus of this naming scheme is that it doesn't require any changes to the ABI of Allegro. The current versions of drawing routines could be mapped to the new ones to maintain compatibility, while at the same time, creating consistency and order in the naming and use of these functions. I'm still looking at your patch. Thank you Edgar for your suggestion and wording my intention with this patch correctly.As my english is self-taught, sometimes its hard to write down exactly what i want. Yes. The whole point of this patch is to make the drawing functions naming and parameter list consistent.But my propposed changes are actually breaking the compatibility that is why i like your naming of the functions better.I am willing to completely rewrite it if we can come up with a reasonable naming and parameter order. I am proposing to use your naming witch does not interfere with the current drawing functions:al_draw_bitmap_all-sane-variations and al_draw_bitmap_region_all-sane-variations And parameter order list: bitmap, tint, sx, sy, sw, sh, cx, cy, dx, dy, dw or scalex, dh or scaley, angle, flags Witch would make the longest function name: al_draw_bitmap_region_tinted_stretched_rotated( bmp, tint, sx, sy, sw, sh, cx, cy, dx, dy, dw, dh, angle, flags ); But i am realy fine with anything that is consistent. Current naming and parameter order list in my patch ( breaks compatibility ): Okay, let me try to explain my naming scheme : It goes like this (if it were C++, these would be the functions names, but since C doesn't allow function overloading this will have to do); al_draw_bitmapal_draw_bitmap_region Which makes the first parameters the bmp , and then source and destination. After that you have either stretched, scaled, tinted, or rotated. Any version that is stretched or scaled should be the next part of the function name. _stretched_scaled This makes the next parameter either the destination width and height, or the scale factor for x and y. This parameter is optional, and then the next ones are _tinted, _rotated, and _tinted_rotated in that order. And then the flags come last. So it is : al_draw_bitmap[_region][_stretched | _scaled] | [_tinted | _rotated | _tinted_rotated] I think that's pretty easy to do, but you'd have to ask others for their opinion on this, I'm not a developer, but I can help you. And in the end, there should basically only be one function that every one of those function calls. draw_bitmap should call draw_bitmap_region. Or if performance is a concern I guess you have to write every single function. But I would try to avoid that. #include <std_disclaimer.h> Side question: Do... you actually need GIF support? I mean, yeah yeah, lots of people on forums say that and they're often wrong and need to just STFU and answer the question... But, High-level question: do you really need GIF support? Are you using it for animations? Could you implement animations a different way? Because the only real use-case I can think of for GIF in >2010 is for an image editor or conversion tool and the thing is... Allegro is a bad candidate for that because Allegro is designed for games--not tools. It fails on all kinds of edge cases of images that if you were simply making a game, you'd already have complete control of file formats and just use a different file format, or the same format, but with a different internal header that works. That is, is Allegro fails on some variation of BMP/GIF/PNG, you simply convert all your files to be a workable format. PNG is basically the only file format anyone uses anymore. Even BMP's are fat, and, may even be slower than PNG because compression increases the speed you can suck them from the hard drive while modern CPUs can easily run decompression. (That's the entire premise for NTFS compression, btw. And we've been using it since... the 2000's. Hard drives are slower than CPUs, so make the files smaller and decompress them once they hit the CPU.) So I'm not sure if I'm illustrating my point effectively but if this is a game, you should be able to control your file formats and GIF doesn't add any features that other, better, formats don't already have. Except animation... for which I really wish APNG (animated PNG) would become a wildly accepted format. But for a game, there's almost zero situations (that I can think of) where you'd want to intentionally use the GIF format. But if it's a tool (I'm speaking from experience here!) Allegro is going to bite you in the butt in strange places. Like loading BMP's which can have tons of different header styles and Allegro fails on some of them--even ones that GIMP outputs. Even if it's just one type, end-users aren't going to appreciate "It's not in my control, it's Allegro's fault." when your tool can't load every image on their computer. They just need it to work. And in that case, you should use a dedicated image loading library. Because Allegro is focused on making games, not making every edge-case image format work--unlike an image library. -----sig:“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs"Political Correctness is fascism disguised as manners" --George Carlin Beautiful post I'm not sure it belongs in this thread though. Ok. Here it is. This is how the function names and parameter list looks like: There are always trade-offs when adding new API. On one hand, you add new functionality that was not easy or possible before. On the other hand, you add complexity that makes it more difficult to learn and use it. Only when the first outweighs the second does should we consider adding it. I don't think it does in the case of this change. Allegro already has 11 ways to draw a bitmap. This patch adds 11 more ways (or 22, since you're coming up with new names). That's up to 33 ways to draw a bitmap. What new functionality do you gain from this? It would appear that you're saving a person from occasionally writing dw / al_get_bitmap_width(bmp) or xscale * al_get_bitmap_width(bmp) if their requirements are mismatched with what Allegro expects. But, in return, you now need to know the difference between scaled and stretched; you need to understand exactly what it means for dw to be specified when angle is e.g. PI/3 (it clearly doesn't actually correspond to the destination width). And, of course, you need to spend time thinking about which variant to use given what values you have. Is Allegro's current bitmap drawing API perfect? No; I for sure can appreciate the oddity of having al_draw_scaled_bitmap and al_draw_scaled_rotated_bitmap taking different methods of scaling. And why does al_draw_scaled_bitmap need to know the source width and height? I actually know the reason why. It's because this is Allegro 5, and it just copied Allegro 4's API with a few renames. People often say that Allegro 5 is completely different than Allegro 4, but this is one of the places where the resemblance is clear. Maybe it should have done something different, but at this point it's 8 years too late. 8 years too late but never a day too soon to fix old mistakes. Izual, That's closer to what I had in mind, but there's an important problem. cx and cy should not be the second parameter in draw bitmap or the fifth in draw bitmap region. It should come after dw,dh. So that you always have al_draw_bitmap_X(bmp , dx , dy , ...) al_draw_bitmap_scaled_X(bmp , dx , dy , dw , dh , ...) and al_draw_bitmap_region_X(bmp , sx , sy , sw , sh , dx , dy , ...) al_draw_bitmap_region_scaled_X(bmp , sx , sy , sw , sh , dx , dy , dw , dh , ...) Also, I have a question, are cx and cy pivot points in the source bitmap? SiegeLord - the destination width and height would be assumed to be pre-rotation. stretched vs scaled is obvious in their naming. Scaling uses factors and stretching involves dimensions. Consistency is key. And with autocomplete these days you can type al_draw and then select which one you want and it will give you the parameters. Anyway, this could all be hidden under ALLEGRO_UNSTABLE. And we could easily deprecate the old style drawing functions and have them forward themselves to the new ones. It could all happen rather painlessly. The only thing that would change is that you would get deprecation warnings when defining ALLEGRO_UNSTABLE. My 2¢ Beautiful post I'm not sure it belongs in this thread though. lulul I'm embarassed. Don't post at 3 am folks. @EdgarYes. cx,cy is a point on the source bitmap. This point is drawn at the dx,dy position and the bitmap is rotated around this point.That is why i placed it together with the parameters that deal with the source bitmap. It seemed logical to me. @SiegeLordThank you for your answear and stating the reasons. Just wanted to add a little bit to allready a great library. With the availability of transforms, similar code can be easily written, but at the expense of a great deal of verbosity ; The question is, do we want this : Or do we want this : al_draw_bitmap_scaled_tinted_rotated(bmp , cx , cy , scalex , scaley , al_map_rgb(0,96,255) , M_PI/2.0 , 0); The choice seems clear.
https://www.allegro.cc/forums/thread/617749/1041331
CC-MAIN-2019-51
refinedweb
2,869
72.46
This preview shows pages 1–4. Sign up to view the full content. Introduction to Functions [ Let’s summarize and introduce more ] Why do we write FUNCTIONS? • Better organization of code. • Functions can be reused: call the same function from different parts. Eg. Add 5 marks as bonus to all subjects: then we can also call c o nve rtAll to update all grades. • One single function can serve different sets of input data values. (just need to pass the input values through parameters whenever we call the function). Function Names • Should be descriptive. Bad examples: • Often composed of 2 or more words: first one is a verb, eg. c o nve rt All, s ho w De ta ils • A common way: First word starts with lowercase, others start with uppercase. • Some other constraints: CANNOT start with 0 - 9. CANNOT be a keyword. Eg. 9To5 and else are wrong Format of Function Definition : function function_name ( parameter-list ) { ... some code . .. (may or may not return a value) } To Call a Function: function_name(argument-list) function Function1 () { .. } bad names (not meaningful) function temp () { .. } View Full Document This preview has intentionally blurred sections. func tio n s ho wMultiple ( a ,b ) { ale rt( a * b ); } … s ho wMultiple ( 10 , 20 ); Parameters / Arguments parameters The showMultiple function "sees" 10, 20 as the values of its parameters a, b . a*b is the product of a and b. arguments showMultiple is called with the arguments 10 and 20 . ü Parameters m ake a func tio n fle xible : C an wo rk fo r diffe re nt data. Arguments (eg. 10, 20 here) are the actual values that we pass to the function in the function call, that fit the function parameters (ie. a and b). Arguments and parameters are separated by commas. Introduction to Functions Parameter names (a and b) should appear in the function only. Arguments in the argument-list should match the parameter-list. function calculateMultiple(a,b) { return (a*b); } … do c um e nt.F1.X.value = c alc ula te Multiple (10,20); ale rt(c alc ulate Multiple (11,22)); The return statement • The return keyword tells the browser to return a value from the function definiti on to the statement that called the function, eg. return 'A' ; • After running the return statement, the function is stopped immediately , eve n if there exists any other statement below the return statement. Example: View Full Document This preview has intentionally blurred sections. This note was uploaded on 03/10/2012 for the course CS 1301 taught by Professor Dr.wong during the Winter '08 term at City University of Hong Kong. - Winter '08 - Dr.Wong - Computer Programming Click to edit the document details
https://www.coursehero.com/file/6835332/11Programming-Structure-VII/
CC-MAIN-2017-17
refinedweb
454
67.04
Java date issue with Daylight saving One of our UK clients complained that some of the timestamp settings in IFS are incorrect. This issue was noticed in Tasks where the received timestamp is one hour behind of the actual time. After some debugging the function call to save the task, we identified that the received time is set from the application server and seems this is an issue with Java Date class. It could be verified that Java is picking the wrong date by writing a small program which gets the java date, exactly the same way the IFS middletier code sets the value for Received field. import java.util.Date.*; public class date { public static void main(String[] args) { System.out.println(new java.util.Date().toString()); } } Making this issue more complicated, we’ve found that this issue only exist for the Java version packed with IFS App8 (Java version 1.7.0_45) but not exist in java 1.8 We could not find any information in java release notes regarding this sort of an issue but Java 8 troubleshooting guide shed light on this issue as it seems Java timezone depends on the Time Zone information registry value. [HKEY_LOCAL_MACHINESYSTEMCurrentControlSetControlTimeZoneInformation] In the customer registry settings, DynamicDaylightTimeDisabled registry value which java looks for is correctly defined but seems DisableAutoDaylightTimeSet value causing the issue. Resolution Setting the DisableAutoDaylightTimeSet=0 seems to solve the issue. - Launch the Registry Editor (regedit.exe). - Navigate to HKLMSYSTEMCurrentControlSetControlTimeZoneInformation - Double-click on DisableAutoDaylightTimeSet, change the value to 0 and click on OK button. 2 Comments Nice article and good info!! Does this happen only for tasks or can it be observed in other places? Thanks Buddie for following my blog. This issue should exist for all places where the date is fetched from the application server logic. All the dates in application forms are fetched from database so they won’t affect. But I guess date values in application messages, application server log files should have the incorrect date. Cheers!
https://dsj23.me/2017/04/21/java-date-issue-with-daylight-saving/
CC-MAIN-2022-27
refinedweb
333
53.61
31 August 2012 10:16 [Source: ICIS news] SINGAPORE (ICIS)--Taiwan VCM (TVCM) has postponed the scheduled turnaround at its 410,000 tonne/year vinyl chloride monomer (VCM) plant in ?xml:namespace> The producer initially planned to shut the facility for two weeks in the second half of August to replace the plant's catalyst, the source said. However, the company decided to postpone the shutdown after discovering that the catalyst is still performing well during a recent decoking exercise early this month, the source said. Consequently, TVCM has some cargo available for September-loading exports, the source said. The VCM unit is currently running at more than 95% of
http://www.icis.com/Articles/2012/08/31/9591517/taiwan-vcm-delays-kaohsiung-plant-maintenance-to-end-octearly-nov.html
CC-MAIN-2014-49
refinedweb
110
50.57
20 March 2008 07:00 [Source: ICIS news] SINGAPORE (ICIS news)--Jinzhou Petrochemical, China’s largest isopropanol (IPA) producer, plans to shut down one of its two 50,000 tonne/year production lines for 25 days of scheduled maintenance starting 20 April, a company official said on Thursday. The other line was expected to be running during the shutdown period. Both units are located in the northeastern ?xml:namespace> The official said plans to start up a third 100,000 tonne/year line in August this year had been delayed as the company was focusing on bringing on stream another petrochemical unit. He did not reveal further details. “Construction of the IPA plant has yet to begin. It would take at least one year to start up the plant,” he said in Mandarin, declining to comment on the new time frame for building the third.
http://www.icis.com/Articles/2008/03/20/9109871/jinzhou-plans-april-turnaround-for-ipa-unit.html
CC-MAIN-2013-20
refinedweb
145
68.5
table of contents NAME¶ toupper, tolower, toupper_l, tolower_l - convert uppercase or lowercase SYNOPSIS¶ #include <ctype.h> int toupper(int c); int tolower(int c); int toupper_l(int c, locale_t locale); int tolower_l(int c, locale_t locale); toupper_l(), tolower_l(): - Since glibc 2.10: - _XOPEN_SOURCE >= 700 - Before glibc 2.10: - _GNU_SOURCE DESCRIPTION¶ These functions convert lowercase letters to uppercase, and vice versa. If c is a lowercase letter, toupper() returns its uppercase equivalent, if an uppercase representation exists in the current locale. Otherwise, it returns c. The toupper_l() function performs the same task, but uses the locale referred to by the locale handle locale. If c is an uppercase letter, tolower() returns its lowercase equivalent, if a lowercase representation exists in the current locale. Otherwise, it returns c. The tolower_l() function performs the same task, but uses the locale referred to by the locale handle locale. If c is neither an unsigned char value nor EOF, the behavior of these functions is undefined. The behavior of toupper_l() and tolower_l() is undefined if locale is the special locale object LC_GLOBAL_LOCALE (see duplocale(3)) or is not a valid locale object handle. RETURN VALUE¶ The value returned is that of the converted letter, or c if the conversion was not possible. ATTRIBUTES¶ For an explanation of the terms used in this section, see attributes(7). CONFORMING TO¶ toupper(), tolower(): C89, C99, 4.3BSD, POSIX.1-2001, POSIX.1-2008. toupper_l(), tolower_l(): POSIX.1-2008. NOTES¶ The. SEE ALSO¶ isalpha(3), newlocale(3), setlocale(3), towlower(3), towupper(3), uselocale(3), locale(7) COLOPHON¶ This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.debian.org/bullseye/manpages-dev/toupper.3.en.html
CC-MAIN-2022-05
refinedweb
293
57.16
Installing mongoose is as easy as running the npm command npm install mongoose --save But make sure you have also installed MongoDB for your OS or Have access to a MongoDB database. 1. Import mongoose into the app: import mongoose from 'mongoose'; 2. Specify a Promise library: mongoose.Promise = global.Promise; 3. Connect to MongoDB: mongoose.connect('mongodb://127.0.0.1:27017/database'); /* Mongoose connection format looks something like this */ mongoose.connect('mongodb://USERNAME:PASSWORD@HOST::PORT/DATABASE_NAME'); Note: By default mongoose connects to MongoDB at port 27017, Which is the default port used by MongoDB. To connect to MongoDB hosted somewhere else, use the second syntax. Enter MongoDB username, password, host, port and database name. MongoDB port is 27017 by default; use your app name as the db name.
https://riptutorial.com/mongoose/example/3773/installation
CC-MAIN-2021-31
refinedweb
131
58.58
Subject: Re: [OMPI users] what is inside mpicc/mpic++ From: Jeff Squyres (jsquyres_at_[hidden]) Date: 2008-09-18 08:30:11 I believe that the problem is your "-DMPI" in your Makefile. The line in question in mpicxx.h is: namespace MPI { When you use -DMPI, the preprocessor replaces this with: namespace 1 { which is not legal. In short, the application using the name "MPI" is illegal. That name is reserved for the MPI C++ namespace. If you change the name to something else (like -DUSE_MPI, and change the source code to match), this particular problem should be solved. But then again, I'm not sure why you changed CPP to g++ and CC to gcc; shouldn't they be mpicc? It's also not clear from the context of Makefile.common whether CPP is supposed to be the C preprocessor or the C++ compiler. If it's supposed to be the C preprocessor, then "mpicc -E" would be fine; if it's supposed to be the C++ compiler, then mpic++ (or mpiCC) would be fine. On Sep 18, 2008, at 1:46 AM, Shafagh Jafer wrote: >: > > I am trying to figure out a problem that i am stuck in :- > ( could > anyone please tell me how their mpicc/mpic++ looks like? is there > any thing > readable inside these files?because mine look corrupted and are > filled with > unreadable charachters. > > Please let me know. > > > > _______________________________________________ > > users mailing list > > users_at_[hidden] > > > > _______________________________________________ > users mailing list > users_at_[hidden] > > > <Makefile.common>_______________________________________________ > users mailing list > users_at_[hidden] > -- Jeff Squyres Cisco Systems
http://www.open-mpi.org/community/lists/users/2008/09/6601.php
CC-MAIN-2014-41
refinedweb
257
81.83
I am still having slight troubles with functions and arrays. I have created an array that has been filled with random numbers. I am trying to create a function that sets up the array and returns the highest value. I am not to sure on how to approach this but this and what I have written so far has not been able to compile. namespace Task_1._13 { class Program { static void Main(string[] args) { gettingMaximum(int i); } public int gettingMaximum(int i); { int maximum = 0; int[] myArray = new int[10]; Random rand = new Random(); for (int i = 0; i < myArray.Length; i++) { myArray[i] = rand.Next(19); } for (int i = 0; i < 10; i++) { if (i == 0) maximum = myArray[i]; else if (myArray[i] < maximum) maximum = myArray[i]; int result = i; return result; } } } } That's what I have gotten so far. I am not very experienced in programming so help would be appreciated. for (int i = 0; i < myArray.Length; i++) { if(myArray[i] > maximum) { maximum = myArray[i]; } } return maximum; OR you can just use the Max function like this return myArray.Max();
https://www.codesd.com/item/function-returning-the-highest-value-in-a-table.html
CC-MAIN-2020-34
refinedweb
183
71.65
31 March 2010 18:29 [Source: ICIS news] LONDON (ICIS news)--African high density polyethylene (HDPE) prices slipped by $10-20/tonne (€7.5-15/tonne) this week as high upstream costs, improving availability and buyer hesitation continued to weigh on values, sources confirmed on Wednesday. At least two Middle Eastern producers confirmed they were now offering reductions of $20/tonne on HDPE, which was immediately apparent in the eastern, northern and southern African markets that rely heavily on imports from the Middle East, sources said. Elsewhere, linear low density polyethylene (LLDPE) spot prices also began to slip by $20/tonne on ample supply in both the eastern and northern African markets. “Prices are starting to come down like crazy,” reported one Indian trader offering into the eastern African market. “?xml:namespace> Indeed, several sources noted reductions of up to $100/tonne had been seen on the HDPE market from one Middle Eastern supplier, although such offers had now been ‘withdrawn’ according to traders. One global producer offering from its Saudi Arabian plant noted: “I do not understand why these Middle Eastern suppliers offer so low. The Chinese market is starting to recover and feedstock costs are so high. They cannot be making money on these prices.” The western African market, meanwhile, remained stable. A trader offering into the region said: “The [western African] market is buffered by its location – the Middle Eastern suppliers are more active in the eastern and northern [African] markets, and the downward sentiment has not reached western Africa yet.” However, the source added similar decreases in coming weeks in the western African market could not be ruled out. “April is still up in the air and prices are bouncing around,” the trader said. “The producers want increases, or a rollover at minimum, but demand [for HDPE and LLDPE] is not particularly good and buyers expect a price collapse. They will hold out until that
http://www.icis.com/Articles/2010/03/31/9347589/african-polyethylene-prices-slip-by-around-20tonne.html
CC-MAIN-2015-11
refinedweb
319
50.67
Principles of XML design When to use elements versus attributes Exploring the oldest question in XML design Content series: This content is part # of # in the series: Principles of XML design This content is part of the series:Principles of XML design Stay tuned for additional content in this series. Several frequently pondered questions of DTD design in SGML have followed the legacy to its offshoot, XML. Regardless of what XML schema language you use, you might find yourself asking: - When do I use elements and when do I use attributes for presenting bits of information? - When do I require an order for elements, and when do I just allow arbitrary order? - When do I use wrapper elements around sequences of similar elements? In my experience working with users of XML, the first question is by far the most common. In some cases the answer is pretty unambiguous: - If the information in question could be itself marked up with elements, put it in an element. - If the information is suitable for attribute form, but could end up as multiple attributes of the same name on the same element, use child elements instead. - If the information is required to be in a standard DTD-like attribute type such as ID, IDREF, or ENTITY, use an attribute. - If the information should not be normalized for white space, use elements. (XML processors normalize attributes in ways that can change the raw text of the attribute value.) Unfortunately, design decisions don't often fall into such neat categories. The question remains how to make the right choice in the gray areas. The usual answer is No single answer is right, use your best judgment. But this is not very helpful for those trying to find their feet with XML. True, even experts do not always agree on what to do in certain situations, but this is no reason not to offer basic guidelines for choosing between XML and attributes. First I want to comment on two guidelines that I have heard and do not recommend. I have heard Just make everything an element. The reasons given range from Attributes just complicate things to Attributes can stunt extensibility. But if you do not use attributes, you are leaving out a very important aspect of XML's power, and you're probably better off using some delimited text format. I have also heard If it is the sort of material you would expect to display in a browser, use element content. The problem with this guideline is that it encourages people to think of XML content design in terms of presentation, two considerations that should not be mixed. I present a very similar guideline in this article, but I express it in terms of the intent of the content, rather than in terms of presentation. In the rest of this article, I present a set of guidelines that I do recommend when choosing between elements and attributes. Recommended guidelines I have divided these guidelines into a set of principles that I think frame the choice between elements and attributes overall. None of the guidelines are meant to be absolute; use them as rules of thumb and feel free to break the rules whenever your particular needs require it. Principle of core content avoids cluttering up the core content with auxiliary material. For machine-oriented records formats, this generally means application-specific notations on the main data from the problem-domain. As an example, I have seen many XML formats, usually home-grown in businesses, where document titles were placed in an attribute. I think a title is such a fundamental part of the communication of a document that it should always be in element content. On the other hand, I have often seen cases where internal product identifiers were thrown as elements into descriptive records of the product. In some of these cases, attributes were more appropriate because the specific internal product code would not be of primary interest to most readers or processors of the document, especially when the ID was of a very long or inscrutable format. You might have heard the principle data goes in elements, metadata in attributes. The above two paragraphs really express the same principle, but in more deliberate and less fuzzy language. Principle of structured information. I see names in attributes a lot, but I have always argued. I should point out that> I hope to expand on the treatment of people's names in markup in a future article. Principle of readability. In some cases, people can decipher the information being represented but need a machine to use it properly. using IDs that could have business meaning), but such IDs are usually props for machine processing. For these reasons I recommend putting URLs and IDs in attributes. Principle of element/attribute include> In this case I applied a mix of the Principle of core content and the Principle of readability to the decision to put the value in content and the units in an attribute. This is one of those cases that are less cut and dried, and other schemes might be as suitable as mine. The solution also involves contradicting the original decision to put the portion size into an attribute based on the Principle of core content. This illustrates that sometimes the principles will lead to conflicting conclusions where you'll have to use your own judgment to decide on each specific matter. No substitute for study and experience XML design is a matter for professionals, and if you want to gain value from XML you should be willing to study XML design principles. Many developers accept that programming code benefits from careful design, but in the case of XML they decide it's OK to just do what seems to work . This is a distinction that I have seen lead to real and painful costs down the road. All it takes for you to learn sound XML design is to pay attention to the issues. Examine standard XML vocabularies designed by experts. Take note of your own design decisions and gauge the positive and negative effect each has had on later developments. As you gain experience, your instinct will become the most important tool in making design decisions, and the care you take will pay certain rewards if you find yourself using XML to any significant extent. I plan to continue covering XML design principles in future articles such as this one. I'll also continue to touch briefly on issues of XML design in my Thinking XML column. Downloadable resources Related topics - Check out the famous Cover Pages, which offer several compilations of information on the topic aggregated on these pages: SGML/XML: Elements versus attributes, April 1998 and SGML/XML: Using Elements and Attributes. - Read Effective XML by Elliotte Rusty Harold (Addison-Wesley, 2003). In my experience, the author has an excellent command of XML design and best practices. You can read some of the sections online, including "Store metadata in attributes" and "Make structure explicit through markup", both of which are directly relevant to this article. - Don't miss the any of articles in this series on XML design: - "Use XML namespaces with care" (developerWorks, April 2004). - "Element structures for names and addresses" (developerWorks, August 2004). - IBM trial software: Build your next development project with trial software available for download directly from developerWorks.
http://www.ibm.com/developerworks/xml/library/x-eleatt/index.html
CC-MAIN-2017-13
refinedweb
1,231
59.74
A. In this blog I’ll describe: - The open-source Intel graphics chip - Sun news on open-source java - Motorola promotes mobile Linux while Nokia releases device - Black Duck and Palamida keep coders clean - Nat Friedman declares Linux desktop ready (for many uses) The open-source Intel graphics chipWhy did Intel choose to put a driver for a valuable 3D graphics chip under an open source license, when so many companies tenaciously hold all information secret about their hardware products? Often, when products start to go open source, analysts treat it as a sign that the field is becoming commoditized and that no further advantage can be derived from trying to differentiate products. But it’s hard to believe that there’s little innovation left in video cards. Analysts decided that Intel was trying to get a leg up on its competition by freeing its driver. Certainly, it wants its chip used in Linux systems, and open-sourcing the driver means it can keep up with kernel changes; its driver will continue to work as each version of the kernel is released every few weeks. But the main lesson this decision holds for me is the power of the free software community. It was their lobbying (and sometimes, their intransigent refusal to support binary drivers) that forced this most prominent of hardware companies to make the open-source decision. Every successful company knows it has to give the customer what she wants–but now it’s a good idea to give the developer and hacker community what it wants too. Sun news on open-source javaSo I attended tonight’s reception, hosted by Sun at the ultra-metropolitan W Hotel, where managers discussed their plans to make Java open source. Considering they have already said it’s not a matter of “whether” but “how,” I considered having two or three extra drinks (at Sun’s expense) so I could go back to the hotel and blast them for moving too slow and for saying nothing new. I know other people may say this, but I’m glad I held back; to be fair, they’re doing quite a bit. In summary, the key points I took from the announcement are: - All of Sun’s software (”the entire software portfolio”) will eventually be open source. Both Java virtual machines (the CLDC and the CDC) are to be open-source within the next 18 months. “Significant pieces” will be opened this year, and the rest in 2007. - An “OSI-approved license” will be used on Java. A manager told me it would be very unlikely that Sun will create a new license; one of the existing known OSI-approved licenses will probably be used. - Current practices for certifying and trademarking Java will probably not change, because the community is happy with them. - Open-sourcing will benefit the new wave of complex Java-based systems emerging in the market place, notably mobile and other embedded systems. I asked the head of the GlassFish project whether they anticipate a flood of contributions and suggestions for open-source Java; he said that on his project the stream of input was “incremental” and easy to handle. Let me explain why I am sympathetic to Sun’s pacing and cautious approach. O’Reilly Media itself used to have a software division, making Web software. When profits weren’t as good as we expected, managers closed down the division. They considered open-sourcing the software, but decided it took too much trouble. Not only the legal changes, but the cultural changes, presented enormous hurdles. So I know that a shift like that made by Sun is hard to do and must be done carefully–they have only one chance to get it right. News will be posted online as the project continues. Motorola promotes mobile Linux while Nokia releases deviceMotorola has been exceedingly busy in the Linux and free software space over the past six months. They pulled together a wide range of vendors and service providers in the mobile space (the likes of Docomo) to cooperate on providing open-source software for mobile devices. This differs from the OSDL Mobile Linux Forum, which focuses on developing standards more than code. Motorola works hard to make their partners play the game right and release Linux code under the GPL. And putting their money where their mouths are, Motorola has released a lot of Java code (as well as their own Linux changes) on a developer website. I talked to Mark VandenBrink, lead architect of mobile devices software, about the Motorola view of the mobile device market and Linux’s role. He thinks that the adoption of WiMAX (an area where Motorola is collaborating with Sprint) will provide a fat pipe for content on mobile devices, putting pressure on operating systems to support large amounts of streaming data. Linux has to become ready for this. Linux’s support for IPv6 may also prove valuable. Nokia lent copies of its 770 tablet to journalists for use during the show. This PDA (based on Linux) is pretty chock full of devices, including a memory chip, a rather tinny speaker and earphone jack, Bluetooth and 802.11 wireless support, and a USB connection. I mastered the menus and interface fairly quickly and tried out various input systems. I didn’t get to the handwriting interface, but I was frustrated by the finger keyboard (typing interface), which didn’t always recognize my finger presses. The stylus input worked well. Unfortunately, the device interacted strangely with the wireless hubs provided by the Expo and the W Hotel, leading to the hubs stealing the web page regularly or closing connections unexpectedly. Black Duck and Palamida keep coders cleanWhile cases are usually settled quietly, there’s widespread understanding in the free software community that proprietary companies are violating free software licenses by incorporating code from free software into proprietary products. Sometimes the companies’ management are innocent victims of software developers who steal the code to cut corners. And many companies worry about what they’re buying when they license code or acquire a code base during a merger or acquisition. Two companies, Black Duck and Palamida, provide tools for companies who want to remain honest and avoid legal vulnerabilities. Their premise is simple: keep a base of software signatures or fingerprints which the company’s code can be matched, like virus checking software. But the filched code is a needle in a haystack, so techniques must get sophisticated. Black Duck has been around much longer. They boast a scalable system that is being used by companies with a global reach, such as Siemans. They recently announced the growth of their knowledge base to a staggering 71 gigabytes, driven partly by 100 vendors who recently added their code, and the incorporation of examples from Dr. Dobbs Journal and the C/C++ Users Journal. Black Duck CEO Douglas Levin says they have also been working intensively with the Free Software Foundation on its GPL version 3.0, and he is very impressed with the emerging license. He says FSF leadership has listened effectively to the many inputs. Their constituency, particularly the businesses he’s in contact with, have a positive inclination to the license. Palamida offered its first produce, IP Amplifier, in 2005. Like the Black Duck system, it is intended to find code that violates open source licenses by running the code through a set of matches. In addition to strings of source code, IP Amplifier checks Java namespaces, MD5 checksums on files of binary code, and copyright notices. Palamida has just announced a new product, IP Authorizer, that lets developers submit free software excerpts they’ve found for approval, and tracks the workflow as the managers and legal staff determine whether it’s safe to include the code in the company’s product. Palamida also offers services along the same lines as its products. Nat Friedman declares Linux desktop ready (for many uses)Finally, I got to see a cute demo of the SUSE Linux Enterprise Desktop from Novell VP Nat Friedman. Nat has been coding for, and arguing for, the Linux desktop for years. Now he says, “it’s ready.” To be precise, the Linux desktop is ready for the large class of what Novell calls “basic office workers.” Unlike home users, who want a wide range of games and multimedia tools, basic office workers run just four or five common applications. They make up the majority of people working in companies, and their needs can be satisfied by free software such as Evolution, Firefox, and OpenOffice.org. Furthermore, their desktop systems can be configured to fit smoothly in with a Windows network so that a company can make a gradual transition to free software.
http://www.oreillynet.com/onlamp/blog/2006/08/linuxworld_2006_opens_as_compa.html
crawl-002
refinedweb
1,460
58.82
django-viewtracker 1.2 Object view tracker for Django Copyright 2010 - 2013 Caramel (formerly uAnywhere). Released under the 3-clause BSD license. django-viewtracker is a simple Django application to allow you to do view tracking on objects. The primary purpose of the module is so that a user can determine what objects they have already seen. It contains some additional “smarts” for dealing with updates as well - so if you haven’t read it since an update, then you haven’t read it at all. The module is generic, and can be applied to any model you like. Unlike some other view tracking modules for Django, this one stores it’s data in a Model (instead of the session), and thus only works for users who are registered. Unregistered users won’t have any view tracking at all, and all objects will appear as read for them. The advantage of doing this is that view tracking works across all devices/browsers that someone uses. Installation Install the module with the normal sudo ./setup.py install. Add viewtracker to your INSTALLED_APPS in the settings.py of your application. You’ll need to ./manage.py syncdb so that viewtracker’s modules can be installed. Now you’re ready to use it. Design The system has three levels of view tracking. It will check each of these in order to determine if an object has been viewed. The first level is “all view” tracking. Your application may allow a user to mark all objects as read. This is checked first against the object’s modification time. When you mark everything as viewed, all other records of views for the user are deleted to save space. The second level is “model view” tracking. Your application may allow a user to mark all objects of a particular type as read. This the second thing to be checked against the object’s modification time. When you mark all of a model as viewed, all other records of individual item views are deleted to save space. The third level is “instance view” tracking. A user views an object, you would use this to mark it as having being viewed. This means to determine if an object has been viewed, it goes through three levels of checks, each increasing in complexity. The viewtracker tables may become quite large. Using First, track views of your object in your view (normally object detail view): from viewtracker.models import ViewTracker from .models import Car class CarDetailView(View): def get(self, request, object_id): # Get the instance of the object to view # (Though in reality you'd probably use the Django Generic CBVs instead # of this.) car = get_object_or_404(Car, id=object_id) # Create an instance of the tracker for this user # If they are anonymous, then everything is viewed. tracking = ViewTracker(request.user) # Mark the instance viewed by this user tracking.mark_instance_viewed(car) # Render the template to display the object. return render(request, 'my_app/car_detail.html', dict(car=car, user=user)) You can then look up if an object has some changed data. You could show this in a list: from django_globals import globals as django_globals class Car(models.Model): # ... other fields .. # Use the name "last_updated" as the app will automatically to use this # field name last_updated = models.DateTimeField(auto_now=True) def has_viewed(self): tracker = ViewTracker(django_globals.request.user) # Here we've manually supplied the updated field name. return tracking.has_viewed(self, updated_field='last_updated') Then in the template, you would have something like: {% for car in car_list %} {# ... #} <td> <a href="{{ car.get_absolute_url }}">Listing #{{ car.id }}</a> {% if not car.has_viewed %} <img src="{{ STATIC_URL }}img/new.png" alt="New!" /> {% endif %} </td> {# ... #} {% endfor %} There are also some built-in views, which you can use for marking all objects read, or all instances of a model as read: url( r'^cars/mark_all_read/$', login_required()(viewtracker.views.mark_model_as_viewed), dict(model=Car), name='mark_all_cars_as_viewed' ), Then call it in your template with something like: <form method="post" action="{% url 'mark_all_cars_as_viewed' %}"> {% csrf_token %} <input type="submit" value="Mark all cars as viewed" /> </form> Note that by default all objects will be “unviewed”, so when you first roll the application out, you may wish to set everyone as having viewed all objects up to a particular point in time. - Author: Caramel - License: BSD - Categories - Package Index Owner: micolous - DOAP record: django-viewtracker-1.2.xml
https://pypi.python.org/pypi/django-viewtracker/1.2
CC-MAIN-2016-30
refinedweb
722
57.98
SBECK has asked for the wisdom of the Perl Monks concerning the following question: In the past year, I was introduced to two new tools I hadn't previously been aware of (Devel::Cover and Travis CI) that I am now using for my modules, and I was just wondering what other tools might be out there that I could benefit from. What I'm looking for are tools that will improve the overall quality of my modules in terms of usability, readability, completeness, or whatever other metric. I looked around the monestary and didn't find such a list... after some feedback, I'd be hapy to add it as a tutorial. Tools that I use now are listed below. I know that many of these are pretty obvious, but perhaps for someone just starting out, they should be included. What am I missing? Update: I'm going to add the suggestions to the list as they come in, so I don't necessarily use all of them... and of course, not every tool will fit everyone's needs and/or wants, but they are a great place to start looking. The tool I wish I had the most, but don't (to my knowledge) would be a place where I could log in to and select the OS, version of perl, and version of any prerequisite modules in order to debug a test from the cpantesters site. If this exists and I don't know about it, please fill me in! Nice list. Here's another one you might add... Perl::Critic I too have frequently desired a place where I can throw a new module at to see test results, instead of uploading a new version to CPAN. I recently started Release::Checklist. It is far from complete. Use README.mdChecklist.md to see the current state. All feedback welcome. Super! I've had ideas along this path but never acted on them. Keep it going! Hello SBECK, Since you are looking to compile a comprehensive list, I think reference should be made to Task::Kensho, in particular Task::Kensho::ModuleDev and Task::Kensho::Testing. Hope that helps, One that I use a lot (and encourage others to use) is Perl::Critic. For those for aren't aware, it is a static source code analyzer. It critiques your code against best practices and recommendations from both the Perl community and Damien Conway's excellent book Perl Best Practices. <rant>A common criticism of Perl::Critic I've heard before is that some people disagree with this or that default policy. So for those folks I recommend Perl::Critic::Lax, which has policies that get Perl::Critic to loosen its tie a bit . There are also 167 modules in the Perl::Critic namespace, many of which are collections of policies and 65 in the Perl::Critic::Policy sub-namespace itself. Chances are that there's a policy in there that might scratch your itch. Failing that they can always RTFM and learn to make their own policies.</rant> I have found static source code analysis to be a great tool when beginning work on a very large codebase. It helps point out things that could very well be long-standing bugs of which the team working on the code may not even be aware. It also helps me zero in on areas of the code that may have only been put through perfunctory testing that may be in need of extra attention. I highly recommend trying it out if you've never used it. If you are using Test::Perl::Critic, please be sure to make the tests only run if some environment variable such as RELEASE_TESTING is set. There are a couple of reasons for this. First, Perl::Critic takes time, and what it tests is not likely to actually change from the time you test your release to the time it gets on a user's system. So there's no good reason to tie up user install time testing what cannot have changed since you built the distribution. Second, it is possible that others have a global Perl::Critic config file set that alter what Perl::Critic looks for. You could discover your tests are suddenly failing on those user's systems, not because the code has changed, but because the test's behavior has changed. Conversely, if you have your own .perlcriticrc, and if it doesn't ship with the distribution, then what you are testing will again be different from what the tests do on a typical user's system. For these reasons it's wise to not cause a test suite failure based on Test::Perl::Critic running on user's systems. The best approach is to only run it when you are preparing a release. Dave This is good advice. It's why I run it on the side outside of the test suite. What I'm looking for are tools that will improve the overall quality of my modules in terms of usability, readability, completeness... Some nodes I've written over the years related to code and module quality: Sorry,.
https://www.perlmonks.org/?node_id=1138589
CC-MAIN-2020-05
refinedweb
860
70.63
There are some built-in higher-order functions in python, such as map(), filter(), reduce(); they are called higher-order functions because one of the parameters accepted by such functions is a function object. map() function syntax: map(func,seq1[,seq2,……]) The first parameter received by the map function is a function object, followed by one or more sequences; the map function will apply func to each value in the following sequence and return an iterator. example: def func(a): return a**2 >>>map(func,[1,2,3]) <map object at 0x000002B127AEA700>#Return a map object, which is an iterator; >>>list(map(func1,[1,2,3])) [1, 4, 9] #Pass 1,2,3 as parameters into func one by one, get 1,4,9 respectively; finally convert the result to list You can also pass in multiple sequences, one sequence corresponds to the parameter of a function; the sequence length can be inconsistent, the default is to take the sequence value of the common length; example: def func(a,b): return a+b >>>b=list(map(func,[10,20,30],[1,3,10]))#The values of the corresponding positions of the two sequences are applied to func as a and b respectively; >>>print(b) [11, 23, 40]# >>>list(map(func,[10,20,30,40],[1,3,10]))#The length is inconsistent, and a sequence of the same length is taken by default [11, 23, 40] >>>list(map(func,[1,2,3],[10,20,30,40],[1,3,10]))#The number of sequences must be the same as the number of parameters of func TypeError: func() takes 2 positional arguments but 3 were given >>>b=list(map(func,[1,2,3])) TypeError: func() missing 1 required positional argument:'b' As you can see, the function implemented by the map() function is very similar to the for loop and the list comprehension, so what is the efficiency of the map, for loop and the list comprehension? import time start=time.time() def func(a,b): return a+b c=list(map(func,range(1000000),range(1000000))) end=time.time() >>>end-start 0.16860485076904297 import time start=time.time() c=list() for i in range(1000000): c.append(i+i) end=time.time() >>>end-start 0.2443540096282959 import time a=list(range(1000000)) b=list(range(1000000)) start=time.time() c=[a[i]+b[i] for i in range(1000000)] end=time.time() print(end-start) 0.2124321460723877 From the above comparison, we can see that map has the highest efficiency and for loop has the lowest efficiency; Based on this type of function and efficiency of map, when the project involves more loops, you can consider whether you can replace the for loop with Map. On the one hand, the code is more brief and pythonic, and on the other hand, the time efficiency of the code will be higher.
https://stdworkflow.com/146/what-is-the-advantage-of-the-map-function-in-python
CC-MAIN-2022-40
refinedweb
478
53.34
In this tutorial, we will measure different temperature and humidity data using temperature and humidity sensor. You will also learn how to send this data to Ubidots. So that you can analyze it from anywhere for different application. Also by sending this data to google sheets, predictive analysis can be achieved. Teacher Notes Teachers! Did you use this instructable in your classroom? Add a Teacher Note to share how you incorporated it into your lesson. Step 1: Hardware and Software Required Hardware Required: - NCD ESP32 IoT WiFi BLE Module with Integrated USB - NCD IoT Long Range Wireless Temperature and Humidity Sensor Software Required: Library Used: - PubSubClient Library - Wire.h Step 2: Uploading the Code to ESP32 Using Arduino IDE: - Before uploading the code you can view the working of this sensor at a given link. - Download and include the PubSubClient Library and Wire.h Library. #include <WiFi.h> #include <PubSubClient.h> #include <Wire.h> #include <HardwareSerial.h> - You must assign your unique Ubidots TOKEN, MQTTCLIENTNAME, SSID (WiFi Name) and Password of the available network. #define WIFI SSID "XYZ" // Put your WifiSSID here #define PASSWORD "XYZ" // Put your wifi password here #define TOKEN "XYZ" // Put your Ubidots' TOKEN #define MQTT_CLIENT_NAME "XYZ" // MQTT client Name - Define variable and device name on which the data will send to Ubidots. #define VARIABLE_LABEL "Temperature" // Assing the variable label #define VARIABLE_LABEL2 "Battery" #define VARIABLE_LABEL3 "Humidity" #define DEVICE_LABEL "esp32" // Assig the device label - Space to store values to send: char payload[100]; char topic[150]; char topic2[150]; char topic3[150];// Space to store values to send char str_Temp[10]; char str_sensorbat[10]; char str_humidity[10]; - Code to publish data to Ubidots: sprintf(topic, "%s", ""); // Cleans the topic content sprintf(topic, "%s%s", "/v1.6/devices/", DEVICE_LABEL); sprintf(payload, "%s", ""); // Cleans the payload content sprintf(payload, "{\"%s\":", VARIABLE_LABEL); // Adds the variable label sprintf(payload, "%s {\"value\": %s", payload, str_Temp); // Adds the value sprintf(payload, "%s } }", payload); // Closes the dictionary brackets client.publish(topic, payload); - Compile and upload the temp_humidity. Step 3: Serial Monitor Output. Step 4: Making the Ubidot Work: - Create the account on Ubidots. - Go to my profile and note down the token key which is a unique key for every account and paste it to your ESP32 code before uploading. - Add a new device to your Ubidots dashboard name esp32. - Click on devices and select devices in Ubidots. - Now you should see the published data in your Ubidots account, inside the device called "ESP32". - Inside the device create a new variable name sensor in which your temperature reading will be shown. - Now you are able to view the Temperature and other sensors data which was previously viewed in the serial monitor. This happened because the value of different sensor reading is passed as a string and store in a variable and publish to a variable inside device esp32. Step 5: Export Your Ubidots Data to Google Sheets In this we can extract the data stored in the Ubidots cloud for further analysis. The possibilities are enormous; for instance, you could create an automatic report generator and send it. - Also add the Token Id, device Id taken from your Ubidots account to the following code. - Done! now open your Google Sheet again and you'll see a new menu to trigger the functions. Discussions
https://www.instructables.com/id/TempHumidity-Data-Analysis-Using-Ubidots-and-Googl/
CC-MAIN-2019-39
refinedweb
548
54.32
set<string> generate_all_substrings(string s) { set<string> substrings; string substring = ""; string::iterator iter; for(iter = s.begin(); iter != s.end(); ++iter, substring = "") { for(string::iterator iter_2 = iter; iter_2 != s.end(); ++iter_2) { substring += *iter_2; substrings.insert(substring); } substrings.insert(substring); } return substrings; } Generating Substrings 31 Replies - 6975 Views - Last Post: 14 July 2013 - 12:09 AM #1 Generating Substrings Posted 03 July 2013 - 07:23 PM Replies To: Generating Substrings #2 Re: Generating Substrings Posted 03 July 2013 - 08:04 PM By building up the string one character at at time in your inner loop, your are incurring the cost of reallocating a string each character added if it is a naive implementation of the string class. This post has been edited by Skydiver: 03 July 2013 - 08:05 PM #3 Re: Generating Substrings Posted 04 July 2013 - 04:47 PM 'just to work as desired', may result in NOT optimal code, (at first) ... You may like to add some permutations to the original string to get all possible subsets (if order is considered)... #include <iostream> #include <iomanip> #include <string> #include <set> #include <algorithm> // re. next_permutation( beginPtr, endPtr ); // main // // using a C++ string // string s = "abcd"; set< string > substrings; generate_all_substrings( substrings, s ); while( next_permutation( s.begin(), s.end() ) ) { generate_all_substrings( substrings, s ); } print_set( substrings ); substrings.clear(); cout << endl; This post has been edited by David W: 04 July 2013 - 04:50 PM #4 Re: Generating Substrings Posted 04 July 2013 - 10:55 PM Skydiver, on 03 July 2013 - 08:04 PM, said: By building up the string one character at at time in your inner loop, your are incurring the cost of reallocating a string each character added if it is a naive implementation of the string class. Would you kind enough to give your interpretation of how string::substring() is implemented? Wouldn't the substring have to do the same thing I am doing in my for loop? Does it use a more efficient algorithm for getting the substring? #5 Re: Generating Substrings Posted 04 July 2013 - 11:02 PM David W, on 04 July 2013 - 04:47 PM, said: (if order is considered)... Yeah, it is nice if you can get it to work at all. But it does not mean it is optimal. I only want to generate the substrings as they appear, not considering the different orders of the string, however. Does this work when not considering permutations? #6 Re: Generating Substrings Posted 05 July 2013 - 05:15 AM salazar, on 05 July 2013 - 01:55 AM, said: Wouldn't the substring have to do the same thing I am doing in my for loop? Does it use a more efficient algorithm for getting the substring? If were to implement substr(), the core of my code would allocate the space needed and then copy the characters across instead of allocate one character, copy, allocate one character, copy, etc. I've not looked at the GCC implementation of substr(), but here's the Microsoft implementation which is based on the Dinkumware implementation: _Myt substr(size_type _Off = 0, size_type _Count = npos) const { // return [_Off, _Off + _Count) as new string return (_Myt(*this, _Off, _Count, get_allocator())); } basic_string(const _Myt& _Right, size_type _Roff, size_type _Count, const _Alloc& _Al) : _Mybase(_Al) { // construct from _Right [_Roff, _Roff + _Count) with allocator _Tidy(); assign(_Right, _Roff, _Count); } _Myt& assign(const _Myt& _Right, size_type _Roff, size_type _Count) { // assign _Right [_Roff, _Roff + _Count) if (_Right.size() < _Roff) _Xran(); // _Roff off end size_type _Num = _Right.size() - _Roff; if (_Count < _Num) _Num = _Count; // trim _Num to size if (this == &_Right) erase((size_type)(_Roff + _Num)), erase(0, _Roff); // substring else if (_Grow(_Num)) { // make room and assign new stuff _Traits::copy(this->_Myptr(), _Right._Myptr() + _Roff, _Num); _Eos(_Num); } return (*this); } static _Elem *__CLRCALL_OR_CDECL copy(_Elem *_First1, const _Elem *_First2, size_t _Count) { // copy [_First2, _First2 + _Count) to [_First1, ...) return (_Count == 0 ? _First1 : (_Elem *)_CSTD memcpy(_First1, _First2, _Count)); } _Grow() is what does the memory allocation, and as you can see, copy just copies all the data in one go. There are a lot of smart people who have contributed to GCC's standard library. I would be very surprised if the implementation is not optimized as well. #7 Re: Generating Substrings Posted 05 July 2013 - 01:35 PM Quote I didn't realize that I was allocating every time. Wow, that's cool. Took me a little bit of time to understand, but I think I get the main flow of the code. Okay, thanks for this info, I'll try this and so how it goes. This post has been edited by salazar: 05 July 2013 - 01:36 PM #8 Re: Generating Substrings Posted 06 July 2013 - 01:34 PM #9 Re: Generating Substrings Posted 06 July 2013 - 01:56 PM Quote Not efficient enough for what? Post your current code and describe exactly what your trying to accomplish. And if you have a problem, clearly state the problem. Jim #10 Re: Generating Substrings Posted 06 July 2013 - 02:06 PM #11 Re: Generating Substrings Posted 06 July 2013 - 06:35 PM You could pass the set as a reference parameter. void generate_all_substrings(set<string> &substr, string str) Your function could just generate one sub-string at a time, which might be more efficient in certain situations. #12 Re: Generating Substrings Posted 08 July 2013 - 11:36 PM #include <algorithm> #include <iostream> #include <set> using namespace std; set<string> generate_all_substrings(string s) { set<string> substrings; string substring = ""; size_t size = s.size(); for(int start = 0; start < size; ++start, substring = "") { for(int count = 1; count <= size; ++count) { substring = s.substr(start, count); substrings.insert(substring); } } return substrings; } #13 Re: Generating Substrings Posted 09 July 2013 - 12:12 AM #14 Re: Generating Substrings Posted 09 July 2013 - 12:30 AM #15 Re: Generating Substrings Posted 09 July 2013 - 12:36 AM Also I would recommend that you create the set<> instance in your calling function and pass that instance by reference into your function. void generate_all_substrings(set<string>& substrings, string& s) Passing the parameters by reference will avoid the copying that is being done in your current function. Jim This post has been edited by jimblumberg: 09 July 2013 - 12:36 AM
http://www.dreamincode.net/forums/topic/324190-generating-substrings/
CC-MAIN-2017-04
refinedweb
1,030
58.11
Pattern Matching in the Java Object Model This document describes a possible approach for how pattern matching might be integrated more deeply into the language. (this document) — Explores how patterns fit into the Java object model, how they fill a hole we may not have realized existed, and how they might affect API design going forward. Why pattern matching? The related documents offered numerous examples of how pattern matching makes common code constructs simpler and less error-prone. These may be enough reason on their own to want to add pattern matching to Java, but we believe there is also a deeper reason to go there: that it fills in a hole in the object model, and one we might not have even realized we were missing. Recap — what is pattern matching? Readers who are unfamiliar with pattern matching should first read the above-referenced documents, but to summarize: pattern matching fuses three operations that are commonly done together: an applicability test, zero or more conditional extractions if the test succeeds, and binding the extracted results to fresh variables. A pattern match like: if (p instanceof Point(int x, int y)) { ... } expresses all three of these things in one go — it asks if the target is a Point; if it is, casts it to Point and extracts its x and y coordinates; and binds these to the variables x and y. Aggregation and destructuring Object-oriented languages provide many facilities for aggregation; the Gang of Four book defines several patterns for object creation (e.g., Factory, Singleton, Builder, Prototype). Object creation is so important that languages frequently include specific features corresponding to these patterns. Aggregation allows us to abstract data from the specific to the general, but recovering the specifics when we need them is often difficult and ad-hoc. For example, it is common to provide factory methods for particular specific configurations of aggregates, and these factories typically return an abstract type, such as: static Shape redBall(int radius) { ... } static Shape redCube(int edge) { ... } At the invocation site, it is obvious what kind of shape is being created, but once we start passing it around as a Shape, it can be harder to recover its properties. Sometimes we reflect some of the properties in the type system (such as concrete subtypes of Shape for Ball and Cube), but this is not always practical (especially when state is mutable). Other times, APIs expose accessors for these properties (such as a shapeKind() method on Shape), but attempting to distill a least-common-denominator set of properties on an abstract type is often a messy modeling exercise. (For example, should a size() method correspond to the edge size for a cube and the radius of a ball, or should it attempt to normalize sizes somehow? Or, should we have partial methods like ballRadius() which fail when you apply them to a cube?) In this approach, clients must be aware of how factories map their parameters to abstract properties, which is complex and error-prone. Destructuring provides this missing ability to recover the specific from the abstract more directly. Destructuring is the dual of aggregation. Java provides direct linguistic support for aggregation, in the form of constructors that take a description of an object’s initial state and aggregates it into an instance, but does not directly provided the reverse. Instead, we leave that problem to APIs, which may expose accessors for individual state components, and those accessors may or may not map to the arguments presented to constructors or factories. But this is a poor approximation for all but the simplest classes, because the code for aggregation and for destructuring often operate at different levels of granularity, and look structurally different — and these differences provide places for bugs to hide. Whatever tools the language offers us for aggregation (e.g., constructors, factories, builders), it should also offer us complementary destructuring, ideally that is syntactically similar in declaration and use, and operate at the same level of abstraction. For example, if our Shape library provided factories like the above, it could provide destructuring patterns that are similar in name and structure: switch (shape) { case Shape.redBall(var radius): ... case Shape.redCube(var edge): ... } Enabling API designers to provide complementary destructuring operations for their aggregation operations enables more reversible APIs while still allowing the API to mediate access to encapsulated state. Of course, there may be some aggregates that don’t want to give up their state, and that’s fine. The goal is not to force transparency on all classes, but instead to give API designers the tools with which to expose destructuring operations that are structurally similar to its aggregation operations, should they so desire. Many, but not all, APIs would benefit from this. Object creation in Java The most common idioms for creating objects in Java are constructors, factories, and builders. Constructors are a language feature; factories and builders are design patterns implemented by libraries. A specific API typically prefers one of these idioms; either it exposes constructors, or hides the constructors and provides factories, or hides the constructors and provides builders. A sufficiently rich pattern matching facility would allow API designers to provide complementary destructuring facilities for each of these idioms, which leads to more readable and less error-prone client code. If an object is created with a constructor: Object x = new Foo(a, b); ideally, it should be destructurable via a “deconstruction pattern”: case Foo(var a, var b): ... The syntactic similarity between the construction and deconstruction is not accidental; appealing to a constructor-like syntax allows the user to see this as asking “if x could have come into existence by invoking the Foo(a, b) constructor, what values of a and b would have been provided?” Similarly, if an object is created with a static factory: Object x = Foo.of(a, b); ideally, it should be destructurable via a “static pattern”: case Foo.of(var a, var b): ... As a more concrete example, we construct Optional instances with: o = Optional.of(x); o = Optional.empty(); so we should be able to discriminate between Optional instances via static patterns: case Optional.empty(): ... case Optional.of(var x): ... (with bonus points if we can make the combination of these two patterns total on non-null instances of Optional.) Composition Another aspect in which we would like destructuring to mirror aggregation is in composition. Suppose, in our Shape example, we put a unit-sized red ball in an Optional: Optional<Shape> os = Optional.of(Shape.redBall(1)); The creational idioms we use — here, static factories — compose, allowing us to express this compound creation clearly and concisely. To determine if an Optional<Shape> could have derived from the above action, using the sorts of APIs we have today, we would have to do something like: Shape s = os.orElse(null); boolean isRedUnitBall = s != null && s.isBall() && (s.color() == RED) && s.size() == 1; if (isRedUnitBall) { ... } The code to take apart the Optional<Shape> looks nothing like the code to create it, and is significantly more complicated — which means it is harder to read and harder to get right. (As evidence of this, the first draft of this example had a subtle mistake, which wasn’t caught until review!) And, as we compose more deeply, these differences can multiply. On the other hand, if Optional and Shape provided complementary destructuring APIs to their creation APIs, we could compose these just as we did with aggregation: if (os instanceof Optional.of(Shape.redBall(var size)) && size == 1) { ... } This will match an Optional that contains a red ball of unit radius, regardless of how it was created. First-class destructuring leads to composable APIs. Method invocations compose from the inside out; pattern matching, which works like a method invocation in reverse, composes from the outside in. A nested pattern if (os instanceof Optional.of(Shape.redBall(var size)) { ... } simply means if (os instanceof Optional.of(var alpha) && alpha instanceof Shape.redBall(var size)) { ... } where alpha is a synthetic variable. We apply the outer pattern, and, if it matches, we apply the inner patterns to the resulting bindings. We might still like to get rid of the boolean expression size == 1; this can be handled as a nested constant pattern, should we decide to support them. This might look like (illustrative syntax only): if (os instanceof Optional.of(Shape.ofRedBall(== 1))) { ... } where == c is a constant pattern that matches the constant c. Isn’t this just multiple return? It may appear at first that patterns — which can “return” multiple values to their “callers” — are really just methods with multiple return values, and that if we had multiple return (or tuples, or “out” parameters), we wouldn’t need patterns. But bundling return values is only half of the story. What’s going on here is more subtle; the production of the multiple “return values” is conditional on some other calculation, and this conditional relationship — that the bindings are only valid if the match succeeds — is understood by the language (and can be incorporated into control flow analysis through definite assignment.) The conditionality of pattern bindings enables a more sophisticated scoping of pattern variables, which in turn enables patterns to compose more cleanly than simple multi-return method calls. The “unit red ball” example above illustrates what happens when the language can’t reason for us about under what conditions a variable is assigned; we would have to make up the difference with explicit control flow logic (e.g., if), that grows more complex and error-prone the more deeply we try to combine conditions. Patterns as API points One does not need to look very far to see the amount of work we expend working around the lack of a first-class destructuring mechanism; we’re so used to doing it that we don’t even notice. Consider the following two methods from java.lang.Class: boolean isArray() { ... } Class<?> getComponentType() { ... } The latter method is partial, and should only be called when the former method returns true; this means that the author has to specify the precondition, specify what happens if the precondition is not met, check for the precondition in the implementation, and take the corresponding failure action (return null, throw an exception, etc.) Similarly, the client has to make two calls to get the desired quantity, and therefore has more chance to get it wrong (to say nothing of race conditions). This is the worst of all worlds — more work for the library author, more work for the client, and more risk of subtle bugs. These two methods really should be one operation, which simplifies life for both the library and client: if (c instanceof Class.arrayClass(var componentType)) { ... } Now the library need expose only one API point, the client can’t misuse it, and the act-before-check bug is impossible. First-class destructuring leads to APIs that are harder to misuse. Data-driven polymorphism Java offers several mechanisms for polymorphism; parametric polymorphism (generics), where we can share code across a family of instantiations that vary only by type, and inclusion polymorphism (subtyping), where we can share an API across a family of instantiations that differ more broadly. These are effective tools for modeling a lot of things, but sometimes we want to express commonality between entities with some similar property, without needing to reflect it in the type system. Pattern matching allows us to easily express ad-hoc, or data-driven polymorphism. One can easily pattern match over a number of unrelated types or structures if needed. Supporting ad-hoc polymorphism with pattern matching doesn’t mean that inheritance hierarchies and virtual methods are wrong — it’s just that this is not the only useful way to attack a problem. Sometimes, using the hierarchy is the natural way to express what we mean, but sometimes it is not, and sometimes it is not even possible, such as when an endpoint listens for a variety of messages, and not all message types have a common supertype (or even come from the same library.) In these cases, pattern matching offers clean and simple data-driven polymorphism. Patterns as class members In languages with structural aggregate types (e.g., tuples and maps), aggregation and destructuring are simple, well-defined, built-in operations — because these aggregate types are just transparent wrappers for their data. In object-oriented languages like Java, aggregation operations are expressed in imperative code, which can use arbitrary logic to validate the arguments and construct the aggregate from the inputs. For certain well-behaved classes — those whose representation is sufficiently coupled to their API — it may be possible to derive deconstruction logic without explicit imperative code. One category of classes whose state is trivially and transparently coupled to its API are records, and these will acquire destructuring patterns automatically, just as they do with constructors. In other cases, someone is going to have to write some code to describe how to map the representation to the external deconstruction API, just as constructors imperatively map the invocation arguments to the representation. So there is going to be some code somewhere which captures the applicability test and the mapping between target state and the pattern’s binding variables — and the natural place to put that code is in the same class as the constructor or factory. Patterns are executable class members. Anatomy of a pattern In a pattern match like: if (p instanceof Point(int x, int y)) { ... } the pattern has a name ( Point), a target operand ( p), and zero or more binding variables ( x and y). For a deconstruction pattern like this, the applicability test is simple and transparent — is the target an instance of the class Point. But we can have patterns with more sophisticated applicability tests, such as “does this Optional contain a value” or “does this Map contain a mapping for a given key.” Deconstruction patterns A simple form of pattern is the dual of construction. A constructor takes zero or more state elements and produces an aggregate; the reverse starts with an aggregate and produces corresponding state elements. (If the term “destructor” were not already burdened by its resource-release association from C++, we’d likely call it that; instead we’ll describe this form of pattern as a deconstructor or deconstruction pattern.) For a deconstruction pattern, there is an obvious way to describe and reference the target operand — the receiver. And there is an obvious way to declare the arity, type, and order of the binding variables — as a parameter list. This might look something like: class Point { int x, y; public Point(int x, int y) { this.x = x; this.y = y; } public deconstructor(int x, int y) { x = this.x; y = this.y; } } The duality between constructor and deconstructor is manifest in multiple ways: Use site syntax. Invoking a constructor, and invoking a deconstructor through a pattern match, bear a deliberate syntactic similarity: we construct a Pointvia its constructor new Point(x, y), and we deconstruct a Pointwith the deconstruction pattern Point(int x, int y). Invocation shape. The constructor arguments and deconstructor bindings have the same names and types. The argument list for the constructor describes its inputs, and the binding variable list for the deconstructor describes its outputs — and they are the same, describing the external state of the object. Behavior and body. The constructor copies values from the arguments to the object state, and the deconstructor copies values from the object state to the binding variables. And, just as the constructor is free to perform defensive copying on the inputs, the deconstructor may wish to do the same with the outputs (perhaps delegating to a getter, if one is available.) Invocation and inheritance. Constructors are an unusual hybrid of instance and static behavior — they are instance members, but are invoked statically and not inherited. Deconstructors are the same. Method patterns Since patterns are class members, let’s enumerate some of the degrees of freedom that methods have, and ask if they make sense for patterns as well. (The final design may or may not incorporate all of these aspects, but the intent of this section is to show that the tools we use for modeling existing class members extend equally well to patterns.) Does it make sense to have accessibility modifiers on pattern declarations? Yes! For example, we routinely declare protected constructors for use by subclasses in implementing their own constructors. In exactly the same way, protected deconstructors can be used by subclasses in implementing their own deconstructors. And, just as we use private or package-private constructors to restrict who can instantiate an object, we can use private or package-private deconstructors to restrict who can take an instance apart and access its state. Does pattern overloading make sense? Yes! Just as we may want to provide multiple overloads of a constructor, each with different descriptions of an object’s state, we may want to provide corresponding overloads of deconstructors to mirror the state descriptions accepted by the various constructors. Do static patterns make sense? Yes! Just as some APIs expose static factories rather than constructors, it also makes sense to expose static patterns as the dual of these static factories. For example, the factory method Optional.of(t)takes a Tand wraps it in an Optional; a corresponding static pattern would take an Optional<T>and deconstruct it, conditionally, giving up the contained value, and fail to match when the target Optionalis empty. Static methods have another advantage, which is that they can be declared outside of the relevant class — and the same is true of static patterns. Just as a class can expose a static factory for an unrelated class, a class can also expose a static pattern for another class (as long as the relevant state is accessible to the implementation of the pattern). So, if the Optionalclass failed to provide suitable patterns for deconstructing its instances, these could still be provided by an unrelated library. Similarly, APIs that use “sidecar” classes like Collectionsto hold factories, can put patterns there as well. Do generic patterns make sense? Yes! Static factories are often generic methods, such as Optional::of; these generic type parameters can be used to express type constraints on the signature. The same is true for patterns. Do instance patterns make sense? Yes! Just as instance methods allow a supertype to define the API and for subtypes to define the implementation, the same can be done with patterns. Like deconstruction patterns, instance patterns implicitly use the receiver as the target to be matched. A pattern on Mapfor “does the map contain this key” would be an instance pattern, for example — and each Mapimplementation might want to provide its own implementation for this. An API idiom that might make use of instance patterns are the dual of builders. Builders accept a sequence of calls to add content or set properties on the object being built. To deconstruct an object that has been built in this manner, we can define an unbuilder, which could iterate through the structure of the object, and expose various patterns (likely corresponding to the builder methods) for “does the current item have this structure”. This allows the API developer to expose a rich deconstruction API without having to make the structure directly accessible, or copy the data to another format. Do abstract patterns make sense? Yes! An interface such as Mapmay want to expose a pattern which matches a Mapwhich has a certain key, and if so, binding the corresponding value, and leave the implementation to concrete subtypes. Does overriding patterns make sense? Yes! A skeletal implementation such as AbstractMapcould provide a least-common-denominator implementation of a pattern, allowing concrete subtypes to override it. Do varargs patterns make sense? Yes! Consider a method like String::format, which takes a format string and a varargs of Object...arguments to be substituted into the string: String s = String.format("%s is %d years old", name, age); To reverse the encoding from data to string, we might want to expose a complementary pattern, which exposes an Object...of extracted values — which can be refined further by pattern composition: if (s instanceof String.format("%s is %d years old", String name, Integer.valueOf(int age))) { ... } Does it make sense for patterns to delegate to other patterns? Yes! Just as constructors delegate to other constructors via this()or super(), we expect that patterns will also want to delegate to other patterns to bind a subset of their binding variables — and we want it to be easy to do so. It should not be surprising that all the degrees of freedom that make sense for constructors and methods, also make sense for patterns — because they describe complementary operations. Additional degrees of freedom There are also some characteristics that are applicable to patterns but not to constructors or methods, and these influence how we might declare patterns in source code. Targets and applicability. Patterns have a target operand, which is the instance against which the pattern will be matched. They also have a static target type; if the static type of the operand is not cast-convertible to the target type, the pattern cannot match. For a type pattern T tor a deconstruction pattern T(...), the target type is T; for a static pattern such as Optional::of, the target type is Optional. For deconstruction / instance patterns, the target type is the receiver. (For static patterns, the target type must be explicitly specified somehow as part of its declaration, along with some way of denoting a reference to the target.) Totality vs partiality. Some patterns are total on their target type, meaning that all (non-null) instances of that type will match the pattern; type patterns and deconstruction patterns are total in this way. Others are partial, where not only must the target be of a certain type, but it also must satisfy some predicate; Optional.of(T t)is such a pattern (it doesn’t match anything that is not an Optional, but it further requires it to be non-empty.) If totality is a property of a pattern declaration, not just its implementation, then the compiler can reason about when it can guarantee a match (and thus not require some sort of unreachable elselogic.) For partial patterns, there must also be some way for the declaration to indicate a failure to match, whether this be throwing an exception, returning a sentinel, or some other mechanism. (For simplicity, we may decide that deconstruction patterns are always total and other patterns are always partial.) Input and output arguments. The patterns we’ve seen so far have a target and zero or more binding variables, which can be thought of as outputs. Some patterns (such as “the target is a Mapcontaining a mapping whose key is X”) may also need one or more inputs. This puts some stress on the syntactic expression of both the declaration and the use of a pattern, as it should be obvious on both sides which arguments are inputs and which are outputs. Exhaustiveness. In some cases, groups of patterns (such as Optional::ofand Optional::empty) are total in the aggregate on a given type. It would be good if we could reflect this in the declaration, so that the compiler could see that the following switch is exhaustive: Optional<Foo> o = ...; switch (o) { case Optional.empty(): ... case Optional.of(var foo): ... // Ideally, no default would be needed } Combining patterns We’ve already seen one way to combine patterns — nesting. A nested pattern: if (x instanceof Optional.of(Point var x, var y)) { ... } is equivalent to: if (x instanceof Optional.of(var p) && p instanceof Point(var x, var y)) { ... } (Note that it is possible to express this decomposition using if and instanceof, but not currently when the pattern is used in switch; there are a number of possible ways to address this, which will be taken up in a separate document.) Another possibly useful way to combine patterns is by AND and OR operators. Suppose we have a pattern for testing whether a map has a given key (the use-site syntax shown here is solely for purposes of exposition): if (m instanceof Map.withMapping("name", var name)) { ... } If we want to extract multiple mappings at once, we could of course use && (if we’re in the context of an if): if (m instanceof Map.withMapping("name", var name) && m instanceof Map.withMapping("address", var address)) { ... } But, we can more directly express this as an AND pattern (combination syntax chosen only for purposes of exposition): if (m instanceof (Map.withMapping("name", var name) __AND Map.withMapping("address", var address))) { ... } This is somewhat more direct, eliminating the repetition of asking instanceof twice, but more important, allows us to use compound patterns in other pattern-aware constructs as well. One of the nice properties of combining patterns (whether through nesting or through algebraic combinators) is that the compound pattern is all-or-nothing; either the whole thing matches and all the bindings are defined, or none of them are. A possible approach for parsing APIs The techniques outlined so far pave the way for vastly improving APIs for decomposing aggregates like JSON documents. Taking an example from the JSONP API, the following builder code: JsonObject value = factory.createObjectBuilder() .add("firstName", "John") .add("lastName", "Smith") .add("age", 25) .add("address", factory.createObjectBuilder() .add("streetAddress", "21 2nd Street") .add("city", "New York") .add("state", "NY") .add("postalCode", "10021")) .build(); creates the following JSON document: { "firstName": "John", "lastName": "Smith", "age": 25, "address" : { "streetAddress": "21 2nd Street", "city": "New York", "state": "NY", "postalCode": "10021" } } The builder API allowed us to construct the document cleanly, but if we wanted to parse the result, the code is far larger, uglier, and more error-prone — in part because we have to constantly stop and ask questions like “was there a key called address?” “Did it map to an object?” We want to express the shape of document we expect, and then have it either match, or not — all-or-nothing. One possible way to get there is by composing patterns. To start with, let’s posit that we add some patterns to the API for intKey, stringKey, objectKey, etc, which mean "does the current object have a key that maps to this kind of value (similar to our Map.withMapping pattern.) Now, we could parse the above document with something like: switch (doc) { case stringKey("firstName", var first) __AND stringKey("lastName", var last) __AND intKey("age", var age) __AND objectKey("address", stringKey("city", var city) __AND stringKey("state", var state) __AND ...): ... } This code looks more like the document we are trying to parse, and also has the advantage that either the whole expression matches and all the bindings are defined, or none of them are. Down the road: structured patterns? For each of the idioms for aggregation, we have seen that we can construct a pattern which serves as its dual. If, in some future version of Java, we added collection literals, this would be a new idiom for aggregation. And, as with the other forms, there is an obvious corresponding dual for destructuring. Suppose, for example, we could construct a Map as follows (again, syntax purely for sake of exposition): Map<String, String> m = { "name" : name, "age" : age }; Then, we could similarly expose a map pattern for deconstructing it: if (m instanceof { "name": var name, "age": var age }) { ... } And, just as with other patterns, these compose via nesting: if (doc instanceof { "firstName": var first, "lastName", var last, "age", Integer.valueOf(var age), "address" : { "city": var city, "state": var state } }) { ... } Flatter APIs One possible consequence of having patterns in our API toolbox is that APIs may become “flatter”. In the Shape example above, its conceivable that one might expose an API that has lots of public static factories that correspond to private implementation classes, and public static patterns for identifying these instances, but not exposing types for these various configurations at all. This allows APIs to use subtyping primarily as an implementation technique, but expose polymorphism through patterns rather than type hierarchies. For some APIs, this may be a perfectly sensible move.
https://openjdk.java.net/projects/amber/design-notes/patterns/pattern-match-object-model
CC-MAIN-2021-49
refinedweb
4,695
51.07
You can find the original article here: Below is a full transcript of the interview: Lucas Carlson: The idea of having Docker containers as your basic foundation instead of virtual machines in the cloud. It’s a new way of thinking, he’s on the bleeding edge of this new hosting thinking. Borja Burgos: Last 9 months have been actively working at the CEO and co-founder of Tutum. LC: So tell us about Tutum, what is it? and who should use it? BB: So the way we think of Tutum. It’s a single endpoint, it’s an infinite docker host. So for people that are familiar with Docker should relate to this. With docker you have a number of individual hosts. Think of a single infinite endpoint where I can just run containers regardless of what the infrastructure is. That’s what Tutum is. On top of that, a bunch of value add services. Who should use it? Really anybody from a developer to a DevOps. Anybody that’s running backend infrastructure software, is a potential candidate for Tutum and what we’re building. LC: Why not just run your own docker? So docker the whole premise of Docker is that you can set it up very easily. Any linux distribution can run a Docker daemon and you should just be able to set up Docker and run containers anywhere. Why not just set up a Digital Ocean or a CenturyLink Cloud virtual machine, put Docker on it, and deploy your apps that way? Why use a hosted Docker? BB: Right. Docker is great, they’ve been able to put a great interface on some primitives and build a great open source project. But at the end of the day the Docker container is nothing but a building block. An awesome building block, but a building block. Meaning the moment you start trying to do containers at scale, you start running into problems. How do I run containers in two different hosts for redundancy purposes? How do I get visibility into what containers deployed into which hosts? How am I supposed to load balance the traffic that’s coming to the different containers running across multiple hosts, across multiple clouds, right? And these problems are not solved by the basic building block, which is the Docker container. Hence, the reason for having something on top. That layer of orchestration, management, deployment, and that is what Tutum is. LC: Got it. So it runs more than a Docker daemon. It provides a sort of mesh, so you don’t have to worry about which Docker host runs your container, is that right? BB: Right, the whole infrastructure as we know it today is abstracted away, in such a way that your running and managing containers. You don’t have to worry about that T2 instance, or that Digital Ocean droplet, or CenturyLink machine that’s running my infrastructure. Now all that I see is containers. And I can link my applications in containers, that’s the beauty of it. LC: So how is Tutum different than PaaS like Heroku? Or even some of the Docker PaaS like Deis or Flynn? How does Tutum look different than a PaaS? BB: So Tutum tends to sit more on the infrastructure spectrum. If you think of Tutum right in between IaaS and PaaS, the ones you mentioned tend to be more on the PaaS. If you think of Heroku, their basic building block is code. My code needs to satisfy a specific box, and if my code doesn’t fit that box, then I can’t deploy my code to Heroku. So there are restraints in place that prevent me from deploying my code to a PaaS. Also most of these PaaS, the paradigm that they follow is that this platform will run your code/application. Now any services that the application needs to consume need to come in as services. Meaning, I will have to pay or set up a MySQL somewhere that my application will then consume. We see the potential of also containerizing those services, and have that be a part of the infrastructure. So we are focusing more on the infrastructure aspects of it. A barebones container infrastructure with the value add from PaaS and we do see services as potentially containerized in a single solution as opposed to being treated as different citizens within this ecosystem. With my application and all my services. LC: So how is it different than Deis or Flynn? So some of our audience is familiar with the Docker PaaS approach. Does Tutum do more, less, or something different than some of those Docker PaaS solutions? BB: Ultimately, the biggest differentiator is that Tutum is a service. Whereas Deis and Flynn are open source initiatives that you have to manage and what not. Even if there is overlap, that’s going to be a differentiator. I do see Deis and Flynn leaning more towards the developer centric tools moreso than the infrastructure portion of it and devops portion of it. Whereas we see Tutum leaning more towards the left. But I agree that there’s a big overlap in feature set and end-goal. And I would be hard-pressed to think that one would run both Flynn, and Deis, and Tutum or a combination of those. It’s hard to imagine that happening. LC: And just to clarify for the audience, would you ever be able to run Flynn or Deis on Tutum? BB: So that would be a great conversation for me to have with Jeff Lindsay or the guys from Deis. I don’t see why not as far as how they handle the container deployments. And today absolutely not, but moving forward, why not? It may be a possibility. It may be something to explore. LC: So where are Tutum Docker containers actually hosted? BB: Today we are running on top of AWS. And I say today, because from the get-go we built everything with the intention of being infrastructure agnostic. Any services we’re using from AWS could be replaced by another infrastructure or bare metal by a private data cloud, say OpenStack, or something along the lines of VMWare solutions. So it’s more about the layer on top more than the hosting services for Tutum. US East. LC: Are you planning on going to to multiple data centers within Amazon? BB: Absolutely, so as we develop and as Tutum grows, it makes absolute sense to be in as many places as possible for our customer base. LC: And you might not know this yet, would you be able to pick which data center you’d be in? Or is that going to be abstracted away? BB: From a user perspective? LC: yeah BB: Right, so there are use-cases where one needs data for privacy or legal concerns to not leave a specific geographical region, in which case you have to choose. As we’re able to satisfy more and more complex use-cases, one should be able to choose where your applications, code, and data are stored and running. LC: How much does it cost to get started with Tutum? BB: So our current options, our smallest container is a $4/mo container. Unlike Amazon, we charge by the minute. And I believe we go all the way up to $64/mo, which has 4GB of ram. We’re still in beta, this is our pricing model for today. Everybody gets $4 of free credit when they sign up. So you can run a container up to a month for free. LC: So what hardware do you get for $4? That’s less than Digital Ocean. So what’s the underlying server? What is it running on? BB: So the way Tutum works today, and again we’re looking at how does Tutum grow from here on. Today we are running containers in a multi-tenant fashion. Meaning, there are hosts that are running multiple containers from different customers. Ultimately a fraction of their compute unit up to being 4 ECU’s, ranging from 256 Mb of Ram to 4 GB of ram. And there’s no reason why we can’t eventually make larger and smaller instances. But for today we don’t want to make things more complicated than they have to be, so that’s what we’re running. LC: So how many containers per server? on your hosts? BB: We’ve been exploring with different size hosts. Anywhere between a couple of dozen, to a few hundred is possible. LC: Great. And what happens if one of the hosts goes down? BB: What we do know and what one of the things Tutum does now and abstracts away; if you have an application, what Heroku calls 12-factor stateless application, that is very easy to scale horizontally. You can do that with Tutum. You choose your Docker image, you deploy that and you can deploy that into four containers. Now those containers are actually going to be deployed into different hosts in Amazon. And those hosts are actually going to be running in different availability zones. So you’re running in such a redundant way that your application is running in two containers. And if the host were to go down, you’d still be up and running in the sense that there would be a host with your application. Even if the whole availability zone goes down, which has happened with Amazon, your application would still be running. We’re looking at how do we do things like live migration. So we want to migrate nodes or containers within a node to different nodes. How are we able to store the state, if we stay on Amazon, on EBS. So if the host goes down, we can create a new host and the containers and Docker would be able to spring up those containers based on the EBS data. So we’re looking at solutions for that. So right now the simplest way to not experience down time is to make sure your application is running in two or more containers. LC: Got it. So does Tutum offer a load balancing service that works with four containers? or do you have to build that as a fifth container, a load balancer? BB: So this is something that we asked ourselves early on. We love control, but we like flexibility, but we also like ease of use. So we try to do something that satisfies both. Meaning if your application that’s containerized and listens on port 80, then we will automatically register it to our load balancer. So if your application is deployed to three containers, you will get a single endpoint that will be load balanced automatically by Tutum. But if you don’t like our load balancer for whatever reason and you want to run your own load balancer, then you can simply containerize HA Proxy or Hipachy, or whatever load balancer you prefer. Configure it however you want, deploy that in a container in Tutum, and then link your containerized load balancer to your application that’s listening on port 80 and load balance that way. So that’s an approach we’ve seen some people take, and something we want to be able to support as we move forward. LC: Very cool. So that’s really neat, that’s above and beyond the Docker container hosting. It’s that additional automatic load balancing feature. Are there any other features that go above and beyond the container hosting? BB: Absolutely. Going back to what I said earlier, we’re looking to provide value add on top of just running containers. Zero down-time deployments. You can very easily update all of your containers with an image and an image tag that might be prior to the one you’re running prior today or one you just built. So with one click, you can tell it to reeploy to all of my containers the new image. So we’re looking at different ways to do that in a zero-downtime fashion. You mentioned load balancing, and high availability. Being able to do a git-push, do a build, do my CI/CD, deploy to Tutum, so that whole integration from code all the way to up and running. That’s something we’re looking into. So yes, value adds on top of the basic Docker hosting is something that is very close to our end goal here. LC: yeah, that’s great. Because just last week we were talking to Avi at Shippable, a hosted CI/CD for Docker. And that was very interesting. I’m friends with the guy behind Drone.io. How does one with CI/CD in the context of Tutum. BB: The way I see and envision the big picture here is to say that I’m a developer, I do a git push. Now this git push would trigger Docker Build somewhere. We have an open source project called Boatyard.io, that you’re welcome to check out. So somewhere there’s a web hook that triggers the build. And that build will then trigger the tests, the continuous integration. That will tell you everything is working with your code. In that case, it would trigger an automatic redeployment of your updated application ideally with no downtime. So that whole schema, in that regard you have things like GitHub, Docker builder like Docker Hub. Something like Shippable or Drone.io and eventually after that it would be the running and the management and deployment of the application. LC: Very cool. So I think this is the ultimate vision for the workflow of developers for Docker. To be able to have the exact same container that you built on your laptop be what you test in CI/CD and QA. And also be the exact same environment that goes over to Tutum. Is this why developers might pick something like Tutum over Digital Ocean? Bieng able to have the same environment on their laptop as in production? BB: Absolutely. Ultimately the way I see it, Digital Ocean is great. I myself use them, I can pay $5 and I have a development environment in the cloud. A development environment that I can go in, configure once, and work with. Now personally I would much rather be able to run my code or application and all of its dependencies. Meaning the services it needs, on my laptop. Now if I can do this with fig where I can have MySQL containerized, Redis cache, my application, my Web server, everything containerized. Have some sort of manifest like Fig that defines this environment, this stack. And with a single command run that locally, and see how that behaves. And then once I see it’s working, I can push that to the cloud and have the exact same environment, that exact same application and all of its services running at scale in the cloud, publicly redundant and highly available, load balanced. That in my mind, is a developers dream. So that is something we definitely have in mind. How do we get to this point. LC: That’s awesome. That does sound like a dream. It seems like next generation kind of stuff. So that all sounds great, so what isn’t great about Tutum yet? BB: So there’s a number of things that are still in the pipeline. Volumes and data storage and persistent storage. And dealing with data in general is a challenge. It’s something we’re looking and actively working to solve. Today anything that satisfies the 12-factor stateless application, it’s a great use-case and we work seamlessly. But if I want to run MySQL today with Tutum, the data that has been stored in MySQL would die when that container dies. Now that isn’t great for anything close to production level systems. So that is the one thing we need to keep developing and working. We hope to have a solid solution for that in the immediate future. In the next two to three months, have a persistent storage solution. And also as I mentioned, we don’t want to tie ourselves to IaaS. We see Tutum as more of as a IaaS 2.0. We want to remain agnostic to the infrastructure underneath, and allow the end user to bring in their own infrastructure. So we’re looking at the best options for how to move forward with that. And allowing people to choose whatever infrastructure that best satisfies their use-case. LC: Can you tell us anything about the technology that you’re going to use for persistent storage? or is that not public yet? BB: It’s not public yet, so I can’t talk much about it. But we have a number of initiatives that we have in place, and we’re talking to different groups, some of which are very active in the Docker ecosystem and looking to partner with them as we look for something that we can develop that will benefit us and ultimately anyone in the Docker ecosystem can use as a multi-host persistent volume solution. LC: Cool. That’s very exciting. So are Docker containers going to replace virtual machines? it sounds like Infrastructure 2.0 is all going to be containerized. What’s the future of virtual machines and that world? BB: I don’t think so. The way I see it, if you rewind a few years. Provisioning hardware sucked. So the virtual machine solved a hardware problem. We have these big machines and now we can cut them up into little pieces and make many virtual machines. So we have an operating system that thinks its running on its own hardware. But it’s really not, it just thinks it is. Now you move that up the stack, and you take an application and you make it think that it’s operating on it’s own operating system. But really its not, it’s a shared operating system. So I think the use case for containers and virtual machines, there may be some overlap. Ultimately one is not going to replace the other. And there’s going to be some things that are going to be done better with one or the other. But I see that there’s going to be a need for both of them. One is a hardware problem, the other is a software problem. Now can these problems be solved with just containers or VMs? Absolustely. But ultimately it’s about choosing the right tool for the right thing. LC: Got it. So what about the security? You mentioned that these are multi-tenant containers and there have been some stuff recently about the security concerns of linux containers. Is there a worry about multi-tenant security? BB: If you ask the guys from the virtual machines camp. They’ll say of course, we have the hypervisor, and no one can defeat the hypervisor, and that’s another layer of security. The way we see it, there’s going to be a lot of work being done, and a lot has already been done in the last 18 months, about increasing the level of privacy/security isolation that containers enjoy today. We ourselves are building things around and on top of containers to ensure that containers remain isolated. So we have support for user namespaces. To make the user in the container doesn’t map to the root user or to the user with any privilege really outside of the container. How do I do an app armor round the container and limit the sys calls. SO there are many measures that can take place for you to run a secure multi-tenant container cloud. But absolutely, this is something that I think as we see time moves forward, like we saw during Solomon’s keynote at DockerCon that security, authentication, privacy, are things that for the next 12-24 months we’ll see a lot more being developed in that regard, so it’s interesting. LC: Yeah, that’s very cool. So what’s possible when you combine Tutum with GitHub and Docker Hub altogether. How does that change the ecosystem for developers? BB: I think it goes back to what we mentioned earlier. If you’re a developer and I know that a single git push is ultimately going to result in my code, my custom application with all of its services, running at scale in the cloud, and I don’t have to worry about where that’s running, how that’s running, or if it’s not running? That’s a game changer. That’s the exact same code that was running in my laptop, is the code that I was able to test, and ultimately I have the peace of mind that what’s running in the cloud is the exact code I want. With a couple hacks, that’s possible today with Tutum, Docker Hub, and GitHub. So one can trigger an automatic build from GitHub, and one can set using our API an automated deployment or redeployment of an application using Docker Hub and Tutum. So the possibilities are really a true Build Once and Deploy and Forget About it, until you build again kind of solution moving forward. LC: Whats the future of linux containers look like? BB: Thats an interesting question. I think one of the reasons why Docker has become so cucessful was that they standardized containers in general. And the fact that they were to gather the support of the big players in the industry with this standardization. So they defined the standard container, these are the tools to interact and collaborate with th see containers. And then you have the big guys, the Google, Amazon, IBM’s of the world saying they support it. That’s huge. So is there the need for more container technology? Like other proprietary container specific projects? It’s hard to tell. I still think there’s a lot of work to be done in the containers we have today. Things like live migration and security like we mentioned earlier, are things that containers need to support out of the box. And I think thats the direction of where things are going to go for containers. Seeing initiatives like CoreOS. A container native operating system. Where I no longer have to do an apt-get piece of software, but all of my software comes in the form of a container. That’s the sort of future of containers that I see, more so than new container technologies come into this space. We may see a couple here and there, but I’d like to see Docker succeed, everybody embracing this technology, and building on top of it as opposed to other projects coming up on the sides. LC: Do you think other big players are going to start doing native Docker hosting? BB: Yes, I wouldn’t be surprised if they didn’t start enabling and making it easier to deploy containers. We’ve already seen it from the likes of AWS Elastic Beanstalk. I can now, through a text file deploy a container using their service. One container per VM. Google likewise has announced a number of integrations with Docker and it’s very likely that those integrations will move forward. Ultimately the way I see these players, their business model revolves around computing cycles. And they make money off of the VMs that you’re running. And they have to provide value adds and different services for you to use their cloud computing services, and that’s who they monetize. How much of that value they provide moving forward? It would be interesting to see. But absolutely I do see these players moving into this space. LC: So that’s why you’re building on top, other than pure Docker hosting, you’re offering these services to keep you differentiated. BB: Right. Tutum is not in the business of competing on infrastructure against Google or Amazon. We have seen the last few years that compute and memory and storage is becoming a commodity that’s becoming cheaper and cheaper and cheaper. Let’s be realistic, Tutum is great, we’re doing something awesome, we love it, but ultimately I cannot compete with AWS or GCE or any of the big players when it comes to commoditizing infrastructure. Now the way we see it, Docker brings a lot of new things to the table. And there’s a lot of value add to be implemented on top of the bare infrastructure, and that’s where we see the value Tutum provides. LC: What is the biggest barrier to entry to adopting Docker for most businesses today? BB: I think it’s a low barrier to entry. Until recently maybe it was the fact that the technology was so new and there’s just a sense of fear that somethings only been around for fifteen months. Will this be supported? What kind of support can we expect? Also the shift in paradigm. You even see this from people who have heard of Docker that have sort of tried out Docker, maybe have gone through a tutorial. It takes maybe from a couple of minutes to a few days until it really clicks and you say, “Wow, now I get containers, now I get Docker, and I see the benefit.” So some people will read articles online about containers and Docker and until they experience and try it out, it doesn’t click, and they don’t see the value add it provides. They say, “well i can do that with virtual machines and XY technology.” So those two things. The technology might not be mature enough, being around for only 15 16 months. And it’s a shift in mentality for how developers, devops, sysadmins, how these teams work and how they deploy new software. LC: So it’s been great talking to you. It’s been great picking your brain. What’s next for Tutum? BB: Lots of things. Stay tuned, we have a number of services and cool features coming up. We’ll be doing a transition from doing the multi-tenant cloud hosting, and really focusing on the value add that our users are requesting from us. We’re working hard on these like persistent storage, support for secure socket layers, and lots of exciting new developments coming up in the next two to three months. I think everybody will be pleasantly surprised. And we’re also looking now that we’ve grown the team a bit, to be more active with the community. We love this space, we love to be part of the Docker ecosystem. And as much as possible it’s about giving back. So expect to see some awesome contributions from us. LC: I cannot wait. It’s been a pleasure talking to you, and hope to keep in touch and I’d love to hear when you make these announcements. And talk to you about your new open source projects. Thank you so much for your time. BB: Absolutely, thank you so much Lucas.
https://blog.tutum.co/2014/07/15/borja-interview-with-lucas-carlson-of-centurylink-labs/
CC-MAIN-2017-13
refinedweb
4,571
73.88
In the beginning, we started off with a very simple view of components and what they do. As we learned more about React and did cooler and more involved things, it turns out our components aren't all that simple. They help deal with properties, state, events, and often are responsible for the well-being of other components as well. Keeping track of everything components do sometimes can be tough. To help with this, React provides us with something known as lifecycle methods. Lifecycle methods are (unsurprisingly) special methods that automatically get called as our component goes about its business. They notify us of important milestones in our component's life, and we can use these notifications to simply pay attention or change what our component is about to do. In this tutorial, we are going to look at these lifecycle methods and learn all about what we can do with them. Onwards! OMG! A React Book Written by Kirupa?!! To kick your React skills up a few notches, everything you see here and more (with all its casual clarity!) is available in both paperback and digital editions.BUY ON AMAZON Meet the Lifecycle Methods Lifecycle methods are not very complicated. We can think of them as glorified event handlers that get called at various points in a component's life, and just like event handlers, you can write some code to do things at those various points. Before we go further, it is time for you to quickly meet our lifecycle methods. They are componentWillMount, componentDidMount, componentWillUnmount, componentWillUpdate, componentDidUpdate, shouldComponentUpdate, and componentWillReceiveProps. We aren't quite done yet. There are three more methods that we are going to throw into the mix even though they aren't strictly lifecycle methods, and they are getInitialState, getDefaultProps, and render. Some of these names probably sound familiar to you, and some you are probably seeing for the first time. Don't worry. By the end of all this, you'll be on a first name basis with all of them! What we are going to do is look at these lifecycle methods from various angles...starting with some code! See the Lifecycle Methods in Action Learning about these lifecycle methods is about as exciting as memorizing names for foreign places you have no plans to visit. To help make all of this more bearable, I am going to first have you play with them through a simple example before we get all academic and read about them. To play with this example, go to the following URL. Once this page loads, you'll see a variation of the counter example we saw earlier: Don't click on the button or anything just yet. If you have already clicked on the button, just refresh the page to start the example up from the beginning. There is a reason why I am saying that, and it isn't because my OCD is acting up :P We want to see this page as it is before we interact with it! Now, bring up your browser's developer tools and take a look at the Console tab. In Chrome, you'll see something that looks like the following: Notice what you see printed. You will see some messages, and these messages start out with the name of what looks like a lifecycle method. If you click on the plus button now, notice that your Console will show more lifecycle methods getting called: Play with this example for a bit. What this example does is allow you to place all of these lifecycle methods in the context of a component that we've already seen earlier. As you keep hitting the plus button, more lifecycle method entries will show up. Eventually, once your counter approaches a value of 5, your example will just disappear with the following entry showing up in your console: componentWillUnmount: Component is about to be removed from the DOM!At this point, you have reached the end of this example. Of course, to start over, you can just refresh the page! Now that you've seen the example, let's take a quick look at the component that is responsible for all of this: var CounterParent = React.createClass({ getDefaultProps: function(){ console.log("getDefaultProps: Default prop time!"); return {}; }, getInitialState: function() { console.log("getInitialState: Default state time!"); return { count: 0 }; }, increase: function() { this.setState({ count: this.state.count + 1 }); }, componentWillUpdate: function(newProps, newState) { console.log("componentWillUpdate: Component is about to update!"); }, componentDidUpdate: function(currentProps, currentState) { console.log("componentDidUpdate: Component just updated!"); }, componentWillMount: function() { console.log("componentWillMount: Component is about to mount!"); }, componentDidMount: function() { console.log("componentDidMount: Component just mounted!"); }, componentWillUnmount: function() { console.log("componentWillUnmount: Component is about to be removed from the DOM!"); }, shouldComponentUpdate: function(newProps, newState) { console.log("shouldComponentUpdate: Should component update?"); if (newState.count < 5) { console.log("shouldComponentUpdate: Component should update!"); return true; } else { ReactDOM.unmountComponentAtNode(destination); console.log("shouldComponentUpdate: Component should not update!"); return false; } }, componentWillReceiveProps: function(newProps){ console.log("componentWillReceiveProps: Component will get new props!"); }, render: function() { var backgroundStyle = { padding: 50, border: "#333 2px dotted", width: 250, height: 100, borderRadius: 10, textAlign: "center" }; return ( <div style={backgroundStyle}> <Counter display={this.state.count}/> <button onClick={this.increase}> + </button> </div> ); } }); Take a few moments to understand what all of this code does. It seems lengthy, but a bulk of it is just each lifecycle method listed with a console.log statement defined. Once you've gone through this code, play with the example one more time. Trust me. The more time you spend in the example and figure out what is going on, the more fun you are going to have. The following sections where we look at each lifecycle method across the rendering, updating, and unmounting phases is going to be dreadfully boring. Don't say I didn't warn you. The Initial Rendering Phase When your component is about to start its life and make its way to the DOM, the following lifecycle methods get called: What you saw in your console when the example was loaded was a less colorful version of what you saw here. Now, we are going to go a bit further and learn more about what each of these lifecycle methods do: getDefaultProps This method allows you to specify the default value of this.props. It gets called before your component is even created or any props from parents are passed in. getInitialState This method allows you to specify the default value of this.state before your component is created. Just like getDefaultProps, it too gets called before your component is created. componentWillMount This is the last method that gets called before your component gets rendered to the DOM. There is an important thing to note here. If you were to call setState inside this method, your component will not re-render. render This one should be very familiar to you by now. Every component must have this method defined, and it is responsible for returning a single root HTML node (which may have many child nodes inside it). If you don't wish to render anything, simply return null or false. componentDidMount This method gets called immediately after your component renders and gets placed on the DOM. At this point, you can safely perform any DOM querying operations without worrying about whether your component has made it or not. If you have any code that depends on your component being ready, you can specify all of that code here as well. With the exception of the render method, all of these lifecycle methods can fire only once. That's quite different from the methods we are about to see next. The Updating Phase After your components get added to the DOM, they can potentially update and re-render when a prop or state change occurs. During this time, a different collection of lifecycle methods will get called. Yawn. Sorry... Dealing with State Changes First, let's look at a state change! When a state change occurs, we mentioned earlier that your component will call its render method again. Any components that rely on the output of this component will also get their render methods called as well. This is done to ensure that our component is always displaying the latest version of itself. All of that is true, but that is only a partial representation of what happens. When a state change happens, these are all the lifecycle methods that get called: What these lifecycle methods do is as follows: shouldComponentUpdate Sometimes, you don't want your component to update when a state change occurs. This method allows you to control this updating behavior. If you use this method and return a true value, the component will update. If this method returns a false value, this component will skip updating. That probably sounds a little bit confusing, so here is a simple snippet: shouldComponentUpdate: function(newProps, newState) { if (newState.id <= 2) { console.log("Component should update!"); return true; } else { console.log("Component should not update!"); return false; } } This method gets called with two arguments which we name newProps and newState. What we are doing in this snippet of code is checking whether the new value of our id state property is less than or equal to 2. If the value is less than or equal to to 2, we return true to indicate that this component should update. If the value is not less than or equal to 2, we return false to indicate that this component should not update. componentWillUpdate This method gets called just before your component is about to update. Nothing too exciting here. One thing to note is that you can't change your state by calling this.setState from this method. render If you didn't override the update via shouldComponentUpdate, the code inside render will get called again to ensure your component displays itself properly. componentDidUpdate This method gets called after your component updates and the render method has been called. If you need to execute any code after the update takes place, this is the place to stash it. Dealing with Prop Changes The other time your component updates is when its prop value changes after it has been rendered into the DOM. In this scenario, the following lifecycle methods get called: The only method that is new here is componentWillReceiveProps. This method returns one argument, and this argument contains the new prop value that is about to be assigned to it. The rest of the lifecycle methods we saw earlier when looking at state changes, so let's not revisit them again. Their behavior is identical when dealing with a prop change. The Unmounting Phase The last phase we are going to look at is when your component is about to be destroyed and removed from the DOM: There is only one lifecycle method that is active here, and that is componentWillUnmount. You'll perform any clean-up related tasks here such as removing event listeners, stopping timers, etc. After this method gets called, your component is removed from the DOM and you can say Bye! to it. Conclusion Our components are fascinating little things. On the surface they seem like they don't have much going on. Like a good documentary about the oceans, when we look a little deeper and closer, it's almost like seeing a whole another world. As it turns out, React is constantly watching and notifying your component every time something interesting happens. All of this is done via the (extremely boring) lifecycle methods that we spent this entire tutorial looking at. Now, I want to reassure you that knowing what each lifecycle method does and when it gets called will come in handy one day. Everything you've learned isn't just trivial knowledge, though your friends will be impressed if you can describe all of the lifecycle methods from memory. Go ahead and try it the next time you see!
https://www.kirupa.com/react/component_lifecycle.htm
CC-MAIN-2017-09
refinedweb
1,991
65.01
How do you change namespaces in routines or new $namespace vs znamespace Hi developers! Just want to share an old but always relevant best practice on namespaces changing @Dmitry Maslennikov shared with me (again). Consider method: classmethod DoSomethingInSYS() as %Status { set sc=$$$OK set ns=$namespace zn "%SYS" // try-catch in case there will be an error try { // do something, e.g. config change } catch {} zn ns ; returning back to the namespace we came in the routine return sc } And with new $namespace the method could be rewritten as: classmethod DoSomethingInSYS() as %Status { set sc=$$$OK new $namespace set $namespace="%SYS" // do something return sc } So! The difference is that we don't need to change the namespace manually as it will be back automatically once we return the method. and we don't need try-catch (at least for this purpose) too.
https://community.intersystems.com/post/how-do-you-change-namespaces-routines-or-new-namespace-vs-znamespace
CC-MAIN-2022-05
refinedweb
143
65.76
older compilers.. Hello! I’m having issues with my code. The auto does not see that I’m using std::string. Even when I change the const auto (in for-each-loop condition) to std:string, the compiler still complains about bad referencing ‘std::string’ to ‘char’. This is the code: main.cpp function.h function.cpp Your code has several issues: 1) Your array of names is defined incorrectly. 2) Your array of names is passed into lookForName incorrectly. Once you address those two issues, your program should work. But once your array is passed into lookForName, it decays into a pointer, so it won’t work either, right? Another issue is that your for-each loop will print “nameRef was not found” for every element in the list, if it’s not equal that element. Help me please. . . #include <iostream> const int ANGKA_RAHASIA = 11,213; const BESAR_GAJI = 18.35 main() { int satu, dua; double first, second; satu = 18; dua = 11; pertama = 25; kedua = pertama * three; kedua = 2 * ANGKA_RAHASIA; ANGKA_RAHASIA = ANGKA_RAHASIA + 3; cout<<pertama<< " " <<kedua <<ANGKA_RAHASIA<< endl; jumlahGaji = jamKerja * BESAR_GAJI cout<<"Gaji = "<<jumlahGaji<< endl; return 0; } This program has a lot of bugs. The only one that might be confusing is the first one: the literal value 11,213 should not have a comma. For the rest of the bugs, compile your program, look at what it’s complaining about, and fix it. The rest of the issues are all very straightforward. Hi! I’m learning C++, and I feel that the auto keyword should be avoided because adding more implicit things seems like a very bad idea. I feel more comfortable when I know exactly what the variable I’m working with actually is. If I see auto, I have to go and look above until I can myself figure out what it’ll be converted to by the compiler. Or, what is worse, I can make a false assumption, which would lead to undefined behaviour. It just seems more work to do when looking at the code. It might be easier to write, but a lot harder to maintain. As the python zen says "explicit is better than implicit". Alex,I have a simple question.This code below is the one you taught. But there is this one I noticed.You didn’t tell anything about this.I wonder why this works,or when is this implemented and why you never said anything about.Thanks in advance. I didn’t mention it because I wasn’t aware of it. It looks like Visual studio supports use of the “in” keyword instead of a colon -- it performs the same function. This is a historical artifact left over from the fact that Visual Studio supported for each loops prior to them being formally defined as part of C++11. Because this “in” keyword is not part of the official C++ standard and is not cross-browser compatible, its use is not recommended. Ahh,thanks! I’m not gonna use it.Thanks for the great tutorial. I can not understand why is const std::string neccessary rather than using just std::string? It’s usually not necessary, it just helps ensure we don’t accidentally change the value of the strings. Hi Alex, I getting one compilation error using for-each loop the error is: 1>c:\users\hp1\documents\visual studio 2010\projects\cpp_basics\cpp_basics\for_each.cpp(13): error C2143: syntax error : missing ‘,’ before ‘:’ Is it because of compiler standard? I don’t know how to change the compiler standard. Can you please help me to do this Sounds like your compiler may not be C++11 capable. You might try using Google to see if there’s a way to turn C++11 functionality on for your specific compiler, or whether you’ll need to upgrade to a newer compiler. Hi Alex, I wasn’t sure whether the variable declared in the element_declaration, was initialized (and created and destroyed) every iteration; or whether it was simply initialized with the value of the first array element at the beginning and simply assigned the values of the subsequent array elemements for its relevent iteration. I have validated this and the latter is true: i.e. is the equivalent of: I found this out by printing the address of the element_declaration variable every iteration to see whether the address would be identical or not: This printed: array[0] has memory address 003EFCF4 array[1] has memory address 003EFCF4 array[2] has memory address 003EFCF4 array[3] has memory address 003EFCF4 array[4] has memory address 003EFCF4 As you can see the memory address is identical. #include "stdafx.h" #include <iostream> #include <string> int main() { std::string names[4] = { "Jester","Paulette","Lorreinne","Trisha" }; std::cout << "How many times? "; int multiplier; std::cin >> multiplier; std::string getNames; bool found(false); for (int index = 1;index < multiplier+1;++index) { std::cout << "Enter a name #" << index << " "; std::cin >> getNames; for (auto &NAMES : names) { if (NAMES == getNames) { found = true; } } if (found) { std::cout << getNames << " is found.\n"; } if (!found) { std::cout << getNames << " is not found.\n"; } } } Alex, I get confused with some of the wording in C++. When you say, an array decays into a pointer, what does that actually mean? Isn’t it true that you always have a pointer, pointing to the first element of any array? Another problem I have is with the words dereference a pointer. When you dereference a pointer you are actually getting a copy of the content from the memory address the pointer points too, right? What happens if you have more than one pointer pointing to the same memory address? Can’t you get the same contents for both pointers by dereferencing them both? > Isn’t it true that you always have a pointer, pointing to the first element of any array? No. An array is not the same thing as a pointer. An array contains type information about how long the array is, a pointer does not. > When you say, an array decays into a pointer, what does that actually mean? It means that in most circumstances, when you use an array, C++ first implicitly converts the array into a pointer (losing the size information), and then uses the pointer. > When you dereference a pointer you are actually getting a copy of the content from the memory address the pointer points too, right? Yes, you’re accessing the value at the memory address the pointer points to. However, it doesn’t make a copy unless you do something with that value that causes a copy to be made (e.g. assign it to another variable). > What happens if you have more than one pointer pointing to the same memory address? Exactly what you’d expect -- two pointers holding the same address. There’s no problem with that. > Can’t you get the same contents for both pointers by dereferencing them both? Yes. Hi Alex, Hope u are doing fine. I was wondering that, in the last chapters, if references were said to be implicitly constants, and that they acted like const pointers, how can they be used in for each loops where the reference element refers to a different array element each time during an iteration. I used the &reference on std::cout to print the address of the referred elements, and saw that indeed the address reference referred to was changing during each iteration. Please put some light onto it. If I understand correctly, you’re asking how the for each loop is changing the value of a const reference with each iteration (since const values can typically only be initialized, not assigned new values once created)? The answer is that the element_declaration is local to the loop body, so it gets created and initialized anew with each array iteration and goes out of scope and gets destroyed at the end of each iteration. Alex, I think Sachin is asking how a reference can be “redirected” to different array elements in different iterations. I had the same doubt. Your answer cleared it for me before asking. Thanks! BTW you put an extra period at the end of the "rule". Now that I think of it, it feels pretty strange. As per your answer each iteration in a for each loop uses it’s own local reference variable which gets destroyed at the end of the iteration. The odd bit is that it’s declared in the same place as a loop variable in a traditional for loop (which don’t get destroyed during each iteration), but still behaves like a variable declared in the body of a loop. Yup. The syntax for the for-each loop obscures the nature of when and where the reference is created, destroyed, and initialized. From a usage standpoint, it really doesn’t matter. It’s only when you start digging into how these things actually work that you start to uncover oddnesses. Which is exactly what an astute student should do! 😀 Thanks a lot for the quick and continued responses. Readers like me are benefiting a lot. 🙂 Hi Alex, I’m getting 0->51 in readback instead of 1-52. How do I fix this? Should I revert to traditional loops for the remainder of the exercises (almost at blackjack exercise)? It’s not a good idea to mix a for-each loop with explicit indexing like you’re doing in populate_array(). If you need to access specific indices, use traditional loops. Hi Rob, May I help you? Inside populate_array(), use: instead. This works. Here element is a reference to the actual array element, so modifying it modifies the array itself. You don’t need to access the members with indices. Keep coding. 🙂 “C++11 introduces a new type of loop called a for-each loop (also called a range-based for loop)” Note - don’t confuse this with the for_each(..) STL function in <algorithm>. If using a range-based for loop then call it a range-based for loop and not a for-each loop as that could cause confusion. Hi Alex, in the sentence "Can I get the index of an the current element?" the "an" can be removed. Fixed. Thanks! In the first example the comment on line 7 should be "// keep track of our largest score" instead of "// keep track of index of our largest score" Alternatively you can change the code if you want. Thanks, fixed. 1 2 3 int array[5] = { 9, 7, 5, 3, 1 }; for (auto element: array) // element will be a copy of the current array element std::cout << element << ‘ ‘; This means each array element iterated over will be copied into variable element. Copying array elements can be expensive, and (((*most of the time we really just to reference the original element.*))) Fortunately, we can use references for this: Type problem: most of the time we really just want to refer the original element. Updated. Your suggested wording is clearer. Thanks. Its still wrong 🙁 , You did not put the string "want" 🙂 . Hah. Oops. Now it’s fixed. For real this time. 🙂 Under section "For-each doesn’t work with pointers to an array", I believe should instead be Correct, thanks for catching that error. Alex, I misunderstood the objective: I’m sorry I thought we were to continue the loop ad infinitum until the name was found. I’ve got a traditional control loop nesting a for-each. My question: are we supposed to experience any actual change in output of this program using the reference to original storage address(&)& vs. not? Or, are we just supposed to know it will reduce overhead if used? Also what are the liabilities of leaving const out? > are we supposed to experience any actual change in output of this program using the reference to original storage address(&)& vs. not? Or, are we just supposed to know it will reduce overhead if used? No change in output, just an increase in efficiency. > Also what are the liabilities of leaving const out? In this case, not much. Const in this case is used primarily to ensure we don’t change something we didn’t intend to change. Hi Alex, are we to assume that the for-each loop can have only one statement or am I doing something incorrectly? Loops can only have one statement. However, you can easily work around this by using blocks: For the sake of a good example, it might be a good idea to use this foreach code in the Quiz solution: Agreed. Updated. Is there a reason why you chose to not let ‘score’ be a reference in your rewriting of the max score program? As in this: As opposed to the (supposedly more efficient as covered just before in the lesson): Loving your lessons, by the way. They are very easy to follow and explains underlying concepts very well. I’ll make sure to recommend them to anyone who wants to look into c++! I tried to modify your program to detect the maximum score by asking the user to insert the number n of scores and then letting the machine to generate n random numbers. The program should print all the random generated scores and the maximum. If I declare everything works great. If I declare I get a bunch of problems. Here is the code. Could you explain me why? Thanks a lot for your help! For-each loops won’t work with dynamically allocated arrays, because dynamically allocated arrays don’t know how large they are. Fixed arrays don’t have this problem (unless they decay to a pointer). If you want to use the built-in dynamic arrays, you’ll need to iterate with a standard for loop. If you want to use the for-each loops, you’ll have to use fixed arrays (or even better, std::array) or std::vector (which we cover in a few lessons). Thanks for the answer! in example in section "For-each loops and non-arrays", add Fixed, thanks! Hi Alex, I have a question for you that doesn’t refer to quiz in this chapter (it was easy as you also said). While doing your quizzes I sometimes look at the c++ documentation on, usually about ‘cin’ and ‘string’ and to add some errors checking in my quizzes solutions. But reading all that stuff is incredibly hard for me and I end up understanding only few example code. I’ve spent last 2 years learning and writing tons of objective-c code (actually without creating nothing very interesting, just for the fun of learning), and I know how important is to be able to read official Apple documentation. My question is: do you think I will be able to consult documentation at at the end of all these tutorial? Thx Cesare I think it will be easier for you to do so, as you’ll have more C++ concepts under your belt. That said, the technical documentation can still be quite hard to interpret -- even I struggle to follow sometimes. So I don’t think it will be easy -- just easier than it is now. That’s ok, I know it won’t be easy. Anyway this is what a needed to hear 🙂 A question Alex: for (auto &element: array) Since we are just referencing the variable "element" to original array element but not actually copying the value of original element to variable "element", how can you print the valve of “element” variable to console while it don’t have the valve at all? std::cout << element << ‘ ‘; ??????????????? Element is just an alias for array[i], where i is the current index number. Element doesn’t need a copy of the original variable because it is a reference to the original variable. It’s like if I gave you the name “Joe”. I’m not making a copy of you, I’m just giving you another name. You’re still the original Gopal, but I can now also refer to you by the name “Joe”. As per my understanding after reading some next lessons, in above statement &element holds address of array[i], where i is the current index number. so if we want to print value which &element is pointed to, we should dereference it. In this case we should write std::cout << *&element << ‘ ‘;. Pl. clarify. Aah yes, this is a common confusion amongst new programmers. In the context of a variable declaration, the & doesn’t mean address-of, it means reference-to. We talk about references (and revisit this example) in lesson 6.11 -- References. Perhaps I should move this lesson until after we’ve covered references. Got it Alex Thanks. I spent 1/2 hour to find this comment on Friday and 1/2 hour on Monday in earlier chapters. Today i found this comment here. I don’t know how this comment came here, perhaps you moved entire chapter here. It was really irritating to find comment i posted. Anyway finally i got this comment and more importantly i understood difference between address-of and reference-to. I moved the whole lesson. Lots of people were getting confused about the references stuff since I hadn’t introduced references yet. This way, the reference lessons come first. One suggestion: I think For-each loops section should not be in this chapter. Instead it should belongs to chapter 5 control flow only. Apart from that "For-each loops and references" section should not be here. It should belongs to 6.11 reference variable section. If you want to keep "For-each loops and references" in this section, then you should provide link to section 6.11 reference variable section (If you decide to move this section in chapter 5). Thanks for the thought. It’s kind of an odd-ball lesson. It doesn’t make sense to put it in chapter 5 because although it is a control flow topic, it needs a sequence of data to operate on, and we don’t introduce any of those until the beginning of chapter 6 (arrays). I do like how arrays and references are discussed before they’re used in this lesson now. That avoid confusion and splitting the topic over multiple lessons. There is a syntax error : missing ‘,’ before ‘:’ in even tried copy pasting from this page No, the example is correct. My guess is that your compiler is either not C++11 capable, or has that capability turned off (if you’re using Code::Blocks, go to Toolbar -> Settings -> Compiler, select the “Compiler settings” tab and the “compiler flags” tab, and make sure the checkbox “Have g++ follow the C++11 ISO C++ language standard [-std=c++11]” is checked). Hi Alex, For some reason I cannot get the code to compile when it is in a function, but it works fine when it is in "main". I am really stuck on this. What is wrong with this code? thanks, Len Hello Len, May I help you? The problem with simple fixed length arrays is that when passed to a function, they break into pointers. I guess you have defined the array in your main(). When main passes the array to function printArray(), it decays into a pointer that points to that array. Thus, the variable a, that you declared as printArray’s parameter, never receives the actual array. It just holds the address of the array that you have passed in. for each loop prints the elements in the array only if it’s size is known. Because value of a is not your array, compiler throws an error where you tried to print elements using for each loop. A simple solution is to use std::array (covered in 6.15) or std::vector (covered in 6.16) instead of simple arrays. Arrays defined using these methods doesn’t decay into pointers. They remain arrays even if passed to a function. I suspected that Alex was not passing arrays yet because he had not covered the "trick" in how to do this.. yet. Thanks for sharing the "trick", Devashish! When I get to those sections I will play around with it more. Hi Len, It is possible to pass the address of the array as a parameter. Once this pointer is dereferenced it is possible to iterate this array (dereferenced pointer) inside a function using a for-each loop. Consider the following: This prints: 9 7 5 3 1 This works because ‘ptr’ in printArray() is pointing to the array itself, and not the first element of the array (which would be the case if the argument passed to printArray() was ‘array’). Hi Alex, Could I use goto statement for this problem. Does it harmful for this kind of situation. Although I learn it from your lesson 5.4 If it compiles and runs you can. Doesn’t seem too harmful in this particular situation. hi, Does the "for each" loop provide also the index of the element we are dealing now with? (Of course, you can calculate it manually, but it’s a little annoying to add such variable and increment it) It’s useful when the index has a meaning (e.g. the example in 6.2 - arrays and enums) Thanks, Eli The for each loop does not provide an index, because it can be used with aggregates that don’t support indexing (e.g. trees or linked lists). Hello Alex, Thanks for the tutorials 🙂 Using the for each loops how can we find the index of a member in the array. i.e., How can we find the index of the largest value in the array. When using a for each loop, there’s no direct way to get the index. This is intentional, as for each loops can be used with other (non-array) kinds of data structures (like trees or linked lists) that don’t support direct indexing. If you need the index, you’re best off using a normal for loop. It occurs to me that if you know you’re looping over an array, you could use a reference loop variable (to ensure the loop variable represents the actual array element and not a copy) and some pointer arithmetic: Thank you for the quick response Alex and Thank you for the wonderful tutorials again 🙂 hello alex, #include <iostream> #include<string.h> int main() { using namespace std; string str[]={"alex","betty","cameron","james","broad","finn"}; string item; int count=0; int flag=0; cout<<"enter a string"<<endl; getline(cin,item); for(string name:str) { count++; if(item==name) { flag=1; break; } else { flag=0; break; } } if(flag==1) { cout<<item <<"is found"<< "at position" <<count<<endl; } else { if(flag==0) cout<<item <<"is not found"<<endl; } return 0; } when i run this code,it works fine. but when i provide the string input as ("james" for example),it gives the output :james is not found. where as the string is already present in the array. could you please clarify on this!! Your code almost works, but it has a simple logic error that’s causing the loop to terminate earlier than expected. This is the perfect kind of problem to use a debugger to find! Place a breakpoint on the if statement, and watch what happens when it compares “james” and “alex”. On the else branch you don’t need the break, that causes the problem. The for loop exists early when the first name mismatches not having the chance to go to the next names. Name (required) Website
http://www.learncpp.com/cpp-tutorial/6-12a-for-each-loops/
CC-MAIN-2017-26
refinedweb
3,931
63.8
In the rails guides tutorial creating a blog app after we create the rails app and create a resources in the routes then we start working on a form_for for creating a posts title and text in the guide it tells me that we need to add this line <%= form_for :post, url: posts_path do |f| %> the posts_path helper is passed to the :url option. What Rails will do with this is that it will point the form to the create action of the current controller, the PostsController, and will send a POST request to that route. <h1>Here Lets create a simple post</h1> <%= form_for :post, url: posts_path do|f| %> <p> <%= f.label :title %> <%= f.text_field :title %> </p> <p> <%= f.submit %> </p> <% end %> class PostsController < ApplicationController def new @post = Post.new end def create @post = Post.new(post_params) end def post_params params_require(:post).permit(:title) end end <h1>THis is the post create action</h1> <%= @post.title %> Learnnobase::Application.routes.draw do resources :posts root "welcome#home" end uninitialized constant PostsController::Post We generally use Rails to build database-backed applications, but for learning purposes, you can do it this way. The problem you are facing here is: You are tyring to create an object of the Post class, that will be the model in the example you are referring to. The error comes up since you have not created the Post model. To meet your requirement you can make your create action be: def create @post = post_params #this will be a hash end Then change your view to: <h1>THis is the post create action</h1> <%= @post[:title] %>
https://codedump.io/share/cfRRVgzpTh1V/1/how-do-you-create-a-rails-app-without-using-all-of-resource-in-createupdatedelete
CC-MAIN-2018-13
refinedweb
269
60.85
Opened 8 years ago Last modified 5 years ago #6135 new New feature Introduce a short-cut for template filters that has needs_autoescape = True Description After the autoescaping was introduced I had to rewrite all filters to comform new requirements: put is_safe=True and needs_autoescape=True where needed. I have written about 25 filters that have needs_autoescape=True and absolutely of them contained contained the same strings def filter(text, autoescape=None): if autoescape: esc = conditional_escape else: esc = lambda x: x text = esc(text) # Then goes the part which is different for all filters # # return mark_safe(result) I have checked the filters in django. All the same. I think it is way to much of writting. I propose to split property needs_autoescape = True on two properties: needs_manual_autoescape = True, then Do all the stuff above, manually. needs_autoescape = True, then all above steps will do template system for user. All this thinks can do a simple decorator as well. Change History (8) comment:1 Changed 8 years ago by anonymous - Needs documentation unset - Needs tests unset - Patch needs improvement unset comment:2 Changed 8 years ago by jacob I think the decorator approach is the best way. I'd expect it to work like this: @autoescaped def filter(text): ... comment:3 Changed 8 years ago by Eratothene In case if @autoescaped def filter(text): It will be inconsistent with filter.is_safe = True In one case we have decorator (autoescaped), in other a property (filter.is_safe). I think we should stick to one syntax, either decorator or propertry. One way (Backward incompatible): @autoescaped def filter(text): @safe def filter(text): @manually_autoescaped def filter(text, autoescape=None): Another way (Backward Compatible): def filter(text): filter.needs_autoescape = True def filter(text): filter.is_safe = True def filter(text, autoescape=None): filter.needs_manual_autoescape = True comment:4 Changed 8 years ago by Eratosfen I prefer the decorator syntax more than existing one comment:5 Changed 8 years ago by Simon G <dev@…> - Triage Stage changed from Unreviewed to Design decision needed comment:6 Changed 8 years ago by mtredinnick - Triage Stage changed from Design decision needed to Accepted Comment 3 is mistaken. The is_safe property performs quite a different function to the needs_autoescape attribute (which is really just a flag). There's no inconsistency imposed by changing needs_autoescape functionality into a decorator. Also, the decorator is probably fully backwards compatible, since all it will do is add the needs_autoescape attribute to the resulting function and accept the extra argument (plus doing the initial processing). It's an addition to needs_autoescape (turning that into an implementation detail, for the most part), not a replacement. Filters are possible which don't subscribe to the pattern mentioned, so the decorator approach shouldn't be the only way to do this -- don't remove needs_autoescape, just implement a helper, in other words. Jacob's decorator name doesn't feel right, though. Perhaps autoescape_aware is a better name. The function itself isn't a past-tense of anything. comment:7 Changed 5 years ago by gabrielhurley - Severity set to Normal - Type set to New feature comment:8 Changed 5 years ago by jezdez - Easy pickings unset - UI/UX unset Forgot to mention that above code, which very common and annoying to write, can be transformed to Or even simplier version
https://code.djangoproject.com/ticket/6135
CC-MAIN-2016-22
refinedweb
547
50.97
I am a bit of newbie to programming on the raspberry pi. I am trying to develop a dementia style clock for my mum with help from my 7 year old son for a Christmas present for my mum. I think it would be nice as a learning exercise to do something bespoke which could have an open source development? Many years ago I dabbled in programming on a BBC micro with basic and a little assembly language that was gleaned and mostly copied and hacked from magazines at the time, when I was maybe 16-17? So I kind of understand loops and procedure calls but don't really get object oriented programming? The idea of the project is for a picture/audio frame in my mum's kitchen. The idea: My mum walks into the kitchen in the morning , a PIR triggers playing BBC radio 3 (she often forgets to put the radio on) The script plays radio 3 and then prompts my mum at various times during the day with audio prompts and pictures with text. 0900: "Grandma, it's time for breakfast..." voiced by my son or family member 1200: "Mum it's time for lunch..." 1700: "Grandma its time for dinner" 1800: "Mum its time to take your medication" etc I have managed to make a python script that in theory would do the audio part, but it is clumsy. It would involve a lot of "if hour is 9 then fade down radio stream by incrementally reducing the system volume, stop the audio stream , wait, then fade sys system sound up, play audio file "Grandma.." then fade down.....etc" Obviously this project could be evolved into a massive project with a CMS backend for family members, integrated with a google calendar with appointment reminders etc but I want to keep it simple for the time being. Christmas is soon!! I had thought about having a sort of config file either local or online that had for instance: 09:00, "Breakfast...wav", "Breakfast.jpg" 12:00, "Lunchtime...wav etc that could be read at startup into an array? that could be passed to a generic audio/picture procedure that could break a general screensaver and play the necessary audio and pictures? *** Can anyone give some hints to tutorials or ideas of program structure? *** Should I stick with python or is there a better language to use? *** Can anyone suggest a suitable IMAGE module for a really simple image rotater, no effects just cuts is fine for now., I have been dabbling with a pygame script? below is my poor code which has some testing stuff in it, ie the main loop is arbitary and not configured for the PIR or an end time. The volume settings are to do some level shifting between the stream and my recording which needs to be normalised/level changed. Many Thanks for any suggestions Code: Select all import pygame import datetime import os import time import vlc pygame.mixer.init() pygame.mixer.pre_init(44100, -16, 2, 2048) pygame.init() os.system("amixer sset 'Master' 80%") def fadedown(): for x in range(0, 5): os.system("amixer sset 'Master' 5%-") time.sleep(0.5) return def fadeup(): for x in range(0, 5): os.system("amixer sset 'Master' 5%+") time.sleep(0.5) return i = datetime.datetime.now() h = i.hour m = i.minute s = i.second print ("Start Up Current hour = %s" %i.hour) station="" if h>=0 and h<24: player = vlc.MediaPlayer(station) player.play() print ("moving on") while True: i = datetime.datetime.now() h = i.hour m = i.minute s = i.second print ("Current hour = %s" %i.hour) if h==12 and m==0 and s<10: fadedown() player.pause() time.sleep(1) fadeup() os.system("amixer sset 'Master' 100%") audiofile='/home/pi/Music/Lunchtime.mp3' player = vlc.MediaPlayer(audiofile) player.play() time.sleep(2) os.system("amixer sset 'Master' 80%") fadedown() player = vlc.MediaPlayer(station) player.play() fadeup() time.sleep(10) print ("loop") Simon
https://lb.raspberrypi.org/forums/viewtopic.php?f=102&p=1401459
CC-MAIN-2019-35
refinedweb
669
75.81
At the heart of every .NET event is a Delegate. In part one of this series: “Beyond the Event Horizon: Delegate Basics”, I introduced my motivation for writing EventApprovals (part of ApprovalTests 2.0). I briefly touched on ApprovalTests basics (a woefully inadequate introduction, visit Llewellyn’s YouTube series for a thorough overview). After this preamble, I covered the basic method for extracting invocation lists from delegates, then looked at some potential confusion which could be caused by the dual unicast/multicast nature of the .NET delegate implementation. In this part of the series I’ll cover “simple” events or what I’ll often refer to as POCO events. It may be helpful to define what I mean by POCO event, since it’s a term I made up. To me a POCO event is what you get when you are writing your own class and you create an event like this: public event EventHandler ProcessCompleted; Declaring an event this way is similar to declaring an auto property, the compiler sees that you want to setup an event and generates a default implementation for you. In this article I’ll cover what the compiler generates and how to test the compiler implementation. In a later article we’ll see the custom event implementation which backs events in Windows Forms. This article should cover some useful techniques, but remember that I’m re-implementing a system similar to EventApprovals, I’m not actually covering the internals of what made it into ApprovalTests. The code for this demo is available on GitHub, but if you just need to do what this article describes, don’t waste your time with cut and paste, go straight to ApprovalTests and let it do the heavy lifting for you. Simple Events Before jumping headfirst into the .NET event implementation and the reflection required to retrieve the subscriber list, it is worthwhile to consider whether doing any of that is even necessary. If a more simple testing approach can verify your requirements then you shouldn’t hesitate to use that approach. Here, I’ll give an example of a scenario that doesn’t require reflection and consider when this approach is better replaced with a reflection-based approach. Was the Event Raised When Expected? Consider my simple business object Poco: public class Poco { public event EventHandler ProcessCompleted; public int DoWork() { // ...Do very hard work... OnProcessCompleted(this, EventArgs.Empty); return result; } protected virtual void OnProcessCompleted(object sender, EventArgs e) { EventHandler handler = ProcessCompleted; if (handler != null) { handler(sender, e); } } } In this scenario I want to ensure that Poco raises the process completed event when the process completes. Assuming that the tasks Poco performs in DoWork are sufficiently amenable to mocking, then you could write a test that verified the call to OnProcessCompleted in isolation. [TestMethod] public void RaiseCompletedEventWhenProcessCompletes() { bool raisedEvent = false; Poco poco = new Poco(); poco.ProcessCompleted += (s, e) => raisedEvent = true; poco.DoWork(); Assert.IsTrue(raisedEvent); } This test works because it only requires access to operations that are externally accessible. I create a flag and set it to false. Later, I’ll assert that the flag is true. The test will only pass if something in DoWork causes the flag to become true. I set this up by assigning an anonymous event handler to ProcessCompleted. If the event handler is invoked, then my flag changes, and the test passes. I don’t care if there are any other methods in the invocation list, I just care that my test handler has been invoked. Testing Events Without Raising Events I may not be able to raise the event so easily. The tasks in DoWork might not be amenable to mocking. The method might have undesirable side effects, dependencies which I can’t satisfy, or the task simply takes longer than I’m willing to allow when testing. Lets assume that one of the conditions listed above is true for DoWork. If this is the case, then I can’t test by raising the event. If ensuring that DoWork raises the ProcessCompleted event is my only concern, then the invocation list can’t help me either. Membership in the invocation list only indicates that some subscriber is listening to some event when its raised, but listening to an event doesn’t imply the event will ever be raised. I know that Poco uses “POCO” events. If I forget that I wrote Poco and begin to worry that Poco has a custom event implementation that doesn’t register handlers in the expected fashion, it might make sense to hook up a dummy handler then check the invocation list to see if my handler is there. However, knowing that ProcessCompleted doesn’t hide any secrets, testing the invocation list would only prove that Delegate.Combine works correctly. I usually trust the framework to have correct implementations. . If ensuring that DoWork raises the event is part of a requirement that must be satisfied and delivered, then I need to refactor DoWork until it’s testable. I won’t cover that here, but you should check out Llewellyn Falco’s video on the “Peel” technique. Ok, so there are some scenarios where raising the event is the best way to test. When raising the event is impractical, there are still scenarios where you shouldn’t go after the subscriber list, because it won’t actually verify your requirements. Is there any point in reading on? Can’t I—the person who has wanted this functionality for 3 years—even come up with an example of when it’s useful? Actually, I can describe a couple scenarios involving WinForms where this is very useful, but introducing WinForms into the equation without first understanding the basic technique would just introduce complications that would hurt your brain. So I’ll dream up a scenario where this is useful without talking about WinForms and I’ll talk about WinForms later. Start by assuming that Poco is not part of the top-level API, but instead is part of a “primitives” layer. The purpose of the primitive API is to enable multiple programming models against the same functionality (MEF, for example, has a primitive layer). Suppose I have a class called PocoClient that is part of a specific programming model built on top of the more primitive layer. Part of PocoClient‘s job is to hide details like the ProcessCompleted event. PocoClient must subscribe to the ProcessCompleted event and provide a default handler. public class PocoClient { private readonly Poco primitive; public PocoClient(Poco primitive) { this.primitive = primitive; this.primitive.ProcessCompleted += this.LogCompletionTime; } private void LogCompletionTime(object sender, EventArgs e) { // write to log... } } Now the question my test needs to ask is a little different. I’m still assuming that DoWork remains too expensive to call, but I want to know that PocoClient correctly wires itself up to Poco. Observing PocoClient‘s logger for activity is not an option, because DoWork is too expensive. However, if I could see Poco.ProcessCompleted‘s subscriber list, and test that LogCompletionTime is in the invocation list, then I could verify the requirement statically, without running any code other than the constructor. Quick and Dirty Solution At the heart of every event is a delegate. Although you can use any delegate (including Func or Action) when declaring an event the most common practice is to use EventHandler, EventHandler<T> or some pre-defined delegate type like PropertyChangedEventHandler. At the language level, delegates are types, which means they are their own things (with their own rules) and they are peers with classes, structs and interfaces. However, the compiler implements delegates as special classes that all derive from MulticastDelegate. This can be confusing because delegates can share some behavior with classes in certain coding contexts. I’ll start with my simple Poco class. public class Poco { public event EventHandler ProcessCompleted; // ... the rest of Poco ... } Poco has a “Field-like event” which is a horrible name for a concept which shouldn’t be hard to explain. I alluded to this a few paragraphs ago, but here is an analogy: "Field-like Event" : Delegate Field :: "Auto-Implemented Property" : Data Field An automatically implemented property consists of a compiler generated field, and a pair of compiler generated methods ( get and set). A better name for “Field-like Event” might have been “Automatically-Implemented Event”. At least we can be happy that they didn’t call autoproperties “Field-Like Properties”. The Implementation So, what do I get when I declare Poco this way? I can use ILSpy to see what the compiler did in IL, but the C# view looks just like what I wrote in Visual Studio. In this case Reflector can provide better insight if you drill into the event node. public class Poco { // Fields private EventHandler ProcessCompleted; // Events public event EventHandler ProcessCompleted { add { EventHandler handler2; EventHandler processCompleted = this.ProcessCompleted; do { handler2 = processCompleted; EventHandler handler3 = (EventHandler)Delegate.Combine(handler2, value); processCompleted = Interlocked.CompareExchange<EventHandler>(ref this.ProcessCompleted, handler3, handler2); } while (processCompleted != handler2); } remove { EventHandler handler2; EventHandler processCompleted = this.ProcessCompleted; do { handler2 = processCompleted; EventHandler handler3 = (EventHandler)Delegate.Remove(handler2, value); processCompleted = Interlocked.CompareExchange<EventHandler>(ref this.ProcessCompleted, handler3, handler2); } while (processCompleted != handler2); } } } There is a lot going on there for one line of code. The good news is that a lot of it is ceremony around ensuring an atomic update to the backing field, and we don’t need to remember it. I can simplify this code to something roughly equivalent, which just contains the parts that matter to the discussion at hand: public class Poco { // Fields private EventHandler ProcessCompleted; // Events public event EventHandler ProcessCompleted { add { this.ProcessCompleted = (EventHandler)Delegate.Combine(this.ProcessCompleted, value); } remove { this.ProcessCompleted = (EventHandler)Delegate.Remove(this.ProcessCompleted, value); } } } Once again, I’ll refer you to the autoproperty analogy. The compiler gives us a backing field, and two methods. One interesting (and useful) difference between the autoproperty implementation and the autoevent implementation is the name the compiler chooses for the backing field. If the compiler used the same rules for autoevents as it uses for autoproperties, the declaration might look something like this: private EventHandler <ProcessCompleted>k_BackingField; However, instead of using this unspeakable and potentially un-guessable generated name, we have this declaration: private EventHandler ProcessCompleted; This name is much nicer because its easy to guess, it has the exact same name as the event. Why do I like that this field’s name is easy to guess? Simple, this field is just a reference to a delegate. In other words, this field is the thing I want and although it is private and compiler-generated, guessing it’s identity is trivial. I’ll file that useful fact away for now, and take a look at the compiler-generated add/remove methods. Compiler Tricks – An Aside I can’t help but comment on the expanded, Reflector-generated code shown above. It doesn’t compile because the event and the field have the same name. This is a trick the compiler can do when writing IL, but we can’t do when writing C#. When the compiler requests the member field, it uses the ldfld IL instruction. According to MSDN, this instruction loads the value of a field onto the stack. It doesn’t load the value of a property, it doesn’t load the value of a local, it doesn’t load the value of an event (whatever that would mean). It loads the value of a field. So it seems that IL only cares that the backing field as a unique name among other fields, while C# insists that the name be unique among all members whether they are fields, properties, events, etc. So C#’s rules appear more restrictive, which also prevents us from accidentally creating some unrelated field with the same name as our event and usurping the name the compiler wants to use. I’m not sure why they chose to use unspeakable names when creating autoproperties, it seems like the same trick they used on events would have also worked for that feature. The Add/Remove Methods As you’ve seen, the compiler generates a few lines of code that use Delegate.Combine or Delegate.Remove (depending on which method you are looking at) to create a new Delegate, then atomically updates the backing field to point at the new delegate. I could replace this implementation with whatever I want, as long as I follow the language rules enforced by C# (meaning, as long as I use a compatible backing field name). Because anyone (including Microsoft) could decide to use a custom event implementation, you can’t assume that events will always be constructed this way. For example, suppose I needed to create a unicast event. Because all delegates are multicast by default, I would need to create custom add/remove events to implement an event that replaced its backing field rather than using Delegate.Combine to update the backing field. (Of course, a mischievous reader might point out that this wouldn’t prevent a caller from passing in a multicast delegate). On the other hand, most people probably need a reason to do extra work the compiler could do instead. I think its safe to assume, that out in the universe, there are many, many compiler implemented events. A Test that Doesn’t Work and Why I’ll write a test that wires up a handler then see if I can get the invocation list. Notice that I’m using DelegateUtility and Domain, which I created in part one of this article. [TestMethod] public void GetPocoEventInvocationList() { Poco poco = new Poco(); poco.ProcessCompleted += Domain.HandleProcessCompleted; EventHandler pocoDelegate = poco.ProcessCompleted; // does not compile. DelegateUtility.VerifyInvocationList(pocoDelegate); } In case you missed the comment, this test wont compile. The compiler produces an error when we try to assign ProcessCompleted to pocoDelegate The event ‘ProcessCompleted’ can only appear on the left hand side of += or -= (except when used from within the type ‘Poco’) Not to beat a dead horse, but events are a lot like properties. Remembering this helps us to make sense of the compiler error. The local pocoDelegate is null, waiting to get a reference to a delegate. If I have a compatible method, I can use the method group syntax to assign a value to pocoDelegate. An event is just a pair of methods used to update a delegate (usually–they could do something weird or nothing at all). Regardless of what they actually do, they are methods, why can’t we assign them to pocoDelegate? For starters, the event represents a pair of methods. How can the compiler determine which method to assign to the delegate? The compiler can make the distinction between property getters and setters by looking at which side of the statement the property appears on— get appears on the right side, while set appears on the left. One of the points that the compiler error makes is that events can only appear on the left-hand side of the += or –= operator. The operator used determines which method you want when you try to use add or remove, but you can’t use those operators to help the compiler make a decision about which delegate to generate and assign to the local variable. Finally, even if you could tell the compiler which method to use, it wouldn’t matter. Neither add or remove can be assigned to the local variable: the methods do not have signatures compatible with EventHandler. It all comes down to this: the only thing you can do with an event is choose between calling add or remove and the only way to make that choice is by writing down the appropriate operator. Confusing Duality Its easy to get confused about what an event is because the compiler/IDE is “helpful” in other ways. From within the class, I can still use += or -= (or AddHandler and RemoveHandler if you’re writing VB). But, inside the class I’m also allowed to use assignment in either direction: var handler = this.ProcessCompleted; The compiler can see that this statement doesn’t map in any logical way to add or remove. So, it helpfully treats the event like a property (or a field if you prefer) and gets a reference to the invisible, private, backing field and puts it into handler. I’m not saying it is a property. It just looks a hell of a lot like one when you are writing code inside your class. Likewise, this statement feels a lot like a property setter: this.ProcessCompleted = null; This detaches ProcessCompleted‘s handler into the void, and is perfectly legal. So this is the what the second half of the compiler error means. except when used from within the type ‘Poco’ The illegal statement that I made in my test would be perfectly legal if I did it from within the class. With that insight, I can think of a way to rewrite my class so that my test could run. I’ll add a public method that exposes the delegate. public class Poco { public event EventHandler ProcessCompleted; public EventHandler GetProcessCompletedHandler() { return this.ProcessCompleted; } // ... I’ll go ahead and admit up front that this is a degenerate solution with an odor of indecent exposure, but for now my interest is only whether it works. An update to the test should reveal the answer: [TestMethod] public void GetPocoEventInvocationList() { Poco poco = new Poco(); poco.ProcessCompleted += Domain.HandleProcessCompleted; EventHandler pocoDelegate = poco.GetProcessCompletedHandler(); DelegateUtility.VerifyInvocationList(pocoDelegate); } This test compiles and runs, so far–so good. Here are the results: [0] = Void HandleProcessCompleted(System.Object, System.EventArgs) While exposing the event handler hardly qualifies as a best-practice, I can see that the test works as expected. I can verify that the functionality in PocoClient is implemented correctly without attempting to call DoWork. It’s quick and dirty, emphasis on the dirty. But I’m not done, because this quick and dirty method only works when I control both ends of the subscription. What if I don’t? If you don’t control the class the class the event is declared on, you’ll have to use reflection to get your hand on the delegate. This will be the subject of the next post in this series. Relationship with EventApprovals There isn’t a strong relationship between this quick and dirty technique and what you will find in EventApprovals. Exposing the delegate is only necessary if you don’t know how to retrieve it with reflection, EventApprovals uses the reflection approach, and you can too by calling ApprovalTests.Events.EventApprovals.VerifyEvents. Up Next The next article in this series will be “Beyond the Event Horizon: Events You Don’t Own”. That article will cover how to retrieve event handlers using reflection and then expand the technique to include taking a complete inventory of an object’s events.
https://ihadthisideaonce.com/2012/09/02/beyond-the-event-horizon-event-basics/
CC-MAIN-2020-05
refinedweb
3,136
53.81
Introduction: Vibrating Timekeeper I made a watch without a face. Instead, the time is given every quarter hour through a series of pulses on a vibration motor, in the same format as a grandfather clock. So, if it is 3:15, then the motor will make three long vibrations followed by one short one. Pretty cool, right? You will never have to look down at your phone or watch again. The To build this project, you will need the following Step 2: Add Socket and Battery Holder to Stripboard Place the coin cell battery holder close to the edge, and place the IC socket sideways next to it, as close as possible. Note the orientation of the battery holder and socket, because the attiny needs the power from the battery to connect to it's voltage in pins, and it is easy to solder an under board jumper to the power lines under the board to power it if the voltage line is closer to the attiny's voltage line, etc. On my board, there are power rails which make power connection easy, if you have one of these make sure the socket straddles it. Step 3: Populate More Components Add the transistor to the board, near the upper edge, and as close to the chip socket as possible, without coming in contact with any of the socket's pins/stripboard connection lines. Bend the 220 ohm resistor and the diode's legs as shown in the picture, to save space, and place them as shown in the picture, on the transistor's strips. Make sure the transistors flat side is facing the resistor and diode. The, take the 3300 ohm resistor and connect one side to the strip leading to the transistors middle pin, and the other side connecting to the corner of the chip socket. Notice where it is connected to the socket, because this is important and if the wrong pin connection is made, then the motor will not work. Step 4: Jumper for Joy Next, wire jumpers will be needed to connect the transistor to ground. Take a little tiny bit of scrap wire, strip both ends, and bend both stripped ends at a 90 degree angle. Insert this wire between the transistor's far left pin (with the flat side facing you) and the boards ground, which is in this case the lower rail. It's easier to see the two rails on the back of the stripboard. Step 5: Populate Pushbutton+friends Add the pushbutton to the opposite side of the board as the transistor. Insert this up-and-down, not sideways, as shown in the picture. Then, connect the far left side with a jumper, the same style as in step 4, to the upper power rail. Add the final resistor between the far left pin of the pushbutton and the lower power rail. Last but not least, add a jumper between the far left side of the pushbutton and the third pin from the lower left on the IC socket, like in the picture below. Step 6: Solder It Up Whew! Now the bottom of the board should have a forest of leads on it. Solder all of the leads at the bottom to their respective holes and clip them. Now the board should be looking nice and neat. Be sure to solder all of the pins in the socket to the board too. Step 7: Power Rails On the back of the board there should be two rails running horizontally across. These are the power rails, there should be some components soldered to these. Following the correct orientation of the coin cell holder, connect the pins to the respective power rails via solder jumper lines on the bottom. Pay close attention to the next step. Connect the pin on the top left of the IC socket to its nearest power rail, it should be the top one. Do the same with direct opposite (bottom right) of the IC socket, connecting that to the lower power rail. If you get lost, look at the picture. Step 8: Vibration Motor Take the leads of your vibration motor. Connect one(it does not matter) to the far right pin rail of the transistor, and connect the other one to the upper power rail. Stop now and review all of your connections, and insert the coin cell battery. Nothing should happen and if the motor is spinning/some some components are warming up, then double check your wiring with the diagram and the instructions. Sometimes solder can pool and flow over to other areas of the breadboard. Secure the motor with a dot of hot glue. Step 9: Program the Attiny Now you might be thinking when the attiny comes into play. Before it is added to the 8 pin socket, it must be programmed. It would be a waste of space to list the details of programming the attiny85 with an arduino, so take a break now and follow these directions. Flash the program file below. Step 10: It's ALIVE Put the coin cell battery in the holder, and slot the attiny in the socket, the dot on the chip should be on the lower left of the socket. Press the button, and it should start vibrating the time! Fee free now to carve away the unused section of the board. Step 11: It Didn't Work If the board was powered up correctly and nothing is happening, try this. First, double check the connections with the instructions and the diagram. If something looks out of place, then go back and fix that. Use a fresh coin cell battery. Re-upload the code and ensure that the programming was successful. Press an LED on the power rails, negative side facing down. If it fails to light up, then the connections between the board and battery are bad. Press an led on the top left and lower right of the atmega, negative side on the lower right pin. If it fails to light up then the connections between the chip socket and breadboard are bad, or the chip isn't oriented correctly. Remember that the dot on the attiny faces the lower left. Press an led on the far right and far left pins of the transistor, negative side on the far right. If it fails to light up the problem lies within the transistor, if it lights up then the vibration motor is bad. Leave a comment and I'll reply. Step 12: Going Further/a Final Note Admitably this project has a lot of flaws, which leaves a lot of opportunity for the community to improve on it. I did not design a band or case for this watch, so it is pretty delicate. This watch rips through coin cell batteries at a rate of one per 10 hours, possibly some power saving firmware would help? Because the attiny keeps track of time on its own, it is rather inaccurate. A external RTC would benefit this project, however the DS1307 needs 5 volts. Any suggestions? A small lipo battery in place of the watch battery would save a lot of space. On a final note, I read a story about designers who created a watch with no display and a vibrator motor instead(just like me!) but they programmed their watch to vibrate every 5 minutes, creating a unique perspective on time. I promptly copied them, the firmware is below(beware, untested) with some experimental power saving stuffs. This is my first technology instructable, any suggestions? Please comment below. Participated in the Supercharged Contest Be the First to Share Recommendations 6 Comments 2 years ago If you try to compile the sketch GranfatherClok.ino with a recent version of the Arduino IDE (or any alternate IDE) on Windows, you will get a compilation error: The solution is to change line 1 of the sketch to: #include <TimeLib.h> @qquuiinn, if you read this, please update the file so that others won't encounter this error. 5 years ago Innovative Project ! BTW, did you take this idea from Mr. Robot Television series ?! :) 7 years ago The battery lasts for about 11 hours, but with some new code I made it should last for a few years 7 years ago on Introduction Wowwww this is amazing, thanks for sharing. In the market exist a watch doing exactly the same and cost more than 100 bucks. this is so sellable, how can I change the frequency of vibration ? an aprox which is the battery duration? 7 years ago on Introduction You just need to add a 32kHz watch crystal. See AVR4100, the '85 datasheet or any of the "How to add a watch crystal to the ATtiny85" tutorials. As a side effect running it slower will make the battery last longer. Reply 7 years ago on Introduction Agreed. The internal resonators on these MCUs are not reliable enough to give you an accurate time after 12 to 24 hours. It's a fun project though!
https://www.instructables.com/Vibrating-Timekeeper/
CC-MAIN-2021-31
refinedweb
1,510
79.5
. As this is quite a lot of information, I have divided it up into three parts: - Part 1 – Mosquitto up-and-running and creating a Python subscriber client with paho-mqtt - Part 2 – Set up the ESP8266 boards for publishing data with MQTT - Part 3 – Storing measurements locally and in the cloud Part 1 – Mosquitto up-and-running and creating a Python subscriber client Quality-of-Service levels that guarantees that messages are not lost. For my setup I use QoS 0, i.e. “fire-and-forget”. As I only send non-critical sensor data it does not matter if a measurement is missed or duplicated. For more details on MQTT I really recommend this tutorial by HiveMQ: Installing and testing Mosquitto I’m using Raspbian as OS on the Raspberry Pi, and to test Mosquitto I first need to install the broker. The default Raspbian package repository has a very old version of Mosquitto. To setup the repository with a newer Mosquitto, follow the instructions in this link: sudo apt-get install mosquitto This will install and activate a mosquitto broker service. You can check that it is running with: service mosquitto status When testing the setup it can be better to start mosquitto manually and get the print outs directly to the terminal window. To do this we need to stop the mosquitto service and start it manually with verbose output: sudo service mosquitto stop mosquitto -v To try out Mosquitto via the bash I install the Mosquitto cmd line clients: sudo apt-get install mosquitto-clients Now, to test publish and subscribe I start up two ssh sessions to the Pi from my laptop (if you operate the Pi directly via the Raspbian desktop you can just start two terminal windows). In the first ssh session/terminal, I start a subscriber for messages with topics that match either of two patterns: mosquitto_sub -h 192.168.1.16 -t 'Hellos/+' -t 'Goodbyes/+' 192.168.1.16 is the IP where the broker is running, i.e. the IP of my Raspberry on my LAN. When running the clients on the same host as the broker, the -h host option is not needed, but I have specified it for clarity. The mosquitto_sub command starts a subscription to topics where the first parts are “Hellos” or “Goodbyes”. The sub command will echo any messages that are received. You can use the -v option on mosquitto_sub to echo the topic of the message that is received. In MQTT you can have any number of topic levels separated by forward slashes that together form a topic. The topic levels are case-sensitive and can have any form that (preferably) describes the data being sent. For example: Home/TopFloor/Temperature Home/GroundFloor/Temperature Home/GroundFloor/Humidity When subscribing to a topic, you can use wildcards: + matches any single topic level, like Home/+/Temperature # matches several topic levels at the end, like Home/# To see if the broker and the subscriber are working, let’s publish some messages. In the second ssh session/terminal window: mosquitto_pub -h 192.168.1.16 -t 'Hellos/Pi3' -m 'Hello from Pi3 via MQTT' mosquitto_pub -h 192.168.1.16 -t 'Hellos/Pi3' -m 'Hello again from Pi3 via MQTT' mosquitto_pub -h 192.168.1.16 -t 'dummysubject' -m 'Is someone listening to me?' mosquitto_pub -h 192.168.1.16 -t 'Goodbyes/Pi3' -m 'Goodbye from from Pi3 via MQTT' If everything is setup correctly, the messages will be echoed in the subscriber terminal. Publish 1,2 and 4 will be received by the subscriber. Publish 3 uses a subject that the subscriber does not listen to, so it is not pushed from the broker to the subscriber. Creating a Python subscriber client To do something useful with the received messages, we can use Python and the paho-mqtt client library. To get it installed for Python 3.*: sudo pip3 install paho-mqtt We can check that it works with Python 3 by starting the Python 3 REPL and making an import: import paho.mqtt.client as mqtt You use the paho-mqtt client in this way: - Create a client instance: client = mqtt.Client() - Connect to a broker using one of the connect() functions: client.connect([IP of broker]) - Call one of the loop() functions to maintain the connection with the broker. The simplest one would be: client.loop_forever() - Use subscribe() to subscribe to a topic and receive messages: client.subscribe([TOPIC]) - Use publish() to publish messages to the broker: client.publish([Topic]) - Use disconnect() to disconnect from the broker: client.disconnect() Here is a complete example: The script creates a client instance and attaches handlers for on_connect and on_message callbacks. It then connects to the broker and enters the loop where notifications will cause callbacks to the defined methods. By having the subscription statements in the on_connect method, the client will re-subscribe to the desired topics if the broker goes down and comes back again. If you start subscriber.py in one terminal window, you can use the mosquitto_pub calls from another terminal to check that the mosquitto publisher, the mosquitto broker and the paho mqtt client are wired correctly. Next step The next step is to program the ESP8266 boards so that they publish sensor data to the Mosquitto broker. This is described in the next post: 7 Comments […]… […] […]… […] […]… […] […]… […] […] an introduction to my MQTT setup, see these posts: A self-hosted MQTT environment for Internet of Things – Part 1 A self-hosted MQTT environment for Internet of Things – Part 2 A self-hosted MQTT environment for […] Hi, nice article. Here’s a small error (missing opening quote before the message): mosquitto_pub -h 192.168.1.16 -t ‘Hellos/Pi3′ -m Hello from Pi3 via MQTT’ instead of mosquitto_pub -h 192.168.1.16 -t ‘Hellos/Pi3’ -m ‘Hello from Pi3 via MQTT’ Hi, thanks for finding and pointing out the typo. I have now corrected the text.
https://larsbergqvist.wordpress.com/2016/06/24/a-self-hosted-mqtt-environment-for-internet-of-things-part-1/
CC-MAIN-2018-26
refinedweb
992
60.55
The browser for S60 enables mobile phone users to browse the World Wide Web. Web pages can be implemented in Hypertext Markup Language (HTML), Extended Hypertext Markup Language (XHTML), or Wireless Markup Language (WML). A Browser Control is a browser: The Browser Control API is compatible with any application that complies with S60 3rd Edition. The Browser Control API complies with the following standards: Basic Browser Control functionality includes: When using the browser control in your application, you need to use the CBrCtlInterface interface. For this interface you need to include brctlinterface.h in your cpp file. #include <brctlinterface.h> you also need to link against browserengine.lib which you can do by putting the following in your MMP file. LIBRARY browserengine.lib The file brctrlinterface.h is filled with usefull comments, its probably better to read that file in order to get more familiarity. For more information see: Browser Control API Developer's Guide
http://wiki.forum.nokia.com/index.php/Browser_Control_API
crawl-002
refinedweb
155
57.77
Components and supplies Necessary tools and machines Apps and online services About this project Project Summary The Project This project is a combination of the many smart fridges and pantry's of the past. The idea is to combine all of them while also introducing Amazon's DRS system through Alexa and the Echo products as a bonus feature. The process will be simple: - As you put the item you purchased via Amazon Prime Pantry in your fridge, it will be scanned by an RFID tag scanner. This will detect an assigned UID, which will then correspond to a given ASIN. The ASIN will give details VIA web connection, such as weight, size, product name, etc. - Once scanned, the item will then be put onto a shelf with a built in scale. It will keep in mind the item that is going on, then detect the weight change, and track a percent and amount of that given item left. - These details will then go to an Amazon-hosted webpage where your Alexa can respond to simple questions, such as asking how much of an item is left, or asking Alexa to order something for you. She can also track expiration dates and give warnings. - The system will not be solely based around Alexa, though. It can have a small LCD screen that will display the information, and will allow programming to reorder if a certain item drops below a pre-determined amount. - Additionally, when you order a Pantry Box, it will include an RFID tag that you can scan to your pantry. This will assume that all of the items are full and ready to stock, and in turn saves time. What will make this system better than others? While there have been smart pantry systems in the past, many of them required too much user input, and many do not have the interconnections between the sensors that will be in this product. RFID smart pantrys only track what has been put in the fridge, not what is left; Scale based pantry's only track weight and require the user interface with it and record what was put in. As stated above, not only will this project fix those, but it will also interweave it into a service, this case is Amazon. Building the Project: Part 1, Materials: This is very vague because it can be done in many methods, this is just the method I chose to do. - Arduino of choice (Preferably AVR over STM) - MFRC522 RFID scanner and Tag - ESP8266, source of internet connection. - Load cell ( I recommend one over 20 kg load) - HX711 (For load cell amplification) - 5 Volt Power Supply - Ethernet cable (If card) - Breadboard for testing - Wire - Acrylic or plastic for scale Part 2, Libraries: Again, much of this is up to personal choice. I am using these because I found them first and they worked for me best. - Wire Library, found in Arduino IDE - SPI library, found in Arduino IDE To install the ones you do not have (HX711, ENC, MFRC) just unzip then install them into the Library Folder of where your IDE is installed. For the Ethernet you do NOT have to delete the Ethernet folder, instead, all you have to do is rename all "Ethernet.h" Libraries to "UIPEthernet.h" Update: Because of the library being used, this will only work with utilization of an ESP8266 module Part 3, Other Resources: - Lots of time. Lots and Lots of time. Seriously though, I have put in 150+ hours over the past 4 weeks and it is no where near done, and half of those hours were from just simple wiring issues and code bugs that were not even relevant to the whole project. This is why I am bumping up there experience level. - Along with time, patience. For obvious reasons. - Google Developer/API Part 4, Wiring: I'll post a diagram later, but for now: RFID on UNO #1 ( I will refer to it as #1 for the rest of the writeup) RFID Arduino VCC 3.3 Volts--While it will work on 5V, IT IS HIGHLY ADVISED AGAINST RST Any digital lower than 10; I have it assigned to pin 9 GND Ground on Arduino, who would have guessed MISO PIN 12 on UNO, 50 on MEGA MOSI PIN 11 on UNO, 51 on MEGA SCK PIN 13 on UNO, 52 on MEGA SS PIN10, however this can be changed. IRQ Unassigned. Ground Also to Ground on Arduino #2 Serial Connections (WIRE) among Arduinos #1 and #2 UNO #1 MEGA Analog_A4 20 Analog_A5 21 Ground Ground Ethernet on UNO #2 (again, will refer to it as #2 for the whole thing) Updated: The Library being used does not work with Ethernet, instead it uses the ESP module. Similar to the HX711 below, wiring the ESP will be completely dependent on if you have a 3.3v power supply, or if you need a voltage divider. ETHERNET Arduino VCC 3.3 Volts--While it will work on 5V, IT IS HIGHLY ADVISED AGAINST RST Unassigned how I have it right now, I might add a reset function later GND Ground on Arduino, again MISO PIN 12 on UNO, 50 on MEGA MOSI PIN 11 on UNO, 51 on MEGA SCK PIN 13 on UNO, 52 on MEGA CS PIN10, Not sure yet if it can be changed. I haven't snooped that far CLOCKOUT Unassigned. INT Unassigned. WOL Unassigned. Ground Also to Ground on Arduino #1 HX711: I would check the datasheet you get it with, whether it is Amazon, Ebay, or anywhere else. Two of mine were marked incorrectly and did not work as planned. Trial and error will get you there! Part 5, Setting up all of the forwarding accounts. - First off, set up a Google Drive account. I will start assuming you already have one, and Create a new forms. The key here is that for any data variables you want, make a question and have it written as a short/long answer. - Then, when it is done, go ahead and click "send" and copy the link. The long list of numbers/letters in that URL is the FormKey. Save it for later. - Next up is to right click the text entry box and "inspect element." This will bring up the source for the webpage, as well as the target entry ID's. Find then, and save for later. You now have two of the parts for making an "active/autofill link for Forms." All to do next is create it! This form is what has worked for me: The long chain of numbers at the end is irrelevant for what knowledge I know, I just know it needs it in order to "submit" the form. Congratulations, now you can submit through Google forms with just a URL! Part 6, setting up an account with Pushingbox: I will reiterate why we can't just use that Google forms link with our Arduino, but for a more in depth reason, search "Arduino SSL" or look at my update from 2/21 below. That "HTTPS" you see every so often means that website has an encryption with it, for a provided security. Arduino's cannot handle that Encryption themselves, so we call upon a non-SSL service to translate, essentially. Obviously the first thing to do is set up the account. It is straightforwards and simple, no private details, no payments. You can also set it up through Google. The first page to visit is the "My Services" Tab, then "New Service." Scroll to the bottom of the list and choose "CustomURL." Name it, paste the link UP TILL "formRESPONSE=..."in the middle, and choose GET. POST will work, there is no difference here. I just choose GET. Now that we have the Host URL submitted, we need to create a "Scenario." Go to My Scenarios, click "new." Fill out the details, as seen below: It will spit out a "DEVID" that you will use in a link to trigger the Google Forms URL you submitted. Subsection: Variables: Notice how on the picture above, it has the entry as $Test$, not just Test. This is because I have chose to write Test as a variable. On top of adding this tag, you must add &VARIABLE_NAME=VARIABLE into your code, but there is more on that later. What this does it it plays telephone between the sensor, Arduino, Pushing Box, and Google. Your final link through PushingBox should be something like this: But, congrats! Now you have a Non-SSL Link that you can use to submit. If you want to test it, you can just throw it into your WebClient Example and see if it works! Part 7, Arduino #2 Code: These next two code files will be without a "Wire.h" library, just so you can test your link. Here is an example code for the "EtherCard" Library, which is no longer updated. Update: I changed the first one to the Wifi module/ESP. I ended up using it for the final project as the DRS library I am using uses ESP, not Ethernet. Note that these are just for using Pushingbox, scroll down further for the complete code: //// // // General code from for Arduino WiFI shield (official) v1.0 // //// #include <WiFi.h> ///////////////// // MODIFY HERE // ///////////////// char wifissid[] = "WIFI_SSID"; // your network SSID (name) char wifipass[] = "WIFI_PASSWORD"; // your WPA network password char DEVID1[] = "Your_DevID_Here"; //Scenario : "The mailbox is open" //Numeric Pin where you connect your switch uint8_t pinDevid1 = 3; // Example : the mailbox switch is connect to the Pin 3 // Debug mode boolean DEBUG = true; ////////////// // End // ////////////// char serverName[] = "api.pushingbox.com"; boolean pinDevid1State = false; // Save the last state of the Pin for DEVID1 boolean lastConnected = false; int status = WL_IDLE_STATUS; // the Wifi radio's status WiFiClient client; void setup() { // initialize serial: Serial.begin(9600); pinMode(pinDevid1, INPUT); // attempt to connect using WPA2 encryption: Serial.println("Attempting to connect to WPA network..."); status = WiFi.begin(wifissid, wifipass); // if you're not connected, stop here: if ( status != WL_CONNECTED) { Serial.println("Couldn't get a wifi connection"); while(true); } // if you are connected, print out info about the connection: else { Serial.println("Connected to network"); } }); client.println(" HTTP/1.1"); client.print("Host: "); client.println(serverName); client.println("User-Agent: Arduino"); client.println(); } else { if(DEBUG){Serial.println("connection failed");} } } And here it is for the UIPEthernet Library, which I am personally using: // // General code from for Arduino + Ethernet Shield (official) v1.2 // //// #include <SPI.h> #include <UIPEthernet.h> ///////////////// // MODIFY HERE // ///////////////// byte mac[] = { 0x00, 0x13, 0x02, 0xCC, 0xD1, 0x19 }; // Be sure this address is unique in your network //Your secret DevID from PushingBox.com. You can use multiple DevID on multiple Pin if you want char DEVID1[] = "Your_DevID_Here"; char DEVID2[] = "Your_DevID_Here"; //Numeric Pin where you connect your switch #define pinDevid1 3 // #define pinDevid2 2 // // Debug mode #define DEBUG true ////////////// // End // ////////////// char serverName[] = "api.pushingbox.com"; boolean pinDevid1State = false; boolean lastConnected = false; // state of the connection last time through the main loop // Initialize the Ethernet client library // with the IP address and port of the server // that you want to connect to (port 80 is default for HTTP): EthernetClient client; void setup() { Serial.begin(9600); pinMode(pinDevid1, INPUT); pinMode(pinDevid2, INPUT); // start the Ethernet connection:()); } // give the Ethernet shield a second to initialize: delay(1000); } void loop() { //// // Listening for the pinDevid1 state //// digitalWrite(pinDevid1,); //If you are using a Variable, decomment the next line: //client.print("$VARIABLE_NAME="); //client.print(VARIABLE); client.println(" HTTP/1.1"); client.print("Host: "); client.println(serverName); client.println("User-Agent: Arduino"); client.println(); } else { if(DEBUG){Serial.println("connection failed");} } } Both were provided on the API link of the website. Once this is done, you will be able to do this: I apologize for the quality and the vertical presentation, I was not intending on using this video. Part 8, Amazon AWS setup: Because this is based off of the library written by Brian Carbonette, a suggestion would be to read his write up on his DRS button, Arduino based. It essentially boils down to this: - First off all, all of the libraries must be installed. This includes the Wifi101 Library, ArduinoJson Library, and the AmazonDRS library. - Next you must register on Amazon Web Services/Developers console - Then go to the Login With Amazon page (LWA) to create a security profile. This is essentially the gateway for allowed clients or devices to login to your Amazon account. This would be helpful, considering that in order to buy from Amazon, you must log in. - The next step is to sign up for the DRS push services, using that LWA you just created. Unfortunately, one of these must be created for each item, but they are very quick to make. - While making the DRS push service, you will need AWS SNS service. This is a "Simple Notificiation Service." All it does is send alerts, in our case, alerts about DRS. To make SNS, go to the developer console and search "SNS," Then start, then under topics, "Create new Topic." You will now have the Topic ARN for the above DRS. - Fill out all the details it asks for (Device name, picture, etc) and eventually you will find yourself on this page: This is what we have wanted since we started this goose chase! This is the page for defining slots for your device. Again, because this is a private venture, we have to go about creating slots manually. This is slight hassle, but realistically it should not take longer than a minute or so per item if you have the ASIN. Returning to the details page gives us the overview and the slot ID. This is needed to insert into the next bit of code below! So save it! Last but not least, we need to return the the Login With Amazon and incorporate it into our Arduino. Once again, Mr. Carbonette has made it quite easy to do this. He has provided an example script that calls all of the security profiles and keys to provide a oAUTH security token. This will then go in the "AmazonTokens.h" and will provide a clearance for you to access Amazon! Just for a heads up, we can use this Token system here, but for Alexa, google no longer will accept the tokens. We must use a full key system! Yikes! Part 9, The Final Code, Final Stretch! As said up above, lets introduce you to the full code now: Arduino #1, the RFID and weight sensor Arduino #include <SPI.h> #include <MFRC522.h> #include <Wire.h> #include <HX711.h> #define SS_PIN 10 #define RST_PIN 9 byte nuidPICC[5]; // Init array that will store new NUID byte weight; byte percentRemaining; byte weightSpecific; byte cardData; static String slotId; //Insert given Amazon DRS slot id here int t; int c; int x; int v; int b; int z; HX711 scale; MFRC522 rfid(SS_PIN, RST_PIN); MFRC522::MIFARE_Key key; void setup() { //DOUT is A1 //SCK is A0 scale.begin(A1, A0); scale.set_scale(2000.f); Wire.begin(); // join i2c bus (address optional for master) Serial.begin(9600); SPI.begin(); // Init SPI bus rfid.PCD_Init(); // Init MFRC522 pinMode(7, INPUT); } void loop() { if ( ! rfid.PICC_IsNewCardPresent()) return; // Verify if the NUID has been readed if ( ! rfid.PICC_ReadCardSerial()) return; // Store NUID into nuidPICC array for (byte i = 0; i < 4; i++) { nuidPICC[i] = rfid.uid.uidByte[i]; } scale.tare(); z = nuidPICC[0]; x = nuidPICC[1]; c = nuidPICC[2]; v = nuidPICC[3]; Serial.print(nuidPICC[0]); Serial.print(nuidPICC[1]); Serial.print(nuidPICC[2]); Serial.print(nuidPICC[3]); Serial.print(nuidPICC[4]); Serial.println(); delay(5000); weight = scale.get_units(), 1; rfid.PICC_HaltA(); // Stop encryption on PCD rfid.PCD_StopCrypto1(); delay(500); Wire.beginTransmission(8); // transmit to device #8 Wire.write(z); // sends one byte Wire.endTransmission(); // stop transmitting. This must be repeated for all bytes. Wire.beginTransmission(8); Wire.write(x); Wire.endTransmission(); Wire.beginTransmission(8); Wire.write(c); Wire.endTransmission(); Wire.beginTransmission(8); Wire.write(v); Wire.endTransmission(); Wire.beginTransmission(8); Wire.write(weight); Wire.endTransmission(); } This code is pretty straightforwards. IF there is an RFID card detected, the Arduino then transmits the first four digits of the ID of this card to the second, as well as the weight after a delay of 5 seconds. This will allow the user to set the item in in the pantry or fridge and have the scale get an accurate reading. This number can be manipulated, or a button can simply be added if desired. Arduino #2, the Internet enabled and DRS decoder Arduino. // Arduino DRS System--Sender // Made for Arduino Mega, UNO will not have enough memory // By Tanner Stinson, with inclusion of libraries from /////////////////////LIBRARIES TO INCLUDE/////////////////////////////////////////////////////// // // // //////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////LIBRARIES/////////////////////////////////////////////////////////////////// #include <SPI.h> //For SPI devices, like Ethernet #include <UIPEthernet.h> #include <Wire.h> //Needed if you have two arduinos connected via I2C #ifdef ESP8266 #include <ESP8266WiFi.h> #include <WiFiClientSecure.h> #else #include <WiFi101.h> #endif #include "AmazonDRS.h" ////////////////////////////DEFINITIONS///////////////////////////////////////////////////////// #define slotNumber 1 //This will vary for multi slot devices - dash buttons typically only serve one product/slot #define DEBUG true //Activate DEBUG mode ////////////////////////////DATA//////////////////////////////////////////////////////////////// byte mac[] = { 0x00, 0x13, 0x02, 0xCC, 0xD1, 0x19 }; //Needed if Ethernet char DEVID1[] = "v61EFDA501D771ED"; //Device ID for using PushingBox char serverName[] = "api.pushingbox.com"; //Keep int r; byte UID[2]; //This is the data coming in, the UID and the weight int i = 0; // Counting to change array slot for UID and weight int a; char itemName; char SSiD[] = ""; // SSID, or network name char PASS[] = ""; // WiFi password static String slotStatus = "true"; // As stated, status must be true to order. static String slotId; //= "11111111-2222-3333-4444-555555555555"; //Insert given Amazon DRS slot id here int status = WL_IDLE_STATUS; byte weightFinal; byte weightSpecific; //////////////////////// Initializers////////////////////////////////////////////////////////// //EthernetClient client; //Needed for Ethernet, but ethernet is not compatible with DRS capabilities. Can be used for Google Forms still WiFiClient client; AmazonDRS DRS = AmazonDRS(); void setup() { Wire.begin(8); // join i2c bus with address #8 //Wire.onReceive(receiveEvent); // register event Serial.begin(115200); // start serial for output, needed to be at 115200 for ESP ///////////////////////////////ETHERNET, NOT NEEDED FOR ESP AND DRS///////////////////////////////////////////////////////////// /*()); } */ /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// #ifdef ESP8266 WiFiClientSecure client; #else WiFiSSLClient client; #endif DRS.begin(&client); // Start up DRS code Serial.println("Connected!"); // Begin connection to WiFi while (status != WL_CONNECTED) { status = WiFi.begin(SSiD, PASS); delay(3000); status = WiFi.status(); } Serial.print("Connected to "); Serial.println(WiFi.SSID()); delay(1200); DRS.retrieveSubscriptionInfo(); slotStatus = DRS.getSlotStatus(slotNumber); slotId = DRS.getSlotId(slotNumber); } void loop() { // The meat and potatoes of the program, the loop delay(100); Wire.onReceive(receiveEvent); if ( a >= 4) { // a is just a variable defined in the function 'recieveEvent' that signifies when the wire transfer is done. When is done, or when greater than the 4th value in the array, it continues. /////////////////////////////////////////////////////////////////DATA FOR INDIVIDUAL RFID TAGS/////////////////////////////////////////////// ////////////////COPY AND CHANGE FOR EACH RFID CARD//////////// if (UID[0] == 82 && UID[1] == 83 && UID[2] == 107 && UID[3] == 133) { Serial.println("Chees Its"); itemName = "CheeseIts"; slotId = "11111111-2222-3333-4444-555555555555"; weightSpecific = 30; // Because this is a prototype, we will have to measure this ourselves. This is the full weight of the item, as our scale reads it. If this was a service through Amazon, it would be provided. Then again, // much of this would be provided ;) } ////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// weightFinal = UID[4]/weightFinal; if (client.connect(serverName, 80)) { if (DEBUG) { Serial.println("connected"); } if (DEBUG) { Serial.println("sendind request"); } client.print("GET /pushingbox?devid="); client.print(DEVID1); client.print("&itemName="); client.print(itemName); client.print("&weight="); client.print(weightFinal); client.println(" HTTP/1.1"); client.print("Host: "); client.println(serverName); client.println("User-Agent: Arduino"); client.println(); delay(3000); client.stop(); } delay(10000); if (!client.connected()) { Serial.println(); Serial.println("disconnecting."); client.stop(); //return; } else { loop(); } if ( UID[4] <= 10) { if (slotStatus == "true") { DRS.requestReplenishmentForSlot(slotId); // Calls for order via Amazon AWS and the DRS service } else { Serial.print("Slot for"); Serial.print(itemName); Serial.println("cannot be retrieved."); digitalWrite(4, HIGH); } } } a = 0; } void receiveEvent(int howMany) { float x = Wire.read(); UID[i] = x; Serial.println(UID[i]); i++; a = i; } This Arduino is a bit more complicated than the previous. In the Setup, the ESP and DRS system are initialized. Upon a "recieveEvent" event in the loop, the ID and weight are decoded and saved into a new array, still UID[]. The Arduino then sends these two to the Google forms, where they are saved for access later via phone, computer, or Alexa. The Arduino then does a quick check to see if the amount of the item is less than 10 percent. If so, it goes ahead and attempts to reorder. If there is no internet connection or the slot cannot be called, it gives an error. Each UID will have to be programmed into the code as a new item, along with it's slotId and "specific weight." This is the weight that the scale measures it at. This issue of running a given weight is composed of two issues. The first is package weight and how it isn't factored into the food weight, and the second is that when transferring data over Wire, only integers can be sent and received. This is not an issue if you have all the SPI devices on one Arduino, though you may start running into other problems like possible interference and memory. Again, Many other add ins can be places. We have an error LED connected to pin digital 4, but a success LED can be added as well. Part 10, Test It Out! That's all! The opportunities are endless, well as long as you have the RFID cards to fit them! Also try expanding your range. The DXF files provided only makea 10"x10" surface, but why not a shelf? Or a shelf of a fridge? Or a full fridge or pantry! What's Next So, What's next? As it seems, this is Version 1. It runs off of two Arduinos: an Uno and a Mega, and ESP8266, and and the extra peripherals. Here is a list of what I would like to change on it for the second version: - Larger surface area, possibly an entire fridge tray - Running on a single Arduino Mega - Adding a third SPI device, an SD card, to store texts of the card ID, specific weight, and item names. It would make it easier to update - Better long term connection to WiFi. This has a tendency to cut out after a given time, it might be code and it might be the module itself, or my WiFi. - Better Construction - Half bridges, spread among the scale, rather than a single full bridge load cell. - Have a 'level-out' system on the scale that auto-detects the final weight of the item placed. - Alexa! Scroll down Errors On a similar topic, errors. While running and building, I noticed a few errors. First of all is the ESP problems. It is very finicky, and quite frankly I can't give a proper way to wire it. It will work for 20 minutes, stop for 5, then work again for 2 hours. This could be the one that I have, too. I also noticed a lot of random errors pop up while compiling. They would be for un-explainable reasons and I would often have to copy the code into a new file, to which it would then work. No clue on this one. Right now I do not have an organization system for the Google Sheets. The Forms Dumps all of the "reads" into an excel, but I would like for it to be able to have a detection of each individual item and repeated items, then sort it out. Alexa/Echo Alright, here is the bonus event I tried doing while waiting for my parts to come in. While the main part of this project was to create a DRS-enabled project, I thought it would be cool if I could get Alexa to read a Google Sheets file, one which had my inventory in it. This would be useful in a kitchen or household. The owner could ask "if he had any butter," "how much butter," or ask to order more butter through a separate DRS system. Here are my attempts at learning how to program Alexa. Starting Off Programming an Echo is not too far off from what we have sign up for previous to this. We will still be using the Amazon Web Services and LWA, but we will also be using the Alexa Skill Set. This is essentially broken into 3 parts: - Intent schema, which define the voice interface and system, - Utterances, which are what words the program will respond to, and - The Lamba function program, which is the program that runs when asked to Here is all three for a very simple "Hello World" function: - Intent Schema, in JSON format: { "intents": [ { "intent": "MyIntent" } ] } - Utterances: MyIntent hello - Index.js: /** * Created by me on 3/16/16. */ exports.handler = function( event, context ) { var response = { outputSpeech: { type: "PlainText", text: "Hello World!" }, shouldEndSession: true }; context.succeed( { response: response } ); }; All this program does is respond with a simple "Hello World!" when the user says "Alexa, Hello." Now this is neat, but now we need something much more complicated, we need a system to READ an excel file, and this means more authentication, but through Google. Go ahead and sign up for Google Developers then click "Enable API," and choose Google Drive or Google Sheets. Click "Credentials" on the left side, then "Create Credentials," and choose the Service Account Key one. You will want to choose "p12" not JSON, as that is what we are using. You will be provided with a download of your security key- SAVE IT. this is the only one you will be provided for his ID and Key, so you don't want to lose it. Using the ZIP I have provided, throw the key in there and edit the "index.js" to match the files and details that you have. If you wish to do the full command prompt base by yourself, you need to install node.js. This will allow you to install the Alexa Skills Kit, Google Spreadsheets, and many more Node Modules into the Modules file. I have it all set up for Sheets, so I suggest you just edit the file ;) Warning: As of 2/25/27, I have yet to have the program call up the Excel Sheet data. I have, however, successfully logged into google and the sheets. I just now need to learn how to read them! Now, Lambda: Lambda is a computing function under the AWS services. It is essentially a cloud service for JS files in the Amazon Cloudbase, for certain triggers. Instructions: - Click "create new" - For your trigger, you should select "Alexa Skills Set." These are the triggers I told you about above. - Throw the ZIP in there! It should work fine. If we didn't need the "Node Modules" file we could just put the code in the inline editor. - Your configuration page should look like this: I also added in a 30 second time out under advanced settings. The only problem I am having with this currently is that every other access command or so, It will not connect. No clue why it is doing this, which is a reason why I added the 30 second time out. It did fix it a little bit, but not as much as I had hoped, considering it was a 2 second time out before. Updates First Update: February 5th, 2017: The rough outline of the project is there, however there are a lot of thins preventing me from moving forwards. I managed to get the RFID to work, which has been the easiest part so far. I used this library, so many thanks to those who put this together. The only issue is that this library is arduino, AVR systems. The Intel Galileo that I am using is a stm32f1 system, and while there are people out there running RFID on the Galileo, most are the 125Mhz systems. I will be moving to Arduino unless I can figure the RFID system and library on the Galileo, although so far I have had no luck in creating my own. However, on the Galileo, I have been (kind of) able to connect it to the internet. What I have been able to do is use the example sketch to determine the IP address, which works. What I have not been able to do is host a web server. Instead, the program resets too fast and does not ping with my computer. Considering it is the example sketch, I have high doubts that the program is the issue, and because there are only the 3 cables coming in and out of the Galileo, it's not there either. Here are my guesses: - The overall problem is the fact that I am on a college campus - It could be that my computer is connected to a WiFi router, but needs to be connected to the same Ethernet as the Galileo. I can test this by using my spare router I have as a switch. - It could be the settings I have on my computer, although I have high doubts because I can't even ping it. The Galileo is not having a problem connecting to the internet, it is just the connection between the computer and the Arduino. In other news, I managed to get the Load Cell to read properly, then one of my friends knocked it off my desk which tore a wire inside. Bummer! I have another one on order and it will get here soon. Once I can get the Galileo to host the webserver, I am going to send the data from the Arduino Uno which is running the RFID card to it. This will let me begin working on Alexa while I try and figure out the ESP8266 for the Arduino, or the RFID for the Galileo. Will hopefully update soon! -Tanner Mini Update: February 6th, 2017 I finally got it the Galileo to connect and transmit to a "server"! The problem was in fact that hosting on Ethernet and loading over wifi would not work. I think this is because Arizona State University has them set up as different networks, which for whatever reason, does not allow them to connect. Anyways, I ran to goodwill and grabbed a second Ethernet cable to connect to my own router(Not exactly allowed to do this, but sometimes for progress you gotta break rules!). This immediately solved the problem. Next up on my plate is still going to be trying to figure out the library for RFID or the ESP8266. Woohoo! Don't worry, I'll eventually post code, too! Right now everything has been down with examples, so there is not much missing! BIG Update: February 7th, 2017 Made big progress the past hour. I decided to do a fresh install of Arduino IDE, along with trying out a Library (for Galileo RFID) that I have not been able to get to work, but I finally managed to get it to work! It is an older version of the on I posted above. I am not quite sure what quite happened, but sometimes clean installs do that! I am going to try putting the newer library in, as I have a code for it already, but that is the one giving me the conflict between the AVR and STM systems. I guess well see! Now I have successfully: - Connected RFID 522 chip to a Galileo, received a UID Serial - Connected, calibrated, and received data from a Load Cell through an MX711 chip. - Connected and hosted a web server via IP on the Galileo(Ethernet). Next will be to create a PHP website that is dependent on data put out through the Galileo, over internet. I am also going to buy a WiFi/bluetooth chip for the Galileo. Next update I will also post my code and schematics for the electronics. Cheers! -Tanner Update: 2/12/2017 This isn't too much of a update involving me as much as links posting to stuff around the web. I found this link while looking for a handshake between AWS and Arduino. unfortunetly, it is "not possible." It's mainly the fact that the software and libraries can't handle it, and the Arduino can't physically handle the authorization process. The second link is somewhat of a disagreement, from AWS itself. It provides a good list of startup kits for IoT, and lists two for Arduino. This require the YUN board, which I don't have, and the SEEDuino and Grove. It also includes information for the Edison board, which I also don't have. Also from what I have read is that I do not think this will be able to handle my requests, yet. I would also like to give a huge thanks to Brian Carbonette and his library for the DRS button system. It's a good stepping stone for what I need to do; Once the RFID tag is read and the weight is calculated, if it is under a certain amount, it will automatically reorder for you. An LCD and button can be incorporated. Alexa comes in as a side job; she can't exactly order for you, but you can ask inventory. \ This appears like it could be a useful article too, however it is only 'pushing' through Alexa, and I need 'pulling.' Another Major update (almost kind of): 2/13/17 So while I've been taking a break from connecting my Arduino to internet and google docs, I decided to spend the last week trying to connect Alexa and my Echo Dot to Google docs. Here's what all I did: - My first step was to load hello world' and get familiar with JS. With Arduino, I am familiar with the code, I can understand almost everything, with the exception of creating libraries. It also helped get familiar with intents and schema, as well as loading node.js and the mode_modules. - Afterwards, I spent some time fiddling around with a calendar reader. This got me acquainted and helped me learn about the different node modules. I also may use this for personal use. - Next, I tried modifying this code to download a text file rather than an ical file. Obviously this didn't work, but it was worth a shot. - After that, I found a simple looking Google Bulletin/docs reader, however I still could not get it to work. - Finally, I gave up on Alexa and branched out to node.js programs instead. This was very very worthwhile! I found two modules, edit-google-spreadsheets and some other module, I can't remember. I combined an example program with 'hello world' to proof it and voila, it worked! The program was limited, though. I have been trying to read the document and it is giving me permissions errors. Google API does not recognize them as attempted log-ins, though, so I am not sure what's going on. my thoughts are that I don't have the access keys set up properly. I'll throw the screen-caps down below, and I am going to post on the AWS forums as well. var Spreadsheet = require('edit-google-spreadsheet'); var Alexa = require("alexa-sdk"); Spreadsheet.load({ debug: true, spreadsheetId: '<numbers-and-letters-from-url>', worksheetName: 'Sheet 1', oauth : { email: 'dev-email@developer.gserviceaccount.com', key: '/Doc Test 2-7d74afca889a' } }, function run(err, spreadsheet) { if(err) throw err; //insert 'hello!' at E3 spreadsheet.add({ 3: { 5: "hello!" } }); spreadsheet.send(function(err) { if(err) throw err; console.log("Updated Cell at row 3, column 5 to 'hello!'"); }); }); 'use strict'; var Alexa = require("alexa-sdk"); exports.handler = function(event, context, callback) { var alexa = Alexa.handler(event, context); alexa.registerHandlers(handlers); alexa.execute(); }; var handlers = { 'LaunchRequest': function () { this.emit('SayHello'); }, 'HelloWorldIntent': function () { this.emit('SayHello') }, 'SayHello': function () { this.emit(':tell', 'Hello World!'); } }; Update 2/10/17 I felt that this update was necessary so I put it on the top. I have successfully been able to load and tamper with examples on Alexa. What I am working on now is to incorporate HttpRequest into a program, so that Alexa can download a public text file, break it into lines, then read from it. I am also moving strictly to Arduino, rather than Galileo. It seems as though each board can pick two out of the three: RFID, internet, or the Load Cells. I'm going to use Google Docs as a "handshake" for both Arduino and Alexa. Interfacing Arduino and DRS has been done, but that seems to be it. I have been looking and looking and as of right now, it seems there are no strict Alexa Skills for just reading a document, or at least I haven't found any. As I have stated before, my programming experience is very, very, very, limited to single library, arduino based C. This adventure in JS is really a doozy. Getting it to compile is straightforwards, but once index goes into AWS, it seems to break and gives an error such as an inability to locate index js or more recently, an inability to initialize a module at line 103, column 16. Well, this is the XMLHttpRequest, so is it just because my nodes_Module needs to include this file? Does Amazon AWS not like this code? It compiles on my JS software just fine. I understand that this will most likely not be answered, but I am keeping it here for my thoughts. Thanks! Update: 2/14/17 Read Below for other updates!! Today I started creating the prototype. I used a laser to cut two 1/4" acrylic pieces. These are about 10"x10", for testing purpose, but can be up to whatever size and design you want. The Load cell fits right into the middle of the two pieces, and vinyl was cut to adhere to the bottom and top of the top peice. Pictures for more details: I also managed to get the AWS program to pass security for my google drive, now I just need to figure out how to have Alexa read the data. Exciting stuff! In the meantime I am going to get my Arduino working for posting the data. -Tanner Most Recent Update: 2/21/2017 Okay, so here we are, a week before the contest ends and the project is still fragmented. Anyways, lets get into the update: As of a while ago I had the ethernet and RFID working seperatly; great! Putting them together on the other hand, is much more difficult. Both are SPI devices, which means that they both want to communicate with the SPI ports. In case you don't know, there are 4: - SCK: Generates the clock from master. It essentially keeps everything in step and in time, syncronized. - MOSI: "Master out Slave in" -As simple as it sounds, it sends data out of master and into the slave - MISO:" Master in Slave Out: Same as above, just as it sounds. - SS/CS: This pin chooses the client or slave that you want to function. To choose one, the pin must be wrote as digital where the other clients must be written as high so they are not read. - And of course Ground, VCC There are plenty more articles you can read online about this so I am just barely going to touch on it. I could get two SPI devices working JUST FINE for about an hour, then all of a sudden they stopped working. Not entirely sure why though. I tried rewiring completely, swapping to a different uno, swapping to a MEGA, still nothing. Definetely wasn't code either, because I had changed nothing. There was some interference involving the SCK and MOSI ports of the RFID that would mess up the connection. I tried for a couple hours to get it to work but it just would not work, so instead, I wired two UNO's together via Slave/Master Wire library and communicated through that. This worked to my advantage because the memory of a single Uno could not handle all of the library's, and I am currently using the MEGA for a different project I am working on. While it is not very efficient and not the most professional method, two arduinos will work if you cannot get the SPI system working. And yes, to end this, I did make sure to use an auxiliary 3.3 volt power supply. Next is the internet side of things, connecting it to Google Forms. I chose Google Forms because I can submit simple data by calling a URL on the Ethernet Client. Or so I thought. Turns out that in 2013, Google switched over to SSL, so here is another quick history lesson: SSL(Secure Sockets Layer), or HTTPS, uses a more secured method of data transfer among website and browser. Because it is so secure, it is too much for an Arduino to handle. Many IoT applications do not use SSL because of exactly that. So how do we get around this? Well, the truth is we don't. We find a service to handle it for us! The source I am using is PushingBox.com; It works by plugging in your google forms URL and variables, creating a new, non-SSL link for you to link your device through. The connection between variables and the link is quite tricky, and as of this post I do not have it quite figured out, but I do have it posting to my google forms. What's next? Creating a scan box for the two Arduino's, RFID, and ethernet, then trimming up and fixing my current code problems, then posting it! Alexa is still halfway, however it may be difficult to get finished before the contest ends. I will still fiddle with it afterwards to try and get the code working, though. -Tanner Code ESP library, Secure Client Amazon DRS Library MFRC522 Library Wifi101 Library ArduinoJSON EtherCard Library UIPEthernet NodeJS Files for Google Spreadsheet Custom parts and enclosures Schematics Author Tanner Stinson - 2 projects - 2 followers Additional contributors - Rfid library by Miguel Balboa - Library for arduino drs system by Brian Carbonette Published onFebruary 5, 2017 Members who respect this project you might like
https://create.arduino.cc/projecthub/tanner-stinson/amazon-kitchen-drs-75fc24
CC-MAIN-2018-43
refinedweb
7,067
72.36
Are there any tools which generate a project layout for python specific projects, much similar to what maven accomplishes with mvn archetype:generate It is the good news: you do not need any tool. You can organise your source code in any way you want. Let recap why we need tools in the java world: In java you want to generate directories upfront because the namespace system dictates that each class must live in one file in a directory structure that reflects that package hierarchy. As a consequence you have a deep folder structure. Maven enforces an additional set of convention for file location. You want to have tools to automate this. Secondly, different artefacts require use of different goals and even additional maven projects (e.g. a ear project requires a few jars and war artefacts). There are so many files to create you want to have tools to automate this. The complexity makes tools like mvn archetype:generate not just helpful. It is almost indispensable. In python land, we just do not have these complexity in the language. If my project is small, I can put all my classes and functions in a single file (if it makes sense) If my project is of a bigger size (LOC or team size), it makes sense to group .py files into modules in whatever way makes sense to you and your peers. At the end of the days, it is about striking a balance between ease of maintenance and readability.
https://codedump.io/share/TbbWbortK1om/1/project-structure-for-python-projects
CC-MAIN-2017-43
refinedweb
250
71.65
Deque¶ A double-ended queue, or deque, supports adding and removing elements from either end. The more commonly used that lists support,¶']) Consuming¶ Similarly, the elements of the deque can be consumed from both or either end, depending on the algorithm being applied. import collections print 'From the right:' d = collections.deque('abcdefg') while True: try: print d.pop() except IndexError: break print '\nFrom the left:' d = collections.deque('abcdefg') while True: try: print d.popleft() except IndexError: break Use pop() to remove an item from the “right” end of the deque and popleft() to take from the “left” end. $ python collections_deque_consuming.py From the right: g f e d c b a From the left: a b c d e f g Since deques are thread-safe, the contents can even be consumed from both ends at the same time: 10 Right: 9 Left: 1 Right: 8 Left: 2 Right: 7 Left: 3 Right: 6 Left: 4 Right: 5 Left done Right done Rotating¶ - WikiPedia: Deque - A discussion of the deque data structure. - Deque Recipes - Examples of using deques in algorithms from the standard library documentation.
http://pymotw.com/2/collections/deque.html
CC-MAIN-2014-42
refinedweb
187
59.84
You have two cows/26 From Uncyclopedia, the content-free encyclopedia You have two cows: You Have n Cows left This article is part of the You have two cows series. - You have 2 cows. - Hinduism - You have 2 deities. - Unary - You have || cows. - Binary - You have 10 cows. - Negabinary - You have 110 cows. - Binary from ASCII - 01011001 01101111 01110101 00100000 - 01101000 01100001 01110110 01100101 - 00100000 01110100 01110111 01101111 - 00100000 01100011 01101111 01110111 - 01110011 - Binary from EBCDIC - 11101000 10010110 10100100 01000000 - 10001000 10000001 10100101 10000101 - 01000000 10100011 10100110 10010110 - 01000000 10000011 10010110 10100110 - 10100010 - 1337 - y0u h4v3 +vv0 c0vvz...nubz0rz pWn3D - 1337 2 - J00 HAV 2 C0WZ U L33T SUPA H4X0R - l337 3 (Max) - `/[]|_| |-|/-\\/3 7\/\/[] [0\/\/5 - Boolean - You have TRUE cows. - Boolean (SQL) - You have two cows. One of them is true (or null), the other one false (or null). - Ternary - You have 0t02 cows. - Octal - You have 002 cows. - Hexadecimal - You have 0x0002 cows. - Innumerate - You have some cows. - Floating-Point - You have 1.999999999 cows. - Integer - You have 2 cows. - Greatest Integer - You have [[1<cows<3]] cows. - Long - You have 2L cows. - Modulo 2 - You have 0 cows. - 32-Bit Wrong Endianness - You have 33554432 cows. - 64-Bit Long - You have 2LL cows. - Scientific notation - You have 0.002 × 103 cows. - Fibonacci - You have no cows. You have 1 cow. You have 1 cow. You have 2 cows. - Fibonacci (again) - You have 3 cows. You have 5 cows. You have 8 cows. You have 13 cows. - Fregeian Bovine Comprehension Principle (simplified form) - There exist x and y such that x falls under the concept is your cow and y falls under the concept is your cow and x is not equal to y, and for every z such that z falls under the concept is your cow, either z must equal x or z must equal y. - Fregeian Bovine Comprehension Principle (general form) - There exist x and y such that x falls under the concept is your cow and y falls under the concept is your cow and x is not equal to y; and it is not the case that there exist x, y and z such that x falls under the concept is your cow and y falls under the concept is your cow and z falls under the concept is your cow and x is not equal to y and x is not equal to z and y is not equal to z. - Frege-Russell / New Foundations - Set notation - You have x cows where x є {k|2≤k≤2} - String - printf("You have %d cows.\n",2); - Shift - YOU HAVE @ COWS> - British Shift - YOU HAVE " COWS> - Church encoding - You have λf. λx. f (f x) cows. - Dvorak - tsf ja.d 2 is,;e - Reverse Polish notation - You cows 2 × have. - Japanese counting - You have two cattle of cow, or You have 2*1 cows. - Vous avez deux vaches. - Greek Numerals - You have β´ cows. - Roman Numerals - You have II cows. - Roman Numerals (dj Arctic extended remix) - You have IIIV cows. - Chinese/Japanese Numerals - You have 二 cows. - Hebrew Numerals - You have 'ב cows. - Pebbles - You have .:. cows. - Intel Pentium 60 - A80501-60 - You have 2.0000000056987983 cows. - Natural numbers - You have {{},{{}}} cows. - Surreal numbers - You have { 0 , 1 | } cows. - Quantum physics - You have < 1 | 1 > cows, but observing them turns them into bulls. - Integers - You have [5 - 3] cows. - Reciprocal - You have 1/2 a cow. - Absolute Value - You have |-2| cows. - Rational numbers - You have [4/2] cows. - Real numbers - You have [1 + 1/2 + 1/4 + 1/8 + ...] cows. - Imaginary numbers - You have cows. - Trigonometry - You have csc 30° cows. - Quadratics - You have n cows, where n²+n-6=0 and n≥0 - Random numbers - You have rand(2,2) cows. - Complex numbers - You have 2 + 0i cows. - Complex conjugates - You have cows. - Quaternions - You have 2 + 0i + 0j + 0k cows. - Surreal numbers - You have { -1, 0, 1 | } cows. - Fuzzy - You have between 0 and 5 of what are probably cows. - Mathematical Induction - You have two cows. You prove that, If you have n cows, you have n+1 cows. You now have as many cows as you want. You study Linear Algebra. - Morse - You have ..--- cows. - Pythagorean Theorem - Given two cows, c²+o²=w² - In the sound studio - You have twenty decicows. - In the marketing department - Congratulations! You are now the proud owner of two thousand millicows! - sdrawkcaB - .swoc 2 evah uoY - Word Problems - You have two cows. Cow A started in Boston at 6:30 A.M. and walked to New York at 5 miles per hour. Cow B started at New York at 4:00 A.M. walking to Boston at 2 miles per hour. At which town will the cows meet? - Word Problems 2 - If eight cows, over period of 10 days, can produce one gallon of milk, how much milk can your two cows produce in a period of 15 days? - Word Problems Never End - If it takes one cow 30 minutes to run a mile, how long does it take your two cows to run the same mile? - Short Term Memory - You have... um... what was the question again? - Zen - You are two cows. - Lisp - '(cows (you have) 2) - Real life - Fill out forms in triplicate, then see Fact. - Quantum Mechanics - You have two cows who have just eaten Schrodinger's cat. What does this say of the nature of reality at the bovine level? - Quantum Mechanics 2 - You have two cows. One runs away really fast and reaches the speed of light. Einstein kills the other cow, then Stephen Hawking, then himself. - Quantum Mechanics 3 (Heisenberg Uncertainty Princimoo) - You have two cows. If I told you where they were and how fast they were going then you'd no longer have two cows - Quantum Mechanics 4 - You have two cows. One of them is an anticow. They touch. Now you have no cows. Or planet. - Quantum Mechanics 5 - You have one cow in a two-fold coherent superposition, creating an interference pattern with itself. You have a zebra. - Relativity - You have two cows. One of them goes faster than light and goes back in time. Now you have three cows. No wait one. Or one per universe. Whatever. - Schrodinger's Cows - Your two cows are both dead and alive. - Jung - The cows are an archetype in your collective unconscious that may have a direct affect on the subatomic particles of the body. In fact, you just are the cow. - Differential Cow-culus - You have 2. - Differential Cow-culus 2 - You have e2cow X 2 - Indefinite Integral Cow-culus - You have cow2/2 + C. - Russian Reversal - In Soviet Russia, two cows have YOU!!! - Gematria - "You have two cows" (יש לך שתי פרות) has the numerical value of 1756, which is equivalent to "You will arise and have mercy on Zion" (אתה תקום תרחם ציון). - Statistics - An analysis of available data suggests that you have two cows. These results are correct 19 times out of 20, with a ±4% margin of error. - Lost - You have 4 8 15 16 23 42 cows. Whatever that means. - ? - You have ? cows. - Proudhonian Mutualism - Property is theft, therefore you should not "have" these two cows at all. - IM - u hv 2 bk brgrz. - MediaWiki - You have {{{Number}}} {{{Species}}}. To successfully use this template, copy following code: {{You_have... | Number = | Species = }} Where Number should be any number, and Species should be any species. Example: You have two cows. - BB Code - You have [cows]2[/cows]. - Text Message - U hav 2 cows. :) - Insulting - U have 1 cow + your mum - SQL Select Command - "SELECT bovines, count FROM animals WHERE count = '" & Number.Text & "'" - SQL Select Command 2 - "SELECT animal, count(*) FROM animals WHERE animal = 'cow' GROUP BY animal" - Internet - - Hypothetical - Suppose you have two cows... - LIE - You have 3 cows. - Buddha - There both is a cow and not a cow at the same time. - Aristotle - There either are two cows or there are not two cows. There is no middle ground. - Tao - There is no cow. - Zen - Moo - Comedian - Stop me if you've heard this. Two cows walk into a bar. One says to the other, Moo. - Iambic Pentameter - Two cows you have, you surely have two cows. - Letter Poem - Cow cow U R, Cow cow U B, I C U R cow cow 4 me - Java - you.setCows(2); - Python - self.has(cows(2)) - Yoda - 2 cows you have - Gamer - All your cow are belong to us - Uncyclopedia addict - You have a large field, two cows, a hedge, three sheep, two cows, a tractor, two cows, a garden, a gnome, two cows, and a tractor... with two cows in it. Repetition rocks! - Unix - [root@localhost] # cat /dev/cows > /dev/null ; echo $? 0 [root@localhost] # - eBay - Looking for 2 cows ? Buy 2 cows here ! - MD5 - bae02f74c37921ccddda87814fc8f6a7 - PHP - <?php $cows=0; for ($i=0; $i<2; $i++) { $cows++; } echo 'You have '.$cows.' cows.'; ?> - ASP - <% Response.Write "You have two cows." %> - Invalid HTML - <html><head>You have two cows.</body></head> - Valid XHTML - <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""> <html xmlns="" xml: <head> <title></title> <meta http- </head> <body> You have two cows. </body> </html> - Javascript - alert("You have two cows."); - ACS - # include "zcommon.acs" script 1 (void) { if (CheckInventory("Cow") = 2) Print(s:"You have two cows."); } - Wikipedia Article - "You have two cows" is the beginning phrase for a series of political joke definitions. - TI-83 - PROGRAM:TWOCOWS :{1,2}→LCOWS :Disp "YOU HAVE",(2),"COWS." : - Piglatin - Ou-yay ave-hay o-tway ows-cay. - Opposite Day - I don't have -2 uncows. - ASM - ld a,2 ld hl, you_numcows ld (hl),a - FORTH - cow cow + ." - Guy who can't count - You have the number of cows that comes after the number that's more than none. I think. - Bored person with a cowculator - You have 2 cows. You put "+1" in the cowculator and hit =. You have 3 cows. You hit =. You have 4 cows. You hit =. You have 5 cows. You hit =. You have 6 cows... - Porn site - Uncensored movies from just $2.99! Guaranteed authentic pair of cows! Just download the TwoCows Media Payment software, disable all virus scanners and firewalls, install it and watch the cows! - Spam e-mail - (FROM: Canadian BovineMeds) meds on=lnie! make yuor Two Cwos the VERY BEST IN THE WORLD EVER with V1@gra. - Nigerian 419 spam - Hello plese, I am a depozed prince who need to get my two cows out of the contry. If you gives me your bank acount informashon I will give you one of my cows in paiment. - Phishing e-mail - According to our database, your Two Cows listing on eBay is out of date. Please send us your Two Cows within 24 hours, along with your credit card information and Social Security/Insurance number, or your listing will be deleted. - Phreaking - If you train your two cows to moo a specific series of tones into a pay phone, you can make calls for free. - Scientology - You pay Tom Cruise to measure your cows with his e-meter and see if there are two or not. - Amazon.com - Customers who bought two cows also bought: two sheep, two pigs, Harry Potter and the Deathly Hallows - Impossible Creatures triggering - [ countdown 0 THEN set number of cows of You to 2 ] - KLF - mu-mu - Cockney - You are a bigamist. - tucows.com - You have us. - LOLCATS - U HAS A KOW. U HAD ANNUDER WON, BUT I EATED IT. KTHXBYE - Portal - The cow is a lie. - Team Fortress 2 - Spay Sappin Mah Bovine! - Linux - $ cat /proc/cows/number 2 $ - Identi.ca - Two autonomous cows subscribe to each others' feed. - Economist - Let us assume that you have two cows...
http://uncyclopedia.wikia.com/wiki/You_have_two_cows/26
CC-MAIN-2013-20
refinedweb
1,967
86.3
Raw Types Although the compiler treats different parameterizations of a generic type as different types (with different APIs) at compile time, but only one real type exists at runtime. For example, the class of Gen<Integer> and Gen<String> share the plain Java class Gen : Gen is called the raw type of the generic class. Every generic has a raw type. It is the degenerate, “plain” Java form from which all of the generic type information has been removed and the type variables replaced by a general Java type ike Object. You can use the generic type name by itself to define variables. For example : This creates a variable with the name rgen that is of type Gen from the Gen<T> generic type. This type that results from eliminating the parameters from the generic type is referred to as a raw type. When generics were introduced into the language, the language designers decided that in order to maintain compatibility with pregenerics code, they would need to allow the use of raw types. However, the use of raw types is strongly discouraged for newer (post–Java 5) code, so compilers will generate a raw type warning if you use them and you lose all the safety and expressiveness benefits of generics. For example, the rgen.setT() method accept any type of argument and explicit cast are necessary when the call of rgen.getT() method : Program class Gen<T> { private T t1; void setT(T t) { t1 = t; } T GetT() { return t1; } } public class Javaapp { public static void main(String[] args) { Gen rgen = new Gen(); rgen.setT("Ten"); String s = (String)rgen.GetT(); rgen.setT(20); Integer i = (Integer)rgen.GetT(); rgen.setT(30f); Float f = (Float)rgen.GetT(); System.out.println("s -> "+s); System.out.println("i -> "+i); System.out.println("f -> "+f); } }
https://hajsoftutorial.com/java-raw-types/
CC-MAIN-2019-47
refinedweb
303
63.59
Raspberry Pi Cluster Node – 05 Talking to nodes with JSON This post builds on my previous posts in the Raspberry Pi Cluster series by changing the format of the data I send. In this tutorial I am now sending data as JSON to allow a richer set of messages to be sent. Why use JSON to send data In previous tutorials I was sending raw data to the client and printing out the output. However there was no way to tell the end of one message from the start of another. There was also no predefined format for the messages I would send so it couldn’t be parsed by any program. One of the advantages of using JSON is that the format is widely used in different applications and can express rich objects in a simple data structure. Python has the easy to use json module which can encode and decode json into primitive python data structures. If I want to have other languages talk to my python cluster the only requirements would be that it can speak sockets and send JSON. This is something that the majority of languages can do which will not close our options. There are some more advanced ways to perform remote operations using libraries such as RPyC which could be used. However this handles a lot of the complexity internally and restricts us to purely using python. To be able to fully explore some of the interesting problems of distributed computing I want to have to deal with some of these issues ourselves. So instead of sending raw data I am going to start sending JSON encoded data that can read and understood by our cluster. The DataPackager file I have created the DataPackager file in our module to hold all the functions used to format and retrieve messages for the cluster. The first issue I needed to address was how to tell the difference between the end of one message and the beginning of the next. Since I have decided to use JSON to send the data I can easily use a newline character to signify the end of a message. JSON can be encoded onto a single line with no line breaks and is still valid JSON. This means I don’t need to worry about newlines in the JSON object causing issues. So at the end of each full message sent from a slave I will add a carriage return ( \r). This is what I will use to determine if I have reached the end of a message. This means the code on the master retrieving the message only needs to keep reading until it has reached a carriage return. If I receive a long string of messages I can easily split on this carriage return and process each message. This makes the code to create a message relatively simple: MESSAGE_SEPARATOR = "\r" def create_msg_payload(message): return json.dumps({'msg': message}) + MESSAGE_SEPARATOR Here currently I have a simple function which creates a dictionary containing the message to send to the client. This is then json serialized using json.dumps which turns an object into a JSON string. The final piece of code then adds the message separator which is the carriage return. Receiving the messages on the master is a little more complex but is it performed using the following code: _buffered_string = "" _buffered_messages = [] First we initialize two variables, the first to hold the buffered string we haven’t yet processed and the second to hold any messages we haven’t processed. This is because we will handle one message at a time and may need to hold many messages while we deal with others. Inside the module we have some helper methods, one of them is to check if there is a message in the buffer: def _get_message_in_buffer(): global _buffered_messages if len(_buffered_messages) &amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt; 0: return json.loads(_buffered_messages.pop(0)) else: return None Here we check to see if we have any messages in the buffer. If we do we remove the first item in the list using pop(0) and decode the JSON using json.loads. Then we return this newly decoded JSON object. Note that using pop(0) on a list isn’t very efficient but I will talk about that in a later tutorial when looking at performance. Another helper method I have created is used to check the currently buffered string for any messages. def _check_buffer_for_messages(): global _buffered_string, _buffered_messages split_buffered_data = _buffered_string.split(MESSAGE_SEPARATOR) if len(split_buffered_data) &amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt; 1: # If we find more than one item, there is a message messages_to_process = split_buffered_data[0:-1] for message in messages_to_process: _buffered_messages.append(message) _buffered_string = split_buffered_data[-1] Here I am first splitting the buffered data string by our chosen message separator (carriage return). If the split returns more than one element it means we have at least one message to parse. Using list slicing I can select every element but the last and store it in the messages_to_process variable using split_buffered_data[0:-1]. This then lets me append each of these messages to the buffered messages variable. Once I have stored all of the newly parsed messages I set the buffered string to the final element of the array which is any additional data which we haven’t yet been able to convert into a full message. I use both of these functions in the publicly available get_message function that my master will call. def get_message(clientsocket): global _buffered_string, _buffered_messages message_in_buffer = _get_message_in_buffer() if message_in_buffer: return message_in_buffer while True: try: data = clientsocket.recv(512) #Get at max 512 bytes of data from the client except socket.error: #If we failed to get data, assume they have disconnected return None data_len = len(data) if data_len &amp;amp;amp;amp;amp;amp;amp;amp;amp;gt; 0: #Do something if we got data _buffered_string += data #Keep track of our buffered stored data _check_buffer_for_messages() message_in_buffer = _get_message_in_buffer() if message_in_buffer: return message_in_buffer else: return None The first thing I do here is to check if there are any messages already in the buffer. If there are then I return the first one using my helper method _get_message_in_buffer. This is because I don’t want to receive any further messages until I have dealt with the previous ones. Once there are no more messages I start a while loop to keep getting data from the slave. I call the blocking call recv to receive some data from the slave. If new data is received then it is addedthis to the buffered string variable. Once again I check to see if there are any new messages in the buffer, If I find any then I return the first one. If there was a socket exception recieving the data or I recieve data with length 0 I know the socket has went away. In this case I return None so that the script calling this knows there will be no more messages. The script calling this would always expect a parsed JSON message or None in the event there are no more messages to handle. Once the method has returned None it is expected that the method will not be called again. Changes to the master and slave Now I have my DataPackager module I can change my master and slave to use the new methods. The slave changes very little as instead of sending raw data it sends it packaged up by the new module. logger.info("Sending an initial hello to master") sock.send(create_msg_payload("Hello World, I am client {num}".format(num=client_number))) while True: time.sleep(5) sock.send(create_msg_payload("I am still alive, client: {num}".format(num=client_number))) Here you can see that the only change is that I am using the create_msg_payload function to format the data. The master has also changed slightly however now I am handling socket errors the master will be more robust. message = True while message: message = get_message(clientsocket) if message: logger.info("Received message: " + message['msg']) else: logger.info("Client disconnected") Here I have a while loop that continues until I get a message with the value of None indicating the socket has been closed. I loop through calling get_message each time passing in our client socket to see if there is any data. Summary In this tutorial I have shown how you can send messages that can easily be parsed by a computer. In the future I am going to send more complex messages and have the master handle them differently. This modulized code allows for a simple message passing system to be created reducing the overall complexity ny using the DataPackager shared module I have written. The full code is available on Github, any comments or questions can be raised there as issues or posted below.
https://chewett.co.uk/blog/1072/raspberry-pi-cluster-node-05-talking-to-nodes-with-json/
CC-MAIN-2020-05
refinedweb
1,480
60.24
When creating a GUI that has to communicate with the outer world, a common stumbling block that comes up is how to combine GUI code with I/O. I/O, whether HTTP requests, RPC protocols, plain socket communication or the serial port, tends to be blocking in nature, which doesn't play well with GUI code. No one wants his GUI to "freeze" while the program is blocking on a read call from a socket. There are many solutions to this issue, the two most common of which are: - Doing the I/O in a separate thread - Using asynchronous I/O with callbacks integrated into the GUI event loop In my opinion, option 1 is the simpler of the two, and it's the one I usually end up with. Here I want to present a simple code sample that implements a socket client thread in Python. Although this class is general enough to be used in many scenarios, I see it more as a pattern than as a completed black-box. Networking code tends to depend on a lot of factors, and it's easy to modify this sample to various scenarios. For example, while this is a client, re-implementing a similar server is easy. Without further ado, here's the code: import socket import struct import threading import Queue class ClientCommand(object): """ A command to the client thread. Each command type has its associated data: CONNECT: (host, port) tuple SEND: Data string RECEIVE: None CLOSE: None """ CONNECT, SEND, RECEIVE, CLOSE = range(4) def __init__(self, type, data=None): self.type = type self.data = data class ClientReply(object): """ A reply from the client thread. Each reply type has its associated data: ERROR: The error string SUCCESS: Depends on the command - for RECEIVE it's the received data string, for others None. """ ERROR, SUCCESS = range(2) def __init__(self, type, data=None): self.type = type self.data = data class SocketClientThread(threading.Thread): """ Implements the threading.Thread interface (start, join, etc.) and can be controlled via the cmd_q Queue attribute. Replies are placed in the reply_q Queue attribute. """ def __init__(self, cmd_q=None, reply_q=None): super(SocketClientThread, self).__init__() self.cmd_q = cmd_q or Queue.Queue() self.reply_q = reply_q or Queue.Queue() self.alive = threading.Event() self.alive.set() self.socket = None self.handlers = { ClientCommand.CONNECT: self._handle_CONNECT, ClientCommand.CLOSE: self._handle_CLOSE, ClientCommand.SEND: self._handle_SEND, ClientCommand.RECEIVE: self._handle_RECEIVE, } def run(self): while self.alive.isSet(): try: # Queue.get with timeout to allow checking self.alive cmd = self.cmd_q.get(True, 0.1) self.handlers[cmd.type](cmd) except Queue.Empty as e: continue def join(self, timeout=None): self.alive.clear() threading.Thread.join(self, timeout) def _handle_CONNECT(self, cmd): try: self.socket = socket.socket( socket.AF_INET, socket.SOCK_STREAM) self.socket.connect((cmd.data[0], cmd.data[1])) self.reply_q.put(self._success_reply()) except IOError as e: self.reply_q.put(self._error_reply(str(e))) def _handle_CLOSE(self, cmd): self.socket.close() reply = ClientReply(ClientReply.SUCCESS) self.reply_q.put(reply) def _handle_SEND(self, cmd): header = struct.pack('<L', len(cmd.data)) try: self.socket.sendall(header + cmd.data) self.reply_q.put(self._success_reply()) except IOError as e: self.reply_q.put(self._error_reply(str(e))) def _handle_RECEIVE(self, cmd): try: header_data = self._recv_n_bytes(4) if len(header_data) == 4: msg_len = struct.unpack('<L', header_data)[0] data = self._recv_n_bytes(msg_len) if len(data) == msg_len: self.reply_q.put(self._success_reply(data)) return self.reply_q.put(self._error_reply('Socket closed prematurely')) except IOError as e: self.reply_q.put(self._error_reply(str(e))) def _recv_n_bytes(self, n): """ Convenience method for receiving exactly n bytes from self.socket (assuming it's open and connected). """ data = '' while len(data) < n: chunk = self.socket.recv(n - len(data)) if chunk == '': break data += chunk return data def _error_reply(self, errstr): return ClientReply(ClientReply.ERROR, errstr) def _success_reply(self, data=None): return ClientReply(ClientReply.SUCCESS, data) SocketClientThread is the main class here. It's a Python thread that can be started and terminated (joined), and communicated with by passing it commands and getting back replies. ClientCommand and ClientReply are simple data classes to encapsulate these commands and replies. This code, while simple, demonstrates many patterns in Python threading and networking code. Here's a brief description of some points of interest, in no particular order: The standard Queue.Queue is used to pass data between the thread and the user code. Queue is a great tool in a Python programmer's toolbox - I use it all the time to decouple multi-threaded code. The biggest difficulty in writing multi-threaded programs is protecting shared data. A Queue makes this a non-issue, essentially transforming the sharing model into message passing, which is much simpler to use safely. You will notice that SocketClientThread uses two queues, one for getting commands from the main thread, the other for passing replies. This is a common idiom that works well for most scenarios. In general, you can't force a thread to die in Python. If you need to manually terminate threads, they have to agree to die. The alive attribute of SocketClientThread demonstrates one common and safe way to achieve it. alive is a threading.Event - a thread-safe flag that can be cleared in the main thread by calling alive.clear() (which is done in the join method). The communication thread occasionally checks if this flag is still set and if not, it exits gracefully. There is a very important implementation detail here. Note how the thread's run method is implemented. The loop runs while alive is set, but to actually be able to execute this test, the loop can't block. So pulling commands from the command queue is done with get(True, 0.1), which means the action is blocking but with a 100 millisecond timeout. This has two benefits: on one hand, it doesn't block indefinitely, and at most 100 ms will pass until the thread notices that its alive flag is clear. On the other hand, since this does block for 100 ms, the thread doesn't just spin on the CPU while waiting for commands. In fact, its CPU utilization is negligible. Note that the thread can still block and refuse to die if it's waiting on the socket's recv with no data coming in. SocketClientThread uses a TCP socket, which will transmit all data faithfully, but can do so in chunks of unpredictable size. This requires to delimit the messages somehow, to let the other side know when a message begins and ends. I'm using the length prefix technique here. Before a message is sent, its length is sent as a packed 4-byte little-endian integer. When a message is received, first 4 bytes are received to unpack the length, and then the actual message can be received since we now know how long it is. For the same reason as stated in the previous bullet, some care must be taken when sending and receiving data on a TCP socket. Under network load, it's not guaranteed that it will actually send or receive all the bytes you expected in one try. To handle this potential problem while sending, Python provides the socket.sendall function. When receiving, it's just a bit more tricky, requiring to loop on recv until the correct amount of bytes has been received. To show an example of how to use SocketClientThread, this code also contains a sample GUI implemented with PyQt. This GUI uses the client thread to connect to a server (by default localhost:50007), send "hello" and wait for a reply. In the mean time, the GUI keeps painting a pretty circle animation to demonstrate it's not blocked by the socket operations. To achieve this effect, the GUI employs yet another interesting idiom - a timer which is used to periodically check if SocketClientThread placed new data in its reply queue, by calling reply_q.get(block=False). This timer + non-blocking get combination allows effective communication between the thread and the GUI. I hope this code sample will be useful to others. If you have any questions about it, don't hesitate to ask in a comment or send me an email. P.S. As almost all samples posted here, this code is in the public domain.
http://eli.thegreenplace.net/2011/05/18/code-sample-socket-client-thread-in-python/
CC-MAIN-2016-44
refinedweb
1,384
58.38
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava ; Q 1 : How should I create an immutable class ? Ans... to return its string content. If an object refers to a String or other type by using... dynamically at runtime. Q 4 : How can I call a constructor Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava ; Q 1. When should I use the abstract class rather... a class having all its methods abstract i.e. without any implementation. That means... use the equals method to make an exact match. Q 3. How can I Reply Me - Java Beginners Reply Me Hi Rajnikant, I know MVC Architecture but how can use this i don't know... please tell me what is the use... class (which contain your database information). Check previous I send U two You know - Java Beginners and send me its very urgent.... other wise reply me i am wait ur reply please reply me how can i solve.. Thanks Hi Ragini, I don't know php. Can u say me how to install and configure then i will try. I know the logic Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava ; Q 1. How can I get the full path of Explorer.exe.... : How do I limit the scope of a file chooser? Ans : Generally FileFilter is used... makes a class available to the class that imports it so that its public variables reply - Java Beginners reply Hi friends I have a 255 fields i want to insert data in the table please let me know how to insert data in the database please reply me its very urgent tell me i don't know about cms and history - Java Interview Questions tell me i don't know about cms and history what is cms,when can it is used ,what are the advantages and what are the more information about CMS Reply Me - Struts Reply Me Hi Friends, I am new in struts please help me... file,connection file....etc please let me know its very urgent Hi Soniya, I am sending you a link. This link will help you. Please Reply Me - Java Beginners Reply Me Hi, I m working in jsp please any information provide the jsp code.not servlet or any other. I m writing a problem clearly...... i here suppose that columnname1 is the column for name in the table .so reply must reply must is it critical to do a software job based on games(java) i know core java & advanced java basics only please give me answer reply me its urgent - Java Beginners reply me its urgent Hi friends I am facing problem in my application in this type please help me i am using database mysql5.0 version...'@'localhost' (using password: YES)" please tell me what is the error....its reply - Java Beginners reply Hi, I am chk insert query please chk and let me know what is incorrect here i am using insert query please chk this StrSqlColm="Insert into FAFORM24GJMASTER("; StrSqlColm="USERCODE,POSTDATE Reply to the mail(import files error) Reply to the mail(import files error) Hi Its already there in the bin. If its class path should be set , how can i do dat? Hi, Wait for 20 minutes. I will send you link to download new project file. You should Reply - Struts Reply Hello Friends, please write the code in struts and send me I want to display "Welcome to Struts" please send me code its very urgent... and which file need to be run and compile struts without using database Don't Understand Don't Understand Find the errors in this SQL code: SELECT first_name, salary FROM employees ORDER... quotes and it should be in YYYY-MM-DD format. Here is the corrected sql: CoreJava Project CoreJava Project Hi Sir, I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account title and keywords don't refresh in webpage - JSP-Servlet title and keywords don't refresh in webpage Hi, I changed... application. However when I restart server and do view source I still see the old title and keywords. If I change anything else on the jsp page it refreshes... details to have. Name,address,date of birth. We don't need how many houses he reply me - Java Beginners reply me Hi, user enter vend_id in text box ten open... addform.jsp is display with fetching data in all text box I m sending code before 5... immediately its very urgent Thanks & Regard Reply me - Java Beginners Reply me Hi, There are one form product_master these fields...,vendor_name All fields are input manualy but i want this vendor_name take... in the place of vend_name Please write this code and send me its very urgent Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava interface is an abstract data type like a class having all its methods abstract Reply Me - Java Beginners Reply Me Hi deepak, your sending web application is good...:but i can click update button selection of the user updation page will be not displayed. please help me how it is open selection of the user its very urgent Reply me - Java Beginners Reply me HI I m sending structure of table vend_id,vend_name... this is the table and i want searchning table field using name and display wole... page where the user typed the name to be searched...... here is my code for u Reply - Java Beginners Reply Hi Friends, Thanking for ur continue response I know your giving idea but can u send me the code of this problem just example for 20 data Reply me - Java Beginners Reply me Hi, this code is .java file but i am working in jsp tchnologies and i wantr this if user input a in text box the table have... tell me its urgent package services; import java.sql.Connection; import Validating Emp ID Reply - JSP-Servlet and Update... Here's the code: I am sending the coding which you sent me earlier... anum thanks and regards prashu hai frnd.... here i have mistyped...Validating Emp ID Reply Respected Sir/Madam, I am Ragavendran.R. rEPLY - Java Beginners rEPLY hi, This is session code i am calling admin_home1.jsp then data will be not displaying its urgent requirement please send me correction Thanks Hi friend, To display the session Reply - Java Beginners Reply Hi amardeep Thanks for reply i know my code is very large....i am sending database table file please chk sequence and let me know... DATE Hi Ragini, I checked your database table. Your database Reply - JSP-Servlet Reply Dear prashu sir, Actually i dont know where to start the image selection process.. I am really struggling to get through it.. If you get the codings please send it to me at that moment itself.. Because i am i am confused here on what to write can some 1 help out here - Java Beginners i am confused here on what to write can some 1 help out here i don't quite understand how to code it so can some one help out to know execution time to know execution time Hai this is sravanthi Is there any possibility to know the execution time of an SQL in its order to execute... to know the execution time for where,having and group by separately.If corejava - Java Beginners corejava pass by value semantics Example of pass by value semantics in Core Java. Hi friend,Java passes parameters to methods using pass... arguments to the method call is passed to the method as its parameters. Note very goto statement problem. reply fast. goto statement problem. reply fast. I have done some operation, and want to repeat same operation again at users choice. but its not working properly. I am using goto statement. void main() { float rs, p; char z Forward Log Records to Its Parent to its parent. You know that a logger sends both log records (messages declared...); log.warning("Don't something here!");  ...; log.severe("Don't severe here!");   corejava - Java Beginners findings here on this page.Thanks corejava - Java Beginners ,Here is nice example of deadlock in Java.Please check at Beginners ). Here is the code: import java.io.*; import javax.servlet.(int i = 1;i <= 10;i++) { System.out.println(i Corejava - Java Interview Questions ); } for(int i=0;i corejava CoreJava one error but i dont know how to fix it o_O!!! one error but i dont know how to fix it o_O!!! where is the mistake here ?? import java.utill.Scanner; public class Time { public static void main (String [] args) { Scanner keyboard = new Scanner (System.in Reply - Struts Reply Thanks For Nice responce Technologies::--JSP please write the code and send me....when click "add new button" then get the null value...its urgent... Hi can u explain in details about your project What is Java I/O? What is Java I/O? Introduction The Java Input/Output (I/O) is a part of java.io package... with its stream classes in a hierarchy structure shown below Top 10 Web Developments Concepts Designers Should Know Top 10 Web Developments Concepts Designers Should Know When it comes to web... of the web development concepts designers should know at this time. Application... should know, but it is the most evident reality of web development now. Content 5 Important Things to know About LTO 6 and it is believed to sweep the market with its haughty features and high end specifications. Let us introduce some of the most important things to know about LTO 6... feature with all new LTO versions and subsequent developments. Like its Error here Error here <%@ taglib prefix="s" uri="/struts-tags" %> its giving me error as: org.apache.jasper.JasperException: File "/struts-tags" not found org.apache.jasper.compiler.DefaultErrorHandler.jspEr Reply me - Java Beginners Reply me Hi Friends, Quest:- Tell em what is the difference between java and php, dotnet Quest:- what is the similar point of php and java... Please tell me its very - JSP-Servlet Reply Dear prashu sir, I am R.Ragavendran.. Actually I managed... and for Invalid emp id, i passed the control to process.jsp and caught...... Thanks for your fast reply.. Thanks/Regards, R.Ragavendran..   Reply me - Java Beginners Reply me Hi, I am using this code for session but i am... will be not displaying, and i am calling with .html extention means admin_home1.html... and i want to use session in each page then please tell me that how can i please help me here.. - Java Beginners please help me here.. ..uhhmm.. im a little bit new to java and i go..."); } } } this is the program i made..it's supposed to compare the strings but it doesn't work.. it should compare the second string to the first string to... searching using name with alphabetical order). I want to display only record based on name plz send this code immediately Thanks i have May I know how to create a web page? May I know how to create a web page? can u suggest me how to start Reply Me - Java Beginners Reply Me Hi I am sending some code please check it and solve... table and reply me fast...I got two error 1 is catch is without try 2 is try..."); java.util.Date d1 = new java.util.Date(); String dy; int dte=0,i=0; //int i=0 Reply Me - Java Beginners Reply Me Hi, For table I have a two table first is sales transaction and second is sales_line sales_transaction fields are comp_id,vend_id,invoice_no,invoice_date,lpono,lpo_date,purcase_order_no,country_of_origin reply - Java Beginners reply Hi Friends, action *.do please tell me what is need of this coding and each file this code require must......and also tell me what is main role of *.do please tell me its urgent..... HI Reply - Java Beginners Reply Hi friend I want to use eclipse in my system then send me url of this I have a jdk1.4, tomcat 4.1 so please tell me which version is support give detail how to install Please Reply - Java Beginners Please Reply Hi, I have a form these fields r prod_id,prod_name,prod_opstock,prod_excise,prod_vat,reorder_level,reorder_qty,vend_name... manullay but i want vend_name take from vend_master table but in prod_master table Reply - Java Beginners Reply Hi, This is the admin_home page i give .html extention then data is not displaying and i give .html extention then all data...").getElementsByTagName("LI"); for (var i=0;i fast Reply - Java Beginners fast Reply Hi Friend I want to passing value one page to another page without using session varible....please help me write the code and send me Hi friend, For solving the problem visit to : http reply Logout if its not active for longer period in jsp its not working please reply.. thanks in advance Please visit...:// hi its amrin but i want...Logout if its not active for longer period in jsp hi all, Please i have created interface program for simple arithmetic operation but its show some error can i get reason and mistakes i have created interface program for simple arithmetic operation but its show some error can i get reason and mistakes import java.io.*; interface...(); divide v=new divide(); s.add(); t.mul(); u.sub(); } } Here Reply to ur mail(Hibernate Error) Reply to ur mail(Hibernate Error) Hi! Im using Eclipse. Steps : 1. Downloaded the example code and library from rose india and extracted the content... of the project using windows explorer. As im using Oracle 10G i changed pls help me sir its urgent pls help me sir its urgent thanks for reply, but i am getting this error pls help me its urgent type Exception report description The server encountered an internal error () that prevented it from Why is super.super.method(); not allowed in Java? Why is super.super.method(); not allowed in Java? Why is super.super.method(); not allowed in Java please help me here then i should proceed here if i choose b: YOur Balance is __ if i chose D... output that i need please continue doing this program using only if else and do while please please" here is the problem Automatic Teller Machine [B] Balance [D Spectrum SCM ; We chose Spectrum SCM because of its simplicity of use, dynamic changing of components, email notification capabilities and cost. It has allowed... management tool that I have ever used. It is very powerful yet easy to learn Modifiers are allowed in interface Modifiers are allowed in interface hello, What modifiers are allowed for methods in an Interface? hello, Only public and abstract modifiers are allowed for methods in interfaces
http://roseindia.net/tutorialhelp/comment/12629
CC-MAIN-2014-41
refinedweb
2,466
64.81
This is your resource to discuss support topics with your peers, and learn from each other. 04-07-2010 03:41 PM - edited 04-07-2010 03:49 PM Hi guys, I think I have solved the problem. There was a tricky issue on Java server-side. In my code requests were handled by Java servlet. Here is sample servlet class: public class MyServlet extends HttpServlet { @Override public void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { resp.setHeader(...); resp.write("<xml>my superlarge xml goes here, etc...</xml>"); resp.setStatus(...); } } It is not obvious, but this code (assuming that XML is very large - for example 12000 bytes) sends response to source (XHR object in Widget) before resp.write(...) operation is even completed. This is happening because content of response (on TCP/IP level) is sent to request source, split into data portions. These data portions are limited by buffer and the buffer size is a number resp.getBufferSize(). This is what I partly sorted out from here. In my case resp.getBufferSize() == 8191. So every response with content more than 8191 bytes was split into 2 data portions. And BlackBerry XHR object stops processing in readyState 3 after it receives first data portion, although it is not completed content of response. I solved this issue by increasing buffer size on Java server-side: public class MyServlet extends HttpServlet { @Override public void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { resp.setHeader(...); String line = "<xml>my superlarge xml goes here, etc...</xml>"; resp.setBufferSize(line.length()); resp.write(line); resp.setStatus(...); } } I think this might be a solution for nrizz too. And don't you think that BlackBerry should be able to handle such situations? Thanks for your help, guys! 04-07-2010 05:22 PM Hi guligo, Thanks for sticking with it and drilling down to the cause of the issue!!! I'll send your results to our Browser Team to see if they can bring some more clarity around the topic. Cheers 04-09-2010 11:38 AM Hi guligo, Great work figuring that out. I tried tweaking those settings as you did, and although my problem didn't go away, I found that it takes longer for the null to show up for the first time. My setup is a bit different than you, I actually had to set the buffer size on the application server and on the jsp. Perhaps, I have another setting to find in order to get a fully reliable response. But it does seem like theres a corelation. I was wondering how you were able to do the TCP analysis. Do you have some software that you would recommend? Thanks 04-09-2010 12:01 PM Hi nrizz, I use Wireshark. However I am quite sure that there are better tools. 04-26-2010 06:00 PM Hi Tim, Could we get an update on this. Was the Dev team able to reproduce what guligo has found? Is there a fix in the works? Thanks 05-01-2010 10:36 AM The testing team was able to re-produce the problem. They were able to get around it by using the work around suggested in this thread. As far as a fix, I wouldn't imagine seeing a fix for this until 6.0 05-04-2010 07:40 AM As a follow up update, we have also found that in later versions of the BlackBerry 5.0 OS, like the simulators included in the final RTM version of the BlackBerry Widget SDK and development tools, this issue no longer occurs. 05-07-2010 01:54 PM I gave my widget a test drive this morning with the new SDK/Simulators and all seems to working well, I've also noticed that the responses come back much faster. Thanks for looking into this. Thanks 03-31-2011 04:58 PM Hi Guys, I've been developing a webworks sample application that uses AJAX,but as mentionted here i receive every response as a null response, but this only happen in a BB Torch 9800 Os6 (both phone and sim), for another device (Curve 9300 Os6) the webworks app works correctly. I've tried the solution posted here, and putting the server in the intranet and outside, but keep getting the same results. I'm going to try it in a curve 8520 with os5, to see what happens. Can you please help me? Thanks in advance. 03-31-2011 06:39 PM Hey Guys! I've solved the problem, it was missing the content type "text/xml" in the servlet. Thanks.
http://supportforums.blackberry.com/t5/Web-and-WebWorks-Development/AJAX-Problem-on-BB-Widget/m-p/498887/highlight/true
CC-MAIN-2015-27
refinedweb
769
73.27
Status: Obsolete. This was written as part of the Semantic Web Toolbox page and spun off. It investigated a syntax for RDF/XML which would be simpler for users than the 'striped' syntax of RDF M&S 1.0. It also looks at the rules for extracting RDF semantics from other non-RDF markup. In this sense it connects with the Top-down functional interpretation of XML You can think of this syntax as Notation 2. A later syntax, Notation 3, was much more successful. (Within this document, XML elements with namespace prefix are assumed to be defined as pointing to something the reader can figure out, and unprefixed element names are used for new features which are introduced in this document. ). The major difference between this syntax and RDF 1.0 M&S is that RDF edges correspond to elements, and RDF nodes are implicit. It is basically as the M&S syntax with parseType=resource is a default. Basically the things which drove this particular syntax are I assume for the purposes of the Toolbox page a syntax for data in XML in which XML elements be classified into the following categories. The element introduces information about an arc in the graph. As nodes in RDF do not inherently have any information apart from their arcs, properties are the only way RDF information actually described. Property elements work as follows: rdf:forattribute indicates otherwise for the subject for one property element. (This is a shortcut) rdf:aboutattribute on any element sets the default subject for any contained elements. (Equivalent to RDF M&S) rdf:fyiattribute on an element removes any default subject for the element and its descendants unless otherwise specified. The RDF parser may ignore the element and its contents as far as RDF semantics go. rdf:extendattribute on any element indicates that the semantics of the element are of relevance to the RDF parser and must be interpreted according to the specification, and where this cannot be done the RDF semantics are undefined (and typically an error condition will result from an attempt at evaluation). The element is known as an RDF-opaque element rdf:valueattribute that indicates the value of the property. This is just a shortcut for having it in the element content. The Semantic context is not changed. And example might be all HTML tags, to make it simple to include RDF in HTML documents (and extract it). The RDF parser can deduce nothing about the element or its contents, unless it knows the semantics of the element. Example: <sense:room-temperature/> RDF-Opaque tags are understood by parsers conforming to the namespace they are in. In the toolbox we will introduce new features which, while they indeed be expressed longhand in the existing XML-RDF notation, in practice need to available in a more concise form at a high level. These are therefore extensions to the RDF-XML syntax for logic. Example: <not> RDF-Opaque tags in the RDF space are understood by conforming parsers. Other tags are assumed to be property elements if there is subject defined (default or otherwise) and otherwise RDF-Transparent (by default) or Opaque (if specified). Information as to whether tags are RDF-Opaque may be given in the document using them or in a schema (or indeed in principle anywhere else). It may be done element by element, or if applicable, to an en entire namespace. This syntax was written as have something for examples, and part of the purpose of this is a feasibility of writing logic in XML. I apologise to the reader for the effort required to work in a strange syntax. There was later a call for a simpler syntax and so this was cleaned up a little as a strawman. Sometimes the effort of creating an element just in order only to define the subject for a following assertion is a bit heavy. Making a standard well-known and mandatory understood attribute would make this easier. Suppose, for example that rdf:about=foo always sets the thing to which a contained property element refers by default, and rdf:for=bar overrode it for the element itself. ( rdf:for would also imply that the element was an RDF property) <dc:author rdf: is an easier way of specifying a single property. <frontm rdf: <z:date>sdfghjk</z:date> <z:title>Makeing more pancakes</z:title> <z:obsoletes> <!-- default subject is no longer theBook --!> <z:title>Making pankakes</z:title> <z:price>$3.00</z:price> </z:obsoletes> <z:price>$6.00</z:price> <z:price$78.00</z:price> </frontm> (The only problem I have with rdf:about and rdf:for is that it becomes mandatory for any semantically aware parser to be able to handle this, as ignoring it is of course impossible.) When one wants to introduce information about an RDF node, this is basically done by any element with an rdf:about attribute. When there is no other element which conveniently provides a placeholder, the rdf:description element may be used. <rdf:description rdf: <dc:author>Ralph</dc:author> <http:from>swick@w3.org</http:from> </rdf:description> If the rdf:about attribute is present it indicates that the node represents a resource (document) whose URI is that give. That attribute may be omitted. There are times when using an XML element name for a property may be difficult or impossible, such as when there are many properties to be listed, each from different namespaces, or when the property must take the value of the variable. (Yes, I understand this takes RDF out of first order logic but our ability to quote statements and refer to them I think makes that step anyway). <rdf:property This is also useful as a serialisation syntax when dumping the output of a parser, for example. See also Identifying things in RDF There are two ways to put RDF into HTML using these conventions. One could declare that all HTML elements are RDF-transparent, in which case RDF can be stuck in anywhere. One could bring them closer, so that the RDF subject is set to appropriate URI by convention by declaring them (in RDF schema code inserted into the XHTML schema) to be opaque. In this case, I would propose that HTML's HEAD (or maybe even the HTML document container) be considered as a Node element whose context is the document itself. I would propose that A switch context to the destination of the link - as one often wants a neat way of putting in information about it. Examples: References See also: So much for syntax: on to the semantic toolbox.
http://www.w3.org/DesignIssues/Syntax
CC-MAIN-2014-10
refinedweb
1,110
52.49
This article has three parts: 1) Data lakes 2) EMR data lake solution 3) Customer case studies Alibaba Cloud E-MapReduce is designed to leverage the Alibaba Cloud ecosystem, which is 100% open source and provides enterprises with stable and highly reliable big data services. Initially released in June 2016, EMR's latest version is 4.4. With EMR, use more than 10 types of ECS instances in order to create auto scaling clusters in minutes. EMR supports OSS, and its self-developed Jindo FS greatly improves the performance of OSS. At the same time, EMR integrates with the Alibaba Cloud ecosystem, for example, DataWorks and PAI can be seamlessly connected in EMR. You can also use EMR as a computing engine to compute stored data for storage services, such as log service and MaxCompute. All EMR components are Apache open source versions. With the continuous upgrade of the community version, the EMR team will make a series of optimizations and improvements in application and performance for components such as Spark, Hadoop, and Kafka. EMR adopts semi-hosted architecture. Users can log on to the ECS server node in the cluster in order to deploy and manage their own ECS servers, providing a very similar experience to that in the on-premises data center. It also offers a series of enterprise-grade features, including alerting and diagnosis at the service level for host jobs as in APM. It also supports MIT, Kerberos, RAM, and HAS as authentication platforms. Ranger is also used as a unified permission management platform. The following figure shows the overall open-source big data ecosystem of EMR, which covers both software and hardware. Several planes are involved here. For example, JindoFS is based on the storage layer (OSS). JindoFS is a set of components developed by the EMR team. This component is used to accelerate the reading and computing of OSS data. In actual comparative tests, the performance of JindoFS is much better than that of offline HDFS. Delta Lake is a technical computing engine and platform for open-source data lakes. The EMR team made a series of optimizations based on the deployment of Delta Lake in Presto, Kudu, and Hive, and significantly improved performance compared to the open-source version. The Flink of EMR is the enterprise version of Verica and provides better performance, management, and maintainability. EMR consists of four node types: master, core, task, and gateway. Services such as NameNode, ResourceManager, and Hmaster of Hbase are deployed on master nodes to achieve centralized cluster management. You may enable the high availability (HA) feature when creating a production cluster to automatically create high availability clusters. Core nodes mainly accommodate the Yarn NodeManager and HDFS DataNode. From this perspective, core nodes can perform both computing and storage. For data reliability, core nodes cannot implement auto scaling and spot instances. Only a NodeManager is deployed on the task node. Therefore, scale the data lake accordingly. When all user data is stored in OSS, use the auto scaling feature of task nodes to quickly respond to business changes and flexibly scale computing resources. Also, use ECS preemptible instances to reduce costs. Task nodes also support GPU instances. In many machine learning or deep learning scenarios, the computing period is very short (and once in a few days or weeks). However, GPU instances are expensive, so manual scaling of instances greatly reduce costs. Gateway nodes are used to hold various client components, such as Spark, Hive, and Flink. Departments can use different clients or client configurations for isolation. This also prevents users from frequently logging on to the cluster and performing operations. JindoFS It has been more than 10 years since HDFS was launched. Its community supporting functions are relatively mature and perfect. However, it has some defects. For example, the architecture of HA is too complex (if HA is required, JournalNode and ZKFC must be deployed). When a cluster is too large, the Federation of HDFS is used. When the scale of operation is large, the DataNode-Decommission cycle is also very long. If the host fails or the disk fails, the node needs to be offline for a period of up to 1-2 days, even requiring special personnel to manage the DataNode-Decommission. Restarting a NameNode may take half a day. What are the advantages of OSS? OSS is service-oriented object storage in Alibaba Cloud with very low management and O&M costs. OSS provides multiple hierarchical data storage types (such as standard object storage, infrequent access storage, and archive storage). OSS effectively reduces user costs. Users do not need to pay attention to NameNode and Federation (because they are service-oriented), and the data reliability is very good (reliability of 11 consecutive nines). Therefore, many customers use OSS to build enterprise data lakes. OSS is typically characterized by high openness. Almost all cloud products support OSS as backend storage. OSS also has some problems. In the beginning, OSS was mainly used to store data in big data scenarios in conjunction with business systems. Because OSS is designed for general scenarios, performance problems are encountered when it is adapted to big data computing engines (Spark and Flink). When a rename operation is performed, the move operation is actually performed and the file is really copied. OSS is unlike the Linux file system, which is fast enough to complete the rename operation. List operation requests all objects. When there are too many objects, the speed is extremely slow. The eventual consistency cycle is relatively long. When data is read or written, data inconsistency may occur. JindoFS is developed based on the open-source ecosystem. You may use JindoFS to read data from OSS and query data in almost all computing engines. On the one hand, JindoFS delivers the advantages of OSS: storage of EB of data (level). JindoFS also offers high flexibility: When you use OSS semantics, all the computing engines such as other computing services or BI report tools can obtain data quickly. JindoFS is a generic API. JindoFS is widely used in the cloud. When processing data in HDFS and OSS, it avoids performance problems with performing rename, list, and other operations on files. The following figure shows the architecture of the JindoFS. A namespace is the master service, and storage is the slave service. The master service is deployed on one or more nodes. The slave service is deployed on every node. The client service is deployed on each EMR machine. When data is read or written, the system first sends a request to the master service through the slave service to obtain the location of the file. If the file does not exist locally, the file is obtained from the OSS and cached locally. JindoFS implements HA architecture. Local HA is implemented through RocksDB, and remote HA is implemented through OTS. Therefore, JindoFS can achieve both performance and reliability. JindoFS uses Ranger for permission management and design. Use JindoFS SDK to migrate data from on-premises HDFS to OSS for archiving or using. JindoFS supports block storage and cache modes. If you use JindoFS in the block storage mode, its source data is stored in the local RocksDB and remote OTS. The Block mode delivers better performance but is less universal. Customers may only use the source data of JindoFS to obtain the location and detailed information of file blocks. JindoFS in the block storage mode also allows you to specify hot data, cold data, and warm data. Moreover, JindoFS can effectively simplify O&M. The cache mode uses local storage, and semantics are also based on OSS, such as oss: /bucket/path. The advantage of the cache mode is its universality. This mode is used not only in EMR but also in other computing engines. Its disadvantage lies in its performance. When a large amount of data is involved, the performance is relatively poor compared to the block storage mode. You may select the modes based on business requirements. Auto Scaling EMR supports auto scaling based on time and cluster load (Yarn metrics are collected and can be manually specified). When you use auto scaling, select multiple recognition types to avoid job failures caused by insufficient resources. Also, use preemptible instances to reduce costs. In this article, we will discuss how you can use EMR Spark Relational Cache to accelerate the query process by quickly extracting the target data from a cube that contains a large volume of data. Specifically, you can do so through things such as columnar storage, file indexing, and Z-order. With these, again, you can quickly identify your target data by using filtering to greatly reduce the actual I/O data volume, avoid I/O bottlenecks, and optimize the overall query performance. We will look at how you can use file indexing to improve efficiency by narrowing the scope of queries. Following this, we will discuss what file indexing does and what it means for you. If the total data volume is quite large, the number of files to be stored will also be high. In this case, even if we can get better filtering results by using the footers of Parquet, we may still need to start certain tasks to read these footers. In fact, in an actual implementation of Spark, the number of footer reading tasks is normally similar to the number of files. Therefore, scheduling these tasks can be time-consuming especially when cluster computing resources are limited. Therefore, to tackle this issue and further reduce the scheduling overhead of Spark jobs and improve execution efficiency, you can index files. File indexes are similar to independent footers. With file indexes, you can collect the maximum and minimum data values of each field in each file in advance, and then store these values in a hidden data table. By doing this, you will only need to read the independent table and perform filtering at the file level to obtain the target file. This is an independent stage, because a file corresponds to only one record in this hidden table. Therefore, the number of tasks that are required for reading the hidden table would be much less than the overhead of reading footers of all data files, and following this, the number of tasks in subsequent stages can also be significantly reduced. In access scenarios with Relational Cache, the overall acceleration effect would this solution would be quite obvious. EMR is an all-in-one enterprise-ready big data platform that provides cluster, job, and data management services based on open-source ecosystems, such as Hadoop, Spark, Kafka, Flink, and Storm.). This topic describes how to use MapReduce to read and write data in JindoFileSystem (JindoFS). This topic describes how to migrate data from Hadoop Distributed File System (HDFS) to JindoFileSystem (JindoFS) that stores data in Object Storage Service (OSS). What Is Domain Resolution and How It Works What Is Static and Dynamic IP 2,631 posts | 623 followersFollow Alibaba Clouder - August 10, 2020 Alibaba EMR - May 7, 2020 Alibaba EMR - November 4, 2020 Alibaba EMR - July 19, 2021 Alibaba Clouder - November 17, 2020 Alibaba EMR - May 14, 2021 2,631 posts | 623 followersFollow A Big Data service that uses Apache Hadoop and Spark to process and analyze dataLearn More
https://www.alibabacloud.com/blog/e-mapreduce-building-a-cloud-data-lake-and-accelerating-query-speed_597612
CC-MAIN-2021-39
refinedweb
1,877
55.44
python build jelly - a simple, extensible pythonic build framework Project DescriptionRelease History Download Files Python Build Jelly New just added zsh completion! found in the file zsh.sugar Anyway, PBJ is a simple, extensible pythonic build framework, whose purpose is to be dead simple for the basic cases. Here’s an example: from pbj import Builder, cmd import os build = Builder("PJs") build.cmd("jstest", ("js", "test/runtests.js")) build.clean("build", "test/py/*.js") @build.file("build/pjslib.js", depends="jslib/*.js") def jslib(name): text = cmd("cat", "jslib/*.js") if not os.path.exists("build"): os.mkdir("build") open("build/pjslib.js").write(text) if __name__ == "__main__": build.run() Cool things: targets are classes, and decorate functions. And…this project is just starting out, so I’ll fill the rest in later. Included: disttest - a drop-in plugin to add a “setup.py test” for distutils Cheers. Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pbj/
CC-MAIN-2017-51
refinedweb
170
61.73
My projects aim is to create an application that do syntax checking of javascript and html. I am using Microsoft.Jscript reference to help in the checking. Here is the code that I used to get the .js file and do a compilation so that it can generate errors if there is: private JScriptCodeProvider provider = new JScriptCodeProvider(); private CompilerParameters parameters= new CompilerParameters(); public void Compile(string[] files) { foreach (string file in files) { using (StreamReader reader = File.OpenText(file)) { string source = reader.ReadToEnd(); CompilerResults results = provider.CompileAssemblyFromSource(parameters, source); foreach (CompilerError error in results.Errors) { Messagebox.Show(error.ErrorText)//just an example of showing the error. } } } } One problem is that I have to do additional features to my application like ensuring no eval function is present in the file. Can anyone tell me how to do so, or if there's a method in the Microsoft.Jscript namespace to do that, did some research on its library and can't seem to find. I was thinking of tapping in the namespace and try to access its list of functions(for e.g. for, function, if ,else, do while, eval). So if there is an eval function in the list, throw out the error. There is an Eval object in the namespace that I see can be a potential answer to my problem, but how do you make use to detect an eval function in the .js file? Appreciate Thanks.
https://www.daniweb.com/programming/software-development/threads/297693/getting-a-list-of-jscript-functions-or-objects-using-c
CC-MAIN-2018-34
refinedweb
238
65.93
ViralPatel.net ‘s primary goal is to deliver quality tutorials and articles on different programming methodologies and latest technologies. You can join us and send your articles/tutorials on any technical topic. We will publish it on our site and will link to your site and provides your credential on the article. You will get the benefit of huge user base of our site. Why to write for us? Few points that you can consider for writing articles on this site. - Traffic to your site with a potential of converting them as your readers. - You also gain access to our site’s geeky and super-duper tech savvy readers. - Share your knowledge with the others through your article. - Get popular and promote your blog/website to large community of users. Simple guidelines for writing articles Our main interest is the programming and technology. What are concrete topics I write about? The list of tags in the upper area of the page offers an overview of the subjects covered in this site . However, we prefer the following topics: - Topics: Java, Struts, Spring, J2EE, PHP, JavaScript, CSS, Linux, Database, Network, Hardware, Security, Gadgets, Programming, or Open source applications or ANY TECHNICAL TOPIC - Format: Text, MS-Word, OpenOffice format, PDF, PPT etc - Add a couple of line about yourself and link to your blog/website or project at the bottom of the article. - Send us your Article in above mention formats via email at viralpatel.net@gmail.com and we will try to publish your Articles within 12 hours.. Further reading: Why Guest Blogging is a Powerful Way to Gain Exposure for Your Blog. Contact us at viralpatel.net@gmail.com for any questions you may have on sending tips or writing articles for viralpatel.net. Very useful blog for developers Very very useful blog site for professionals and learners This s my most likely site. Very Very Useful web site for Professionals & Learners. nice to give comment I think no site is there like this providing the best services for J2EE develpment & training Is good site for java developers nice work….really it help me a lot….well done…. Please integrate twitter and email subscription in to this site so that we can get alert regarding of new posts instantly. @Mat: We are already on Twitter, Facebook and Google+. Also you can subscribe via email; see “subscription” section in right sidebar. really usefull information! Thank you very much…it was very help full :D So nice, very useful information Useful tutorial best tutorial It is so much helpful for the learning students who having no guide line of any guide(faculty) hi viral, I need a Spring mvc project with jdbc connection as a web application. need to use jsp page. need to get a sql query from text area with a button. result should fetch from database. please help me out.thanks in advance. I am creating my project on voice implementation in java using sphinx API. Can you please help me how to begin? Hi Viral, I want to submit some articles on you site. Can you please tell me about the format of article you are accepting? Thanks a lot this tutorial and this site are perfect Hi Viral, I have gone through your tutorials and articles. They are just great. I am working on creating website in PHP with admin panel and through this panel I want to let the user upload image that directly publish on the webpage. Please guide me further Thanks Ankur Patel it worked thanku very much good site for taking the help in java I want to join this for more details about this. Nice website to learn about New technologies and research in IT…….. Hello All, I am designing the base layout in Spring base layout with table, and other jsp’s are in table with the css for color + pagination, but it is not compatible with low resolution screens, give me suggestions can i start to use DIV’s instead table tr td…?? Thanks in advance …:) *_* Hello Viral, I have a requirement where I have to create excel file as an output with cell value and count as say 1 based on some input file data. Also I have a requirement to update existing excel file if I have found one more occurence of same record in input file. In this case I should update same record in excel by incrementing count as 2. can you please help How to found which class or method contain specific annotation? Suppose we are having a class with separate user define annoation in same package like @One class A { } @Two class B { } @One class C { } How will I find which class is using @one annotation and which @Two? OR We are having a class in which methods are annotated and i want to retrieve which annotation is having which annotation like : class A { @two public method1() { } @one public method2() { } @two public method3() { } } Any other idea to do something like this will be appreciated. Any help will be appreciated I am using core java reflection and annotations. Original Scenario:- Suppose i am having a class testcases having GUI ,Functional and others these three categories of testcase. i want to found method or class according to category and execute that method or class which ever is possible. How to take the screenshot of my application using Java code when user is using the same from his/her system. We are using JSF 1.2 Framework in our application. Is der any way to take the screenshot?? Please suggest solution.. Hi Sir, HI Sir, I hv create a sp which contains one select stmt and one update stmt in oracle and i have to access it through my java class .But i am getting some errors . I mention here my .java file,procedure and table structure and the error file. Please make some necessary correction and send me updated code. Please respond this post as soon as possible. The details are as follows:- StoredProcedure.java ——————————– import java.sql.*; class StoredProcedure { public static void main(String[] args)throws Exception { try { Class.forName(“oracle.jdbc.driver.OracleDriver”); Connection con = DriverManager.getConnection(“jdbc:oracle:thin:@localhost:orcl”,”scott”,”tiger”); CallableStatement cst = con.prepareCall(“{call addinterest(?,?)}”); cst.registerOutParameter(2,Types.FLOAT); cst.setInt(1,123); cst.execute(); float accbal = cst.getFloat(2); System.out.println(“Modified balance is rs:” +accbal); cst.close(); con.close(); } catch (Exception e) { e.printStackTrace(); } } } oracle table structure ——————————- CREATE TABLE ACCOUNTS ( ACCNO NUMBER(10), ACCNM VARCHAR2(10 BYTE), ACCBAL NUMBER(7,4) ) Procedure ————— CREATE OR REPLACE PROCEDURE addinterest( ano IN accounts.accno%TYPE, bal OUT accounts.accbal%TYPE) IS BEGIN select accbal into bal from accounts where accno=ano; bal:=bal+bal*0.5; update accounts set accbal=bal where accno=ano; end; and the error is ——————— java.sql.SQLException: ORA-01403: no data found ORA-06512: at “RETDEV.ADDINTEREST”, line 8 ORA-06512: at line 1 at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70) at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:112) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:173)30) at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:191) at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:944) at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1222) at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3381) at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3482) at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:3856) at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1373) at com.per.sub.StoredProcedure.main(StoredProcedure.java:20) Hi, I am new to stored procedure. I have one table and i have written a stored procedure. My doubt is how to execute the below stored procedure. Please help me as soon as possible. Please help me how to execute the stored procedure in oracle. Hi, I have 2 files having some records/contents. My requirement is that i have to find out the common records as well as non matching records. How can i get the answer ? Please help me. Hi, I have an applet which have some drop dwon fields and text fields and one button which are interlinked to each other.i.e when i select one value from one field then another field will polulated. i.e the data came from the database and populated in the corresponding field. Requiremnt:- When i select a value from the drop down list then the button field will blinking . I have to stop the blinking. How can i do it ? Please help me. Superb site help me lot i Understanding of Spring MVC Superb site … Help me lot in Understanding of Spring MVC Yeah, a lot of good stuff! Can u suggest spring with angulerjs sample example using jsp file. informative site Kindly feedback more info about java/ eclipse. . I have seen some of the Oracle Article…just want to say Superb…gr8 study material… I am using Struts2 for developing web application.In configuration files like struts.xml, is used. In Network protocol analyzer tool is is found that application makes call to “struts.apache.org”, When application makes call to “struts.apache.org” ? and how to prevent makes call for the same? where i can find netbeans tutorial ? concatenate two strings without using strcat() in java nice Hello Sir, i am new to Spring and Hibernate. I want to integrate Spring 3.0 with Hibernate 3.2. But i do not have any idea how to do that. Sir if and only if possible can you provide me a simple example Spring with Hibernate using MVC architecture with annotation. Its humble request sir. Thank You. I am trying to develop web application using… Java, but i want to know how in real time URL mapping happens.. For example if we use CODEIGNITOR FRAMEWORK for PHP.. we get “route.php” there we can configure the URL’s ex::$route[‘lp/hospital’] = “commoncontroller/getPage”; when we hit it goes to commoncontroller class and getpage function, The same way how can we do it for Java based web application. PLz let me help Hi Viral,I want to implement LDAP authentication on Apache tomcat server.I tired with information provided in your blog ,but it’s not working.Plzzzzz can you help on this. viratpatel.net i was change my password but i cant sign in and it shows wrong password.I dont want to lost my account so can you please help me to get my account back very useful Hi Please any one help how to create a nested tabs, if any idea please share me. HI Hi Viral, Hope you doing good, thank you so much for making such a informative articles i learned lot from your website. it will be good if your team start writing about react.js and angular2 Thanks very good Hello Mr Viral, Nice article on service/factory. But i have still doubt , can you please help me out to understand difference between these terms: service and factory?.it will be very helpful for me if you do this for me. Very useful blog for Java Developers and Students
http://viralpatel.net/blogs/join-us/
CC-MAIN-2017-47
refinedweb
1,859
58.79
Tuple<T1, T2, T3, T4, T5, T6>.IComparable.CompareTo Method (Object) Compares the current Tuple<T1, T2, T3, T4, T5, T6> object to a specified object and returns an integer that indicates whether the current object is before, after, or in the same position as the specified object in the sort order.> instance is cast to an IComparable interface. This method provides the IComparable.CompareTo implementation for the Tuple<T1, T2, T3, T4, T5, T6> class. Although the method can be called directly, it is most commonly called by the default overloads of collection-sorting methods, such as Array.Sort(Array) and SortedList.Add, to order the members of a collection. The IComparable.CompareTo(Object) method uses the default object comparer to compare each component. The following example creates an array of Tuple<T1, T2, T3, T4, T5, T6> objects that contain population data for three cities in the United States from 1960 to 2000. The six components consist of the city name followed by the city's population at 10-year intervals from 1960 to 2000. The example displays the components of each tuple in the array in unsorted order, sorts the array, and then calls the ToString method to display each tuple in sorted order. The output shows that the array has been sorted by name, which is the first component. Note that the example does not directly call the IComparable.CompareTo(Object) method. This method is called implicitly by the Sort(Array) method for each element in the array. using System; public class Example { public static void Main() { // Create array of sextuple with population data for three U.S. // cities, 1960-2000. Tuple<string, int, int, int, int, int>[] cities = { Tuple.Create("Los Angeles", 2479015, 2816061, 2966850, 3485398, 3694820), Tuple.Create("New York", 7781984, 7894862, 7071639, 7322564, 8008278), Tuple.Create("Chicago", 3550904, 3366957, 3005072, 2783726, 2896016) }; // Display array in unsorted order. Console.WriteLine("In unsorted order:"); foreach (var city in cities) Console.WriteLine(city.ToString()); Console.WriteLine(); Array.Sort(cities); // Display array in sorted order. Console.WriteLine("In sorted order:"); foreach (var city in cities) Console.WriteLine(city.ToString()); } } // The example displays the following output: // In unsorted order: // (Los Angeles, 2479015, 2816061, 2966850, 3485398, 3694820) // (New York, 7781984, 7894862, 7071639, 7322564, 8008278) // (Chicago, 3550904, 3366957, 3005072, 2783726, 2896016) // // In sorted order: // (Chicago, 3550904, 3366957, 3005072, 2783726, 2896016) // (Los Angeles, 2479015, 2816061, 2966850, 3485398, 3694820) // (New York, 7781984, 7894862, 7071639, 7322564, 8008278) Available since 8 .NET Framework Available since 4.0 Portable Class Library Supported in: portable .NET platforms Silverlight Available since 4.0 Windows Phone Silverlight Available since 8.0 Windows Phone Available since 8.1
https://msdn.microsoft.com/en-us/library/dd784378.aspx
CC-MAIN-2018-05
refinedweb
439
57.67
A Domain object Customer implemented in a concrete class Customer, we define a Customer with an interface ICustomer. In many traditional DDD implementations, where a sort of anemic domain model is used, all logic on certain entities is placed is so called services. This brings a few problems with it, especially in terms of maintenance. This is explained very well in this talk from Jimmy Bogard The solution to this problem is containing business logic in the domain models itselve. This is a very good theory, but in practice we see that this is very hard to align with the modern ORM’s. When it comes to hosting a WCF, NSericeStack service, we want to reuse this Domain model, but not necessarily transfer all the business logic to the client/server side. This is were my idea comes in. I wanted to create an extensible platform in which a domain model is merely a contract defining which data an entity should hold, and which functionality it exposes. The concrete implementation of these interfaces is something the developer doesnt have to deal with. Those are generated at runtime by MData internals. Now one of the main problems with this approach is that interfaces typically do not contain ‘real’ code. What I mean by this is that an interface never supplies an implementation for let’s say a method (nor any other member). So to make it possible for the developper to actually write his ‘domain logic’, I created the notion of a LogicBase<T>, where T is the interface it supplies the logic for. This is a zero/one to one relationship, meaning that for one given interface (domain model) there can at most be one LogicBase implementation. If no implementation was provided, the system uses the default generic LogicBase class. LogicBase<T> LogicBase Now this blog-series will cover how I implemented the MData framework step by step, I’ll try to list the benefits and liabilities, and show some interesting use cases for this framework. By the end of the series we will be able to write this code: private static void Main(string[] args) { //create EF CF context using (DomainContext t = new DomainContext()) { //all CRUD operations work ICustomer customer = t.Customers.Create(); t.Customers.Add(customer); ICustomer firstCustomerEver = t.Customers.FirstOrDefault(x => x.Id == 1); t.SaveChanges(); } } public class DomainContext : MDbContext { public MDbSet<ICustomer> Customers { get; set; } } In the meanwhile all code for MData is publicly accessible on github This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) Math Primers for Programmers
http://www.codeproject.com/Articles/415121/MData-The-idea
CC-MAIN-2013-20
refinedweb
434
50.46
Here’s my code , works perfectly fine in my IDE like every other code which i write but i always always get NZEC error in at input point in code chef IDE. I tried Scanner i got same result , i used BuffererReader i get same result . It has really become frustrating for me now as i am not able to submit most of the problems which i was solve correctly. While when i checked solution , so many others have used int test with BufferedReader and they were able to submit the code. I request someone to pls pls pls help me out. Here’s my solution of code chef code -> Splitit public class Main { public static void main(String[] args) throws IOException { BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); final int test = Integer.parseInt(br.readLine()); for (int i = 0; i < test; i++) { boolean check = false; int N = Integer.parseInt(br.readLine()); String s = br.readLine(); for(int j = 0 ; j < N-2 ; j++){ for(int k = j+2 ; k < N ; k++){ StringBuilder stringBuilder = new StringBuilder(s); String toAppend = stringBuilder.substring(j,k); stringBuilder.replace(N-toAppend.length() , N , toAppend); if(stringBuilder.toString().equals(s)){ check = true; break; } } if(check){ break; } } if(check){ System.out.println("YES"); }else { System.out.println("NO"); } } } }
https://discuss.codechef.com/t/i-always-end-up-getting-nzec-error-whether-i-use-scanner-or-bufferedreader-it-doesnt-matter/80300
CC-MAIN-2021-10
refinedweb
213
66.23
17 September 2009 12:00 [Source: ICIS news] ?xml:namespace> CRUDE: October WTI: $72.58/bbl, up $0.07/bbl. November BRENT: $71.65/bbl, down $0.02/bbl Crude prices were range bound either side of Wednesday’s close, which saw prices rise by well over a dollar after the weekly US Stock figures revealed a large, unexpected draw on crude stocks. NAPHTHA: Open spec spot cargoes were assessed in a $623-633/tonne CIF (cost, insurance and freight) NWE (northwest Europe) range, up by $7/tonne CIF NWE on the buy side of the range set at the end of trading on Wednesday. October swaps were pegged at $623-624/tonne CIF NWE. BENZENE: On Wednesday, two September benzene deals were done at $807-810/tonne CIF ARA ( STYRENE: A September styrene trade was done on Wednesday at $1,105/tonne FOB (free on board) TOLUENE: The market remained quiet, with no firm bids or offers heard. The range was notionally pegged at $760-800/tonne FOB MTBE: Bids and offers were heard at a factor of 1.13-1.15 against gasoline on Thursday morning, unchanged from Wednesday’s levels. Gasoline traded at $659-663/tonne XYLENES: The paraxylene market remained quiet, with no firm buying interest heard. The range remained stable at $840-910
http://www.icis.com/Articles/2009/09/17/9248067/noon-snapshot-europe-markets-summary.html
CC-MAIN-2015-06
refinedweb
218
73.68
Learning Resources for Software Engineering Students » Authors: Jeremy Goh, Yong Zhi Yuan Reflection is the ability of a computer program to examine, inspect and modify its own behaviour at runtime. In particular, reflections in Java allows the inspection of classes, methods and fields during runtime, without having any knowledge of it during compile time. With Java reflections, you can: Before getting started with reflections in Java, it is important to realize that a class is also an object. From the Java Class API, we see that Class is a subclass of Object. Every unique Object is assigned an immutable Class object by the JVM. This immutable Class object is fundamentally different from instances of a class. The class object itself holds information such as its name and the package it resides in while an instance of a class holds the instanced values and methods as defined in the class. Take for example the following class: public class Student { private final String name; private final String gender; public Student(String name, String gender) { this.name = name; this.gender = gender; } // Other methods here... } An instance of Student class can be created as usual using the new keyword: Student john = new Student("John Doe", "Male"); However, it is also possible to get information about the Student class itself because the class itself is an Object: Class<Student> studentClass = Student.class; This means that you can store any Class object in any data structure for future retrieval. This is the main entry point for Java's reflections. You can now get the name of the class, create new instances of the class, observe its public/private fields - the possibilities are endless! There are many webpages dedicated to explaining the details of reflections in Java; so this will not repeat what is being made readily available on the web. One good place to start is this article by JavaWorld. One important point is that while Java reflections are powerful, its implementations are not very straightforward. There are however some libraries out there such as the Google's Guava library which contains many utility methods and classes that makes our life easier. A good example of reflections is to get a private field of another class. While this should optimally be solved by modifying the field visibility to protected or public, sometimes it is not possible to do so because you might not have any access to the code (for example codes in libraries or frameworks). For the sake of simplicity, let us use a example of a simple Animal class. The class definition can look like this. public class Animal { private int age; public Animal() { age = 0; } } And let us assume that you, for some reason, cannot modify this code. But you are interested in making a new class Sheep that extends Animal and do something when his age reaches certain threshold. But the annoying thing is that somebody decided that it is a good idea to make the age value private (and without a getter method!) instead of protected in a top-level class such as this! So you cannot access the age of your Sheep even though it is an Animal. This of course can be solved by Reflection as follows: public class Sheep extends Animal { public Sheep() { super(); } /** * A sheep begins to produce wool after it's a year old! */ public boolean isProducingWool() { return getAge() > 1; } private int getAge() { // Since class is a reserved keyword, we use clazz in variable names Class<Animal> animalClazz = Animal.class; try { Field f = animalClazz.getDeclaredField("age"); f.setAccessible(true); return (int) f.get(this); } catch (IllegalAccessException | NoSuchFieldException e) { // perform error handling } } } And there you have it! What Sheep is really doing is to examine itself at runtime in order to obtain the age field inherited from Animal. This technique can be used in test cases to access private fields and methods in the class under test without modifying the visibility modifiers of the fields and methods in the class itself. You may notice that the Sheep#getAge() method sets the age Field object to be accessible and might wonder the implications. Fret not! The Field#getDeclaredField() actually returns a new Field instance - so you're just setting that particular local Field instance to be accessible, not the actual age field itself. You can read more about it in this StackOverflow question. Do take note that two exceptions need to be handled when accessing fields: IllegalAccessException, which occurs if the field is private and you did not set the accessibility modifier to be true (e.g. f.setAccessible(true)). NoSuchFieldException, which occurs if the field with the specified name (e.g. age) does not exist. Suppose that you need to write tests for Sheep. As part of setting up the test, you need to create a sheep with age = 20. Suppose that the age of the sheep is updated automatically as time passes, whereby the age increases by 1 after every minute. A naive way of creating a sheep with age = 20 is to simply wait for 20 minutes before performing the test: public void foo() throws Exception { Sheep sheep = new Sheep(); TimeUnit.MINUTES.sleep(20); // perform test } Alternatively, a much simpler and efficient way to perform this test is to set a value to the private field using Reflection: public void foo() throws Exception { Sheep sheep = new Sheep(); Field field = Animal.class.getDeclaredField("age"); field.setAccessible(true); field.set(sheep, 20); // perform test } Suppose you want to perform a unit test for the method getAge(). However, you are only able to indirectly do so by testing isProducingWool(). This is not good as we are not able to directly verify the age of a sheep. However, with the help of Reflection, we can now test private methods. public void foo() throws Exception { Sheep sheep = new Sheep(); Method method = Sheep.class.getDeclaredMethod("getAge"); method.setAccessible(true); int age = (int) method.invoke(sheep); // verify age } You might have learnt from your Software Engineering module that the Observer pattern can be used for objects that are interested to get notified if the state of another object is changed. The Observer pattern is useful because you can avoid creating bidirectional dependencies between two unrelated objects that have no business talking to each other while allowing the objects to be notified of any changes in another object. One prime example of the implementation of the Observer pattern is the Google Events bus used in AddressBook Level 4. The event bus uses reflections to observe all registered objects via register method for methods annotated with the Subscribe annotation. An example implementation (not the actual) of the Subscribe annotation might be: // Retain this annotation at runtime! (ElementType.METHOD) // Only can be applied to methods! public Subscribe { }(RetentionPolicy.RUNTIME) And that is it! Important parts of the code to note are the first two lines before the declaration. The first line tells Java that this annotation must not be discarded during compile time so it will be available during runtime. The retention policy is there because some annotation do not mean anything after compilation (such as Override and SuppressWarnings), so it does not make sense to keep the annotation after compiling. The second line just means that this annotation can be applied to methods only. And the more important part is how the subscriber registry finds all its subscribing methods. The first step is to register a class as an event handler and an example of the code is like so: public class Sheep extends Animal { private static final EventsCenter EVENTS_BUS = EventsCenter.getInstance(); public Sheep() { super(); EVENTS_BUS.register(this.getClass()); } public void handleWeatherChangeEvent(WeatherChangeEvent event) { if (event.weather == Weather.RAIN) { hide(); } } ... } An example implementation of the EventsCenter (with a lot of details left out for simplicity) is like so: public void register(Class<?> clazz) { findAllEventHandlersInClass(clazz); } public void findAllEventHandlersInClass(Class<?> clazz) { // TypeToken class is provided by Google Guava reflection library Set<? extends Class<?>> supertypes = TypeToken.of(clazz).getTypes().rawTypes(); for (Class<?> supertype : supertypes) { for (Method method : supertype.getDeclaredMethods()) { if (method.isAnnotationPresent(Subscribe.class) { registerSubscriber(method); } } } } The first line of the findAllEventHandlersInClass method finds all classes and its parent classes of the registered class and converts it to a set. That is if you registered Sheep extends Animal as an event handler to the method, both Sheep and Animal will be captured by the first line. The following lines will then examine all their methods (during runtime!) for the Subscribe annotation and register the method so that it will receive the specified event when it is fired. Of course this implementation leaves out a lot of details but you get the idea of how Java reflections works. While Java reflections are powerful, you should not immediately jump on the reflections ship. This is because there are some drawbacks whenever you use reflections in your project. The following are some points you should consider before using Java reflections: Reflections convert a compile-time error to a potentially destructive run-time error. Compile time errors are easy to catch. Whenever you compile your code, the compiler cleverly spots any error you missed and points it out (along with line number and other useful information) to you before quitting. But by using reflections, you are bypassing these checks because there is no way to check such errors during compile time. These uncaught errors may cause your program to fail during runtime instead, turning into runtime errors. For example you might have come across this problem where your program crashed and you get a NullPointerException error in your crash log. As you might have experienced already, runtime errors are more troublesome in that they are harder to catch and debug. They might even bring your whole software under the water with it by crashing the whole thing. Reflections are harder to understand and harder to debug There is a reason why the topic of reflections is placed under the advanced section. Codes using reflections are fundamentally harder to understand. As mentioned above, it is also harder to debug when the classes might not even be there during compile time. This makes your code very hard to maintain. Poor performance Since reflections resolve types dynamically, performance suffers. This is usually not an issue with small software but you might want to keep it in mind if you want to scale up. Bad Security The second example demostrated a way to access the private fields of a class using reflections. This should be very concerning if your software deals with sensitive information because other classes can access fields that they are not supposed to. Indication of bad class design Having to use reflection in order to bypass a class' encapsulation is usually indicative of an API design problem. We can remove the usage of Reflection in the examples given above by adding a getter and setter method for age. See this post for further discussion. In this scenario whereby we cannot add a getter and setter method for age, we should create our own implementation of Animal class with the getter and setter methods. Introductions to Java reflections with some explanation A short but precise overview of Java reflections Google's Guava reflection library provides some utility methods and classes
https://se-education.org/learningresources/contents/java/JavaReflections.html
CC-MAIN-2019-26
refinedweb
1,868
53.41