Introduction: Arduino Leonardo/Micro as FSX/Flight Sim Panel

Components needed:
- Arduino Leonardo/Micro
- Toggle switch/push button
- 10k resistor for every switch/button
- Hookup wires
- Acrylic glass (optional)
- Glue (optional)

Step 1: Building the Panel (optional)

There are many ways of building flight sim panels. An easy way is to simply download this image:... Print it and glue it to a piece of acrylic glass. Drill the holes for your switches and screw them in place. I also printed a mask on transparent paper to get a better backlight effect; this was made in Photoshop and took some extra time.

Step 2: Wiring the Switches

This is pretty straightforward: connect the switch between 5V and your desired pin. Also add the 10k resistor between the pin and ground as a pull-down, so the input doesn't float when the switch is open. Repeat this process for every switch.

Step 3: Programming the Arduino

Download and install this library in your Arduino libraries folder:... Here's example code for a button connected to pin 9 on the Arduino, acting as "joystick button 1":

#include <Joystick.h>

Joystick_ Joystick;

void setup() {
  pinMode(9, INPUT);
  Joystick.begin(); // Initialize Joystick library
}

void loop() {
  Joystick.setButton(0, digitalRead(9));
  delay(50);
}

Now open up FSX and you should be able to assign your switches in Control settings. Happy flying!

Discussions

2 years ago
Hi, my name is Victor, and I reproduced your project but I can't get the same result as you. I don't know what I'm doing wrong. The switch works on the first input, but when I turn it off, the switch stays on until I go ON again, and then the switch in the simulator goes OFF. Please could you help me? Thanks

3 years ago
Hi Granis, thanks for sharing this. I used this with X-Plane. The only issue I have is when I use a toggle switch, for landing gear for instance. In X-Plane I connected the button to toggle gear. When I toggle on, it does the required action.
But when I toggle off, the landing gear stays in the same position until I flip the switch again to on; then it retracts the gear. What can I change in the code to fix this? Or am I doing something else wrong? Thanks in advance.

Reply, 3 years ago
It's a toggle switch with only two positions (on-off), not a toggle switch with (on-off-on).

Reply, 2 years ago
And how did you wire it? Thanks for the advice.

3 years ago
Awesome Instructable! I do have a few Arduino/microcontroller programming tips: instead of using the delay function, I would use a function that compares when the desired operation last happened to the current time. If enough time has passed, it performs the operation and resets the timing variables. While the Arduino is performing the "delay(xx)" function it will not do anything else, such as read an input. This can cause a button press to go unnoticed. Also, be sure to debounce any button inputs. This is necessary as some button presses may be incorrectly interpreted by the Arduino as multiple presses. When I have time I'll post an example from one of my Arduino projects. I can't wait to get started on one of these myself.

Reply, 2 years ago
Hello. I need some help here. I did the programming as in the instructions, but when I turn on the switch, I have to turn it off and then on again to switch a light ON/OFF in the simulator. Could anyone help me? And sorry about my English. Thanks a lot

3 years ago
Great Instructable!

3 years ago
Neat! I loved playing with flight sims as a kid :)
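Several of the questions above share a cause: the article's sketch mirrors the latched switch state straight into a joystick button, so the sim only sees a press event on one flip direction. One workaround (a sketch I have not run on hardware, assuming the same Joystick library as the article, which provides pressButton/releaseButton) is to send a short button pulse on every state change, so flipping the switch either way fires the sim's toggle binding:

```cpp
// Sketch (untested on hardware): convert a latching toggle switch into a
// momentary joystick button, so FSX/X-Plane receives an event on BOTH flips.
#include <Joystick.h>

Joystick_ Joystick;
int lastState = LOW;   // last observed switch state on pin 9

void setup() {
  pinMode(9, INPUT);   // external 10k pull-down as described in Step 2
  Joystick.begin();
}

void loop() {
  int state = digitalRead(9);
  if (state != lastState) {      // the switch was flipped either way
    Joystick.pressButton(0);     // short pulse instead of a held state
    delay(50);
    Joystick.releaseButton(0);
    lastState = state;
  }
  delay(10);                     // crude debounce
}
```

In the simulator, bind the button to a "toggle" command (e.g. "Landing gear toggle") rather than separate up/down commands, so each pulse flips the state.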
https://www.instructables.com/Arduino-LeonardoMicro-As-FSXFlight-Sim-Panel/
I am sure that after reading this article most issues regarding View State will be cleared up. If you disable the View State of controls, 90% of control features cannot be used; initializing the control in the right place will fix most of the issues. I took inspiration for writing this FAQ after reading David Creed's article about ViewState on the ASP.NET forum. Thanks to him for writing such amazing content :). I have converted his ideas into FAQs, which are easier to read and target the problems developers face on a daily basis.

1) Does any property value we declare in an .aspx or .ascx file go to ViewState? Is this statement correct?

Static data that you assign to a server control does not go to View State. A common misconception is that anything typed into a control's property goes to view state. For example, if I have two Label controls, one with Text="abc" and another with Text="1234", in neither case does the ViewState contain the value of the Text field. The same is true for any control. Here I am talking about the view state of the Label, and hence the view state of the control which contains the Label. Now, if you set the Text property of the Label in Page_Load:

label1.Text = "Abc";

that will go to the ViewState of the control where the Label is declared. So in a nutshell, assigning initial values of controls in Page_Load unnecessarily adds burden to View State. I will explain the reason for this in a later FAQ where I talk about View State tracking. It is very easy to make such mistakes, and once you know what the right way is, it is very easy to fix.

2) As explained in the example above, if the value is not stored in the control's ViewState, then how does a control track its values across postbacks?

First of all, it is a common misconception that a control will not remember its postback data if I set EnableViewState="false". Let us take an example.
Suppose I declare a DropDownList in the page and bind it like this:

protected override void OnLoad(EventArgs args)
{
    if (!this.IsPostBack)
    {
        this.lstStates.DataSource = QueryDatabase();
        this.lstStates.DataBind();
    }
    base.OnLoad(args);
}

The above code is very wrong from a performance point of view. What is happening behind the scenes is that you are sending the whole state list back and forth across the wire in View State. To solve this problem we can set EnableViewState="false" on the DropDownList, get rid of the !this.IsPostBack check, and rebind every time. That is cheaper than binding only when it is not a postback. WAIT!!!! Now if I code it as shown below, the SelectedValue property will reset back to its default in Page_Load:

protected override void OnLoad(EventArgs args)
{
    this.lstStates.DataSource = QueryDatabase();
    this.lstStates.DataBind();
    base.OnLoad(args);
}

With the above code we are no longer saving the state list in View State, but now the DropDownList does not remember its postback value. This is happening because as soon as we rebind, all postback data is gone and the SelectedValue property has its default value. So what is the solution?

Postback data is loaded before Page_Load and after OnInit, so if I bind the control before postback data is filled in, I can still get the selected value. The following code fixes the problem:

protected override void OnInit(EventArgs args)
{
    this.lstStates.DataSource = QueryDatabase();
    this.lstStates.DataBind();
    base.OnInit(args);
}

This solution works fine. Now in Page_Load you will see the right posted values from the postback, even though we are rebinding again and again. In this solution, View State doesn't store the list of states, and at the same time you are getting postback data. So it is a win-win situation. Remember that the SelectedIndexChanged event will not fire if EnableViewState="false".
3) In the above example, if I have a DropDownList and I rebind every time in Page_Load and also keep EnableViewState="true", why am I not getting the SelectedIndexChanged event?

This is a very common problem in code. Move all the binding to Page_Init, before postback data is loaded, and you should be OK.

4) What is ViewState tracking? How does ASP.NET remember changes if it doesn't store the initial values defined in the ASPX file, as explained in item 1?

View State has a method called TrackViewState(). Once this method is called for a control, any change made after that is tracked in View State, in the hidden form field which you see all the time in the View Source window. There are two methods of a Page or Control which are called during the page life cycle: the Page calls LoadViewState during a postback before Page_Load, and it calls SaveViewState before serializing the data into the hidden form field. SaveViewState() of the Page calls SaveViewState of all the child controls inside it. At this point, only data which is marked as dirty in the ViewState bag is serialized.

Now the question is: where does the framework call TrackViewState? Suppose you have this code:

ViewState["Key"] = "abc";
ViewState["Key"] = "yzs";    // Up to this moment these values will not go to the hidden form field.
ViewState.TrackViewState();
ViewState["Key"] = "hhghjg"; // This will be tracked and will go to the hidden form field.

You don't have to explicitly call TrackViewState in your control. It is done automatically during the OnInit phase of the control. So if you set a property in the control's OnInit phase (or earlier), you can avoid having those values tracked in View State. After that, any property you set will go to the ViewState of that control and will be tracked.

Let's suppose I have a UserControl which has a Label, lblDate.
In the UserControl's OnInit event I have the following code:

protected override void OnInit(EventArgs args)
{
    this.lblDate.Text = DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss");
    base.OnInit(args);
}

Does this mean that the Text value of the Label will not go to ViewState, since I set the Label's Text property in OnInit of my user control? If you think the above code is correct, you are wrong. OnInit of the child controls (the Label here) is already called before OnInit of the UserControl is called, so its ViewState is already being tracked.

You might ask: why can't I just disable the ViewState of the Label and get rid of all the problems? That will work, but suppose you have a situation where clicking a button does a postback, and the button's click handler changes the Label text like below:

private void cmdRemoveDate_Click(object sender, EventArgs args)
{
    this.lblDate.Text = "--/--/---- --:--:--";
}

You might think that after the button click you will see the empty date on postback, since View State is not enabled on the Label and you are not saving the default value (the current date) in View State. Well, you are wrong here again. If you disable the View State, then on the next postback you will see the current date again, which is not what you want. What you want is not to keep the original value in View State, but to track any changes thereafter. Unfortunately there is no simple solution. As I explained earlier, the best place is before OnInit of the Label itself is called. If we set the value there, then that value will not be tracked in the hidden field unless we make changes after OnInit of the Label is done. So one solution is:

public class DateTimeLabel : Label
{
    public DateTimeLabel()
    {
        this.Text = DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss");
    }
}

This is just an example with a Label.
This happens for all controls, so it is better to set EnableViewState="false" unless you have a situation like the one above, where you need to track changes but still don't want to track the initial values. Always initialize in OnInit of your control if you want to use the postback values of a Label or any control, as I explained in the DropDownList example above.

5) What is the alternative to Label?

A Label unnecessarily adds a SPAN tag to the HTML form and also has View State enabled by default. In your ASPX or ASCX file you can instead write:

<%= Xyz() %>

This has no issue of adding values to View State. If you write:

<asp:Label ID="jjbb" runat="server" Text="<%= Xyz() %>" />

ASP.NET will give an error. If you instead use a data-binding expression on an asp:Label, the value will be added to View State, because binding happens late in the page life cycle, after OnInit of the Label, which just picks up the default value from the ASCX file.

6) What about dynamic controls?

This is what I got from the web (David Creed's article) on dynamic controls. This is the same problem as before, but since you are in more control of the situation, it is much easier to solve. Let's say Joe has written a custom control that at some point dynamically creates a Label:

public class JoesCustomControl : Control
{
    protected override void CreateChildControls()
    {
        Label l = new Label();
        this.Controls.Add(l);
        l.Text = "Joe's label!";
    }
}

Hmmm. When do dynamically created controls begin tracking View State? You can create and add dynamically created controls to your controls collection at almost any time during the page lifecycle, but ASP.NET uses the OnInit phase to start View State tracking. Won't our dynamic label miss out on that event? No. The trick is, Controls.Add() isn't just a simple collection add request. It does much more.
As soon as a dynamic control is added to the control collection of a control that is rooted in the page (if you follow its parent controls you eventually get to the page), ASP.NET plays "catch up" with the event sequence in that control and any controls it contains. So let's say you add a control dynamically in the OnPreRender event (although there are plenty of reasons why you would not want to do that). At that point, your OnInit, LoadViewState, LoadPostBackData, and OnLoad events have transpired. The second the control enters your control collection, all of these events happen within the control. That means, my friends, the dynamic control is tracking View State immediately after you add it. Besides your constructor, the earliest you can add dynamic controls is in OnInit, where child controls are already tracking ViewState. In Joe's control, he's adding them in the CreateChildControls() method, which ASP.NET calls whenever it needs to make sure child controls exist (when it is called can vary based on whether you are an INamingContainer, whether it is a postback, and whether anything else calls EnsureChildControls()). The latest this can happen is OnPreRender, but if it happens any time after or during OnInit, you will be dirtying ViewState again, Joe. The solution is simple but easy to miss:

public class JoesCustomControl : Control
{
    protected override void CreateChildControls()
    {
        Label l = new Label();
        l.Text = "Joe's label!";
        this.Controls.Add(l);
    }
}

Subtle. Instead of initializing the label's text after adding it to the control collection, Joe initializes it before it is added. This ensures without a doubt that the Label is not tracking ViewState when it is initialized. Actually, you can use this trick to do more than just initialize simple properties. You can databind controls even before they are part of the control tree. Remember our US state dropdown list example?
If we can create that dropdown list dynamically, we can solve that problem without even disabling its ViewState:
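Following the pattern from Joe's control above, such a dynamically created dropdown might look something like this (a sketch of mine, not the author's original code; the class name and the QueryDatabase helper are illustrative, echoing the earlier FAQ):

```csharp
// Sketch: create the DropDownList dynamically and databind it BEFORE
// adding it to the control tree, so the bound items are never tracked
// in ViewState, while postback selection still works.
public class StatesControl : Control, INamingContainer
{
    private DropDownList lstStates;

    protected override void CreateChildControls()
    {
        lstStates = new DropDownList();
        lstStates.ID = "lstStates";

        // Not tracking ViewState yet: these items stay out of the hidden field.
        lstStates.DataSource = QueryDatabase(); // as in the earlier FAQ
        lstStates.DataBind();

        // Tracking begins once the control enters the rooted control tree,
        // after the data is already in place.
        this.Controls.Add(lstStates);
    }
}
```

The design point is the order of the last three statements: databinding happens while the control is still outside the tree, so none of the bound items are marked dirty.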
http://www.codeproject.com/Articles/17705/Common-Misconceptions-about-ASP-NET-ViewState
Putting XML in LDAP with LDAPHttp

Parsing XML can be an expensive task. In scanning a file like an RSS feed over the network to obtain only a small subset of the information therein, we would prefer to read once as a stream and retain only what we need. Hence, SAX is the right tool for the job. To do the work, we will nest a subclass of a SAX DefaultHandler inside of our rep class. This handler will tap the RDF of RSS v1.0, the rdf:about attribute for item elements in particular, to identify the precise post to which Charlie wants to respond. In other words, Charlie submits an article ID, a rep class method converts this to an appropriate substring, the substring is passed to the handler, and the handler looks for the matching item as it parses.

public void startElement(String namespaceURI, String localName,
                         String qName, Attributes atts)
        throws SAXException {
    // store the current element name
    this.current_element = localName;
    // indicate if within a relevant parent tag
    if (localName.equalsIgnoreCase("item")) {
        if (atts.getValue("rdf:about").indexOf(this.item_substring) > 0) {
            this.item_found = true;
            this.in_item = true;
        }
    }
}

Once inside the appropriate item, the handler will fill buffers with the bits we want.
public void characters(char[] ch, int start, int length)
        throws SAXException {
    // store information from the relevant item
    if (in_item) {
        if (this.current_element.equalsIgnoreCase("title")) {
            this.title_buffer.append(new String(ch, start, length));
        } else if (this.current_element.equalsIgnoreCase("link")) {
            this.link_buffer.append(new String(ch, start, length));
        } else if (this.current_element.equalsIgnoreCase("description")) {
            this.description_buffer.append(new String(ch, start, length));
        } else if (this.current_element.equalsIgnoreCase("creator")) {
            this.creator_buffer.append(new String(ch, start, length));
        }
    }
}

The values are then available with standard get methods that convert the buffers to trimmed strings and return them individually or as a composite. Another method indicates whether the specific item was in fact found in the feed by returning item_found. The class is rounded out with the usual error methods. A lot of work for a little data? Perhaps, but SAX doesn't get much simpler and we will only need one handler.

The code for the gateway create servlet primarily constructs a new entry in memory from attribute values submitted with the request, then attempts to add it to the directory. Between these two steps is a call to the object's preCreate() method, which does nothing by default. We override this method in our rep class to perform the RSS parsing and to populate additional attributes with derived values.
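To see the handler pattern in action outside of LDAPHttp, here is a self-contained sketch of mine (not from the article): it inlines a tiny RSS 1.0 document, matches on the rdf:about substring as above, and collects just the title of the matching item.

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

class RssItemDemo {
    // Minimal handler in the spirit of the article's RSS1Handler
    // (field and class names here are illustrative).
    static class Handler extends DefaultHandler {
        private final String itemSubstring;
        private boolean inItem = false;
        private String currentElement = "";
        boolean itemFound = false;
        final StringBuilder title = new StringBuilder();

        Handler(String itemSubstring) { this.itemSubstring = itemSubstring; }

        @Override
        public void startElement(String uri, String localName,
                                 String qName, Attributes atts) {
            currentElement = qName;  // non-namespace-aware parse: match qNames
            if (qName.equalsIgnoreCase("item")) {
                String about = atts.getValue("rdf:about");
                if (about != null && about.indexOf(itemSubstring) > 0) {
                    itemFound = true;
                    inItem = true;
                }
            }
        }

        @Override
        public void endElement(String uri, String localName, String qName) {
            if (qName.equalsIgnoreCase("item")) inItem = false;
            currentElement = "";
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            // buffer only the bits we want, only inside the matching item
            if (inItem && currentElement.equalsIgnoreCase("title")) {
                title.append(ch, start, length);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String rss =
            "<rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'"
          + " xmlns='http://purl.org/rss/1.0/'>"
          + "<item rdf:about='http://example.com/post/42'>"
          + "<title>Matching post</title></item>"
          + "<item rdf:about='http://example.com/post/43'>"
          + "<title>Other post</title></item>"
          + "</rdf:RDF>";

        Handler h = new Handler("/post/42");
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new InputSource(new StringReader(rss)), h);

        if (!h.itemFound) throw new AssertionError("item not found");
        if (!h.title.toString().trim().equals("Matching post"))
            throw new AssertionError("wrong title: " + h.title);
        System.out.println("OK");
    }
}
```

Only the matching item's title is buffered; the second item streams past untouched, which is the whole point of doing this with SAX.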
public void preCreate() throws LDAPHttpException {
    // perform standard comment precreation
    super.preCreate();

    // retrieve and parse the RSS feed
    String uri = getFeed();
    RSS1Handler rss_handler = new RSS1Handler();
    rss_handler.setItemSubstring(getItemSubstring());
    try {
        XMLReader reader = XMLReaderFactory.createXMLReader(PARSER_CLASS);
        reader.setContentHandler(rss_handler);
        reader.setErrorHandler(rss_handler);
        InputSource input_source = new InputSource(uri);
        reader.parse(input_source);
    // handle feed retrieval exceptions
    } catch (IOException e) {
        throw new LDAPHttpException("Unable to fetch the RSS feed: "
            + e.getMessage());
    // handle feed parsing exceptions
    } catch (SAXException e) {
        throw new LDAPHttpException("Unable to parse the RSS feed: "
            + e.getMessage());
    }

    // update attribute values with those found in the feed
    if (rss_handler.itemFound()) {
        resetAttribute("cn", new String[] { rss_handler.getTitle() });
        resetAttribute("description",
            new String[] { rss_handler.getDescription() });
        resetAttribute("link", new String[] { rss_handler.getLink() });
    // handle the case where the item wasn't found in the feed
    } else {
        throw new LDAPHttpException("An item matching <i>"
            + getItemSubstring() + "</i> was not found in the current feed: "
            + getFeed());
    }
}

The feed URI is defined in the constructor for the particular subclass, along with some information used in retrieval. For instance:

public class registerrep extends rep {
    public registerrep() throws LDAPHttpException {
        setCategory("register");
        setLabel("The Register news reply");
        setFeed("");
    }
}

Looking back at the DIT diagram and the Rep LDIF, you are probably still wondering about c=CRAcounter and uid=CRA42. While most directory servers manage a few attributes at the system level for things like timestamps, there is nothing in LDAP that will auto-increment or define a primary key for your entries when you add them; the server expects this information from you at creation time.
As I've learned, the best practice for solving this problem is to store and manage an incrementing identifier value in the directory itself. Because it's small, standard, and doesn't appear in this picture, we'll hijack the country object class for our counter entry.

dn: c=CRAcounter, ou=Reps, ou=Comments, ou=Expressions, o=mentata.com
objectclass: top
objectclass: country
c: CRAcounter
description: 0

With this loaded, LDAPHttp and the create servlet will automatically grab and set a unique uid value for each new post, per this line in the rep constructor:

setIncremental("CRA", "c=CRAcounter, ou=Reps, ou=Comments, "
    + "ou=Expressions, o=mentata.com", "description");

And "Bob's your uncle," as they say in Australia. I'm sure to some this may look like a profound waste of time. I've provided motivation for LDAP and LDAPHttp elsewhere, but this example raises the question: why would you take data from one available format and store it in another for use in closed little web applications? XML is an excellent way to express simple or complex textual information openly, and is every bit the de facto standard for general data representation that I predicted last year it would continue to become. On the other hand, what do we always say about silver bullets? Although there are mechanisms for indexing XML content, if you want to search your data by fields or dynamically re-express it, odds are you want it in a database of some sort. Relational database systems are adequately powerful, but can be overkill for some needs, as they require lots of administration and a potentially stilted process of partitioning data into two-dimensional views. Directory databases are simpler, plus they excel at searchability and are well suited to host information that is available live but doesn't change frequently once created.
You can make use of identities and sophisticated schemes for access control without new software, and the results are accessible to any client or API that speaks LDAP. The question of whether to use LDAPHttp and my gateway here may be more to the point. LDAP is a mature standard, so you can bet there are and will be plenty of ways to communicate with directory servers; promising new open source apps are being released with increasing frequency. With LDAPHttp, my own goal has been to deliver a platform that plays to the specific strengths of LDAP, servlets, and HTTP to do useful and interesting things without regard to what those things are. The framework may be non-standard, but it's as extensible as Java itself. LDAPHttp is clearly not a panacea, but it will provide elegant solutions for appropriate problems. Good ideas can organically bubble up from contexts to app libraries or the core packages. Someday, this software may serve as a competitive advantage in some vertical market of my choosing, but for now, I've deferred the question of what in the hope that others can use my work to prototype, demonstrate, and deliver unique services of their own. Think about Charlie. If I had to pick a space today, I'd say I am particularly interested in supporting transactions that involve the exchange of text (e.g., news and weblogs).

Since a Rep is a solicitation, the HTML page returned by the gateway retrieve servlet will include a form for creating a new comment under the ou=Anonymous,ou=Comments,ou=Expressions,o=mentata.com branch of the DIT. Hence, Charlie's earlier Rep could provoke:

dn: uid=CMA217,ou=Anonymous,ou=Comments,ou=Expressions,o=mentata.com
uid: CMA217
cn: India's endorsement?
businesscategory: rep
dnqualifier: 20030622123345Z
description: anonymously
content: I don't know what Bombay developers would think of all this, but you're going to have to do better if you want them to read your posts.
parent: uid=CRA3,ou=Reps,ou=Comments,ou=Expressions,o=mentata.com

That final parent attribute is a special type used by LDAP to relate entries by dn value. Think of it as a pointer or foreign key. One of the features of LDAPHttp is to allow you to trace dn attributes in either direction, providing (among other things) links to comments made on a retrieved Rep. This all works well because a request for an entry by its dn, or a request for entries with values for an indexed dn attribute matching a given distinguished name, will both run like streaks of greased lightning through LDAP. To me, a good candidate application for LDAPHttp should involve lots of dn attributes. So is Charlie's app a good one? It depends on how much dialogue his posts generate! Even so ...

<comment>Much like open source developers, reporters and columnists frequently exchange and co-opt the ideas of their fellows without much concern for abstract notions of property. In fact, James Joyce goes so far as to make incest the central metaphor for journalism in the Aeolus episode of Ulysses. On the Internet today, with all of that fast and easy communication facilitated between people worldwide in real time, commentary as a profession may soon be overwhelmed by commentary as a diversion. RSS in its many flavors makes it a snap to generate your own syndicated feed, blasting your observations and perspective far and wide. Along for the ride are the expressions of others as they have influenced you. This is all a boon for an open, democratic society, but the real value is not in the posting, but in the responding. The best place for that is in the original conversation. Sorry Charlie, but I don't think anybody should be an island.</comment>

That doesn't mean it was such a bad example for this article; we covered a lot. And with this journey at an end, I will employ yet another class from my forum package to start new conversations, asking my perennial favorite question: suggestions?
Jon Roberts is an independent software developer and sole proprietor of Mentata.
http://archive.oreilly.com/pub/a/onjava/2003/07/16/ldaphttp.html?page=2
vtkVertexDegree

Adds an attribute array with the degree of each vertex.

#include <vtkVertexDegree.h>

By default the name of the array will be "VertexDegree", but that can be changed by calling SetOutputArrayName("foo"). Definition at line 42 of file vtkVertexDegree.h.

IsA: Return 1 if this class is the same type of (or a subclass of) the named class. Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h. Reimplemented from vtkGraphAlgorithm.

SetOutputArrayName: Set the output array name. If no output array name is set then the name 'VertexDegree' is used.

RequestData: This is called by the superclass. This is the method you should override. Reimplemented from vtkGraphAlgorithm.
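A short usage sketch (mine, not from the reference page, assuming a standard VTK build) showing the filter on a small graph:

```cpp
#include <vtkDataSetAttributes.h>
#include <vtkIntArray.h>
#include <vtkMutableUndirectedGraph.h>
#include <vtkNew.h>
#include <vtkVertexDegree.h>
#include <iostream>

int main() {
    // Build a triangle: every vertex has degree 2.
    vtkNew<vtkMutableUndirectedGraph> g;
    vtkIdType a = g->AddVertex(), b = g->AddVertex(), c = g->AddVertex();
    g->AddEdge(a, b);
    g->AddEdge(b, c);
    g->AddEdge(c, a);

    vtkNew<vtkVertexDegree> degree;
    degree->SetInputData(g);
    // degree->SetOutputArrayName("foo");  // optional rename, as documented
    degree->Update();

    // The filter attaches the array to the output graph's vertex data.
    auto* arr = vtkIntArray::SafeDownCast(
        degree->GetOutput()->GetVertexData()->GetArray("VertexDegree"));
    if (!arr) return 1;
    for (vtkIdType i = 0; i < arr->GetNumberOfTuples(); ++i)
        std::cout << arr->GetValue(i) << " ";
    std::cout << "\n";
    return 0;
}
```

For the triangle above, each printed degree should be 2.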
https://vtk.org/doc/nightly/html/classvtkVertexDegree.html
Introduction:

The header is not only for showing titles. It may also include buttons. Normally, user-action-related buttons are added to the right of the title, and the back button is added to the left. The button on the left side, i.e. the back button, is added automatically if we push a new screen onto the navigator: it pushes the screen and adds the button. The default back button is platform-specific. On iOS, it adds a label that shows the previous screen's title, or says "Back". But we can customize this button and override its back press as well.

Example to add one right button:

Let's learn how we can add one button to the right of the header. To do that, we can pass a function for the headerRight option of Stack.Screen that returns a text or image button. For example:

<Stack.Screen
  name="Home"
  component={HomeScreen}
  options={{
    headerRight: () => (
      <Button onPress={() => alert('Button clicked')} title="Info" />
    ),
  }}
/>

It will give an output like below. If you click on this button, it will show an alert. But the problem is that we don't have access to the HomeScreen state. In most cases, we will need to change the state on a button click.

Changing state on header button click:

To change the state on the header button click, we can use navigation.setOptions inside a Screen. For example:

import {Text, View, Button} from 'react-native';
import React, {useState} from 'react';

export default function HomeScreen({navigation}) {
  const [message, setMessage] = useState('Default message');

  React.useLayoutEffect(() => {
    navigation.setOptions({
      headerRight: () => (
        <Button onPress={() => setMessage('Button clicked')} title="Update" />
      ),
    });
  }, [navigation, setMessage]);

  return (
    <View
      style={{
        flex: 1,
        display: 'flex',
        flexDirection: 'column',
        alignItems: 'center',
        justifyContent: 'center',
      }}>
      <View style={{marginTop: 35}}>
        <Text style={{color: 'black'}}>{message}</Text>
      </View>
    </View>
  );
}

Here, we have added the button inside the Screen component. It changes the message on click. If you have only one action to put in the header, you can add that action as a menu button.
But if you have multiple actions, you can't put them all in the header. For that, you can put one menu button, and on click you can show a popover menu with all the other options.

Conclusion:

In this tutorial, we learned how to add a button to the navigation bar in react-navigation. Try to go through the example and drop a comment below if you have any queries.
https://www.codevscolor.com/react-navigation-header-buttons
Created on 2012-07-01.10:30:37 by wbrana, last changed 2016-01-26.21:20:58 by zyasoft.

Jython 2.7.0a2+ (default:e4afcd777d1b+, Jul 1 2012, 12:22:26)
[Java HotSpot(TM) 64-Bit Server VM (Sun Microsystems Inc.)] on java1.6.0_33
Type "help", "copyright", "credits" or "license" for more information.
>>> import fcntl
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named fcntl

inst@local /mnt/md3/cache/inst/jython2 $ python
Python 2.7.3 (default, May 5 2012, 10:54:18)
[GCC 4.4.7] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import fcntl
>>> dir(fcntl)
['DN_ACCESS', 'DN_ATTRIB', 'DN_CREATE', 'DN_DELETE', 'DN_MODIFY', 'DN_MULTISHOT', 'DN_RENAME', 'FASYNC', 'FD_CLOEXEC', 'F_DUPFD', 'F_EXLCK', 'F_GETFD', 'F_GETFL', 'F_GETLEASE', 'F_GETLK', 'F_GETLK64', 'F_GETOWN', 'F_GETSIG', 'F_NOTIFY', 'F_RDLCK', 'F_SETFD', 'F_SETFL', 'F_SETLEASE', 'F_SETLK', 'F_SETLK64', 'F_SETLKW', 'F_SETLKW64', 'F_SETOWN', 'F_SETSIG', 'F_SHLCK', 'F_UNLCK', 'F_WRLCK', 'I_ATMARK', 'I_CANPUT', 'I_CKBAND', 'I_FDINSERT', 'I_FIND', 'I_FLUSH', 'I_FLUSHBAND', 'I_GETBAND', 'I_GETCLTIME', 'I_GETSIG', 'I_GRDOPT', 'I_GWROPT', 'I_LINK', 'I_LIST', 'I_LOOK', 'I_NREAD', 'I_PEEK', 'I_PLINK', 'I_POP', 'I_PUNLINK', 'I_PUSH', 'I_RECVFD', 'I_SENDFD', 'I_SETCLTIME', 'I_SETSIG', 'I_SRDOPT', 'I_STR', 'I_SWROPT', 'I_UNLINK', 'LOCK_EX', 'LOCK_MAND', 'LOCK_NB', 'LOCK_READ', 'LOCK_RW', 'LOCK_SH', 'LOCK_UN', 'LOCK_WRITE', '__doc__', '__file__', '__name__', '__package__', 'fcntl', 'flock', 'ioctl', 'lockf']

I don't think this one makes sense for us, closing.

Let's reopen this. It's still a low priority, but it should be eminently doable for 2.7.2, given that JNR already supports this functionality. Through the ugliness that is PosixModule#getFD, we already support int file descriptors where possible (at least for files that are not sockets), so this would be a straightforward addition.
A real test of how far we can push fcntl will be working with os.pipe descriptors, as seen with (FWIW, this is motivated by attempting to run the Tornado test suite, which is further motivated by the Tornado benchmark that Jython does not yet run; see Brett Cannon's notebook on Python performance).

Also used in the package youtube-dl for file locking support:

import fcntl

def _lock_file(f, exclusive):
    fcntl.flock(f, fcntl.LOCK_EX if exclusive else fcntl.LOCK_SH)

def _unlock_file(f):
    fcntl.flock(f, fcntl.LOCK_UN)

This specific functionality is implementable with, which is likely to be more portable than JNR for this particular support (based on previous experience - use JNR only when nothing else is available in Java).
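As a quick sanity check of the flock semantics youtube-dl relies on (my own snippet, not from the thread): on POSIX systems, two independent opens of the same file contend for the lock, so a second exclusive, non-blocking flock fails while the first is held.

```python
# Demo of flock contention between two open file objects (POSIX-only).
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

f1 = open(path, "w")
fcntl.flock(f1, fcntl.LOCK_EX | fcntl.LOCK_NB)  # exclusive, non-blocking

f2 = open(path, "w")  # independent open file description
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    blocked = False            # lock unexpectedly granted
except BlockingIOError:
    blocked = True             # EWOULDBLOCK: the first lock is held

fcntl.flock(f1, fcntl.LOCK_UN)  # LOCK_UN releases the first lock
f1.close()
f2.close()
os.remove(path)
print(blocked)
```

Because flock locks belong to the open file description rather than the process, this conflict is visible even within a single process, which makes it a convenient self-test.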
http://bugs.jython.org/issue1943
The C++ Programming Language, 4th edition, page 225 reads:

A compiler may reorder code to improve performance as long as the result is identical to that of the simple order of execution.

Some compilers, e.g. Visual C++ in release mode, will reorder this code:

#include <time.h>
...
auto t0 = clock();
auto r = veryLongComputation();
auto t1 = clock();
std::cout << r << " time: " << t1 - t0 << endl;

into this form:

auto t0 = clock();
auto t1 = clock();
auto r = veryLongComputation();
std::cout << r << " time: " << t1 - t0 << endl;

which guarantees a different result than the original code (zero vs. greater-than-zero time reported). See my other question for a detailed example. Is this behavior compliant with the C++ standard?

Answer: The compiler cannot exchange the two clock calls: t1 must be set after t0, since both calls are observable side effects. The compiler may reorder anything between those observable effects, and even across an observable side effect, as long as the observations are consistent with possible observations of an abstract machine. Since the C++ abstract machine is not formally restricted to finite speeds, it could execute veryLongComputation() in zero time. Execution time itself is not defined as an observable effect. Real implementations may match that.

Answer: Mind you, a lot of this answer depends on the C++ standard not imposing restrictions on compilers. There is subclause 5.1.2.3 of the C standard [ISO/IEC 9899:2011], which states the "as-if" rule: a conforming implementation need only reproduce the observable behavior of the abstract machine. Therefore I really suspect that this behaviour, the one you described, is compliant with the standard. Furthermore, the reorganization indeed has an impact on the computation result, but if you look at it from the compiler's perspective, it lives in the int main() world, and when doing time measurements it peeps out, asks the kernel for the current time, and goes back into the main world, where the actual time of the outside world doesn't really matter.
The clock() itself won’t affect the program and variables and program behaviour won’t affect that clock() function. The clocks values are used to calculate difference between them – that is what you asked for. If there is something going on, between the two measuring, is not relevant from compilers perspective since what you asked for was clock difference and the code between the measuring won’t affect the measuring as a process. This however doesn’t change the fact that the described behaviour is very unpleasant. Even though inaccurate measurements are unpleasant, it could get much more worse and even dangerous. Consider the following code taken from this site: void GetData(char *MFAddr) { char pwd[64]; if (GetPasswordFromUser(pwd, sizeof(pwd))) { if (ConnectToMainframe(MFAddr, pwd)) { // Interaction with mainframe } } memset(pwd, 0, sizeof(pwd)); } When compiled normally, everything is OK, but if optimizations are applied, the memset call will be optimized out which may result in a serious security flaw. Why does it get optimized out? It is very simple; the compiler again thinks in its main() world and considers the memset to be a dead store since the variable pwd is not used afterwards and won’t affect the program itself. Yes, it is legal – if the compiler can see the entirety of the code that occurs between the clock() calls. If veryLongComputation() internally performs any opaque function call, then no, because the compiler cannot guarantee that its side effects would be interchangeable with those of clock(). Otherwise, yes, it is interchangeable. This is the price you pay for using a language in which time isn’t a first-class entity. Note that memory allocation (such as new) can fall in this category, as allocation function can be defined in a different translation unit and not compiled until the current translation unit is already compiled. 
So, if you merely allocate memory, the compiler is forced to treat the allocation and deallocation as worst-case barriers for everything (clock(), memory barriers, and everything else) unless it already has the code for the memory allocator and can prove that this is not necessary. In practice I don't think any compiler actually looks at the allocator code to try to prove this, so these types of function calls serve as barriers in practice.

At least by my reading, no, this is not allowed. The requirement from the standard is (§1.9/14):

    Every value computation and side effect associated with a full-expression is sequenced before every value computation and side effect associated with the next full-expression to be evaluated.

The degree to which the compiler is free to reorder beyond that is defined by the "as-if" rule (§1.9/1):

    This International Standard places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.

That leaves the question of whether the behavior in question (the output written by cout) is officially observable behavior. The short answer is that yes, it is (§1.9/8):

    The least requirements on a conforming implementation are: [...] — At program termination, all data written into files shall be identical to one of the possible results that execution of the program according to the abstract semantics would have produced.

At least as I read it, that means the calls to clock could be rearranged compared to the execution of your long computation if and only if doing so still produced identical output to executing the calls in order.
If, however, you wanted to take extra steps to ensure correct behavior, you could take advantage of one other provision (also §1.9/8):

    — Access to volatile objects are evaluated strictly according to the rules of the abstract machine.

To take advantage of this, you'd modify your code slightly to become something like:

auto volatile t0 = clock();
auto volatile r = veryLongComputation();
auto volatile t1 = clock();

Now, instead of having to base the conclusion on three separate sections of the standard, and still having only a fairly certain answer, we can look at exactly one sentence and have an absolutely certain answer: with this code, re-ordering uses of clock vs. the long computation is clearly prohibited.

Let's suppose that the sequence is in a loop, and veryLongComputation() randomly throws an exception. Then how many t0s and t1s will be calculated? Does the compiler pre-calculate the random variables and reorder based on the precalculation, sometimes reordering and sometimes not? Is the compiler smart enough to know that a plain memory read may be a read from shared memory? Suppose the read is a measure of how far the control rods have moved in a nuclear reactor, and the clock calls are used to control the speed at which they are moved. Or maybe the timing is controlling the grinding of a Hubble telescope mirror. Moving clock calls around seems too dangerous to leave to the decisions of compiler writers. So if it is legal, perhaps the standard is flawed. IMO.

It is certainly not allowed, since it changes, as you have noted, the observable behavior (different output) of the program. (I won't go into the hypothetical case that veryLongComputation() might not consume any measurable time; given the function's name, that is presumably not the case. But even if it were, it wouldn't really matter.) You wouldn't expect that it is allowable to reorder fopen and fwrite, would you? Both t0 and t1 are used in outputting t1-t0.
Therefore, the initializer expressions for both t0 and t1 must be executed, and doing so must follow all the standard's rules. The result of the function is used, so the function call cannot be optimized out. It doesn't directly depend on t1, nor vice versa, so one might naively be inclined to think it's legal to move it around; why not move it after the initialization of t1, which doesn't depend on the calculation? Indirectly, however, the result of t1 does of course depend on side effects of veryLongComputation() (notably the computation taking time, if nothing else), which is exactly one of the reasons there exists such a thing as a "sequence point".

There are three "end of expression" sequence points here (plus three "end of function" and "end of initializer" SPs), and at every sequence point it is guaranteed that all side effects of previous evaluations will have been performed, and no side effects from subsequent evaluations have yet been performed. There is no way to keep this promise if you move the three statements around, since the possible side effects of all functions called are not known. The compiler is only allowed to optimize if it can guarantee that it will keep the promise, and it can't, since the library functions are opaque; their code isn't available (nor is the code of veryLongComputation necessarily known in that translation unit).

Compilers do sometimes have "special knowledge" about library functions, such as that some functions will not return or may return twice (think exit or setjmp). However, since every non-empty, non-trivial function (and veryLongComputation is quite non-trivial, judging from its name) will consume time, a compiler having "special knowledge" about the otherwise opaque clock library function would in fact have to be explicitly disallowed from reordering calls around this one, knowing that doing so not only may, but will, affect the results.
Now the interesting question is: why does the compiler do this anyway? I can think of two possibilities. Maybe your code triggers a "looks like a benchmark" heuristic and the compiler is trying to cheat; who knows. It wouldn't be the first time (think of SPEC2000/179.art, or SunSpider, for two historic examples). The other possibility would be that somewhere inside veryLongComputation() you inadvertently invoke undefined behavior. In that case, the compiler's behavior would even be legal.
https://exceptionshub.com/is-it-legal-for-a-c-optimizer-to-reorder-calls-to-clock.html
Miles Bader <address@hidden> writes:
> Kenichi Handa <address@hidden> writes:
>> So, if I make a one-line window just above the mode-line, it gets
>> 6 dots shorter than normal lines.
>>
>> This happens always when a user activates a complex input
>> method (e.g. Japanese and Chinese) in the minibuffer. See
>> the attached screen shot taken when I activated chinese-py
>> in the minibuffer and typed `n'.
>>
>> Is it possible to make a one-line window at least have the normal
>> line height?
> I *think* I've fixed this.
> [It was even worse in my case, because with my fonts, the chinese
> characters don't fit into the minibuffer window, which made the
> minibuffer expand to block the guidance window even when it was
> initially the right size. But it seems to work now.]

Thank you. I confirmed that it is fixed, but not completely. For instance, if my Chinese font is taller than the normal line height and I'm using an Emacs 20 style mode-line, the guidance buffer is still only partially visible, because the window is not enlarged by set-window-text-height.

I think we anyway need a C function, something like window-buffer-fully-visible-p. Gerd, how difficult would it be to implement? If it's too difficult for 21.1, we must use Miles' solution for the moment. But, for instance, I found that this C function is enough for Quail guidance because, in that case, we need just a one-line fully visible window. Is it doing the right thing?

DEFUN ("window-cursor-line-fully-visible-p",
       Fwindow_cursor_line_fully_visible_p,
       Swindow_cursor_line_fully_visible_p, 0, 1, 0,
  "Non-nil if the cursor line of WINDOW is fully visible.")
  (window)
     Lisp_Object window;
{
  struct window *w = decode_window (window);
  struct glyph_matrix *matrix;
  struct glyph_row *row;

  redisplay ();
  if (w->cursor.vpos < 0)
    return Qnil;
  matrix = w->desired_matrix;
  row = MATRIX_ROW (matrix, w->cursor.vpos);
  return (MATRIX_ROW_PARTIALLY_VISIBLE_P (row) ?
          Qnil : Qt);
}

Of course, it's not enough for ispell, because it may require more than one line. For that, we surely need window-buffer-fully-visible-p.

---
Ken'ichi HANDA
address@hidden
https://lists.gnu.org/archive/html/emacs-devel/2000-10/msg00170.html
Hello guys.......... I've been programming for about a week now and I need some help.

Code:
#include <iostream.h>

int main()
{
    char state[10];
    char address[10];
    char name[10];
    int age;
    char choice[4];

    do {
        cout << "Please enter in the correct information\n\n\n";
        cout << "State: ";
        cin >> state;
        cout << "Address: ";
        cin >> address;
        cout << "Name: ";
        cin >> name;
        cout << "Age: ";
        cin >> age;
        cout << "\n\n";
        cout << "State: " << state << endl;
        cout << "Address: " << address << endl;
        cout << "Name: " << name << endl;
        cout << "Age: " << age << endl;
        cout << "Is this information correct? Yes or No: ";
        cin >> choice;
    } while(choice == "yes");

    return 0;
}

Looks correct, doesn't it? But if you compile it with Microsoft Visual C++, it lets you enter the information for State and Address but not for Name and Age. Anyone know the problem? Also, I want to loop until the user types in yes. Can anyone help me with that as well? Thanks in advance. -Jeremy
http://cboard.cprogramming.com/cplusplus-programming/23267-help-newbie-please.html
Vi is a keyboard-driven editor where the standard mode is to give commands, and things such as editing text are the results of those commands. The "insert" command puts the editor in a mode that resembles the normal input mode of a typical text editor. I use vi on a regular basis, but I still only know five or six commands. So I decided to give vi's "improved sibling," Vim, a deeper look and investigate it for my code editing needs.

Four reasons why I tried Vim
- A number of people whom I respect put a lot of stock into Vim as a code editor.
- I find myself working on *Nix systems from time to time, where my normal apps are not available. Why force myself to keep going back to my desktop to make a change and then re-upload it when I could make the change directly on the system?
- I've been exploring outside of the Microsoft ecosystem a lot lately. For instance, I recently got around to shifting all of my personal development from TFS to Mercurial, shutting down my TFS VM, and giving that RAM to another, more useful VM (I see a Kiln + FogBugz test in my future).
- My recent experiments with IronRuby have left me less than thrilled with the Visual Studio experience, and I was curious to find out if Vim might be a better choice.

Installing Vim
I downloaded and installed the gVim package from the Vim website. The install package was small, and the install went quickly. It also offered to install a Visual Studio plugin, but I have not been able to see it in Visual Studio 2008 or 2010 since the installation, and the documentation has not been very clear on that topic. (Vim is free and open source, though the authors encourage you to make a donation to charity, which I chose to do.)

Learning about Vim
Vim is a very feature-rich application; it's a text editor with enough meat to justify some books about it.
In many ways, Vim reminds me of WordPerfect 5.1: it's an extremely powerful, keyboard-driven editor that requires a fair amount of time and energy to leverage, but the effort is well worth it. After learning a lot more about Vim by going through the tutorial (which walks you through the most common commands), I can see why folks who use the editor on a regular basis swear by it. Vim is extremely powerful, if you take the time to learn it. Little details, such as knowing that you can apply commands to individual words or lines or multiples of such with a simple twist, go a long way toward speeding things up. Using Vim is different enough from other editing systems that you need to approach it with a different mindset. Until you really internalize the Vim way of doing things, you won't be as effective as possible. Vim appeals to people who figure out how to "min/max" every aspect of their day, but if that's not your style, you won't get much out of the editor.

Reflecting on Vim's use for my work
I feel that Vim is of limited value for my particular workload. Vim would be an awesome tool if I were to return to the Perl days of my youth or shift my development work to Ruby (Figure A is a screenshot of gVim editing Ruby code).

Figure A: Editing a simple Ruby program in gVim

Hand editing HTML and CSS would also go really well in Vim, especially with heavy use of its scripting system. In those languages, you generate a lot of text and don't need nearly as many libraries to get stuff done, depending upon your project. In my Perl projects, the overwhelming majority of my work stuck to a handful of expressions. But in the C#/.NET world that makes up almost all of my work, IntelliSense is a mandatory item to explore the things you don't know (Visual Studio's awful new Help system makes it even more necessary).
Losing IntelliSense is a huge hit to my productivity, simply because it's so difficult to get from Point A to Point B in the .NET world without encountering at least some namespaces you don't know intimately. Even worse, in C#/.NET, so much of the work is spent doing things that could be considered refactorings that it's best to use a tool that's really designed for the task at hand.

Conclusion
I'm definitely leaving Vim on my system, and the next time I work in Ruby, I plan on using it in lieu of Visual Studio. As you can see in the screenshot, Vim has no problem working with Ruby and giving it the syntax highlighting treatment. At the very least, this experiment reminded me of something that continues to frustrate me with C#/.NET: I feel like I write very little code and spend most of my development time trying to push buttons on existing libraries or get the tools to generate 30 lines of skeleton code to place my two lines of real code inside.

J.Ja

Justin James is the Lead Architect for Conigent.
https://www.techrepublic.com/blog/software-engineer/checking-out-the-vim-code-editor/
Overview
The latest release can be downloaded from the Downloads section or via NuGet.

Introduction
This is the official repository for AssimpNet, the cross-platform .NET wrapper for the Open Asset Import Library (otherwise known as Assimp - it's German). This wrapper leverages P/Invoke to communicate with the native library's C-API. Since the managed assembly is compiled as AnyCpu and the native DLLs are loaded dynamically, the library fully supports usage with 32 and 64 bit applications without needing to be recompiled. The library is split between two parts, a low level and a high level. The intent is to give as much freedom as possible to the developer to work with the native library from managed code.

Low level
- Native methods are exposed via the AssimpLibrary singleton.
- Structures corresponding to unmanaged structures are prefixed with the name Ai and generally contain IntPtrs to the unmanaged data.
- Located in the Assimp.Unmanaged namespace.

High level
- Replicates the native library's C++ API, but in a way that is more familiar to C# developers.
- Marshaling to and from managed memory handled automatically, all you need to worry about is processing your data.
- Located in the Assimp namespace.

Supported Platforms
The NuGet package is the only official release of the binaries and currently it only supports Windows. Both 32 and 64 bit are supported with a managed DLL compiled for .NET Framework 2.0 and 4.5. The current release (3.3.2) targets the native Assimp 3.1.1 release. The library supports other platforms unofficially. It has a Linux and Mac implementation to load and communicate with the native library for those platforms, but you have to provide the native binary yourself. Users have also successfully used the library with Unity3D and Android, but your mileage may vary.
The library is compiled using Visual Studio 2015 and at runtime has no external dependencies other than the native library. However, there is a compile-time dependency on Mono.Cecil. If you compile without using the VS projects/MSBuild environment, the only special instruction is that you need to ensure that the interop generator patches AssimpNet.dll in a post-build process; otherwise the library won't function correctly. This is because Mono.Cecil is used to inject IL into the assembly to make interop with the native library more efficient.

Licensing
The library is licensed under the MIT license. This means you're free to modify the source and use the library in whatever way you want, as long as you attribute the original authors. The native library is licensed under the 3-Clause BSD license. Please be kind enough to include the licensing text file (it contains both licenses).

Follow project updates and more on Twitter. In addition, check out these other projects from the same author:

TeximpNet - A wrapper for the Nvidia Texture Tools and FreeImage libraries, which is a sister library to this one.

Tesla Graphics Engine - A 3D rendering engine written in C# and the primary driver for developing AssimpNet.
https://bitbucket.org/Starnick/assimpnet
hi rob,

the default configuration of the import/export catalogs is set up to provide a more or less 'normal' fileserver: MKCOL creates folders, PUT creates files. it is completely up to you to alter the command chains for your purpose.

cheers, tobi

On 7/20/05, Rob Owen <Rob.Owen@sas.com> wrote:
> To make the WebDAV view (/repository or SimpleWebdavServlet) in the jcr-server webapp more general, it seems that PROPFIND should return all (or at least all user-defined) properties found on the node. At the moment it only returns pseudo-properties (essential WebDAV properties) that are created from node metadata. If there are user-defined properties on these nodes then they should be returned in a PROPFIND request. I'm not sure how these would be identified, but it might be something like any properties not defined in some namespaces (eg. nt, jcr, rep). At the moment, these properties will not exist as they cannot be set on the nodetypes created using this servlet (see below).
>
> In order to support PROPPATCH, DAVResourceImpl was modified, but I discovered it creates nt:folder nodes for WebDAV collections and nt:file nodes for WebDAV non-collection resources. These nodetypes do not allow additional properties to be set, and so this in itself prevents PROPPATCH from working. Would the recommended route be to change the nodetypes created (eg. to nt:unstructured or something else) in order to be able to set arbitrary properties on the JCR nodes and thereby provide the ability to fully support WebDAV's PROPPATCH?
>
> Full support for PROPFIND/PROPPATCH is required for some existing WebDAV applications to use jackrabbit as the content repository. Any suggestions on the best route would be much appreciated.
>
> Thanks,
> Rob.
>

--
-----------------------------------------< tobias.bocanegra@day.com >---
Tobias Bocanegra, Day Management AG, Barfuesserplatz 6, CH - 4001 Basel
T +41 61 226 98 98, F +41 61 226 98 97
-----------------------------------------------< >---
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200507.mbox/%3C8be7318805072008553ee287f3@mail.gmail.com%3E
First-Order Systems: The Happy Family

Happy families are all alike; every unhappy family is unhappy in its own way. — Lev Nicholaevich Tolstoy, Anna Karenina

I was going to write an article about second-order systems, but then realized that it would be premature to do so without starting off on the subject of first-order systems. Warning: this article isn't exciting. Sorry, it is what it is; that's the nature of first-order systems.

I'm sure you've run into first-order systems before. The RC filter is probably the most common one. It has a differential equation \( \frac{dV_{out}}{dt} = \frac{1}{RC}\left(V_{in}-V_{out}\right) \), and a transfer function of \( \frac{V_{out}}{V_{in}} = \frac{1}{RCs+1} \).

Time response

Here's what the unit step response of the RC filter looks like:

import numpy as np
import matplotlib.pyplot as plt

t = np.arange(-0.5,5.5,0.001)
plt.plot(t,t>0)
plt.plot(t,(t>0)*(1-np.exp(-t)))
plt.plot([0,1],[0,1],'-.')
plt.xlim([-0.5,5.5])
plt.ylim([-0.05,1.05])
plt.grid('on')
plt.xlabel(r'$t/\tau$',fontsize=16)
plt.suptitle('Unit step response of an RC filter with time constant $\\tau=RC$', fontsize=12)
plt.legend(['$V_{in}$','$V_{out}$'],'best',fontsize=16);

Some things to note:

- The final value of the step response is 1 (e.g. Vout = Vin)
- The initial value right after the step is 0, so the output waveform is continuous
- The step response is a decaying exponential with time constant τ = RC
- The slope of the output changes instantaneously after the step to a value of 1/τ

And there's not much else to say.
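These properties are easy to spot-check numerically (an illustration, not part of the original article): the step response reaches about 63.2% of its final value at t = τ, and its initial slope is 1/τ.

```python
import numpy as np

# Sample the unit step response 1 - exp(-t/tau) on a fine grid
# and verify the three headline properties.
tau = 2.0
t = np.linspace(0, 10 * tau, 100001)
v = 1 - np.exp(-t / tau)

assert abs(v[-1] - 1) < 1e-4                  # final value -> 1
i_tau = np.searchsorted(t, tau)
assert abs(v[i_tau] - (1 - np.e**-1)) < 1e-3  # ~63.2% at t = tau
slope0 = (v[1] - v[0]) / (t[1] - t[0])
assert abs(slope0 - 1 / tau) < 1e-3           # initial slope = 1/tau
```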
All first-order systems can be modeled in a general way as follows, for input u, output y, and internal state x:

$$ \begin{eqnarray} \frac{dx}{dt} &=& \frac{u-x}{\tau}\cr y &=& ax + bu \end{eqnarray}$$

This produces a system with transfer function \( H(s) = \frac{Y(s)}{U(s)} = b + \frac{a}{\tau s + 1} \), which has a time response after t=0 of \( b+a\left(1-e^{-t/\tau}\right) \) that looks like this:

def plot_firstorder(a=1,b=0,title=None):
    xlim = [-0.5,5.5]
    t = np.arange(xlim[0],xlim[1],0.001)
    plt.plot(t,(t>0)*(a+b-a*np.exp(-t)))
    plt.plot([0,1],[b,a+b],'-.')
    plt.plot(xlim,[a+b,a+b],'--g')
    plt.xlim(xlim)
    yticks = [(0,0)]
    if b != 0:
        yticks += [(b,'b')]
    if a+b != 0:
        yticks += [(a+b,'a+b')]
    yticks = sorted(yticks, key=(lambda x: x[0]))
    (yticks, yticklabels) = zip(*yticks)
    ymin = yticks[0]
    ymax = yticks[-1]
    plt.ylim([ymin-0.05*(ymax-ymin), ymax+0.05*(ymax-ymin)])
    plt.yticks(yticks,yticklabels)
    plt.xlabel(r'$t/\tau$',fontsize=16)
    title = ('Unit step response of a first-order system with time constant $\\tau$'
             if title is None else title)
    plt.suptitle(title, fontsize=12);

plot_firstorder(a=1,b=0.2)

Again, some things to note:

- The final value of the step response is a+b
- The initial value right after the step is b
- The step response is a decaying exponential with time constant τ
- The slope of the output changes instantaneously after the step to a value of a/τ

It's possible for either a or b to be negative, but that's about all that can change here.
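As a numerical sanity check (illustrative, not from the article), we can simulate the general model, using the stable sign convention dx/dt = (u - x)/τ, with a unit step input and confirm that the output jumps to b and settles at a+b:

```python
# Forward-Euler simulation of dx/dt = (u - x)/tau, y = a*x + b*u,
# with a unit step u = 1 applied at t = 0 and x(0) = 0.
a, b, tau, dt = 1.0, 0.2, 1.0, 1e-4
u, x = 1.0, 0.0
y0 = a * x + b * u                    # output just after the step
for _ in range(int(10 * tau / dt)):   # run for 10 time constants
    x += dt * (u - x) / tau
y_final = a * x + b * u

assert abs(y0 - b) < 1e-12            # initial value is b
assert abs(y_final - (a + b)) < 1e-3  # final value settles at a + b
```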
plot_firstorder(a=-0.6,b=1)

If a = -b, then we have a high-pass filter, which returns to a final value of zero:

plot_firstorder(a=-1,b=1)

# skip this Python code if you're not interested
def bodeplot_firstorder(a=1,b=0.0001):
    xlim = [-2,6]
    omega = np.logspace(xlim[0],xlim[1],300)
    lxlim = [omega[0],omega[-1]]
    Hfunc = lambda omega: b+a/(1j*omega+1)
    db = lambda x: 20*np.log(np.abs(x))
    ang = lambda x: np.angle(x,deg=True)
    H = Hfunc(omega)
    H0 = Hfunc(0)
    Hinf = Hfunc(1e9)
    fig = plt.figure(figsize=(8,6))
    ax = fig.add_subplot(2,1,1)
    ax.semilogx(omega,db(H))
    if a*b>=0 or abs(a) > 1.1*abs(b):
        asymptote = a/1j/omega
    else:
        asymptote = a*1j*omega
    ax.plot(omega,20*np.log(np.abs(asymptote)),'--g')
    ax.set_ylabel(r'$\log |H(\omega)|$', fontsize=16)
    ylim = [min(db(H)), max(db(H))]
    ax.set_ylim(1.05*ylim[0]-0.05*ylim[1],1.05*ylim[1]-0.05*ylim[0])
    yticks = [(db(Hinf), 'log a')]
    xticks = [(1,'1')]
    if abs(a) != abs(b):
        xticks += [(abs(a/b),'a/b')]
        yticks += [(db(H0), 'log b')]
    ax.set_yticks([tick[0] for tick in yticks])
    ax.set_yticklabels([tick[1] for tick in yticks])
    ax.set_xticks([tick[0] for tick in xticks])
    ax.set_xticklabels([tick[1] for tick in xticks])
    ax.grid('on')
    ax = fig.add_subplot(2,1,2)
    ax.semilogx(omega,ang(H))
    yt=np.arange(np.min(ang(H))//15*15,np.max(ang(H))+15,15)
    ax.plot([1,abs(a/b)],[ang(Hfunc(1)), ang(Hfunc(abs(a/b)))],'.')
    ax.set_yticks(yt)
    ax.set_ylabel(r'$\measuredangle H(\omega)$',fontsize=16)
    ax.set_xlabel(r'$\omega\tau$',fontsize=16)
    ax.set_xticks([tick[0] for tick in xticks])
    ax.set_xticklabels([tick[1] for tick in xticks])
    ax.grid('on')

bodeplot_firstorder(a=1,b=0.0001)

Things to note:

- The term \( \frac{a}{\tau s+1} \) contains a pole at \( \omega = \frac{1}{\tau} \)
- The constant term b forms a zero that, if present, counters the pole
- The magnitude of the transfer function decreases at 20dB/decade for frequencies ω > 1/τ, until it reaches a point where the constant term b is larger than the rest of the transfer function
- The phase of the transfer
function goes from 0° to -90° because of the pole, but then returns to 0° because of the zero.

If one of the terms is negative, it does not affect the magnitude plot, but it does affect the phase:

bodeplot_firstorder(a=-1,b=0.0001)

If b and a are not separated as much, the zero kicks in shortly after the pole:

bodeplot_firstorder(a=1,b=0.2)

For a high-pass filter with a = -b, it changes the waveform somewhat:

bodeplot_firstorder(a=-1,b=1)

Following error of first-order systems

Back to the time domain.... The following error or tracking error of a first-order system measures how closely a particular first-order system is able to track its input. This really only makes sense for systems with unity gain and zero steady-state error, so we'll only consider first-order systems with b=0 and a=1, namely \( H(s) = \frac{1}{\tau s+1} \).

Step input

We've already seen how the first-order system tracks a step input: there's an initial error, but it decays to zero steady-state error with time constant τ.

Ramp input

Here's what happens if you pass in a ramp input:

t = np.arange(-0.5,5.5,0.001)
tau = 1
Vin = (t>0)*t
Vout = (t>0)*(t-tau*(1-np.exp(-t/tau)))

def ploterror(t,Vin,Vout,title):
    fig = plt.figure(figsize=(8,6))
    ax = fig.add_subplot(2,1,1)
    ax.plot(t,Vin)
    ax.plot(t,Vout)
    ax.set_ylabel(r'$V_{in}$ and $V_{out}$', fontsize=16)
    ax.set_xlim([min(t),max(t)])
    ylim = [min(Vin),max(Vin)]
    ax.set_ylim(1.05*ylim[0]-0.05*ylim[1],1.05*ylim[1]-0.05*ylim[0])
    ax.grid('on')
    ax.legend(['$V_{in}$','$V_{out}$'],'best',fontsize=16);
    ax = fig.add_subplot(2,1,2)
    ax.plot(t,Vin-Vout)
    ax.grid('on')
    ax.set_ylabel('error = $V_{in} - V_{out}$',fontsize=16)
    ax.set_xlim([min(t),max(t)])
    plt.xlabel(r'$t/\tau$',fontsize=16)
    plt.suptitle(title, fontsize=12)

ploterror(t,Vin,Vout, 'Unit ramp response of a 1st-order filter with time constant $\\tau$')

The steady-state error is no longer zero. This 1st-order system isn't good enough to follow a ramp with zero error.
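A quick simulation (illustrative, not from the article) confirms where the ramp-tracking error settles: for a unit ramp (slope R = 1), the error approaches τ times the ramp rate.

```python
import numpy as np

# Forward-Euler simulation of dV_out/dt = (V_in - V_out)/tau with a
# unit-ramp input; the steady-state tracking error works out to tau * R.
tau = 0.5
dt = 1e-4
t = np.arange(0, 10, dt)
vin = t                          # unit ramp, slope R = 1
vout = np.zeros_like(vin)
for i in range(1, len(t)):
    vout[i] = vout[i-1] + dt * (vin[i-1] - vout[i-1]) / tau
err = vin[-1] - vout[-1]         # steady-state tracking error

assert abs(err - tau * 1.0) < 1e-2
```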
If we go back to the differential equation \( \frac{dV_{out}}{dt} = \frac{V_{in} - V_{out}}{\tau} \) and multiply both sides by τ, we get \( V_{in}-V_{out} = \tau\frac{dV_{out}}{dt} \). In other words, the output can follow the ramp rate R of the input, but in order to do so, it has to have a steady-state error of \( \tau\frac{dV_{out}}{dt} = \tau R \). The slower the filter, the larger the steady-state error.

Sinusoidal input

The case of a sinusoidal input is mildly interesting. Rather than give the closed-form solution at first, let's use trapezoidal integration to simulate the response.

f = 0.3
Vin = (t>0)*np.sin(2*np.pi*f*t)
Vout = np.zeros_like(Vin)
y = Vout[0]
for i in range(1,len(Vout)):
    x = Vin[i]
    dt = t[i]-t[i-1]
    dy_dt_1 = (x-y)/tau
    y1 = y + dy_dt_1*dt
    dy_dt_2 = (x-y1)/tau
    y = y + (dy_dt_1 + dy_dt_2)*dt/2
    Vout[i] = y

ploterror(t,Vin,Vout, 'Response to $\sin\, 2\pi ft$ with f=0.3'
          +' of a 1st-order filter with time constant $\\tau$')

At "steady state" (there really isn't a true steady state here) the following error is sinusoidal. Here we can use complex exponentials to help us. If our input is a constant frequency, e.g. \( s=j2\pi f \,\Rightarrow\, V_{in} = e^{j2\pi ft} \), then the output is \( \frac{1}{\tau s+1}V_{in} \, \Rightarrow\, V_{out}=\frac{1}{1+j2\pi f\tau}e^{j2\pi ft} \), and that means that the steady-state error is \( V_{in}-V_{out} = \left(1 - \frac{1}{1+j2\pi f\tau}\right) V_{in} \):
xlim = [-1.5,1.5] alpha = np.logspace(xlim[0],xlim[1],300) lxlim = [omega[0],alpha[-1]] errfunc = lambda alpha: 1 - 1/(1j*alpha+1) Hfunc = lambda alpha: 1/(1j*alpha+1) db = lambda x: 20*np.log(np.abs(x)) ang = lambda x: np.angle(x,deg=True) H = Hfunc(alpha) err = errfunc(alpha) axlist = [] fig = plt.figure(figsize=(8,6)) ax = fig.add_subplot(3,1,1) ax.semilogx(alpha,db(H)) ax.set_ylabel(r'$\log |H(\omega)|$', fontsize=16) #ax.set_ylim(1.05*ylim[0]-0.05*ylim[1],1.05*ylim[1]-0.05*ylim[0]) axlist.append(ax) ax = fig.add_subplot(3,1,2) ax.semilogx(alpha,ang(H)) yt=np.arange(np.min(ang(H))//15*15,np.max(ang(H))+15,15) ax.set_yticks(yt) ax.set_ylabel(r'$\measuredangle H(\omega)$',fontsize=16) axlist.append(ax) ax = fig.add_subplot(3,1,3) ax.semilogx(alpha,db(err)) ax.set_ylabel(r'$\log |\tilde{H}(\omega)|$', fontsize=16) #ax.set_ylim(1.05*ylim[0]-0.05*ylim[1],1.05*ylim[1]-0.05*ylim[0]) ax.set_xlabel(r'$ \alpha = 2\pi f\tau$',fontsize=16) axlist.append(ax) for (i,ax) in enumerate(axlist): ax.grid('on') ax.set_xlim(min(omega),max(omega)) if i < 2: ax.set_xticklabels([]) Here the critical quantity is \( \alpha = 2\pi f \tau \). We can quantify the input-to-output gain, phase lag, and error magnitude as a function of α. The exact values are input-to-output gain \( |H| = \frac{1}{\sqrt{1+\alpha^2}} \), phase lag \( \phi_{lag} =\measuredangle H = -\arctan \alpha \), and error magnitude \( |\tilde{H}| = \frac{\alpha}{\sqrt{1+\alpha^2}} \). You can see the general behavior of these values in the graphs above. For \( \alpha \ll 1 \), the input-to-output gain \( |H| \approx 1 - \frac{1}{2}\alpha^2 \); the phase lag in radians \( \phi_{lag} \approx -\alpha \), and the error magnitude \( |\tilde{H}| \approx \alpha \). For \( \alpha \gg 1 \), the input-to-output gain \( |H| \approx \frac{1}{\alpha} \); the phase lag in radians \( \phi_{lag} \approx \frac{\pi}{2}-\frac{1}{\alpha} \), and the error magnitude \( |\tilde{H}| \approx 1 - \frac{1}{2 \alpha^2} \). Wrapup That's really it. 
All first-order systems are essentially alike. If you remove the constant term b, they are exactly alike and can be graphed with the same shape: the magnitude can be normalized by dividing by the gain term a, the time can be normalized by dividing by the time constant τ, and the frequency can be normalized by multiplying by the time constant τ. The time response is exponential, and the frequency response contains one pole and, if the constant term b is present, one zero. The steady-state error for a tracking first-order system \( H(s) = \frac{1}{\tau s + 1} \) is zero for a unit step, τ for a unit ramp, and has frequency-dependent behavior (see above) for sinusoidal inputs. Not very interesting.

Things get more interesting when we get to higher-order systems. I'll talk about second-order systems in a future article.

Previous post by Jason Sachs: Lost Secrets of the H-Bridge, Part IV: DC Link Decoupling and Why Electrolytic Capacitors Are Not Enough

Next post by Jason Sachs: How to Include MathJax Equations in SVG With Less Than 100 Lines of JavaScript!
https://www.embeddedrelated.com/showarticle/590.php
Category Archives: Pink Poesy

Poetry, unsurprisingly.

Poetry Workshop

Attended a poetry workshop earlier today at the Fitz, across town. Harvested some of the more reasonable products below – four in response to artworks I've included (doubled up on the Rodin), and the last a prompt. Rough works, but hopefully of some service.

Large Clenched Hand (Grande main crispée) – Rodin

Vitality expressed in its moment of expungence. Pain, cast in metal. The body radiating its emotion, its anger and its revolt, using nothing more than itself. Masculine, this could be the hand of Laocoön as he grapples with the serpent coils of his pride-wrought fate. Ah! Pride! Whomever this hand belongs to, it is a proud man. The despair, the anger expressed in the rictus clench could signal no less than a will – a prideful will – roundly thwarted.

David and Goliath – Degas

Dun, nude, loose. Your colours, the olive green, the dirty taupe, evoke your crude life, your barbaric brutish existence. There could never have been honour won that day. Honour requires grace, and there is no grace to be found in the rude, shifting muck of your lives.

Large Clenched Hand (Grande main crispée) – Rodin

Alive!
Even in My agony,
Beset by the cruelties of the World
You will not take Me
I refuse this fate
I despise your arrows of Inevitability
Reckon!
I am a Man
and though You would snuff Me out,
You cannot deny that I have been
Alive!

A street, possibly in Port-Marly, 1875-77 – Sisley

I see your view, the view that you moulded, filtered and regurgitated. And I deem it good. Masterful, even. Yet, it is not the skill of rendering the sky, nor the evocation of the shadow, nor any of the many other elements of quality that make me pause. No. It is simply the way you write your name. The clumsiness of it. Slap-dash. Work-a-day. Did you, too, regret the ugliness of your hand? Did you look on that text and grow sad at its lack of finesse? Six characters, rough-written, express more than the painting entire.
Just as you reworked what you saw, so do I import my own assumptions. But, whatever phantasms I conjure, whatever gross errors I commit, I am left with that sliver of Truth. You and I, We are brothers.

Sandbox

2×8's screwed to one another, hanging together loosely, unevenly set atop flags of repurposed concrete. A shoddy affair, made in an amateur manner but fit to purpose. Good enough to hold back the spilling sands. You can remember the damp grit of it, even now – you can still feel it in your mouth, that not-quite-earthy taste, that roughness you knew, even then, was doing damage to your teeth. How many hours did you spend there, building imaginary worlds which, god-like, shifted to your every whim? Shifted, like so much sand. Solitary hours – yes, there were times you were joined, where your pantheon doubled, trebled – but it was never as good as when there was but a single will – a direction unfettered by compromise. A tyranny enlightened and self-contained. Contained by a set of 2×8's screwed to one another and hanging together, loosely.

Vigil.

Swelter

The storied sailor may be right, and Hell is a cold, icy ocean trench that saps your will and chokes your heart; I wouldn't pretend to know. Despair, though – Despair is hot. The heat of an over-burdened body. The heat of all the rage and impotence clutched close and tight. The heat of a breath held too long, after the swirling eye-spots have blotted out vision and the lungs shudder to bursting. The heat of a fatal fever – too extreme to heal, too strong to dissipate. Despair has the heat of friction, born of all the wasted efforts and the rued missed chances, and the stupid, wanton mistakes. The heat smothers, blanketing you with its weight. It surrounds you even while it comes from inside, till the tears start from your bloodshot eyes and moans, undirected, start from your parched throat. Yes, Hell might be cold, but Despair, Despair is hot.
https://staggeringbrink.wordpress.com/category/pink-poesy/
floor_mod¶

paddle.floor_mod(x, y, name=None)

Mod two tensors element-wise. The equation is:

\[ out = x \% y \]

Note: paddle.remainder supports broadcasting. If you want to know more about broadcasting, please refer to Broadcasting.

Parameters:
- x (Tensor) – the dividend tensor.
- y (Tensor) – the divisor tensor.
- name (str, optional) – name for the operation; default is None.

Returns:
N-D Tensor. A location into which the result is stored. If x, y have different shapes and are "broadcastable", the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples:

```python
import paddle

x = paddle.to_tensor([2, 3, 8, 7])
y = paddle.to_tensor([1, 5, 3, 3])
z = paddle.remainder(x, y)
print(z)
# [0, 3, 2, 1]
```
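Paddle itself isn't required to understand the element-wise semantics of the example above. Here is a plain-Python sketch (the helper `elementwise_mod` is illustrative, not part of the Paddle API), including a scalar divisor to mimic the simplest broadcasting case:

```python
def elementwise_mod(x, y):
    """Element-wise x % y; a scalar y is 'broadcast' across x."""
    if not isinstance(y, list):
        y = [y] * len(x)
    return [a % b for a, b in zip(x, y)]

print(elementwise_mod([2, 3, 8, 7], [1, 5, 3, 3]))  # [0, 3, 2, 1]
print(elementwise_mod([2, 3, 8, 7], 3))             # [2, 0, 2, 1]
```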
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/floor_mod_en.html
Scala's type system allows us to annotate type parameters with their variance: covariant, contravariant, invariant. Variance allows us to define the subtyping relationships between type constructors – that is, under which conditions F[A] is a subtype of F[B]. Similarly in functional programming, there are covariant functors, contravariant functors, and invariant functors. The similarity in names is not coincidental.

The common example is List[+A], which is covariant in its type parameter, denoted by the + next to the A. A type constructor with a covariant type parameter means that if there is a subtyping relationship between the type parameters, there is a subtyping relationship between the two instances of the type constructor. For example, if we have a List[Circle], we can substitute it anywhere we have a List[Shape].

Another example of covariance is in parsing, for example in the following Read type class.

```scala
trait Read[+A] {
  def read(s: String): Option[A]
}
```

It makes sense to make Read covariant because if we can read a subtype, then we can read the supertype by reading the subtype and throwing away the subtype-specific information. For instance, if we can read a Circle, we can read a valid Shape by reading the Circle and ignoring any Circle-specific information.

A type that cannot safely be made covariant is Array. If Array were covariant, we could substitute an Array[Circle] for an Array[Shape]. This can get us into a nasty situation.

```scala
val circles: Array[Circle] = Array.fill(10)(Circle(..))
val shapes: Array[Shape] = circles // works only if Array is covariant

shapes(0) = Square(..) // Square is a subtype of Shape
```

If Array were covariant this would compile fine, but fail at runtime. In fact, Java arrays are covariant, and so the analogous Java code would compile, throwing an ArrayStoreException when run. The compiler accepts this because it is valid to upcast an Array[Circle] into an Array[Shape], and it is valid to insert a Shape into an Array[Shape].
However, the runtime representation of shapes is still an Array[Circle], and inserting a Square into that isn't allowed.

In general, a type can be made safely covariant if it is read-only. If we know how to read a specific type, we know how to read a more general type by throwing away any extra information. List is safe to make covariant because it is immutable and we can only ever read information off of it. With Array, we cannot make it covariant because we are able to write to it.

As we've just seen, covariance states that when A subtypes B, then F[A] subtypes F[B]. Put differently, if A can be turned into a B, then F[A] can be turned into an F[B]. We can encode this behavior literally in the notion of a Functor.

```scala
trait Functor[F[_]] {
  def map[A, B](f: A => B): F[A] => F[B]
}
```

This is often encoded slightly differently by changing the order of the arguments:

```scala
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}
```

We can implement Functor for List and Read.

```scala
val listFunctor: Functor[List] = new Functor[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] =
    fa match {
      case Nil     => Nil
      case a :: as => f(a) :: map(as)(f)
    }
}

val readFunctor: Functor[Read] = new Functor[Read] {
  def map[A, B](fa: Read[A])(f: A => B): Read[B] = new Read[B] {
    def read(s: String): Option[B] =
      fa.read(s) match {
        case None    => None
        case Some(a) => Some(f(a))
      }
  }
}
```

With that we can do useful things like

```scala
val circles: List[Circle] = List(Circle(..), Circle(..))
val shapes: List[Shape] = listFunctor.map(circles)(circle => circle: Shape) // upcast

val parseCircle: Read[Circle] = ...
val parseShape: Read[Shape] = readFunctor.map(parseCircle)(circle => circle: Shape) // upcast
```

or more generally:

```scala
def upcast[F[_], A, B <: A](functor: Functor[F], fb: F[B]): F[A] =
  functor.map(fb)(b => b: A)
```

upcast's behavior does exactly what covariance does – given some supertype A (Shape) and a subtype B (Circle), we can mechanically (and safely) turn an F[B] into an F[A].
Put differently, anywhere we expect an F[A] we can provide an F[B], i.e. covariance. For this reason, Functor is sometimes referred to in full as a covariant functor.

Contravariance flips the direction of the relationship in covariance – an F[Shape] is considered a subtype of F[Circle]. This seems strange – when I was first learning about variance I couldn't come up with a situation where this would make sense. If we have a List[Shape] we cannot safely treat it as a List[Circle] – doing so comes with all the usual warnings about downcasting. Similarly, if we have a Read[Shape], we cannot treat it as a Read[Circle] – we know how to parse a Shape, but we don't know how to parse any additional information Circle may need. It appears fundamentally read-only types cannot be treated as contravariant.

However, given that contravariance is covariance with the direction reversed, can we also reverse the idea of a read-only type? Instead of reading a value from a String, we can write a value to a String.

```scala
trait Show[-A] {
  def show(a: A): String
}
```

Show is the other side of Read – instead of going from a String to an A, we go from an A into a String. This reversal allows us to define contravariant behavior – if we are asked to provide a way to show a Circle (Show[Circle]), we can give instead a way to show just a Shape. This is a valid substitution because we can show a Circle by throwing away Circle-specific information and showing just the Shape bits. This means that Show[Shape] is a subtype of Show[Circle], despite Circle being a subtype of Shape.

In general, we can show (or write) a subtype if we know how to show a supertype, by tossing away subtype-specific information (an upcast) and showing the remainder. Again, this means Show[Supertype] is substitutable for, or a subtype of, Show[Subtype]. For similar reasons that read-only types can be made covariant, write-only types can be made contravariant.

Arrays cannot be made contravariant either.
If they were, we could do unsafe reads:

```scala
val shapes: Array[Shape] = Array.fill(10)(Shape(..))
val circles: Array[Circle] = shapes // works only if Array is contravariant

val circle: Circle = circles(0)
```

circle, having been read from an Array[Circle], has type Circle. To the compiler this would be fine, but at runtime the underlying Array[Shape] may give us a Shape that is not a Circle and crash the program.

Our Functor interface made explicit the behavior of covariance – we can define a similar interface that captures contravariant behavior. If B can be used where A is expected, then F[A] can be used where an F[B] is expected. To encode this explicitly:

```scala
trait Contravariant[F[_]] {
  // Alternative encoding:
  // def contramap[A, B](f: B => A): F[A] => F[B]

  // More typical encoding
  def contramap[A, B](fa: F[A])(f: B => A): F[B]
}
```

We can implement an instance for Show.

```scala
val showContravariant: Contravariant[Show] = new Contravariant[Show] {
  def contramap[A, B](fa: Show[A])(f: B => A): Show[B] = new Show[B] {
    def show(b: B): String = {
      val a = f(b)
      fa.show(a)
    }
  }
}
```

Here we are saying if we can show an A, we can show a B by turning a B into an A before showing it. Upcasting is a specific case of this, when B is a subtype of A.

```scala
def contraUpcast[F[_], A, B >: A](contra: Contravariant[F], fb: F[B]): F[A] =
  contra.contramap(fb)((a: A) => a: B)
```

Going back to Shapes and Circles, we can show a Circle by upcasting it into a Shape and showing that.

We observed that read-only types are covariant and write-only types are contravariant. This can be seen in the context of functions and what function types are subtypes of others. An example function:

```scala
// Right now we only care about the input
def squiggle(circle: Circle): Unit = ???

// or

val squiggle: Circle => Unit = ???
```

What type is a valid subtype of Circle => Unit?
An important note is that we're not looking for what subtypes we can pass in to the function; we are looking for a value with a type that satisfies the entirety of the function type Circle => Unit. A first guess may involve some subtype of Circle like Dot (a circle with a radius of 0), such as Dot => Unit.

```scala
val squiggle: Circle => Unit = (d: Dot) => d.someDotSpecificMethod()
```

This doesn't work – we are asserting, with the moral equivalent of a downcast, that any Circle input to the function is a Dot, which is not safe to assume. What if we used a supertype of Circle?

```scala
val squiggle: Circle => Unit = (s: Shape) => s.shapeshift()
```

This is valid – from the outside looking in, we have a function that takes a Circle and returns Unit. Internally, we can take any Circle, upcast it into a Shape, and go from there. Showing things a bit differently reveals the relationship better:

```scala
type Input[A] = A => Unit

val inputSubtype: Input[Shape] = (s: Shape) => s.shapeshift()
val input: Input[Circle] = inputSubtype
```

We have Input[Shape] <: Input[Circle], with Circle <: Shape, so function parameters are contravariant. The type checker enforces this when we try to use covariant type parameters in contravariant positions.

```scala
scala> trait Foo[+A] { def foo(a: A): Int = 42 }
<console>:15: error: covariant type A occurs in contravariant position in type A of value a
       trait Foo[+A] { def foo(a: A): Int = 42 }
                               ^
```

Since function parameters are contravariant, a type in that position cannot also be covariant. To solve this we "reverse" the constraint imposed by the covariant annotation by parameterizing with a supertype B.

```scala
scala> trait Foo[+A] { def foo[B >: A](a: B): Int = 42 }
defined trait Foo
```

Let's do the same exercise with function return types.

```scala
val squaggle: Unit => Shape = ???
```

Since using the supertype seemed to work with parameters, let's pick a supertype here, Object.
```scala
val squaggle: Unit => Shape = (_: Unit) => somethingThatReturnsObject()
```

For similar reasons to the issues with using a subtype for the input parameter, we cannot use a supertype for the output. The function type states the return type is Shape, but we're returning an Object which may or may not be a valid Shape. As far as the type checker is concerned, this is invalid, and the checker rejects the program. Trying instead with a subtype:

```scala
val squaggle: Unit => Shape = (_: Unit) => Circle(..)
```

This makes sense – the function type says it returns a Shape, and inside we return a Circle, which is a perfectly valid Shape. As before, rephrasing the type signatures leads to some insights.

```scala
type Output[A] = Unit => A

val outputSubtype: Output[Circle] = (_: Unit) => Circle(..)
val output: Output[Shape] = outputSubtype
```

That is, Output[Circle] <: Output[Shape] with Circle <: Shape – function return types are covariant. Again the type checker will enforce this:

```scala
scala> trait Bar[-A] { def bar(): A = ??? }
<console>:15: error: contravariant type A occurs in covariant position in type ()A of method bar
       trait Bar[-A] { def bar(): A = ??? }
                           ^
```

As before, we solve this by "reversing" the constraint imposed by the variance annotation.

```scala
scala> trait Bar[-A] { def bar[B <: A](): B = ??? }
defined trait Bar
```

Function inputs are contravariant and function outputs are covariant. Taking the previous examples together, a function type Shape => Circle can be put in a place expecting a function type Circle => Shape. We arrived at this conclusion by observing the behavior of subtype variance and the corresponding functors. Taken in the context of functional programming, where the only primitive is a function, we can draw a conclusion in the other direction. Where function inputs are contravariant, types in positions where computations are done (e.g. input or write-only positions) are also contravariant (similarly for covariance).
Unannotated type parameters are considered invariant – the only relationship that holds is that if a type A is equal to a type B, then F[A] is equal to F[B]. Otherwise, different instantiations of a type constructor have no relationship with one another. Given an invariant F[_], an F[Circle] is not a subtype of F[Shape] – you need to explicitly provide the conversion.

Arrays are invariant in Scala because they can be neither covariant nor contravariant. If we make Array covariant, we can get unsafe writes. If we make it contravariant, we can get unsafe reads. Since read-only types can only be covariant and write-only types contravariant, our compromise is to make types that support both invariant. In order to treat an Array of one type as an Array of another, we need to have conversions in both directions. These must be provided manually, as the type checker has no way of knowing what the conversions would be.

Similar to (covariant) Functor and Contravariant, we can write Invariant.

```scala
trait Invariant[F[_]] {
  def imap[A, B](fa: F[A])(f: A => B)(g: B => A): F[B]
}
```

For demonstration purposes we write our own Array type

```scala
import scala.collection.mutable.ListBuffer

class Array[A] {
  private val repr = ListBuffer.empty[A]

  def read(i: Int): A = repr(i)
  def write(i: Int, a: A): Unit = repr(i) = a
}
```

and define Invariant[Array].

```scala
val arrayInvariant: Invariant[Array] = new Invariant[Array] {
  def imap[A, B](fa: Array[A])(f: A => B)(g: B => A): Array[B] = new Array[B] {
    // Convert read A to B before returning – covariance
    override def read(i: Int): B = f(fa.read(i))

    // Convert B to A before writing – contravariance
    override def write(i: Int, b: B): Unit = fa.write(i, g(b))
  }
}
```

Another example of a read-write type that doesn't involve Arrays (or mutation) can be found by combining the Read and Show interfaces:

```scala
trait Serializer[A] extends Read[A] with Show[A] {
  def read(s: String): Option[A]
  def show(a: A): String
}
```

Serializer both reads (from a String) and writes (to a String).
We can't make it covariant because that would cause issues with show, and we can't make it contravariant because that would cause issues with read. Therefore our only choice is to keep it invariant.

```scala
val serializerInvariant: Invariant[Serializer] = new Invariant[Serializer] {
  def imap[A, B](fa: Serializer[A])(f: A => B)(g: B => A): Serializer[B] = new Serializer[B] {
    def read(s: String): Option[B] = fa.read(s).map(f)
    def show(b: B): String = fa.show(g(b))
  }
}
```

We can see the Invariant interface is more general than both Functor and Contravariant – where Invariant requires functions going in both directions, Functor and Contravariant only require one. We can make Functor and Contravariant subtypes of Invariant by ignoring the direction we don't care about.

```scala
trait Functor[F[_]] extends Invariant[F] {
  def map[A, B](fa: F[A])(f: A => B): F[B]

  def imap[A, B](fa: F[A])(f: A => B)(g: B => A): F[B] =
    map(fa)(f)
}

trait Contravariant[F[_]] extends Invariant[F] {
  def contramap[A, B](fa: F[A])(f: B => A): F[B]

  def imap[A, B](fa: F[A])(f: A => B)(g: B => A): F[B] =
    contramap(fa)(g)
}
```

Going back to treating Array and Serializer as a read/write store: if we make it read-only (like a read-only handle on a resource), we can safely treat it as if it were covariant. If we are asked to read Shapes and we know how to read Circles, we can read a Circle and upcast it into a Shape before handing it over. Similarly, if we make it write-only (like a write-only handle on a resource), we can safely treat it as contravariant. If we are asked to store Circles and we know how to store Shapes, we can upcast each Circle into a Shape before storing it.

Variance manifests at two levels: one at the type level, where subtyping relationships are defined, and the other at the value level, where it is encoded as an interface which certain types can conform to. Thus far we have seen the three kinds of variance Scala supports:

1. invariance: A = B → F[A] = F[B]
2. covariance: A <: B → F[A] <: F[B]
3. contravariance: A >: B → F[A] <: F[B]

This gives us the following graph:

```
    invariance
     ↑      ↑
    /        \
   -          +
```

Completing the diamond implies a fourth kind of variance, one that takes contravariance and covariance together. This is known as phantom variance or anyvariance, a variance with no constraints on the type parameters: F[A] = F[B] regardless of what A and B are. Unfortunately, Scala's type system is missing this kind of variance, which leaves us just short of a nice diamond, but we can still encode it in an interface.

```scala
trait Phantom[F[_]] {
  def pmap[A, B](fa: F[A]): F[B]
}
```

Given any F[A], we can turn that into an F[B], for all choices of A and B. With this power we can implement covariant and contravariant functors.

```scala
trait Phantom[F[_]] extends Functor[F] with Contravariant[F] {
  def pmap[A, B](fa: F[A]): F[B]

  def map[A, B](fa: F[A])(f: A => B): F[B] =
    pmap(fa)

  def contramap[A, B](fa: F[A])(f: B => A): F[B] =
    pmap(fa)
}
```

This completes our diamond of variance.

```
    invariance
     ↑      ↑
    /        \
   -          +
     ↑      ↑
      \    /
     phantom
```

Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.
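To make the standalone Phantom trait less abstract, here is a small hedged sketch (the Name wrapper is illustrative, not from any library): a phantom type parameter never appears in the underlying data, so retagging is trivially safe.

```scala
// A's only job is to tag the value at the type level;
// no A is ever stored, read, or written.
final case class Name[A](value: String)

val namePhantom: Phantom[Name] = new Phantom[Name] {
  def pmap[A, B](fa: Name[A]): Name[B] = Name(fa.value)
}

// Retag a Name[Circle] as a Name[Shape] (or anything else) for free.
val circleName: Name[Circle] = Name("wheel")
val shapeName: Name[Shape] = namePhantom.pmap(circleName)
```

Since neither reads nor writes of A ever happen, both the covariant and contravariant constraints hold vacuously, which is exactly why pmap needs no conversion function at all.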
https://typelevel.org/blog/2016/02/04/variance-and-functors.html
Attachment 13678 [details]: Using screenshot

This might be an issue with the Android agent, but after entering a bogus using statement into the REPL, I can't do anything more until I restart the app on the Android simulator. For example, when I enter:

> using System.DateTime;

I get:

(1,2): A `using' directive can only be applied to namespaces but `System.DateTime' denotes a type. Consider using a `using static' instead
(1,2): A `using' directive can only be applied to namespaces but `System.DateTime' denotes a type. Consider using a `using static' instead
(1,2): The using directive for `System.DateTime' appeared previously in this namespace

Clearly this is a dumb thing to enter into the REPL and the errors are all well and good (except maybe that they are duplicated). However, after entering this, anything I write into the REPL fails with that same error (see attached screenshot).

Thanks, I remember seeing this issue in Sketches as well. You can also see it with `csharp` on the command line (all of these use Mono.CSharp as the interactive compiler backend). I've filed bug #35604 to track the underlying issue. This fix will be part of future releases of Xamarin platform products.

*** This bug has been marked as a duplicate of bug 35604 ***
https://xamarin.github.io/bugzilla-archives/35/35559/bug.html
The evolving series of MPI standards, available on the web at, describe the features of the programming interface and serve as guidelines for those developing MPI implementations for various computer platforms. The MPI-1.0 standard was produced way back in May 1994, and the MPI-1.1 standard was released in June 1995. Within a few years, the MPI-1.2 standard and the MPI-2 standard were produced. The MPI-1.2 standard consists primarily of clarifications and corrections to MPI-1.1 and one new routine. Forward compatibility is preserved so that a valid MPI-1.1 program is both a valid MPI-1.2 program and a valid MPI-2 program.

Most MPI implementations in common use — including the two most popular ones for Linux clusters, called MPICH and LAM/MPI — comply with the MPI-1.2 standards. LAM/MPI and a new version of MPICH available in beta, called MPICH2, already support many of the features new to MPI-2. The most important MPI-2 features include process creation and management; one-sided communications; collective operations on intercommunicators; external interfaces to error handlers, data types, and requests; parallel input/output (also known as MPI-IO); bindings for C, C++, Fortran 77, and Fortran 90; and a variety of miscellaneous new routines and data types.

While LAM/MPI and MPICH2 in their present forms support only subsets of the new MPI-2 features, one or both may already implement the code needed for your programming project. Both provide MPI-I/O support using ROMIO, offer the new, portable mpiexec command-line startup, provide dynamic process management, offer C++ bindings, and implement basic one-sided communication. Even if LAM and MPICH2 don't provide all the MPI-2 features you may ultimately want, they have enough of the new stuff to make it worth taking them out for a spin.

Trying Out LAM and MPICH2

At the time of this writing, LAM/MPI is at version 7.0.6, and MPICH2 is at 0.96beta2. LAM can be downloaded from, and MPICH2 is available at.
Both use autoconf and are built in the standard ./configure; make; make install fashion. For initial testing of these MPI implementations, you may wish to install one or both in your home directory until you're ready to make them accessible to all users. Figure One and Figure Two show sample output from installing and testing LAM and MPICH2, respectively, in a typical home directory. (These tests were performed on a cluster running Red Hat Linux 7.3.) Both installations begin with the command ./configure --prefix=…, where --prefix specifies the directory prefix to use for installation. After that, running make builds the software, and typing make install installs the software in the directory specified earlier with --prefix.

Figure One: Installing and testing LAM/MPI 7.0.6

Unpack, build, and install the LAM source code into your home directory.

```
[node01 src]$ tar xzf lam-7.0.6.tar.gz
[node01 src]$ cd lam-7.0.6
[node01 lam-7.0.6]$ ./configure --prefix=$HOME/lam-7.0.6
[node01 lam-7.0.6]$ make
[node01 lam-7.0.6]$ make install
[node01 lam-7.0.6]$ cd
```

Next, add the LAM installation to your path by adding the line…

```
export PATH=$HOME/lam-7.0.6/bin:$PATH
```

…to your .bashrc. Change your shell settings by loading .bashrc. You should now be able to run LAM commands such as laminfo.
```
[node01 forrest]$ source .bashrc
[node01 forrest]$ laminfo
           LAM/MPI: 7.0.6
            Prefix: /home/forrest/lam-7.0.6
      Architecture: i686-pc-linux-gnu
     Configured by: forrest
     Configured on: Tue Jun 1 22:50:18 EDT 2004
    Configure host: node01.cluster.ornl.gov
        C bindings: yes
      C++ bindings: yes
  Fortran bindings: yes
       C profiling: yes
     C++ profiling: yes
 Fortran profiling: yes
     ROMIO support: yes
      IMPI support: no
     Debug support: no
      Purify clean: no
          SSI boot: globus (Module v0.5)
          SSI boot: rsh (Module v1.0)
          SSI coll: lam_basic (Module v7.0)
          SSI coll: smp (Module v1.0)
           SSI rpi: crtcp (Module v1.0.1)
           SSI rpi: lamd (Module v7.0)
           SSI rpi: sysv (Module v7.0)
           SSI rpi: tcp (Module v7.0)
           SSI rpi: usysv (Module v7.0)
```

Now, create a list of hosts that will run LAM.

```
[node01 forrest]$ cat > hostfile
node01
node02
node03
node04
node05
^D
```

And start LAM on all of those hosts with lamboot.

```
[node01 forrest]$ lamboot -v hostfile

LAM 7.0.6/MPI 2 C++/ROMIO - Indiana University

n-1<29549> ssi:boot:base:linear: booting n0 (node01)
n-1<29549> ssi:boot:base:linear: booting n1 (node02)
n-1<29549> ssi:boot:base:linear: booting n2 (node03)
n-1<29549> ssi:boot:base:linear: booting n3 (node04)
n-1<29549> ssi:boot:base:linear: booting n4 (node05)
n-1<29549> ssi:boot:base:linear: finished
```

Finally, show the list of available nodes with lamnodes, run an example program (shown in Listing One), and then shut down LAM with lamhalt.

```
[node01 forrest]$ lamnodes
n0      node01.cluster.ornl.gov:1:origin,this_node
n1      node02.cluster.ornl.gov:1:
n2      node03.cluster.ornl.gov:1:
n3      node04.cluster.ornl.gov:1:
n4      node05.cluster.ornl.gov:1:
[node01 forrest]$ mpicc -O -o mpi2_get.lam mpi2_get.c
[node01 forrest]$ mpiexec -n 5 ./mpi2_get.lam
Hello world! I'm rank 0 of 5 on node01.cluster.ornl.gov running MPI 1.2
Hello world! I'm rank 2 of 5 on node03 running MPI 1.2
Hello world! I'm rank 1 of 5 on node02 running MPI 1.2
Hello world! I'm rank 4 of 5 on node05 running MPI 1.2
Hello world! I'm rank 3 of 5 on node04 running MPI 1.2
Process 0 has neighbor 4
Process 1 has neighbor 0
Process 2 has neighbor 1
Process 4 has neighbor 3
Process 3 has neighbor 2
[node01 forrest]$ lamhalt -v

LAM 7.0.6/MPI 2 C++/ROMIO - Indiana University

Shutting down LAM
hreq: waiting for HALT ACKs from remote LAM daemons
hreq: received HALT_ACK from n1 (node02.cluster.ornl.gov)
hreq: received HALT_ACK from n2 (node03.cluster.ornl.gov)
hreq: received HALT_ACK from n3 (node04.cluster.ornl.gov)
hreq: received HALT_ACK from n4 (node05.cluster.ornl.gov)
hreq: received HALT_ACK from n0 (node01.cluster.ornl.gov)
LAM halted
```

LAM Installation

Once LAM is built and installed, the path to its executable files (in the example, $HOME/lam-7.0.6/bin) should be added to your path. This is best done by editing your .bashrc file, assuming you use bash as your login shell, so that processes can be spawned on other cluster nodes. Next, source your .bashrc file so that LAM is added to your path. Once your path is modified, the laminfo command can be used to find out which version of LAM is available, where it's located, and which options were compiled into the software.

LAM and MPICH2 both now use daemons running on cluster nodes for starting up MPI programs. This method provides faster startup of codes, particularly on clusters with many nodes. These daemons must be started separately prior to executing the MPI program. The LAM developers don't allow their daemon to run as the root user for security reasons. As a result, each user must start his or her own set of daemon processes on the desired cluster nodes. The MPICH2 developers, on the other hand, encourage sites to run a single daemon as the root user, started at system boot time, to service all users' MPICH2 jobs. This prevents the users from having to start their own daemons and shut them down when their jobs are complete. To test the MPI implementations shown here, individual user daemons are started, used, and halted.
LAM daemons are started using the lamboot command. This command needs a list of node hostnames on which daemons should be started. As shown in Figure One, a list of nodes is written to hostfile, and lamboot is executed with -v (for verbose output) and hostfile as arguments. After the daemons are started, executing lamnodes shows the list of nodes on which the daemons are running.

MPI-2 Test Code

To test the basic features of LAM and MPICH2, a simple MPI program is compiled and executed. The program, called mpi2_get.c, is shown in Listing One. The code starts off just like the typical "Hello World!" program that we usually use. MPI is initialized by calling MPI_Init(); the process rank is obtained from MPI_Comm_rank(); the number of processes running the code in parallel (the size) is obtained by calling MPI_Comm_size(); and the hostname of the node on which the process is running is obtained from MPI_Get_processor_name().
This routine was added in the MPI-1.2 standard so that developers can tell which version and subversion of the ever-evolving standard is being run with their code. This provides a mechanism for using either old or new communications routines for message passing at run time. As usual, all of this information, including the version and subversion numbers, is printed by every process.

To see if the simplest MPI-2 one-sided communication routine works, the subsequent MPI calls are used to "grab" the process rank of each process's left neighbor. The zeroth process reaches around to the last process to obtain the rank of its neighbor. This sort of remote memory access (RMA) is new to MPI-2. After a "window" of memory is made available, another MPI process can get data from or put data into the memory of another MPI process. This new one-sided communication mechanism can be used to avoid global computations and the periodic polling for messages that often occurs in traditional MPI programs.

First, a window of memory is created by calling MPI_Win_create(). This is a collective operation, meaning it must be called on every process in the given communicator. In this case, the communicator is MPI_COMM_WORLD, which refers to all processes running the program. The memory window on each process is then accessible to all of the other processes. MPI_Win_create() is provided the base address of the memory segment (&rank); the size of the window in bytes (sizeof(rank)); the local unit size for displacements into the window in bytes (sizeof(int)); an info argument (MPI_INFO_NULL); a communicator (MPI_COMM_WORLD); and a pointer to an opaque object that is filled in by the call (&win). This window object represents the group of processes that own and access the window, as well as the attributes of the window. Then MPI_Win_fence() is called to synchronize the processes. This collective call forces the completion of any previous RMA communications, if there were any.
In this case, the fence starts a new series of RMA communications (called an access epoch) for the specified window (win), since it's followed by a subsequent MPI_Win_fence() call with an RMA communication between them. Next, MPI_Get() is called to obtain the rank value for the left neighbor of each MPI process. Passed to MPI_Get() are: the address on the local process where the data are to be stored (&neighbor, called the origin address); the number of entries in the buffer (1); the origin data type (MPI_INT); the rank of the process whose memory is to be accessed (called the target rank); the displacement into the window of the target buffer (0); the number of entries in the target buffer (1); the target data type (MPI_INT); and the window object for communication (win). After the MPI_Get() call returns, MPI_Win_fence() is called again to complete the one-sided communication, thereby ending the access epoch. Now, we can be assured that neighbor is filled with the desired value, so the process rank and neighbor rank are printed. Finally, the memory window is freed by calling MPI_Win_free() with a pointer to the window object, and MPI is shut down with a call to MPI_Finalize().

When compiled and run with LAM using mpiexec, as shown in Figure One, you should see correct output. Each process prints out its rank, the total number of processes being used, the hostname of its node, and the MPI version and subversion numbers. LAM claims to be fully MPI-1.2 compliant. (Although some MPI-2 features obviously work in LAM, it can't claim to be MPI-2 compliant until all of its features are implemented and tested.) Lastly, each process prints the results of its RMA: the process rank of its left neighbor. Once you're finished running MPI programs, your personal LAM daemons should be stopped using lamhalt.

Installing MPICH2

Just like LAM, MPICH2 is built by running ./configure --prefix=directory (specifying a directory for the installation), make, and make install.
As before, the directory of executable files must be added to your path in .bashrc, or some other file if bash isn't your login shell. Be careful to have only one of the LAM or MPICH2 paths active at a time; since both provide mpicc and mpiexec, it's otherwise easy to lose track of which implementation you're using. The startup daemon under MPICH2 is written in Python 2 and is called mpd. Daemon processes must be spawned prior to running an MPICH2 program using mpdboot. Under Red Hat Linux 7.3, mpd fails to start because of a syntax error. The problem is that mpd and the other Python 2 scripts in the mpich2-0.96p2/bin/ directory refer to python as the interpreter, but Python 2 is called python2 under Red Hat 7.3. To circumvent this problem, simply edit all of the scripts in this directory and change python to python2. As shown in Figure Two, a file called .mpd.conf must be created containing secretword=XXXX, where XXXX is some secret word or phrase.

Figure Two: Installing and testing MPICH2 0.96p2

Unpack, build, and install the MPICH2 source code into your home directory.

[node01 src]$ tar xzf mpich2-beta.tar.gz
[node01 src]$ cd mpich2-0.96p2
[node01 mpich2-0.96p2]$ ./configure --prefix=$HOME/mpich2-0.96p2
[node01 mpich2-0.96p2]$ make
[node01 mpich2-0.96p2]$ make install
[node01 mpich2-0.96p2]$ cd

Add the MPICH2 installation to your path by adding

export PATH=$HOME/mpich2-0.96p2/bin:$PATH

to your .bashrc or other login shell startup file. Set the path by loading .bashrc, create the secret word and hosts files, boot MPICH2 on all of the nodes, verify the nodes, run the sample code in Listing One, and terminate all of the daemons.
[node01 forrest]$ source .bashrc
[node01 forrest]$ echo "secretword=MyLittleSecret" > .mpd.conf
[node01 forrest]$ chmod 600 .mpd.conf
[node01 forrest]$ cat > mpd.hosts
node01
node02
node03
node04
node05
^D
[node01 forrest]$ mpdboot -r rsh -v
starting local mpd on node01.cluster.ornl.gov
starting remote mpd on node02
starting remote mpd on node03
starting remote mpd on node04
starting remote mpd on node05
1 out of 5 mpds started; waiting for more …
5 out of 5 mpds started
[sci1-1 forrest]$ mpdtrace
node01
node04
node02
node05
node03
[node01 forrest]$ mpicc -O -o mpi2_get.mpich2 mpi2_get.c
[node01 forrest]$ mpiexec -n 5 ./mpi2_get.mpich2
Hello world! I’m rank 2 of 5 on node02 running MPI 1.2
Hello world! I’m rank 0 of 5 on node01.cluster.ornl.gov running MPI 1.2
Hello world! I’m rank 1 of 5 on node04 running MPI 1.2
Hello world! I’m rank 3 of 5 on node05 running MPI 1.2
Hello world! I’m rank 4 of 5 on node03 running MPI 1.2
Process 0 has neighbor 4
Process 4 has neighbor 3
Process 2 has neighbor 1
Process 1 has neighbor 0
Process 3 has neighbor 2
[node01 forrest]$ mpdallexit

In addition, a list of node hostnames is needed to specify where daemons should be started. This list is stored in mpd.hosts. Once that file is created, mpdboot can spawn the daemons on the hosts listed in mpd.hosts. By default, mpdboot starts daemons using ssh, but it can use (the insecure) rsh if you specify -r rsh, as shown in Figure Two. The -v argument turns on verbose output. Running mpdtrace shows a list of nodes running mpd. After recompiling the mpi2_get.c program using MPICH2’s mpicc, the code is run with mpiexec. The output of mpi2_get, as shown in Figure Two, is the same as was produced with LAM. Finally, the MPICH2 daemons are halted by running mpdallexit.

The Tip of the Iceberg

Now that you have LAM and/or MPICH2 installed, you can explore many of the new MPI-2 features and start using them in your own codes.
With both vendor-supported and free MPI-2 implementations quickly becoming available, it’s time to see if some of its features can be used to speed up or improve the scaling of your programs. Look for discussions and demonstrations of more MPI-2 features in upcoming columns. Stay tuned!
> Hey, I'm trying to make a Guess the Movie app, but I have a problem with making it so that when the person answers correctly on one of the movies (each movie is its own scene), it jumps out to a scene called "Level 1", and on this page I need to make the player see that he answered correctly. I thought I'd make an image appear over the button (to the movie), but I don't know how I'm going to make that work. I have tried to make it so that when they answer correctly, 1 point gets added to a score counter, and on scene "Level 1" it reads the score, and if (score = 1) then gameObject.SetActive(this.gameObject), but nothing happens. Maybe I can use PlayerPrefs and GameObject.activeSelf to save the state of the image? :/ I will really appreciate any help, so thanks in advance :)
>
> I have used this:
>
> using UnityEngine;
> using System.Collections;
> using UnityEngine.UI;
>
> public class ImageStatusCheck : MonoBehaviour {
>     void Update () {
>         GameObject ff = GameObject.Find ("ImageStatus");
>         ImageStatus df = ff.GetComponent<ImageStatus> ();
>         if (df.state == 1) {
>             gameObject.SetActive(true);
>         }
>     }
> }

@KristianGrytdal GameObject.Find doesn't work on inactive GameObjects. Nothing will find an inactive GameObject, so you must first save a reference to the GameObject before making it inactive in the first place. This can be done in the editor by declaring a public GameObject myObject field and assigning the object with drag-and-drop, or you can find it in the Start() function (while it is still active) and then use SetActive on the saved reference.
This program is supposed to ask the user to input a number, and the print will read out the factorial. This code is wrong -- how do I fix it? Maybe it doesn't need much added, just has jumbled-up variables? Help! This was given to me, created in the BlueJ software. I know the reader should equal the number, and for (x=number, x<=1;x++), but I'm not sure...

import java.io.*;
import java.util.*;

class Problem3
{
    public static void main(String args[])
    {
        Scanner aReader = new Scanner(System.in);
        System.out.print("Enter an integer you want to find the factorial for ");
        int a = aReader.nextInt();
        int fact=1;
        for (inti=1;1<=a;i++)
            fact=fact*i;
        {
            System.out.println(fact);
        }
    }
}
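For what it's worth, one way to fix it: the posted loop has three problems. `inti` needs a space between the type and the variable, the condition `1<=a` never involves the loop variable `i` (so the loop either never runs or never terminates), and the println block is detached from the loop. A corrected sketch, with the loop pulled into a small helper so it's easy to check:

```java
import java.util.Scanner;

class Problem3 {
    // n! computed iteratively; the condition compares i, not the literal 1.
    static long factorial(int n) {
        long fact = 1;
        for (int i = 1; i <= n; i++) {  // was: for (inti=1;1<=a;i++)
            fact = fact * i;
        }
        return fact;
    }

    public static void main(String[] args) {
        Scanner aReader = new Scanner(System.in);
        System.out.print("Enter an integer you want to find the factorial for ");
        int a = aReader.nextInt();
        System.out.println(factorial(a));  // print once, after the loop finishes
    }
}
```

Using `long` for the accumulator (instead of `int`) just delays overflow a little; for inputs above 20 you'd need BigInteger.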
Funtoo Filesystem Guide, Part 1: Journaling and ReiserFS

Latest revision as of 01:13, January 2, 2015. Next in series: Funtoo Filesystem Guide, Part 2.

What's in Store

The purpose of this series is to give you a solid, practical introduction to Linux's various new filesystems, including ReiserFS, XFS, JFS, GFS, ext3 and others. I want to equip you with the necessary practical knowledge you need to actually start using these filesystems. My goal is to help you avoid as many potential pitfalls as possible; this means that we're going to take a careful look at filesystem stability, performance issues (both good and bad), any negative application interactions that you should be aware of, the best kernel/patch combinations, and more. Consider this series an "insider's guide" to these next-generation filesystems.

So, that's what's in store. But to begin this series, I'm going to diverge from this plan for just one article and prepare you for the journey ahead. I'll cover two topics very important to the Linux development community -- journaling, and the design vision behind ReiserFS. Journaling is very important because it's a technology that we've been anticipating for a long time, and it's finally here. It's used in ReiserFS, XFS, JFS, ext3 and GFS. It's important to understand exactly what journaling does and why Linux needs it. Even if you have a good grasp of journaling, I hope that my journaling intro will serve as a good model for explaining the technology to others, something that'll be common practice as departments and organizations worldwide begin transitioning to these new journaling filesystems. Often, this process begins with a "Linux guy/gal" such as yourself convincing others that it's the right thing to do.
In the second half of this article, we're going to take a look at the design vision behind ReiserFS. By doing so, we're going to get a good grasp on the fact that these new filesystems aren't just about doing the same old thing a bit faster. They also allow us to do things in ways that simply weren't possible before. Developers, keep this in mind as you read this series. The capabilities of these new filesystems will likely affect how you code your future Linux software development projects.

Understanding Journaling: Meta-data

As you well know, filesystems exist to allow you to store, retrieve and manipulate data. And, in order to do this, a filesystem needs to maintain an internal data structure that keeps all your data organized and readily accessible. This internal data structure (literally, "the data about the data") is called meta-data. It is the structure of this meta-data that gives a filesystem its particular identity and performance characteristics.

Normally, we don't interact with a filesystem's meta-data directly. Instead, a specific Linux filesystem driver takes care of that job for us. A Linux filesystem driver is specially written to manipulate this maze of meta-data. However, in order for the filesystem driver to work properly, it has one important requirement; it expects to find the meta-data in some kind of reasonable, consistent, non-corrupted state. Otherwise, the filesystem driver won't be able to understand or manipulate the meta-data, and you won't be able to access your files.

Understanding Journaling: fsck

This is where fsck comes in. When a Linux system boots, fsck starts up and scans all local filesystems listed in the system's /etc/fstab file. fsck's job is to ensure that the to-be-mounted filesystems' meta-data is in a usable state. Most of the time, it is. When Linux shuts down, it carefully flushes all cached data to disk and ensures that the filesystem is cleanly unmounted, so that it's ready for use when the system starts up again.
Typically, fsck scans the to-be-mounted filesystems and finds that they were cleanly unmounted, and makes the reasonable assumption that all meta-data is OK. However, we all know that every now and then, something atypical happens, such as an unexpected power failure or system lock-up. When these unfortunate situations occur, Linux doesn't have the opportunity to cleanly unmount the filesystem. When the system is rebooted and fsck starts its scan, it detects that these filesystems were not cleanly unmounted and makes a reasonable assumption that the filesystems probably aren't ready to be seen by the Linux filesystem drivers. It's very likely that the meta-data is messed up in some way.

So, to fix this situation, fsck will begin an exhaustive scan and sanity check on the meta-data, correcting any errors that it finds along the way. Once fsck is complete, the filesystem is ready for use. Although some recently-modified data may have been lost due to the unexpected power failure or system lockup, since the meta-data is now consistent, the filesystem is ready to be mounted and put to use.

The Problem With fsck

So far, this may not sound like a bad approach to ensuring filesystem consistency, but the solution isn't optimal. Problems arise from the fact that fsck must scan a filesystem's entire meta-data in order to ensure filesystem consistency. Doing a complete consistency check on all meta-data is a time-consuming task in itself, normally taking at least several minutes to complete. Even worse, the bigger the filesystem, the longer this exhaustive scan takes. This is a big problem, because while fsck is doing its thing, your Linux system is effectively offline, and if you have a large amount of filesystem storage, your system could be fsck-ing for half an hour or more. Of course, standard fsck behavior can have devastating results in mission-critical datacenter environments where system uptime is extremely important. Fortunately, there's a better solution.
The Journal

Journaling filesystems solve this fsck problem by adding a new data structure, called a journal, to the mix. This journal is an on-disk structure. Before the filesystem driver makes any changes to the meta-data, it writes an entry to the journal that describes what it's about to do. Then, it goes ahead and modifies the meta-data. By doing so, a journaling filesystem maintains a log of recent meta-data modifications, and this comes in handy when it comes time to check the consistency of a filesystem that wasn't cleanly unmounted. Think of journaling filesystems this way -- in addition to storing data (your stuff) and meta-data (the data about the stuff), they also have a journal, which you could call meta-meta-data (the data about the data about the stuff).

Journaling in Action

So, what does fsck do with a journaling filesystem? Actually, normally, it does nothing. It simply ignores the filesystem and allows it to be mounted. The real magic behind quickly restoring the filesystem to a consistent state is found in the Linux filesystem driver. When the filesystem is mounted, the Linux filesystem driver checks to see whether the filesystem is OK. If for some reason it isn't, then the meta-data needs to be fixed, but instead of performing an exhaustive meta-data scan (like fsck) it instead takes a look at the journal. Since the journal contains a chronological log of all recent meta-data changes, it simply inspects those portions of the meta-data that have been recently modified. Thus, it is able to bring the filesystem back to a consistent state in a matter of seconds. And unlike the more traditional approach that fsck takes, this journal replaying process does not take longer on larger filesystems. Thanks to the journal, hundreds of gigabytes of filesystem meta-data can be brought to a consistent state almost instantaneously.

ReiserFS

Now, we come to ReiserFS, the first of several journaling filesystems we're going to be investigating.
ReiserFS 3.6.x (the version included as part of Linux 2.4+) is designed and developed by Hans Reiser and his team of developers at Namesys. Hans and his team share the philosophy that the best filesystems are those that help create a single shared environment, or namespace, where applications can interact more directly, efficiently and powerfully. To do this, a filesystem should meet the performance and feature needs of its users. That way, users can continue using the filesystem directly rather than building special-purpose layers that run on top of the filesystem, such as databases and the like.

Small File Performance

So, how does one go about making the filesystem more accommodating? Namesys has decided to focus on one aspect of the filesystem, at least initially -- small file performance. In general, filesystems like ext2 and ufs don't do very well in this area, often forcing developers to turn to databases or special organizational hacks to get the kind of performance they need. Over time, this kind of "I'll code around the problem" approach encourages code bloat and lots of incompatible special-purpose APIs, which isn't a good thing.

Here's an example of how ext2 can tend to encourage this kind of programming. ext2 is good at storing lots of twenty-plus k files, but isn't an ideal technology for storing 2,000 50-byte files. Not only does performance drop significantly when ext2 has to deal with extremely small files, but storage efficiency drops as well, since ext2 allocates space in either one or four k chunks (configurable when the filesystem is created). Now, conventional wisdom would say that you aren't supposed to store that many ridiculously small files on a filesystem. Instead, they should be stored in some kind of database that runs above the filesystem. In reply, Hans Reiser would point out that whenever you need to build a layer on top of the filesystem, it means that the filesystem isn't meeting your needs.
If the filesystem met your needs, then you could avoid using a special-purpose solution in the first place. You would thus save development time and eliminate the code bloat that you would have created by hand-rolling your own proprietary storage or caching mechanism, interfacing with a database library, etc.

Well, that's the theory. But how good is ReiserFS' small file performance in practice? Amazingly good. In fact, ReiserFS is around eight to fifteen times faster than ext2 when handling files smaller than one k in size! Even better, these performance improvements don't come at the expense of performance for other file types. In general, ReiserFS outperforms ext2 in nearly every area, but really shines when it comes to handling small files.

ReiserFS Technology

So how does ReiserFS go about offering such excellent small file performance? ReiserFS uses a specially optimized b* balanced tree (one per filesystem) to organize all filesystem data. This in itself offers a nice performance boost, as well as easing artificial restrictions on filesystem layouts. It's now possible to have a directory that contains 100,000 other directories, for example. Another benefit of using a b*tree is that ReiserFS, like most other next-generation filesystems, dynamically allocates inodes as needed rather than creating a fixed set of inodes at filesystem creation time. This helps the filesystem to be more flexible to the various storage requirements that may be thrown at it, while at the same time allowing for some additional space-efficiency.

ReiserFS also has a host of features aimed specifically at improving small file performance. Unlike ext2, ReiserFS doesn't allocate storage space in fixed one k or four k blocks. Instead, it can allocate the exact size it needs. And ReiserFS also stores file tails -- files and file ends smaller than a filesystem block -- directly inside its b*tree, rather than in full blocks elsewhere on the disk. This does two things. First, it dramatically increases small file performance.
Since the file data and the stat_data (inode) information are stored right next to each other, they can normally be read with a single disk IO operation. Second, ReiserFS is able to pack the tails together, saving a lot of space. In fact, a ReiserFS filesystem with tail packing enabled (the default) can store six percent more data than the equivalent ext2 filesystem, which is amazing in itself. However, tail packing does cause a slight performance hit since it forces ReiserFS to repack data as files are modified. For this reason, ReiserFS tail packing can be turned off, allowing the administrator to choose between good speed and space efficiency, or opt for even more speed at the cost of some storage capacity.

ReiserFS truly is an excellent filesystem. In my next article, I'll guide you through the process of setting up ReiserFS under Linux. We'll also take a close look at performance tuning, application interactions (and how to work around them), the best kernels to use, and more.
As far as I understand, the "static initialization block" is used to set values of a static field if it cannot be done in one line. But I do not understand why we need a special block for that. For example, we could declare a field as static (without a value assignment), and then write several lines of code which generate and assign a value to the above-declared static field. Why do we need these lines in a special block like:

static {
    ...
}

The non-static block:

{
    // Do Something...
}

gets called every time an instance of the class is constructed. The static block only gets called once, when the class itself is initialized, no matter how many objects of that type you create.

Example:

public class Test {

    static {
        System.out.println("Static");
    }

    {
        System.out.println("Non-static block");
    }

    public static void main(String[] args) {
        Test t = new Test();
        Test t2 = new Test();
    }
}

This prints:

Static
Non-static block
Non-static block
Programming Language Concepts Using C and C++/Introduction to Programming in C

What follows is an assortment of simple C programs presented to provide exposure to the basics of the language. In order to better achieve this purpose, the examples are kept simple and short. Unfortunately, for programs of such scale, certain criteria become less important than they normally should be. One such criterion is source code portability, which can be ensured by checking programs against standards compliance. This criterion, if not pursued diligently from the start, is very difficult to fulfill. For easier tackling of the problem, many compilers offer some assistance. Although the degree of support and its form may vary, it’s worth taking a look at the compiler switches. For the compilers we will be using in the context of this class, an incomplete list of applicable options is given below.

C Preprocessor

The C preprocessor is a simple macro processor—a C-to-C translator—that conceptually processes the source text of a C program before the compiler proper reads the source program. Generally, the preprocessor is actually a separate program that reads the original source file and writes out a new "preprocessed" source file that can then be used as input to the C compiler. In other implementations, a single program performs the preprocessing and compilation in a single pass over the source file. The advantage of the former scheme is, apart from its more modular structure, the possibility of translators for other programming languages using the preprocessor. The preprocessor is controlled by special preprocessor command lines, which are lines of the source file beginning with the character #. The preprocessor removes all preprocessor command lines from the source file and makes additional transformations on the source file as directed by the commands.
The name of the command must follow the # character.[1] A line whose only non-whitespace character is a # is termed a null directive in ISO C and is treated the same as a blank line.

Defining a Macro

#define

The #define preprocessor command causes a name to become defined as a macro to the preprocessor. A sequence of tokens, called the body of the macro, is associated with the name. If the macro is defined to accept arguments, then the actual arguments following the macro name are substituted for the formal parameters in the macro body. The argument-passing mechanism used is akin to call-by-name. However, one should not forget the fact that text replacement is carried out by the preprocessor, not the compiler. This macro directive can be used in two different ways.

- Object-like macro definitions: #define name sequence-of-tokens?, where ? stands for zero or one occurrence of the preceding entity. Put another way, the body of the macro may be empty. Redefinition of a macro is allowed in ISO C provided that the new definition is the same, token for token, as the existing definition. Redefining a macro with a different definition is possible only if an #undef directive is issued before the second definition.
- Defining macros with parameters: #define name(name1, name2, ..., namen) sequence-of-tokens?. The left parenthesis must immediately follow the name of the macro with no intervening whitespace; otherwise, such a macro definition would be interpreted as an object-like macro whose body starts with a left parenthesis. The names of the formal parameters must be identifiers, no two the same. A function-like macro can have an empty formal parameter list, which is used to simulate a function with no arguments. When a function-like macro call is encountered, the entire macro call is replaced, after parameter processing, by a processed copy of the body. The entire process of replacing a macro call with the processed copy of its body is called macro expansion; the processed copy of the body is called the expansion of the macro call.
Therefore, with the above definition, a = 3 * increment(a); would be expanded to a = 3 * (a + 1);. Now that preprocessing takes place prior to compilation, the compiler proper does not even see the identifier increment in the source code. All it sees is the expanded form of the macro. The problems we faced in the above example may be attributed to the textual nature of the parameter-passing mechanism used, call-by-name: each time the parameter name appears in the text, it is replaced by the exact text of the argument; and each time it is replaced, it is evaluated once again.

Predefined Macros

Undefining a Macro

#undef name

The #undef command causes the preprocessor to forget any macro definition of name. It is not an error to undefine a name that is currently not defined. Once a name has been undefined, it may then be given a completely new definition without error.

Macros vs. Functions

Choosing a macro definition over a function makes sense when

- efficiency is a concern,
- the macro definition is simple and short, or used infrequently in the source, and
- the parameters are evaluated only once.

A macro definition is efficient because preprocessing takes place before runtime; in fact, it takes place even before the compiler proper starts. A function call, on the other hand, is a rather expensive instruction, and it is executed at runtime. In this sense, a macro can be used to simulate inlining of a function. Macro definitions are expected to be simple and short because replacing large bodies of text in the source will give rise to code bloat, especially when the macro is used frequently. Justification of the third requirement was given previously. On the down side, it should be kept in mind that the preprocessor does not do any type-checking on macro parameters.
Code Inclusion

#include <filename>
#include "filename"
#include preprocessor-tokens

The #include preprocessor command causes the entire contents of a specified source file to be processed as if those contents had appeared in place of the #include command. The general intent is that the "filename" form is used to refer to header files written by the programmer, whereas the <filename> form is used to refer to standard implementation files. The third form undergoes normal macro expansion, and the result must match one of the first two forms.

Conditional Preprocessing

#if constant-expression
group-of-lines1
#elif constant-expression
group-of-lines2
...
#elif constant-expression
group-of-linesn
#endif

These commands are used together to allow lines of source text to be conditionally included in or excluded from the compilation. This comes in handy when we need to produce code for different architectures (e.g., VAX and Intel) or in different modes (e.g., debugging mode and production mode).

#ifdef name
#ifndef name

These two commands are used to test whether a name is defined as a preprocessor macro. They are equivalent to #if defined(name) and #if !defined(name), respectively. Notice that #if name and #ifdef name are not equivalent. Although they work exactly the same way when name is undefined, the parity breaks when name is defined to be 0. In that case, the former will be false while the latter will be true.

defined name
defined(name)

The defined operator can be used only in #if and #elif expressions; it is not a preprocessor command but a preprocessor operator.

Emitting Error Messages

#error preprocessor-tokens

The #error directive produces a compile-time error message that will include the argument tokens, which are subject to macro expansion.

Compile-Time Switches

In addition to using the #define command, we can use a compile-time switch to define a macro.
char Data Type

Problem: Write a program that prints out the characters 'a'..'z', 'A'..'Z', and '0'..'9' along with their encoding.

The following preprocessor directive pulls the contents of the file named stdio.h into the current file. Once this file is included, its contents are interpreted by the preprocessor. Included files generally contain declarations shared among different applications. Constant declarations and function prototypes are examples of these. In this case, we must include stdio.h in order to pull in the prototype of the printf function. A function prototype consists of the function return type, the name of the function, and the parameter list. It describes the interface of the function: it details the number, order, and types of the parameters that must be provided when the function is called, and the type of value that the function returns. A function prototype helps the compiler ensure correct usage of a particular function. For instance, given the declaration in stdio.h, one cannot pass an int as the first argument to the printf function. Time and again you will see the words prototype and signature used interchangeably. This, however, is not correct. The signature of a function is its list of argument types; it does not include the function return type.

#include <stdio.h>

All runnable C programs must contain a function named main. This function serves as the entry point to the program and is called from within the C runtime, which is executed after the executable is loaded, as part of the initialization code. The system software needed to copy a program from secondary storage into main memory so that it is ready to run is called a loader. In addition to bringing the program into memory, it can also set protection bits, arrange for virtual memory to map virtual addresses to disk pages, and so on.
The formal parameter list of the following main function consists of the single keyword void, which means that it does not take any parameters at all. You will at times see C code with int main() or main() written instead of int main(void). In this context all three serve the same purpose, although the first two should be avoided. A function with no return type in its declaration/definition is assumed to return a value of type int. A function prototype with an empty formal parameter list is an old-style declaration that tells the compiler about the presence of a function that can take any number of arguments of any type. It also has the unasked-for side effect of turning off prototype checking from that point on. So, one should avoid such usages.

int main(void) {

Although we will manipulate characters, the variable used to hold the characters, ch, is declared as an int. The reasoning for this is as follows: Absence of the sign qualifier (signed or unsigned) in the original language design gave compiler implementers the freedom to interpret char as a type comprising values in [0..255]—because characters can be indexed by nonnegative values only—or as a type comprising values in [–128..127]—because it is an integer type represented in a single byte. These two views together basically limit the portable range of char to [0..127]. Due to the lack of exceptions in C, most functions dealing with characters and character strings need to represent exceptional situations, such as end-of-file, as unlikely return values. This means, in addition to the legitimate characters, we should be able to encode the exceptional situations as different values. Assuming ASCII, this implies a representation that can hold 128 + n signed values, where n is the number of exceptional conditions to be dealt with. When taken together with the conclusion drawn in the previous paragraph, it can be seen that we need a representation that is larger than type char.
Therefore, you'll often see an int variable being used to hold values of type char. Encountering a character constant, the C compiler replaces it with an integer constant that corresponds to the order of the character in the encoding table. For instance, 'a' in ASCII is replaced with 97, which is an integer constant of type int. Had we used char instead of int, our program would still work correctly. This is because we did not have to deal with any exceptional conditions in the program and all int constants used are within the limits of the char representation. That is, all narrowing implicit conversions, such as the one that takes place in the initialization statement of the for loop, are value-preserving.

int ch;

for(ch = 'a'; ch <= 'z'; ch++)

The printf function is used to format and send the formatted output to the standard output file stdout, which by default is the screen. It is a variadic function; that is, it takes a variable-length argument list. The first argument is taken to be a format control string, which is used to figure out the type and number of the arguments. This is accomplished through the use of special character sequences starting with %. When encountered, such a sequence is replaced with the corresponding actual parameter, if the actual parameter can be used in the context implied by the character sequence. For instance, '%c' in the following line means that the corresponding argument must be interpretable as a character. Likewise, '%d' is a placeholder for a decimal number. Note that printf actually returns an int. Not assigning this return value to some variable means that it is ignored.
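The claim above, that a character constant such as 'a' is really an integer constant of type int, can be checked directly. Note that this holds in C but not in C++, and that the value 97 assumes an ASCII-based execution character set, as the text does.

```c
/* In C a character constant such as 'a' has type int, so its size is
   that of an int, not that of a char. */
static int char_constant_is_int(void)
{
    return sizeof('a') == sizeof(int);
}

/* Under ASCII, 'a' denotes the integer constant 97. */
static int value_of_a(void)
{
    return 'a';
}
```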
printf("Encoding of '%c' is %d\n", ch, ch);
printf("Press any key to continue\n");
getchar();
for(ch = 'A'; ch <= 'Z'; ch++)
printf("Encoding of '%c' is %d\n", ch, ch);
printf("Press any key to continue\n");
getchar();
for(ch = '0'; ch <= '9'; ch++)
printf("Encoding of '%c' is %d\n", ch, ch);
return(0);
} /* end of int main(void) */

Compiling and Running a Program in the Linux Command Line

Given that this program is saved under the name Encoding.c, it can be compiled (and linked) using

gcc -o AlphaNumerics Encoding.c↵

gcc invokes the GNU C/C++ compiler driver, which first has the C preprocessor process Encoding.c and passes the transformed source file to the compiler proper. The output of the compiler proper, an assembly program, is then passed to the assembler. The object code file assembled by the assembler is finally handed to the linker, which links it with the standard C library and stores the executable in a disk file whose name is provided with the -o option. Note that you do not have to tell the driver to link with the standard C library. This is a special case, though. With other libraries you must tell the compiler driver the libraries and object files to be linked with. This whole scheme basically creates a new external command named AlphaNumerics. Issuing the command

./AlphaNumerics↵

at the command line will eventually cause the loader to load the program from secondary storage into memory and run it.

Compiling and Running a Program Using Emacs

emacs &↵

This command will put you in the Emacs development environment. Click on File→Open... and pick Encoding.c from the file list. This will open up a new C-mode buffer within the current window. Note that the second line from the bottom shows (C Abbrev), which means Emacs has identified your source code as a C program. Next, click on Tools→Compile►→Compile.... This will prompt you to enter the command needed to compile (and link) the program.
This prompt will be printed in an area called the minibuffer, which is normally the very last line of the frame. Erase the default selection and write

gcc -o AlphaNumerics.exe Encoding.c↵

As you hit the enter key, you will see a *compilation* buffer pop up that lets you know how the compilation process is proceeding. Hoping you don't make any typos and everything goes smoothly, the next thing we will do is run the executable. In order to do this, click on Tools→Shell►→Shell. This will open a restricted shell inside a Shell-mode buffer, from where you can run your executables. Type in

./AlphaNumerics.exe↵

within this buffer and you will see the same output as you saw in the previous section. In case you end up with compilation errors, clicking on an error line in the *compilation* buffer takes you to the relevant source line. If you want to go back to the source code and make some changes, click on Buffers→Encoding.c. When you are through with the changes, you can compile the source code once again by clicking on Tools→Compile►→Repeat Compilation. This will recompile Encoding.c using the command entered above. However, in case you want to modify the command, click on Tools→Compile►→Compile... and proceed as you did before.

Non-portable Version

Assuming ASCII, you may be tempted to replace all character constants in the program with the corresponding integer values. This is strongly discouraged since it will make the program non-portable.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
int ch;

The following line contains embedded constants, which cause the code to be non-portable. What if we wanted to move our code to some environment where EBCDIC is used to encode characters? So, one should avoid embedding such implementation-dependent features into programs and let the compiler do the dirty work.
Note also that, since the same action is taken by the compiler, the probable motivation—speeding up the program—for replacing character literals with integer constants is not well-founded, either.

for(ch = 97; ch <= 122; ch++)
printf("Encoding of %c is %d\n", ch, ch);
printf("Press any key to continue\n");
getchar();
for(ch = 65; ch <= 90; ch++)
printf("Encoding of %c is %d\n", ch, ch);
printf("Press any key to continue\n");
getchar();
for(ch = 48; ch <= 57; ch++)
printf("Encoding of %c is %d\n", ch, ch);

The exit function causes the program to terminate, returning the value passed to it as the result of executing the program. The same effect can be achieved by returning an integer value from the main function. By convention, a value of 0 signifies successful termination while a nonzero value is used to signify abnormal termination.

exit(0);
} /* end of int main(void) */

Compiling and Running a Program Using MS Visual C/C++

First of all, make sure you execute vcvars32.bat, which you can find in the bin subdirectory of the MS Visual C/C++ directory. This will set some environment variables you need for correct operation of the command-line tools.

cl /FeAscii_Enc.exe Ascii_Encoding.c↵

Similar to gcc, this command will go through the preprocess, compile, assemble, and link phases. Upon successful completion, we can run our program simply by issuing the name of the executable file at the command line. The operating system shell will recognize the resultant executable as an external command and get the loader to load Ascii_Enc.exe into memory and eventually run it.

Ascii_Enc↵

Compiling and Running a Program Using DJGPP-rhide

Start a new DOS box and enter the following command.

rhide Ascii_Encoding.c↵

This will start a DOS-based IDE that you can use to develop projects in different programming languages. Choose Compile→Make or Compile→Build All or Compile→Compile followed by Compile→Link.
This will compile the source code and link the resulting object module with the C runtime. You can now run the executable by clicking on Run→Run or by exiting to DOS by choosing File→DOS Shell and entering the filename at the prompt. (In case you see unexpected behavior from rhide, make sure the file is not too deep inside the directory hierarchy and the names do not contain special characters such as spaces.) If you choose the second option, you can return to rhide by typing exit at the command prompt.

Macros

Problem: Write a program that prints a greeting in English or Turkish. The language should be chosen by a compile-time switch. The name(s) of the person(s) to be greeted is passed to the program through a command-line argument.

#include <stdio.h>

The following line checks to see whether a macro named TURKISH has already been defined or not. Definition of this macro, or any macro for that matter, can be made within a file or at the command line as a compiler switch. In this example, there is no such definition in the file or any included file. So, absence of such a macro as a compiler switch will cause control to jump to the #else part, and the statements between #else and #endif are included in the source file. Given that the definition is made at the command line, the statements between #if and #else are included in the source file. Whichever section of code is included, one thing is certain: either the part between #if and #else or the part between #else and #endif is included, not both; there is no danger of duplicate definition. Notice the peculiar way of naming the variables. This is the so-called Hungarian notation. By prefixing specially interpreted characters to the identifier name, this method aims at providing fellow programmers/maintainers with as much context information as possible.
Without any reference to the definition of an identifier, which can be pages apart or even in a different source file that is inaccessible to us, we can now garner the required information simply by interpreting the prefix. For instance, szGreeting is meant to be a string ending with zero (that is, a C-style string).

#if defined(TURKISH)
char szGreeting[] = "Gunaydin,";
char szGreetStranger[] = "Her kimsen gunaydin!";
char szGreetAll[] = "Herkese gunaydin!";
#else
char szGreeting[] = "Good morning,";
char szGreetStranger[] = "Good morning, whoever you might be!";
char szGreetAll[] = "Good morning to you all!";
#endif

C does not allow the programmer to overload function names. The main function is an exception to this: it comes in two flavors. The first one, which we have already seen, does not take any arguments. The second one permits the user to pass command-line arguments to the application. These arguments are passed in a vector of pointers to character. If the user wants the arguments to be interpreted as belonging to some other data type, the application code has to do some extra processing.

In C, there is no standard way of telling the size of an array. One should either use a convention or hold the size in another variable. Character strings in C, which may be considered arrays of characters, are an example of the former. Here, a sentinel value (the null character) is used to signify the end of the string. For most instances, such a scheme turns out to be impossible or infeasible. In that case we need to hold the size (or length) information in a separate variable. Hence the need for a second variable.

The program name is the first component of the argument vector. So, if the argument count is one, it means the user has not passed any arguments at all.

int main(int argc, char *argv[]) {
switch (argc) {

Execution of a break statement will terminate the innermost enclosing loop (while, for, or do-while) or switch statement.
That is, it will basically jump to the line following the loop or switch statement. In our case, control will be transferred to the return statement. Newcomers to C from a Pascal-based background must be careful about the nature of the switch statement: Unlike the case statement, whose branches are mutually exclusive, switch lets more than one branch be executed. If that's not what you want, you must delimit the branches with break statements, as was done below. Without the break statements in place, an argument count of one would cause all three messages to be printed. Similarly, an argument count of two would print the anonymous message together with the appropriate one.

case 1: printf("%s\n", szGreetStranger);
break;
case 2: printf("%s %s\n", szGreeting, argv[1]);
break;
default: printf("%s\n", szGreetAll);
} /* end of switch (argc) */
return(0);
} /* end of int main(int, char**) */

Using Compiler Switches

Saving this C program as Greeting.c and issuing

gcc Greeting.c -DTURKISH -o Gunaydin↵

at the command line in Linux will produce an executable named Gunaydin. This executable will not include object code for any of the statements between #else and #endif. Similarly, if we compile the program using

gcc Greeting.c -DENGLISH -o GoodMorning↵

we will get an executable named GoodMorning with the statements between #if defined and #else excluded. Note that with no macros defined at the command line, the English version of the greetings will be included. One should not mistake compile-time switches for command-line arguments. The former are passed to the preprocessor and used to alter the code to be compiled, while the latter are passed to the running program to alter its behavior. Provided that we build our executables as shown above,

./Gunaydin Tevfik↵

will produce as output

- Gunaydin, Tevfik

whereas

./GoodMorning Tevfik Ugur↵

will produce

- Good morning to you all!
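The argc dispatch at the heart of the greeting program can be factored into a tiny helper, sketched below. The function greeting_kind and its return strings are illustrative only, not part of the program above.

```c
#include <string.h>

/* argv[0] is the program name, so an argc of 1 means "no arguments". */
static const char *greeting_kind(int argc)
{
    switch (argc) {
    case 1:  return "stranger"; /* only the program name was given   */
    case 2:  return "one";      /* program name plus a single name   */
    default: return "all";      /* two or more names: greet everyone */
    }
}
```

Note that, unlike the program above, every case here ends in a return, so no break statements are needed to prevent fall-through.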
Pointer Arithmetic

Problem: Write a program to demonstrate the relationship between pointers and addresses.

#include <stdio.h>

int string_length(char *str) {
int len = 0;

The third part in the following for loop increments the variable str, which is a pointer to char. If we fail to realize that an address and a pointer are two different things, we might be tempted to think that what we do is simply increment the address value. But this would be utterly wrong. Although, for pedagogical purposes, we may assume a pointer to be an address, they are not one and the same thing. When we increment a pointer, the address value held in it is incremented by as much as the size of the type of value pointed to by the pointer. However, as far as pointer to char is concerned, incrementing a pointer and an address has the same effect. This is due to the fact that a char value is stored in a single byte.

Definition: A pointer is a variable that contains the address of a variable, the content of which is interpreted to belong to a certain type.[2]

Note that although a handle on an object may be regarded as a "kind of" pointer, they are two different concepts. Along with other differences, such as support for inheritance, handles—unlike pointers and addresses—do not take part in arithmetic operations.

for(; *str != '\0'; str++)
len++;
return len;
} /* end of int string_length(char *) */

long sum_ints(int *seq) {
long sum = 0;

The next line is an example showing the difference between a pointer and an address. Here, incrementing seq will increase the value of the address held in it by the size of an int.

for (; *seq > 0; seq++)
sum += *seq;
return sum;
} /* end of long sum_ints(int *) */

int main(void) {

The next line creates and initializes an array of chars. The compiler computes the size of this array. What the compiler does is basically count the number of characters between the double quotes and reserve memory accordingly. Note that the compiler automatically appends a null character, too.
However, if we choose to use aggregate initialization, we have to be a bit more cautious.

char string[] = {'G', 'o', 'k', 'c', 'e'};

will create an array of 5 characters, not 6. There won't be any null character at the end of the character array. If that's not what we really want, we should either add the null character in the initialization, as in

char string[] = {'G', 'o', 'k', 'c', 'e', '\0'};

or revert back to the former style. In both cases, the array is allocated in the runtime stack. Note also the use of escape sequences for embedding double quotes in the character string. Since it is used to flag the end of the string literal, the '"' character cannot be inserted into one directly. The way out of this is escaping '"', which tells the compiler that the following '"' does not end the character string and should be literally embedded in the string.

char string[] = "Kind of \"long\" string";
int i;
int sequence[] = {9, 1, 3, 102, 7, 5, 0};

The third argument of the next printf statement is a call to a function that returns an int. What's more interesting about this call is the way its argument is passed. Although we seem to be passing an array, what is really passed behind the scenes is the address of the first component of the array; that is, &string[0]. This is done regardless of the array size. The advantages of such a scheme are:

- All we need to pass is a single pointer, not an entire array. As the array size grows, we save more on memory.
- Now that we pass a single pointer, we avoid copying the entire array. This means saving both time and memory.
- The changes we make in the array are permanent; the caller sees the changes made by the callee. This is due to the fact that it is the pointer that is passed by value, not the array. So, although we cannot change the address of the first component of the array, we can modify the contents of the array.

The down side of this is that you need to make a local copy of the array if you want the array to remain the same between calls.
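The scaling rule stated earlier—incrementing a pointer advances the address held in it by the size of the pointed-to type—can be demonstrated with a short sketch (function names are illustrative). The byte distance is measured by casting to char *, whose arithmetic works in single bytes.

```c
/* Byte distance covered by one increment of an int pointer. */
static int int_stride(void)
{
    int arr[2] = {0, 0};
    int *p = arr;
    return (int)((char *)(p + 1) - (char *)p);  /* == sizeof(int) */
}

/* For char pointers, pointer arithmetic and address arithmetic agree. */
static int char_stride(void)
{
    char s[2] = {'a', '\0'};
    char *p = s;
    return (int)((p + 1) - p);                  /* always 1 */
}
```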
printf("Length of \"%s\": %d\n", string, string_length(string));
printf("Sum of ints ( ");
for (i = 0; sequence[i] != 0; i++)
printf("%d ", sequence[i]);
printf("): %ld\n", sum_ints(sequence));
return(0);
} /* end of int main(void) */

Bit Manipulation

Originally a systems-programming language, C offers assistance for manipulating data on a bit-by-bit basis. This comprises bitwise operations and a facility to define structures with bit fields. Not surprisingly, this aspect of the language is utilized in machine-dependent applications such as real-time systems and system programs (device drivers, for instance), where running speed is a primary concern. Given the sophisticated optimizations done by today's compilers and the non-portable nature of bit manipulation, in the presence of an alternative their use in general-purpose programming should be avoided.

Bitwise Operations

Problem: Write functions to extract the exponent and mantissa of a float argument.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define TOO_FEW_ARGUMENTS 1
#define TOO_MANY_ARGUMENTS 2

The specification of C does not standardize (with the exception of type char) the size of the integer types. The only guarantee given by the language specification is the following relation:

sizeof(short) ≤ sizeof(int) ≤ sizeof(long) ≤ sizeof(long long)

On most machines an int takes up four bytes, whereas on some it takes up two bytes. A manifestation of C's system-programming orientation, this variance is due to the fact that the size of int was meant to be equal to the word size of the underlying architecture. As a consequence of this, if we take it for granted that four bytes are used to represent an int value, we may occasionally see our programs producing nonsensical output. This will be due to the fact that an int value representable in four bytes will probably give rise to an overflow when represented in two bytes. The following #if-#else directive is used to circumvent this problem.
UINT_MAX, defined in limits.h, holds the largest possible unsigned int value. On machines where an int value is stored in two bytes, this will be equal to 65535. Otherwise, it will be something else. So, if UINT_MAX happens to be 65535, we can say an int is represented in two bytes. If not, it is represented in four bytes.

#if UINT_MAX==65535
typedef unsigned long BIG_INT;
#else
typedef unsigned int BIG_INT;
#endif

char *usage = "USAGE: Extract_Fields float_number";

The next function pulls out the exponent part of a particular float variable. This is done by isolating the exponent part and shifting the resulting value to the right. We use the bitwise-and operator (binary &) to isolate the number and shift-right (>>) this isolated exponent (in a way, right-adjusting the number). Note that the result of right-shifting a negative integer—that is, a number with a bit pattern having 1 in the most significant bit—is not fixed by the language. In other words, the behavior is implementation dependent. While in some implementations the sign bit is preserved and right-shifting therefore effectively sign-extends the number, in others this bit is replaced with a 0.[3]

BIG_INT exponent(float num) {
float *fnump = &num;
BIG_INT *lnump = (BIG_INT *) fnump;

A partial image of memory at this point is given in Figure 3. If, say, num has -1.5 as its value, it will be interpreted as -1.5 when seen through fnump. That is, *fnump will be -1.5. However, if seen through lnump, it will be interpreted to contain 3,217,031,168! The difference is due to different ways of looking at the same thing: While *fnump sees num as a float value compliant with IEEE 754/854, *lnump sees the same four bytes of memory as a plain unsigned binary integer (of type unsigned long or unsigned int, depending on the machine the program is running on).
(insert figure here)

The following line first uses a bit mask to isolate the exponent part and then shifts it to the right so that the exponent bits occupy the least significant bit positions.

(insert figure here)

return((*lnump & 0x7F800000) >> 23);
} /* end of BIG_INT exponent(float) */

The next function extracts the mantissa part of the number. It does this simply by masking out the sign and exponent parts of the number. This is accomplished by the bitwise-and operator in the return expression. Note that we do not need to shift the number since its mantissa is made up of the lowest 23 bits of the representation.

BIG_INT mantissa(float num) {
float *fnump = &num;
BIG_INT *lnump = (BIG_INT*) fnump;
return(*lnump & 0x007FFFFF);
} /* end of BIG_INT mantissa(float) */

The following function tries to make sense of the command-line arguments. The number of such arguments being one means the user has not passed any numbers. We display an appropriate message and exit the program. If the argument count is two, the second one is taken to be the number whose components we will extract. Otherwise, the user has passed more arguments than we can deal with; we simply let her know about it and exit the program.

In case of failure it is recommended practice to exit with a nonzero value. This may turn out to be of great help when programs are run in coordination by means of a shell script. If a program depends on the successful completion of another, the script needs a reliable way of checking the result of the previous programs. A program exiting with non-descriptive values is of no help in such situations. We have to make sure that when we return with zero, it really means successful execution. Otherwise there is something wrong, which may further be elucidated by different exit codes.

The strtod function is used to convert a numeric string, which may also contain -/+ and e/E, to a floating-point number of type double.
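The -1.5 example and the two masks above can be verified with a short sketch. Instead of the pointer cast used in the text (which runs afoul of modern strict-aliasing rules), this version copies the bytes with memcpy; the bit layout inspected is the same. It assumes IEEE 754 single-precision floats and the 32-bit unsigned type from stdint.h.

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret the bytes of a float as a 32-bit unsigned integer. */
static uint32_t float_bits(float num)
{
    uint32_t bits;
    memcpy(&bits, &num, sizeof bits);  /* copy bits, don't convert value */
    return bits;
}

/* Biased exponent: bits 23..30, right-adjusted after masking. */
static uint32_t exponent_of(float num)
{
    return (float_bits(num) & 0x7F800000u) >> 23;
}

/* Fraction: bits 0..22; already right-adjusted, so no shift needed. */
static uint32_t mantissa_of(float num)
{
    return float_bits(num) & 0x007FFFFFu;
}
```

For -1.5f the bit pattern is 0xBFC00000, i.e. 3,217,031,168 as an unsigned integer: sign 1, biased exponent 127, fraction 0x400000.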
While it is easy to figure out the function of the first argument, which is a pointer to the string to be converted, one cannot say the same thing about the second argument. Upon return from strtod, the second argument will hold a pointer to the character following the converted part of the input string. Thanks to this, we can process the rest of the string using strtod and its friends.

- gcc -o Parse.exe Parse.c↵
- # Double quotes are needed to treat the string as a single argument. They are not part of the argument string and will be stripped before it is passed to main!
- Parse "Selin Yardimci: 80, 90, 100"↵
- Name: Selin Yardimci
- Midterm: 80 Final: 90 Assignment: 100

float number(int argc, char *argv[]) {
switch (argc) {
case 1: printf("Too few arguments. %s\n", usage);
exit(TOO_FEW_ARGUMENTS);
case 2: return((float) strtod(argv[1], NULL));
default: printf("Too many arguments. %s\n", usage);
exit(TOO_MANY_ARGUMENTS);
} /* end of switch (argc) */
} /* end of float number(int, char **) */

Observe the simplicity of the main function. We just state what the program does; we do not delve into the details of how it does it. Reading the main function, the maintainer of this code can easily figure out what it claims to do. In case she needs more detail, she has to check the function bodies. Depending on the complexity of the program, these functions will provide full details of the implementation or defer this provision to another function. In a complex program this deferment can be extended to multiple levels. However simple or complex the problem at hand might be, and whichever paradigm we use, we start by answering the question "What" and then move (probably in degrees) on to answering the question "How".
In other words, we first analyze the problem and come up with a design for the solution, and then provide an implementation.[4] Our code should reflect this: it should first expose the answer to "what" (interface) and then (to the interested parties) the answer to "how" (implementation).

int main(int argc, char *argv[]) {
float num;
printf("Exponent of the number is: %x\n", exponent(num = number(argc, argv)));
printf("Mantissa of the number is: %x\n", mantissa(num));
return(0);
} /* end of int main(int, char**) */

Bit Fields

Problem: Using bit fields, implement the previous problem.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
...

Bit fields are defined similarly to ordinary record fields. The only difference between the two is the width specification following the bit field. According to the definition below, fraction, exponent, and sign occupy twenty-three, eight, and one bit, respectively. However, the rest is pretty much up to the compiler implementation. To start with, as is manifested by the preprocessor directive, the memory layout of a variable of type struct SINGLE_FP depends on the processor endianness. The manner of bit packing is also not guaranteed to be the same among different hardware. These two factors effectively limit the use of bit fields to machine-dependent programs.

struct SINGLE_FP {
#ifdef BIG_ENDIAN /* e.g. Motorola */
unsigned int sign : 1;
unsigned int exponent : 8;
unsigned int fraction : 23;
#else /* if LITTLE_ENDIAN, e.g. Intel */
unsigned int fraction : 23;
unsigned int exponent : 8;
unsigned int sign : 1;
#endif
};

BIG_INT exponent(float num) {
float *fnump = &num;
struct SINGLE_FP *lnump = (struct SINGLE_FP *) fnump;

There are two field access operators in C: . and ->. The former is used to access the fields of a structure, whereas the latter is used to access the fields of a structure through a pointer to the structure.
Now that lnump is defined to be a pointer to the SINGLE_FP structure, the bit fields can be accessed through it by means of ->. Observe that structure->field is equivalent to (*structure).field. For example, lnump->exponent is equivalent to (*lnump).exponent.

return(lnump->exponent);
} /* end of BIG_INT exponent(float) */

BIG_INT mantissa(float num) {
float *fnump = &num;
struct SINGLE_FP *lnump = (struct SINGLE_FP *) fnump;
return(lnump->fraction);
} /* end of BIG_INT mantissa(float) */

float number(int argc, char *argv[]) { ... }

int main(int argc, char *argv[]) { ... }

Static Local Variables (Memoization)

Problem: Using memoization, write a low-cost recursive factorial function.

Reminiscent of caching, memoization can be used to speed up programs by saving the results of computations that have already been made. The difference between the two lies in their scopes. When we speak of caching, we speak of a system- or application-wide optimization technique. On the other hand, memoization is a function-specific technique. When a request is received, we first check to see whether we can avoid computing the result from scratch. Otherwise, the computation is carried out from scratch and the result is added to our database. In other words, we trade space for time and save some precious computer time. Applying this technique to the problem at hand, we will store the set of already computed factorials in a static local array. Now that any changes made to this array are persistent between different calls, the base condition for the recursion becomes reaching an already computed factorial, not an argument value of 1 or 0. That is, we have

#include <stdio.h>

#define MAX_COMPUTABLE_N 20
#define TRUE 1

unsigned long long factorial(unsigned char n) {

As soon as the program starts running, the following initializations will have taken effect.
As a matter of fact, since static local variables are allocated in the static data region, initial values for them are present in the disk image of the executable. Mark the initializer of the array used for storing the already made computations. Although it has MAX_COMPUTABLE_N components only two values are provided in the initializer. The remaining slots are filled with the default initial value 0. In other words, it is equivalent to static unsigned long long computed_factorials[MAX_COMPUTABLE_N] = {1, 1, 0, 0, ..., 0}; Note that we could provide more initial values to avoid more of the initial cost. static unsigned char largest_computed = 1; static unsigned long long computed_factorials[MAX_COMPUTABLE_N] = {1, 1}; If we have already computed the factorial of a number that is equal to or greater than the current argument value, we retrieve this value and return it to the caller. Note that the receiver of the returned value can be either the main function or another invocation of the factorial function. In the second case, we do part of the computation and use some partial result from the previous computations. if (n <= largest_computed) return computed_factorials[n]; printf("N: %d\t", n); Once a new value is computed, it is stored in our array and the largest argument value for which a factorial has been computed is appropriately updated to reflect this fact. computed_factorials[n] = n * factorial(n - 1); if (largest_computed < n) largest_computed = n; return computed_factorials[n]; } /* end of unsigned long long factorial(unsigned char) */ int main(void) { short n; printf("Enter a negative value for termination of the program...\n"); do { printf("Enter an integer in [0..20]: "); The h preceding the conversion letter (d) specifies that the input is expected to be a short int. Similarly, one can use l to specify a long int. scanf("%hd", &n); if (n < 0) break; if (n > 20) { printf("Value out of range!!!
No computations done.\n"); Like break, continue is used to alter the flow of control inside loops; it terminates the current iteration of the innermost enclosing while, do-while, or for statement. In our case, control will be transferred to the controlling expression of the do-while statement and, the condition being TRUE, on to the beginning of the next iteration. continue; } /* end of if (n > 20) */ printf("%d! is %llu\n", n, factorial((unsigned char) n)); } while (TRUE); return(0); } /* end of int main(void) */ - gcc -o MemoizedFactorial.exe Memoize.c↵ - MemoizedFactorial↵ - Enter a negative value for termination of the program - Enter an integer in [0..20]: 1 - 1! is 1 - Enter an integer in [0..20]: 5 - N:5 N:4 N:3 N:2 5! is 120 - Enter an integer in [0..20]: 5 - 5! is 120 - Enter an integer in [0..20]: 10 - N:10 N:9 N:8 N:7 N:6 10! is 3628800 - Enter an integer in [0..20]: -1 File Manipulation Problem: Write a program that strips comments from a C program. #include <ctype.h> #include <errno.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #define BOOL char #define FALSE 0 #define TRUE 1 #define MAX_LINE_LENGTH 500 #define NORMAL_COMPLETION 0 #define CANNOT_OPEN_INPUT_FILE 11 #define CANNOT_OPEN_OUTPUT_FILE 12 #define TOO_FEW_ARGUMENTS 21 #define TOO_MANY_ARGUMENTS 22 When an identifier definition is qualified with const, it is taken to be immutable. Such an identifier cannot appear as an lvalue. This means we cannot declare an identifier to be constant and subsequently assign a value to it; a constant must be provided with a value at its point of declaration. In other words, a constant must be initialized. The initializer of a global constant can contain nothing but subexpressions that can be evaluated at compile-time. A local constant, on the other hand, can contain run-time values. For instance, will produce the following output: - i: 15 - i: 30 - i: 18 - i: 39 This goes to show that constants are created for each function invocation and they can get different values. But they still cannot be modified throughout the function call.
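The point about run-time initializers for local constants can be sketched as follows. This is an illustrative example with a made-up function name, not the book's original listing: each invocation creates a fresh, immutable copy of the constant.

```c
#include <assert.h>

/* Sketch: a local const may take a run-time initializer; each call
   creates a fresh, immutable copy. (Hypothetical example, not the
   book's original listing.) */
int scaled(int n)
{
    const int i = 3 * n;   /* legal: run-time initializer for a local const */
    /* i = 0;                 would not compile: i cannot be an lvalue */
    return i;
}
```

Calling scaled(5) and scaled(10) gives the constant the values 15 and 30 in the respective invocations; within each call the constant cannot be modified.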
const char file_ext[] = ".noc"; const char temp_file[] = "TempFile"; const char usage[] = "USAGE: StripComments filename"; In C, all identifiers must be declared before they are used! This includes function names, variables, and structure tags. The corresponding definition can be provided after the declaration in the same file or in a different one. While there can be more than one declaration, there can be only one definition. The following declarations are prototypes of functions whose definitions are provided later in the program. Note the parameter names do not agree with the parameter names provided in the definition. As a matter of fact, one does not even have to provide the names. But, you still have to list the parameter types so as to facilitate type checking: It is an error if the definition of a function or any uses of it do not agree with its prototype. The use of prototypes can be avoided by rearranging the order of definitions. For this example, putting the main function at the end would remove the need for the prototypes. char* filename(int argumentCount, char* argumentVector[]); void trimWS(const char *filename); void strip(const char *filename); Comma-separated expressions are considered a single expression whose result is the result returned by the last expression. Evaluation of a comma-separated expression is strictly from left to right; the compiler cannot change this order. int main(int argc, char* argv[]) { char *fname = filename(argc, argv); (strip(fname), trimWS(fname)); return(NORMAL_COMPLETION); } /* end of int main(int, char**) */ char* filename(int argc, char* argv[]) { A more general version of printf, fprintf performs output formatting and sends the output to the stream specified as the first argument. We can therefore see printf as an equivalent form of the following: fprintf(stdout, "...", ...); /* read it as "file printf..." */ Another friend of printf that you may want to consider in output formatting is the sprintf function.
This function, instead of writing the output to some media, causes it to be stored in a string of characters. switch (argc) { case 1: fprintf(stderr, "No file name passed!\n %s\n", usage); exit(TOO_FEW_ARGUMENTS); case 2: return(argv[1]); default: fprintf(stderr, "Too many arguments!\n %s\n", usage); exit(TOO_MANY_ARGUMENTS); } /* end of switch(argc) */ } /* end of char* filename(int, char**) */ void trimRight(char *line) { int i = strlen(line) - 1; while (i >= 0 && isspace(line[i])) i--; line[i + 1] = '\n'; line[i + 2] = '\0'; } /* end of void trimRight(char*) */ void trimWS(const char *infilename) { char next_line[MAX_LINE_LENGTH]; char outfilename[strlen(infilename) + strlen(file_ext) + 1]; FILE *infile, *outfile; BOOL empty_line = FALSE; The following statement tries to open a file in reading mode. It maps the physical file, whose name is held in the variable temp_file, to the logical file named infile. If this attempt results in success, every operation you apply on the logical file will be executed on the physical file. The infile variable can be thought of as a handle on the real file. The mapping between the handle and the physical file is not one-to-one. Just like more than one handle can refer to the same object, one can have more than one handle on the same physical file. There is no problem as long as all handles are used in read mode. But things get ugly if different handles simultaneously try to modify the same file. [The keywords are operating systems, concurrency, and synchronization.] If the open operation fails we cannot get a handle on the physical file. That is reflected in the value returned by the fopen function: NULL. A pointer having a NULL value means that we cannot use it for further manipulation. All we can do is to test it against NULL. So, we check for this condition first. Unless it is NULL we proceed; otherwise we write something about the nature of the exceptional condition to the standard error file stderr and exit the program.
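The remark above about multiple read-mode handles on the same physical file can be sketched as follows. The helper and file name are hypothetical; since both handles are opened for reading only, there is no conflict, and each keeps its own position.

```c
#include <assert.h>
#include <stdio.h>

/* Sketch: two independent read-mode handles on the same physical file.
   Each handle keeps its own position, so both see the file from the
   beginning. (Hypothetical helper; not part of the book's program.) */
int first_bytes_match(const char *name)
{
    FILE *a = fopen(name, "r");
    FILE *b = fopen(name, "r");
    int result;

    if (a == NULL || b == NULL) {
        if (a) fclose(a);
        if (b) fclose(b);
        return -1;                    /* could not obtain both handles */
    }
    result = (fgetc(a) == fgetc(b));  /* each handle reads its own byte 0 */
    fclose(a);
    fclose(b);
    return result;
}
```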
Just like the standard output, the standard error file is by default mapped to the screen. So, why do we write to stderr instead of stdout? The answer is, we may choose to re-map these standard files to different physical units. In such a case if we kept writing everything to the same logical file, say stdout, errors would clutter valid output data; we would have difficulty telling which one is which. infile = fopen(temp_file, "r"); if (infile == NULL) { fprintf(stderr, "Error in opening file %s: %s\n", temp_file, strerror(errno)); exit(CANNOT_OPEN_INPUT_FILE); } /* end of if (infile == NULL) */ strcpy(outfilename, infilename); strcat(outfilename, file_ext); outfile = fopen(outfilename, "w"); if (outfile == NULL) { fprintf(stderr, "Error in opening file %s: %s\n", outfilename, strerror(errno)); fclose(infile); exit(CANNOT_OPEN_OUTPUT_FILE); } /* end of if (outfile == NULL) */ while (fgets(next_line, MAX_LINE_LENGTH, infile)) { trimRight(next_line); if (strlen(next_line) == 1) { if (!empty_line) fputs(next_line, outfile); empty_line = TRUE; } else { fputs(next_line, outfile); empty_line = FALSE; } /* end of else */ } /* end of while (fgets(next_line, …) */ fclose(infile); fclose(outfile); remove(temp_file); return; } /* end of void trimWS(const char*) */ void strip(const char *filename) { int next_ch; BOOL inside_comment = FALSE; FILE *infile, *outfile; infile = fopen(filename, "r"); if (infile == NULL) { fprintf(stderr, "Error in opening file %s: %s\n", filename, strerror(errno)); exit(CANNOT_OPEN_INPUT_FILE); } /* end of if (infile == NULL) */ outfile = fopen(temp_file, "w"); if (outfile == NULL) { fprintf(stderr, "Error in opening file %s: %s\n", temp_file, strerror(errno)); fclose(infile); exit(CANNOT_OPEN_OUTPUT_FILE); } /* end of if (outfile == NULL) */ The solution to the problem can be modeled using the following finite automaton. Note the ease of transforming the FA-based solution to C code.
This is yet another example of how useful such a theoretical model can be, however useless and boring it might look at first. A problem is solved by transforming its problem domain representation to the corresponding solution domain representation. The more one knows about models to represent a problem the more easily she will come up with the solution of the problem. while ((next_ch = fgetc(infile)) != EOF) { switch (inside_comment) { case FALSE: if (next_ch != '/') { fputc(next_ch, outfile); break; } if ((next_ch = fgetc(infile)) == '*') inside_comment = TRUE; else { fputc('/', outfile); ungetc(next_ch, infile); } break; case TRUE: if (next_ch != '*') break; if ((next_ch = fgetc(infile)) == '/') inside_comment = FALSE; else ungetc(next_ch, infile); /* put the character back so a run of asterisks before the closing slash is still handled */ } /* end of switch(inside_comment) */ } /* end of while ((next_ch = fgetc(infile)) != EOF) */ fclose disconnects the logical file (handle) from the physical file. If not done by the programmer, code found in the exit sequence of the C runtime guarantees that all opened files are closed at the end of the program. Leaving it to the exit sequence has two disadvantages, though. - There is a maximum number of files that an application can have open at the same time. If we defer closing of the files to the exit sequence, we may reach this limit more frequently and quickly. - Unless you explicitly flush using fflush, data you write to a file is actually written to a buffer in memory, not to disk. It is flushed automatically when the buffer area becomes full (and, on line-buffered streams such as a terminal, when you write a newline; file streams are normally fully buffered). It means that if there occurs a catastrophic failure, such as an outage, between the earliest time you could close the file and the time the exit sequence closes it at the end of the program, the data left in the buffer area will not have been committed to the disk. Not a perfect success story! So you should either flush explicitly or close the file as early as possible.
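The effect of fflush described above can be sketched as follows. The helper is hypothetical: it writes through one handle, flushes explicitly, and reads the data back through a second handle before the writer is ever closed.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch: fflush commits buffered output to the file. We write without
   closing the stream, flush explicitly, and observe the data through a
   second, read-mode handle. (Hypothetical helper, not the book's code.) */
int flushed_data_visible(const char *name)
{
    char buf[16] = {0};
    FILE *out = fopen(name, "w");
    FILE *in;

    if (out == NULL) return -1;
    fputs("data", out);   /* may still sit in the stdio buffer ... */
    fflush(out);          /* ... until flushed to the file          */

    in = fopen(name, "r");
    if (in == NULL) { fclose(out); return -1; }
    if (fgets(buf, sizeof buf, in) == NULL) buf[0] = '\0';
    fclose(in);
    fclose(out);
    return strcmp(buf, "data") == 0;
}
```

Without the fflush call, the second handle might well see an empty file: the four bytes would still be sitting in the writer's buffer.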
fclose(infile); fclose(outfile); return; } /* end of void strip(const char *) */ Heap Memory Allocation Problem: Write an encryption program that reads from a file and writes the encoded characters to some other file. Use this simple encryption scheme: The encrypted form of a character c is c ^ key[i], where key is a string passed as a command line argument. The program uses the characters in key in a cyclic manner until all the input has been read. Re-encrypting encoded text with the same key produces the original text. If no key or a null string is passed, then no encryption is done. #include <errno.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #define CANNOT_OPEN_INPUT_FILE 11 #define CANNOT_OPEN_OUTPUT_FILE 12 #define TOO_FEW_ARGS 21 #define TOO_MANY_ARGS 22 const char *usage = "USAGE: CyclicEncryption inputfilename [key [outputfilename]]"; const char file_ext[] = ".enc"; struct FILES_AND_KEY { char *infname; char *outfname; char *key; }; typedef struct FILES_AND_KEY f_and_k; f_and_k getfilenameandkey(int argc, char **argv) { f_and_k retValue = { "\0", "\0", "\0" }; switch (argc) { case 1: fprintf(stderr, "No file name passed!\n %s", usage); exit(TOO_FEW_ARGS); malloc is used to allocate storage from the heap, the area of memory that is managed by the programmer herself. This implies that it is the responsibility of the programmer to return every byte of memory allocated through malloc to the pool of available memory. Such memory is indirectly manipulated through the medium of a pointer. Different pointers can point to the same region of memory. In other words, they can share the same object. Using pointers to manipulate heap objects does not mean that pointers can point only to heap objects. Nor does it mean that pointers themselves are allocated in the heap. Pointers can point to non-heap objects and they can reside in the static area or the run-time stack. Indeed, such was the case in the previous examples.
case 2: retValue.infname = (char *) malloc(strlen(argv[1]) + 1); strcpy(retValue.infname, argv[1]); retValue.outfname = (char *) malloc( strlen(argv[1]) + strlen(file_ext) + 1); strcpy(retValue.outfname, argv[1]); strcat(retValue.outfname, file_ext); return(retValue); case 3: retValue.infname = (char *) malloc(strlen(argv[1]) + 1); strcpy(retValue.infname, argv[1]); retValue.key = (char *) malloc(strlen(argv[2]) + 1); strcpy(retValue.key, argv[2]); retValue.outfname = (char *) malloc(strlen(argv[1]) + strlen(file_ext) + 1); strcpy(retValue.outfname, argv[1]); strcat(retValue.outfname, file_ext); break; case 4: retValue.infname = (char *) malloc(strlen(argv[1]) + 1); strcpy(retValue.infname, argv[1]); retValue.key = (char *) malloc(strlen(argv[2]) + 1); strcpy(retValue.key, argv[2]); retValue.outfname = (char *) malloc(strlen(argv[3]) + 1); strcpy(retValue.outfname, argv[3]); break; default: fprintf(stderr, "Too many arguments!\n %s", usage); exit(TOO_MANY_ARGS); } /* end of switch(argc) */ return retValue; } /* end of f_and_k getfilenameandkey(int, char**) */ void encrypt(f_and_k fileandkey) { int i = 0, keylength = strlen(fileandkey.key); int next_ch; FILE *infile; FILE *outfile; if (keylength == 0) return; The following line opens a file that can be read byte-by-byte (binary mode), not character-by-character (text mode). In an operating system like UNIX, where each and every character is mapped to a single entry in the code table, there is no difference between the two. But in MS Windows, where newline is mapped to carriage return followed by linefeed, there is a difference, which may escape incautious programmers. The following C program demonstrates this. Run it with a multi-lined file and you will see that the number of fgetc's needed will be different. The number of characters you read will be less than the number of bytes you read. This discrepancy, which is a consequence of the difference in the treatment of the newline character, is likely to cause a great headache when you move your working code from Linux to MS Windows. If you compile and run the program listed on the previous page in a Unix-based environment, it will produce identical outputs for both (text and binary) modes. For MS Windows and DOS environments they will produce the following output. gcc -o Newline Newline.c↵ Newline↵ - 1.ch: 70 ... - ... - 11.ch: 10 12.ch: 83 ... - ... - ... 54.ch: 101 - 1.ch: 70 ... - ...
- 11.ch: 13 12.ch: 10 13.ch: 83 ... - ... - ... - ... 57.ch: 101 infile = fopen(fileandkey.infname, "rb"); if (!infile) { fprintf(stderr, "Error in opening file %s: %s\n", fileandkey.infname, strerror(errno)); exit(CANNOT_OPEN_INPUT_FILE); } /* end of if (!infile) */ outfile = fopen(fileandkey.outfname, "wb"); if (!outfile) { fprintf(stderr, "Error in opening output file %s: %s\n", fileandkey.outfname, strerror(errno)); fclose(infile); exit(CANNOT_OPEN_OUTPUT_FILE); } /* end of if (!outfile) */ while ((next_ch = fgetc(infile)) != EOF) { fprintf(outfile, "%c", (char) (next_ch ^ fileandkey.key[i++])); if (keylength == i) i = 0; } /* end of while ((next_ch = fgetc(infile)) != EOF) */ fclose(outfile); fclose(infile); return; } /* end of void encrypt(f_and_k) */ int main(int argc, char *argv[]) { f_and_k fandk = getfilenameandkey(argc, argv); encrypt(fandk); free(fandk.infname); fandk.infname = fandk.outfname; fandk.outfname = (char *) malloc(strlen(fandk.infname) + strlen(file_ext) + 1); strcpy(fandk.outfname, fandk.infname); strcat(fandk.outfname, file_ext); encrypt(fandk); return(0); } /* end of int main(int, char **) */ Pointer to Function (Callback) Problem: Write a generic bubble sort routine. Test your code to sort an array of ints and character strings. #ifndef GENERAL_H #define GENERAL_H ... #define BOOL char #define FALSE 0 #define TRUE 1 ... typedef void* Object; The following typedef statement defines COMPARISON_FUNC to be a pointer to a function that takes two arguments of Object (that is, void*) type and returns an int. After this definition, in this header file or any file that includes this header file, we can use COMPARISON_FUNC just like any other data type. That actually is what we do in Bubble_Sort.h and Bubble_Sort.c. The third argument of the bubble_sort function is defined to be of type COMPARISON_FUNC. That is, we can pass the address of any function conforming to the prototype stated below. void* is used as a generic pointer.
In other words, data pointed to by the pointer is not assumed to belong to a particular type. In a sense, it serves the same purpose the Object class at the top of the Java class hierarchy does. Just as any object can be treated to belong in the Object class, any value, be that something as simple as a char or something as complex as a database, can be pointed to by this pointer. But, such a pointer cannot be dereferenced with the * or subscripting operators. Before using it one must first cast the pointer to an appropriate type. typedef int (*COMPARISON_FUNC)(Object, Object); ... #endif #ifndef BUBBLE_SORT_H #define BUBBLE_SORT_H #include "General.h" bubble_sort function sorts an array of Objects (first argument), whose size is provided as the second argument. Now that there is no universal way of comparing two components and we want our implementation to be generic, we must be able to dynamically determine the function that compares two items of any type. Dynamic determination of the function is made possible by using a pointer to a function. Assigning different values to this pointer enables using different functions, which means different behavior for different types. Coming up with parameter types that are common to all possible data types is the key to making this pointer to function work for all. For this reason, comp_func takes two arguments of type Object, that is void*, which can be interpreted to belong to any type. extern void bubble_sort (Object arr[], int sz, COMPARISON_FUNC comp_func); #endif #include <stdio.h> #include "General.h" #include "algorithms\sorting\Bubble_Sort.h" The static qualifier in the following definition limits the visibility of swap to this file. In a sense, it is what private qualifier is to a class in OOPL’s. The difference lies in the fact that a file is an operating system concept (that is, it is managed by OS), while a class is a programming language one (that is, it is managed by the compiler). 
The latter is certainly a higher-level abstraction. In the absence of a higher-level abstraction, it can be simulated using lower-level one(s). Intervention of an external agent is required to provide this simulation, which gives rise to a more fragile solution. Such is the case in this solution: the programmer, the intervening agent, has to simulate this higher level of abstraction through using some programming conventions. static void swap(Object arr[], int lhs_index, int rhs_index) { Object temp = arr[lhs_index]; arr[lhs_index] = arr[rhs_index]; arr[rhs_index] = temp; return; } /* end of void swap(Object[], int, int) */ void bubble_sort (Object arr[], int sz, COMPARISON_FUNC cmp) { int pass, j; for(pass = 1; pass < sz; pass++) { BOOL swapped = FALSE; for (j = 0; j < sz - pass; j++) We could have written the following line as: if (cmp(arr[j], arr[j + 1]) > 0) { and it would still do the same thing. So, a pointer-to-function variable can be used just like a function: simply use its name and enclose the arguments in parentheses. This will cause the function pointed to by the variable to be called. The nice thing about calling a function through a pointer is the fact that you can call different functions by simply changing the value of the pointer variable. If you pass the address of compare_ints to the bubble_sort function, it will call compare_ints; if you pass the address of compare_strings, it will call compare_strings; if ... if ((*cmp)(arr[j], arr[j + 1]) > 0) { swap(arr, j, j + 1); swapped = TRUE; } /* end of if ((*cmp)(arr[j], arr[j + 1]) > 0) */ if (!swapped) return; } /* end of outer for loop */ } /* void bubble_sort(Object[], int, COMPARISON_FUNC) */ #include <stdio.h> #include <string.h> #include "General.h" The following line brings in the prototype of the bubble_sort function, not the source code or the object code for it. The compiler utilizes this prototype in type-checking the function use.
This involves checking the number of parameters, their types, and whether it [the function] is used in the right context. As soon as this is confirmed, the linker takes over and [if it is in a separate source file] brings in the object code for bubble_sort. So, - The preprocessor preprocesses the source code by replacing directives with C source code. - The compiler, using the provided meta-information (such things as variable declarations/definitions and function prototypes), checks the syntax and semantics of the program. Note that by the time the compiler takes over, all preprocessor directives will have been replaced with C source. That is, the compiler does not know anything about the preprocessor. - The linker combines object files into a single executable file. This executable is later loaded into memory by the loader and run under the supervision of the operating system. #include "algorithms\sorting\Bubble_Sort.h" typedef int* Integer; void print_args(char *argv[], int sz) { int i = 0; if (sz == 0) { printf("No command line args passed!!! Unable to test strings...\n"); return; } for (; i < sz; i++) printf("Arg#%d: %s\t", i + 1, argv[i]); printf("\n"); } /* end of void print_args(char **, int) */ void print_ints(Integer seq[], int sz) { int i = 0; for (; i < sz; i++) printf("Item#%d: %d\t", i + 1, *(seq[i])); printf("\n"); } /* end of void print_ints(Integer[], int) */ The next two functions are needed for making comparisons in the implementation of the sorting algorithm. They compare two values of the same type. The implementer of the bubble_sort function cannot know in advance the countless number of object types to be sorted, and there is no universal way of comparing that works for all types. So, the user of the sorting algorithm must implement the comparison function and let the implementer know about this function. The implementer makes use of this function to successfully provide the service. In doing this it calls "back" to the user's code.
This type of a call is therefore named a callback. int compare_strings(Object lhs, Object rhs) { return(strcmp((char *) lhs, (char *) rhs)); } /* end of int compare_strings(Object, Object) */ Looks like a very complicated way of comparing two ints? That’s right! But remember: we have to be able to compare two objects (values) of any type and we must do it with one single function signature. Comparison of two ints is straightforward: just compare the values. But, how about character strings? Comparing the pointers will not yield an accurate result; we must compare the character strings pointed to by the pointers. It gets even more complex as we compare two structures that have nested structures in themselves. The solution to this is to pass the ball to the person who knows it best (the user of the algorithm) and while doing so pass a generic pointer ( void *) to the data, not the data itself. In the process, the user will cast it into an appropriate type and make the comparison accordingly. int compare_ints(Object lhs, Object rhs) { Integer ip1 = (Integer) lhs; Integer ip2 = (Integer) rhs; if (*ip1 > *ip2) return 1; else if (*ip1 < *ip2) return -1; else return 0; } /* end of int compare_ints(Object, Object) */ int main(int argc, char *argv[]) { int seq[] = {1, 3, 102, 6, 34, 12, 35}, i; Integer int_seq[7]; for(i = 0; i< 7; i++) int_seq[i] = &seq[i]; printf("\nTESTING FOR INTEGERS\n\nBefore Sorting\n"); print_ints(int_seq, 7); The third argument passed to bubble_sort function is a pointer-to-function. This pointer contains a value that can be interpreted as the entry point of a function. Conceptually, there is not much of a difference between such a pointer and pointer to some data type. The only difference is in the memory region they point to. The latter points to some address in the data segment while the former to some address in the code segment. But one thing remains the same: the address value is always interpreted [in a particular way]. 
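The standard library's qsort function is built on exactly this callback idea: the caller supplies the comparison function as a pointer to function taking two generic (const void *) arguments. A minimal sketch for ints:

```c
#include <assert.h>
#include <stdlib.h>

/* qsort, like bubble_sort above, receives the comparison function as a
   pointer to function; the casts recover the concrete element type. */
static int cmp_int(const void *lhs, const void *rhs)
{
    int a = *(const int *) lhs;
    int b = *(const int *) rhs;
    return (a > b) - (a < b);   /* negative, zero, or positive */
}
```

An array v of n ints is then sorted with qsort(v, n, sizeof v[0], cmp_int); swapping in a different comparison function changes the ordering without touching qsort itself.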
Note that you do not have to apply the address-of operator when you send a function as an argument. bubble_sort((Object*) int_seq, 7, &compare_ints); printf("\nAfter Sorting\n"); print_ints(int_seq, 7); printf("\nTESTING FOR STRINGS\n\nBefore Sorting\n"); print_args(&argv[1], argc - 1); bubble_sort((Object*) &argv[1], argc - 1, compare_strings); printf("After Sorting\n"); print_args(&argv[1], argc - 1); return(0); } /* end of int main(int, char **) */ Linking to an Object File cl /c /ID:\include Bubble_Sort.c↵ /I adds D:\include to the head of the list of directories to be searched for header files, which initially include the directory of the source file and the directories found in %INCLUDE%. Analogous to CLASSPATH of Java, it is used in organizing header files. Using this information, the preprocessor brings D:\include\algorithms\sorting\Bubble_Sort.h into the current file (Bubble_Sort.c). Once the preprocessor is through with its job, the compiler will try to compile the resulting file and output an object file, named Bubble_Sort.obj. cl /FeTest_Sort.exe /ID:\include Bubble_Sort.obj Pointer2Function.c↵ The above command compiles Pointer2Function.c as was explained in the previous paragraph. Once Pointer2Function.obj is created it is linked with Bubble_Sort.obj to form the executable named Test_Sort.exe. This linking is required to bring in the object code of the bubble_sort function. Remember: inclusion of Bubble_Sort.h brought in the prototype, not the object code! Modular Programming Problem: Write a (pseudo) random number generator in C. It is guaranteed that only one generator is used by a particular application and different applications will be using it over and over again. Now that our generator will be used by different applications, we had better put it in a separate file of its own so that, by linking with the client program, we can use it from different sources.
This is similar to, if not the same as, the notion of module supported in languages like Modula-2. The difference has to do with their abstraction levels: the module is an entity provided by the programming language and known to all of its users, while the file is an entity provided by the operating system and known to all of its users. The programming language compiler (that is, an implementation of the programming language specification) being a user of OS-provided services and concepts means that the module concept is a higher abstraction. Now that the module concept (higher abstraction) is not present in C, we need to simulate it using other, probably lower-level, abstraction(s). In this case, we use a file (lower abstraction). By doing so, we cannot regain all of what comes with a module, though. The notion of module remains unknown to the compiler; certain rules enforced by the compiler in a modular programming language must be checked by the programmer herself/himself. For example, the interface and implementation of a module must be synchronized by the programmer, which is certainly an error-prone process. None of the applications being expected to use more than one generator means that we can create the data fields related to the generator in the static data region; in order to identify which generator the function is acting on, we do not need to pass a separate, unique handle. This implies we do not need any creation or destruction functions. All we need to have is a function for initializing the generator and another for returning the next (pseudo) random number. #ifndef RNGMOD_H #define RNGMOD_H extern void RNGMod_InitializeSeed(long int); extern double RNGMod_Next(void); #endif #include "math\RNGMod.h" static long int seed = 314159; const int a = 16807; const long int m = 2147483647; Generating a (pseudo) random number is similar to iterating through a list of values: we basically start from some point and move through the values one by one.
The difference lies in the fact that the values generated are to be computed using a function rather than retrieved from some memory location. In other words, a generator iterates through a list using computation in time while an iterator does this using computation in space. Seen from this perspective, initializing a (pseudo) random number generator with a seed is analogous to creating an iterator over a list. By using different values for the seed we can guarantee that the generator produces a different sequence of values. Using the same seed value will give the same sequence of (pseudo) random values. Such a use may be wanted for replaying the same simulation scenarios for different parameters. void RNGMod_InitializeSeed(long int s) { seed = s; return; } /* end of void RNGMod_InitializeSeed(long int) */ The following function computes the next number in the sequence. The number being computed using a well-known formula means that the sequence is actually not random. That is, it can be known beforehand. This is why such a number is usually qualified with the word 'pseudo'. What makes the sequence look random is the number of different values it produces in a row. The formula used in the following function will produce all values between 1 and m - 1 and then repeat the sequence. More than two billion values! In practice, it doesn't make sense to use these large numbers. We therefore divide the result by m to normalize the value into [0..1).
double RNGMod_Next(void) { long int gamma, q = m / a; int r = m % a; gamma = a * (seed % q) - r * (seed / q); if (gamma > 0) seed = gamma; else seed = gamma + m; return((double) seed / m); } /* end of double RNGMod_Next(void) */ #include <stdio.h> #include "math\RNGMod.h" int main(void) { printf("TESTING RNGMod...\n"); printf("Before initialization: %g\n", RNGMod_Next()); RNGMod_InitializeSeed(35000); printf("After initialization: %g\t", RNGMod_Next()); printf("%g\t", RNGMod_Next()); printf("%g\t", RNGMod_Next()); printf("%g\n", RNGMod_Next()); return(0); } /* end of int main(void) */ Building a Program Using make As the number of files needed to compile/link a program increases, it becomes more and more difficult to track the interdependency between the files. One tool for tackling such situations is the make utility found in UNIX environments. This utility automatically determines which pieces of a large program need to be recompiled and issues commands to recompile them. Input to make is a file consisting of rules telling which files are dependent on which files. These rules generally take the following shape: target : prerequisites TAB command TAB command TAB ... The preceding rule is interpreted as follows: if the target is out of date, use the following commands to bring it up to date. A target is out of date if it does not exist or if it is older than any of the prerequisites (by comparison of last-modification times). The following rule tells the make utility that RNGMod_Test.exe depends on RNGMod.o and RNGMod_Test.c. RNGMod_Test.exe is out of date if it does not exist or if it is older than RNGMod.o or RNGMod_Test.c. If any one of these two files is modified we have to update RNGMod_Test.exe by means of the command supplied in the next line. Note the tab preceding the command is not there to make the file more readable; commands used to update the target must be preceded by a tab. $@ is a special variable used to denote the target filename.
RNGMod_Test.exe : RNGMod.o RNGMod_Test.c
	gcc -o $@ -ID:\include RNGMod.o RNGMod_Test.c

RNGMod.o : RNGMod.c D:\include\math\RNGMod.h
	gcc -c -ID:\include RNGMod.c

.PHONY is a predefined target used to define fake goals. It assures the make utility that the target is not actually a file to be updated but rather an entry point. In this example, we use clean as an entry point enabling us to delete all relevant object files in the current directory.

.PHONY : clean
clean:
	del RNGMod.o RNGMod_Test.exe

As soon as we save this file, all we have to do is to issue the make command. This command will look in the current directory for a file named makefile or Makefile. If you are using the GNU version of make, GNUmakefile will also be tried. Once it finds such a file, make tries to update the target file of the first rule from the top.

- make↵
- gcc –c –ID:\include RNGMod.c
- gcc –o RNGMod_Test.exe –ID:\include RNGMod.o RNGMod_Test.c
- Time: 2.263 seconds
- RNGMod_Test↵
- Testing RNGMod...
- Before initialization: 0.458724
- After initialization: 0.273923 0.822585 0.186277 0.754617

Note that the time it takes the make utility to complete its task may vary depending on the processor speed and its load. In case only RNGMod_Test.c has been modified, we will see the following output.

- make↵
- gcc –o RNGMod_Test.exe –ID:\include RNGMod.o RNGMod_Test.c
- Time: 1.673 seconds

We may at times want the make utility to start from some other target. In such a case we have to provide the name of the target as a command-line argument. For instance, if we need to delete the relevant object files found in the current directory, issuing the following command will do the job.

- make clean↵
- del RNGMod.o RNGMod_Test.exe
- Time: 0.171 seconds

The above presentation is a rather limited introduction to the make utility. For more, see the manual of the GNU make utility.

Object-Based Programming (Data Abstraction)

Problem: Write a (pseudo) random number generator in C.
Your solution should enable a single application to use more than one generator (with no upper bound on this number) at the same time. The generator should also be usable from different applications. As was mentioned in the Modular Programming section, the second requirement can be met by providing the generator in a separate file of its own. In order to enable the use of more than one generator, where the exact number is not known, we must devise a method to create the generators dynamically as needed (similar to what constructors do in OOPLs). This creation scheme must also give us something to uniquely identify each generator (similar to handles returned by constructors). We should also be able to return generators that are not needed anymore (similar to what destructors do in OOPLs without automatic garbage collection).

#ifndef RNGDA_H
#define RNGDA_H

As stated in the requirements we have to come up with a scheme that lets users have more than one generator coexisting in the same program. We should in some way be able to identify each and every one of these generators and tell it from the others. This means that we cannot utilize the same method we used in the previous example. We have to get related functions to behave differently depending on the state of the particular generator. This difference in behavior can be achieved by passing an extra argument, a handle on the present state of the generator. This handle should hide the implementation details of the generator from the user; it should have immutable characteristics that we can use to hide the mutable properties of the generator. Sounds like the notion of handle in Java? That's right! Unfortunately, C does not have direct support for handles. We need some other, probably less abstract, notion to simulate it. This less abstract notion we will use is the notion of pointer: whatever the size of the data it is a handle on, its size does not change.
In the following lines we first make a forward declaration of a struct named _RNG and then define a new type as a pointer to this not-yet-defined struct. By using forward declarations, we do not betray any of the implementation details; we just let the compiler know that we have the intention of using such a type. By defining a pointer to this type, we put a barrier between the user and the implementer: the user has something (a pointer, something whose size does not change) to access a generator (the underlying object, the size of which can be changed by implementation decisions) indirectly. This freedom of change means the generator can be used without any reference to the underlying object, which is why we call this approach data abstraction and any type defined in such a way an abstract data type. Note the pointer argument passed to the function. Function behavior changes depending on the underlying object, which is indirectly accessed by means of this pointer. That's why this style of programming is called object-based programming.
struct _RNG;
typedef struct _RNG* RNG;

extern RNG RNGDA_Create(long int);
extern void RNGDA_Destroy(RNG);
extern double RNGDA_Next(RNG);
#endif

#include <stdio.h>
#include <stdlib.h>

#include "math\RNGDA.h"

struct _RNG {
  long int seed;
};

const int a = 16807;
const long int m = 2147483647;

RNG RNGDA_Create(long int s)
{
  RNG newRNG = (RNG) malloc(sizeof(struct _RNG));

  if (!newRNG) {
    fprintf(stderr, "Out of memory...\n");
    return(NULL);
  }
  newRNG->seed = s;

  return(newRNG);
} /* end of RNG RNGDA_Create(long int) */

void RNGDA_Destroy(RNG rng) { free(rng); }

double RNGDA_Next(RNG rng)
{
  long int gamma;
  long int q = m / a;
  int r = m % a;

  gamma = a * (rng->seed % q) - r * (rng->seed / q);
  if (gamma > 0) rng->seed = gamma;
  else rng->seed = gamma + m;

  return((double) rng->seed / m);
} /* end of double RNGDA_Next(RNG) */

#include <stdio.h>

#include "math\RNGDA.h"

int main(void)
{
  int i;
  RNG rng[3];

  printf("TESTING RNGDA\n");
  rng[0] = RNGDA_Create(1245L);
  rng[1] = RNGDA_Create(1245L);
  rng[2] = RNGDA_Create(2345L);
  for (i = 0; i < 5; i++) {
    printf("1st RNG, %d number: %f\n", i, RNGDA_Next(rng[0]));
    printf("2nd RNG, %d number: %f\n", i, RNGDA_Next(rng[1]));
    printf("3rd RNG, %d number: %f\n", i, RNGDA_Next(rng[2]));
  } /* end of for (i = 0; i < 5; i++) */
  RNGDA_Destroy(rng[0]);
  RNGDA_Destroy(rng[1]);
  RNGDA_Destroy(rng[2]);

  return(0);
} /* end of int main(void) */

Notes

- ↑ ISO C permits whitespace to precede and follow the # character on the same source line, but older compilers do not.
- ↑ As we shall later see, void * is an exception to this.
- ↑ In Java, there are two right-shift operators, >> and >>>: while the former sign-extends its operand, the latter does its job by zero-extending.
- ↑ This should not lead you to think that these phases cannot take place concurrently. What it tells is that the phases must start in a certain order. For instance, once the preliminary design is ready, implementers can start implementation, but not before.
But both teams must be in contact with each other and provide feedback. As designers make changes, these are relayed to the implementation team; as problems surface in the implementation, these are passed back to the design team. Note that the same sort of relationship may be present between other teams in the software production cycle.
https://en.m.wikibooks.org/wiki/Programming_Language_Concepts_Using_C_and_C%2B%2B/Introduction_to_Programming_in_C
XPath and XPointer/Extension Functions for XPath in XSLT

From WikiContent

While XPath includes a powerful set of basic functions, some applications of XPath need to support capabilities that go beyond that core. Currently, the most widely used XPath-based application is, of course, XSLT; aside from proprietary extensions offered through the various XSLT processors, it acquires these extra capabilities by way of two commonly used sets of extension functions. (Don't count on their availability in other XPath-based contexts, although because of their usefulness they may be adopted elsewhere as well.) The first set of functions comes from XSLT itself, providing access to path, node, and string-value handling facilities necessary for XSLT processing. The second set of functions comes from the independent Extensions to XSLT (EXSLT) project, providing support for a variety of tasks that weren't addressed in either XPath 1.0 or XSLT 1.0.

Additional Functions in XSLT 1.0

When XPath and XSLT were separated into two specifications, it was clear that there were some functions that relied on information available only through an understanding of the current XSLT processing context. These functions were kept in XSLT rather than in XPath, and (to repeat) may or may not be available in XPath processing in other contexts. You will see these functions used frequently in XSLT processing. Table A-1 lists the additional functions provided by XSLT 1.0.

Table A-1. Additional functions provided by XSLT 1.0

Again, these functions should be used only in the context of XSLT stylesheets. While some non-XSLT implementations of XPath may provide more general support for them, the functions' behavior in those other contexts might only more or less correspond to their behavior according to the XSLT specification.

Tip

The XSLT additional functions (and the XSLT features that provide the extra support they need) are all documented in the XSLT specification at.
EXSLT Extensions

EXSLT is a community project that provides extra functionality for XSLT and XPath. While not a product of the W3C, the EXSLT foundation is implemented across a variety of XSLT and XPath processors. Some EXSLT extensions require support for elements in XSLT stylesheets and are thus tightly bound to XSLT; others are relatively free standing and may be usable in other XPath contexts. The EXSLT extensions can be supported either through direct implementation in XSLT processors or through the use of XSLT modules, which themselves provide support for EXSLT functions using scripting or XSLT. EXSLT extensions may be implemented either as XSLT templates or as functions. In XPath terms, only the functions approach is easily available. EXSLT is divided into eight modules, each containing its own group of functions (and possibly elements) and using its own namespace to identify the module. Within those modules reside "Core" functionality, which all EXSLT implementations must support, as well as "Other" functionality, which EXSLT implementations may support. The following sections explain each module and its contents.

Tip

For additional information on EXSLT, including pointers to implementations and information on how to participate in creating or implementing EXSLT, see.

EXSLT Functions Module

Despite its name, the EXSLT Functions module doesn't contain any functions. Instead, it contains three elements that may be used to define extension functions. All these elements are in the namespace, typically mapped to the func prefix. Table A-2 lists these elements and their uses.

Table A-2. EXSLT Functions module elements

Using these elements, you can create new functions for a wide variety of processing tasks, if the rest of the EXSLT library proves insufficient.

EXSLT Dates-and-Times Module

The EXSLT Dates-and-times module provides a wide variety of tools for processing dates and times.
All these elements and functions are in the namespace, typically mapped to the date prefix. Most of the module is simply functions, but there is also a date:date-format element for defining alternative formats to the ISO 8601 dates used by W3C XML Schema. Table A-3 lists this element and its use.

Table A-3. EXSLT Dates-and-times module element

The date:date-format element hasn't yet been implemented as of June 2002. Most of the rest of the Dates-and-times module is widely implemented, so working with ISO 8601 dates using EXSLT is not very difficult. ISO 8601 dates use the general format:

CCYY-MM-DDThh:mm:ss(Z|((+|-)hh:mm))

XML Schema Part 2: Datatypes provides more information on date and time formats at. The EXSLT Dates-and-times module offers the wide variety of functions listed in Table A-4.

Table A-4. EXSLT Dates-and-times module functions

The Dates-and-times module largely provides functionality that may eventually be provided by XPath 2.0. (Note that the day and month names returned by various of the functions listed in Table A-4, such as date:day-name( ) and date:month-abbreviation( ), are the English-language forms, such as "Sunday" and "Jan." Also note that the abbreviation functions return the three-letter abbreviations for day and month names, e.g., "Thu" and "Sep" for Thursday and September, respectively.)

EXSLT Dynamic Module

The EXSLT Dynamic module provides support for the dynamic evaluation of XPath expressions created during XSLT or other processing. All these functions are in the namespace, typically mapped to the dyn prefix. This module contains only functions. None of the Dynamic module has been implemented as of June 2002. The functions it provides are listed in Table A-5, and you should check the documentation of your implementation to find out if any of this is supported.

Table A-5.
EXSLT Dynamic module functions

EXSLT Common Module

The EXSLT Common module provides one element for creating multiple output documents from a given transformation and two functions that address minor structural limitations of XSLT. The element and functions are in the namespace, typically mapped to the exsl prefix. Table A-6 lists the one element and its use.

Table A-6. EXSLT Common module element

The exsl:document element is widely implemented in EXSLT-compliant processors. It has no effect on XPath processing. The EXSLT Common module offers the functions listed in Table A-7.

Table A-7. EXSLT Common module functions

Some of this functionality will be provided in XSLT 2.0 or XPath 2.0.

EXSLT Math Module

The EXSLT Math module provides a variety of common mathematical functions and is easily used with XPath. All these functions are in the namespace, typically mapped to the math prefix. With this module, you can use XPath to perform mathematical calculations on the contents of your documents, in ways far beyond the reach of XPath's own numeric functions and operators. The EXSLT Math module offers the functions listed in Table A-8.

Table A-8. EXSLT Math module functions

Various implementations provide different levels of support for the Math module.

EXSLT Regular Expressions Module

The EXSLT Regular Expressions module provides regular expression functionality through three functions. All these functions are in the namespace, typically mapped to the regexp prefix. With this module, you can use XPath to break down or lexically analyze the contents of your documents. The EXSLT Regular Expressions module offers the functions listed in Table A-9.

Table A-9. EXSLT Regular Expressions module functions

A variety of implementations for the Regular Expression module is available, though no processors support it natively.

EXSLT Sets Module

The EXSLT Sets module provides six functions for working with node-sets.
All these functions are in the namespace, typically mapped to the set prefix. With this module, you can use XPath to compare node-sets. The EXSLT Sets module offers the functions listed in Table A-10.

Table A-10. EXSLT Sets module functions

The Sets module is built into every processor that supports EXSLT, and implementations are available for other processors as well.

EXSLT Strings Module

The EXSLT Strings module provides string-processing functionality through three functions. All these functions are in the namespace, typically mapped to the str prefix. With this module, you can use XPath to process the text contents of your documents using common string tools not otherwise provided by the XPath string functions. The EXSLT Strings module offers the functions listed in Table A-11.

Table A-11. EXSLT Strings module functions

A variety of implementations for the Strings module is available, though no processors support it natively.
http://commons.oreilly.com/wiki/index.php?title=XPath_and_XPointer/Extension_Functions_for_XPath_in_XSLT&oldid=5052
03 June 2008 10:36 [Source: ICIS news]

LONDON (ICIS news)--Axens has been awarded a design package for Saudi Aramco and Total's proposed 400,000 bbl/day refinery and aromatics complex in Al-Jubail, Saudi Arabia, the plant technology company said on Tuesday.

Axens said it would design a 32,000 bpsd vacuum gas oil catalytic cracker with partners Shaw, IFP and Total. The company would also design the aromatics complex in partnership with Uhde and ExxonMobil. The complex is expected to produce 700,000 tonnes/year of paraxylene and 143,000 tonnes/year of benzene. Announced by Saudi Aramco and Total on 15 May, the project was expected to come on stream
http://www.icis.com/Articles/2008/06/03/9128971/axens-wins-design-contracts-for-al-jubail-complex.html
Through the Looking Glass

Basic Set Theory

Recently, I was explaining to someone the basics of set theory and how the various basic operations translate to the real world. I used the example of the project I'm currently working on, which is a web front end to my ebook library. This is a very quick introduction aimed at people with a programming background but who don't have a strong math background; the goal is to help you to learn to use them without having to delve deep into the math behind them.

Basic Properties of Sets

The first thing we have to do is to explain what is meant by a set.

definition: set
A set is any collection of items where each item is unique and the order of items in the collection is not important.

The uniqueness property is very important to sets: there are no duplicates in a set. So what does a set look like? In my database, I have a list of all the books I have electronic copies of. Each book comes in at least one of three formats: PDF, epub, or mobi. We'll call the superset (the universal set of all the items under consideration) the list of all the books in the library. We'll call this set 'L' (for Library). Part of the set might look like:

L = { 'Natural Language Processing with Python',
      'Learning OpenCV',
      'Code Complete',
      'Mastering Algorithms with C',
      'The Joy of Clojure',
      'Mining the Social Web',
      'Algorithms In A Nutshell',
      'Introduction to Information Retrieval',
      ... }

We use '{}' to denote the members of a set. The order of books in the library doesn't matter here, and it doesn't make sense to have more than one entry for a book in the library.
Building a set in Python is very easy:

library = set(['Natural Language Processing with Python',
               'Learning OpenCV',
               'Code Complete',
               'Mastering Algorithms with C',
               'The Joy of Clojure',
               'Mining the Social Web',
               'Algorithms In A Nutshell',
               'Introduction to Information Retrieval',
               'Network Security With OpenSSL',
               'RADIUS'])

Clojure has set notation built in using the #{ } syntax, and any collection can be turned into a set with (set coll):

(def library #{"Natural Language Processing with Python",
               "Learning OpenCV",
               "Code Complete",
               "Mastering Algorithms with C",
               "The Joy of Clojure",
               "Mining the Social Web",
               "Algorithms In A Nutshell",
               "Introduction to Information Retrieval",
               "Network Security With OpenSSL",
               "RADIUS"})

So now we need to build some subsets.

definition: subset
A subset is some part of a set.

definition: proper subset
A proper subset is some part of a set, but is not the whole set.

For example, we'll create a subset of books P that are on or in Python. We'll also create a subset of books E that are in the English language. For my library, because not all of my books are in or about Python, the number of members of P is smaller than the number of elements in L. However, all of my books are in English, so the number of elements in E is the same as the number of elements in L. Therefore P is a proper subset, while E is not.

The Basic Set Operations

Now let's consider two proper subsets of the library to explain some of the basic set operations: M is the subset of ebooks that I have in mobi format, and we'll redefine E to be the list of ebooks in epub format.
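Python can also check these subset relations directly. This small sketch uses a made-up three-book library (the tiny sets are for illustration only; the operators themselves are standard Python, with <= testing for a subset and < for a proper subset):

```python
# Hypothetical miniature library to illustrate subset vs. proper subset;
# <= and < are Python's built-in set comparison operators.
L = {'Code Complete', 'The Joy of Clojure', 'Mining the Social Web'}
P = {'Code Complete'}                       # stands in for the Python books
E = set(L)                                  # stands in for the English books: all of them

print(P <= L, P < L)   # True True  : P is a subset, and a proper one
print(E <= L, E < L)   # True False : E is a subset, but not a proper one

# The uniqueness property in action: duplicates collapse automatically.
print(len(set(['RADIUS', 'RADIUS', 'Learning OpenCV'])))   # 2
```

This mirrors the discussion above: P is a proper subset, while E, having every element of L, is a subset but not a proper one.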
For the sake of the rest of this article, let's note the following:

M = { 'Natural Language Processing with Python',
      'Code Complete',
      'Introduction to Information Retrieval' }

E = { 'The Joy of Clojure',
      'Mining the Social Web',
      'Code Complete' }

In practical terms, this means in my library I have copies of:

- "Natural Language Processing with Python," "Introduction to Information Retrieval," and "Code Complete" in mobi format
- "The Joy of Clojure," "Mining the Social Web," and "Code Complete" in epub format.

In Python:

mobi = set(['Natural Language Processing with Python',
            'Code Complete',
            'Introduction to Information Retrieval'])
epub = set(['The Joy of Clojure',
            'Mining the Social Web',
            'Code Complete'])

In Clojure:

(def mobi #{"Natural Language Processing with Python",
            "Code Complete",
            "Introduction to Information Retrieval"})
(def epub #{"The Joy of Clojure",
            "Mining the Social Web",
            "Code Complete"})

Union

A union is the set of members that appear in either set - if it's in at least one of the sets, it will appear in a union of the two sets. So we could define a subset of L that contains all the books I have in a mobile format, which for our purposes means copies exist in epub or mobi format. In Python, you can use the set.union method, and in Clojure you can use the functions in the clojure.set namespace.

In Python:

mobile = set.union(mobi, epub)
for book in mobile:
    print book

which yields the output:

Natural Language Processing with Python
Code Complete
Introduction to Information Retrieval
The Joy of Clojure
Mining the Social Web

Remember that one of the properties of sets is that order is irrelevant, so you might get the books in a different order (this applies to Clojure as well).

The same thing, in Clojure:

(def mobile (clojure.set/union mobi epub))
(doseq [book mobile]
  (println (str book)))

You would see a similar output to the Python example. Again, the practical result of this is a set of all the books I have in my library in a mobile format.
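One aside not in the original post, but standard Python: each set method also has an operator form. | is union, & is intersection, and - is difference, so the union above could equally have been written mobi | epub:

```python
# Operator shorthand for the set methods (standard Python; noted here as
# an aside to the post's set.union(...) style).
mobi = {'Natural Language Processing with Python',
        'Code Complete',
        'Introduction to Information Retrieval'}
epub = {'The Joy of Clojure',
        'Mining the Social Web',
        'Code Complete'}

mobile = mobi | epub                         # same result as set.union(mobi, epub)
print(mobile == set.union(mobi, epub))       # True

# The same shorthand exists for the operations discussed next:
print(mobi & epub)                           # intersection: {'Code Complete'}
print((mobi - epub) == (epub - mobi))        # False: difference is order-dependent
```

The last line also previews a point made below: unlike union and intersection, the difference of two sets depends on their order.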
Intersection

The intersection of two sets is the set of all the members that appear in both sets. In the library example, taking the intersection of the mobi and epub sets gives me a set of my books that I have in both epub and mobi format. The intersection function gives me this result.

The Python example:

both_formats = set.intersection(mobi, epub)
for book in both_formats:
    print book

And in Clojure:

(def both-formats (clojure.set/intersection mobi epub))
(doseq [book both-formats]
  (println (str book)))

For either example, the output should be just one book, given the sample sets:

Code Complete

I could use this result to know which books I can use on any mobile device.

Difference

The difference of one set from another is the set of all the members in the first set that are not in the second set. This operation is a bit different from the first two; the first two operations are commutative, but the result of a difference is dependent on the order of the sets. I'll illustrate this with some code examples:

In Python:

only_mobi = set.difference(mobi, epub)
print 'books only in mobi format:'
for book in only_mobi:
    print '\t' + book

only_epub = set.difference(epub, mobi)
print 'books only in epub format:'
for book in only_epub:
    print '\t' + book

In Clojure:

(println "books only in mobi format:")
(def only-mobi (clojure.set/difference mobi epub))
(doseq [book only-mobi]
  (println "\t" book))

(println "books only in epub format:")
(def only-epub (clojure.set/difference epub mobi))
(doseq [book only-epub]
  (println "\t" book))

As the output messages show, this gives us the set of books that are only in mobi and the set of books that are only in epub. The output should look something like:

books only in mobi format:
	Introduction to Information Retrieval
	Natural Language Processing with Python
books only in epub format:
	The Joy of Clojure
	Mining the Social Web

Complements

When discussing complements, we do so when considering a subset and its superset.
The complement of a subset is the difference of the subset from the superset; i.e., the set of all members in the superset that are not in the subset. For example, if I wanted to check my library for all ebooks I have that are not in mobi format, I would use the superset library and take the difference of mobi from library:

In Python:

not_mobi = set.difference(library, mobi)
print 'books not in mobi format, using the library superset:'
for book in not_mobi:
    print '\t' + book

and in Clojure:

(println "books not in mobi format, using the library superset:")
(def not-mobi (clojure.set/difference library mobi))
(doseq [book not-mobi]
  (println "\t" book))

This gives us the output:

books not in mobi format, using the library superset:
	Mining the Social Web
	Algorithms In A Nutshell
	Mastering Algorithms with C
	RADIUS
	The Joy of Clojure
	Network Security With OpenSSL
	Learning OpenCV

Conclusion

This has been a very basic look at set theory and what it means in practice. There is a lot more to set theory (see the references) but this should help get you started. There are a lot of applications for set theory, such as in data mining and natural language processing; it is a powerful tool that is worth spending some time to get to know. Stay tuned for the next post, which will be on how to use sets in your code. We'll develop the library idea a bit more.

UPDATE: The next post is up!

References

- I've been reading Alfred Aho's The Theory of Parsing, Translating, and Compiling (Volume I: Parsing) (Amazon link)
- There is, of course, a good Wikipedia article.

Reviewers

I'd like to thank the following people for reviewing this:

- Wally Jones
- Stephen Olsen
- Aaron Bieber
- Jason Barbier
- Shawn Meier
- Matt Sowers
http://kyleisom.net/blog/2012/01/23/basic-set-theory/index.html
Opened 9 years ago
Closed 9 years ago

#1912 closed defect (duplicate)

documentation/tutorial1 corrections

Description

First off, I love the way you guys write documentation. Some issues with

- At this point you should probably change 2005 to 2006
- once you define Poll.was_published_today() you need to import datetime into your model.
- the way your example writes Poll.was_published_today() gives me an attribute error (OSX 10.4.6, python 2.4.3), I believe what you want to use is:

  def was_published_today(self):
      return self.pub_date.day == datetime.now().day

- All the shell interaction responses are off (they look like they are in namespaces now), probably because they were changed recently, what I get is

  <Choice: Not much>
  <Choice: the sky>
  <Poll: What's up?>

Change History (1)

comment:1 Changed 9 years ago by Malcolm Tredinnick <malcolm@…>

- Resolution set to duplicate
- Status changed from new to closed

Thanks for the comments. A little feedback..

So I think we can close this one off. But thanks for the feedback, in any case.
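The reporter's suggested method can be tried outside Django with a minimal stand-in class. The Poll model below is hypothetical (the real one lives in the Django tutorial); only the datetime import and the comparison come from the ticket. Note that, as written in the ticket, the method compares only the day of the month:

```python
from datetime import datetime

class Poll:
    """Hypothetical stand-in for the tutorial's Poll model."""
    def __init__(self, pub_date):
        self.pub_date = pub_date

    def was_published_today(self):
        # The ticket's fix: this needs `from datetime import datetime`
        # at module level. As written it compares only the day of the
        # month, not the full date.
        return self.pub_date.day == datetime.now().day

poll = Poll(pub_date=datetime.now())
print(poll.was_published_today())   # True (assuming the clock does not cross midnight mid-run)
```

This at least confirms the reporter's point that the method fails with an AttributeError unless datetime is imported into the model module.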
https://code.djangoproject.com/ticket/1912
Abstract: The garbage collector can cause our program to slow down at inappropriate times. In this newsletter we look at how we can delay the GC to never run and thus produce reliable results. Then, instead of actually collecting the dead objects, we simply shut down and start again.

Welcome to the 191st issue of The Java(tm) Specialists' Newsletter, sent to you from the stunning Island of Crete. A Greek friend remarked on Sunday how surprised he was at my recent progress with the Greek language. Michaelis had been encouraging me for the last 4 years to knuckle down and put in a bit of an effort. Learning a new language requires a good memory, which has never been my strong point. I discovered in grade 8 history that I hardly had any short-term memory at all. That, coupled with an illegible scrawl, were serious handicaps in school. As a result, my grades were never particularly good. I was also overconfident and seldom read questions properly before writing my answer. The only time I ever got a distinction in school was for my final exam. It is funny that the rest of my 12 years of school do not count for anything. Nobody was ever interested in how many PT classes I bunked or that I failed "hand writing" in grade 5, all they wanted to know was my final score.

Greek is not the first language I learned. In high school, I decided to learn an African language called isiXhosa, spoken by our former president Nelson Mandela. This was in the years of apartheid, when it was extremely unfashionable for a melanin deprived individual to learn an African language. In my class, I was the most fluent speaker. My secret? I would walk home from school and speak to anybody who cared to listen. I made a lot of mistakes and most of the time I spoke about the same things. "Where are you from? Where are you going? It is hot today". This loosened my tongue and gave me confidence. I do the same with Greek.
My second secret was that I wrote a computer program in GWBasic to help me remember the vocabulary. The program would show me a word and then I would have to write it in isiXhosa. If I made a mistake, I would get punished by having to type it three times. I wrote a similar Java program for Greek, which was challenging because of the strange character set. What surprises me is how this program flashes the words into my medium term memory, without first going into the short-term memory. Even though I don't actively remember the word, it is there in my brain and I can recall it, as long as I do not think too hard.

Another surprise is that I am finding it as difficult (or as easy) to learn Greek as I found it to learn isiXhosa, when I was 13 years old. Some expats tell me that they are too old to learn. But it is just as hard for me to remember anything as when I was 13. It is a major effort for me to recall even a single word. So, in conclusion of this very long opening, if you are finding it hard learning new languages, computer or natural, you are in good company. It is hard for me too. The trick is to use the language, then eventually something will stick :-) (Oh and just for interest, "use" in Greek is xrhsimopoiw :-))

javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.

A few weeks ago, our French Java Specialist Master Course instructor sent us an interesting puzzle from his brilliant The Coder's Breakfast blog. I love the name, as it evokes good French coffee and a croissant whilst puzzling over the latest Java quiz. I will rephrase his question a bit, because I think something might have gotten lost in the translation from French into English. With minimal Java code changes, how can you make the Quiz44 class run at least 10x faster?
import java.util.Random;

public class Quiz44 {
  private static final int NB_VALUES = 10000000;
  private static final int MAX_VALUE = 1000;
  private static final int NB_RUNS = 10;

  public static void main(String[] args) {
    Integer[] boxedValues = new Integer[NB_VALUES];
    int[] values = initValues();
    System.out.println("Benchmarking...");
    for (int run = 1; run <= NB_RUNS; run++) {
      long t1 = System.currentTimeMillis();
      for (int i = 0; i < NB_VALUES; i++) {
        boxedValues[i] = values[i];
      }
      long t2 = System.currentTimeMillis();
      System.out.printf("Run %2d : %4dms%n", run, t2 - t1);
    }
  }

  /**
   * Nothing important here, just values init.
   */
  private static int[] initValues() {
    System.out.println("Generating values...");
    int[] values = new int[NB_VALUES];
    Random random = new Random();
    for (int i = 0; i < values.length; i++) {
      values[i] = random.nextInt(MAX_VALUE);
    }
    return values;
  }
}

When I run this on my MacBook Pro, with -Xmx500m, I see the following results:

Generating values...
Benchmarking...
Run  1 : 1657ms
Run  2 : 2879ms
Run  3 : 2265ms
Run  4 : 2217ms
Run  5 : 2211ms
Run  6 : 2216ms
Run  7 :  930ms
Run  8 : 2216ms
Run  9 : 2241ms
Run 10 : 2216ms

The average is 2105, with a variance of 254,000, caused by Run 7. There are some obvious changes to the code that would make it complete faster. For example, we could reduce the values of the NB_VALUES or MAX_VALUE fields, but I think Olivier would consider that as cheating. However, there is also a non-obvious change we could make. For example, we could add a System.gc() before the first call to System.currentTimeMillis(). In that case, the test runs with the following values:

Generating values...
Benchmarking...
Run  1 : 1720ms
Run  2 : 2645ms
Run  3 : 2240ms
Run  4 :  927ms
Run  5 :  918ms
Run  6 :  930ms
Run  7 :  965ms
Run  8 :  914ms
Run  9 :  903ms
Run 10 :  913ms

This time the average is 1308 with a variance of 430,000. Not a bad speedup at all. After all, it is now 38% faster than before.
It should be obvious that a large proportion of the time is wasted collecting all these unnecessary objects. If we run the code again with -verbose:gc on, we can see the costs involved:

Generating values...
[GC 40783K->39473K(83008K), 0.0041315 secs]
[Full GC 39473K->39465K(83008K), 0.0696936 secs]
[GC 78528K(123980K), 0.0002081 secs]
[GC 79209K(123980K), 0.0002467 secs]
Benchmarking...
[Full GC 79197K->78518K(150000K), 0.0828044 secs]
[GC 78518K(150000K), 0.0001254 secs]
[GC 78995K(150000K), 0.0006032 secs]
[GC 95542K->103000K(150000K), 0.3685923 secs]
[GC 103000K(150000K), 0.0021857 secs]
[GC 120024K->129617K(150000K), 0.1627059 secs]
[GC 146641K->155227K(172464K), 0.1826291 secs]
[GC 172251K->180831K(198000K), 0.1499428 secs]
[GC 197855K->206365K(223536K), 0.1794985 secs]
[GC 223389K->231900K(249072K), 0.1751786 secs]
[GC 248924K->257435K(274800K), 0.1594760 secs]
[GC 274459K->282969K(300144K), 0.2015765 secs]
Run 1 : 1774ms
[Full GC 283309K->282255K(300144K), 0.8866413 secs]
[GC 315119K->330066K(506684K), 0.3946753 secs]
[GC 364178K->398754K(506684K), 0.3282639 secs]
[GC 398754K(506684K), 0.0043726 secs]
[GC 432866K->449971K(506684K), 0.3649566 secs]
[Full GC 484083K->282879K(506684K), 1.1812640 secs]
Run 2 : 2708ms
[Full GC 284935K->282881K(507776K), 1.0651874 secs]
[GC 316993K->345922K(507776K), 0.2532635 secs]
[GC 380034K->399611K(507776K), 0.2922708 secs]
[GC 400297K(507776K), 0.0042360 secs]
[GC 433723K->450807K(507776K), 0.3415709 secs]
[Full GC 484919K->282884K(507776K), 1.0057979 secs]
Run 3 : 2324ms
[Full GC 283571K->282884K(507776K), 0.8885789 secs]
[GC 316996K->347295K(507776K), 0.2598463 secs]
[GC 381407K->400631K(507776K), 0.3051485 secs]
[GC 401318K(507776K), 0.0042195 secs]
[GC 434743K->451850K(507776K), 0.3479674 secs]
Run 4 : 1024ms
[Full GC 485962K->282884K(507776K), 1.0040985 secs]
[GC 316996K->347295K(507776K), 0.2591832 secs]
[GC 381407K->400631K(507776K), 0.2888177 secs]
[GC 401318K(507776K), 0.0042504 secs]
[GC 434743K->451850K(507776K), 0.3348886 secs]
Run 5 : 994ms
[Full GC 485962K->282884K(507776K), 1.0128758 secs]
[GC 316996K->347295K(507776K), 0.2580010 secs]
[GC 381407K->400631K(507776K), 0.2884526 secs]
[GC 402692K(507776K), 0.0060617 secs]
[GC 434743K->451848K(507776K), 0.3290486 secs]
Run 6 : 1004ms
[Full GC 485960K->282884K(507776K), 1.0040235 secs]
[GC 316996K->347295K(507776K), 0.2596790 secs]
[GC 381407K->400631K(507776K), 0.2851338 secs]
[GC 401318K(507776K), 0.0042191 secs]
[GC 434743K->451840K(507776K), 0.3340752 secs]
Run 7 : 989ms
[Full GC 485952K->282884K(507776K), 1.0022637 secs]
[GC 316996K->347295K(507776K), 0.2612456 secs]
[GC 381407K->400631K(507776K), 0.2933666 secs]
[GC 401318K(507776K), 0.0043201 secs]
[GC 434743K->451842K(507776K), 0.3280430 secs]
Run 8 : 997ms
[Full GC 485954K->282884K(507776K), 1.0126833 secs]
[GC 316996K->347295K(507776K), 0.2569432 secs]
[GC 381407K->400631K(507776K), 0.2866691 secs]
[GC 401318K(507776K), 0.0042206 secs]
[GC 434743K->451842K(507776K), 0.3418867 secs]
Run 9 : 1000ms
[Full GC 485954K->282884K(507776K), 1.0074827 secs]
[GC 316996K->347295K(507776K), 0.2594386 secs]
[GC 381407K->400631K(507776K), 0.2966164 secs]
[GC 401318K(507776K), 0.0042397 secs]
[GC 434743K->451840K(507776K), 0.3391696 secs]
Run 10 : 1009ms
[Full GC 485952K->400K(507776K), 0.3041337 secs]

Some of these full GCs are the artificial collections that we introduced to make our test run faster: the System.gc() call at the start of each run, before the timing begins. If we exclude them from the timing, we can work out how much of each run is spent collecting garbage:

Run   GC  Program  Total
  1  1583    191    1774
  2  2274    434    2708
  3  1897    427    2324
  4   917    107    1024
  5   887    107     994
  6   882    122    1004
  7   883    106     989
  8   887    110     997
  9   890    110    1000
 10   899    101    1009

We can get even better figures from our program if, instead of elapsed time (System.currentTimeMillis()), we use user CPU time (ThreadMXBean.getCurrentThreadUserTime()). Here are the run times:

Generating values...
Benchmarking...
Run 1 : 1667ms 106ms
Run 2 : 2576ms 114ms
Run 3 : 2199ms 109ms
Run 4 : 897ms 109ms
Run 5 : 899ms 109ms
Run 6 : 899ms 109ms
Run 7 : 903ms 109ms
Run 8 : 889ms 109ms
Run 9 : 892ms 109ms
Run 10 : 896ms 108ms

Our average user CPU time is now 109ms with a variance of 4.

Let's get back to the problem: how do we make the test run 10 times faster with minimal code changes? My first approach was to simply delay the cost of GC until after the test had run through. Since I have 8GB of memory in my laptop, I can beef up the initial and maximum size of the heap to a huge amount. I also set the NewSize to a crazy amount, so that the GC never needs to run whilst the test is running.

$ java -showversion -Xmx10g -Xms10g -XX:NewSize=9g Quiz44
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)
Generating values...
Benchmarking...
Run 1 : 198ms
Run 2 : 203ms
Run 3 : 179ms
Run 4 : 192ms
Run 5 : 179ms
Run 6 : 191ms
Run 7 : 183ms
Run 8 : 192ms
Run 9 : 180ms
Run 10 : 197ms

This gave me an average of 189ms with a slightly high variance of 75. Since my initial results had an average of 2105ms, I achieved Olivier's target of being at least 10x faster.

In case you think what I've done is really stupid, I know of companies who use this trick to avoid all GC pauses. They try to never construct objects. They make the young generation very large. After running for several hours, they quit the application and start it again. This gives them very predictable run times. So it's not as harebrained an idea as it appears at first.

In our case, things start to go wrong when we increase the number of runs. For example, let's change NB_RUNS to 100:

java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)
Generating values...
Benchmarking...
Run 1 : 188ms
Run 2 : 214ms
Run 3 : 234ms
Run 4 : 191ms
Run 5 : 203ms
Run 6 : 193ms
Run 7 : 207ms
Run 8 : 188ms
Run 9 : 208ms
Run 10 : 183ms
Run 11 : 199ms
Run 12 : 183ms
Run 13 : 200ms
Run 14 : 186ms
Run 15 : 202ms
Run 16 : 192ms
Run 17 : 216ms
Run 18 : 191ms
Run 19 : 211ms
Run 20 : 188ms
Run 21 : 210ms
Run 22 : 190ms
Run 23 : 210ms
Run 24 : 187ms
Run 25 : 211ms
Run 26 : 186ms
Run 27 : 208ms
Run 28 : 188ms
Run 29 : 205ms
Run 30 : 188ms
Run 31 : 216ms
Run 32 : 189ms
Run 33 : 210ms
Run 34 : 186ms
Run 35 : 215ms
Run 36 : 190ms
Run 37 : 222ms
Run 38 : 191ms
Run 39 : 209ms
Run 40 : 186ms
Run 41 : 1068ms
Run 42 : 7281ms
Run 43 : 8705ms
Run 44 : 8583ms
Run 45 : 7675ms
Run 46 : 6346ms
Run 47 : 1625ms
Run 48 : 896ms
Run 49 : 834ms
Run 50 : 948ms
Run 51 : 554ms
Run 52 : 901ms
Run 53 : 904ms
Run 54 : 4253ms
Run 55 : 815ms
Run 56 : 111ms
Run 57 : 127ms
Run 58 : 143ms
Run 59 : 159ms
Run 60 : 117ms
Run 61 : 138ms
Run 62 : 136ms
Run 63 : 140ms
Run 64 : 128ms
Run 65 : 131ms
Run 66 : 136ms
Run 67 : 147ms
Run 68 : 122ms
Run 69 : 127ms
Run 70 : 135ms
Run 71 : 135ms
Run 72 : 135ms
Run 73 : 134ms
Run 74 : 126ms
Run 75 : 125ms
Run 76 : 217ms
Run 77 : 765ms
Run 78 : 2973ms
Run 79 : 2459ms
Run 80 : 2180ms
Run 81 : 4733ms
Run 82 : 7484ms
Run 83 : 3177ms
Run 84 : 6209ms
Run 85 : 5129ms
Run 86 : 1051ms
Run 87 : 5177ms
Run 88 : 5515ms
Run 89 : 6217ms
Run 90 : 8865ms
Run 91 : 7981ms
Run 92 : 3578ms
Run 93 : 3472ms
Run 94 : 3645ms
Run 95 : 4006ms
Run 96 : 3933ms
Run 97 : 4211ms
Run 98 : 3127ms
Run 99 : 3714ms
Run 100 : 3811ms

This gives us an average of 1656ms and a variance of 6,000,000. There are also times when the system is almost unusable because of the time wasted in GC.

One of our Java Specialist Club members, Marek Bickos, quickly discovered a flag that made this code run much faster. By simply invoking the code with -XX:+AggressiveOpts, it caused no GC at all:

Generating values...
Benchmarking...
Run 1 : 56ms
Run 2 : 45ms
Run 3 : 39ms
Run 4 : 50ms
Run 5 : 66ms
Run 6 : 39ms
Run 7 : 39ms
Run 8 : 40ms
Run 9 : 46ms
Run 10 : 46ms

A bit of experimentation explained why this was so much faster. In modern JVMs, the autoboxing cache size is configurable. It used to be that all values from -128 to 127 were cached, but nowadays we can specify a different upper bound. When we run the code with -XX:+AggressiveOpts, it simply increases this value. We can try it out with this little program:

public class FindAutoboxingUpperBound {
  public static void main(String[] args) {
    int i = 0;
    while (isSame(i, i)) {
      i++;
    }
    System.out.println("Upper bound is " + (i - 1));
    i = 0;
    while (isSame(i, i)) {
      i--;
    }
    System.out.println("Lower bound is " + (i + 1));
  }

  private static boolean isSame(Integer i0, Integer i1) {
    return i0 == i1;
  }
}

If we run this without any flags, it comes back as:

Upper bound is 127
Lower bound is -128

With -XX:+AggressiveOpts, we get these results:

Upper bound is 20000
Lower bound is -128

It is thus pretty obvious why it is so much faster. Instead of constructing new objects, it is simply getting them from the Integer cache. If we increase MAX_VALUE to 100000, and run the program with -XX:+AggressiveOpts and -Xmx500m, we get these results:

Generating values...
Benchmarking...
Run 1 : 1622ms
Run 2 : 1351ms
Run 3 : 2675ms
Run 4 : 2037ms
Run 5 : 2322ms
Run 6 : 2325ms
Run 7 : 2195ms
Run 8 : 2394ms
Run 9 : 2264ms
Run 10 : 2109ms

However, with my flags of "-Xmx10g -Xms10g -XX:NewSize=9g", it is still as fast as before. The question is thus: which is the better solution? I think it depends very much on how this is going to be used in the real world. My solution would work with any objects that are being made, not just Integers. However, we also saw the serious downside to my approach. Once we do fill up our young space, the JVM will get very slow, at least for a while.
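The cache boundary described above is easy to confirm directly. This small demo is my own illustration, and it assumes default JVM settings (no -XX:+AggressiveOpts and no cache-size flags):

```java
public class DefaultCacheBoundary {
    public static void main(String[] args) {
        // Autoboxing compiles to Integer.valueOf(int), which returns the
        // cached instance for values in [-128, 127] by default.
        Integer a = 127, b = 127;  // both refer to the same cached object
        Integer c = 128, d = 128;  // distinct objects with default settings
        System.out.println((a == b) + " " + (c == d));
    }
}
```

Run without flags, this prints "true false"; with a raised cache upper bound, the second comparison becomes true as well.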
The solution Olivier was looking for was to set the maximum number of values in the integer cache using the system property java.lang.Integer.IntegerCache.high, for example:

java -Djava.lang.Integer.IntegerCache.high=1000 Quiz44
Generating values...
Benchmarking...
Run 1 : 40ms
Run 2 : 54ms
Run 3 : 48ms
Run 4 : 49ms
Run 5 : 66ms
Run 6 : 48ms
Run 7 : 48ms
Run 8 : 49ms
Run 9 : 48ms
Run 10 : 66ms

The -XX:AutoBoxCacheMax=1000 flag would do the same thing. Whilst it is possible to specify the maximum Integer in the cache, no doubt to satisfy some silly microbenchmark to prove that Java is at least as fast as Scala, it is currently not possible to set the minimum Integer....
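As an aside, the user CPU time measurement used earlier (ThreadMXBean.getCurrentThreadUserTime()) can be sketched as follows. The class name and the busy-work loop are my own stand-ins, not the quiz code:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class UserCpuTimer {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // getCurrentThreadUserTime() returns nanoseconds of user-mode CPU
        // time consumed by the current thread, not wall-clock time.
        long start = mx.getCurrentThreadUserTime();
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += i;
        }
        long userMillis = (mx.getCurrentThreadUserTime() - start) / 1_000_000;
        System.out.println("sum=" + sum + " userCpuMs=" + userMillis);
    }
}
```

Unlike System.currentTimeMillis(), this excludes time the thread spends descheduled or blocked, which is why the run-to-run variance drops so sharply.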
https://www.javaspecialists.eu/archive/Issue191.html
Ok so, I feared this would happen when I compiled and ran, and sure enough it did. I'm getting a logical error in this program and I can't figure out how to fix it. I want to make it so that when I enter any number in the range 0 - 19, 20 - 39, 40 - 59, or 60+, it will only do the calculation of the entered number by the specific decimal value assigned to that group of numbers. Take a look at my code and you'll understand what I'm talking about.

Code:
#include <iostream>
using namespace std;

int main()
{
    double checkAmount;
    int monthBankCharge = 10;

    cout << "Enter the amount of checks you've written in the past month\n";
    cin >> checkAmount;

    if (checkAmount < 20) {
        cout << checkAmount * 0.10 << endl;
    }
    else if (checkAmount >= 20) {
        cout << checkAmount * 0.08 << endl;
    }
    else if (checkAmount >= 40) {
        cout << checkAmount * 0.06 << endl;
    }
    else if (checkAmount >= 60) {
        cout << checkAmount * 0.04 << endl;
    }

    system("pause");
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/141025-if-else-else-if-problem-printable-thread.html
The original problem: given a large input file with n lines of random strings, find the number of pairs in the file, meaning lines with the same number and type of characters. The constraint on the type of characters is: any legal ASCII (0-127) character besides 10 and 13.

I am trying to implement a hashCode function for a wrapper class over a 128-element int array that is ideally quick and results in minimal collisions. My approach is to process each line into an int counting array of size 128, encoding the ASCII character distribution, inside a wrapper class; then to store this array as a key in a hashtable/hashmap (or possibly eventually in my own hashtable implementation with linear probing), with the value being the number of lines with a matching character distribution. From this I can calculate the number of pairs additively via the Handshake Lemma, accumulating as I read through the lines of text.

To do this, I need some advice on selecting an appropriate fast hashing function that minimises collisions, and perhaps more importantly for my own understanding, the proof or a link to the proof. The initial number of total lines is given, so if there is a formula to calculate the required size from that, it can be done.

What I have done so far, arbitrarily using this as my hash function:

public int hashCode() {
    int hash = 0;
    for (int i = 0; i < 128; i++) {
        hash += b[i] * (i + 1);
    }
    return (hash * this.size) % capacity;
}

Capacity means the size of the hashtable itself. In math terms, with n = size of string and m = size of hashtable:

$ \left(\sum^{127}_{i=0} x[i] \cdot (i+1)\right) \cdot n \bmod m $

The requirements for this hash function are: given a constant-sized 128-int array, in which each value is bounded by the length of the string, how can I come up with a unique hash? Or alternatively: given a string of size x with a unique character distribution, how can I calculate a hash value such that another string with the same character distribution has the same hash value?

Ideally it will be a fast-to-compute hash function. Thanks!
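The pipeline described above (count array as a hashmap key, pairs accumulated via the Handshake Lemma) can be sketched as follows. This is an illustration, not the asker's code: it uses Arrays.hashCode, the JDK's standard polynomial hash over int arrays, as one reasonable stand-in for the custom hash function being asked about:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class PairCounter {
    // Wrapper over the 128-int count array so it can serve as a HashMap key.
    static final class Profile {
        final int[] counts;
        Profile(String line) {
            counts = new int[128];
            for (int i = 0; i < line.length(); i++) {
                counts[line.charAt(i)]++;
            }
        }
        @Override public boolean equals(Object o) {
            return o instanceof Profile && Arrays.equals(counts, ((Profile) o).counts);
        }
        @Override public int hashCode() {
            return Arrays.hashCode(counts); // JDK's polynomial hash, one standard option
        }
    }

    // The k-th occurrence of a profile adds k-1 new pairs, so pairs
    // accumulate incrementally while reading, with no second pass.
    static long countPairs(String[] lines) {
        Map<Profile, Integer> seen = new HashMap<>();
        long pairs = 0;
        for (String line : lines) {
            int previous = seen.merge(new Profile(line), 1, Integer::sum) - 1;
            pairs += previous;
        }
        return pairs;
    }

    public static void main(String[] args) {
        String[] lines = {"abc", "cab", "abd", "bca"};
        // "abc", "cab", "bca" share a distribution: C(3,2) = 3 pairs
        System.out.println(countPairs(lines));
    }
}
```

Note that the hash only has to be cheap and well-spread; equals does the exact comparison, so occasional collisions cost probe time but never correctness.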
https://proxieslive.com/author/admin/page/2/
Yeah, that's the Greggs logo in a QR code for this website.

QR codes (Quick Response codes) are all around us. We see them on adverts, newspapers, drinks bottles. They are a quick way for mobile users to access web content, as all you need is a camera and a barcode app to scan the code and visit the website. So how can you create your own QR code? I found a simple Python library that can be used from the command line to generate quick QR codes.

Install

On my Linux machine I used pip3 to install the Python library.

sudo pip3 install myqr

A number of other libraries were also installed, but it only took a few minutes for the install to complete.

Using from the command line

Open a terminal; on a Raspberry Pi the icon is in the top left of the screen. I wanted to create a link to my website, a simple QR code. So I typed

myqr

This creates a QR code with my website address, and saves it in the current working directory as qrcode.png

But...let's add an image!

Adding an image is rather simple

myqr -n bigles.jpg -p rebel.gif

We call the myqr application, use -n to specify the filename that we would like to save the QR code as, then use -p to specify the name of the image that we wish to use.

Colours!!!!

Yes, our QR code can have colours, and it is a simple matter of adding -c to the command to colourise.

myqr -n bigles.jpg -p greggs.png -c

Thanks for not suing Greggs, I love your sausage rolls

Animated!!!

Oh yes, you can create an animated GIF QR code! You need to ensure that the -n used to save the image is set to output a GIF, then just use -p with an animated GIF...oh, and use -c to ensure it is colourful!

myqr -n bigles.gif -p sonic.gif -c

Using from Python 3

As this is a Python 3 library, we can use this wonderful library in our Python projects. I used the Python 3 editor, but you could also use Thonny, or Sublime Text, or Vi... (We'll assume that you know how to write Python code, and that you have a favourite editor.)
We use os to access the underlying filesystem.

import os

Then we import the myqr library.

from MyQR import myqr

We then specify the web address, or any text, that we wish to encode in the QR code.

version, level, qr_name = myqr.run(
    "",
    version=1,
    level='H',

Our picture is the image that we wish to use in the QR code.

    picture="/directory/where/the/image/is",

If we would like a coloured image, change False to True.

    colorized=False,

The save_name is the file into which we want to save the QR code image. Remember to use GIF if you want to create an animated GIF QR code.

    contrast=1.0,
    brightness=1.0,
    save_name="cool-qr-code.gif",

Lastly we set the save location to the current working directory, in other words the directory where you are running this code from.

    save_dir=os.getcwd()
)

Complete Code Listing

import os
from MyQR import myqr

version, level, qr_name = myqr.run(
    "",
    version=1,
    level='H',
    picture="/directory/where/the/image/is",
    colorized=False,
    contrast=1.0,
    brightness=1.0,
    save_name="cool-qr-code.gif",
    save_dir=os.getcwd()
)

When ready, save the code and run it to create your QR codes.

Taking it further

- Create a function to automate the options.
- Create a GUI to enable anyone to easily create QR codes.

Have fun hacking your new QR codes!
http://bigl.es/friday-fun-qr-code-madness/
- Excel 2007 - Is it possible to populate a crystal report from a datagridview? - Drag and Drop Gridview data - Explorer.Exe - Application error - Network port monitor - WebRequest not working as it should - VB Express 2005 and SQL 2000 - C# .NET - Calling a JAVA API - Update Multiple records in the datagrid using asp.net and VB - Front Page 2003 on a new laptop - Finding Adobe Reader version with VB.NET - understand "using" in app code - Calling .Net web service from my JSP page - Shopping Cart checkboxes - VB.NET - Fast Databasing Problem - custom property for checkbox - ASP.NET 2.0 Membership & Roles Login Problem - Why cant I serailize a datarow? - C# - sql database manipulation question - smart device applications - vb.net - finding a substring in a string - convert string to a float - retriveing image in datalist or gridview controls from sql server 2000 in asp.net - Two-Dimensional array initializaton - cascading dropdownlist without refreshing the page. - Visual studio .net cannot create or open the application error. - copying image from internet - Microsoft Open XML converter - Query: refresh the list item in a list box on a time interval. 
- Help on table align on left of page vs left hanging indent - CLSID not found - Net version - dynamic adding components - Threading Concept in C#.Net - Hook event for specific thread - Accordion - PseudoMP3 - Open File Dialog to upload multiple files - c# to xml - DataGrid versus DataGridView - UAC Problem - why xmlserializer is not serializing an object which implements IDictionary - Changing resources in .NET executable - Import C++ Dll in VB.NET give me Marshal error - Custom appender in log4net - Common Language Runtime detected an invalid program - DataBinding & Update - DropdownList error - Data Table Decimal Separator - Microsoft.Office.Interop.Excel.dll help - File Conversion from CSV format to Excel(.xsl) C# - Automation server can't create object - WSHttpBinding.Security.Message.NegotiateServiceCre dential - how can i represent image in vb.net - Label - URL Validation - Visual studio solution file erro - Is this complexType declaration valid? - dropdowl box - COM class factory for component with CLSID {91493441-5A91-11CF-8700-00AA0060263B}.... - ClickOnce fails to deploy - Creating global objects in a class library - Regarding table updation - How to do Zoom in & Out - How to print a document in c#? - setting header and footer for the dynamically created word document - Cannot show already open form (C#2005) - Marshal SafeArray as array<T> - Changing ListBox Text - Stopwatch / Countup in VB.NET - Pass Parameters between C# Forms ! 
- Running a .Net app before logoff - Random Generator for nine numbers - xml:aid table to HTML - Setting value for ADAM Field - connection to AS400/db from asp.net - Shows an alert message when the pc not used - Closing the MessageBox - Problem with SslStream when using Windows Vista - Need an XML editor comparable to Visual Studio 2005 - GridView Doesnt Update after Deleting a Value - video and sound issue - AJAX-AutoComplete Oracle VB.NET - problem in connectivity of ms-access2007 with vb.net 2005 - Active Directory Saved Query - Group Member Listing - Returning a struct array - Should be simple C++ managed header include question - Extracting files in .net - C# App: Data Hiding question - Question: Context Menu in Windows Form Label - asp:GridView advanced display features - Convert to HTML escape char API - How to use customised control without breaking design view - c# console app with icon? - Return data for GetConsoleScreenBufferInfo and dwMaximumWindowSize - Creating a chart in Excel from C# (Exception from HRESULT: 0x800A03EC) - vector-like wrapper for IList - re: unable to change the alias foldernames in url rewriting - Minimize All the Applications Running - Duplicate MSMQ messages rcvd in windows service - How to detect ctrl+Right Mouse Click in a vb datagrid - just getting started... - To focus textbox after dropdown selected - Get ACL for share - Object reference not set to an instance of an object. - Online Payment - .NET , SQL Server interview questions websites.... - Smart Navigation is Not Working - ASP.net 1.1 - how to convert image into hex format in vs.net 2003 - Auto Generated Code - How to find whether the particular record is in mysqldatabase ?? - XML and XHTML - withought displaying previous records how to save new records using datagrid - Synchronising a remote Access database - USB Storage device - binding data to datagrid - using Ajax - asp.net 3.0 - Control - XSD question - How to convert a word .doc file into a .gif or .jpg file in c#? 
- Calling a javascript on click of menu items in a master page. - LINQ - File Uploading not working [ASP.NET] - Passing record from One page to another - Page not found - Runtime editing in web.config - AddHandler SelectedIndexChanged Problem - uninstalling useless programs - how to create component in the fly - Compiler problem with V C++ .NET version 2003 - Using Microsoft Access Docmd Command in VB.Net - Is SQL EXPRESS (install)required for .net 2.0 application server - WebService Problem - xmlignore attribute - ESK shoppingcart - I need help with usb ports. - Clipboard Custom Format - Program memory Issue? - updation, insertion usin dataadapter - using C++/CLI DLL in C# - My computer won't restart - Threading in Windows Forms - Scan resoltions in Vista - Movie Maker Plus! Transitions - "does not contain debugging information. (No symbols loaded.)" - At Home But Not At Work On The Exact Same Project? - Event code: 3001, Event message: The request has been aborted. - Declare int in codebehind and use it in ascx file - SQL Server Express connection problems - Getting a printer to print from two ports - Managment selected Rows in Data grid - static html vs dynamic aspx - deserialize generics - Why ArrayList, not Hashtable - Exporting class in vba - New to this type of XML page - Vb.net Newbie code for accumulating seconds and converting to Hours minutes seconds - VB.Net newbie accumulating seconds into hours minutes and seconds - Using CreateThread in VC++ Form - Setting NTFS security inheritance - ASP.NET MS Word automation - C# Inner join error - Help whit error JIT - From C# going to c++ - From C# going to c++ - How Do I Debug the Console Output? - uknown audio device - Pop up windows help - xmi and dotnet - error in menu control - GDI+ moving objects - namespace Microsoft.Office.Interop.Word in asp.net - Using tab window in asp.net - How to build a Dictionary in VB.Net - How can I build a parser? 
- transparent user control - problems with events for dynamically created comboboxes - Return selected in a Dropdownlistbox - 'no name' threads - connecting to sql server using application configuration file - Quick Book Integration - matching the data entered by user with the database and displaying in the grid - Attributes on ArrayItems Serialization - Loop for HTML Page Scrape - XQuery questions - C# ASP.NET App: update listbox after write delete using dataset - Serializing CDATA - DataList Empty Values - Wiring up an App.Config file - deploying the App.Config file - Saving a Date/Time Stamp from VB 2005 to MS Access - xml to word format - ActiveX Control - Please Help! - c# intellisense broken? or is it me? - FileStream Error ??? - Reading embedded Resource textfile - Smart .net 2005 - Problem connecting to sql server on network? - ClickOnce Content Files - ADODB .ActiveConnection error - DHCP Reservation via .NET - howto POST data to URL? - How to Update an Excel File - How can i read sent item mail from POP3 - How to save WEP for use after Hard Reset - RequiredFieldValidator disables LinkButton - Open XSD in code / source view instead of designer - 2005 - menu& submenus in webform using vb.net - XML -> XSD - Books on C# and Microsoft Visual Studio 2005 - Questing: Inconsistent accessibility - REALLY dumb question - need to save a picture from windows form to C in xml format - How to retrieve value on to a label from dropdownlist user control - Attachments - SQL error - C# - Web - Setting labels background image programatically - C# Error in my program - question - The GridView 'GridView1' fired event Sorting which wasn't handled. - convert any image file to hex code - Writing and reading to application settings - SendKeys Doesn't appear to work in application - error while opening report - Peer to peer :How to add addible and font box in chat room - Process.GetProcesses Windows XP Pro x64 PROBLEM! 
- How Can I handle Mouse events over a Form with Trancperancy Key ؟ - Asp.net vertical DataGrid - how do i remove selected row permanently from sql database through datagrid in vb.net - Help required on String Concatination - C# Setup Project Registry Values - Html Reports - DataGrid - How can I read data from Excel file and store it into a Database - Scroll bars don't cause validation - Scroll bars don't cause validation - online examination system in c#.net - Remove selected row permanently from sql database through datagrid in vb.net - Debugging from Managed Code into unmanaged code - Inconvenience to find a file in VS2005 project - how to read a text file in C# in windowsapplns - data adapter - How will get session from gridview - Problems with accessing INTERNET w/WINXP - Error on adding custom control using RenderControl() - Storing a result in a variable - parallel port - what directive is to be used to create datasets? - Declare variable in Codebehind and display it in the HTML form - ArrayList - still having problem in retrieving primary key - Gridview calendar popup - MFC to .NET DLL Conversion - How to use Parametercache in Data access application block? - What is disadvantage of data access application block? - Keep getting kicked back to the desktop - Collection Type for RSS? - How to edit in Detailsview in C# code - Could not find function: empty - Exporting a Dataset to XML based on XSD. - Getting multiple values from a List box in C# - VB-NET, ASP.NET2.0: Making two textboxes with mirror text content - Capture text from a dialog item in another application window - Serialization Help! 
- C# String Manipulation Help Please - Created two of the same Card Deck class - when I shuffle each deck I get the SAME res - Orcas Winform Login Control - Problem with webcontrol and events - Passing an array, vb 2005 ee - C# variable question - need help - How to convert HTML to XHTML from .Net code - Uri decoding problems with WebBrowser.Navigating (WebBrowserNavigatingEventArgs) - ASP.NET WebService Event Handlers - Task Scheduler - Need a dataset form a xml file - visio lines - Getting InkOverlay and WinWord playing together nicely - How to call a .CAB file using Object tag in HTML in client side in .net - exe calling WS only works with VS running. - Generate VOHLC Chart - Application downloads with XP but not with Vista - How to add Ajax update panel in the child page - how can i get unused percentage of CPU in C# - Export data from datagrid to Excel - ListBox Selected Values - Detecting servers - Font problem in reports when the font doesn't exist - Active Sync conflict with Outlook? - Menu items in .NET - How to handle HTML Encoding - Namespaces in C# - Data Validation - Converse to BulkLoad... - ListView (VC 2003) - Basic Crystal/.net question. How to output from a dataset. - 405 Method Not Allowed SSL web services - Calling VB.Net Dll using Visual C++ - Find memory release method for database using vb.net code - The Zune software does not detect my Zune device - having problem in retrieving primary key - Fetching Data into Combobox from DB without using any code... - use.. - C# - How to Make the ComboBox ' Values Distinct (Unique)? 
Today I'm going to cover how to create and use a struct in C. Structs are used when you need more than one piece of data to describe one thing. You can't use an array, because an array only holds data with the same data type.

I'll specifically cover how to: create a struct, get data from a struct, initialize a struct, pass a struct to a function, use typedef, change a value in a struct, use a struct in a function, use -> and (*structName), and how a struct is stored in memory. The code follows the video below.

Code From the Video

CTutorial5_1.c

// structs are used when you need more than one
// piece of data to describe one thing.
// We can't use an array because arrays only hold one
// type of data.

#include <stdio.h>

// This creates a dog struct with the data needed
struct dog {
    // Labeled with const because they won't change
    const char *name;
    const char *breed;
    int avgHeightCm;
    int avgWeightLbs;
};

void getDogInfo(struct dog theDog){
    printf("\n");

    // Since a struct is initialized like an array you
    // may think you access the data like this theDog[0],
    // but they don't work that way. You need to use the
    // dot operator like this.
    printf("Name: %s\n\n", theDog.name);
    printf("Breed: %s\n\n", theDog.breed);
    printf("Avg Height: %d cm\n\n", theDog.avgHeightCm);
    printf("Avg Weight: %d lbs\n\n", theDog.avgWeightLbs);
}

void getMemoryLocations(struct dog theDog){
    // %p (with a void * cast) is the correct specifier for
    // printing pointer addresses
    printf("Name Location: %p\n\n", (void *)theDog.name);
    printf("Breed Location: %p\n\n", (void *)theDog.breed);
    printf("Height Location: %p\n\n", (void *)&theDog.avgHeightCm);
    printf("Weight Location: %p\n\n", (void *)&theDog.avgWeightLbs);
}

int main(void){
    // Define a new dog by passing the values specific
    // to this dog.
    // struct is the data type and cujo is the variable name
    struct dog cujo = {"Cujo", "Saint Bernard", 90, 264};

    // Now we can pass all the dog info easily
    getDogInfo(cujo);

    // Keynote Presentation --------------------
    // A struct defines a template for your data type
    // A struct variable defines the memory needed to fit the data

    // What happens if a struct is copied to another?
    struct dog cujo2 = cujo;

    getMemoryLocations(cujo);
    getMemoryLocations(cujo2);

    return 0;
}

CTutorial5_2.c

#include <stdio.h>

// This struct will be stored in the dog struct
struct dogsFavs {
    char *food;
    char *friend;
};

// You can use typedef to shorten struct dog to
// just dog by doing this instead
typedef struct dog {
    const char *name;
    const char *breed;
    int avgHeightCm;
    int avgWeightLbs;
    struct dogsFavs favoriteThings;
} dog;

void getDogFavs(dog theDog){
    printf("\n");

    // To get the data in the nested struct you just chain
    // dot operators.
    // Since struct dogsFavs favoriteThings; is in dog
    // you use that after the theDog variable name like
    // any other variable
    printf("%s loves %s and his friend is named %s\n\n",
           theDog.name, theDog.favoriteThings.food,
           theDog.favoriteThings.friend);
}

void setDogWeight(dog theDog, int newWeight){
    theDog.avgWeightLbs = newWeight;
    printf("The weight was changed to %d\n\n", theDog.avgWeightLbs);
}

// dog *theDog means the function expects a pointer
// to a struct dog
void setDogWeightPtr(dog *theDog, int newWeight){
    // You use *theDog to get the value at the pointer's address
    // You have to surround a struct pointer with parentheses
    (*theDog).avgWeightLbs = newWeight;
    printf("The weight was changed to %d\n\n", (*theDog).avgWeightLbs);

    // You can use either (*theDog).avgWeightLbs or
    // theDog->avgWeightLbs
    // theDog->avgWeightLbs means the variable named
    // avgWeightLbs in the struct theDog points to
    printf("The weight with -> %d\n\n", theDog->avgWeightLbs);
}

int main(void){
    // We can also store structs in structs
    dog benji = {"Benji", "Silky Terrier", 25, 9, {"Meat", "Joe Camp"}};

    getDogFavs(benji);

    // How do you change a value in a struct?
    setDogWeight(benji, 11);
    printf("The weight in main() %d\n\n", benji.avgWeightLbs);

    // It prints the old weight because when a struct is
    // passed to a function a new struct is created
    // with Benji's values

    // You need to pass the struct's pointer with &
    setDogWeightPtr(&benji, 11);
    printf("The weight in main() %d\n\n", benji.avgWeightLbs);

    return 0;
}

However, the addresses of the structures you print are the ones on the stack, instead of the real address of the variable. Hence they are different. Yes you caught it 🙂
http://www.newthinktank.com/2013/08/c-video-tutorial-5/
SUI SOUTHERN GAS COMPANY LIMITED S.KAMAL HAYDER KAZMI, (feedback@pgeconomist.com) Research Analyst, PAGE Jan 16 - 22, 2012 Sui Southern Gas Company (SSGC) is engaged in the business of transmission and distribution of natural gas besides construction of high pressure transmission and low pressure distribution systems. Presently, the company continues to face shortage of natural gas mainly due to decreased supplies from producers and increased gas demand. Despite increase in the number of gas producing fields from 16 to 17 compared to corresponding period last year, the supply of gas decreased by four per cent to 98.8 BCF. The average well head purchase price increased by 12 per cent and stood at Rs297.76 per MMBTU. The gas distribution system was extended by 470 km while another 42 km of distribution lines were laid under the rehabilitation projects to curtail leakages. During the period under review, the company's customer base increased to 2.384 million. The company provided 30,678 domestic connections in the first three months of the current financial year. The meter manufacturing plant produced 167,607 meters versus 168,250 meters in the corresponding period last year, i.e. a decrease of 643 meters or nearly 0.4 per cent. The sale to SNGPL declined by nine per cent to 97,200 meters as compared to 107,100 meters in the corresponding period last year. The increase in cost of production by 14 per cent resulted in decrease in profit to Rs17 million as compared to profit of Rs46 million for the corresponding period. In the three months period, the capital expenditure was Rs1,398 million as compared to Rs2,784 million for the previous corresponding period. Addition to operating fixed assets was Rs224 million versus Rs929 million in the corresponding period last year. SSGC has posted after tax profit of Rs796 million during the same months as compared to profit of Rs1,113 million for corresponding period in 2010. 
The basic earnings per share (EPS) decreased to Rs0.95 during the first quarter ended 2011 as compared to Rs1.66 on 30 September 2010, mainly due to excess unaccounted-for gas (UFG). UFG reached the level of 10.22 per cent as against the limit of seven per cent allowed by OGRA. Thus, the company suffered a reduction in revenue of Rs900 million in its tariff return (2011: 584 million) in the three months period on account of UFG. The company plans to maintain its focus on UFG projects and is planning to initiate a major capital expenditure project with the primary objective of UFG reduction. Due to continued supply constraints, year to date, gas sales decreased by five per cent to 88.3 BCF as compared to 92.6 BCF in the corresponding period last year. The average sales price per MMBTU increased by eight per cent to Rs367.44 versus Rs341.51 due to the increase in consumer prices by OGRA; thus, gas sales revenue (net of GST) increased only three per cent. KESC is the single largest customer and debtor of the company, with an over-due amount of Rs32.7 billion. SSGC management has been proactively taking up the matter at all forums for an early recovery of the massive liabilities. Management is treating this outstanding amount from KESC as part of the inter-corporate circular debt. In addition, the management has proactively taken up the issue with the federal and provincial governments for resolution of the KESC dues and inter-corporate circular debt. More recently, in a meeting of the national assembly standing committee on textile industry, the management of KESC stated that SSGC was responsible for load shedding in the industrial areas of the metropolis, as KESC was not provided the 180-mmcfd gas as promised, which was the main cause of the 300 MW shortage. Power generation was reduced due to non-provision of the promised 180-mmcfd gas to KESC. On the contrary, only 70-80 mmcfd gas was being provided to the company.
If the company goes for power generation through furnace oil, the cost of power might increase, which would overburden consumers with high tariff rates. The national assembly standing committee urged the gas utility and the government to review the decision of total curtailment of gas to the industrial sector and provide some share to the industry so that the country's textile export target could be achieved. KESC's installed power generation capacity is more than 2,000 MW, but the available capacity is lower due to multiple reasons, including the shortage of gas.

FUTURE OUTLOOK

The company is actively engaged with project developers for setting up LNG infrastructure in Pakistan for the purpose of importing natural gas, through a third party access regime on a fast-track basis. Implementation of the LNG project will usher in a new era of sustainability while providing numerous advantages, including a fast-track solution to the energy crisis and a secured supply of natural gas.
http://www.pakistaneconomist.com/pagesearch/Search-Engine2012/S.E44.php
HTML is an acronym for Hypertext Markup Language. It's used for designing static web pages, and is normally displayed in web browsers. PDF is an acronym for Portable Document Format. It's easy to read and share since it's a print-ready, electronic format.

Benefits of converting HTML to PDF

- PDF documents are secure, since they can be protected with a password, unlike HTML pages that are accessible to everyone on the web
- PDF documents can be easily created and easily shared
- With PDF, it is possible to view and read documents offline
- PDF documents can be compressed to save space on your computer and only used when needed

There are numerous ways in which you can convert HTML to PDF. Some include online conversion, using software, and running a few lines of code using libraries. Depending on your preferences, you can determine the best method for you to use. Let's have a look at some online conversion methods:

Online conversion of HTML to PDF

Win2PDF

This provides free online conversion of public HTML webpages to PDF. You enter the address of the HTML page you want to convert to PDF and click submit. With this method, it's not possible to create PDFs from local files on your computer. Secondly, for security reasons, it's not possible to create PDFs from HTML pages that require a login—such as an email box, online banking, payment checkout, or shopping cart content. You can try Win2PDF here.

You can also convert files by printing to Win2PDF using a Windows browser like Chrome or Firefox. Press CTRL+P on the HTML page you want to print. On the pop-up window, choose your printer. In our case, select Win2PDF and print. With Win2PDF, you can also convert local files such as Word documents to PDF with their free trial. You can download it from here. The only disadvantage of this method is that it adds a watermark unless you purchase the full version.

Online-convert

You only need to upload your files and start the conversion.
With this page, you can enter the URL, choose files from Google Drive, or upload from your computer. It has an option to download the result as a zip file. You can try it here: online-convert.

Adobe Acrobat DC

This is also an online option, though it offers only a 7-day free trial. It's easy and direct to use. Try it here: Adobe Acrobat DC.

Chrome web store extension

A Chrome extension is another way to convert HTML to PDF. Download the extension from here. To convert, click on the extension icon. It will open an HTML to PDF online conversion website. Select the HTML file you want to convert and click convert. Also, don't forget to enter the email address where the document download link will be sent.

Other Online Services:

These are available for direct use:

Software Conversion of HTML to PDF

Evo PDF software

It uses a converter library for .NET and .NET Core. The best advantage of converting HTML to PDF using Evo PDF software is that it fully supports the conversion: all HTML tags, SVG vector graphics, CSS styles, live URLs, and bookmarks are interpreted and converted as they appear on the page. It also includes the headers and footers in the converted PDF, and allows you to choose the PDF page options. You can download it from here: Evo PDF Software.

Using Libraries to convert HTML to PDF

We shall use the wkhtmltopdf python library to convert HTML to PDF.

Step 1: First install the client library pdfcrowd by typing this command:

$ pip install pdfcrowd

Step 2: Download the pdfkit library.

$ pip install pdfkit

Step 3: Download the wkhtmltopdf library.

On a Linux platform such as Ubuntu or Debian, type this command:

sudo apt-get install wkhtmltopdf

On Windows, you can download the setup here: wkhtmltopdf

Set the path: To do this, navigate to the directory "C:\Program Files\wkhtmltopdf\bin" and copy the path. Go to Windows advanced system settings, then environment variables, add the new path, and click OK.

Step 4: Now, all libraries are set up and ready to use.
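Since pdfkit shells out to the wkhtmltopdf binary, a quick sanity check of my own (not from the tutorial; the function name is illustrative) can confirm the Step 3 setup worked before you attempt a conversion:

```python
import shutil

def wkhtmltopdf_available():
    # shutil.which searches PATH the same way the shell would,
    # so this mirrors what pdfkit must find at conversion time.
    return shutil.which("wkhtmltopdf") is not None

if wkhtmltopdf_available():
    print("wkhtmltopdf found; pdfkit conversions should work")
else:
    print("wkhtmltopdf not found; install it or add its folder to PATH")
```

On Windows, this check will only succeed after the bin directory has been added to the PATH as described above.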
With this method, you can convert a single file saved as .html, or use a URL.

import pdfkit

# for converting files
pdfkit.from_file("index.html", "mypdf.pdf")

# for converting urls
pdfkit.from_url("

You first have to import the pdfkit library to make it usable. Of the two string arguments, the first represents the file or URL you need to convert, while the second holds the name of your output PDF file.

Let's check an example by converting the Code Underscored webpage into PDF.

Input:

import pdfkit

pdfkit.from_url("

Output:

Conclusion

In this tutorial, we covered some of the various ways to convert HTML to PDF. The best advantage of converting an HTML page to PDF may be portability and easy access when offline. There are many more ways to achieve this; in our case, we have looked at online conversion, software conversion, and the wkhtmltopdf python library.

Really nice and educative article. Helped me a lot.

Very good Moses. Easy to follow.
https://www.codeunderscored.com/how-to-convert-html-to-pdf/
The title pretty much says it all. Suppose I have a bash script:

#!/bin/bash
# do some magic here, perhaps fetch something with wget, and then:
if [ "$VAR1" = "foo" ]; then
    export CASEVARA=1
fi
export CASEVARB=2
# and potentially many other vars...

And a Python script:

import os
# run the bash script somehow
print(os.environ['CASEVARA'])

Certainly! It just requires some hacks:

import subprocess

variables = subprocess.Popen(
    ["bash", "-c", "trap 'env' exit; source \"$1\" > /dev/null 2>&1", "_", "yourscript"],
    shell=False, stdout=subprocess.PIPE).communicate()[0]

This will run your unmodified script and give you all exported variables in the form foo=bar on separate lines. On a supported OS (like GNU) you can trap 'env -0' exit to get \0-separated variables, to support multiline values.
https://codedump.io/share/qL9dsHecLite/1/can-i-run-a-bash-script-in-python-and-keep-any-env-variables-it-exports
So I just read an example of how to create an array of characters which represents a string, terminated by the null character \0:

char line[100];

If "hello\n" is stored in it, line[0] through line[6] hold the characters of "hello\n" plus a terminating \0. When and why is '\0' necessary to mark the end of a char array?

You need the null character to mark the end of the string. C does not store any internal information about the length of the character array or the length of a string, and so the null character/byte \0 marks where it ends. This is only required for strings, however – you can have any ordinary array of characters that does not represent a string. For example, try this piece of code:

#include <stdio.h>

int main(void) {
    char string[1];
    string[0] = 'a';
    printf("%s", string);
}

Note that the character array is completely filled with data. Thus, there is no null byte to mark the end. Now, printf will keep printing until it hits a null byte – this will be somewhere past the end of the array, so you will print out a lot of junk in addition to just "a". Now, try this:

#include <stdio.h>

int main(void) {
    char string[2];
    string[0] = 'a';
    string[1] = '\0';
    printf("%s", string);
}

It will only print "a", because the end of the string is explicitly marked.
https://codedump.io/share/rKhffAdXv6r1/1/whenwhy-is-39039-necessary-to-mark-end-of-an-char-array
I'll use OpenType in my discussion here.

Licensing

Font licenses may allow you to use the font in printed collateral or even digital documents, but may not allow you to embed that font for distribution elsewhere. It's unclear whether specific foundries consider bundling a font with an ebook to be violating their license. Theoretically, nothing stops the end-user from unzipping the .epub file and taking out the font, so bundling it with an epub could be considered a form of distribution.

One method to allow font inclusion without violating any license is to employ font obfuscation. Adobe has released some open-source code in epub-tools to enable this, and provided detailed documentation for software developers who want to implement the algorithm. I'm not aware of any readers besides ADE that can read fonts obfuscated in this manner, though.

Fonts with open licenses

My preferred solution for bundling fonts is to dodge the entire licensing issue altogether by using an open license. The Bitstream Vera fonts have a generous license and good Unicode support, but don't include all possible serif/italic/bold combinations. A later variant called DejaVu improves on this by including more faces and even better Unicode support. My favorite fonts in terms of good Unicode support combined with an open license are the Liberation series. These are workhorse fonts designed to resemble the basic Microsoft set of Arial, Times and Courier. Nothing fancy, but they're screen-readable and contain a wide range of characters.

How to embed a font

If you're not afraid of specs, I recommend reviewing the CSS2 @font-face rule. But here's the step-by-step:

0. Convert the font to OpenType

This is step 0 because it's not strictly required, and you may already be starting from an OpenType font. I usually convert TTF fonts to OTF with FontForge. FontForge is not the friendliest software but it will do the job. Commercial font software will be much easier to use but tends to be expensive.
Or you can just skip this step.

1. Add the font to the OPF package

Every file in your epub must be declared in the OPF manifest. This includes fonts. Put the font in the same place as your OPF file (usually a folder called OEBPS) and add it to the manifest. For example:

<item id="epub.embedded.font" href="MyFont.otf" media-type="application/vnd.ms-opentype"/>

1.a Repeat for all combinations of faces

You will probably want to add additional packages to support all combinations of bold, italics, serif and sans-serif (as users may be able to switch serif settings in their reader).

2. Create the CSS @font-face rules

This is standard CSS from here on out; nothing ePub specific.

Declare the font itself. This tells CSS that you will be using a particular font in the document. It doesn't say where it will be used, so just including this directive will have no effect:

@font-face {
  font-family : MyFont;
  font-weight : normal;
  font-style: normal;
  src : url(MyFont.otf);
}

Now we need to tell the CSS where this font should be used. To set it as the default font for the entire document, apply it to the <body> element:

body {
  font-family: "MyFont", serif;
}

This should be the last rule in your CSS. Note that the generic "serif" declaration follows our declared font. I don't recommend declaring any other fallback fonts besides just serif or sans-serif. If you included italic or bold variants separately, add them as:

@font-face {
  font-family : MyFontBold;
  font-weight : bold;
  font-style: normal;
  src : url(MyFontBold.otf);
}

@font-face {
  font-family : MyFontItalic;
  font-weight : normal;
  font-style: italic;
  src : url(MyFontItalic.otf);
}

strong, b {
  font-family: "MyFontBold", serif;
}

em, i {
  font-family: "MyFontItalic", serif;
}

You may need to tweak the font assignments for particular elements. For example, <blockquote> is typically rendered in italics; you'll need to assign that to the italic variant as well.

3. Validate your ePub

This has nothing to do with fonts, but you should be validating any ePub you generate.

4.
Test it out

The earlier font embedding article contains a list of readers which support embedded fonts. If possible I recommend testing both on Adobe Digital Editions and a compatible eink reader.

(Many thanks to Keith Fahlgren at O'Reilly Media for teaching me about real-world embedded font support. Edited November 10, 2009 to include a pointer to DejaVu.)

Padman: Thanks for this guide – it explains a lot. The only thing I struggle with is how to compile the files again to create a single ePUB file using embedded fonts. Any hint?

Billy: I believe the intention of the "@font-face" rule is to map from a combination of CSS properties to a font file name. The above example can/should thus be simplified to:

@font-face {
  /* this combination of CSS properties... */
  font-family : MyFont; /* same family name! */
  font-weight : bold;
  font-style: normal;
  /* ... is mapped to this file */
  src : url(MyFontBold.otf);
}

@font-face {
  font-family : MyFont; /* same family name! */
  font-weight : normal;
  font-style: italic;
  src : url(MyFontItalic.otf);
}

This way, you have to do nothing special in your CSS rules:

strong, b { /* nothing here */ }
em, i { /* nothing here */ }

You can even map to font files you couldn't use any other way, because they are not mapped by default:

@font-face {
  font-family : MyFont;
  font-weight : 200;
  font-style: normal;
  src : url(MyFont-Light.otf);
}

Or you can use a slightly heavier "book" font weight as a normal weight for readability:

@font-face {
  font-family : MyFont;
  font-weight : normal;
  font-style: normal;
  src : url(MyFont-Book.otf);
}

And, of course, you can use SVG fonts this way (any browser but FF):

@font-face {
  font-family : MyFont;
  font-weight : normal;
  font-style: normal;
  src : url(MyFile.svg#MyFont) format("svg");
}

Liza Daly: Billy: Interesting; I hadn't realized that. I wonder if that method works in Adobe Digital Editions.

Tony: Thanks for the help. I successfully embedded the font.
However, I still can not see certain characters when using Adobe Digital Editions even though they are in the embedded font. Is this a known issue with Adobe?

Liza Daly: Tony, I suggest posting to the epub-community list () and asking for help. It would be useful if you could provide the ebook or at least a representative sample.

Arthur Attwell: To your open-licensed-fonts suggestions, I'd add Linux Libertine (). It's a thoroughly and beautifully designed font for long-form text, and renders well on screen. It also has a lovely small-caps variant that can be used by determined designers to get around the lack of small-caps support in ADE.

Liza Daly: Thanks Arthur. I'm always looking for other fonts to recommend, and I especially appreciate the small-caps tip since that's a common request.

Yaron Goldstein: 1. I arrived at this excellent post from your very helpful and enlightening presentation about Designing ebooks for ePub reading engines. I wish I'd have seen it before, as we had to discover all this from various sites and doing "some" trial and error. We create Hebrew eBooks – HeBooks, as we call them – so embedding is essential for some readers, especially RMSDK based ones. Otherwise, ADE and Sony Reader just show question marks. Alas, embedding is not enough: ADE and Sony do not support RTL, so the Hebrew text is displayed the other way around. Adobe couldn't care less. I even tried through insiders, and tried to argue that they cannot claim they fully support ePub when Stanza, FBReader and EPUBReader read Hebrew beautifully. Nada. So please be aware that font embedding is not always enough to work around the missing RMSDK foreign language support.

2. Why do you recommend including only one fallback?
We use the embedded font itself as the last fallback, as in:

.regular {
  font-size: 100%;
  font-family: 'Times New Roman', Times, serif, 'sans serif', 'DejaVu Sans';
}

The idea is that the embedded font would be used (as it is used in ADE) only if all other options do not include Hebrew characters. Do you see anything wrong with that? Thank you. Read about us if you will at

Liza Daly: Yaron, I'm definitely aware of the problem with RTL text in RMSDK but I'm glad that you're applying pressure! Mostly I recommend just one fallback because it simplifies things. With the huge range of reading devices out there I don't think it's worth publishers' time trying out different fonts without knowing how they'll display. Your fallback rules are surprising, but I'm glad that ADE recognizes that the other font names don't accurately represent the content (I'd be concerned that it would just stop at 'serif'). That seems like a sensible approach if you really would prefer it not use DejaVu.

Abhinav Singh: I have converted a file from a Word doc to epub using InDesign CS4. It shows errors like:

1. ERROR: EPUB_7MAY.epub/META-INF/encryption.xml(2): attribute "compression" from namespace "" not allowed at this point; ignored
2. ERROR: EPUB_7MAY.epub/META-INF/encryption.xml(8): attribute "compression" from namespace "" not allowed at this point; ignored
3. ERROR: EPUB_7MAY.epub/META-INF/encryption.xml(14): attribute "compression" from namespace "" not allowed at this point; ignored
4. ERROR: EPUB_7MAY.epub/OEBPS/content.opf(10): date value '' is not valid, YYYY[-MM[-DD]] expected
5. ERROR: EPUB_7MAY.epub/OEBPS/toc.ncx(13): unfinished element
6. ERROR: EPUB_7MAY.epub/OEBPS/toc.ncx(3): assertion failed: first playOrder value is not 1
7. Check finished with warnings or errors!

Please tell me what I should do.

Aross: I actually have the exact same problem as Abhinav; any help would really be appreciated!
Liza Daly: That is InDesign's form of font obfuscation, which does not produce valid EPUB files. Uncheck "embed fonts" when exporting from InDesign.

YKastell: Newbie remark: in the @font-face statement, it seems that the value for the src property must be relative to the root of the ePub file (e.g. if your .otf file is in a Fonts directory, it must be src: url(/Fonts/MyFont.otf);).
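Putting the thread's advice together, an embedding rule plus a single-fallback usage rule might look like the sketch below. This is illustrative only: the font file path, family name and fallback follow the conventions discussed in the comments, not any actual file from the post.

```css
/* Declare the embedded font; per YKastell's remark, the path
   is relative to the root of the EPUB file. */
@font-face {
  font-family: "DejaVu Sans";
  font-style: normal;
  font-weight: normal;
  src: url(/Fonts/DejaVuSans.otf);
}

/* Use the embedded face with one generic fallback, as recommended. */
body {
  font-family: "DejaVu Sans", sans-serif;
}
```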
https://www.safaribooksonline.com/blog/2009/09/16/how-to-embed-fonts-in-epub-files/
NAME

OSSL_LIB_CTX, OSSL_LIB_CTX_new, OSSL_LIB_CTX_new_from_dispatch, OSSL_LIB_CTX_new_child, OSSL_LIB_CTX_free, OSSL_LIB_CTX_load_config, OSSL_LIB_CTX_get0_global_default, OSSL_LIB_CTX_set0_default - OpenSSL library context

SYNOPSIS

    #include <openssl/crypto.h>

    typedef struct ossl_lib_ctx_st OSSL_LIB_CTX;

    OSSL_LIB_CTX *OSSL_LIB_CTX_new(void);
    OSSL_LIB_CTX *OSSL_LIB_CTX_new_from_dispatch(const OSSL_CORE_HANDLE *handle,
                                                 const OSSL_DISPATCH *in);
    OSSL_LIB_CTX *OSSL_LIB_CTX_new_child(const OSSL_CORE_HANDLE *handle,
                                         const OSSL_DISPATCH *in);
    int OSSL_LIB_CTX_load_config(OSSL_LIB_CTX *ctx, const char *config_file);
    void OSSL_LIB_CTX_free(OSSL_LIB_CTX *ctx);
    OSSL_LIB_CTX *OSSL_LIB_CTX_get0_global_default(void);
    OSSL_LIB_CTX *OSSL_LIB_CTX_set0_default(OSSL_LIB_CTX *ctx);

DESCRIPTION

OSSL_LIB_CTX is an internal OpenSSL library context type. Applications may allocate their own, but may also use NULL to use a default context with functions that take an OSSL_LIB_CTX argument.

When a non-default library context is in use, care should be taken with multi-threaded applications to properly clean up thread-local resources before the OSSL_LIB_CTX is freed. See OPENSSL_thread_stop_ex(3) for more information.

OSSL_LIB_CTX_new() creates a new OpenSSL library context.

OSSL_LIB_CTX_new_from_dispatch() creates a new OpenSSL library context initialised to use callbacks from the OSSL_DISPATCH structure. This is primarily useful for provider authors. The handle and dispatch structure arguments passed should be the same ones as passed to a provider's OSSL_provider_init function. Some OpenSSL functions, such as BIO_new_from_core_bio(3), require the library context to be created in this way in order to work.

OSSL_LIB_CTX_new_child() is only useful to provider authors and does the same thing as OSSL_LIB_CTX_new_from_dispatch() except that it additionally links the new library context to the application library context.
The new library context is a full library context in its own right, but will have all the same providers available to it that are available in the application library context (without having to reload them). If the application loads or unloads providers from the application library context then this will be automatically mirrored in the child library context.

In addition, providers that are not loaded in the parent library context can be explicitly loaded into the child library context independently from the parent library context. Providers loaded independently in this way will not be mirrored in the parent library context and will not be affected if the parent library context subsequently loads the same provider.

A provider may call the function OSSL_PROVIDER_load(3) with the child library context as required. If the provider already exists due to it being mirrored from the parent library context then it will remain available and its reference count will be increased. If OSSL_PROVIDER_load(3) is called in this way then OSSL_PROVIDER_unload(3) should be subsequently called to decrement the reference count. OSSL_PROVIDER_unload(3) must not be called for a provider in the child library context that did not have an earlier OSSL_PROVIDER_load(3) call for that provider in that child library context.

In addition to providers, a child library context will also mirror the default properties (set via EVP_set_default_properties(3)) from the parent library context. If EVP_set_default_properties(3) is called directly on a child library context then the new properties will override anything from the parent library context and mirroring of the properties will stop.

When OSSL_LIB_CTX_new_child() is called from within the scope of a provider's OSSL_provider_init function, the currently initialising provider is not yet available in the application's library context and therefore will similarly not yet be available in the newly constructed child library context.
As soon as the OSSL_provider_init function returns, the new provider is available in the application's library context and will be similarly mirrored in the child library context.

OSSL_LIB_CTX_load_config() loads a configuration file using the given ctx. This can be used to associate a library context with providers that are loaded from a configuration.

OSSL_LIB_CTX_free() frees the given ctx, unless it happens to be the default OpenSSL library context.

OSSL_LIB_CTX_get0_global_default() returns a concrete (non NULL) reference to the global default library context.

OSSL_LIB_CTX_set0_default() sets the default OpenSSL library context to be ctx in the current thread. The previous default library context is returned. Care should be taken by the caller to restore the previous default library context with a subsequent call of this function. If ctx is NULL then no change is made to the default library context, but a pointer to the current library context is still returned. On a successful call of this function the returned value will always be a concrete (non NULL) library context.

Care should be taken when changing the default library context and starting async jobs (see ASYNC_start_job(3)), as the default library context when the job is started will be used throughout the lifetime of the async job, no matter how the calling thread makes further default library context changes in the meantime. This means that the calling thread must not free the library context that was the default at the start of the async job before that job has finished.

RETURN VALUES

OSSL_LIB_CTX_new(), OSSL_LIB_CTX_get0_global_default() and OSSL_LIB_CTX_set0_default() return a library context pointer on success, or NULL on error. OSSL_LIB_CTX_free() doesn't return any value.

HISTORY

All of the functions described on this page were added in OpenSSL 3.0.

Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License.
You can obtain a copy in the file LICENSE in the source distribution or at.
https://www.openssl.org/docs/man3.0/man3/OSSL_LIB_CTX.html
Profiling

This page aims to give an overview of the various tools available for profiling the game (i.e. measuring speed and resource usage), and some details on how to use them.

In-game profiler

When the game is running, press F11 once to display the profiler. This is hierarchical: some rows have a digit to their left, and pressing the corresponding key will drill down into that row and show timings for the sub-sections within it. (Press 0 to go back up a level). Rows in white are from C++, rows in red are from scripts. Only code running on the main thread is counted. The columns are:

- calls/frame - number of times that section has been entered in a single frame (averaged over the past 30 frames). A frame corresponds to a single iteration of the main game loop, usually clamped to a maximum 60fps by vsync.
- msec/frame - total amount of time spent inside that section per frame (summed for all calls; averaged over the past 30 frames).
- mallocs/frame - number of memory allocations inside that section per frame. Only works when the game is compiled in debug mode - in release mode it's always 0. Might not work on Windows at all.
- calls/turn - number of times called in a single simulation turn (not averaged). A simulation turn typically occurs every 200ms or 500ms or so, runs all of the gameplay update code, and corresponds to a variable number of frames, so this is more useful than calls/frame for measuring code that only gets called during the simulation turn.
- msec/turn - same idea.
- mallocs/turn - same idea again.

To use this profiler in code, do:

    #include "ps/Profile.h"

then

    {
        PROFILE("section name");
        ... code to measure ...
    }

and it will measure all code from the PROFILE until the end of the current scope. (You can also use PROFILE_START("foo"); ... PROFILE_END("foo");, which automatically add scoping braces.)
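The scoped measurement that PROFILE relies on is the standard C++ RAII pattern: an object records the start time in its constructor and reports elapsed time in its destructor when the scope exits. A minimal standalone sketch of that pattern (not the engine's actual ps/Profile.h implementation) might look like:

```cpp
#include <chrono>
#include <cstdio>
#include <string>

// A stack-allocated timer: construction records the start time,
// destruction (at the end of the enclosing scope) prints the elapsed time.
class ScopedProfile {
public:
    explicit ScopedProfile(std::string name)
        : name_(std::move(name)),
          start_(std::chrono::steady_clock::now()) {}

    ~ScopedProfile() {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      std::chrono::steady_clock::now() - start_)
                      .count();
        std::printf("%s: %lld us\n", name_.c_str(),
                    static_cast<long long>(us));
    }

private:
    std::string name_;
    std::chrono::steady_clock::time_point start_;
};
```

Placing a `ScopedProfile` object at the top of a block then measures everything up to the closing brace, which is the same behaviour the PROFILE macro provides.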
The profiler is relatively expensive; you usually shouldn't put PROFILE in code that will be called thousands of times per frame, since the profiling overhead will dominate the code you're trying to measure.

Pressing F11 multiple times will toggle through different profiler modes (script data, network data, renderer data). Pressing Shift+F11 will save profile.txt in the game's logs folder (see GameDataPaths). Pressing it multiple times (without restarting the game) will append new measurements to that file.

Profiler2

The in-game profiler is designed to show average performance data at an instant in time. While this is adequate for some purposes, it's not as good for analyzing rapid fluctuations in game performance. It also doesn't support multiple threads, so only the engine thread can be analyzed. To solve these shortcomings, profiler2 was created. Profiler2 collects profiling data across multiple threads and runs a small web server in the engine. When enabled, an HTML page and script can request, analyze and render this profiling data. Profiler2 is currently enabled by pressing Ctrl+F11 while the game is open, similar to the in-game profiler. Then the HTML page included with the profiler2 tool (source/tools/profiler2/profiler2.html) is opened in a web browser supporting HTML5.

Profiler2 supports the PROFILE2, PROFILE2_IFSPIKE and PROFILE2_AGGREGATED macros. There is also PROFILE2_EVENT to record events in profiler2, and PROFILE2_ATTR to add printf-style strings to the current region or event (seen in tooltips when hovering over the profiler2 rendered output). This allows displaying more contextual data than would typically be possible in a profiler. For convenience, using PROFILE3 will measure with both the in-game profiler and the new profiler2. For more detailed GPU profiling data on some systems, PROFILE2_GPU and PROFILE3_GPU can be used similarly.
Additional information on Profiler2, such as screenshots and explanations of the HTML interface, can be found at Profiler2

Low-overhead timer

A less fancy way to measure sections of code is:

    #include "lib/timer.h"
    ...
    {
        TIMER(L"description");
        ... code to measure ...
    }

which will record the time from TIMER until the end of the current scope, then immediately print it via debug_printf (which goes to stdout on Unix, and the debugger Output window on Windows).

To measure cumulative time spent in a section of code:

    #include "lib/timer.h"
    TIMER_ADD_CLIENT(tc_SomeDescription);
    ...
    {
        TIMER_ACCRUE(tc_SomeDescription);
        ... code to measure ...
    }

which will sum the time spent, then print it via debug_printf once when the game is shutting down. The output will be like

    TIMER TOTALS (9 clients)
    -----------------------------------------------------
      tc_SomeDescription: 10.265 Mc (12x)
    ...

saying the cumulative time, measured in CPU cycles (kc = kilocycles (10^3), Mc = megacycles (10^6), etc). (It measures cycles since that's cheaper and more reliable than converting to seconds, given that clock frequency can change while the game is running. Divide by your CPU frequency manually to get real time). This should have significantly lower overhead than PROFILE, so you can use it in functions that get called more often.

Simulation replay mode

If you're not measuring graphics or GUI code, replay mode lets you run the simulation (the gameplay code and scripts and the AI etc) at maximum speed with all graphics disabled. This should be fully deterministic (i.e. you'll get the same results on each run) and doesn't need user interaction (so you can easily run it in Valgrind). It even allows you to create nice graphs for comparing the performance before and after your changes!

First, play the game normally, in either single-player or multiplayer. It will save a replay file replays/${DATE}_${INDEX}/commands.txt (see GameDataPaths) based on the current date.
This contains the map setup data and a list of all players' inputs, so the game can be replayed. You might want to copy the commands.txt to somewhere memorable.

Run the game like ./pyrogenesis -mod=mod -mod=public -replay=path/to/commands.txt (or something equivalent on Windows). It will print its status to stdout (or the Output window on Windows). It gives the MD5 checksum of the complete simulation state once it's finished, which is handy if you're changing the code and want to make sure you're not affecting the simulation behaviour.

Run in a profiler to measure whatever you want to measure.

Creating graphs

The replay mode also stores the in-game profiler state in profile.txt every 20 turns. There's a script in source/tools/replayprofile/ which can plot a graph from that file.

- Check source/tools/replayprofile/extract.pl if it points to the right profile.txt (you can also change it to point to another working copy, but remember not to accidentally commit this change later)
- Optionally apply the attached patch and define a filter for some of the events (again, don't commit your changes!)
- Run perl extract.pl > data.js
- Optionally make a second measurement for your modified code using the same commands.txt and extracting the data to data_1.js
- Optionally copy and paste lines from data_1.js to data.js and give them a sensible name (of course you can paste as many lines from as many other data files as you want)
- Open graph.html

It will look similar to this:

SpiderMonkey Tracelogger

SpiderMonkey, the JavaScript engine we are using, has some of its own performance profiling capabilities for analyzing JS performance.

Characteristics

Compared to the other profiling tools and approaches, the Tracelogger has the following characteristics:

- Shows in which optimization mode the JS code runs (Interpreter, Baseline, Ion Monkey). This is the only profiler listed here that shows this information.
Functions which run in Interpreter or Baseline mode too much could be a performance issue (also read the part below about inlining).
- Shows how many times functions got compiled by SpiderMonkey.
- Shows how many times functions got called in total (also read the part below about inlining).
- Shows total runtime percentages for each function (also read the part below about C++ functions).
- There is a small overhead, but it's small enough that you can still play games as normal. The overhead is mainly caused by flushing profiling data to the disk. It reduces performance by around 5%, and you see on the output where the flushing happens, so it's not a big problem.
- The Tracelogger only profiles JavaScript functions (also read the part below about C++ functions).
- Getting profile data from longer games can require quite a lot of hard disk space (around 10-15 GB).
- Larger profile data has to be reduced before it can be viewed. This can take quite a while (up to an hour).

Inlining

You have to be a bit careful with your conclusions because SpiderMonkey sometimes inlines functions. If you get much lower numbers of calls for a function than you would expect, then it could be because the function got inlined in a parent function. In this case only the calls of the parent function are counted. Also take a look at the number of calls if you see that a function runs in Baseline mode most of the time. Very low numbers of calls are an indication that it probably got inlined. In this case it's normal that it runs in Baseline most of the time before inlining happens.

C++ functions

C++ functions on a C++-only callstack are not shown and are ignored in the runtime percentages you see. Time spent in C++ functions which are called from JS functions is included, but not measured separately (these functions count towards the JS function that calls them).

Using the Tracelogger

When you use it the first time, you can just do all the steps described below in order.
Enabling tracelogging

This is done through environment variables. You can use the script located at ps/trunk/source/tools/tracelogger/tracelogger_options.sh to load default options. Note: Windows versions seem to have inconsistencies with respect to the use of semicolons to delimit values.

Getting the Tracelogger

The tool to view tracelogging data is not included in SpiderMonkey or bundled with 0 A.D. You can get it from git: git clone tracelogger

The version with git hash 1c67e97e794b5039d0cae95f72ea0c76e4aa4696 was successfully tested with our version of SpiderMonkey (in case future versions aren't compatible anymore).

Measuring

When SpiderMonkey is built with tracelogging enabled, all you need to do is build the game and run the test.

Reducing and viewing data

Data can become quite large (several GB). You could view this data directly in the browser, but it would take forever to load, and you usually want to reduce the data before viewing it. Use the reduce.py file from the git checkout. Pypy is a lot faster than Python, but some versions of pypy have a bug. You can try with pypy, but if the output files are only 24 bytes, then you are affected by the bug and should probably use Python instead. The output path points to a directory and contains a prefix to use for the reduced files ("reduced" in this example).

    pypy tools_v2/reduce.py /path/to/tracelogging/data/tl-data.json /path/to/output/reduced

To view the data, copy server.py from the website directory to /path/to/output/. Start the script so the data gets fed automatically to the tracelogging tool, then open tracelogger.html in a browser and select the desired file. Finally, select a thread from the list.

Valgrind

Callgrind with KCacheGrind is quite nice for detailed (though not hugely accurate) profiling. Valgrind often doesn't like SpiderMonkey's JITs, so you may have to use valgrind --smc-check=all ... if it otherwise crashes.
Very Sleepy

A Win32 tool, very fast and easy to use and set up: download

Graphic performance tools

Nowadays, GPU profilers are specific tools allowing very good insight into where and how time is spent by the GPU: Intel GPA, AMD CodeXL, Nvidia Nsight, Microsoft xperf.

Attachments (3)

- filter_data.diff (851 bytes) - added by Yves 3 years ago. Changes to filter the output of extract.pl
- profile_24_1.gif (80.2 KB) - added by Yves 3 years ago. example profile graph
- tracelogger.png (177.1 KB) - added by Yves 20 months ago.
http://trac.wildfiregames.com/wiki/EngineProfiling
In this tutorial, we're gonna look at a React example that uses Router v4 for implementing navigation.

Goal

We will use React Router v4 to create a navigation bar in which clicking on any item will show the corresponding Component without reloading the site:

How to

Project Structure

Install Packages

We need react-router-dom to apply Router. Add it to dependencies in package.json:

    {
      ...
      "dependencies": {
        ...
        "react-router-dom": "4.2.2"
      }
    }

Then run yarn install.

Create Components

– For each Page that displays when clicking on a Navigation item, we add one Component. So we have 4 Components: Dashboard, AddBook, EditBook,
– We need a Component for 404 status; it's called NotFound.
– Now, we put a group of NavLink in the header of the Header Component:

    import React from 'react';
    import { NavLink } from 'react-router-dom';

    const Header = () => (
      ...
    );

    export default Header;

The links in the header are Dashboard, Add Book, Edit Book and Help. NavLink provides accessible navigation around our application; we use these attributes:
– to: location to link to.
– activeClassName: class to give the element when it is active.
– exact: when true, the activeClass will only be applied if the location is matched.

    const AppRouter = () => (
      ...
    );

    export default AppRouter;

– Assume that we have a server that will handle dynamic requests, so we use BrowserRouter. If you have a server that only serves static files, just use HashRouter.
– We use Switch to group Route components. It will iterate over its children and only render the first child that matches the current path.
– When Switch comes to the last Route, which doesn't specify a path, it will definitely render that component => we use this case for the 404 page.
Render App Router

Inside app.js:

    import React from 'react';
    import ReactDOM from 'react-dom';
    import AppRouter from './routers/AppRouter';
    import './styles/styles.scss';

    ReactDOM.render(<AppRouter />, document.getElementById('app'));

Source code

For running:
– yarn install
– yarn run dev-server or yarn run build, then yarn run serve.
https://grokonez.com/frontend/react-router-example-v4
Programming performance/Magnus Haskell

From HaskellWiki
Latest revision as of 17:53, 7 March 2007

- Language: Haskell
- Skill: Intermediate. I'm self taught in Haskell and use it as a hobbyist but haven't used it much for serious work.
- Time: 25 minutes.
- Notes: 10 minutes spent on a silly bug where I had typed 1 instead of 100. This code is not very pretty but it works. A while after submitting I noticed I used the wrong data column. It should say column 4, not 3 in closingPrice.

Code

    import Data.List (partition,foldl')

    closingPrice :: String -> Double
    closingPrice line = read (words line !! 3)

    sell assets price = sum (map (\(_,shares) -> shares*price) assets)

    tradeOnce (cash,assets,lastPrice) todayPrice =
      if todayPrice <= lastPrice * 97/100
        then let buyCost = cash*10/100
             in (cash - buyCost, (lastPrice, buyCost/lastPrice):assets, todayPrice)
        else let (toSell,toKeep) = partition (\(buyPrice,_) -> todayPrice >= buyPrice*106/100) assets
             in (cash + sell toSell todayPrice, toKeep, todayPrice)

    notComment ('#':rest) = False
    notComment _ = True

    trade input =
      let (firstPrice:restPrices) = reverse $ map closingPrice $ filter notComment $ lines input
      in foldl' tradeOnce (10000.0,[],firstPrice) restPrices

    main = do
      input <- readFile "gspc.txt"
      let (cash,toSell,lastPrice) = trade input
      let result = cash + sell toSell lastPrice
      print result

Bugs: Two minor bugs here. Purchases are made at lastPrice rather than todayPrice. The wrong column of the data is used. It should be the last column, not the 4th column. Newsham 17:53, 7 March 2007 (UTC)
https://wiki.haskell.org/index.php?title=Programming_performance/Magnus_Haskell&diff=prev&oldid=11812
I have a component that is similar to a carousel, only I'm just clicking through using React Hooks. I have a full-width image with a left and a right click arrow. (I only have the right arrow click for now. Also not worrying about conditionals just yet.)

- I am setting the images in the useState
- Setting the default image to be the 0 index of the state values.

    let indexValue = 0; // Initial slide index value
    let currentSlide = slides[indexValue];

Image, alt, and title display properly, cool.

- Created a right arrow click function that's supposed to update the image.

    const arrowRightClick = () => {
      currentSlide = slides[indexValue + 1];
      console.log(currentSlide);
    }

When I click on the right arrow, the console log does indeed display the next image, img2.jpg, however, the image itself never updates. ??? What am I doing wrong? I feel like I have to do useEffect somewhere. I've tried this (doesn't work):

    useEffect(() => {
      function changeSliderImage(currentSlide) {
        setSlides(currentSlide.setSlides);
      }
      return () => {
        <img src={currentSlide.source} alt={currentSlide.title} title={currentSlide.title} />
      }
    }, [])

I feel like it's close to something like the previous code, but I'm not sure. Full code below:

    import React, { useState, useEffect } from 'react';

    function Carousel() {
      const [slides, setSlides] = useState([
        { source: "../images/img1.jpg", title: "Half Moon Pier" },
        { source: "../images/img2.jpg", title: "Port Washington Rocks" },
        { source: "../images/img3.jpg", title: "Abandoned Rail" }
      ]);

      let indexValue = 0; // Initial slide index value
      let currentSlide = slides[indexValue]; // variable index value we can reference later

      // Index value moves up, but doesn't update the image. Why???
      const arrowRightClick = () => {
        currentSlide = slides[indexValue + 1];
        console.log(currentSlide);
      }

      return (
        <div className="carousel-block">
          <div className="flex-container">
            <div id="slider">
              <div className="slide">
                <img src={currentSlide.source} alt={currentSlide.title} title={currentSlide.title} />
                <div className="arrows">
                  <div id="arrow-left"><i className="fas fa-arrow-alt-circle-left"></i></div>
                  <div id="arrow-right" onClick={arrowRightClick}><i className="fas fa-arrow-alt-circle-right"></i></div>
                </div>
              </div>
            </div>
          </div>
        </div>
      )
    }

    export default Carousel;

Any help would be appreciated

Discussion
If the case is false, (Is at the last image, img3.jpg) go back to the first image, or the 0 index. Now update the slide with the following: Here is the left click below, which really does the same thing, conditionals are just slightly different. This doesn't have the CSS attached, but I think you can change it to however you would like. Here's the full code:
https://practicaldev-herokuapp-com.global.ssl.fastly.net/mitchelln11/help-with-image-click-through-like-a-carousel-using-react-hooks-4njg
NAME

namespace.conf - the namespace configuration file

DESCRIPTION

The /etc/security/namespace.init script receives the polyinstantiated directory path and the instance directory path as its arguments.

The /etc/security/namespace.conf file specifies which directories are polyinstantiated, how they are polyinstantiated, how instance directories would be named, and any users for whom polyinstantiation would not be performed.

When someone logs in, the file namespace.conf is scanned. Comments are marked by # characters. Each non-comment line represents one polyinstantiated directory. The fields are separated by spaces but can be quoted by " characters; also, the escape sequences \b, \n, and \t are recognized. The fields are as follows:

    polydir instance_prefix method list

The instance prefix is concatenated with a method-specific string to generate the final instance directory path. This directory is created if it did not exist already, and is then bind mounted on the <polydir> to provide an instance of <polydir> based on the <method> column. The special string $HOME is replaced with the user's home directory, and $USER with the username. This field cannot be blank.

EXAMPLES

    # Polyinstantiation will not be performed for user root
    # and adm for directories /tmp and /var/tmp, whereas home
    # directories will be polyinstantiated for all users.
    #
    # Note that instance directories do not have to reside inside
    # the polyinstantiated directory. In the examples below,
    # instances of /tmp will be created in /tmp-inst directory,
    # whereas instances of /var/tmp and users home directories
    # will reside within the directories that are being
    # polyinstantiated.
    #
    /tmp      /tmp-inst/              level    root,adm
    /var/tmp  /var/tmp/tmp-inst/      level    root,adm
    $HOME     $HOME/$USER.inst/inst-  context

For the <service>s you need polyinstantiation (login for example), put the following line in /etc/pam.d/<service> as the last line for the session group:

    session required pam_namespace.so [arguments]

This module also depends on pam_selinux.so setting the context.
SEE ALSO

pam_namespace(8), pam.d(5), pam(7)

AUTHORS

The namespace.conf manual page was written by Janak Desai <janak@us.ibm.com>. More features added by Tomas Mraz <tmraz@redhat.com>.
http://manpages.ubuntu.com/manpages/maverick/man5/namespace.conf.5.html
Load Balancing Across IP Subnets

One of my readers sent me this question:

I have a data center with huge L2 domains. I would like to move routing down to the top of the rack, however I'm stuck with a load-balancing question: how do load balancers work if you have a routed network and pool members that are multiple hops away? How is it possible to use that with Direct Return?

There are multiple ways to make load balancers work across multiple subnets:

- Make sure the load balancer is in the forwarding path from the server to the client, so the return traffic hits the load balancer, which translates the source (server) IP address. You usually need multiple forwarding domains (VLANs or VRFs) to make this work.
- Use source NAT, where the load balancer changes the client's IP address to the load balancer's IP address. As the return IP address belongs to the load balancer, the return (server-to-client) traffic goes through the load balancer even when it's not in the forwarding path.
- With Direct Server Return (DSR), use IP-over-IP tunneling (or whatever tunneling mechanism is supported by both load balancer and server) to get the client packets from the load balancer to the desired server. The return traffic is sent from the server straight to the client anyway.

Haven't heard about Direct Server Return? Don't worry, you'll find all you need to know in this short video:

More information

- The Data Center 3.0 webinar has a whole (2-hour) section on scale-out architectures and load balancing;
- Greg Ferro wrote extensively about SNAT;
- You'll find in-depth details of DSR in the Linux Virtual Server documentation.

By Ivan Pepelnjak

6 comments:

Ouch, it's not that simple. Well it is, but this does have implications.
Using source NAT will obviously remove the client IP address. If the load balancer doesn't add e.g. X-Forwarded-For headers (for HTTP), the client IP is obviously lost. This also requires the client application to know how such headers are to be parsed: X-Forwarded-For is a chained list of IP addresses, and the "seen" client IP (the load balancer) needs to be evaluated as well. Only trust X-Forwarded-For if the "seen" client IP is a trusted load balancer. This becomes an issue if a client is behind a proxy: the proxy will add an X-Forwarded-For header to the outgoing request, and the load balancer will add the proxy's IP address to this header. The application needs to know such details, otherwise there's a potential for error.

Using IP-over-IP tunneling is a different story: if your network's MTU is 1500, an IPv4-over-IPv4 header will reduce the available MTU by 20 octets, down to 1480 octets. If the client's request is larger than this (e.g. a typical 1500-sized packet during file upload), the TCP packet has the DF bit set, and the client is sitting behind some (broken) firewall silently dropping ICMP packets, the client will experience "something doesn't work" issues.

There are also potential security issues with IP-over-IP. As a good network engineer, you do enforce egress filtering, reverse path filtering and the like to protect the internet from spoofed outgoing traffic. If one is combining IP-over-IP tunneling to traverse different L2/L3 domains and wants to use direct server return for the replies, they essentially need to remove some of those security measures. You also need to be aware that you'll be introducing asymmetric traffic into your network, which may complicate debugging and may break more easily - and as we're talking about load balancers, we're also talking about high-volume or highly available services, who don't want to be faced with a risk of "complicated debugging".
So if you need to do IP-over-IP tunneling, one may also want to re-route any replies back via the load balancer's network, e.g. via another tunneled connection. This makes things easier for the network, but it may introduce another layer of complexity for the server.

All true ... and then there's the problem of SSL and X-Forwarded-For headers (you need to decrypt and potentially re-encrypt), the loss of performance if you have to modify TCP sessions (at least on high-end load balancers) ...

By design, SSL/TLS offloading or re-encrypting at the load balancer is a willingly accepted man-in-the-middle attack. So usually it's not an option for exactly those reasons where one wants to make use of SSL/TLS. Surprisingly, a lot of offloading/re-encrypting solutions don't check whether the server's certificate is even valid, and by using them, you're accepting any further MITM attacks downstream between load balancer and server. Ultimately, it's tricking your clients into assuming some security (see, the lock is closed and certified...) which is actually missing. ;))

Can't tell you how much I agree with you ;)

An overlay network could be used to provide the tunnelling, with ACLs at the virtual network level to restore security.
Regards, Jeroen van Bemmel (Customer support engineer @ Nuage Networks)

Agreed - effectively you're saying "keep a VLAN-like construct in place, but emulate it over IP to make the underlying transport fabric more stable." That would work well if the load balancer and servers are virtualized, but not so much if they happen to be appliances or bare-metal servers (where you'll have to use on-ramp/off-ramp L2 gateways, increasing the complexity of the solution).
https://blog.ipspace.net/2014/05/load-balancing-across-ip-subnets.html?showComment=1400240378020
henning 2003/06/07 10:57:17 Modified: configuration/xdocs tasks.xml Log: Adding some todo items to the task list Revision Changes Path 1.2 +17 -1 jakarta-commons-sandbox/configuration/xdocs/tasks.xml Index: tasks.xml =================================================================== RCS file: /home/cvs/jakarta-commons-sandbox/configuration/xdocs/tasks.xml,v retrieving revision 1.1 retrieving revision 1.2 diff -u -r1.1 -r1.2 --- tasks.xml 2 Jun 2003 18:01:01 -0000 1.1 +++ tasks.xml 7 Jun 2003 17:57:17 -0000 1.2 @@ -14,7 +14,23 @@ </p> <subsection name="High priority"> - + <ul> + <li>Using XML based digester rules will probably blow this bugger + apart. You do need a factory from an inner class of the + Configuration factory to get the base Path right; even if + the test with the rule configuration file works, I'm pretty + sure that the resulting rule set will not read configuration + files if you're not running from the current (".") directory + as base path. This needs more thinking and checking.</li> + + <li>I'm also 99% sure that using XML based Digester rules and + Namespace awareness in the configuration file can't be used + together. As far as I can see, the namespace awareness must + be set/reset before the rules are added. In the case of the + DigestLoader (using a DigesterRules() URI), the rules are + already added when configureNamespace() is called on the + digester. This might even be a bug in the Digester itself. + </li> </subsection> <subsection name="Medium priority"> --------------------------------------------------------------------- To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org For additional commands, e-mail: commons-dev-help@jakarta.apache.org
http://mail-archives.apache.org/mod_mbox/commons-dev/200306.mbox/%3C20030607175717.92919.qmail@icarus.apache.org%3E
Hello Bash developers!

Configuration: nashi 3.19.0-61-generic #69~14.04.1-Ubuntu SMP Thu Jun 9 09:09:13 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-unknown-linux-gnu

Repeat-By:

    $ uname
    Linux
    $ fc -e true uname
    Linux
    $ echo $?
    1
    $ # Would have expected 0 here: successful re-invocation

Fix:

This is my first dive into Bash's sources; patch follows and is also attached to this mail. Please let me know if you need anything else. As a longtime Bash user, I'm very happy to contribute to my favorite shell!

-- regards, ingo

From 1cf392a401c67c2f8437f2da459dfcf0f675dc55 Mon Sep 17 00:00:00 2001
From: Ingo Karkat <address@hidden>
Date: Thu, 30 Jun 2016 12:30:59 +0200
Subject: [PATCH 1/1] Exit status of fc -e is the wrong way around

fc_execute_file() delegates to _evalfile(), which only returns the result
of the file's execution if FEVAL_BUILTIN is set (exemplified by
source_file()). If unset, an indication of whether the file exists is
returned instead (exemplified by maybe_execute_file(), which is used for
the .bash_profile, .bash_login, .profile optional init chain). According
to the manual (and common sense), fc -e editor should return the recalled
command's success. For that, the FEVAL_BUILTIN flag needs to be set.
---
 builtins/evalfile.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/builtins/evalfile.c b/builtins/evalfile.c
index 058d99d..e5c118b 100644
--- a/builtins/evalfile.c
+++ b/builtins/evalfile.c
@@ -331,7 +331,7 @@ fc_execute_file (filename)
   /* We want these commands to show up in the history list if
      remember_on_history is set. */
 
-  flags = FEVAL_ENOENTOK|FEVAL_HISTORY|FEVAL_REGFILE;
+  flags = FEVAL_ENOENTOK|FEVAL_BUILTIN|FEVAL_HISTORY|FEVAL_REGFILE;
   return (_evalfile (filename, flags));
 }
 #endif /* HISTORY */
-- 
1.9.1
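The flag semantics the report describes can be modeled in a few lines of C. This is a simplified stand-in, not Bash's actual _evalfile() (whose real flag values live in the Bash sources); the constant values and the helper name are invented for illustration. Without FEVAL_BUILTIN the caller only learns that the file was found and run, so fc reports 1 regardless of the command's outcome; with the flag, the command's own exit status propagates.

```c
#include <stdio.h>

/* Stand-in flag bits; the real values are defined in Bash's sources. */
#define FEVAL_ENOENTOK 0x001
#define FEVAL_BUILTIN  0x002

/* Simplified model of _evalfile()'s return value (not Bash's actual
 * code): without FEVAL_BUILTIN, a readable file always yields 1 ("file
 * found"); with it, the exit status of the file's commands is returned. */
int evalfile_model(int file_readable, int exec_status, int flags)
{
    if (!file_readable)
        return (flags & FEVAL_ENOENTOK) ? 1 : -1; /* missing file tolerated? */
    return (flags & FEVAL_BUILTIN) ? exec_status : 1;
}
```

Under this model, a successful command (status 0) run through the old flag set still comes back as 1, which is exactly the `echo $?` result in the Repeat-By transcript; adding FEVAL_BUILTIN makes the 0 propagate.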
https://lists.gnu.org/archive/html/bug-bash/2016-06/msg00129.html
MATLAB Compiler 4 Manual

The package should include the following:

- Your software (the standalone or shared library)
- The CTF archive that MATLAB Compiler created (component_name.ctf)
- MCRInstaller.exe, which is located in the following directory:
  matlabroot\toolbox\compiler\deploy\win32

Creating a Package for Users Who Do Not Use Windows

The package should include the following:

- The standalone or shared library that you created with MATLAB Compiler
- The CTF archive that MATLAB Compiler creates for your component

Configuring the Development Environment by Installing the MCR

To test software created by MATLAB Compiler as it will be used by end users without MATLAB, programmers must install the MCR, if it is not already installed on the development machine, and set path environment variables properly.

Configuring on Windows Platforms

1. Open the package created by you or the Deployment Tool.
2. Run MCRInstaller once on the machine where you want to develop the application or library. MCRInstaller opens a command window and begins preparation for the installation.
3. Add the required platform-specific directories to your dynamic library path. See Directories Required for Run-Time Deployment on page 9-5.
Configuring on Platforms Other Than Windows

1. Install the MCR by unzipping MCRInstaller.zip in a directory, for example, /home/username/MCR. You may choose any directory except matlabroot or any subdirectory of matlabroot.
2. Copy the component and CTF archive to your application root directory, for example, /home/username/approot.

For more information about this...            Look here
Detailed information on standalone applications: Chapter 6, Standalone Applications
Creating libraries:                            Chapter 7, Libraries
Using the mcc command:                         Chapter 5, Compiler Commands
Troubleshooting:                               Chapter 8, Troubleshooting

This chapter describes the system requirements for MATLAB Compiler. It also contains installation and configuration information for all supported platforms. When you install your ANSI C or C++ compiler, you may be required to provide specific configuration details regarding your system. This chapter contains information for each platform that can help you during this phase of the installation process.

- Requirements (p. 2-2): Software requirements for MATLAB Compiler and a supported C/C++ compiler
- Steps to install MATLAB Compiler and a supported C/C++ compiler
- Configuring a supported C/C++ compiler to work with MATLAB Compiler
- Known limitations of the supported C/C++ compilers
- More detailed information on MATLAB Compiler options files for users who need to know more about how they work

Overview of MATLAB Compiler Technology

In this section:
- MATLAB Component Runtime on page 3-2
- Component Technology File on page 3-2
- Build Process on page 3-3

MATLAB Component Runtime

MATLAB Compiler 4 uses the MATLAB Component Runtime (MCR), which is a standalone set of shared libraries that enable the execution of M-files. The MCR provides complete support for all features of the MATLAB language.

Note.

Component Technology File

MATLAB Compiler 4 also uses a Component Technology File (CTF) archive to house the deployable package.
All M-files are encrypted in the CTF archive using the Advanced Encryption Standard (AES) cryptosystem, where symmetric keys are protected by 1024-bit RSA keys. Each application or shared library produced by MATLAB Compiler has an associated CTF archive. The archive contains all the MATLAB based content (M-files, MEX-files, etc.) associated with the component. When the CTF archive is extracted on a user's system, the files remain encrypted.

Additional Details

Multiple CTF archives, such as COM, .NET, or Excel components, can coexist in the same user application, but you cannot mix and match the M-files they contain. You cannot combine encrypted and compressed M-files from multiple CTF archives into another CTF archive and distribute them. All the M-files from a given CTF archive are locked together with a unique cryptographic key. M-files with different keys will not execute if placed in the same CTF archive. If you want to generate another application with a different mix of M-files, you must recompile these M-files into a new CTF archive.

The CTF archive and generated binary will be cleaned up following a failed compilation, but only if these files did not exist before compilation was initiated.

2. Set the path environment variables properly. See Directories Required for Run-Time Deployment on page 9-5.
3. When you deploy a Java application to end users, they must set the class path on the target machine.

Using the MCR Installer GUI

1. When the MCR Installer wizard appears, click Next to begin the installation. Click Next to continue.
2. In the Select Installation Folder dialog box, specify where you want to install the MCR and whether you want to install the MCR for just yourself or others. Click Next to continue.

Note: The Install MATLAB Component Runtime for yourself, or for anyone who uses this computer option is not implemented for this release. The current default is Everyone.

3. Confirm your selections by clicking Next. The installation begins.
The process takes some time due to the quantity of files that are installed. The MCRInstaller automatically:

- Copies the necessary files to the target directory you specified.
- Registers the components as needed.
- Updates the system path to point to the MCR binary directory, which is <target_directory>/<version>/runtime/bin/win32.

4. When the installation completes, click Close on the Installation Completed dialog box to exit.

What Software Does the End User Need?

The software required by end users depends on which of the following kinds of software is to be run by the user:

- Standalone Compiled Application That Accesses Shared Library on page 4-12
- .NET Application on page 4-13
- COM Application on page 4-14
- Java Application on page 4-14
- Microsoft Excel Add-In on page 4-15

Standalone Compiled Application That Accesses Shared Library

To distribute a shared library created with MATLAB Compiler to end users, create a package that includes the following files.

Component: MCRInstaller.zip
Description: MATLAB Component Runtime library archive; platform-dependent file that must correspond to the end user's platform.
Using Graphical Applications in Shared Library Targets When deploying a GUI as a shared library to a C/C++ application, use mclWaitForFiguresToDie to display the GUI until it is explicitly terminated. Using the VER Function in a Compiled MATLAB Application When you use the VER function in a compiled MATLAB application, it will perform with the same functionality as if you had called it from MATLAB. However, be aware that when using VER in a compiled MATLAB application, only version information for toolboxes which the compiled application uses will be displayed. This chapter describes how to use MATLAB Compiler to code and build standalone applications. You can distribute standalone applications to users who do not have MATLAB on their systems. Introduction (p. 6-2) Overview of using MATLAB Compiler to build standalone applications Examples of using MATLAB Compiler to generate and deploy standalone C applications Creating standalone applications from M-files Creating applications from M-files and C or C++ code C Standalone Application Target (p. 6-3) Coding with M-Files Only (p. 6-11) Mixing M-Files and C or C++ (p. 6-13) Introduction Suppose you want to create an application that calculates the rank of a large magic square. One way to create this application is to code the whole application in C or C++; however, this would require writing your own magic square, rank, and singular value routines. An easier way to create this application is to write it as one or more M-files, taking advantage of the power of MATLAB and its tools. You can create MATLAB applications that take advantage of the mathematical functions of MATLAB, yet do not require that end users own MATLAB. Standalone applications are a convenient way to package the power of MATLAB and to distribute a customized application to your users. The source code for standalone C applications consists either entirely of M-files or some combination of M-files, MEX-files, and C or C++ source code files. 
MATLAB Compiler takes your M-files and generates C source code functions that allow your M-files to be invoked from outside of interactive MATLAB. After compiling this C source code, the resulting object file is linked with the run-time libraries. A similar process is used to create C++ standalone applications. You can call MEX-files from MATLAB Compiler generated standalone applications. The MEX-files will then be loaded and called by the standalone code.

Note: If you include compiled M-code into a larger application, you must produce a library wrapper file even if you do not actually create a separate library. For more information on creating libraries, see Chapter 7, Libraries.

Simple Example

This example involves mixing M-files and C code. Consider a simple application whose source code consists of mrank.m, mrankp.c, main_for_lib.c, and main_for_lib.h. mrank.m contains a function that returns a vector of the ranks of the magic squares from 1 to n.

    function r = mrank(n)
    r = zeros(n,1);
    for k = 1:n
        r(k) = rank(magic(k));
    end

Copy mrank.m, printmatrix.m, mrankp.c, main_for_lib.c, and main_for_lib.h into your current directory. The steps needed to build this standalone application are:

1. Compile the M-code.
2. Generate the library wrapper file.
3. Create the binary.

To perform these steps, enter the following on a single line:

    mcc -W lib:libPkg -T link:exe mrank printmatrix mrankp.c main_for_lib.c

The following flow diagram shows the mixing of M-files and C-files that forms this sample standalone application. The top part of the diagram shows the mcc process and the lower part shows the mbuild process.

MATLAB Compiler generates the following C source code files:

    libPkg.c
    libPkg.h
    libPkg_mcc_component_data.c

This command invokes mbuild to compile the resulting MATLAB Compiler generated source files with the existing C source files (mrankp.c and main_for_lib.c) and link against the required libraries.
MATLAB Compiler provides two different versions of mrankp.c in the matlabroot/extern/examples/compiler directory:

- mrankp.c contains a POSIX-compliant main function. mrankp.c sends its output to the standard output stream and gathers its input from the standard input stream.
- mrankwin.c contains a Windows version of mrankp.c.

mrankp.c

The code in mrankp.c calls mrank and outputs the values that mrank returns.

    /*
     * MRANKP.C
     * "Posix" C main program
     * Calls mlfMrank, obtained by using MCC to compile mrank.m.
     *
     * $Revision: 1.1.4.32.2.3 $
     */

    #include <stdio.h>
    #include <math.h>
    #include "libPkg.h"

    main( int argc, char **argv )
    {
        mxArray *N;         /* Matrix containing n. */
        mxArray *R = NULL;  /* Result matrix. */
        int n;              /* Integer parameter from command line. */

        /* Get any command line parameter. */
        if (argc >= 2) {
            n = atoi(argv[1]);
        } else {
            n = 12;
        }

        mclInitializeApplication(NULL,0);
        libPkgInitialize();  /* Initialize library of M-functions */

        /* Create a 1-by-1 matrix containing n. */
        N = mxCreateDoubleScalar(n);

        /* Call mlfMrank, the compiled version of mrank.m. */
        mlfMrank(1, &R, N);

        /* Print the results. */
        mlfPrintmatrix(R);

        /* Free the matrices allocated during this computation. */
        mxDestroyArray(N);
        mxDestroyArray(R);

        libPkgTerminate();   /* Terminate library of M-functions */
        mclTerminateApplication();
    }
extern void mlfMultarg(int nargout, mxArray** a, mxArray** b, mxArray* x, mxArray* y); This C function header shows two input arguments (mxArray* x and mxArray* y) and two output arguments (the return value and mxArray** b). Use mxCreateDoubleMatrix to create the two input matrices (x and y). Both x and y contain real and imaginary components. The memcpy function initializes the components, for example: x = mxCreateDoubleMatrix(,ROWS, COLS, mxCOMPLEX); memcpy(mxGetPr(x), x_pr, ROWS * COLS * sizeof(double)); memcpy(mxGetPi(y), x_pi ROWS * COLS * sizeof(double)); The code in this example initializes variable x from two arrays (x_pr and x_pi) of predefined constants. A more realistic example would read the array values from a data file or a database. After creating the input matrices, main calls mlfMultarg. mlfMultarg(2, &a, &b, x, y); The mlfMultarg function returns matrices a and b. a has both real and imaginary components; b is a scalar having only a real component. The program uses mlfPrintmatrix to output the matrices, for example: mlfPrintmatrix(a); This chapter describes how to use MATLAB Compiler to create libraries. Introduction (p. 7-2) Addressing mwArrays Above the 2 GB Limit (p. 7-3) C Shared Library Target (p. 7-4) C++ Shared Library Target (p. 7-17) MATLAB Compiler Generated Interface Functions (p. 7-23) Using C/C++ Shared Libraries on Mac OS X (p. 7-32) About Memory Management and Cleanup (p. 7-38) Overview of shared libraries How to enable extended addressing for mwArrays larger than 2 GB Creating and distributing C shared libraries Creating and distributing C++ shared libraries Using MATLAB Compiler generated interface functions Preparing a Mac OS X system to use MATLAB Compiler generated libraries Recommendations on memory management You can use MATLAB Compiler to create C or C++ shared libraries (DLLs on Windows) from your MATLAB algorithms. 
You can then write C or C++ programs that can call the MATLAB functions in the shared library, much like calling the functions from the MATLAB command line. MCR. Each of these routines has the same signature (for complete details, see Print and Error Handling Functions on page 7-27). By overriding the defaults, you can control how output is displayed and, for example, whether or not it goes into a log file. Note Before calling either form of the library initialization routine, you must first call mclInitializeApplication to set up the global MCR state. See Calling a Shared Library on page 7-11 for more information. On Microsoft Windows platforms, MATLAB Compiler CTF archive, without which the application will not run. If you modify the generated DllMain (which we do not recommend you do), make sure you preserve this part of its functionality. Library termination is simple. void libtriangleTerminate(void) Call this function (once for each library) before calling mclTerminateApplication. Print and Error Handling Functions By default, MATLAB Compiler generated applications and shared libraries send printed output to standard output and error messages to standard error. MATLAB Compiler generates a default print handler and a default error handler that implement this policy. If youd like to change this behavior, you must write your own error and print handlers and pass them in to the appropriate generated initialization function. You may replace either, both, or neither of these two functions. Note that the MCR sends all regular output through the print handler and all error output through the error handler. Therefore, if you redefine either of these functions, the MCR MCR calls the print handler when an executing M-file makes a request for printed output, e.g., via the MATLAB function disp. The print handler does not terminate the output with a carriage return or line feed. The default error handler has the same form as the print handler. 
About Memory Management and Cleanup

In this section:
- Overview on page 7-38
- Passing mxArrays to Shared Libraries on page 7-38

Generated C++ code provides consistent garbage collection via the object destructors, and the MCR's internal memory manager optimizes to avoid heap fragmentation. If memory constraints are still present on your system, try preallocating arrays in M. This will reduce the number of calls to the memory manager and the degree to which the heap fragments.

Passing mxArrays to Shared Libraries

When an mxArray is created in an application which uses the MCR, it is created in the managed memory space of the MCR. Therefore, it is very important that you never create mxArrays (or call any other MathWorks function) before calling mclInitializeApplication. It is safe to call mxDestroyArray when you no longer need a particular mxArray in your code, even when the input has been assigned to a persistent or global variable in MATLAB. See Calling a Shared Library on page 7-11.

- mbuild (p. 8-2): Issues involving the mbuild utility and creating standalone applications
- MATLAB Compiler (p. 8-4): Issues involving MATLAB Compiler
- Deployed Applications (p. 8-8): Issues that appear at run time

mbuild

This section identifies some of the more common problems that might occur when configuring mbuild to create standalone applications.

Options File Not Writeable. When you run mbuild -setup, mbuild makes a copy of the appropriate options file and writes some information to it. If the options file is not writeable, you are asked if you want to overwrite the existing options file. If you choose to do so, the existing options file is copied to a new location and a new options file is created.

Directory or File Not Writeable. If a destination directory or file is not writeable, ensure that the permissions are properly set. In certain cases, make sure that the file is not in use.

mbuild Generates Errors. Check the settings in the options file. For MS Visual Studio, for example, make sure to run vcvars32.bat (MSVC 6.x and earlier) or vsvars32.bat (MSVC 7.x).

mbuild Not a Recognized Command. If mbuild is not recognized, verify that matlabroot\bin is on your path. On UNIX, it may be necessary to rehash.

mbuild Works from Shell But Not from MATLAB (UNIX). If the command
For MS Visual Studio, for example, make sure to run vcvars32.bat (MSVC 6.x and earlier) or vsvars32.bat (MSVC 7.x). mbuild Not a Recognized Command. If mbuild is not recognized, verify that matlabroot\bin is on your path. On UNIX, it may be necessary to rehash. mbuild Works from Shell But Not from MATLAB (UNIX). If the command Note For matlabroot, substitute the MATLAB root directory on your system. Type matlabroot to see this directory name. Path for Java Development on All Platforms Note There are additional requirements when programming in Java. See Deploying Applications That Call the Java Native Libraries on page 5-25. Windows Settings for Development and Testing When programming with components that are generated with MATLAB Compiler, add the following directory to your system PATH environment variable: UNIX Settings for Development and Testing When programming with components that are generated with MATLAB Compiler, use the following commands to add the required platform-specific directories to your dynamic library path. Solaris64 matlabroot/sys/os/sol64: matlabroot/bin/sol64: matlabroot/sys/java/jre/sol64/jre1.6.0/lib/sparcv9/native_threads: matlabroot/sys/java/jre/sol64/jre1.6.0/lib/sparcv9/server: matlabroot/sys/java/jre/sol64/jre1.6.0/lib/sparcv9: matlabroot/sys/os/glnxa64: matlabroot/bin/glnxa64: matlabroot/sys/java/jre/glnxa64/jre1.6.0/lib/amd64/native_threads: matlabroot/sys/java/jre/glnxa64/jre1.6.0/lib/amd64/server: matlabroot/sys/java/jre/glnxa64/jre1.6.0/lib/amd64: /System/Library/Frameworks/JavaVM.framework/JavaVM: /System/Library/Frameworks/JavaVM.framework/Libraries setenv XAPPLRESDIR matlabroot>/X11/app-defaults You can then run the compiled applications on your development machine to test them. Directories Required for Run-Time Deployment In this section. 
Path for Java Applications on All Platforms on page 9-5 Windows Path for Run-Time Deployment on page 9-5 UNIX Paths for Run-Time Deployment on page 9-6 Path for Java Applications on All Platforms When your users run applications that contain compiled M-code, you must instruct them to set the path so that the system can find the MCR. Note When you deploy a Java. Windows Path for Run-Time Deployment The following directory should be added to the system path: mcr_root\version\run time\win32 where mcr_root refers to the complete path where the MCR library archive files are installed on the machine where the application is to be run. Note that mcr_root is version specific; you must determine the path after you install the MCR. UNIX Paths for Run-Time Deployment mcr_root/version/run time/glnx86: mcr_root/version/sys/os/glnx86: mcr_root/version/sys/java/jre/glnx86/jre1.6.0/lib/i386/native_threads: mcr_root/version/sys/java/jre/glnx86/jre1.6.0/lib/i386/server: mcr_root/version/sys/java/jre/glnx86/jre1.6.0/lib/i386: -regsvr -setup -U<name> Description Verbose mode. Print the values for important internal variables after the options file is processed and all command line arguments are considered. Prints each compile step and final link step fully evaluated. Supplement or override an options file variable for variable <name>. This option is processed after the options file is processed and all command line arguments are considered. You may need to use the shells a variable already defined. To do this, refer to the variable by prepending a $ (e.g., COMPFLAGS="$COMPFLAGS opt2" on Windows or CFLAGS='$CFLAGS opt2' on UNIX). <name>=<value> Note Some of these options (-f, -g, and -v) are available on the mcc command line and are passed along to mbuild. Others can be passed along using the -M option to mcc. For details on the -M option, see the mcc reference page. Note MBUILD can also create shared libraries from C source code. 
If a file with the extension .exports is passed to mbuild, a shared library is built. The .exports file must be a text file, with each line containing either an exported symbol name, or starting with a # or * in the first column (in which case it is treated as a comment line). If multiple .exports files are specified, all symbol names in all specified .exports files are exported.

To set up or change the default C/C++ compiler for use with MATLAB Compiler, use mbuild -setup.

To create a shared library named libfoo, use:

    mcc -W lib:libfoo -T link:lib foo.m

To compile and link an external C program foo.c against libfoo, use:

    mbuild foo.c -L. -lfoo      (on UNIX)
    mbuild foo.c libfoo.lib     (on Windows)

This assumes both foo.c and the library generated above are in the current working directory.

Invoke MATLAB Compiler

    mcc [-options] mfile1 [mfile2 ... mfileN] [C/C++file1 ... C/C++fileN]

mcc is the MATLAB command that invokes MATLAB Compiler. You can issue the mcc command either from the MATLAB command prompt (MATLAB mode) or the DOS or UNIX command line (standalone mode). mcc prepares M-file(s) for deployment outside of the MATLAB environment, generates wrapper files in C or C++, optionally builds standalone binary files, and writes any resulting files into the current directory, by default. If more than one M-file is specified on the command line, MATLAB Compiler generates a C or C++ function for each M-file. If C or object files are specified, they are passed to mbuild along with any generated C files.
Using the -w option for mcc, you can control which messages are displayed. Each warning message contains a description and the warning message identifier string (in parentheses) that you can enable or disable with the -w option. For example, to produce an error message if you are using a demo MATLAB Compiler license to create your standalone application, you can use mcc -w error:demo_license -mvg hello To enable all warnings except those generated by the save command, use mcc -w enable -w disable:demo_license. To display a list of all the warning message identifier strings, use mcc -w list -m mfilename For additional information about the -w option, see Chapter 10, Functions By Category. Warning: File: filename Line: # Column: # The #function pragma expects a list of function names. (pragma_function_missing_names) This pragma informs MATLAB Compiler that the specified function(s) provided in the list of function names will be called through an feval call. This will automatically compile the selected functions. Warning: M-file "filename" was specified on the command line with full path of "pathname", but was found on the search path in directory "directoryname" first. (specified_file_mismatch) MATLAB Compiler detected an inconsistency between the location of the M-file as given on the command line and in the search path. MATLAB Compiler uses the location in the search path. This warning occurs when you specify a full pathname on the mcc command line and a file with the same base name (filename) is found earlier on the search path. This warning is issued in the following example if the file afile.m exists in both dir1 and dir2: mcc -m -I /dir1 /dir2/afile.m Warning: The file filename was repeated on MATLAB Compiler command line. (repeated_file) This warning occurs when the same filename appears more than once on the compiler command line. 
For example:

mcc -m sample.m sample.m   % Will generate the warning

Warning: The name of a shared library should begin with the letters "lib". "libraryname" doesn't. (missing_lib_sentinel)

This warning is generated if the name of the specified library does not begin with the letters lib. This warning is specific to UNIX and does not occur on Windows. For example:

The mwString class is a simple string class used by the mwArray API to pass string data as output from certain methods.

Constructors
mwString()
mwString(const char* str)
mwString(const mwString& str)

Methods
int Length() const

Operators
operator const char* () const
mwString& operator=(const mwString& str)
mwString& operator=(const char* str)
bool operator==(const mwString& str) const
bool operator!=(const mwString& str) const
bool operator<(const mwString& str) const
bool operator<=(const mwString& str) const
bool operator>(const mwString& str) const
bool operator>=(const mwString& str) const
friend std::ostream& operator<<(std::ostream& os, const mwString& str)

mwString()

Construct an empty string.

#include "mclcppclass.h"
mwString str;

Use this constructor to create an empty string.

mwString(const char* str)

Construct a new string and initialize the string's data with the supplied char buffer.

#include "mclcppclass.h"
mwString str("This is a string");

str is a NULL-terminated char buffer to initialize the string. There is no return value. Use this constructor to create a string from a NULL-terminated char buffer.

mwString(const mwString& str)

Copy constructor for mwString.

#include "mclcppclass.h"
mwString str("This is a string");
mwString new_str(str); // new_str contains a copy of the
                       // characters in str.

str is the mwString to be copied. Use this constructor to create an mwString that is a copy of an existing one. Constructs a new string and initializes its data with the supplied mwString.
int Length() const

Return the number of characters in the string.

#include "mclcppclass.h"
mwString str("This is a string");
int len = str.Length(); // len should be 16.

The return value is the number of characters in the string. Use this method to get the length of an mwString. The value returned does not include the terminating NULL character.

operator const char* () const

Return a pointer to the internal buffer of the string.

#include "mclcppclass.h"
mwString str("This is a string");
const char* pstr = (const char*)str;

The return value is a pointer to the internal buffer of the string. Use this operator to get direct read-only access to the string's data buffer.

mwString& operator=(const mwString& str)

mwArray Clone() const

Return a new array representing a deep copy of the array.

#include "mclcppclass.h"
mwArray a(2, 2, mxDOUBLE_CLASS);
mwArray b = a.Clone();

The return value is a new mwArray representing a deep copy of the original. Use this method to create a copy of an existing array. The new array contains a deep copy of the input array.

mwArray SharedCopy() const

Return a new array representing a shared copy of the array.

#include "mclcppclass.h"
mwArray a(2, 2, mxDOUBLE_CLASS);
mwArray b = a.SharedCopy();

The return value is a new mwArray representing a reference-counted version of the original. Use this method to create a shared copy of an existing array. The new array and the original array both point to the same data.

mwArray Serialize() const

Serialize the underlying array into a byte array, and return this data in a new array of type mxUINT8_CLASS.

#include "mclcppclass.h"
mwArray a(2, 2, mxDOUBLE_CLASS);
mwArray s = a.Serialize();

The return value is a new mwArray of type mxUINT8_CLASS containing the serialized data. Use this method to serialize an array into bytes. A 1-by-n numeric matrix of type mxUINT8_CLASS is returned containing the serialized data. The data can be deserialized back into the original representation by calling mwArray::Deserialize().
mxClassID ClassID() const

Return the type of the array.

#include "mclcppclass.h"
mwArray a(2, 2, mxDOUBLE_CLASS);
mxClassID id = a.ClassID(); // Should return mxDOUBLE_CLASS

The return value is the mxClassID of the array. Use this method to determine the type of the array. Consult the External Interfaces documentation for more information on mxClassID.

int ElementSize() const

Return the size in bytes of an element of the array.

#include "mclcppclass.h"
mwArray a(2, 2, mxDOUBLE_CLASS);
int size = a.ElementSize(); // Should return sizeof(double)

The return value is the size in bytes of an element of this type of array. Use this method to determine the size in bytes of an element of the array type.

Note: If you define MX_COMPAT_32_OFF, this method is defined as size_t ElementSize() const.

size_t ElementSize() const

Return the size in bytes of an element of the array.

Note: If you do not define MX_COMPAT_32_OFF, this method is defined as int ElementSize() const.

Note: If you encounter problems relating to the installation or use of your ANSI C or C++ compiler, consult the documentation or customer support organization of your ANSI compiler vendor.

Unsupported MATLAB Platforms

The MATLAB Compiler and the MATLAB C Math Library support all platforms that MATLAB 5 supports, except for:

VAX/VMS and OpenVMS

The MATLAB C++ Math Library supports all platforms that MATLAB 5 supports, except for:

VAX/VMS and OpenVMS
Macintosh

Overview

The sequence of steps to install and configure the MATLAB Compiler so that it can generate MEX-files is:

1 Install the MATLAB Compiler.
2 Install the ANSI C/C++ compiler.
3 Configure mex to create MEX-files.
4 Verify that mex can generate MEX-files.
5 Verify that the MATLAB Compiler can generate MEX-files.

Figure 2-1 shows the sequence on all platforms. The sections following the flowchart provide more specific details for the individual platforms. Additional steps may be necessary if you plan to create stand-alone applications; however, you still must perform the steps given in this chapter first.
Chapter 5, Stand-Alone External Applications, provides the details about the additional installation and configuration steps necessary for creating stand-alone applications.

Note: This flowchart assumes that MATLAB is properly installed on your system.

[Figure 2-1: flowchart of the installation sequence. Install the MATLAB Compiler with the MATLAB Installer; follow the vendor's instructions to install and test the ANSI C/C++ compiler; run mex -setup to specify the options file; verify that the MATLAB command mex yprime.c generates a proper MEX-file (see MEX Troubleshooting if not); then verify that the MATLAB command mcc invhilb.m generates invhilb.mex.]

After you generate your code with Real-Time Workshop, you must ensure that you have the MATLAB C Math Library installed on whatever platform you want to run the generated code.

Note: The MATLAB Compiler -S option does not support the passing of parameters that is normally available with Simulink S-functions.

Specifying S-Function Characteristics

Sample Time

Similar to the MATLAB Fcn block, the automatically generated S-function has an inherited sample time. To specify a different sample time for the generated code, edit the C code file after the MATLAB Compiler generates it and set the sample time through the ssSetSampleTime function. See the Using Simulink manual for a description of the sample time settings.

Data Type

The input and output vectors for the Simulink S-function must be double-precision vectors or scalars. You must ensure that the variables you use in the M-code for input and output are also double-precision values. You can use the MATLAB Compiler assertions mbrealvector, mbrealscalar, and mbreal to guarantee that the resulting C code uses the correct data types.
For more information on assertions, see Optimizing Through Assertions in Chapter 4.

Limitations and Restrictions

MATLAB Code

There are some limitations and restrictions on the kinds of MATLAB code with which the MATLAB Compiler can work. The MATLAB Compiler Version 1.2 cannot compile:

Script M-files. (See page 3-12 for further details.)
M-files containing eval or input. These functions create and use internal variables that only the MATLAB interpreter can handle.
M-files that use the explicit variable ans.
M-files that create or access sparse matrices.
Built-in MATLAB functions (functions such as eig have no M-file, so they can't be compiled); however, calls to these functions are okay.
Functions that are only MEX functions.
Functions that use variable argument lists (varargin).
M-files that use feval to call another function defined within the same file. (Note: In stand-alone C and C++ modes, a new pragma (%#function <name-list>) is used to inform the MATLAB Compiler that the specified function will be called through an feval call. See Using feval in Chapter 5 for more information.)
Calls to load or save that do not specify the names of the variables to load or save. The load and save functions are supported in compiled code for lists of variables only. For example, this is acceptable:

load( filename, 'a', 'b', 'c' ); % This is OK and loads the
                                 % values of a, b, and c from
                                 % the file.

However, this is not acceptable:

load( filename, var1, var2, var3 ); % This is not allowed.

Restrictions on Stand-Alone External Applications

The restrictions and limitations noted in the previous section also apply to stand-alone external applications. In addition, stand-alone external applications cannot access:

MATLAB debugging functions, such as dbclear.
MATLAB graphics functions, such as surf, plot, get, and set.
The MATLAB exist function.
Calls to MEX-file functions, because the MATLAB Compiler needs to know the signature of the function.
Simulink functions.
Although the MATLAB Compiler can compile M-files that call these functions, the MATLAB C and C++ Math libraries do not support them. Therefore, unless you write your own versions of the unsupported routines, the linker will report unresolved external reference errors.

Converting Script M-Files to Function M-Files

MATLAB provides two ways to package sequences of MATLAB commands:

Function M-files.
Script M-files.

These two categories of M-files differ in two important respects:

You can pass arguments to function M-files but not to script M-files.
Variables used inside function M-files are local to that function; you cannot access these variables from the MATLAB interpreter's workspace. By contrast, variables used inside script M-files are global in the base workspace; you can access these variables from the MATLAB interpreter.

The MATLAB Compiler can only compile function M-files. That is, the MATLAB Compiler cannot compile script M-files. Furthermore, the MATLAB Compiler cannot compile a function M-file that calls a script.

Converting a script into a function is usually fairly simple. To convert a script to a function, simply add a function line at the top of the M-file. For example, consider the script M-file houdini.m:

m = magic(2); % Assign 2x2 matrix to m.
t = m.^3;     % Cube each element of m.
disp(t);      % Display the value of t.

Running this script M-file from a MATLAB session creates variables m and t in your MATLAB workspace. The MATLAB Compiler cannot compile houdini.m because houdini.m is a script. Convert this script M-file into a function M-file by simply adding a function header line:

function t = houdini()
m = magic(2); % Assign 2x2 matrix to m.
t = m.^3;     % Cube each element of m.
disp(t);      % Display the value of t.

The MATLAB Compiler can now compile houdini.m. However, because this makes houdini a function, running houdini.mex no longer creates variable m in the MATLAB workspace.
If it is important to have m accessible from the MATLAB workspace, you can change the beginning of the function to:

function [m,t] = houdini();

Type Imputation

If an M-file contains, for example,

[m,n] = size(a);
m = m + .25;

then the MATLAB Compiler imputes the C double data type for variable m.

Note: Specifying assertions and pragmas, as described later in this chapter, can greatly assist the type imputation process.

Type Imputation Across M-Files

If an M-file calls another M-file function, the MATLAB Compiler reads the entire contents of the called M-file function as part of the type imputation analysis. For example, consider an M-file function named profit that calls another M-file function getsales:

function p = profit(inflation)
revenue = getsales(inflation);
...
p = revenue - costs;

To impute the data types for variables p and revenue, the MATLAB Compiler reads the entire contents of the file getsales.m. Suppose you compile getsales.m to produce getsales.mex. When invoked, profit.mex calls getsales.mex. However, the MATLAB Compiler reads getsales.m. In other words, the runtime behavior of profit.mex depends on getsales.mex, but type imputations depend on getsales.m. Therefore, unless getsales.m and getsales.mex are synchronized, profit.mex may run peculiarly. To ensure the files are synchronized, recompile every time you modify an M-file.

Optimizing with Compiler Option Flags

Some MATLAB Compiler option flags optimize the generated code; other option flags generate compilation or runtime information. The two most important optimization option flags are -i (suppress array boundary checking) and -r (generate real variables only). Consider the squibo M-file:

function g = squibo(n)
% The first n "squibonacci" numbers.
g = zeros(1,n);
g(1) = 1;
g(2) = 1;
for i = 3:n
    g(i) = sqrt(g(i-1)) + g(i-2);
end

We compiled squibo.m with various combinations of performance option flags on a Pentium Pro 200 MHz workstation running Linux. Then, we ran the resulting MEX-file 10 times in a loop and measured how long it took to run. Table 4-1 shows the results of the squibo example using n equal to 10,000 and executing it 10 times in a loop.

Table 4-1: Performance for n=10000, run 10 times

Compile Command Line     Elapsed Time (in sec.)   % Improvement
squibo.m (uncompiled)    5.7446
mcc squibo               3.7947                   33.94
mcc -r squibo            0.4548                   92.08
mcc -i squibo            2.7815                   51.58
mcc -ri squibo           0.0625                   98.91

As you can see from the performance table, -r and -i have a strong influence on elapsed execution time. In order to understand how -r and -i improve performance, you need to look at the MEX-file source code that the MATLAB Compiler generates. When examining the generated code, focus on two sections:

The comment section that lists the MATLAB Compiler's assumptions.
The code that the MATLAB Compiler generates for loops. Most programs spend the vast majority of their CPU time inside loops.

An Unoptimized Program

Optimizing with the -i Option

The -i option flag generates code that:

Does not allow matrices to grow larger than their starting size.
Does not check matrix bounds.

The MATLAB interpreter allows arrays to grow dynamically. If you do not specify -i, the MATLAB Compiler also generates code that allows arrays to grow dynamically. However, dynamic arrays, for all their flexibility, perform relatively slowly. If you specify -i, the generated code does not permit arrays to grow dynamically. Any attempts to access an array beyond its fixed bounds will cause a runtime error. Using -i reduces flexibility but also makes array access significantly cheaper. To be a candidate for compiling with -i, an M-file must preallocate all arrays. Use the zeros or ones function to preallocate arrays.
(Refer to the Optimizing by Preallocating Matrices section later in this chapter.)

Caution: If you fail to preallocate an array and compile with the -i option, your system will behave unpredictably and may crash. If you forget to preallocate an array, the MATLAB Compiler cannot detect the mistake; the errors do not appear until runtime. If your program crashes with an error referring to:

Bus errors
Memory exceptions
Phase errors
Segmentation violations
Unexplained application errors

then there is a good chance that you forgot to preallocate an array.

The -i option makes some MEX-files run faster, but generally, you have to use -i in combination with -r in order to see real speed advantages. For example, compiling squibo.m with -i does not produce any speed advantages, but compiling squibo.m with a combination of -i and -r creates a very fast MEX-file.

Optimizing with a Combination of -r and -i

Compiling programs with a combination of -r and -i produces code with all the speed advantages of both option flags. Compile with both option flags only if your M-file:

Contains no complex values or operations.
Preallocates all arrays, and then never changes their size.

Compiling squibo.m with -ri produces an extremely fast version of squibo.mex. In fact, the resulting squibo.mex runs more than 98% faster than squibo.m.

Type Imputations for -ri

When compiling with -r and -i, the MATLAB Compiler's type imputations are identical to the imputations for -r alone. Additional performance improvements are due to the generated loop code.

The Generated Loop Code for -ri

The MATLAB Compiler generates the loop code:

/* for i=3:n */
for (I0_ = 3; I0_ <= n; I0_ = I0_ + 1)
{
    i = I0_;
    /* g(i) = sqrt(g(i-1)) + g(i-2); */
    R0_ = sqrt((mccPR(&g)[((i-1)-1)]));
    mccPR(&g)[(i-1)] = (R0_ + (mccPR(&g)[((i-2)-1)]));
    /* end */
}

Note: The options file is stored in the MATLAB subdirectory of your home directory.
This allows each user to have a separate mbuild configuration.

Changing Compilers. If you want to change compilers or switch between C and C++, use the mbuild -setup command and make the desired changes.

Verifying mbuild. To verify mbuild, copy ex1.c to your local directory and cd to that directory. Then, at the MATLAB prompt, enter:

mbuild ex1.c

This should create the file called ex1. Stand-alone applications created on UNIX systems do not have any extensions.

Locating Shared Libraries. Before you can run your stand-alone application, you must tell the system where the API and C/C++ shared libraries reside. This table provides the necessary UNIX commands depending on your system's architecture.

HP700        setenv SHLIB_PATH <matlab>/extern/lib/hp700:$SHLIB_PATH
IBM RS/6000  setenv LIBPATH <matlab>/extern/lib/ibm_rs:$LIBPATH
All others   setenv LD_LIBRARY_PATH <matlab>/extern/lib/$Arch:$LD_LIBRARY_PATH

where:

<matlab> is the MATLAB root directory
$Arch is your architecture (i.e., alpha, lnx86, sgi, sgi64, sol2, or sun4)

It is convenient to place this command in a startup script such as ~/.cshrc. Then the system will be able to locate these shared libraries automatically, and you will not have to re-issue the command at the start of each login session.

Note: On all UNIX platforms (except Sun4), the compiler library is shipped as a shared object (.so) file. Any compiler-generated, stand-alone application must be able to locate the C/C++ libraries along the LD_LIBRARY_PATH environment variable in order to be found and loaded. Consequently, to share a compiler-generated, stand-alone application with another user, you must provide all of the required shared libraries. For more information about the required shared libraries for UNIX, see Distributing Stand-Alone UNIX Applications.

Running Your Application. To launch your application, enter its name on the command line.
For example,

ex1
ans =
     5     6
ans =
   1.0000 + 7.0000i   2.0000 + 8.0000i   3.0000 + 9.0000i
   4.0000 +10.0000i   5.0000 +11.0000i   6.0000 +12.0000i

Verifying the MATLAB Compiler

There is MATLAB code for an example, hello.m, included in the <matlab>/extern/examples/compiler directory. To verify that the MATLAB Compiler can generate stand-alone applications on your system, type the following at the MATLAB prompt:

The mbuild script provides a convenient and easy way to configure your ANSI compiler with the proper switches to create an application. To configure your compiler, run mbuild with the -setup option from either the MATLAB or DOS command prompt. The -setup switch creates an options file for your ANSI compiler. You must run mbuild -setup before you create your first stand-alone application; otherwise, when you try to create a stand-alone application, you will get the message

Sorry! No options file was found for mbuild. The mbuild script
must be able to find an options file to define compiler flags
and other settings. The default options file is
$script_directory\\$OPTFILE_NAME.

To fix this problem, run the following:

mbuild -setup

This will configure the location of your compiler. Executing the -setup option presents a list of compilers whose options files are currently included in the bin subdirectory of MATLAB. This example shows how to select the Microsoft Visual C++ compiler:

mbuild -setup
Welcome to the utility for setting up compilers for building
math library applications files.

Choose your default Math Library:
[1] MATLAB C Math Library
[2] MATLAB C++ Math Library

Math Library: 1

Choose your C/C++ compiler:
[1] Borland C/C++ (version 5.0)
[2] Microsoft Visual C++ (version 4.2 or version 5.0)
[3] Watcom C/C++ (version 10.6 or version 11.0)
[0] None

compiler: 2

If we support more than one version of the compiler, you are asked for a specific version.
For example,

Choose the version of your C/C++ compiler:
[1] Microsoft Visual C++ 4.2
[2] Microsoft Visual C++ 5.0

version: 2

Next, you are asked to enter the root directory of your ANSI compiler installation. Finally, you must verify that the information is correct:

Please verify your choices:
Compiler: Microsoft Visual C++ 5.0
Location: c:\msdev
Library: C math library

Are these correct?([y]/n): y

Default options file is being updated.

If you respond to the verification question with n (no), you get a message stating that no compiler was set during the process. Simply run mbuild -setup once again and enter the correct responses for your system.

Changing Compilers. If you want to change your ANSI (system) compiler, make other changes to its options file (e.g., change the compiler's root directory), or switch between C and C++, use the mbuild -setup command and make the desired changes.

Build an executable with debugging symbols included.
Help; prints a description of mbuild and the list of options.
Include <pathname> in the compiler include search path.
Override options file setting for variable <name>.
No execute flag. This option causes the commands used to compile and link the target to display without executing them.
Create an executable named <name>.

Table 5-3: mbuild Options on Macintosh (Continued)

-O      Build an optimized executable.
-setup  Set up the default options file. This switch should be the only argument passed.
-v      Verbose; print all compiler and linker settings.

If you need to customize the application building process, use the verbose switch, -v, as in:

mbuild -v filename.m [filename1.m filename2.m ...]

to generate a list of all the current compiler settings. After you determine the desired changes that are necessary for your purposes, use an editor to make changes to the options file that corresponds to your compiler.
You can also use the settings obtained from the verbose switch to embed them into an IDE or makefile that you need to maintain outside of MATLAB.

Distributing Stand-Alone Macintosh Applications

To distribute a stand-alone application, you must include the application's executable as well as the shared libraries against which the application was linked. These lists show which files should be included on the Power Macintosh and 68K Macintosh systems.

Power Macintosh:
Application (executable)
libmmfile
libmatlb
libmcc
libmx
libut

68K Macintosh:
Application (executable)
libmmfile.o
libmatlb.o
libmcc.o
libmx.o
libut.o

For example, to distribute the Power Macintosh version of the ex1 example, you need to include ex1, libmmfile, libmatlb, libmcc, libmx, and libut. To distribute the 68K Macintosh version of the ex1 example, you need to include ex1, libmmfile.o, libmatlb.o, libmcc.o, libmx.o, and libut.o.

Troubleshooting mbuild

This section identifies some of the more common problems that might occur when configuring mbuild to create stand-alone external applications.

Options File Not Writable. When you run mbuild -setup, mbuild makes a copy of the appropriate options file and writes some information to it. If the options file is not writable, the process will terminate and you will not be able to use mbuild to create your applications.

Directory or File Not Writable. If a destination directory or file is not writable, ensure that the permissions are properly set. In certain cases, make sure that the file is not in use.

mbuild Generates Errors. On UNIX, ...

Compiling mrank.m and rank.m as Helper Functions

Another way of building the mrank external application is to compile rank.m and mrank.m as helper functions to main.m. In other words, instead of invoking the MATLAB Compiler three separate times, invoke the MATLAB Compiler only once.

For C:
mcc -e main mrank rank
mcc -e -h main

For C++:
mcc -p main mrank rank
mcc -p -h main

These commands create a single file (main.c) of C or C++ source code.
Files built with helper functions run slightly faster.

Note: Each of these commands automatically invokes mbuild because main.m is explicitly included on the command line.

Print Handlers

A print handler is a routine that controls how your application displays the output generated by mlf calls. If you do not register a print handler, the system provides a default print handler for your application. The default print handler writes output to the standard output stream. You can override this behavior, however, by registering an alternative print handler. In fact, if you are coding a stand-alone external application with a GUI, then you must register an alternative print handler. This makes it possible for application output to be displayed inside a GUI mechanism, such as a Windows message box or a Motif Label widget. If you create a print handler routine, you must register its name at the beginning of your stand-alone external application. The way you establish a print handler depends on whether or not your source code is written entirely as function M-files.

Note: The print handlers discussed in this section work for C++ as well as C applications. However, we recommend that you use different print handlers for C++ applications. See the MATLAB C++ Math Library User's Guide for details about C++ print handlers.

Source Code Is Not Entirely Function M-Files

If some (or all) of your stand-alone external application is coded in C (as opposed to being written entirely as function M-files), then you must:

Register the print handler.
Write a print handler.

To register a print handler routine, call mlfSetPrintHandler as the first executable line in main (or WinMain). For example, the first line of mrankwin.c (a Microsoft Windows program) registers a print handler routine named WinPrint by calling mlfSetPrintHandler as

mlfSetPrintHandler(WinPrint);

Next, you must write a print handler routine.
The print handler routine in mrankwin.c is:

static int totalcnt = 0;
static int upperlim = 0;
static int firsttime = 1;
char *OutputBuffer;

void WinPrint( char *text )
{
    int cnt;
    if (firsttime) {
        OutputBuffer = (char *)mxCalloc(1028, 1);
        upperlim += 1028;
        firsttime = 0;
    }
    cnt = strlen(text);
    if (totalcnt + cnt >= upperlim) {
        char *TmpOut;
        TmpOut = (char *)mxCalloc(upperlim + 1028, 1);
        memcpy(TmpOut, OutputBuffer, upperlim);
        upperlim += 1028;
        mxFree(OutputBuffer);
        OutputBuffer = TmpOut;
    }
    strncat(OutputBuffer, text, cnt);
}

Declaring Variables

Each branch begins with a commented list of MATLAB Compiler assumptions and an uncommented list of variable declarations. For example, the imputations for the Complex Branch of fibocon.c are:

/***************** Compiler Assumptions ****************
 *
 *   I0_       integer scalar temporary
 *   fibocon   <function being defined>
 *   g         complex vector/matrix
 *   i         integer scalar
 *   n         complex vector/matrix
 *   x         complex vector/matrix
 *******************************************************/

Variable declarations follow the assumptions. For example, the variable declarations of fibocon.c are:

mxArray g;
mxArray n;
mxArray x;
int i = 0;
int I0_ = 0;

The mapping between commented MATLAB Compiler assumptions and the actual C variable declarations is:

Assumption              C Type
Integer scalar          int
Real scalar             double
Complex vector/matrix   mxArray
Real vector/matrix      mxArray

All vectors and matrices, whether real or complex, are declared as mxArray structures. All scalars are declared as simple C data types (either int or double). An int or double consumes far less space than an mxArray structure. The MATLAB Compiler tries to preserve the variable names you specify inside the M-file. For instance, if a variable named jam appears inside an M-file, then the analogous variable in the compiled version is usually named jam. The MATLAB Compiler typically generates a few temporary variables that do not appear inside the M-file.
All variables whose names end with an underscore (_) are temporary variables that the MATLAB Compiler creates in order to help perform a calculation. When coding an M-file, you should pick variable names that do not end with an underscore. For example, consider the two variable names:

hoop = 7;   % Good variable name
hoop_ = 7;  % Bad variable name

In addition, you should pick variable names that do not match reserved words in the C language; for example:

switches = 7; % Good variable name
switch = 7;   % Bad variable name because switch is a C keyword

Importing Input Arguments

This pragma has no effect on C++ generated code (i.e., if the -p MATLAB Compiler option flag is used). %#inbounds is the pragma version of the MATLAB Compiler option flag -i. Placing the pragma anywhere inside an M-file has the same effect as compiling that file with -i. The %#inbounds pragma (or -i) causes the MATLAB Compiler to generate C code that:

Does not check array subscripts to determine if array indices are within range.
Does not reallocate the size of arrays when the code requests a larger array. For example, if you preallocate a 10-element vector, the generated code cannot assign a value to the 11th element of the vector.
Does not check input arguments to determine if they are real or complex.

The %#inbounds pragma can make a program run significantly faster, but not every M-file is a good candidate for %#inbounds. For instance, you can only specify %#inbounds if your M-file preallocates all arrays. You typically preallocate arrays with the zeros or ones functions.

Note: If an M-file contains code that causes an array to grow, then you cannot compile with the %#inbounds option. Using %#inbounds on such an M-file produces code that fails at runtime.

Using %#inbounds means you guarantee that your code always stays within the confines of the array. If your code does not, your compiled program will probably crash. The %#inbounds pragma applies only to the M-file in which it appears.
For example, suppose %#inbounds appears in alpha.m. Given the command

mcc alpha beta

the %#inbounds pragma in alpha.m has no influence on the way the MATLAB Compiler compiles beta.m.

See Also: mcc (the -i option), %#realonly

%#ivdep

Ignore-vector-dependencies (ivdep) pragma. This pragma has no effect on C++ generated code (i.e., if the -p MATLAB Compiler option flag is used). The %#ivdep pragma tells the MATLAB Compiler to ignore vector dependencies in the assignment statement that immediately follows it. Since the %#ivdep pragma only affects a single line of an M-file, you can place multiple %#ivdep pragmas into an M-file. Using %#ivdep can speed up some assignment statements, but using %#ivdep incorrectly causes assignment errors. The %#ivdep pragma borrows its name from a similar feature in many vectorizing C and Fortran compilers. This is an M-file function that does not (and should not) contain any %#ivdep pragmas:

function a = mydep
a = 1:8;
a(3:6) = a(1:4);

Compiling this program and then running the resulting MEX-file yields the correct answer, which is:

-u (Number of Inputs) and -y (Number of Outputs)

Allow you to exercise more control over the number of valid inputs or outputs for your function. These options specifically set the number of inputs (-u) and the number of outputs (-y) for your function. If either -u or -y is omitted, the respective input or output will be dynamically sized.

reallog

Natural logarithm for nonnegative real inputs.

Y = reallog(X)

reallog is an elementary function that operates element-wise on matrices. reallog returns the natural logarithm of X. The domain of reallog is the set of all nonnegative real numbers. If X is negative or complex, reallog issues an error message. reallog is similar to the MATLAB log function; however, the domain of log is much broader than the domain of reallog. The domain of log includes all real and all complex numbers. If Y is real, you should use reallog rather than log for two reasons.
First, subsequent access of Y may execute more efficiently if Y is calculated with reallog rather than with log. Using reallog forces the MATLAB Compiler to impute a real type to X and Y. Using log typically forces the MATLAB Compiler to impute a complex type to Y. Second, the compiled version of reallog may run somewhat faster than the compiled version of log. (However, the interpreted version of reallog may run somewhat slower than the interpreted version of log.)

See Also: exp, log, log2, logm, log10, realsqrt

realpow

Array power function for real-only output.

Z = realpow(X,Y)

realpow returns X raised to the Y power. realpow operates element-wise on matrices. The range of realpow is the set of all real numbers. In other words, if X raised to the Y power yields a complex answer, then realpow does not return an answer. Instead, realpow signals an error. If X is negative and Y is not an integer, the resulting power is complex and realpow signals an error.

realpow is similar to the array power operator (.^) of MATLAB. However, the range of .^ is much broader than the range of realpow. (The range of .^ includes all real and all imaginary numbers.) If X raised to the Y power yields a complex answer, then you must use .^ instead of realpow. However, if X raised to the Y power yields a real answer, then you should use realpow for two reasons.

First, subsequent access of Z may execute more efficiently if Z is calculated with realpow rather than .^. Using realpow forces the MATLAB Compiler to impute that Z, X, and Y are real. Using .^ typically forces the MATLAB Compiler to impute the complex type to Z. Second, the compiled version of realpow may run somewhat faster than the compiled version of .^. (However, the interpreted version of realpow may run somewhat slower than the interpreted version of .^.)

See Also: reallog, realsqrt

realsqrt

Square root for nonnegative real inputs.

Y = realsqrt(X)

realsqrt(X) returns the square root of the elements of X.
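The real-only restriction can be illustrated with a short session (the values are chosen here for illustration, not taken from the manual):

```matlab
z1 = realpow(-2, 3)    % -8: an integer exponent gives a real result
z2 = (-2) .^ 0.5       % complex result, approximately 0 + 1.4142i
% realpow(-2, 0.5) would signal an error instead of returning the
% complex value, because the range of realpow is restricted to reals.
```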
The domain of realsqrt is the set of all nonnegative real numbers. If X is negative or complex, realsqrt issues an error message. realsqrt is similar to sqrt; however, sqrt's domain is much broader than realsqrt's. The domain of sqrt includes all real and all complex numbers. Despite this larger domain, if Y is real, then you should use realsqrt rather than sqrt for two reasons.
http://www.ps2netdrivers.net/manual/matlab.matlab.compiler.4/
Python provides the zipfile module to read and write ZIP files. Our previous posts Python example: List files in ZIP archive and Downloading & reading a ZIP file in memory using Python show how to list and read files inside a ZIP file.

In this example, we will show how to copy the files from one ZIP file to another, modifying one of the files in the process. This is often the case if you want to use ZIP-based file formats like ODT or LBX as templates, replacing parts of the text content of a file.

```python
import zipfile

with zipfile.ZipFile(srcfile) as inzip, zipfile.ZipFile(dstfile, "w") as outzip:
    # Iterate the input files
    for inzipinfo in inzip.infolist():
        # Read input file
        with inzip.open(inzipinfo) as infile:
            if inzipinfo.filename == "test.txt":
                content = infile.read()
                # Modify the content of the file by replacing a string
                # (read() returns bytes, so use bytes patterns)
                content = content.replace(b"abc", b"123")
                # Write content
                outzip.writestr(inzipinfo.filename, content)
            else:
                # Other file, don't want to modify => just copy it
                outzip.writestr(inzipinfo.filename, infile.read())
```

After opening both the input file and the output ZIP using

    with zipfile.ZipFile(srcfile) as inzip, zipfile.ZipFile(dstfile, "w") as outzip:

we iterate through all the files in the input ZIP file:

    for inzipinfo in inzip.infolist():

In case we encounter the file we want to modify, which is identified by its filename test.txt:

    if inzipinfo.filename == "test.txt":

we read and modify the content:

    with inzip.open(inzipinfo) as infile:
        content = infile.read().replace(b"abc", b"123")

and write the modified content to the output ZIP:

    outzip.writestr("test.txt", content)

Otherwise, if the current file is not the file we want to modify, we just copy it to the output ZIP using

    outzip.writestr(inzipinfo.filename, infile.read())

Note that the algorithm will always .read() each file from the input ZIP, so its entire content is temporarily held in memory. It therefore doesn't work well for files which are large when uncompressed.
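To address that memory caveat, a variant (a sketch, not from the original post) can stream each entry in chunks instead of reading it fully, using shutil.copyfileobj together with ZipFile.open in write mode (available since Python 3.6); the function name and skip parameter are invented for the demo:

```python
import shutil
import zipfile

def copy_zip_streaming(srcfile, dstfile, skip=()):
    """Copy all entries from srcfile to dstfile, except names listed
    in `skip`, without ever holding a whole member in memory."""
    with zipfile.ZipFile(srcfile) as inzip, \
         zipfile.ZipFile(dstfile, "w") as outzip:
        for info in inzip.infolist():
            if info.filename in skip:
                continue
            # Stream the entry through a bounded buffer; pass
            # force_zip64=True to outzip.open if members may
            # exceed 2 GiB.
            with inzip.open(info) as infile, \
                 outzip.open(info.filename, "w") as outfile:
                shutil.copyfileobj(infile, outfile)
```

Entries to be modified could still be handled with read()/writestr() as above; only the pass-through copies need streaming.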
https://techoverflow.net/2020/11/11/how-to-modify-file-inside-a-zip-file-using-python/
I have exactly the same problem as the one reported in this thread: when I run my tests I get an [object DOMException] error. Thanks

Hi Bruno,

To see the error description, please add debug before a failing step. Then, open a browser console and check the entire error message(s) shown there.

Hi @MarinaRukavitsyna, I did this, but I don't get an error, just one warning and some info:

    info axios intercep request JwtToken hammerhead.js:2:16006
    info axios intercep request credentials hammerhead.js:2:16006
    info axios intercep request noCache hammerhead.js:2:16006
    warning Password fields present on an insecure (http://) page. This is a security risk that allows user login credentials to be stolen. [Learn More] quasar-vuejs
    info axios intercep response error csrf hammerhead.js:2:16006
    info login.js 401 will resetLoginInfo hammerhead.js:2:16006

I clicked on the "go to next step" button until the end of the test and no error happened.

In Chrome (not FF) I can see this: Uncaught (in promise) DOMException: Only secure origins are allowed (see:). But I'm on localhost, so it should not be a problem, am I wrong?

Looking in the forum I found this thread that looks similar: the problem with this DOMException is that I have to add '--skip-js-errors' if I want my tests to run fine, but I don't want this; I want the tests to fail if there is any error, so I wonder how to do this. For instance, I use the PHP built-in server to run the server locally and it doesn't support HTTPS, sad.

Hi Rebolon,

I have run a simple test with your project, and it passes properly on my side:

```js
import { Selector } from 'testcafe';

fixture `Test`
    .page ``;

test('Test 1', async t => {
    await t
        .click(Selector('.list-group-item').find('a').withText('Basic'));
});
```

Would you please send me your failed test (with fake login data) so that I can see the issue locally?
Hi Marina,

I copy/pasted your code into a new file (I just added a debug()) and added the .only call. Then I ran the test with my npm script: npm run test-cafe:windows. I still got the error:

    > sf-flex-encore-vuejs@0.0.1 test-cafe:windows D:\dev\projects\php-sf-flex-webpack-encore-vuejs
    > testcafe %npm_package_config_test_browser% ./assets/tests/*.js --reporter spec,xunit:var/report/testcafe.xml

    Running tests in:
    - Chrome 66.0.3359 / Windows 10 0.0.0

    Test
    × Test 1
      1) Error on page "": [object DOMException]

      Browser: Chrome 66.0.3359 / Windows 10 0.0.0

         4 |    .page ``;
         5 |
         6 |test('Test 1', async t => {
         7 |    await t
         8 |        .debug()
       > 9 |        .click(Selector('.list-group-item').find('a').withText('Basic'))
        10 |    ;
        11 |});

      at <anonymous> (D:\dev\projects\php-sf-flex-webpack-encore-vuejs\assets\tests\simple.js:9:10)
      at test (D:\dev\projects\php-sf-flex-webpack-encore-vuejs\assets\tests\simple.js:6:1)

    1/1 failed (6s)
    npm ERR! code ELIFECYCLE
    npm ERR! errno 1
    npm ERR! sf-flex-encore-vuejs@0.0.1 test-cafe:windows: `testcafe %npm_package_config_test_browser% ./assets/tests/*.js --reporter spec,xunit:var/report/testcafe.xml`
    npm ERR! Exit status 1
    npm ERR!
    npm ERR! Failed at the sf-flex-encore-vuejs@0.0.1 test-cafe:windows script.
    npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
    npm ERR!
    npm ERR! A complete log of this run can be found in:
    npm ERR!     C:\Users\richa\AppData\Roaming\npm-cache\_logs\2018-05-22T21_17_30_618Z-debug.log

I've also noticed that when I don't use nginx to serve the app (in fact, when I use npm run sf-dev:windows to run the PHP built-in server), then TestCafe doesn't seem to be able to access the app, coz in the browser navbar I got this:

Thanks for help

@Rebolon,

We've run the test on different machines and still cannot reproduce the problem locally. Would you please specify the following?

Let's continue discussing this issue on StackOverflow, because tomorrow this forum will become read-only.
https://testcafe-discuss.devexpress.com/t/object-domexception-error-when-running-tests/1231
Features/GTK3

Contents

- 1 Summary
- 2 Owner
- 3 Current status
- 4 Detailed Description
- 5 Benefit to Sugar
- 6 Implementation Plan
- 7 Other considerations
- 8 API changes
- 9 Dependencies
- 10 Contingency plan
- 11 Documentation
- 12 Release Notes
- 13 Hackfests
- 14 Comments and Discussion
- 15 Subpages

Summary

Sugar needs to rebase itself on new generations of its key underlying technologies: GTK+ 3 and PyGObject Introspection. Sugar is already somewhat broken on recent distribution versions as a result of this. This page aims to summarise options and community opinions around this challenging shift, and to help formulate a plan of how it will be executed. In other words, it tries to take community input, answer all the unanswered questions, and present a logical path forward that can be adopted by the developers.

Owner

- This plan/proposal maintained by Daniel Drake and Simon Schampijer
- A number of developers will be needed at each stage to successfully execute this; Daniel offers his assistance for a coordination/oversight role if that would be useful.

Current status

- Targeted release: 0.96 (the sugar-toolkit and sugar-artwork port)
- Last updated: 31.01.12
- Percentage of completion: 35% (the sugar-toolkit and sugar-artwork port will count as 50% when finished)

At the Desktop summit, Raul Gutierrez Segales, Benjamin Berg and Paul Proteus successfully prototyped some of the ideas below. After only a few hours of effort, a minimal Sugar GTK3 activity was running alongside GTK2 activities. The plan below should therefore be quite credible, but some prerequisites remain.

Detailed Description

The wider GNOME ecosystem has already made this move. In order to remain innovative and current, and to take advantage of the latest developments, Sugar needs to follow suit and move to GTK3 and PyGI. Lagging behind on this conversion is already bringing negative consequences to Sugar; notably 2 of our most important activities (Read and Browse) are already broken and without a future until we catch up again.
Unfortunately, this is an API-incompatible change. As confirmed by the PyGI developers, PyGTK and PyGI cannot be mixed in the same process. This means that the conversion process can't be done on a fine, incremental basis, and if sugar-toolkit were to be simply replaced with a PyGI/GTK3 port, this means that all existing activities would stop working until they themselves have also been ported - all activities will need modifications as part of this feature.

The community has expressed desire for old activities to continue working (many are unmaintained); unfortunately this is not realistic in the long term. As a compromise, this feature discussion includes the requirement that for a certain period of time, both PyGTK/GTK2 and PyGI/GTK3 activities must be able to function alongside each other. As detailed below, this can be pulled off quite easily and in a way that will not drain resources.

This project will involve modifications to almost every file within Sugar and its activities, making it a huge task, even though the vast majority of code is trivial to port. This feature discussion attempts to identify ways that this porting process can be done in distinct stages, where Sugar remains functional at the completion of each stage, making this more manageable.

For the most part, the port from PyGTK to PyGI only involves changing the names that are used to access methods and constants. PyGI uses a slightly different and more consistent naming scheme to wrap GTK+.

```python
import gtk
gtk.MessageDialog(None, 0, gtk.MESSAGE_INFO, gtk.BUTTONS_CLOSE,
                  "Hello World").run()
```

The above PyGTK code rewritten as PyGI is:

```python
from gi.repository import Gtk
Gtk.MessageDialog(None, 0, Gtk.MessageType.INFO, Gtk.ButtonsType.CLOSE,
                  "Hello World").run()
```

PyGI provides a script which can be used to automate much of the conversion. The move from GTK2 to GTK3 is also expected to be unproblematic, because the vast majority of code does not need any changes - most GTK3 API/ABI is identical to GTK2.
The cases which do require some intervention to solve will take some time, but the changes are well documented and the number of cases encountered should be low.

Benefit to Sugar

- PyGI is technologically better than PyGTK. It is a nicer way of calling into GObject-style libraries from Python that means less maintenance is needed upstream (PyGObject automates the creation of bindings to a degree much higher than PyGTK ever could, and automatically achieves more complete coverage).
- PyGTK is no longer maintained; PyGI is actively maintained.
- The move to GTK3 allows us to keep up with our GNOME neighbours, as they improve and refine the base technologies that we share.
- The move to PyGI is expected to result in lower memory usage and faster startup.
- Browse has no future under GTK2 and is already broken; it needs to move to WebKit, and that move is dependent on Sugar moving to PyGI/GTK3.
- Similarly, Read is already broken under GTK2 due to static evince bindings no longer being maintained and libevince itself moved to GTK3; we need to switch to PyGI/GTK3 to be able to keep calling into evince and let the Read activity live on.

Implementation Plan

Sugar is divided into different components, many of which run in different processes. This means that we are able to divide up the required work on a process-by-process basis. While this work is being conducted, some Sugar processes might be based on PyGI/GTK3 and others based on PyGTK/GTK2, but the platform would keep functioning at each stage. There would be some system resource overhead during this transitional time (as the system would need to have PyGTK, GTK2, PyGI and GTK3 all in memory) but this feature implementation would end with the whole sugar ecosystem using PyGI/GTK3.

HippoCanvas removal

A prerequisite to porting a Sugar process, component or activity is to remove all its usage of hippocanvas.
This library is unmaintained, would be painful to port to GTK3, and can be done better with standard GTK+ code at the Python level. Most users of HippoCanvas can switch to custom GtkContainer widgets. One complication is that hippo usage is not limited to Sugar's core; some activities use hippo, or they pull on sugar-toolkit classes which are implemented with hippo. These are: Chat, Speak (please list others here).

Theme porting

One major internal change in GTK3 is the way that themes are designed. GTK2 allowed themes to be implemented with a special format in a gtkrc file, but GTK3 now requires that themes are implemented using CSS. Therefore another non-trivial prerequisite for a GTK3-based sugar release of any visual component is the porting of the theme. Fortunately, the gtkrc file format does not seem to be too far away from css (i.e. it applies attributes to classes in a textual format), and css is well known and well documented, so hopefully this is not a major challenge. For an example of the old GTK2 style, see /usr/share/themes/Adwaita/gtk-2.0/gtkrc, and for a GTK3 CSS example see /usr/share/themes/Adwaita/gtk-3.0/gtk-widgets.css

Backwards-compatibility for activities

As mentioned above, as mixing PyGTK/GTK2 and PyGI/GTK3 is impossible, a straightforward port of sugar-toolkit to PyGI/GTK3 would break all activities. However, having backwards compatibility for activities for a limited time is considered a requirement, and having a "flag day" where all activities stop working until ported is not seen as an option - there must be a transition period. As the conversion process from PyGTK to PyGI makes minor changes to almost every single line of code that involves GTK/glib, maintaining both PyGI and PyGTK codepaths in the same files is not realistic (there would be an unrealistic amount of if conditions).
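As a toy illustration of the two formats (invented style names and colors, not the actual Sugar theme), a GTK2 gtkrc rule like:

```
style "example-button" {
    bg[NORMAL] = "#808284"
    fg[NORMAL] = "#ffffff"
}
widget_class "*GtkButton*" style "example-button"
```

would correspond under GTK3 to a CSS rule along these lines:

```css
/* GTK3 theming: selectors and properties instead of gtkrc styles */
GtkButton {
    background-color: #808284;
    color: #ffffff;
}
```

The mapping is rarely this mechanical in practice, but it shows why the port is tractable: both formats attach attributes to widget classes in a textual form.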
We are therefore required to have two copies of sugar-toolkit during the transition period: one for ported PyGI/GTK3 activities, and another for PyGTK/GTK2 activities that have not yet been ported. As we do not have plentiful developer resources, I propose that the PyGTK/GTK2 version of sugar-toolkit is frozen as soon as this feature development is underway. No bugfixes or improvements, just a copy of the code at that point.

The GTK3 version of sugar-toolkit would be installed under a new module named sugar1, distinguishing it from the frozen GTK2 sugar version to be removed later. All code will need extensive textual changes in moving to GTK3; changing "import sugar.foo" to "import sugar1.foo" will just be another step in that process.

The removal of the GTK2 sugar-toolkit version would happen one year after the first stable release of a sugar-toolkit that includes GTK3 support. In this context, 'removal' means that it gets deleted from the sugar-toolkit master branch of the git tree. Old versions of sugar-toolkit will of course be left available, in old git branches and release tarballs.

Proposed plan of action

The steps below prioritise the porting of sugar-toolkit, as this is where Sugar would see the most immediate benefit: the revival of Browse and Read. The steps that follow can largely be parallelised; the important bit is placing sugar-toolkit first in line.
- Remove hippocanvas from sugar-toolkit
- Port sugar theme to GTK3
- Port sugar-toolkit to GTK3 while keeping backwards compatibility, release as sugar-toolkit-0.95.x
- Rescue Browse and Read, and allow independent activity porting efforts to begin
- Remove hippocanvas from all other parts of Sugar, including activities
- Port Sugar core to GTK3
- Remove sugar-toolkit's GTK2 compatibility one year after the first GTK3 sugar-toolkit stable release

How to port

I plan to start a page at Features/GTK3/Porting that details the porting process, that could be provided as documentation to anyone involved in these efforts (including those working on porting Sugar core, but also those working on activities). The content covered will be:

- How to remove hippo and what to replace it with (links to commits that have done this for other activities, etc)
- How to select the GTK3 version of sugar-toolkit
- How to handle each of the sugar-toolkit API changes (detailed in the following section)
- How to port from PyGTK/GTK2 to PyGI/GTK3.
  - This basically involves reading the PyGObject porting documentation, using pygi-convert.sh, then checking the result.
  - The Migrating from GTK+ 2.x to GTK+ 3 guide should also be read.

Other considerations

Version number

It goes without saying that such a migration would be the basis of a new major release. When this topic has been discussed before, people have toyed with the idea of calling the GTK3 version of Sugar "Sugar-1.0". There are arguments for:

- This is undoubtedly a paradigm shift for Sugar, so a major bump to the version number is called for.
- It would make the change really obvious

And there are arguments against:

- Some people might interpret the number 1.0 as indicating a higher level of maturity than what the developers feel
- Some developers have very specific ideas about what should be included in Sugar-1.0, even if development of such items is barely even on the horizon
- Some people want a much longer lead-up time to Sugar-1.0 so that the API can be refined/reworked/perfected

Tomeu, Simon and Marco agreed that the 1.0 version number could be used here, and communicated in a slightly different sense: "we are really still in the first iteration and 1.0 will be when that first iteration reaches maturity, without big changes in the API. After 1.0 we can start working on what will be one day 2.0 which should be the second iteration of Sugar, hopefully using what we have learned during these years."

However, current Sugar developers feel strongly that the changes described here are not significant enough to warrant a major version bump, and have specific ideas about what should be included in a 1.0 release. Therefore, in the interest of being slightly less intrusive, this feature does not ultimately propose a version numbering change - it is planned that sugar-toolkit-0.96 will be released as the first with GTK3 support, and once we reach sugar-0.98, the next releases will be 0.100, 0.102, etc.

Retaining the 'sugar' module name

The strategy suggested above involves the GTK3 sugar-toolkit version being installed with the 'sugar1' module name. It would be possible to retain the 'sugar' name for this module via a Python trick documented below, but it appears that there is no demand for this from the community. Porting to GTK3 requires major textual changes anyway; changing 'sugar' to 'sugar1' in the same files is therefore regarded as acceptable.
Here is how the naming trick could be done anyway:

- The new GTK3 version of sugar toolkit would be installed with name sugar, and the old GTK2 version would be installed with name sugar_gtk2.
- Before sugar-activity imports sugar.activity (or any other sugar-toolkit class), it would look for an empty file called "GTK3_PORTED" in the activity's directory and, if it is absent, run a little trick:

```python
import os
import sys

if not os.path.exists("GTK3_PORTED"):
    import sugar_gtk2
    sys.modules["sugar"] = sugar_gtk2
```

- The result is that all unported/unmodified activities (without the GTK3_PORTED file) would import sugar.foo as before, but the above trick that modifies Python's module table would result in them (transparently, magically, automatically) being redirected to sugar_gtk2.foo.
- At the end of the transition period, sugar_gtk2 would be deleted in its entirety, the above addition to sugar-activity would be dropped, and activities could drop the GTK3_PORTED files at their leisure.

Further consideration would need to be given to sugar-base. This package installs some classes which are then used by sugar-toolkit, so they would need to be duplicated into sugar_gtk2.

Python 3

Is it worth throwing in a Python 3 migration into this project? I have researched the issue, and my opinion is: no.

- Python 3 brings no immediate obvious benefit, and does not fix any pressing problems. On the other hand, PyGI solves some clear breakage for us.
- Python 3 (or rather the code that supports it) is not mature. Many modules are still Python2-only.
- PyGI does support Python 3, but it was broken when I tried it. It is not seeing much attention.
- Our neighbours within GNOME and other open source projects are only just starting to play with Python 3. There is not much similar experience we can build upon.
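The redirection works because Python consults sys.modules before searching the import path, so a pre-seeded entry wins. This can be demonstrated with a synthetic module standing in for sugar_gtk2 (the name legacy_toolkit is made up for the demo):

```python
import sys
import types

# Stand-in for the frozen GTK2 copy of the toolkit.
legacy_toolkit = types.ModuleType("legacy_toolkit")
legacy_toolkit.VERSION = "gtk2-frozen"

# Seed the module table under the old public name.
sys.modules["sugar"] = legacy_toolkit

# Any later import of the old name now resolves to the stand-in,
# without the import path ever being searched.
import sugar
print(sugar.VERSION)  # gtk2-frozen
```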
There may be teething problems, such as modules that Sugar uses that haven't been ported, and bugs in existing ports due to lack of use (such as the fact that PyGI was broken for quite a while with nobody noticing) that hold us back. This is not so for the PyGI transition, where we can look at many PyGTK applications that have been ported and that are actively used.

- J5 (PyGI developer) suggested that we avoid combining the 2 migrations. It would add more change to an already disruptive project, and increases the risk. It would be better to limit the amount of change we introduce, so that risk is more manageable and to decrease the number of problems and challenges that we face.
- If the above situation does change, the Python 3 migration will be much easier than this one. Py3 migration does not require invasive code changes. It will be much easier to have Python 2 and 3 support maintained in parallel. The few existing projects that have done this are able to maintain Python2 and Python3 support in the same codebase, without too many if conditions. The "if we're already making so much change, why not avoid a future migration period by including Py3" argument is not very strong, because the Py3 migration will be much smoother and less complex.

API changes

The fact that almost all Sugar components and activities require sweeping changes as part of this shift presents an interesting opportunity for us to make API changes in sugar-toolkit. The importance here should still be placed on the technology shift, rather than on the opportunity to produce a perfect API (which we could spend all eternity designing and discussing), but this is an opportunity that we should not miss. I propose that once sugar-toolkit is ported and mostly operational, we run a 30-day window where API changes that have seen some kind of planning and discussion below can be made and committed.
Plain text as the default for palettes

Sugar's palette classes currently accept strings, but then they pass those strings to GTK as markup (always, unconditionally). However, markup is only used in a handful of places, and this means that all users of palettes that draw in translations or strings from other sources which might contain characters such as < or > must escape their arguments, leading to big patches like this. As agreed here, it would make more sense for palettes to pass those strings as standard text by default, and to have different functions to opt in to markup processing. Then the excessive escaping would go, and the only users who would have to escape would be those who use markup.

Removal of the Keep button

Sugar-0.94 proposes the removal of the Keep button, but the old KeepButton classes would be kept around for activities that directly use it. Porting sugar-toolkit to GTK3 would be a good time to remove these classes.

Removal of old toolbars

Sugar-0.86 redesigned activity toolbars, with the new toolbars implemented by new classes, and the old classes being kept around so that old activities are not immediately broken. This would be a good opportunity to remove the long-deprecated old activity toolbar classes.

Dependencies

Sugar would start depending on PyGObject built with gobject introspection support and GTK3, and (at the end of the transition period) would drop its dependencies on GTK2 and PyGTK.

Contingency plan

Hopefully, this page has been convincing in that this change is of necessity. Either way, we should consider our options for the case where this migration is found to be more difficult than predicted. As this migration would take over the course of a major release (or perhaps 2-3 of them, as components are ported individually), our users have the option to remain on older releases in the face of stability issues, and as developers we have the option of delaying major releases until things are ready.
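The escaping burden can be illustrated without GTK at all; Python's stdlib performs the same <, > and & substitution that GLib's markup escaping does (the helper name below is made up for the demo):

```python
from xml.sax.saxutils import escape

def set_palette_label_as_markup(text):
    # A palette API that always interprets its argument as Pango
    # markup forces every caller to escape first; a translated
    # string containing '<' or '>' would otherwise be parsed as
    # (broken) markup tags.
    return escape(text)

title = "<My File> & friends"
print(set_palette_label_as_markup(title))
# &lt;My File&gt; &amp; friends
```

With plain text as the default, only the few callers that deliberately pass markup would ever need this step.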
If new PyGI/GTK3-based major releases are found to be unstable, we also have the option of allowing interested community members to pick up maintenance of older GTK2-based release branches (e.g. 0.94) with a "master first" commit policy. Personally, I would suggest that it would be a better use of resources to have those people help fix any problems with the GTK3 version, but as an open source community this is something we must be open to. In the event of problems in the GTK3 version of sugar-toolkit, activity authors have the (default) choice of staying with GTK2. The transition period before the GTK2 version is removed would obviously be extended in the face of significant problems with the GTK3 version.

Documentation

Release Notes

Hackfests

This work would benefit from some focused attention at in-person meetings. See Features/GTK3/Desktop Summit activities for earlier desktop summit ideas.

On September 10th-12th, a SugarCamp will be held in Paris. We will be working on this plan at the event, starting with the prerequisites.

On October 28th-30th, a hackfest will be held in Prague with the specific purpose of implementing this plan: Marketing_Team/Events/Gtk3_Hackfest_2011

Subpages
CC-MAIN-2019-22
refinedweb
3,269
50.57
An update to the Silverlight tools for Visual Studio 2008 Beta 2 has been submitted to the Microsoft download center. It takes a bit of time for it to propagate through the download servers; it depends on the server you hit as to which version you'll get. So wait a while until you download, and it should be there. Sorry, I don't have a way of knowing when it'll finally be there. The download page is at.

Edit: Looks like it's up there for me now.

To know if you have the newer version, download it to disk, then right click and open the properties dialog. In the details panel, you're looking for a file version of "9.0.20706.18". The key number you're looking for is the final "18"; if it says "12" then you have the original release.

The changes in this release are: Please uninstall the old Silverlight tools release before installing this one, to ensure you have the updates. You should not need to remove/reinstall the Silverlight 1.1 alpha refresh runtime if you already have it installed. If you have added the registry key because of the GUID change, you'll probably want to remove that.

For those of you using Blend, the Blend team is working on an update as well. Until then, you will continue to need the GUID change to get Blend to work properly with Silverlight 1.1 applications.

Software Architect - Microsoft Expression Blend

Dear Blend team, I tried the GUID change (I think...adding registry key, right?) but I still get the error "the name 'Canvas' does not exist in the namespace ...". Is this a known error and does anyone know how to fix it?

Hi samsp, I've downloaded the Visual Studio tools for Silverlight from the link you have provided. When I checked, it's version 9.0.20706.18. Now should I install this version, or do I need the tool with 9.0.20706.12? Thanks, Rajesh Shirpuram
If your question has not been answered, please post a followup question. Hi, I wish anyone can help, I had installed VS 2008 Beta 2 C#, and whenever I tried installing "VS_SilverlightTools_Alpha" it gives me the message "You must install Micrsoft Visual Studio 2008 Beta 2 before installing this product". Thanks,Hatem AminGUI ArchitectCOLTEC M.E.
http://silverlight.net/forums/t/3580.aspx
D Structs vs Classes and a Simple Template Example

D's classes have more in common with those of Java than C++. For starters, they're reference types. Whenever you need an instance of a class, you new it and it is allocated on the heap (though scoped stack allocation is possible via a standard library template). Structs, on the other hand, are value types. But you can create pointers to them, allocate them on the heap with new, or pass them by reference to a function that takes ref parameters. Another difference between the two is that classes have a vtable and can be extended. Structs, however, have no vtable and cannot be extended. That also applies to the Java-esque interfaces in D -- classes can implement them, structs cannot.

Because of the differences between the two, there are certainly some design implications that need to be considered before implementing an object. But I'll talk about that another day. For now, I just wanted to give a little background before getting to an example illustrating the primary motivation for this particular post.

While working on Dolce, I thought it would be a good idea to wrap Allegro's ALLEGRO_CONFIG objects, simply because working with a raw, string-heavy C API in D can be a bit tedious. You have to convert D strings to C strings and vice versa. In this particular case, you also have to convert from string values to integers, booleans, and so on, since Allegro only returns raw C strings. So what I wanted was something that allowed me to create and load configs, then set and fetch values of the standard built-in types. Rather than implementing a separate get/set method for each type, I chose to use templates. And in this case, I didn't want the whole object to be templated, just the methods that get and set values. Initially, I implemented it as a class, but in the rewrite of Dolce I pulled the load, create and unload methods out and made them free functions. I also realized that this is a perfect candidate for a struct.
The reason is that it contains only one member, a pointer to an ALLEGRO_CONFIG. This means I can pass it around by value without care, as it's only the size of a pointer. Here's the implementation:

    struct Config
    {
        private
        {
            ALLEGRO_CONFIG* _config;
        }

        ALLEGRO_CONFIG* allegroConfig() @property
        {
            return _config;
        }

        bool loaded() @property
        {
            return (_config !is null);
        }

        T get(T)(string section, string key, T defval)
        {
            if(!loaded) return defval;

            auto str = al_get_config_value(_config, toStringz(section), toStringz(key));
            if(str is null) return defval;

            // std.conv doesn't seem to want to convert char* values to numeric values.
            string s = to!string(str);

            static if( is(T == string) )
                return s;
            else
                return to!T(s);
        }

        void set(T)(string section, string key, T val)
        {
            if(!loaded) return;

            static if( is(T == string) )
                auto str = val;
            else
                auto str = to!string(val);

            al_set_config_value(_config, toStringz(section), toStringz(key), toStringz(str));
        }
    }

You'll notice that the _config field is private. I don't normally make struct fields private, as the structs I implement are usually intended to be manipulated directly. But in this case I thought it prudent to hide the pointer away. I still provide access through the allegroConfig property (and I'll discuss properties another day) in case it's really needed. So you may also be wondering how _config is ever set if it's private and there's nothing in the struct itself that sets the field. The answer lies here:

    Config createConfig()
    {
        return Config(al_create_config());
    }

This is something that frequently trips up D newbies coming from other languages. What's going on is that the create function and the Config implementation are in the same module. You can think of modules as another level of encapsulation. And, for those who are steeped in C++ vernacular, you can think of modules as friends of every class and struct implemented within them. In other words, private class/struct members/methods are all visible within the same module.
If you ever get to know D at all, you'll likely find this to be a very convenient feature.

Another thing that might jump out at you is the static if. This is one way to conditionally generate code at compile time. You'll frequently see it used in templates, though its use is not restricted to templates. Here, I'm testing if the type T is a string or not. In the get method, the value returned from Allegro is a char*, so it is converted to a D string. If the type of T is string, then there's no need for any further conversion and the D string can be returned. If not, the D string must be converted to the appropriate type (int, bool, long or whatever). Similarly, in the set method, a char* needs to be passed to Allegro, so any nonstring values are first converted to a D string. But if T is string, that step isn't necessary.

And now to the templates. D's template syntax is clean and extremely powerful. This example demonstrates the cleanliness part at least.

    T get(T)(string section, string key, T defval)

If you've ever used C++ templates, it should be clear what is going on here. The type to be accepted is declared in the first pair of parentheses, the parameter list in the second pair. And in this case, a value of the specified type is returned. Now, to use it:

    auto config = createConfig();
    auto i = config.get!int("Video", "Width", 800);
    auto b = config.get!bool("Video", "Fullscreen", false);

Here, I've used the auto keyword for each variable. The compiler will infer the Config, int, and bool types for me. As for the template instantiations, notice the exclamation point used between the method name and the type. That's what you use to instantiate a template. Technically, I should be wrapping the type of the template in parentheses, like this:

    auto i = config.get!(int)("Video", "Width", 800);

If you have a template with more than one type in the type list, then you have to use the parens. But if there is only one type, the compiler lets you get away with dropping them.
For singly-typed templates, that has become a standard idiom in D. If you decide to give D a spin and come from a C++ or Java background, I hope this post helps keep you from any initial confusion that might arise when you find that things aren't quite the same as you're used to. You can learn more about D's structs, classes and templates at d-programming-language.org.
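For Java readers, a rough runtime analogue of the templated get/set pattern above can be sketched with generics. Everything here is illustrative and not part of Dolce: the PropsConfig name, the java.util.Properties backing store, and the caller-supplied parser function are my own stand-ins, and Java dispatches the string-to-type conversion at runtime, where D's static if resolves it at compile time.

```java
import java.util.Properties;
import java.util.function.Function;

// Illustrative sketch: a string-backed config with typed get-with-default,
// loosely mirroring the D Config.get template from the post above.
public class PropsConfig {
    private final Properties props = new Properties();

    public void set(String key, Object val) {
        // Everything is stored as a string, as in the Allegro config API.
        props.setProperty(key, String.valueOf(val));
    }

    // The caller supplies the parser, so one method serves all value types.
    public <T> T get(String key, T defval, Function<String, T> parse) {
        String s = props.getProperty(key);
        return (s == null) ? defval : parse.apply(s);
    }

    public static void main(String[] args) {
        PropsConfig config = new PropsConfig();
        config.set("Video.Width", 800);
        int width = config.get("Video.Width", 640, Integer::parseInt);
        boolean fullscreen = config.get("Video.Fullscreen", false, Boolean::parseBoolean);
        System.out.println(width + " " + fullscreen); // prints "800 false"
    }
}
```

The parser argument plays the role that the type parameter plays in the D version: it tells the single generic method how to turn the stored string back into the requested type.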
https://www.gamedev.net/blog/1140/entry-2250469-d-structs-vs-classes-and-a-simple-template-example/
Effect3D FAQ

We strive to provide you with the most complete and up-to-date product information. These pages are created from the questions our technical support staff are most commonly asked. If you cannot find the answer you are looking for, please complete the customer support form or email us at the following address: support@reallusion.com.

Can I import graphic files?
6) Where can I get more 3D models to import?
7) There are so many different effect parameters, I don't know where to start!
8) What is a custom animation sequence?

If you have upgraded to the Standard Edition, you do not need to download the version from the online store. If you are running Effect3D Standard Edition in Trial mode, you can enter your serial number in the following way: This will then run Effect3D.

Can I import graphic files?
You can only import and animate 3D models. Effect3D allows you to import models in the 3DS format (which, in the world of 3D, is really the industry standard). You can also import your own GIF or BMP files for use as a background. However, these cannot be animated in any way. If you wanted to animate your logo, for instance, it would need to be created first in a 3D modeling environment; to do this you need a tool like "3D Studio Max" or "Nendo".

Where can I get more 3D models to import?
There are many hundreds of sites on the Internet that offer free 3D models and objects. If you are new to 3D, a great place to start is our FREE 3D model resource page.

What is a custom animation sequence?
An in-built customized animation sequence supplied with a model. These animation sequences are designed for, and can only be applied to, the specific model. For instance, the "Electric fan" model has an in-built animation sequence that spins the fan blades and rotates the fan head.

What is the difference between the trial and full version?
The difference between the trial version and the full version is that graphics exported from the trial version will include a trial watermark.
http://www.reallusion.com/effect3d/e3d_faq.asp
All COM interfaces support the QueryInterface function, so you can use any passed interface to get to other interfaces you might require. To return an object to Visual Basic, you simply return an interface pointer; in fact, they're one and the same. Here's an example taken from the MSDN CDs. The following ClearObject function will try to execute the Clear method for the object being passed, if it has one. The function will then simply return a pointer to the interface passed in. In other words, it passes back the object.

    #include <windows.h>

    #define CCONV _stdcall

    LPUNKNOWN CCONV ClearObject(LPUNKNOWN * lpUnk)
    {
        auto LPDISPATCH pdisp;

        if( NOERROR == (*lpUnk)->QueryInterface(IID_IDispatch, (LPVOID *)&pdisp) )
        {
            auto DISPID dispid;
            auto DISPPARAMS dispparamsNoArgs = {NULL, NULL, 0, 0};
            // GetIDsOfNames takes an array of wide (OLECHAR) strings,
            // so a wide literal is needed here rather than a narrow char* one.
            auto LPOLESTR name = L"clear";

            if( S_OK == pdisp->GetIDsOfNames(IID_NULL, &name, 1, NULL, &dispid) )
            {
                pdisp->Invoke( dispid
                             , IID_NULL
                             , NULL
                             , DISPATCH_METHOD
                             , &dispparamsNoArgs
                             , NULL
                             , NULL
                             , NULL
                             );
            }
            pdisp->Release();
        }
        return *lpUnk;
    }

The following Visual Basic code calls the ClearObject function:

    Private Declare Function ClearObject Lib "somedll.dll" _
        (ByVal X As Object) As Object

    Private Sub Form_Load()
        List1.AddItem "item #1"
        List1.AddItem "item #2"
        List1.AddItem "item #3"
        List1.AddItem "item #4"
        List1.AddItem "item #5"
    End Sub

    Private Sub ObjectTest()
        Dim X As Object
        ' Assume there is a ListBox with some displayed items
        ' on the form.
        Set X = ClearObject(List1)
        X.AddItem "This should be added to the ListBox"
    End Sub
http://www.brainbell.com/tutors/Visual_Basic/Passing_a_Visual_B__Object_to_Visual_C.htm
Constructors - super (Java Notes)

Every object contains the instance variables of its class. What isn't so obvious is that every object also has all the instance variables of all superclasses (parent class, grandparent class, etc.).

A constructor must have the same name as its class. Within a constructor, this(...) calls another constructor of the same class, while super(...) calls a constructor of the superclass. A class does not inherit constructors from its superclass, and if a constructor does not explicitly begin with this(...) or super(...), the compiler automatically inserts a call to the parameterless superclass constructor.

The super keyword is also used to access members of the superclass, for example when a subclass overrides a method inherited from the superclass but still needs to invoke the superclass version. java.lang.Object is the superclass of all classes, predefined and user-defined alike.
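A minimal sketch of the this(...)/super(...) constructor-chaining rules discussed above; the Vehicle/Car class names are made up for illustration:

```java
// Demonstrates this(...) and super(...) chaining between constructors.
class Vehicle {
    final String kind;

    Vehicle(String kind) {
        this.kind = kind;
    }
}

class Car extends Vehicle {
    final int wheels;

    Car() {
        this(4);          // this(...) delegates to the Car(int) constructor below
    }

    Car(int wheels) {
        super("car");     // super(...) runs the Vehicle constructor first
        this.wheels = wheels;
    }

    String describe() {
        return kind + " with " + wheels + " wheels";
    }
}

public class SuperDemo {
    public static void main(String[] args) {
        System.out.println(new Car().describe()); // prints "car with 4 wheels"
    }
}
```

Removing the explicit super("car") call here would not compile, because Vehicle has no parameterless constructor for the compiler's implicit super() call to reach.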
http://roseindia.net/tutorialhelp/comment/3816
On 2014-01-02 08:26:20 -0200, Fabrízio de Royes Mello wrote:
> On Thu, Jan 2, 2014 at 7:19 AM, Andres Freund <and...@2ndquadrant.com> wrote:
> > On 2013-12-31 13:37:59 +0100, Pavel Stehule wrote:
> > > > We use the namespace "ext" to the internal code
> > > > (src/backend/access/common/reloptions.c) skip some validations and store
> > > > the custom GUC.
> > > >
> > > > Do you think we don't need to use the "ext" namespace?
> > >
> > > yes - there be same mechanism as we use for GUC
> >
> > There is no existing mechanism to handle conflicts for GUCs. The
> > difference is that for GUCs nearly no "namespaced" GUCs exist (plperl,
> > plpgsql have some), but postgres defines at least autovacuum. and
> > toast. namespaces for relation options.
>
> autovacuum. namespace ???

Yea, right, it's autovacuum_...

Greetings,

Andres Freund

--
Andres Freund    PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg229204.html
In this section, you will learn how to get the file name without its extension. Description of code: In one of the previous sections, we discussed the extraction of the file extension from a given file. Here we are going to do just the reverse of that, i.e. retrieving the file name from the given file without its extension. You can see in the given example that we have created an object of the File class and passed a file string into the constructor of the class. The method getName() returns the name of the file, which is then used with the lastIndexOf() method to get the index of the dot (.) in the file name. Then, using the following code, we have got the file name without the extension. Here is the code:

import java.io.File;

public class FileNameWithoutExtension {
    public static void main(String[] args) {
        File file = new File("C:/data.txt");
        // index of the last dot in the file name
        int index = file.getName().lastIndexOf('.');
        // index > 0 skips names that begin with a dot,
        // and the upper bound requires at least one character after the dot
        if (index > 0 && index <= file.getName().length() - 2) {
            System.out.println("Filename without Extension: "
                    + file.getName().substring(0, index));
        }
    }
}
http://www.roseindia.net/tutorial/java/core/files/filenamewithoutextension.html
Introduction

Why use C?

The C language was developed at Bell Labs in the early 1970’s by Dennis Ritchie and Brian Kernighan. One of the first platforms for implementation was the PDP-11 running under a UNIX environment. Since its introduction, it has evolved and been standardized throughout the computing industry as an established development language. C is a portable language intended to have minimal modification when transferring programs from one computer to another. This is fine when working with PC’s and mainframes. An example of this is the port direction registers: on a PICmicro®MCU they are set 1=Input 0=Output, whereas the H8 is 0=Input and 1=Output. The main program flow will basically remain unchanged, while the various setup and port/peripheral control will be micro specific.

The use of C in Microcontroller applications has been brought about by manufacturers providing larger program and RAM memory areas in addition to faster operating speeds. The PC has become a cost effective development platform using C++ or other favored versions of the ANSI standard. The development tools for PIC based designs offer the developer basically the same facilities as the PC based development with the exception of the graphics libraries.

‘Ah’ I hear you say as you rush to buy a C compiler – why do we bother to write in assembler? It comes down to code efficiency – a program written in assembler is typically 80% the size of a C version. Fine on the larger program memory sized devices but not so efficient on smaller devices. An example quoted to me – as a non believer – was: to create a stopclock function would take 2/3 days in C or 2 weeks in assembler. You pay the money and take your PIC!!

PC Based vs. PICmicro®MCU Based Program Development

Engineers starting development on PC based products have the luxury of basic hardware pre-wired (i.e. memory, processor, keyboard, printer and visual display (screen)). Those embarking on a PIC based design have to create all the interfaces to the outside world in the form of input and output hardware. A PC programmer could write the message “Hello World” and after compiling, have the message displayed on the screen. The PIC programmer would have to build an RS232 interface, attach the development board to a comm. port on a PC and set up the comm. port within the PIC to enable the message to be viewed. ‘Why bother’ I hear you say (and so did I). It comes down to portability of the end product. If we could get the whole of a PC in a 40 pin DIL package (including monitor and keyboard) we would use it; today’s miniaturization does not reach these limits. We will continue to use Microcontrollers like the PIC for low cost and portable applications.

Product Development

Product development is a combination of luck and experience. To design a product one needs: time – peace and quiet – a logical mind and most important of all a full understanding of the requirements. Some of the simplest tasks can take a long time to develop and to perfect in proportion to the overall product – so be warned where tight timescales are involved. The product development then comes down to writing the software and debugging the errors. I find the easiest way to begin any development is to start with a clean sheet of paper together with the specification or idea. Start by drawing out a number of possible solutions and examine each to try to find the simplest and most reliable option. Do not discard the other ideas at this stage as there are possibly some good thoughts there. Draw out a flow chart, block diagram, I/O connection plan or any suitable drawing to get started. Build up a prototype board or hardware mimic board with all the I/O.
Then start writing code – in testable blocks – and gradually build up your program. This saves trying to debug 2000 lines of code in one go! Don’t forget I/O pins can be swapped to make board layout easier at a later date – usually with minimal modification to the software. If this is your first project – THEN KEEP IT SIMPLE – try switching an LED or two on and off from push buttons to get familiar with the instructions, assembly technique and debugging before attempting a mammoth project.

The Idea

An idea is born – maybe by yourself in true EUREKA style or by someone else having a need for a project – the basic concept is the same. Before the design process starts, the PIC language (instruction set, terms and development kit) needs to be thoroughly understood before the design can commence. So in the case of Microcontroller designs based on the PICmicro®MCU, let’s start with some basic terminology used, some facts about the PIC and the difference between Microprocessor and Microcontroller based systems. Now let’s get started with the general terms.

Terminology

Microcontroller
A lump of plastic, metal and purified sand, which without any software, does nothing. When software controls a microcontroller, it has almost unlimited applications.

Hardware
The Microcontroller, power supplies, signal conditioning circuits and all the components connected to it to make it work and interface to the outside world. Another way of looking at it (especially when it does not work) is that you can kick hardware.

Software
The information that the Microcontroller needs to operate or run. Software can be written in a variety of languages such as C, Pascal or Assembler (one level up from writing your software in binary).

I/O
A connection pin to the outside world which can be configured as input or output. I/O is needed in most cases to allow the microcontroller to communicate, control or read information.

Source File
A program written in a language the assembler and you understand. The source file has to be processed before the Microcontroller will understand it.

Assembler / Compiler
A software package which converts the Source file into an Object file. Error checking is built in, a heavily used feature in debugging a program as errors are flagged up during the assembly process. MPASM is the latest assembler from Microchip handling all the PIC family.

Object File
This is a file produced by the Assembler / Compiler and is in a form which the programmer, simulator or ICE understands to enable it to perform its function. The file extension is .OBJ or .HEX depending on the assembler directive.

List File
This is a file created by the Assembler / Compiler and contains all the instructions from the Source file together with their hexadecimal values alongside and comments you have written. This is the most useful file to examine when trying to debug the program as you have a greater chance of following what is happening within the software than in the Source file listing. The file extension is .LST

Other Files
The error file (.ERR) contains a list of errors but does not give any indication as to their origin. The .COD file is used by the emulator.

Simulator
The MPLAB® development environment has its own built-in simulator which allows access to some of the internal operation of the microcontroller. This is a good way of testing your designs if you know when events occur. If an event occurs ‘somewhere about there’, you might find the simulator restrictive.

In Circuit Emulator
The In Circuit Emulator (ICEPIC or PICmicro®MCU MASTER) is a very useful piece of equipment connected between your PC and the socket where the Microcontroller will reside. It enables the software to be run on the PC but look like a Microcontroller at the circuit board end. The ICE allows you to step through a program, watch what happens within the micro and how it communicates with the outside world. Full trace, step and debug facilities are available. Another product for 16C5x development is the SIM ICE – a hardware simulator offering some of the ICE features but at a fraction of the cost.

Programmer
A unit to enable the program to be loaded into the microcontroller’s memory which allows it to run without the aid of an ICE. They come in all shapes and sizes and costs vary. Both the PICSTART PLUS and PROMATE II from Microchip connect to the serial port.

Bugs
Errors created free of charge by you. These range from simpel typin errus to incorrect use of the software language – syntax errors. Most of these bugs will be found by the compiler and shown up in a .LST file; others will have to be sought and corrected by trial and error.

Microprocessor
A microprocessor or digital computer is made up of three basic sections: CPU, I/O and Memory – with the addition of some support circuitry. Each section can vary in complexity from the basic to all bells and whistles.

[Block diagram: TYPICAL MICROPROCESSOR SYSTEM – CPU (4, 8 or 16 bit), MEMORY (RAM, EPROM, EEPROM), I/O (DIGITAL, PWM, ANALOG, RS232, I2C), WATCHDOG TIMER and OSCILLATOR, linked by DATA and ADDRESS busses]

Taking each one in turn: Input/output (I/O) can comprise digital, analog and special functions and is the section which communicates with the outside world. The central processor unit (CPU) is the heart of the system and can work in 4, 8, or 16 bit data formats to perform the calculations and data manipulation. The memory can be RAM, EPROM, EEPROM or any combination of these and is used to store the program and data. An oscillator is required to drive the microprocessor. Its function is to clock data and instructions into the CPU, compute the results and then output the information.
It is normal to refer to a Microprocessor as a product which is mainly the CPU area of the system. The I/O and memory would be formed from separate chips and require a Data Bus, Address Bus and Address Decoding to enable correct operation. Other circuitry found associated with the microprocessor are the watch dog timer – to help prevent system latch up, buffering for address and data busses to allow a number of chips to be connected together without deteriorating the logic levels, and decode logic for address and I/O to select one of a number of circuits connected on the same bus.

Microcontrollers

The PICmicro®MCU, on the other hand, is a Microcontroller and has all the CPU, memory, oscillator, watchdog and I/O incorporated within the same chip. This saves space, design time and external peripheral timing and compatibility problems, but in some circumstances can limit the design to a set memory size and I/O capabilities. The PIC family of microcontrollers offers a wide range of I/O, memory and special functions to meet most requirements of the development engineer. You will find many general books on library shelves exploring the design of microcontrollers, microprocessors and computers, so the subject will not be expanded or duplicated here other than to explain the basic differences.

Why use the PIC?

Code Efficiency
The PIC is an 8 bit Microcontroller based on the Harvard architecture – which means there are separate internal busses for memory and data. Conventional microcontrollers tend to have one internal bus handling both data and program. This slows operation down by at least a factor of 2 when compared to the PICmicro®MCU. The throughput rate is therefore increased due to simultaneous access to both data and program memory.

Instruction Set
There are 33 instructions you have to learn in order to write software for the 16C5x family; the instructions are 14 bits wide for the 16Cxx family. Each instruction, with the exception of CALL, GOTO or bit testing instructions (BTFSS, INCFSZ), executes in one cycle.

Safety
All the instructions fit into a 12 or 14 bit program memory word. There is no likelihood of the software jumping onto the DATA section of a program and trying to execute DATA as instructions. This can occur in a non Harvard architecture microcontroller using 8-bit busses.

Speed
The PIC has an internal divide by 4 connected between the oscillator and the internal clock bus. This makes instruction time easy to calculate, especially if you use a 4 MHz crystal. Each instruction cycle then works out at 1 uS. The PIC is a very fast micro to work with e.g. a 20MHz crystal steps through a program at 5 million instructions per second! – almost twice the speed of a 386SX 33!

Static Operation
The PIC is a fully static microprocessor; in other words, if you stop the clock, all the register contents are maintained. In practice you would not actually do this: you would place the PIC into a Sleep mode – this stops the clock and sets up various flags within the PIC to allow you to know what state it was in before the Sleep. In Sleep, the PIC takes only its standby current which can be less than 1uA.

Drive Capability
The PIC has a high output drive capability and can directly drive LEDs and triacs etc. Any I/O pin can sink 25mA, or 100mA for the whole device.

Options
A range of speed, temperature, package, I/O lines, timer functions, serial comms, A/D and memory sizes is available from the PIC family to suit virtually all your requirements.

Versatility
The PIC is a versatile micro and in volume is a low cost solution to replace even a few logic gates.

[PIC function block diagram: PIC16F84A (14 bit)]

Security
The PICmicro®MCU has a code protection facility which is one of the best in the industry. Once the protection bit has been programmed, the contents of the program memory cannot be read out in a way that the program code can be reconstructed.

Development
The PIC is available in windowed form for development and OTP (one time programmable) for production. The tools for development are readily available and are very affordable even for the home enthusiast.

Trying and Testing Code
Getting to grips with C can be a daunting task and the initial outlay for a C compiler, In Circuit Emulator and necessary hardware for the PIC can be prohibitive at the evaluation stage of a project. The C compiler supplied on this disk was obtained from the Internet and is included as a test bed for code learning. Basic code examples and functions can be tried, tested and viewed before delving into PIC specific C compilers which handle I/O etc.

C Coding Standards
Program writing is like building a house – if the foundations are firm, the rest of the code will stack up. If the foundations are weak, the code will fall over at some point or other. The following recommendations were taken from a C++ Standards document and have been adapted for the PIC.

Names – make them fit their function
Names are the heart of programming so make a name appropriate to its function and what it’s used for in the program. Use mixed case names to improve the readability: ErrorCheck is easier than ERRORCHECK. Prefix names with a lowercase letter of their type, again to improve readability:

g Global    gLog
r Reference rStatus()
s Static    sValueIn

Braces {}
Braces or curly brackets can be used in the traditional UNIX way

if (condition) {
    ...
}

or the preferred method which is easier to read

if (condition)
{
    ...
}

Tabs and Indentation
Use spaces in place of tabs as the normal tab setting of 8 soon uses up the page width. Also, tabs set in one editor may not be the same settings in another – make the code portable. Indent text only as needed to make the software readable.

Line Length
Keep line lengths to 78 characters for compatibility between monitors and printers.

Comments
Comments create the other half of the story you are writing. You know how your program operates today, but in two weeks or two years will you remember, or could someone else follow your program as it stands today? Use comments to mark areas where further work needs to be done, errors to be debugged or future enhancements to the product. Rework your flowchart to keep it up to date.

Else If Formatting
Include an extra Else statement to catch any conditions not covered by the preceding if’s:

if (condition)
{
}
else if (condition)
{
}
else
{
    /* catches anything else not covered above */
}

Condition Format
Where the compiler allows it, always put the constant on the left hand side of an equality / inequality comparison. If one = is omitted, the compiler will find the error for you:

if ( 6 == ErrorNum) ...

Initialize All Variables
Set all variables to a known value to prevent ‘floating or random conditions’:

int a=6, b=0;

The value is also placed in a prominent place.

Basics

All computer programs have a start. The start point in Microcontrollers is the reset vector. The 14 bit core (PIC16Cxx family) resets at 00h; the 12 bit core (PIC16C5x and 12C50x) resets at the highest point in memory – 1FFh, 3FFh, 7FFh. The finish point would be where the program stops if run only once, e.g. a routine to set up a baud rate for communications. Other programs will loop back towards the start point, such as traffic light control. One of the most widely used first programming examples in high level languages like Basic or C is printing ‘Hello World’ on the computer screen. Using C and a PC is straightforward as the screen, keyboard and processor are all interconnected. When developing a program for the PICmicro®MCU or any microprocessor / microcontroller system, you need not only the software hooks but also the physical hardware to connect the micro to the outside world. The basic hooks need to be placed in the program to link the program to the peripherals. Such a system is shown below.

[Diagram: PC connected via COMMS to a TARGET BOARD through an ICE, with DATA and I/O connections]

Using such a layout enables basic I/O and comms to be evaluated.
The initial investment may appear excessive when facing the start of a project, but is soon outstripped by the time saved in developing and debugging. You WILL need a PIC programmer such as the PICSTART Plus as a minimal outlay in addition to the C compiler. The hardware needed to evaluate a design can be a custom made PCB, protoboard or an off the shelf development board such as our PICmicro®MCU Millennium board (someone had to do one!). The Millennium board contains all the basic hardware to enable commencement of most designs while keeping the initial outlay to a minimum. The use of the ICE, though not essential, speeds up development and saves the engineer headaches.

A simple program I use when teaching engineers about the PIC is the ‘Press button – turn on LED’. Assemble the following hardware in whichever format you prefer. Start with a simple code example – not 2000 lines of code! In Assembler this would be:

main    btfss   porta,switch    ;test for switch closure
        goto    main            ;loop until pressed
        bsf     portb,led       ;turn on led
lp1     btfsc   porta,switch    ;test for switch open
        goto    lp1             ;loop until released
        bcf     portb,led       ;turn off led
        goto    main            ;back to start

In C this converts to:

main()
{
    set_tris_b(0x00);
    while(true)
    {
        if (input(PIN_A0))
            output_high(PIN_B0);
        else
            output_low(PIN_B0);
    }
}

When assembled, the compiled version takes more words in memory – 14 in C as opposed to 9 in Assembler. This is not a fair example on code, but as programs get larger, the more efficient C becomes in code usage.

1.1 The Structure of C Programs

All C programs contain preprocessor directives, declarations, definitions, expressions, statements and functions.

Preprocessor directive
A preprocessor directive is a command to the C preprocessor (which is automatically invoked as the first step in compiling a program). The two most common preprocessor directives are the #define directive, which substitutes text for the specified identifier, and the #include directive, which includes the text of an external file into a program.

Declaration
A declaration establishes the names and attributes of variables, functions, and types used in the program. Global variables are declared outside functions and are visible from the end of the declaration to the end of the file. A local variable is declared inside a function and is visible from the end of the declaration to the end of the function.

Definition
A definition establishes the contents of a variable or function. A definition also allocates the storage needed for variables and functions.

Expression
An expression is a combination of operators and operands that yields a single value.

Statement
Statements control the flow or order of program execution in a C program.

Function
A function is a collection of declarations, definitions, expressions, and statements that performs a specific task. Braces enclose the body of a function. Functions may not be nested in C.

1.2 Components of a C program

All C programs contain essential components such as statements and functions. Statements are the parts of the program that actually perform operations. All C programs contain one or more functions, each of which contains one or more statements and can be called upon by other parts of the program. When writing programs, indentations, blank lines and comments improve the readability – not only for yourself at a later date, but also for those who bravely follow on. The following example shows some of the required parts of a C program.

Example: General C program structure

#include <stdio.h>        /* preprocessor directive: include standard C header file */
#define PI 3.142
float area;               /* global declaration */
int square (int r);       /* prototype declaration */

main()                    /* beginning of main function */
{
    int radius_squared;   /* local declaration */
    int radius = 3;       /* declaration and initialization */
    radius_squared = square (radius);   /* pass a value to a function */
    area = PI * radius_squared;         /* assignment statement */
    printf(“Area is %6.4f square units\n”, area);
}                         /* end of main function & program */

square(int r)             /* function head */
{
    int r_squared;        /* declarations here are known only to square */
    r_squared = r * r;
    return(r_squared);    /* return value to calling statement */
}

main Function
All C programs must contain a function named main where program execution begins. The braces that enclose the main function define the beginning and ending point of the program.

#include <stdio.h>
/* My first C program */
main()
{
    printf(“Hello world!”);
}

/* My first C program */ is a comment in C. Comments are ignored by the compiler and therefore do not affect the speed or length of the compiled code. Traditional comments are preceded by a /* and end with a */. Newer style comments begin with // and go to the end of the line. The statement #include <stdio.h> tells the compiler to include the source code from the file ‘stdio.h’ into the program. The extension .h stands for header file. A header file contains information about standard functions that are used in the program. The header file stdio.h, which is called the STandarD Input and Output header file, contains most of the input and output functions. It is necessary to use only the include files that pertain to the standard library functions in your program. Failure to include this will generally flag an error in the NEXT line. The curly braces { and } show the beginning and ending of blocks of code in C. All C programs must have a main() function. Finally, the statement printf(“Hello world!”); presents a typical C statement. Almost all C statements end with a semicolon (;). All statements have a semicolon at the end to inform the compiler it has reached the end of the statement and to separate it from the next statement. The end-of-line character is not recognized by C as a line terminator. Therefore, there are no constraints on the position of statements within a line or on the number of statements on a line. The if statement is a compound statement and the ; needs to be at the end of the compound statement:

if (ThisIsTrue)
    DoThisFunction();

1.3 #pragma
The pragma command instructs the compiler to perform a particular action at compile time such as specifying the PICmicro®MCU being used or the file format generated.

#pragma device PIC16C54

In CCS C the pragma is optional so the following is accepted:

#device pic16c54

1.4 main()
Every program must have a main function which can appear only once. As main is classed as a function, all code which follows must be placed within a pair of braces { } or curly brackets. No parameters can be placed in the ( ) brackets which follow. The keyword void may optionally appear between the ( and ) to clarify there are no parameters.

main()
{
    body of program
}

1.5 #include

#include <16c71.h>
#include <ctype.h>
#use rs232(baud=9600, xmit=PIN_B0, rcv=PIN_B1)

main()
{
    printf(“Enter characters:”);
    while(TRUE)
        putc(toupper(getc()));
}

The definitions PIN_B0 and PIN_B1 are found in the header file 16C71.H. The function toupper is found in the header file CTYPE.H. To save constants in the chip ROM, use the const keyword in a variable declaration.
For example:

char const id[5]={“1234”};

We use five locations to hold the string because the string should be terminated with the null (\0) character. Note that you can #define any text; #define data is not stored in memory, it is resolved at compile time. Commands beginning with a # are called pre-processor directives.

1.9 Comments
Comments are used to document the meaning and operation of the source code. All comments are ignored by the compiler. A comment can be placed anywhere in the program except for the middle of a C keyword, function name, or variable name. Comments can be many lines long and may also be used to temporarily remove a line of code. Finally, comments cannot be nested.

/* This comment is very, very, very, very, very, very long and is valid */

/* This comment /* looks */ ok, but is invalid */

1.10 Functions
Functions are the basic building blocks of a C program. All C programs contain at least one function, main(). Most programs that you will write will contain many functions. The format for a C program with many functions is:

main()
{
}

function1()
{
}

function2()
{
}

main() is the first function called when the program is executed. Traditionally main() is not called by any other function, however, there are no restrictions in C. The other functions, function1() and function2(), can be called by any function in the program. The following is an example of two functions in C.

main()
{
    printf(“I “);
    function1();
    printf(“C.“);
}

function1()
{
    printf(“like “);
}

One reminder when writing your own functions is that when the closed curly brace of a function is reached, the program will start executing code one line after the point at which the function was originally called.

1.11 Macros
#define is a powerful directive as illustrated in the previous section. C allows defines to have parameters, making them even more powerful. When parameters are used it is called a macro. Macros are used to enhance readability or to save typing. A simple macro:

#define var(x,v) unsigned int x=v;

var(a,1)
var(b,2)
var(c,3)

is the same as:

unsigned int a=1;
unsigned int b=2;
unsigned int c=3;

Another example that will be more meaningful after reading the expressions chapter:

#define MAX(A,B) ((A>B)?A:B)

z=MAX(x,y);    // z will contain the larger value x or y

1.12 Conditional compilation
C has some special pre-processor directives that allow sections of code to be included or excluded from compilation based on compile time settings. Example:

#define DEBUG

#ifdef DEBUG
printf(“ENTERING FUNCT X”);
#endif

In this example all the debugging lines in the program can be eliminated from the compilation by removing or commenting out the one #define line. #ifdef simply checks to see if an ID was #defined. Consider the following example:

#define HW_VERSION 5

#if HW_VERSION>3
output_high(PIN_B0);
#else
output_high(PIN_B0);
#endif

The above will compile only one line depending on the setting of HW_VERSION. The #if is evaluated and finished when the code is compiled, unlike a normal if that is evaluated when a program runs. There may be dozens of these #if’s in a file and the same code could be compiled for different hardware versions just by changing one constant.

1.13 Hardware Compatibility
The compiler needs to know about the hardware so the code can be compiled correctly. A typical program begins as follows:

#include <16c74.h>
#fuses hs,nowdt
#use delay(clock=8000000)

The first line includes device specific #defines such as the pin names. The second line sets the PICmicro®MCU fuses.
1.13 Hardware Compatibility

The compiler needs to know about the hardware so the code can be compiled correctly. A typical program begins as follows:

   #include <16c74.h>
   #fuses hs,nowdt
   #use delay(clock=8000000)

The first line includes device-specific #defines such as the pin names. The second line sets the PICmicro®MCU fuses; in this case the high-speed oscillator and no watchdog timer. The last line tells the compiler what the oscillator speed is. The following are some other example lines:

   #use rs232(baud=9600,xmit=PIN_C6,rcv=PIN_C7)
   #use i2c(master,scl=PIN_B6,sda=PIN_B7)

The example programs in this book do not show these hardware-defining lines; they are required to compile and run these programs.

In C, variables may be created and mapped to hardware registers. These variables may be bits or bytes. After they are defined, they may be used in a program as with any other variable. Examples:

   #bit carry=3.0
   #byte portb=6
   #byte intcon=11

1.14 C Keywords

The ANSI C standard defines 32 keywords for use in the C language. In C, certain words are reserved for use by the compiler to define data types or for use in loops. In addition, many C compilers add several keywords that take advantage of the processor's architecture. All C keywords must be in lowercase for the compiler to recognize them. The following is a list of the keywords, which are reserved from use as variable names:

   auto      double    int       struct
   break     else      long      switch
   case      enum      register  typedef
   char      extern    return    union
   const     float     short     unsigned
   continue  for       signed    void
   default   goto      sizeof    volatile
   do        if        static    while

EXERCISE:
1. Write a program that prints your name to the screen.
2. Write a program that declares one integer variable called year. This variable should be given the value of the current year and then, using a printf() statement, display the value of year on the screen. The result of your program should look like this:

   The year is 1998

2. Variables

An important aspect of the C language is how it stores data. This chapter will examine more closely how variables are used in C to store data. The topics discussed in this chapter are:

   data type declarations
   assignments
   data type ranges
   type conversions
2.1 Data Types

The C programming language supports five basic data types and four type modifiers. The following table shows the meanings of the basic data types and type modifiers. C allows a shorthand notation for the data types unsigned int, short int, and long int: simply use the word unsigned, short, or long without the int. The next table shows the possible range of values for all the possible combinations of the basic data types and modifiers (for example, float covers 3.4E-38 to 3.4E+38).

NOTE: See individual C compiler documentation for actual data types and numerical ranges.

C represents all negative numbers in the 2's complement format, to make arithmetic operations easier for the CPU. To find the 2's complement of a number, simply invert all the bits and add a 1 to the result. For example, to convert the signed number 29 into 2's complement:

   00011101 =  29
   11100010        invert all bits
          1        add 1
   --------
   11100011 = -29

Example of assigning a long value of 12000 to the variable a. 12000 in hex is 2EE0; the following code extract assigns the lower byte (E0) to register 11h and the upper byte (2E) to 12h:

   long a = 12000;
   main() {
      0007:  MOVLW  E0
      0008:  MOVWF  11
      0009:  MOVLW  2E
      000A:  MOVWF  12
   }

EXERCISE:
1. Write this statement in another way: long int i;
2. To understand the difference between a signed number and an unsigned number, type in the following program. The unsigned integer 35000 is represented by -30536 in signed integer format.

   main() {
      int i;            /* signed integer */
      unsigned int u;   /* unsigned integer */
      u = 35000;
      i = u;
      printf("%d %u\n", i, u);
      return 0;
   }
2.2 Variable Declaration

Variables can be declared in two basic places: inside a function or outside all functions. The variables are called local and global, respectively. Variables are declared in the following manner:

   type variable_name;

where type is one of C's valid data types and variable_name is the name of the variable.

Local variables (declared inside a function) can only be used by statements within the function where they are declared. The value of a local variable cannot be accessed by statements outside of the function. It is acceptable for local variables in different functions to have the same name. Consider the following example:

   void f2(void) {
      int count;
      for (count=0; count<10; count++)
         printf("%d \n", count);
   }

   main() {
      int count;
      for (count=0; count<10; count++)
         f2();
   }

This program will print the numbers 0 through 9 on the screen ten times. The operation of the program is not affected by a variable named count being located in both functions. The most important thing to remember about local variables is that they are created upon entry into the function and destroyed when the function is exited. Local variables must also be declared at the start of the function, before the statements.

Global variables, on the other hand, can be used by many different functions. Global variables must be declared before any functions that use them. Most importantly, global variables are not destroyed until the execution of the program is complete. The following example shows how global variables are used:

   int max;

   f1() {
      int i;
      for(i=0; i<max; i++)
         printf("%d ", i);
   }

   main() {
      max=10;
      f1();
      return 0;
   }

In this example, both functions main() and f1() reference the global variable max. The function main() assigns a value to max and the function f1() uses the value of max to control the for loop.

Both local and global variables may share the same name in C.

EXERCISE:
1. What are the main differences between local and global variables?
2. Type in the following program.

   int count;

   f1() {
      int count;
      count=100;
      printf("count in f1(): %d\n", count);
   }

   main() {
      count=10;
      f1();
      printf("count in main(): %d\n", count);
      return 0;
   }

In main() the reference to count is the global variable. In f1() the local variable count overrides the usage of the global variable.
2.3 Variable Assignment

Up to now we have only discussed how to declare a variable in a program, not how to assign a value to it. Assignment of values to variables is simple:

   variable_name = value;

An example of assigning the value 100 to the integer variable count is:

   count = 100;

Since a variable assignment is a statement, we have to include the semicolon at the end. The value 100 is called a constant. Many different types of constants exist in C. Whole numbers are used when assigning values to integers. Floating point numbers must use a value with a decimal point; for example, to tell C that the value 100 is a floating point value, use 100.0. A character constant is specified by enclosing the character in single quotes, such as 'M'.

A variable can also be assigned the value of another variable. The following program illustrates this assignment:

   main() {
      int i;
      int j;
      i=0;
      j=i;
   }

Variables can also be initialized at the same time as they are declared. This makes it easier and more reliable to set the starting values in your program to known conditions. E.g.:

   int a=10, b=0, c=0x23;

EXERCISE:
1. Write a program that declares one integer variable called count. Give count a value of 100 and use a printf() statement to display the value. The output should look like this:

   100 is the value of count

2. Write a program that declares three variables, a char, a float, and a double, with variable names of ch, f, and d. Assign an 'R' to the char, 50.5 to the float, and 156.007 to the double.
Display the value of these variables on the screen. The output should look like this:

   ch is R
   f is 50.5
   d is 156.007

2.4 Enumeration

In C, it is possible to create a list of named integer constants. This declaration is called an enumeration. The list of constants created with an enumeration can be used any place an integer can be used. The general form for creating an enumeration is:

   enum name {enumeration list} variable(s);

For example:

   enum color_type {red,green,yellow} color;

The compiler will assign integer values to the enumeration list starting with 0 at the first entry; each entry is one greater than the previous one. In the above example red is 0, green is 1 and yellow is 2. This default value can be overridden by specifying a value for a constant. For example:

   enum color_type {red,green=9,yellow} color;

This statement assigns 0 to red, 9 to green and 10 to yellow.

Enumeration variables may contain only the values that are defined in the enumeration list. Therefore, the variable color can only be assigned the values red, green, or yellow (i.e. color = red;). The variable list is an optional item of an enumeration. Once an enumeration is defined, the name can be used to create additional variables at other points in the program. For example, the variable mycolor can be created with the color_type enumeration by:

   enum color_type mycolor;

The variable can also be tested against another one:

   if (color==fruit)
      // do something

Essentially, enumerations help to document code. Instead of assigning a value to a variable, an enumeration can be used to clarify the meaning of the value.
The following algorithm shows the type conversions: IF an operand is a long double 42 . For example. typedef length depth.6 type Conversions C allows you to mix different data types together in one expression. assigns a value to it and displays the value to the screen. The first part of the rule set is a type promotion. 2.So that all integers declared as myint are 16-bits. typedef height length. A type promotion is only valid during the evaluation of the expression. you could use the following typefef statement to declare all your counter variables. the variable itself does not become physically larger. EXERCISE: 1. Use this typed in a short program that declares a variable using UL. Make a new name for unsigned long called UL. 2.6. float f = 25. The C compiler will automatically promote a char or short int in an expression to an int when the expression is evaluated. typedef int counter. If your code contains many variables used to hold a count of some sort. The mixing of data types is governed by a strict set of conversion rules that tell the compiler how to resolve the differences. The second reason to use typedef statements is to help you document your code. depth d. the C compiler will convert all variables in the expression up to the type of the largest variable. the following is a valid code fragment: char ch = ‘0’. Is the following segment of code valid? typedef int height. This task is done on an operation by operation basis. Someone reading your code would recognize that any variable declared as counter is used as a counter in the program. int i = 15. Now that the automatic type promotions have been completed. int a = 250. The number 100 will be printed to the screen after the segment of code is executed. c = a * b.2. the other will be converted to a float. The next operation is the division between ch*i and f. Finally. float f. printf(“%d”. type is a valid C data type and value is the variable or constant. 
The result of ch*i will be converted to a floating point number then divided by f. f = 100. you can specify the type conversions by using the following format: (type) value This is called type casting. b = 10. If two 8-bit integers are multiplied. 43 . long c. So if you need a long value as the answer. the result will still be an 8-bit integer as the arithmetic is performed before the result is assigned to the new variable. The algorithm specifies that if one of the operand is a float. the value of the expression ch*i/f is a float. First of all. ch is promoted to an int. The following code fragment shows how to print the integer portion of a floating point number. then at least one value needs to be initially defined as long or typecast. If the resulting value is assigned to a long. The first operation is the multiplication of ch with i. Instead of relying on the compiler to make the type conversions.(int)f). The result will be 196. no type conversion takes place. This causes a temporary change in the variable. but will be converted to a double for storage in the variable result. Since both of these variables are now integers. the result will be an 8-bit value. int a. The RAM locations are used in that ‘local’ block of code and can/will be used by other blocks of code. int a =1. b. register 0Eh assigned to C load w with 1 load register assigned to a with w load w with 3 load register assigned to b wit w load w with 5 load register assigned to e with w . The result will be 2500 because a was first typecast to a long and therefore a long multiply was done. } When a block of code is entered. There are four storage classes: automatic. main() { char c = 0. e. } is the same as { auto char c. auto int a. b = 3. the compiler assigns RAM space for the declared variables.7 variable Storage Class Every variable and function in C has two attributes – type and storage class. e. int. 2. external. etc.c = (long) a * b. The type has already been discussed as char. 
2.7 Variable Storage Class

Every variable and function in C has two attributes: type and storage class. The type has already been discussed (char, int, etc.). There are four storage classes: automatic, external, static and register. These storage classes have the following C names:

auto
Variables declared within a function are auto by default, unless otherwise defined, so

   {
      char c;
      int a, b, e;
   }

is the same as:

   {
      auto char c;
      auto int a, b, e;
   }

When a block of code is entered, the compiler assigns RAM space for the declared variables. The RAM locations are used in that 'local' block of code and can/will be used by other blocks of code. For example:

   main() {
      char c = 0;   // register 0Eh assigned to c
      int a = 1,    // load w with 1; load register assigned to a with w
          b = 3,    // load w with 3; load register assigned to b with w
          e = 5;    // load w with 5; load register assigned to e with w
   }

extern
The extern keyword declares a variable or a function and specifies that it has external linkage. That means its name is visible from files other than the one in which it is defined. There is no function for this class within CCS C.

static
The variable class static defines globally active variables which are initialized to zero, unless otherwise defined. For example:

   void test() {
      char x,y,z;
      static int count = 4;
      printf("count = %d\n", ++count);
   }

The variable count is initialized once, and thereafter increments every time the function test is called.

register
The variable class register originates from large system applications, where it would be used to reserve high-speed memory for frequently used variables. The class is used only to advise the compiler; it has no function within CCS C.
3. Functions

Functions are the basic building blocks of the C language. All statements must be within functions. In this chapter we will discuss how to pass arguments to functions and how to receive an argument from a function. The topics discussed in this chapter are:

   Passing Arguments to Functions
   Returning Arguments from Functions
   Function Prototypes
   Classic and Modern Function Declarations

3.1 Functions

In previous sections, we have seen many instances of functions being called from a main program. If you are using a standard C function, the header file that you included at the top of your program has already informed the compiler about the function. If you are using one of your own functions, the function must be declared or defined before it is used. For instance:

   main() {
      f1();
   }

   int f1() {
      return 1;
   }

In reality, this program should produce an error or, at the very least, a warning. The reason is that the function f1() must be declared or defined before it is used. There are two ways to correct this error. One is to use function prototypes, which are explained in the next section. The other is to reorganize your program like this:

   int f1() {
      return 1;
   }

   main() {
      f1();
   }

An error will not be generated, because the function f1() is defined before it is called in main().

3.2 Function Prototypes

There are two methods used to inform the compiler what type of value a function returns. The first is to declare the function at the top of the program, just like variables. The general form is:

   type function_name();

For instance, the statement int sum(); would tell the compiler that the function sum() returns an integer. The second way to inform the compiler about the return value of a function is the function prototype.
A function prototype not only gives the return value of the function, but also declares the number and type of arguments that the function accepts. The prototype must match the function declaration exactly. The general format for a function prototype is shown here:

   type function_name(type var1, type var2, type var3);

The type of each variable can be different. The importance of prototypes may not be apparent with the small programs that we have been doing up to now. However, as the size of programs grows from a few lines to many thousands of lines, the importance of prototypes in debugging errors is evident. Prototypes help the programmer to identify bugs in the program by reporting any illegal type conversions between the arguments passed to a function and the function declaration. A prototype also reports if the number of arguments sent to a function is not the same as specified in the function declaration. An example of a function prototype is shown in this program:

   int volume(int s1, int s2, int s3);

   void main() {
      int vol;
      vol = volume(5,7,12);
      printf("volume: %d\n", vol);
   }

   int volume(int s1, int s2, int s3) {
      return s1*s2*s3;
   }

Notice that the return uses an expression instead of a constant or variable. The function calculates the volume defined by length, width and height. To show how errors are caught by the compiler, change the above program to send four parameters to the function volume:

   vol = volume(5,7,12,15);

EXERCISE:
1. Is the following program correct? Why/Why not?

   double myfunc(void);

   void main() {
      printf("%f\n", myfunc(10.2));
   }

   double myfunc(double num) {
      return num/2;
   }
3.3 Void

One exception is when a function does not have any parameters passed in or out. Such a function would be declared as:

   void nothing(void)

An example of this could be:

   double pi(void)           // defining the function
   {                         // with nothing passed in,
      return 3.1415926536;   // but with pi returned
   }

   main() {
      double pi_val;
      pi_val = pi();         // calling the value of pi
      printf("%f\n", pi_val);
   }

3.4 Using Function Arguments

A function argument is a value that is passed to the function when the function is called. C allows from zero to several arguments to be passed to functions. The number of arguments that a function can accept is compiler dependent, but the ANSI C standard specifies that a function must be able to accept at least 31 arguments. When a function is defined, special variables must be declared to receive parameters. These special variables are defined as formal parameters; they are declared between the parentheses that follow the function's name. For example, the function below calculates and prints the sum of two integers that are sent to the function when it is called:

   void sum(int a, int b) {
      printf("%d\n", a+b);
   }

An example of how the function would be called in a program is:

   void sum(int a, int b);

   main() {
      sum(1,7);
      sum(15,6);
      sum(100,10);
   }

   void sum(int a, int b) {
      printf("%d\n", a+b);
   }

When sum() is called, the compiler will copy the value of each argument into the variables a and b. It is important to remember that the values passed to the function (1, 7, 15, 6, 100, 10) are called arguments and the variables a and b are the formal parameters.

Functions can pass arguments in two ways. The first way is called "call by value". This method copies the value of an argument into the formal parameter of the function; any changes made to the formal parameter do not affect the original value in the calling routine. The second method is called "call by reference". In this method, the address of the argument is copied into the formal parameter of the function. Inside the function, the formal parameter is used to access the actual variable in the calling routine, which means that changes can be made to the variable by using the formal parameter. We will discuss this further in the chapter on pointers. For now, we will only use the call by value method when we pass arguments to a function.

EXERCISE:
1. Write a function that takes an integer argument and prints the value to the screen.
2. What is wrong with this program?

   print_it(int num);   //This is a function prototype

   main() {
      print_it(156.25);
   }

   print_it(int num) {
      printf("%d\n", num);
   }
3.5 Using Functions to Return Values

Any function in C can return a value to the calling routine. The general format for telling the compiler that a function returns a value is:

   type function_name(formal parameters)
   {
      <statements>
      return value;
   }

where type specifies the data type of the return value of the function, and value is a constant, variable or any valid C expression that has the same data type as the return value. A function can return any data type except an array. If no data type is specified, the C compiler assumes that the function is returning an integer (int). If your function does not return a value, the ANSI C standard specifies that the function should return void; this explicitly tells the compiler that the function does not return a value.

It is important that you match the data type of the return value of the function to the data type of the variable to which it will be assigned. The same goes for the arguments that you send to a function. This example shows a typical usage for a function that has a return value:

   #include <math.h>

   main() {
      double result;
      result = sqrt(16.0);
      printf("%f\n", result);
   }

This program calls the function sqrt(), which returns a floating point number. This number is assigned to the variable result. Notice that the header file math.h is included because it contains information about sqrt() that is used by the compiler. Typically, the function is put on the right side of an equals (=) sign. The return value of a function is not required to be assigned to a variable or to be used in an expression; it could, for example, be used directly in a printf() statement. However, if the value is not used, it is lost. The following example shows both
types of functions:

   sum(int a, int b);

   main() {
      int num;
      num = func();
      printf("%d\n", num);
      num = sum(5,127);
      printf("%d\n", num);
      func();              /* return value is lost */
   }

   func() {
      return 6;
   }

   sum(int a, int b) {
      int result;
      result = a + b;
      return result;
   }

One important thing to note, however, is that when a return statement is encountered, the function returns immediately to the calling routine. Any statements after the return will not be executed.

EXERCISE:
1. What is wrong with this function?

   main() {
      double result;
      result = f1();
      printf("%d\n", result);
   }

   int f1() {
      return 60;
   }

2. Write a function that accepts an integer number between 1 and 100 and returns the square of the number.

3.6 Classic and Modern Function Declarations

The original version of C used a different method of formal parameter declaration, now called the classic form:

   type function_name(var1, var2, ... varn)
   type var1;
   type var2;
   type varn;
   {
      <statements>
   }

Notice that the declaration is divided into two parts. Only the names of the parameters are included inside the parentheses; outside of the parentheses the data types and formal parameter names are specified. The modern form, which we have been using in previous examples, is given by:

   type function_name(type var1, type var2, ... type varn)

In this type of function declaration, both the data types and formal parameter names are specified between the parentheses. The ANSI C standard allows for both types of function declarations. The purpose is to maintain compatibility with older C programs, of which there are literally billions of lines of C code. If you see the classic form in a piece of code, don't worry; your C compiler should be able to handle it. Going forward, you should use the modern form when writing code.

EXERCISE:
1. What is a function prototype and what are the benefits of using it?
2. Convert this program, which uses the classic form for the function declaration, to the modern form:

   void main(void) {
      printf("area = %d\n", area(10,15));
   }

   area(l,w)
   int l,w;
   {
      return l*w;
   }
3.7 Passing Constant Strings

Because the PICmicro®MCU has limitations on ROM access, constant strings cannot be passed to functions in the ordinary manner. The CCS C compiler handles this situation in a non-standard manner: if a constant string is passed to a function that allows only a character parameter, the function is called for every character in the string. For example:

   void lcd_putc(char c) {
      ...
   }

   lcd_putc("abcd");

is the same as:

   lcd_putc("a");
   lcd_putc("b");
   lcd_putc("c");
   lcd_putc("d");

4. C Operators

In C, the expression plays an important role. The main reason is that C defines more operators than most other languages. An expression is a combination of operators and operands. In most cases, C operators follow the rules of algebra and should look familiar. In this chapter we will discuss many different types of operators, including:

   Arithmetic
   Relational
   Logical
   Bitwise
   Increment and Decrement
   Precedence of Operators

4.1 Arithmetic Operators

The C language defines five arithmetic operators for addition, subtraction, multiplication, division and modulus:

   +   addition
   -   subtraction
   *   multiplication
   /   division
   %   modulus

The +, -, * and / operators may be used with any data type. The modulus operator, %, can be used only with integers: it gives the remainder of an integer division, so this operator has no meaning when applied to a floating point number. The - operator can be used two ways: the first is as a subtraction operator, and the second is to reverse the sign of a number. The following example illustrates the two uses:

   a = a - b;   // subtraction
   a = -a;      // reversing the sign of a

Arithmetic operators can be used with any combination of constants and/or variables. For example, the following expression is a valid C statement:

   result = count - 163;

C also gives you some shortcuts when using arithmetic operators. One of the previous examples, a = a - b, can also be written a -= b. This method can be used with the +, -, * and / operators.
a is tested to check if it is the same as b. becomes 0007: 0008: 0009: 000A: MOVF MOVWF 0F.W 0E 10.add c to b .subtract from a The importance of understanding assembler becomes apparent when dealing with problems – I have found.load b .yes – so bypass In the first instance. EXERCISE: 1.save in a . 2.W 0E. 4. 5/4.2 00D .save in a 0F.subtract from a . 5/3. 5/2.a = b + c. Write a program that calculates the number of seconds in a year.c.load b . at times. these operators return either a 0 for false or 1 for true. EXERCISE: 1. the result is 1 (true) EXERCISE: 1. Rewrite the following expression using a different relational operator. if var is greater or less than 15. the result is 0 (false) var != 15. even though C defines true as any non-zero value.!= not equal to One thing to note about relational operators is that the result of a comparison is always a 0 or 1. if var is less than or equal to 15. var > 15. Again. False is always defined as zero. When is this expression true or false? Why? count >= 35 4. count == 0 58 . An example of linking these operators together is: count>max || !(max==57) && var>=0 Another part of C that uses the relational and logical operators is the program control statements that we will cover in the next chapter. and NOT. The following examples show some expressions with relational operators. Rewrite the following expressions using any combination of relational and logical operators.3 Logical Operators The logical operators support the basic logical operations AND.. count != 0 2. OR. The result of using any of these operators is a bitwise operation of the operands. An example of all the bitwise operators is shown below. Each left shift causes all bits to shift one bit position to the left.4 Bitwise Operators C contains six special operators which perform bit-by-bit operations on numbers. write an XOR function based on the following truth table. and a zero is inserted on the right side.. p 0 0 1 1 q 0 1 0 1 XOR 0 1 1 0 4. 
4.2 Relational Operators

The relational operators in C compare two values and return a true or false result based on the comparison. The relational operators are the following:

   >    greater than
   >=   greater than or equal to
   <    less than
   <=   less than or equal to
   ==   equal to
   !=   not equal to

One thing to note about relational operators is that the result of a comparison is always a 0 or 1, even though C defines true as any non-zero value. False is always defined as zero. The following examples show some expressions with relational operators:

   var > 15    if var is less than or equal to 15, the result is 0 (false)
   var != 15   if var is greater or less than 15, the result is 1 (true)

EXERCISE:
1. When is this expression true or false? Why?

   count >= 35

2. Rewrite the following expression using a different relational operator:

   count == 0

4.3 Logical Operators

The logical operators support the basic logical operations AND, OR, and NOT. Again, these operators return either a 0 for false or 1 for true. An example of linking these operators together is:

   count>max || !(max==57) && var>=0

Another part of C that uses the relational and logical operators is the program control statements, which we will cover in the next chapter.

EXERCISE:
1. Rewrite the following expression using any combination of relational and logical operators:

   count != 0

2. Since C does not explicitly provide for an exclusive OR function, write an XOR function based on the following truth table:

   p  q  XOR
   0  0   0
   0  1   1
   1  0   1
   1  1   0

4.4 Bitwise Operators

C contains six special operators which perform bit-by-bit operations on numbers. These bitwise operators can be used only on integer and character data types. The result of using any of these operators is a bitwise operation of the operands. An example of the AND (&) and OR (|) operators is shown below:

   AND    00000101  (5)        OR    00000101  (5)
        & 00000110  (6)            | 00000110  (6)
          --------                   --------
          00000100  (4)              00000111  (7)

Each left shift causes all bits to shift one bit position to the left, and a zero is inserted on the right side. The bit that is shifted off the end of the variable is lost. The unique thing to note about left and right shifts is that a left shift is equivalent to multiplying a number by 2 and a right shift is equivalent to dividing a number by 2. Shift operations are almost always faster than the equivalent arithmetic operation due to the way a CPU works.
the value of the variable is used in the expression then incremented. When the ++ or – follows the variable. as the following code will be generated: MOVF INCF MOVWF value 0E. void main(void) { int i.load value of a into w . --a.store w in j MOVF INCF MOVWF 0F. the variable is incremented then that value is used in an expression.value of a loaded into w 0E. the makers of C have come up with a shorthand notation for increment or decrement of a number.W 0E . int j. or a = a-1. a--. 000C: 000D: 000E: 03 0F . j = %d\n”.i. j = i++. b++.i = 10. b = a. b = b-1. i = 10. j = ++i. printf(“a=%d. b = -a + ++b.i. a = 1. b=%d\n”.6 Precedence of Operators 62 .j). void main(void) { int a. a = ++a + b++. Rewrite the assignment operations in this program to increment or decrement statements. } 2. printf(“i = %d. Mixing it all together Write sum = sum = sum = sum = a+b++ a+b-a+ ++b a+ -b Operation sum = a+b sum = a+b b = b+1 b = b-1 b = b = sum sum b+1 b-1 = a+b = a+b ERERCISE: 1. What are the values of a and b after this segment of code finishes executing? a = 0. printf(“i = %d. The second printf() statement will print an 11 for both i and j. a.b).j). a = a+1. a++. 4. b. j = %d\n”. b = 0. } The first printf() statement will print an 11 for i and a 10 for j. which operation would happen first? Addition or multiplication? The C language maintains precedence for all operators. if the expression a+b*c was encountered in your program.Precedence refers to the order in which operators are processed by the C compiler. they will be covered later. For instance. Parenthesis can be used to set the specific order in which operations are performed. but don’t worry. A couple of examples of using parenthesis to clarity or change the precedence of a statement are: 10-2*5 = 0 (10-2)*5 = 40 count*sum+88/val-19%count (count*sum) + (88/val) – (19%count) EXERCISE: 1. 63 . b += -a*2+3*4. Priority Operator 1 () ++ -2 sizeof & * + .b=0. 
The second line is composed entirely of unary operators such as increment and decrement.~ ! ++ . a = 6 8+3b++. The following shows the precedence from highest to lowest.. You will also learn how relational and logical operators are used with these control statements. We will also cover how to execute loops. Again. This tells the compiler that if the expression is true. The if statement evaluates the expression which was a result of true or false. the program continues without executing the statement. If the expression is true. If the expression is false. The block of code associated with the if statement is executed based upon the outcome of a condition. The general format is: if (expression) { . A simple example of an if is: if(num>0) printf(“The number is positive\n”). printf(“Count down\n”)..” after the expression The expression can be any valid C expression. The if statement can also be used to control the execution of blocks of code. This example shows how relational operators are used with program control statements. } The braces { and } are used to enclose the block of code. NOTE: no “. execute the code between the barces. any not-zero value is true and any zero value is false.5. the statement is executed. The simplest format is: if (expression) statement.1 if Statement The if statement is a conditional statement. print parameters to use } or Other operator comparisons used in the if statement are: x == y x != y x > y x equals y x is not equal to y x great than y 65 .. An example of the if and a block of code is: if (count <0 ) { count =0. } if(TestMode ==1) { . statement. . printf(“done”). Unlike an if statement the ? operator returns a value. If this statement is true the printf(“%d “. Each time after the printf(“%d “. The program works like this: First the loop counter variable. such as BASIC or Pascal. the loop counter variable is incremented.int i. i ? j=0 : j=1. The increment section of the for loop normally increments the loop counter variable. 
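The precedence examples above (10-2*5 versus (10-2)*5) can be verified with two hypothetical one-line functions, sketched in standard C:

```c
/* Multiplication binds tighter than subtraction, so 2*5 happens first. */
int no_parens(void)   { return 10 - 2 * 5; }
/* Parentheses force the subtraction to happen first. */
int with_parens(void) { return (10 - 2) * 5; }
```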
At this point. If you have a statement or set of statements that needs to be repeated. is set to zero. Here is an example of a for loop: void main(void) { int i. statement is executed.4 for Loop One of the three loop statements that C provides is the for loop.i). This section of the for loop is executed only once. } This program will print the numbers 0 – 9 on the screen. is executed. i = j.i). 5. Normally this section tests the loop counter variable for a true or false condition. 69 . This whole process continues until the expression i<10 becomes false. If the conditional_test is false the loop exits and the program proceeds. Since i is 1 or non-zero. statement is executed. or j=i?0:1. the expression j=0 will be evaluated. for(i=0. next the expression i<10 is evaluated. increment ) The initialization section is used to give an initial value to the loop counter variable. i will be incremented unless it is 20 or higher. i. If the conditional_test is true the loop is executed. i++) printf(“%d “. then it is assigned 10. i<10. The conditional_test is evaluated prior to each execution of the loop. The basic format of a for loop is similar to that of other languages. For example: i = (i<20) ? i+1 : 10. The most common form of a for loop is: for( initialization .i). a for loop easily implements this. Note that this counter variable must be declared before the for loop can use it. conditional_test . the for loop is exited and the printf(“done”).j. increment a . The value of expression is checked 70 . exit loop . Here is the general format: while (expression) statement.loop again EXERCISE: 1.i++) for( . num=num-1) for (count=0.subtract from h . .clear h . } The expression is any valid C expression. Therefore. the while loop repeats a statement or block of code. num.h!=10. Write a program that displays all the factors of a number. the name while.if i=10.F 000E: GOTO 008 .a. or while (expression) { statement.W 000A: BTFSC 03. What do the following for() statements do? 
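The ? operator example above, i = (i<20) ? i+1 : 10;, can be wrapped in a small test function. A sketch in standard C with a hypothetical name:

```c
/* Returns i+1 while i is below 20, otherwise resets the value to 10. */
int clamp_step(int i) { return (i < 20) ? i + 1 : 10; }
```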
for(i=1. count<10 && error==false. num>0. if the test is false to start off with.5 while Loop Another loop in C is the while loop. 5. While an expression is true.h++) 0007: CLRF 0E 0008: MOVLW 0A 0009: SUBWF 0E. count++) Convert an example in to assembler to see what happens: int h. Hence. 000C: INCF 0F. Here are some variations on the for loop: for (num=100. ) for(num=1. the for loop will never be executed. the conditional test is performed at the start of each iteration of the loop. count+=5) for (count=1. You are not restricted just to incrementing the counter variable.As previous stated. count<50.2 000B: GOTO 00F a++.F 000D: INCF 0E.increment h . for (h=0.load 10 .and test for zero . num++) 2. . i++). RCV=pin_c7) void main(void) { char ch. printf(“Got a q!\n”). Write a program that gets characters from the keyboard using the statement ch=getch(). An example of a 71 . Once a q is received. while(1) printf(“%d “. EXERCISE: 1. while(i<10) { printf(“%d “. exit the program. } You will notice that the first statement gets a character from the keyboard. } b.prior to each iteration of the statement or block of code. This means that if expression is false the statement or block of code does not get executed. 2. When a carriage return is encountered. xmit-pin_c6.H> #use RS232 (Baud=9600. and prints them to the screen.6 do-while Loop The final loop in C is the do loop. ch=getch(). the printf is executed and the program ends. i++. Here is an example of a while loop: #include <16C74. As long as the value of ch is not a q.i). Then the expression is evaluated. the program will continue to get another character from the keyboard. What do the following while statements do? a. while(ch!=’q’) ch=getch(). Here we combine the do and while as such: do { statements } while(expression) In this case the statements are always executed before expression is evaluated. 5. The expression may be any valid C expression. printf(“Give me a q\n”). ). When a ‘Q’ is entered. 
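Because the while test is performed before each iteration, a loop whose condition is false at entry never executes at all. This pair of hypothetical functions, in standard C, shows both cases:

```c
/* Condition true at entry: the loop runs until h reaches 10. */
int count_to_ten(void) { int h = 0; while (h != 10) h++; return h; }

/* Condition false at entry: the body never executes. */
int never_runs(void) { int i = 10, n = 0; while (i < 10) { n++; i++; } return n; }
```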
do-while loop is shown:

#include <16C74.H>
#use RS232 (Baud=9600, xmit=pin_c6, RCV=pin_c7)
void main(void)
{
   char ch;
   do {
      ch = getch();
   } while(ch != 'q');
   printf("Got a q!\n");
}

This program is equivalent to the example we gave in Section 5.5.

EXERCISE:
1. Rewrite Exercise 2 in Section 5.5 using a do-while loop.
2. Rewrite both a and b of Exercise 1 in Section 5.5 using a do-while loop.

5.7 Nesting Program Control Statements

When the body of a loop contains another loop, the second loop is said to be nested inside the first loop. Any of C's loops or other control statements can be nested inside each other. The ANSI C standard specifies that compilers must have at least 15 levels of nesting. An example of a nested for loop is shown here:

i = 0;
while(i < 10) {
   for(j=0; j<10; j++)
      printf("%d ", i*10+j);
   i++;
}

This routine will print the numbers 00 - 99 on the screen.

EXERCISE:
1. Write a program that gets a character from the keyboard (ch=getch();). Each time a character is read, use the ASCII value to print an equal number of periods to the screen. For example, if the letter 'D' is entered (ASCII value of 68), your program would print 68 periods to the screen. When a 'Q' is entered, the program ends.

5.8 break Statement
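In contrast to while, a do-while body always executes at least once, because the test comes after the body. A sketch in standard C (hypothetical name):

```c
/* The body runs once even though i < 10 is already false at entry. */
int runs_once(void) { int i = 10, n = 0; do { n++; i++; } while (i < 10); return n; }
```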
kbhit () requires the header file conio. } } This program will print the number 0 – 15 on the screen. if(getch()==’q’) break. You can use the function kbhit() to detect when a key is pressed. you want to skip to the end of the loop without exiting the loop. For example: void main(void) { int i. Write three programs. kbhit() returns 1 when a key is pressed and a 0 otherwise. #include <16C74.h 5. if(i==15) break.i<100. the program jumps to the next statement after the loop. for(i=0. if(ch==’x’) 74 .continue. break. The general form for a switch statement is: switch (variable) { case constant1: statement(s). A continue will cause the program to go directly to the test condition for while and do-while loops. A switch statement is equivalent to multiple if-else statements. When a match is found. a continue will cause the increment part of the loop to be executed and then the conditional test is evaluated. Each time the continue is reached.i). printf(“%d “. case constant2: statement(s). but becomes very cumbersome when many alternatives exist. If no match is found. An example of a switch is: main() { char ch.) { ch = getch(). 5.. The default is optional. the body of statements associated with that constant is executed until a break is encountered. break. default: statement(s).10 switch Statement The if statement is good for selecting between a couple of alternatives. the program skips the printf() and evaluates the expression i <100 after increasing i. Again C comes through by providing you with a switch statement. break. } } This loop will never execute the printf() statement. } The variable is successively tested against a list of integer or character constants. the statements associated with default are executed. case constantN: statement(s). for(. case ‘4’: printf(“Thursday\n”).return 0. Values within the range will be converted into the day of the week. The DIP switch and the characters per line settings read. break. case ‘5’: printf(“Friday\n”). 
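The difference between break and continue described above can be demonstrated with two hypothetical functions, sketched in standard C:

```c
/* break leaves the loop immediately; the last value recorded before i==15 is 14. */
int break_at_15(void) {
    int i, last = -1;
    for (i = 0; i < 100; i++) {
        if (i == 15) break;
        last = i;
    }
    return last;
}

/* continue skips the rest of the body and goes to the loop's increment/test;
   here only the even values of i are counted. */
int count_evens(void) {
    int i, n = 0;
    for (i = 0; i < 10; i++) {
        if (i % 2 != 0) continue;
        n++;
    }
    return n;
}
```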
case ‘6’: printf(“Saturday\n”). case ‘2’: printf(“Tuesday\n”). break. break. default: cp1 = 40. break. default: printf(“Invalid entry\n”). switch(ch) { case ‘0’: printf(“Sunday\n”). case 0x30: cp1 = 28. 75 . then separated from the other bits and used to return the appropriate value to the calling routine. case 0x20: cp1 = 20. the message Invalid entry will be printed. break. //mask unwanted bits switch(cp1) //now act on value decoded { case 0x00: cp1 = 8. break. byte cp1_sw_get() //characters per line { byte cp1. If the number is outside of this range. Another example used to set the number of characters per line on a LCD display is as follows. break. cp1=portd & 0b01110000. break. case ‘1’: printf(“Monday\n”). case 0x10: cp1 = 16. break. break. case ‘3’: printf(“Wednesday\n”). break. } } } This example will read a number between 1 and 7. break. printf(“M = Multiplication\n”). printf(“D = Division\n”). . break. switch (ch) { case ‘S’: b=-b. printf(“Enter Choice:\n”).b=3. case 1: printf(“b is true”). Here is an example of nested switches. An ANSI compiler must provide at least 15 levels of nesting for switch statements. break.a+b). } break. printf(“A = Addition\n”).} return(cp1). 76 . ch=getch(). case 2: . char ch. } //send back value to calling routine The ANSI Standard states that a C compiler must support at least 257 case statements. switch (a) { case 1: switch (b) { case 0: printf(“b is false”). void main(void) { int a=6. An example is provided to illustrate this. The break statement within the switch statement is also optional. printf(“S = Subtraction\n”). No two case statements in the same switches can have the same values. Also switches can be nested. case ‘A’: printf(“\t\t%d”. as long as the inner and outer switches do not have any conflicts with values. This means that two case statements can share the same portion of code. break. nickel. The null statement satisfies the syntax in those cases. } } EXERCISE: 1.11 null Statement (. 
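The characters-per-line decode above can be re-created as a pure function that takes the port value as a parameter, which makes the switch easy to test away from the hardware. This standalone version is a sketch, not the book's exact routine:

```c
/* Decode DIP-switch bits 4-6 (mask 0b01110000) into a characters-per-line setting. */
unsigned char chars_per_line(unsigned char port_value) {
    unsigned char cp1 = port_value & 0x70;   /* mask unwanted bits */
    switch (cp1) {
        case 0x00: return 8;
        case 0x10: return 16;
        case 0x20: return 20;
        case 0x30: return 28;
        default:   return 40;
    }
}
```

Passing the raw port reading through the mask first means stray low bits (e.g. 0x25) still decode to the intended setting.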
Use a switch statement to print out the value of a coin.). dime. break. The statement body is a null. 3.a*b). In this example. Nothing happens when the null statement is executed – unlike the NOP in assembler. and dollar. the loop expression of the for line[i++]=0 initializes the first 10 elements of line to 0. if and while require that an executable statement appears as the statement body. . Statements such as do. case ‘M’: printf(“\t\t%d”. The phrases to describe coins are: penny.break. switch(f) { case 10. case ‘D’: printf(“\t\t%d”.a/b). The value of the coin is held in the variable coin. for (i=0.i++) . What is wrong with this segment of code? float f. quarter. for.12 return Statement 77 . What are the advantages of using a switch statement over many if-else statements? 5.i<10. default: printf(“\t\tSay what?”). which introduces a one cycle delay. since no additional commands are required. It may appear wherever a statement is expected. 2.05: . 5.) The null statement is a statement containing only a semicolon (. The return statement terminates the execution of a function and returns control to the call routine. GetNothing(). } 78 . A value can be returned to the calling function if required but if one is omitted. } main() { int x. If a returned value is not required. control is still passed back to the calling function after execution of the last line of code. the returned value is then undefined. x = GetValue(). return. { c++. GetValue(c) int c. return c. } void GetNothing(c) int c. If no return is included in the called function. { c++. declare the function to have a void return type. An array is simply a list of related variables of the same data type. Topics that will be discussed: Arrays Strings One-dimensional Arrays Multidimensional Arrays Initialization 79 . and is also known as the most common one-dimensional array.Array and Strings In this chapter we will discuss arrays and strings. A string is defined as a null terminated character array. 
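Both ideas above, the empty loop body satisfied by a null statement and the return statement handing a value back to the caller, can be exercised with small standard-C sketches (names hypothetical):

```c
/* The for line's third expression does all the work; the body is a null statement. */
int last_cell(void) {
    int line[10];
    int i;
    for (i = 0; i < 10; line[i++] = 0)
        ;                                  /* null statement */
    return line[9];                        /* every element was zeroed */
}

/* return terminates the function and passes a value back to the caller. */
int get_value(int c) { c++; return c; }
```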
if we want an array of 50 elements we would use this statement.. For instance. The following example shows how to do this. An individual variable in the array is called an array element. Where type is a valid C data type..1 One-Dimensional Arrays An array is a list of variables that are all of the same type and can be referenced through the same name. say I want to index the 25th element of the array height and assign a value of 60. int num[10]. element 1 0 2 1 3 2 4 3 5 4 80 .now test if < 10 . height[24] = 60.6. int i. When an array is declared. C defines the first element to be at an index of 0. int height[50].if so then stop routine .i++) 0007: CLRF 18 0008: MOVLW 0A 0009: SUBWF 18.i<10. for(i=0.F 0012: GOTO 008 array i will look like this in memory. If the array has 50 elements. The first element is at the lowest address. This is a simple way to handle groups of related data.clear i . C store one-dimensional arrays in contiguous memory locations. var_name is the name of the array.W 0010: MOVWF 00 0011: INCF 18.load start of num area 6 5 7 6 8 7 9 8 10 9 .0 000B: GOTO 013 num[i] = i. the last element is at an index of 49. The general form for declaring one-dimensional arrays is: type var_name [size].W 000A: BTFSC 03. 000C: MOVLW 0E 000D: ADDWF 18.W 000E: MOVWF 04 000F: MOVF 18. and size specifies how many elements are in the array. If the following segment of code is executed. Using the above example. you may read or write to an element not declared in the array. Therefore. The program will report if any of these characters match. .h> void main(void) { int num[10]. EXERCISE: 1. for(i=0. this will generally have disastrous results. Here is another example program. Write a program that first reads 10 characters from the keyboard using getch().b[10]. If you want to copy the contents of one array into another. .i++) printf(“%d “.i<20. for(i=0.i++) count[i]=getch(). 2. for(i=0. It simply assigns the square of the index to the array element. 
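The square-of-the-index loop above can be turned into a checkable function; this is a standard-C sketch with a hypothetical name:

```c
/* Fill num[] with the square of each index, then total the elements. */
int sum_of_squares(void) {
    int num[10];
    int i, total = 0;
    for (i = 0; i < 10; i++) num[i] = i * i;
    for (i = 0; i < 10; i++) total += num[i];
    return total;   /* 0+1+4+9+16+25+36+49+64+81 */
}
```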
Any array element can be used anywhere you would use a variable or constant. Here is another example program. It simply assigns the square of the index to the array element, then prints out all elements.

#include <16c74.h>
void main(void)
{
   int num[10];
   int i;
   for(i=0;i<10;i++) num[i] = i * i;
   for(i=0;i<10;i++) printf("%d ",num[i]);
}

What happens if you have an array with ten elements and you accidentally write to the eleventh element? C has no bounds checking for array indexes. Therefore, you may read or write to an element not declared in the array. This will generally have disastrous results: often it will cause the program to crash and sometimes even the computer to crash as well.

C does not allow you to assign the value of one array to another simply by using an assignment like:

int a[10],b[10];
   .
   .
a=b;

The above example is incorrect. If you want to copy the contents of one array into another, you must copy each individual element from the first array into the second array. The following example shows how to copy the array a[] into b[], assuming that each array has 20 elements.

int i;

for(i=0;i<20;i++) b[i] = a[i];

EXERCISE:
1. Write a program that first reads 10 characters from the keyboard using getch(). The program will report if any of these characters match.
2. What is wrong with the following segment of code?

int i;
char count[10];
for(i=0;i<100;i++) count[i]=getch();
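The element-by-element copy described above can be verified with a small standard-C sketch (hypothetical name); an assignment like a = b between the two arrays would not even compile:

```c
/* Arrays must be copied one element at a time. */
int copy_last(void) {
    int a[20], b[20], i;
    for (i = 0; i < 20; i++) a[i] = i;
    for (i = 0; i < 20; i++) b[i] = a[i];
    return b[19];   /* the final element made it across */
}
```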
All string constants are automatically null terminated by the C compiler.str). If every string must be terminated by a null. EXERCISE: 1.i++) printf(“%c”. A string is defined as a 0. printf(“\n%s”.6. } 2.str[i]. Since we are using strings. You must make sure that the length of str is greater than or equal to the number of characters read from the keyboard and the null (null = \0). Let’s illustrate how to use the function gets() with an example. i<5. For simplicity. You can create two or more dimensions. to create an integer array called number with 5x5 elements. Due to the PICmicro®MCU’s memory map. then displays the contents of the array in row/column format. void main(void) { int array[5][4]. For example. printf(“\n”). . arrays of 100 or 10x10 are not possible.i<5. Two-dimensional arrays are used just like one-dimensional arrays.j++) printf(“%d “. EXERCISE: 1.uses 25 ram location Additional dimensions can be added simply by attaching another set of brackets. you would use: int number[5][5]. The following figure shows a graphical representation of a 5x5 array. from left to right. int i.3 Multidimensional Arrays C is not limited to one-dimensional arrays.j<4. Therefore.j. However. for(i=0. For example. 83 . we will discuss only two-dimensional arrays. for(i=0. Write a program that declares a 3x3x3 array and loads it with the numbers 1 to 27.6.j<4. two of 50 arrays would fit.i++) { for(j=0. the following program loads a 5x4 array with the product of the indices. column format. Print the array to the screen. A twodimensional array is best represented by a row.i++) for(j=0. twodimensional arrays are accessed a rot at a time.j++) array[i][3]=i*j. } } The output of this program should be look like this: 0 0 0 0 0 0 1 2 3 4 0 2 4 6 8 0 3 6 9 12 As you can see.array[i][j]). when using the multidimensional arrays the number of variables needed to access each individual element increases. 5. 
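The extra element for the null terminator can be seen in a short standard-C sketch (hypothetical name):

```c
#include <string.h>

/* "John" occupies 4 characters plus the terminating null, so 5 elements are needed. */
int name_length(void) {
    char name[5] = "John";
    return (int)strlen(name);   /* strlen does not count the null */
}
```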
It is probably easier to simulate a row/column format when using twodimensional arrays. The element i[0] will have a value of 1 and the element i[4] will have a value of 5. First.6. int num[3][3]={ 1.5}.4 Initializing Arrays So far you have seen only individual array elements having values assigned. you may make a list of each individual character as such: char str[3] (‘a’. 15. Multidimensional arrays are initialized in the same way as one-dimensional arrays. print out the sum of each row and column. 84 . The following example shows a 5-element integer array initialization. 6.2.4.3. int i[5] = {1. Using the program from Exercise 1.6. You may have noticed that no curly braces enclosed the string.9}. A string (character array) can be initialized in two ways. The following example shows a 3x3 array initialization. 7. The first constant will be placed in the first element.5.3. 4. ‘b’.0. EXERCISE: 1. The compiler automatically appends a null at the end of “John”. ‘c’). C provides a method in which you can assign an initial value to an array just like you would for a variable. The second method is to use a quoted string.7.8. the second constant in the second element and so on.2. Is this declaration correct? int count[3] = 10. They are not used in this type of initialization because strings in C must end with a null. The general form for one-dimensional arrays is shown here: type array_name[size] = {value_list}. as shown here char name [5] = “John”.2. The value_list is a comma separated list of constants that are compatible with the type of array. specify only the first index.names[4]). This statement specifies that the array name contains 10 names. To access a string from this table. printf (“%s”.2. specify animals[2][1]. to print the fifth name from this array. to access the second string in the third list. For instance. The same follows for arrays with greater than two dimensions. 
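Array initialization as described above fills elements left to right from the value list; a standard-C sketch (hypothetical name) makes the resulting element values checkable:

```c
/* i[0] is 1, i[4] is 5; the 3x3 list fills row by row, so num[2][2] is 9. */
int init_check(void) {
    int i[5] = {1, 2, 3, 4, 5};
    int num[3][3] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    return i[0] + i[4] + num[2][2];   /* 1 + 5 + 9 */
}
```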
Allow the user to enter a single digit number and then your program will display the respective word. &num). To access a specific string. Each row should have the number. Write a program that create a string table containing the words for the numbers 0 through 9. 6. The way in which you use the array is somewhat different from other arrays. Then print out the number and its square and cube.5 Arrays of Strings Arrays of strings are very common in C. 6.h> functions char string[10]. up to 40 characters long (including null). For example. They can be declared and initialized like any other array. this allows a constant string to be inputted into RAM. use the following statement. To obtain an index into the table. //the library for string //define string array 85 . . you would use the first two dimensions. For example.6 string functions Strings can be manipulated in a number of ways within a program. Write a program that has a lookup table for the square and the cube of a number. #include <string. Ask the user for a number using the statement scanf(“%d”. For instance. One example is copying from a source to a destination via the strcpy command. what does the following declaration define? char name[10][40]. EXERCISE: 1.. subtract ‘0’ from the character entered. if the array animals was declared as such: char animals[5][4][80]. the square of the number and the cube of the number. Create a 9x3 array to hold the information for numbers 1-9.. printf(s1). //will print 6 //will print abcdef if(strcmp(s1.strlen(s1))..s2)!=0) printf(“no match”). “Hi There”). printf(“%u”. s2[10]. 86 .//setup characters into string Note that pointers to ROM are not valid in the PICmicro®MCU so you can not pass a constant string to one of these functions.s2). strcat(s1.strcpy (string.”def”). More examples: char s1[10]. strcpy(s2. strcpy(s1. strlen(“hi”) is not valid.”abc”). 
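The strcpy/strcat/strcmp/strlen behavior shown above can be bundled into two testable sketches in standard C (hypothetical names). Copying the literals into RAM arrays first also mirrors the book's warning that, on the PICmicro, constant strings in ROM cannot be passed to these functions:

```c
#include <string.h>

/* strcpy then strcat builds "abcdef"; strlen reports its length. */
int concat_length(void) {
    char s1[10], s2[10];
    strcpy(s1, "abc");
    strcpy(s2, "def");
    strcat(s1, s2);
    return (int)strlen(s1);
}

/* strcmp returns 0 when the strings match. */
int strings_match(void) {
    char s1[4], s2[4];
    strcpy(s1, "abc");
    strcpy(s2, "abc");
    return strcmp(s1, s2) == 0;
}
```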
Some of the topics we will cover in this chapter are:

Pointer Basics
Pointers and Arrays
Passing Pointers to Functions

Pointers

This chapter covers one of the most important and most troublesome features of C: the pointer. A pointer is basically the address of an object.

7.1 Introduction to Pointers

A pointer is a memory location (variable) that holds the address of another memory location. For example, if a pointer variable a contains the address of variable b, then a points to b. If b is a variable at location 100 in memory, then a would contain the value 100. A graphical example is shown here.

The general form to declare a pointer variable is:

type *var_name;

You may have noticed that var_name is preceded by an asterisk *. This tells the compiler that var_name is a pointer variable. For example, the following statement creates a pointer to an integer:

int *ptr;

The two special operators that are associated with pointers are the * and the &. The address of a variable can be accessed by preceding the variable with the & operator. The * operator returns the value stored at the address pointed to by the variable. This process of referencing a value through a pointer is called indirection.

#include <16c74.h>
void main(void)
{
   int *a,b;   //more than 1 byte may be assigned to a
   a=&b;
   b=6;
   printf("%d",*a);
}

The first statement declares two variables: a, which is a pointer to an integer, and b, which is an integer. The line a=&b; can be read as "assign a the address of b"; the address of b (&b) is assigned to the pointer variable a. Then b is assigned the value 6. Finally, the value of b is displayed to the screen by using the * operator with the pointer variable a; this line prints the value at the address pointed to by a.

NOTE: By default, the compiler uses one byte for pointers. For parts with larger memory a two-byte (16-bit) pointer may need to be used. To select 16-bit pointers, use the following directive:

#device *=16

Be aware that more ROM code will be generated, since 16-bit arithmetic is less efficient.

The type of a pointer is one of the valid C data types.
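The effect of writing through a pointer can be checked with a short standard-C sketch (hypothetical name):

```c
/* Writing through the pointer changes the variable it points to. */
int write_through(void) {
    int b = 0;
    int *a = &b;   /* a holds the address of b */
    *a = 6;        /* indirection: store 6 where a points */
    return b;
}
```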
It specifies the type of variables to which var_name can point.7. For example. The next statement assigns the value of 6 to b. This process of referencing a value through a pointer is called indirection. The two special operators that are associated with pointers are the * and the &. } In this program. k. executes. For instance.b. it points to the next memory location. When a pointer variable is incremented. there are a few rules and exceptions that you must understand. p would contain the value 104 after the increment assuming that floating point numbers are four bytes long.b). Initially. p++. a = &b. Only integer quantities may be added or subtracted from pointer variables. j.2 Restrictions to Pointers In general. As ptr contains the value 102. The line *a=6. after the statement. If p had been a float pointer. *ptr is 5 (the content of address 102) It is also possible to assign a value to a memory location by using a pointer.h> void main(void) { int *a. Write a program with a for loop that counts from 0 to 9 and displays the numbers on the screen. -. ++. 89 . --. The only pointer arithmetic that appears as expected is for the char type. pointers may be treated like other variables. then we assign a value to b by using a. 7.Address: Variable: Content: 100 i 3 102 j 5 104 k -1 106 ptr 102 int i. p will have a value of 102 assuming that integers are two bytes long. let’s restructure the previous program in the following manner. #include <16c74. EXERCISE: 1. i is 3 &i is 100 (the location of i). because characters are only one byte long. int *ptr. *a=6. Obviously. In addition to the * and & operators. However. Print the numbers using a pointer. there are only four other operators that can be applied to pointer variables: +. we first assign the address of variable b to a. If we assume that the pointer variable p contains the address 100. . printf(“%d”. the use of a pointer in the previous two examples is not necessary but it illustrates the usage of pointers. 
can be read as assign the value 6 to the memory location pointed to by a. i. 90 . Declare the following variables and assign the address of the variable to the pointer variable. Pointers may also be used in relational operations. p = p+200. It is possible to increment or decrement either the pointer itself or the object to which it points.ch. Pointers cannot be created to ROM. use the following statement: (*p)++. . to or from a pointer.e. This is valid without the const puts the data into ROM. What do you think the following statement will do if the value of ptr is 1 before the statement is executed? *p++. However. then increments p. ptr=&name[0]. the following statement: int *p. they only make sense if the pointers relate to each other. they both point to the same object. This statement gets the value pointed to by p. Cause p to point to the 200th memory location past the one to which p was previously pointing. Print the value of each pointer variable using the %p. the following is illegal: char const name[5] = “JOHN”. Then increment each pointer and print out the value of the pointer variable again. The parenthesis cause the value that is pointed to by p to be incremented. To increment the object that is pointed to by a pointer. For example.You can add or subtract any integer value you wish. int *ip. i. For example. This is due to the precedence of * versus ++. You must be careful when incrementing or decrementing the object pointed to by a pointer. . EXERCISE: 1. What are the sizes of each of the data types on your machine? char *cp. you can assign that value to another pointer. Since an array name without an index is a pointer.i++) printf(“%d”. 91 .i<5. p = &i.p[i]). is a pointer to the first element in the string.4. double *dp. What is actually passed to the function.float *fp. they cannot be created for use with constant arrays or structures. 2. for(i=0.5}.i. only a pointer to the first element is passed. 
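The importance of the parentheses in (*p)++ discussed above can be demonstrated with a standard-C sketch (hypothetical name):

```c
/* (*p)++ increments the object pointed to; *p++ would advance the pointer instead. */
int object_inc(void) {
    int x = 1;
    int *p = &x;
    (*p)++;
    return x;
}
```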
This would allow you to access the array using pointer arithmetic. The following program is valid.4. pointers and arrays are closely related and are sometimes interchangeable.i<5.3. Important note: when an array is passed to a function.*(p+i)).3. p=a. You may be surprised that you can also index a pointer as if it were an array. For instance. void main(void) { int *p. int a[5]={1.i++) printf(“%d”. } This is a perfectly valid C program. void main(void) { int *p. p = p/2.2. If you use an array name without an index.3 Pointers and Arrays In C. In the last chapter. we used the function gets(). in which we passed only the name of the string. What is wrong with this fragment? int *p.5}.i.d. 7. p=a. where i is the index of the array.2. for(i=0. you are actually using a pointer to the beginning of the array. int a[5]={1. You will notice that in the printf() statement we use *(p+i).i. It is this relationship between the two that makes the power of C even more apparent.f. load 3 . this statement would be invalid for the previous program. . 000D: MOVLW 000E: MOVWF int *p.add to array start position .save in location pointed to *(array+1) = 4. Is this segment of code correct? int count[10]. 0009: MOVF 000A: MOVWF 000B: MOVLW 000C: MOVWF array[1]=4. 0007: MOVLW 0008: MOVWF *p=3.load in 3 .into first location of array 0F 0E 01 0E. p=array. p++.load start of array . count = count+2. . 0007: MOVLW 0008: MOVWF p[1]=3.. it is invalid to increment the pointer.load array position . Since pointers to arrays point only to the first element or base of the string.W 04 03 00 10 04 04 00 .load 4 .load into array pointer .} One thing to remember is that a pointer should only be indexed when it points to an array.pointer .pointer .load array position .W 04 03 00 04 10 . 92 ..load 4 . The following examples show the problem – the second version does not mix pointers and arrays: int *p.and save at pointed location .load start of array . int array[8].save in pointed to location EXERCISE: 1. 
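The interchangeability of pointer arithmetic and array indexing described above can be condensed into one checkable sketch in standard C (hypothetical name):

```c
/* p = a points at the first element; *(p+i) and p[i] both reach element i. */
int two_notations(void) {
    int a[5] = {1, 2, 3, 4, 5};
    int *p = a;
    return *(p + 2) + p[3];   /* 3 + 4 */
}
```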
int array[8]. Mixing pointers and arrays will produce unpredictable results. p=array. . Therefore.point to it .point at indirect register . Another example of passing a pointer to a function is: void IncBy10(int *n) { *n += 10.2. } void main(void) { int i=0. the “M”.4 Passing Pointers to Functions In Section 3. Then p is incremented to point to the next character in the string.20. 7.25).h> void puts(char *p). #include <16c74. the character that is pointed to by p is printed. void main(void) { puts(“Microchip is great!”). 93 .10. the pointer p points to the first character in the string.*p). At this point any changes made to the variable using the pointer actually change the value of the variable from the calling routine. } void puts(char *p) { while(*p) { printf(“%c”. What value does this segment of code display? int value[5]=(5. Pointers may be passed to functions just like any other variables. int *p. “call by value” and “call by reference”. or in other words a pointer is passed to the function. The statement while(*p) is checking for the null at the end of the string. } printf(“\n”). The second method passes the address to the function.*p+3). we talked about the two ways that arguments can be passed to functions. Each time through the while loop. } In this example. p = value. The following example shows how to pass a string to a function using pointers.15. p++. printf(“%d”. IncBy10(i). Example: void Incby10(int & n) { n += 10. Write a program that passes a fl pointer to a function. After the function returns to main. Inside the function. } Both of the above examples show how to return a value from a function via the parameter list. Incby10(i). 2. 94 . Inside the function. } void main(void) { int i=0. Write a program that passes a float value to a function. After the function returns to main (). } The above example may be rewritten to enhance readability using a special kind of pointer parameter called a reference parameter. EXERCISE: 1. 
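The call-by-reference behavior of IncBy10 above can be verified with a standard-C sketch; the wrapper name is hypothetical:

```c
/* The function receives the address of n, so the caller's variable changes. */
void inc_by_10(int *n) { *n += 10; }

int caller(void) {
    int i = 0;
    inc_by_10(&i);   /* pass the address, not the value */
    return i;
}
```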
8. Structures and Unions

Structures and Unions represent two of C's most important user defined types. Structures are a group of related variables that can have different data types. Unions are a group of variables that share the same memory space. In this chapter we will cover:

Structure Basics
Pointers to Structures
Nested Structures
Union Basics
Pointers to Unions

8.1 Introduction to Structures

A structure is a group of related items that can be accessed through a common name. Each of the items within a structure has its own data type, and these can be different from each other. In general, the information stored in a structure is logically related; for example, you might use a structure to hold the name, address and telephone number of all your customers. C defines structures in the following way:

struct tag-name {
   type element1;
   type element2;
   ...
   type elementn;
} variable-list;

The keyword struct tells the compiler that a structure is about to be defined. The tag-name is the name of the structure; it is not the name of a variable, only the name of this type of structure. Within the structure each type is one of the valid data types. The variable-list, which is optional, declares some variables that have a data type of struct tag-name. Each of the items in the structure is commonly referred to as fields or members; we will refer to them as members.

The following example is for a card catalog in a library:

struct catalog {
   char author[40];
   char title[40];
   char pub[40];
   unsigned int date;
   unsigned char rev;
} card;

In this example, the name of the structure is catalog, and the variable card is declared as a structure of type catalog. To access any member of a structure, you must specify both the name of the variable and the name of the member; the . operator is used to access members of a structure, and the two names are separated by the period. For example, to access the revision member of the structure catalog, you would type:

card.rev = 'a';

where card is the variable name and rev is the member. To print the author member of the structure catalog, you would use:

printf("Author is %s\n", card.author);

If you want to print the name of the publisher, you would use printf("%s", card.pub); What if you wanted to access a specific element in the title, like the 3rd element in the string? Use card.title[2]. The first element of title is 0, the second is 1 and, finally, the third is 2.

Once you have defined a structure, you can create more structure variables anywhere in the program using:

struct tag-name var-list;

For instance, if the structure catalog was defined earlier in the program, you can define two more variables like this:

struct catalog book, list;

C allows you to declare arrays of structures just like any other data type:

struct catalog big[50];

This example declares a 50-element array of the structure catalog. If you wanted to access an individual structure within the array, you would index the structure variable (i.e. big[10]). How would you access the title member of the 10th element of the structure array big? big[9].title

Now that we know how to define, declare and access a structure, what does a structure catalog look like in memory?

author   40 bytes
title    40 bytes
pub      40 bytes
date      2 bytes
rev       1 byte

If you wanted to get the address of the date member of the card structure, you would use &card.date. You can also assign the values of one structure to another simply by using an assignment. Structures may also be passed to functions, and a function can return a structure just like any other data type. The number of elements in a structure does not affect the way it is passed to a function.
The following fragment is perfectly valid:

struct temp {
   int a;
   float b;
   char ch;
} var1, var2;

var1.a = 37;
var1.b = 53.65;
var1.ch = 'N';
var2 = var1;

After this fragment of code executes, the structure variable var2 will have the same contents as var1. A structure may also be initialized where it is declared. This is an example of initializing a structure - here an array of two structures:

struct example {
   char who[50];
   int a;
   char c;
} var1[2] = {"Rodger", 27, 'Y', "Jack", 30, 'N'};

One important thing to note: when you pass a structure to a function, the entire structure is passed by the "call by value" method. Therefore, any modification of the structure in the function will not affect the value of the structure in the calling routine.

An example of using a structure on the PICmicro® to set up an LCD interface would be:

struct cont_pins {
   boolean en1;    //enable for all displays
   boolean en2;    //enable for 40x4 line displays
   boolean rs;     //register select
   int data : 4;   //4-bit data field
} cont;
#byte cont = 8     //control on port d

NOTE: The :4 notation for data indicates 4 bits are to be allocated for that item. In this case D0 will be en1, D1 will be en2, D2 will be rs, and D3-6 will be data. This sets up the structure cont_pins, which is then handled within the program:

void LcdSendNibble(byte n)
{
   cont.data = n;     //present data
   delay_cycles(1);   //delay
   cont.en1 = 1;      //set en1 line high
   delay_us(2);       //delay
   cont.en1 = 0;      //set en1 line low
}

8.2 Pointers to Structures

Sometimes it is very useful to access a structure through a pointer. Pointers to structures are declared in the same way that pointers to other data types are declared. For example, the following section of code declares a structure variable p and a structure pointer variable q with the structure type temp:

struct temp {
   int i;
   char ch;
} p, *q;

Using this definition of the temp structure, the statement q = &p is perfectly valid. Now that q points to p, you must use the arrow operator as shown here:

q->i = 1;

This statement would assign a value of 1 to the member i of the variable p. Notice that the arrow operator is a minus sign followed by a greater-than sign without any spaces in between.

Since C passes the entire structure to a function, large structures can reduce the program execution speed because of the relatively large data transfer. For this reason, it is easier to pass a pointer to the structure to the function. This example shows how a pointer to a structure is utilized:

#include <16c74.h>
#include <string.h>

struct s_type {
   int i;
   char str[80];
} s, *p;

void main(void)
{
   p = &s;
   s.i = 10;
   p->i = 10;
   strcpy(p->str, "I like structures");
   printf("%d %d %s", s.i, p->i, p->str);
}

The two lines s.i=10 and p->i=10 are equivalent. When accessing a structure member using a structure variable, use the period; when accessing a structure member using a pointer to the structure, you must use the arrow operator.

EXERCISE:
1. Write a program that has a structure with one character and a string of 40 characters. Read a character from the keyboard and save it in the character using getch(). Read a string and save it in the string using gets(). Then print the values of the members.
2. What is wrong with this section of code?

   struct type {
      int i;
      long l;
      char str[80];
   } s;

   i = 10;

3. Is this segment of code correct?

   struct s_type {
      int a;
      int b;
   } s, *p;

   void main(void)
   {
      p = &s;
      p.a = 100;
   }

4. Write a program that creates an array of structures three long of the type PIC:

   struct PIC {
      char name[20];
      unsigned char progmem;
      unsigned char datamem;
      char feature[80];
   };

   You will need to load the structures with a PIC16C5X, a PIC16CXX, and a PIC17CXX device. The user will select which structure to print using the keyboard to input a 1, 2, or 3.

8.3 Nesting Structures

So far, you have only seen that members of a structure were one of the C data types. However, the members of structures can also be other structures. This is called nesting structures. For example:

#define NUM_OF_PICS 25

struct PIC {
   char name[40];
   unsigned char progmem;
   unsigned char datamem;
   char feature[80];
};

struct products {
   struct PIC devices[NUM_OF_PICS];
   char package_type[40];
   float cost;
} list1;

The structure products has three elements: an array of PIC structures called devices, a string that holds the package name, and the cost. These elements can be accessed using the list1 variable.

8.4 Introduction to Unions

A union is defined as a single memory location that is shared by two or more variables. The variables that share the memory location may be of different data types; however, you may only use one variable at a time. A union looks very much like a structure. The general format of the union is:

union tag-name {
   type element1;
   type element2;
   ...
   type elementn;
} variable-list;

Again, the tag-name is the name of the union and the variable-list contains the variables that have the union type tag-name. The difference between unions and structures is that each member of the union shares the same data space. For example, the following union contains three members: an integer, a character array, and a double:

union u_type {
   int i;
   char c[3];
   double d;
} temp;

It is important to note that the size of the union is fixed at compile time to accommodate the largest member of the union. The integer uses two bytes, the character array uses three bytes and the double uses four bytes; assuming that doubles are four bytes long, the union temp will have a length of four bytes. The way that a union appears in memory is shown below:

<---------------------------- double ---------------------------->
<----- c[2] -----> <----- c[1] -----> <----- c[0] ----->
                   <------------ integer ------------>
 element3           element2           element1          element0

Accessing the members of the union is the same as with structures: you use a period. The statement temp.i will access the two byte integer member i of the union temp, and temp.d will access the four byte double d. If you are accessing the union through a pointer, you would use the arrow operator just like structures.

A good example of using a union is when an 8-bit microcontroller has an external 12-bit A/D converter connected to a serial port. The microcontroller reads the A/D in two bytes, so we might set up a union that has two unsigned chars and a signed short as the members:

union sample {
   unsigned char bytes[2];
   signed short word;
};

When you want to read the A/D, you would read two bytes of data from the A/D and store them in the bytes array. Then, whenever you want to use the 12-bit sample, you would use word to access the 12-bit number.

EXERCISE:
1. What are the differences between a structure and a union? What are the similarities?
2. Write a program that has a union with a long int member and a four byte character array. Your program should print the long int to the screen a byte at a time.
9. PIC Specific C

Having understood the basics of C, it is now time to move into the PICmicro®MCU specific settings, functions and operations. Every compiler has its own good and not so good points.

9.1 Inputs and Outputs

The Input and Output ports on a PICmicro®MCU are made up from two registers - PORT and PORT DIRECTION - and are designated PORTA,B,C,D,E and TRISA,B,C,D,E. Their availability depends upon the PIC being used in the design. The 16C74 has PORTS A, B, C, D and E - 33 I/O lines. An 8-pin PIC has a single GPIO register and TRIS register - 6 I/O lines. In the direction registers, inputs are set with a 1 and outputs are set with a 0. The pins have both source and sink capability of typically 25mA per pin.

The exception to the I/O lines is the A4 pin, which has an open collector output. As a result, the voltage level on the pin - if tied high with a resistor - is inverted relative to the bit in the PORTA register (i.e. a logic 1 in porta.4 turns on the transistor and hence pulls the pin low). It does have the additional PICmicro®MCU hardware functions as alternatives to being used as part of an 8-bit port. Ports B, C, D and E are similar to PORTA, but the data sheet needs to be consulted for PIC specifics, including the port block diagrams.

An example in assembler could be:

   PAGE0
   CLRF  PORTA          ;set outputs low
   PAGE1                ;select register page 1
   MOVLW B'00000011'    ;A0,1 as inputs, A2-4 as outputs
   MOVWF PORTA          ;send W to port control register (TRIS)
   PAGE0                ;change back to register page 0

Data is sent to the port via a MOVWF PORTA, and bits can be individually manipulated with either BSF or BCF. Data is read from the port with either MOVFW PORTA or bit testing with BTFSS or BTFSC. It is advisable to set up the port conditions before the port direction registers (TRIS); this prevents the port from outputting an unwanted condition prior to being set.

NOTE: On devices with A/D converters, ensure the ADCON1 register is also set correctly - the I/O default is ANALOG.

In C, the port direction registers can be handled in one of two ways: standard, or fast. In standard mode, the port direction registers are set up prior to each I/O operation. This adds lines to a program and hence slows down the speed, but improves the safety side of the code by ensuring the I/O lines are always set as specified. Fast I/O enables the user to set the port direction once, and this remains in place until re-defined; the compiler does not add lines of code to set up the port direction prior to each I/O operation.

Manipulation of data to and from the I/O ports is made easy with the use of numerous built in functions. The following example sets Port B as inputs and then reads in the value:

set_tris_b(0xff);   //make inputs
b_rate = portb;     //read in port

Bit masking is easily achieved by adding the & and the mask pattern after the port name:

b_rate = portb & 0b00000011;   //mask out unwanted bits

The value stored in b_rate can then be used to set up a value to return to the calling function. The following is the whole function used to read some dip switches and set up a baud rate for a comms routine:

byte bd_sw_get()   //baud rate selection
{
   byte b_rate;

   set_tris_b(0xff);              //make inputs
   b_rate = portb & 0b00000011;   //mask out unwanted bits
   switch(b_rate)
   {
      case 0: set_uart_speed(1200);
              break;
      case 1: set_uart_speed(2400);
              break;
      case 2: set_uart_speed(4800);
              break;
      case 3: set_uart_speed(9600);
              break;
   }
}

When setting bit patterns in registers or ports, work in binary, as this will make it easier for you when writing - and for others reading - the source code. It also saves converting between number bases.

On a bit level there are:

bit_set(variable, bit);     //used to set a bit
bit_clear(variable, bit);   //used to clear a bit
bit_test(variable, bit);    //used to test a bit
The above three can be used on variables and I/O:

b = input(pin);            //get the state or value of a pin
output_bit(pin, value);    //set a port pin to a specific value
output_high(pin);          //set an output to logic 1
output_low(pin);           //set an output to logic 0
output_float(pin);         //set a pin to input or floating mode

On a port wide basis, the following instructions are used:

port_b_pullups(true/false);   //enables or disables the weak pullups on port b
set_tris_a(value);            //sets the combination of inputs and outputs for
                              //a given port - set a 1 for input and 0 for
                              //output; this applies to ports b - g as well

Port direction registers are configured every time a port is accessed unless the following pre-processor directives are used:

#use fast_io(port)                     //leaves the state of the port the same
                                       //unless re-configured
#use fixed_io(port_outputs=pin, pin)   //permanently sets up the data direction
                                       //register for the port
#use standard_io(port)                 //default - configures the port every
                                       //time it is used

9.2 Mixing C and Assembler

There are times when inline assembler code is required in the middle of a C program. The reasons could be code compactness, timing constraints, or simply because a routine works "as is". The following example finds the parity of a value d passed to the routine FindParity, which is then equated to a when the routine is called:

FindParity(byte d)
{
   byte count;
#asm
      movlw  8
      movwf  count
      clrw
loop:
      xorwf  d,w
      rrf    d,f
      decfsz count,f
      goto   loop
      movwf  _return_
#endasm
}

main()
{
   byte a, d;

   d = 7;
   a = FindParity(d);
}

When compiled, the listing shows the inline assembler embedded unchanged between the compiler-generated MOVF/MOVWF instructions that fetch the parameter d and store the value returned to a.

Key to PIC16Cxx Family Instruction Sets

Field   Description
b       Bit address within an 8-bit file register (0 - 7)
d       Destination select: d = 0, store result in W;
        d = 1, store in file register f (default).
        The assembler recognizes W and f as destinations.
f       Register file address (0x00 to 0xFF)
k       Literal field, constant data or label
W       Working register (accumulator)
x       Don't care location
i       Table pointer control: i = 0, do not change;
        i = 1, increment after instruction execution

Byte-Oriented Instructions

Mnemonic        Description                  Function
ADDWF  f,d      Add W and f                  W + f >> d
ANDWF  f,d      AND W and f                  W .AND. f >> d
CLRF   f        Clear f                      0 >> f
CLRW            Clear W                      0 >> W
COMF   f,d      Complement f                 .NOT. f >> d
DECF   f,d      Decrement f                  f - 1 >> d
DECFSZ f,d      Decrement f, skip if 0       f - 1 >> d, skip if 0
INCF   f,d      Increment f                  f + 1 >> d
INCFSZ f,d      Increment f, skip if 0       f + 1 >> d, skip if 0
IORWF  f,d      Inclusive OR W and f         W .OR. f >> d
MOVF   f,d      Move f                       f >> d
MOVWF  f        Move W to f                  W >> f
NOP             No operation
RLF    f,d      Rotate left f                rotate through carry
RRF    f,d      Rotate right f               rotate through carry
SUBWF  f,d      Subtract W from f            f - W >> d
SWAPF  f,d      Swap halves of f             f(0:3) <-> f(4:7) >> d
XORWF  f,d      Exclusive OR W and f         W .XOR. f >> d

Bit-Oriented Instructions

Hex     Mnemonic      Description
10Ff    BCF   f,b     Bit clear f
14Ff    BSF   f,b     Bit set f
18Ff    BTFSC f,b     Bit test f, skip if clear
1CFf    BTFSS f,b     Bit test f, skip if set

(On the 12-bit 16C5X core the same four instructions have the opcodes 4bf, 5bf, 6bf and 7bf.)

Literal and Control Instructions

Mnemonic     Description                   Function
ADDLW k      Add literal and W             k + W >> W
ANDLW k      AND literal and W             k .AND. W >> W
CALL  k      Call subroutine               PC+1 >> TOS, k >> PC
CLRWDT       Clear watchdog timer          0 >> WDT (and prescaler, if assigned)
GOTO  k      Go to address                 k >> PC (9 bits on the 16C5X)
IORLW k      Inclusive OR literal and W    k .OR. W >> W
MOVLW k      Move literal to W             k >> W
OPTION       Load OPTION register          W >> OPTION register (16C5X)
RETFIE       Return from interrupt         TOS >> PC, 1 >> GIE
RETLW k      Return with literal in W      k >> W, TOS >> PC
RETURN       Return from subroutine        TOS >> PC
SLEEP        Go into standby mode          0 >> WDT, stop oscillator
SUBLW k      Subtract W from literal       k - W >> W
TRIS   f     Tri-state port f              W >> I/O control register (16C5X)
XORLW k      Exclusive OR literal and W    k .XOR. W >> W

In addition, the assembler accepts a set of special shorthand instructions - a form of macro built from the instructions above and the STATUS register flags (Carry, Digit Carry and Zero). They include ADDCF and SUBCF (add or subtract the carry into a file, built from BTFSC Status,Carry plus INCF/DECF), CLRC, CLRDC and CLRZ (clear a flag with BCF Status,x), SETC, SETDC and SETZ (set a flag with BSF Status,x), SKPC, SKPNC, SKPDC, SKPNDC, SKPZ and SKPNZ (skip on a flag state with BTFSS/BTFSC), branches built from a flag test followed by GOTO k, MOVFW f (MOVF f,W), TSTF f (test a file) and NEGF (negate a file).
9.3 Advanced BIT Manipulation

The CCS C compiler has a number of bit manipulation functions that are commonly needed for PICmicro®MCU programs.

bit_set, bit_clear and bit_test simply set or clear a bit in a variable, or test the state of a single bit. Bits are numbered with the lowest bit (the 1 position) as 0 and the highest bit as 7. For example:

c = 'A';         //c in binary is 01000001
bit_set(c, 5);   //c is now 01100001 or 'a'

if(bit_test(x, 0))
   printf("X is odd");
else
   printf("X is even");

shift_left and shift_right will shift one bit position through any number of bytes. These functions return as their value 0 or 1, representing the bit shifted out. In addition, they allow you to specify the bit value to put into the vacated bit position. Note: the first parameter is a pointer, the second parameter is the number of bytes, and the last parameter is the new bit. In the example below, since x is an array, the unsubscripted identifier is already a pointer; if a simple variable or structure is used, the & operator must be added. Also note that these functions consider the lowest byte in memory to be the LSB.

Example:

int x[3] = {0b10010001, 0b00011100, 0b10000001};
// x msb first is: 10000001, 00011100, 10010001

bb = shift_left(x, sizeof(x), 0);
// x msb first is: 00000010, 00111001, 00100010    //bb is 1

bb = shift_left(x, sizeof(x), 1);
// x msb first is: 00000100, 01110010, 01000101    //bb is 0

long y;
shift_left(&y, 2, 1);      //& needed on a simple variable

struct { int a; int b; int c; } z;
shift_right(&z, 3, 0);     //& needed on a structure
rotate_left and rotate_right work like the shift functions above, except that the bit shifted out of one side gets shifted in the other side. For example:

int x[3] = {0b10010001, 0b00011100, 0b10000001};
// x msb first is: 10000001, 00011100, 10010001
rotate_left(x, sizeof(x));
// x msb first is: 00000010, 00111001, 00100011

The swap function swaps the upper 4 bits and lower 4 bits of a byte. For example:

int x;
x = 0b10010110;
swap(x);      //x is now 01101001

9.4 Timers

All PICmicro®'s have an 8-bit timer, and some PIC's have two more advanced timers. The capabilities are as follows:

rtcc (timer0) = 8-bit. May increment on the instruction clock or by an external source. Applying a pre-scaler may slow the increment. When timer0 overflows from 255 to 0, an interrupt can be generated (not 16C5X series).

timer1 = 16-bit. May increment on the instruction clock or by an external source. Applying a pre-scaler may slow the increment. When timer1 overflows from 65535 to 0, an interrupt may be generated. In capture mode, the timer1 count may be saved in another register when a pin changes, and an interrupt may also be generated. In compare mode, a pin can be changed when the count reaches a preset value, and an interrupt can be generated.
timer2 = 8-bit. May increment on the instruction clock. Applying a pre-scaler may slow the increment. When timer2 overflows from 255 to 0, an interrupt can be generated. The interrupt can be slowed by applying a post-scaler, so that it requires a certain number of overflows before the interrupt occurs. This timer is used as part of the PWM.

The following is a simple example using the rtcc to time how long a pulse is high:

#include <16c74.h>
#fuses HS,NOWDT
#use delay(clock=1024000)
#use rs232(baud=9600, xmit=PIN_C6, rcv=PIN_C7)

main()
{
   int time;

   setup_counters(rtcc_internal, rtcc_div_256);
   //increments 1024000/4/256 times per second,
   //or every millisecond
   while(!input(PIN_B0));   //wait for high
   set_rtcc(0);
   while(input(PIN_B0));    //wait for low
   time = get_rtcc();
   printf("High time = %u ms.", time);
}

The following is an example using the timer1 capture feature to time how long it takes for pin C2 to go high after pin B0 is driven high:

#include <16c74.h>
#fuses HS,NOWDT
#use delay(clock=8000000)
#use rs232(baud=9600, xmit=PIN_C6, rcv=PIN_C7)
#bit capture_1 = 0x0c.2   //pir1 register
                          //bit 2 = capture has taken place

main()
{
   long time;

   setup_timer1(t1_internal | t1_div_by_2);
   //increments every 1 us
   setup_ccp1(ccp_capture_re);   //configure CCP1 to capture rise
   capture_1 = 0;
   set_timer1(0);
   output_high(PIN_B0);
   while(!capture_1);
   time = ccp_1;
   printf("Reaction time = %lu us.", time);
}

9.5 Analog to Digital Conversion

The 8-bit A/D converters fitted to many PICmicro® devices resolve the measured voltage to one of 255 values. If a 5 volt supply is used as the reference, the measured accuracy is 5/255 = 19.6mV. However, if the reference voltage is reduced to 2.55 volts, the accuracy improves to 10mV per step. Other Microchip parts have 10, 11, 12 and 16 bits resolution.

It is important to note which combinations of I/O lines can be used for analog and digital. Tables extracted from the data sheets show, for parts such as the 16C710, 16C711 and 16C72/3, which of the pins A0-A3 may be analog inputs (A), digital I/O (D) or the Vref input for each control setting. The A/D subsystem consists of the ADCON1 (analog/digital control) and TRISA (TRISE) registers feeding PORTA (PORTE) through a multiplexer into the A/D converter, with the result placed in ADRES and control/status in ADCON0.

NOTE: The default for ports having both analog and digital capability is ANALOG.

In C, the setup and operation of the A/D is simplified by ready made library routines:

setup_adc_ports(mix) will set up the ADC pins to be analog, digital or a combination. The allowed combinations for mix vary depending on the chip. The constants all_analog and no_analog are valid for all chips. Some other example constants: ra0_ra1_ra2_ra3_analog, ra0_ra1_analog_ra3_ref.

setup_adc(mode) sets up the analog to digital converter. The modes are as follows: adc_off, adc_clock_div_2, adc_clock_div_8, adc_clock_div_32, adc_clock_internal.

set_adc_channel(0-7) selects the channel for a/d conversion.

read_adc() will read the digital value from the analog to digital converter. Calls to setup_adc and set_adc_channel should be made sometime before this function is called. This function returns an 8-bit value 00h - FFh on parts with an 8-bit A/D converter; on parts with greater than 8 bits of A/D, the value returned is always a long with the range 000h - FFFFh.

setup_adc_ports(all_analog);    //sets porta to all analog inputs
setup_adc(adc_clock_internal);
set_adc_channel(1);             //points a/d at channel 1
delay_ms(5000);                 //waits 5 seconds
value = read_adc();             //reads value
printf("A/D value = %2x\n\r", value);   //prints value

9.6 Data Communications / RS232

RS232 communications between PCs, modems etc. form part of an engineer's life. The permutations of 9 or 25 pins on a D connector and the software controlling communications are endless. When connecting equipment with RS232 interfaces, it is important to know which is classified as the Data Controlling Equipment (DCE) and which is the Data Terminal Equipment (DTE). A minimum interface can be 3 wires - Ground, Transmit and Receive - but what to do with the remaining pins? The voltage levels are between ±3 and ±15 volts, allowing plenty of leeway for both drivers and receivers, and are documented in the EIA-232-D or CCITT V24/28 specifications. The problem seems to arise when self-built products need to be interfaced to the outside world.

Common Problems

Result                       Possible reasons
Garbled characters           parity, speed, character length, stop bits
Lost data                    flow control
Double spacing               translation of carriage returns or line feeds
Overwriting                  translation of carriage returns or line feeds
No display of characters     duplex operation
Double characters            duplex operation

Data Format

Data sent via an RS232 interface follows a standard format:

Start bits    always 1 bit
Stop bits     1 or 2 bits
Data bits     7 or 8 bits
Parity bits   none if no error detection is required;
              odd or even if error detection is required

The parity system may be either "odd" or "even", and both systems give the same level of error detection. In an even parity system, the overall count of '1's in the combined data byte plus parity bit is even; thus, with an 8-bit data byte of '10101100' the parity bit would be set to '0'. In an odd parity system, the overall count of '1's in the combined data byte plus parity bit is odd; thus, with an 8-bit data byte of '10101100' the parity bit would be set to '1'. If corruption of either a data byte or of the parity bit itself takes place, the receiver will recognize the error when it carries out the parity check. In the event of more than one bit being corrupted, however, it is possible that the receiver will not recognize the problem, provided that the parity still appears correct.
Parity checking is therefore not a cast iron method of checking for transmission errors, but in practice it provides a reasonable level of security in most systems. The parity system does not correct errors in itself: it only indicates that an error has occurred, and it is up to the system software to react to the error state - in most systems this would result in a request for re-transmission of the data. The PICmicro®MCU does not have on-chip parity testing or generation, so the function needs to be generated in software. This adds an overhead to the code generated, which could have a knock-on effect on execution times. An ASCII code chart, relating each character to its 7-bit code, is a useful companion when working with serial data.

Bit Rate Time Calculation

As BAUD is bits per second, each data bit has a time of 1/(baud rate). This works out as: 1200 baud = 833uS, 2400 baud = 416uS.

The USART

In Asynchronous mode, the USART can handle full duplex communications, but only half duplex in Synchronous mode. Data formats acceptable to the USART are: 8 or 9 data bits; none, odd or even parity; and indication of overrun or framing errors on the received data. Besides the obvious interface to PC's and modems, the USART can interface to A/D, D/A and EEPROM devices. The USART modes are Synchronous Master, Synchronous Slave and Asynchronous, the latter being the most common for interfacing peripherals.

Included in the C compiler are ready-made functions for communications, such as:

getc, getch, getchar   waits for and returns a character received on the
                       RS232 rcv pin
gets(char *string)     reads a string of characters into the variable until a
                       carriage return is received. A 0 terminates the string.
                       The maximum length of characters is determined by the
                       declared array size for the variable.

The CCS compiler has the ability to use the on-board PICmicro®MCU's UART if one is present. If the UART is available in hardware, the code generated will use the existing hardware. If, however, the hardware is absent, a software UART is created and tested in user software; the resulting code generated will be larger, but with the exception of the interrupt on transmit and receive, the code behaves like a hardware UART. This code transparency enables code to be moved from one PICmicro®MCU application to another with minimal effect. The software UART also has the ability to invert the output data levels, removing the need for an external driver/level shifter in logic level applications.

There are pre-set functions which speed up application writing:

#use fixed_io(c_outputs=pin_C6)   //speeds up port use
#use delay(clock=4000000)         //clock frequency
#use rs232(baud=4800, xmit=PIN_C6, rcv=PIN_C7)

The following is an example of a function that waits up to one-half second for a character:

char timed_getc()
{
   long timeout;

   timeout_error = FALSE;
   timeout = 0;
   while(!kbhit() && (++timeout < 50000))   //1/2 second
      delay_us(10);
   if(kbhit())
      return(getc());
   else
   {
      timeout_error = TRUE;
      return(0);
   }
}
The master may send or request data to/from any slave. #include <16C74. I2C_WRTIE(3). There is no standard beyond this. If a third wire is used. followed by the slave address. it may be an enable or a reset. data=I2C_READ(). I2C_WRTIE(1). The LSB of this first byte indicates the direction of data transfer from this point on. Data is transferred and the receiver specifically acknowledges each byte. The two wires are labeled SCL and SDA. I2C_STOP(). The following is C code to send a 10 bits command MSB first to a device. Note: The shift_left function returns a single bit (the bit that gets shifted out) and this bit is the data value used in the output_bit function. SDA=PIN_B1) main() { int data. a stop condition is sent on the bus. I2C_STOP(). A single two wire I2C bus has one master and any number of slaves. Each slave has a unique address. I2C_WRTIE(10).I2C is a popular two-wire communication bus to hardware devices. Both require a pull-up resistor (1-10K) to +5V. One wire supplies a clock while the other sends or receives data. main() { long cmd.input(PIN_B1)). for(i=1. } output_low(PIN_B1).0)).++i) { output_bit(PIN_B1. } The following is C code to read a 8 bits response. output_low(PIN_B2). Again shift_left is used.++i) { output_high(PIN_B2). //disable device } The previous are two great examples of shifting data in and out of the PICmicro®MCU a bit at a time. } output_low(PIN_B0). They can be easily modified to talk to a number of devices.2.i++) //left justify cmd shift_left(cmd. 125 . //disable device output_low(PIN_B0). //B2 is the clock output_low(PIN_B2). setup_spi(SPI_MASTER | SPI_H_TO_L | SPI_CLK_DIV_16). cmd=0x3e1.0). Some PIC’s have built in hardware for SPI.i<=10. //enable device //send a clock pulse and //read a data bit eight times for(i=0. shift_left(&data. and the data that is shifted in is bit on the input pin.i<=6. output_high(PIN_B0). The following is an example of using the built in hardware to write and read. main() { int data. 
9.8 SPI Communication

Like I2C, SPI is a two or three wire communication standard to hardware devices. SPI is usually not a bus, since there is one master and one slave. If a third wire is used, it may be an enable or a reset. There is no standard beyond this, as each device is different.

The following is C code to send a 10 bit command MSB first to a device. Note: the shift_left function returns a single bit (the bit that gets shifted out) and this bit is the data value used in the output_bit function.

   main() {
      long cmd;

      cmd = 0x3e1;
      for(i=1; i<=6; i++)                    // left justify cmd
         shift_left(&cmd, 2, 0);
      output_high(PIN_B0);                   // enable device
      // send out 10 data bits each with a clock pulse
      for(i=1; i<=10; ++i) {
         output_bit(PIN_B1, shift_left(&cmd, 2, 0));
         output_high(PIN_B2);                // B2 is the clock
         output_low(PIN_B2);
      }
      output_low(PIN_B1);
      output_low(PIN_B0);                    // disable device
   }

The following is C code to read an 8 bit response. Again shift_left is used, and the data that is shifted in is the bit on the input pin.

   main() {
      int data;

      output_high(PIN_B0);                   // enable device
      // send a clock pulse and read a data bit eight times
      for(i=1; i<=8; ++i) {
         output_high(PIN_B2);
         output_low(PIN_B2);
         shift_left(&data, 1, input(PIN_B1));
      }
      output_low(PIN_B0);                    // disable device
   }

The previous are two great examples of shifting data in and out of the PICmicro®MCU a bit at a time. They can be easily modified to talk to a number of devices. Some PICs have built-in hardware for SPI. The following is an example of using the built-in hardware to write and read:

   main() {
      int data;

      setup_spi(SPI_MASTER | SPI_H_TO_L | SPI_CLK_DIV_16);
      output_low(PIN_B0);
      spi_write(3);
      spi_write(0xE1);
      data = spi_read(0);
      output_high(PIN_B0);
   }

Note: built-in hardware does have restrictions. For example, the above code sent 16 bits, not 10. Most SPI devices ignore all bits until the first 1 bit, so this will still work. Slower parts may need some delays to be added.

Once the registers are set up, the PWM runs on its own without constant software involvement. Calculation of the PWM values is best achieved with a simple spreadsheet.

Example: PWM setup – frequency = 600Hz, M/S ratio = 1:1

   PR2 value      = ((1/PWM frequency) / (prescale value * (4/OSC frequency))) - 1
   PWM resolution = log(OSC frequency / PWM frequency) / log 2

So for the above example, a 600Hz PWM with a 4MHz oscillator and /16 prescaler will give 103.2 to load into the PR2 register and a resolution of 12.7 bits.
i. set_ccp2(CCP_PWM) this function will initialize the CCP in a PWM mode Example: setup_ccp1(CCP_PWM). the 2 lsb’s will be ignored.475KHz and 996Hz. //sets up for pwm setup_timer_2(T2_DIV_BY the first step is to test and determine if the source is the desired one or.Interrupts can come from a wide range of sources within the PICmicro®MCU and also from external events. When an interrupt occurs. The PIC16C5X series have no interrupts. 131 . When an interrupt occurs. etc. Some of the interrupt sources are shown below. Depending on which PIC is used in a design. which one to handle first. the type and number of interrupts may vary.. in the case of multiple interrupts. the PIC hardware follows a fixed pattern as shown below. the software is all your responsibility. and software written for these products will have to perform a software poll. but refer to the data sheet for latest information. This functions set or clear the respective interrupt enable flags so interrupts can be turned on and off during the program execution. Examine the interrupt flags to determine which false interrupt has been triggered. The main save and restore of registers and startup code is not generated. do { while(True). //store character Buff++.#priority sets up the order of the interrupt priority: #priority rtcc. //load character Buffer[Buff+1]=b. as the characters are received faster than the display can handle them. The edge can be 1_to_h or h_to_1. disable_interrupts(level). #int_xxx – where xxx is the desired interrupt: #int_rda //enable usart receive interrupt rs232_handler() //interrupt driven data read and store { b=getch(). //increment pointer } main() { enable_interrupts(INT_RDA). This example forces an interrupt on receipt of a character received via the USART. This function is extracted from an LCD display program. //increment pointer } enable_interrupts(level). ready for the next character. //store character Buff++. portb #int_globe . 
   #priority rtcc, tmr0, rb, portb

#int_xxx – where xxx is the desired interrupt – attaches the following function to that interrupt. This example forces an interrupt on receipt of a character received via the USART. The character is placed in a buffer and the buffer pointer incremented, ready for the next character. The function is extracted from an LCD display program, where buffering is needed as the characters are received faster than the display can handle them.

   #int_rda                         // enable usart receive interrupt
   rs232_handler()                  // interrupt driven data read and store
   {
      b = getch();                  // load character
      Buffer[Buff+1] = b;           // store character
      Buff++;                       // increment pointer
   }

   main() {
      enable_interrupts(INT_RDA);
      enable_interrupts(GLOBAL);
      do {
         // main program loop
      } while(TRUE);
   }

enable_interrupts(level); disable_interrupts(level);
   These functions set or clear the respective interrupt enable flags so interrupts can be turned on and off during the program execution.

ext_int_edge(edge);
   This is used to select the incoming polarity on PORTB bit 0 when used as an external interrupt. The edge can be L_TO_H or H_TO_L.

#int_default is used to capture unsolicited interrupts from sources not set up for interrupt action. Examine the interrupt flags to determine which false interrupt has been triggered.

#int_global – use with care – creates your own interrupt handler. The main save and restore of registers and startup code is not generated, so the software is all your responsibility.

Include Libraries

These libraries add the 'icing on the cake' for the C programmer. They contain all the string handling and math functions that will be used in a program. The various libraries are included as, and when, the user requires them.

CTYPE.H contains several traditional macros. Each returns TRUE if:

   isalnum(x)    x is an alphanumeric value, i.e. 0-9, 'A' to 'Z', or 'a' to 'z'
   isalpha(x)    x is an alpha value, i.e. 'A' to 'Z' or 'a' to 'z'
   isdigit(x)    x is a numeric value, i.e. 0-9
   islower(x)    x is a lower case value, i.e. 'a' to 'z'
   isupper(x)    x is an upper case value, i.e. 'A' to 'Z'
   isspace(x)    x is a space
   isxdigit(x)   x is a hexadecimal digit, i.e. 0-9, 'A' to 'F', or 'a' to 'f'

MATH.H holds all the complicated math functions. In all cases, the value returned is a floating-point number. Examination of this file gives an insight into how the mathematical functions operate. STDLIB.H contains the standard library functions.
Pointers to get started

► Start off with a simple program – don't try to debug 2000 lines of code in one go.
► Draw a software functional block diagram to enable modular code writing.
► Write, test, and debug each module stage by stage.
► Use known working hardware.
► Use some form of I/O map when starting your design to speed up port identification and function.
► Comment on the software as it's written – otherwise it is meaningless the following day or if read by another.
► Update documentation at the end of the process.
► Have a few flash versions of the PIC MCU chip on hand when developing to save time waiting. You will not need an eraser, as these devices are electrically erasable (i.e. no window). If using PICSTART PLUS (programmer only) you will need to use the program-test-modify process – so allow extra development time.
► Attend a Microchip approved training workshop.

Development Path

You will need a PC running Windows 95, 98, Me, NT, 2000, XP or Linux, and a C compiler. If you then wish to take the development from paper to a hardware design:

   Zero Cost      Demo versions of the C compiler
   Starter        PICSTART PLUS programmer, C compiler, and PIC MCU sample
   Intermediate   Microchip ICD for 16F87x family or ICD2 for most Flash PIC MCUs
   Serious        In Circuit Emulator (ICE): PICMASTER, ICEPIC, MPLAB ICE2000 or
                  ICE4000 – allows debugging of hardware and software at the
                  same time. You will need a programmer to go with the ICE;
                  see a catalog for part numbers.

What happens when my program won't run?

► Has the oscillator configuration been set correctly when you programmed the PIC?
► Was the watchdog enabled when not catered for in the software?
► Have all the ports been initialized correctly?
► Make sure there is no duplication of names given to variables, registers, and labels.
► Ensure the data registers are set to a known condition.
► On 16C7X devices, check if the ADCON1 register is configured as analog or digital.
► Is the reset vector correct, especially if code has been moved from one PICmicro®MCU family to another?

Reference Literature

Microchip data sheets and CDROM for the latest product information, PIC support products, and Microchip training workshops.
CCS Reference Manual
Microchip MPLAB Documentation and Tutorial

Some good reference books on C (in general):
   Turbo C – Kelly & Pohl
   An Introduction to Programming in C – Purdum
   The C Programming Language – Kernighan & Ritchie
   C Programming Guide
   Internet Resource List

Contact Information

CCS: www.ccsinfo.com
Microchip: www.microchip.com
Bluebird Electronics – Tel: 01380 827080, Fax: 01380 827082, Email: info@bluebird-electronics.co.uk

Authors Information

Nigel Gardner is an Electronic Engineer with over 20 years of industrial experience in various fields. He owns Bluebird Electronics, which specializes in LCD display products and custom design work. Nigel is a member of the Microchip Consultants Group.
https://www.scribd.com/doc/109089757/An-Introduction-to-Programming-the-Microchip-PIC-in-CCS-C
In this article, we will explain how you can expose an application to the internet with the network load balancer (NLB). There are three options to expose an application if you are using a standard classic Kubernetes cluster (the NodePort is the only option if you are using a free Kubernetes cluster):

- NodePort
- Network Load Balancer (NLB)
- Ingress (application load balancer, ALB)

Prerequisites

- IBM Cloud account
- IBM Cloud Container Registry
- IBM Cloud Kubernetes Service (you need a standard classic cluster)

Creating a network load balancer (NLB) service

Let's deploy a sample Hello World app into a Kubernetes pod within the worker node by utilizing the commands in the steps below. You can see the full details of how you can deploy an app in "Lesson 3: Deploying single instance apps to Kubernetes clusters" in the IBM Cloud Docs:

git clone
cd 'container-service-getting-started-wt/Lab 1'
ibmcloud cr build -t us.icr.io/tn_namespace/hello-world:1 .
kubectl create deployment hello-world-deployment --image=us.icr.io/tn_namespace/hello-world:1

Now you have the Deployment hello-world-deployment and the app is running on a pod.

Use the following steps to create a network load balancer (NLB) service to expose your app. The portable addresses that are assigned to the NLB are permanent and do not change, even when a worker node is recreated in the cluster. You will be able to access your app by <load-balancer-ip>:<port that your app requires>.

1. Create a network load balancer (NLB) service

You can create an NLB service by using either one of two methods: the command line or the service configuration file.

Create via the command line

Create via the service configuration file

2. Get the NLB's external-IP address and the port

Next, you'll need to get the NLB's external IP address and listen port. Because you don't specify an IP address at this time, one of the remaining portable public IP addresses will be assigned to the network load balancer service.

3.
Access your app by <NLB's external-ip>:<NLB's listen port>

Run curl or access it in a web browser.

4. Create an IBM-provided subdomain for your app (optional)

You can create a subdomain for your app that registers public NLB IP addresses with a DNS entry. If you create a DNS subdomain for your NLB, users can access your app through the NLB's subdomain instead. A DNS system service resolves the subdomain to the portable public IP address of the NLB.

5. Set up a custom domain (optional)

If you choose, you can set up a custom domain to point to the IBM-provided subdomain that you created in the previous step:

- Register a custom domain by working with your Domain Name Service (DNS) provider or by using IBM Cloud Internet Services or IBM Cloud DNS.
- Define an alias for your custom domain by specifying the IBM-provided subdomain as a Canonical Name record (CNAME).

Clean up

You can run the following commands to clean up the testing in this article:

Summary

I hope that you now understand how you can expose an application to the outside of your Kubernetes cluster with the network load balancer (NLB) so that users can access the app from the Internet. For more details about using an NLB, see the following:

- Setting up basic load balancing with an NLB 1.0
- Components and architecture of an NLB 1.0
- Create an External Load Balancer

If you want to minimise downtime and plan high availability for your app, you can configure with the NLB in a single-zone or a multi-zone cluster. See more details in "Planning your cluster for high availability." For more information on other methods of exposing your application to the outside of your Kubernetes cluster, see "What is Kubernetes Ingress?"
https://www.ibm.com/cloud/blog/using-a-network-load-balancer-to-expose-an-application
Joel Spolsky's "Travel Survival" tips. Tip #2: Fly first class. Right. That's when you stop reading..

I. Dynamic locks for openssl .. hmm are these even used? I have 2 problems with locks and openssl. Actually I am struggling with (1) right now. As part of another project, I am writing Apache Portable Runtime (apr) ssl wrappers over openssl api's (well over some of them) and I would very much like to allocate memory for dynamic locks from a caller supplied pool but the openssl callback semantics for dyna locking don't allow this. Grrr

I have been working on a compiler in my free time that attempts to take a C like src file and targets the JVM platform. Of course since I don't have the slightest clue about the JVM platform I spent some time poking at the class file format. Here is a spinoff from that project ... Kapi is a java class file disassembler for Win32 that also produces useful src hints .. like the "good twin" of javap. Also, I don't really hate Win32 - I just think it is retarded ..sometimes.

On byte swapping and endianness

I have seen various implementations of htonl(). Most of them look like ...

   (x >> 24) | ((x & 0xff0000) >> 8) | ((x & 0xff00) << 8) | (x << 24);

Let's restrict ourselves to the M$ VC++ 6 32 bit machine and see what the compiler makes of that htonl() implementation..
F:\maya\misc>cl
Microsoft (R) 32-bit C/C++ Standard Compiler Version 12.00.8168 for 80x86

F:\maya\misc>cat htonl.c
int htonl(int x)
{
    return (x >> 24) | ((x & 0xff0000) >> 8) | ((x & 0xff00) << 8) | (x << 24);
}

Now we ask the compiler to show us the (un-optimised) assembly for this file

F:\maya\misc>cl /c /Fa htonl.c
F:\maya\misc>more htonl.asm
/** _snip_ **/
; File htonl.c
; Line 2
        push    ebp
        mov     ebp, esp
; Line 3
        mov     eax, DWORD PTR _x$[ebp]
        sar     eax, 24                 ; 00000018H
        mov     ecx, DWORD PTR _x$[ebp]
        and     ecx, 16711680           ; 00ff0000H
        sar     ecx, 8
        or      eax, ecx
        mov     edx, DWORD PTR _x$[ebp]
        and     edx, 65280              ; 0000ff00H
        shl     edx, 8
        or      eax, edx
        mov     ecx, DWORD PTR _x$[ebp]
        shl     ecx, 24                 ; 00000018H
        or      eax, ecx
; Line 4
        pop     ebp
        ret     0
_htonl  ENDP
_TEXT   ENDS
END

That seems like a lot of stuff for endianness conversion. Now let's see if a little bit of inline assembly can't fix this ..

F:\maya\misc>cat htonl.c
int htonl(int j)
{
    _asm {
        mov   eax, j
        bswap eax
    }
}

And the assembly for this is ...

F:\maya\misc>more htonl.asm
/** snip **/
; File htonl.c
; Line 2
        push    ebp
        mov     ebp, esp
        push    ebx
        push    esi
        push    edi
; Line 5
        mov     eax, DWORD PTR _j$[ebp]
; Line 6
        bswap   eax
; Line 8
        pop     edi
        pop     esi
        pop     ebx
        pop     ebp
        ret     0
_htonl  ENDP
_TEXT   ENDS
END

So, the assembly is neater and smaller. If you ignore all the pushes and pops needed to make that inline assembly work, you will see that the main brunt of the work is done by the *bswap* instruction, which takes 1 to 3 cycles to do all the dirty work!
http://www.advogato.org/person/saju/diary.html?start=9
RoboWiki - User contributions [en] 2018-03-17T06:11:59Z User contributions MediaWiki 1.19.6 //robowiki.net/wiki/Robocode/.NET/Create_a_.NET_robot_with_Visual_Studio Robocode/.NET/Create a .NET robot with Visual Studio 2013-03-25T11:35:21Z <p>Robingrindrod: Removed link to VB tutorial as it is no longer valid</p> <hr /> <div>This page is a tutorial describing how to create a .NET robot with Visual Studio C# 2008 Express Edition.<br /> <br /> == Creating a .NET robot in Visual Studio ==<br /> <br /> From version 1.7.2.0, Robocode supports robots that run under Microsoft .NET framework CLR.<br /> Note that .NET robots will only be able to run on operating systems that support the .NET framework 2.0. For now, [ Mono] is not supported due to current limitations of [ jni4net].<br /> <br /> This tutorial assumes that you are familiar with .NET programming. In addition, this tutorial is made for C# programmers, but it should be easy to use this tutorial for Visual Basic .NET or other .NET programming languages instead.<br /> <br /> === Prerequisites ===<br /> <br /> # Robocode and Java must be installed on your system (see [[Robocode/System Requirements|System Requirements]] and [[Robocode/Download|Download]]).<br /> # .<br /> # Visual Studio 2005 or newer is required, but Visual Studio 2008 is strongly recommended. You can download one of the [ Express Edition], which comes for free.<br /> <br /> Note that you don't need Visual Studio for developing .NET robots. You can use Microsoft .NET SDK 2.0 if you wish.<br /> However, this tutorial will make use of [ Visual Studio C# 2008 Express Edition], and hence already have it downloaded and installed.<br /> <br /> === Creating a VS solution for your robot ===<br /> <br /> Open Visual Studio and create a new project (File -> New Project... ''Ctrl+Shift+N''), and select the template named 'Class Library'.<br /> Specify a name for your project that makes sense, e.g. 
'MyProject', and press OK.<br /> <br /> Robocode is only able to load .NET robot from .dll file, so your project must be 'Class Library' that is assembled into a .dll. file.<br /> In this tutorial, we will use 'MyProject' as project name.<br /> <br /> [[Image:VS2008_New_Project_dialog.png|Screenshot that shows the New Project dialog, where you select the template for the project, and where 'MyProject' is written in the name field]]<br /> <br />...<br /> <br /> In this example, we will save our project MyProject under <code>C:\</code> and name our entire solution 'MySolution'.<br /> Note that a single solution can contain multiple projects, which is very useful if you want to develop more robots.<br /> <br /> [[Image:VS2008_Save_Project_dialog.png|Screenshot that shows the Save Project dialog, with MyRobot, C:\, and MySolution]]<br /> <br /> === Setting up Project References ===<br /> <br /> Make sure that you have the Solution Explorer view open, which is typically located at the right side of VS.<br /> If it is not present, you can open it from the menu (View -> Solution Explorer ''Ctrl+W, S'').<br /> <br /> [[Image:VS2008_Solution_Explorer_MyProject.png|Screenshot that shows the Solution Explorer with the new solution containing MyProject]]<br /> <br /> You need to add a reference to the <code>robocode.dll</code> file located under the <code>\libs</code> folder of your home folder for Robocode, e.g. 
<code>C:\robocode</code> (default home dir of Robocode).<br /> Otherwise, Visual Studio will not know anything about the robot API provided with Robocode, and hence will not be able to build a Robocode robot for you.<br /> <br /> Right-click the References in the Solution Explorer and select 'Add Reference...'.<br /> In the Add Reference dialog that appears, select the Browse pane and browse into the <code>\libs</code> folder of your robocode home folder.<br /> <br /> [[Image:VS2008_Add_Reference_dialog_robocode_folder.png|Screenshot that shows the Add Reference dialog where we browse into the Robocode home directory]]<br /> <br /> [[Image:VS2008_Add_Reference_dialog_robocode_dll.png|Screenshot that shows the Add Reference dialog where we browse into the 'libs' folder of the Robocode home directory and select the 'robocode.dll']]<br /> <br /> Now you select the <code>robocode.dll</code> file and press OK.<br /> <br /> When this is done, 'robocode' should show up under the References of your Solution Explorer.<br /> <br /> [[Image:VS2008_Solution_Explorer_MyProject_references.png|Screenshot that shows the Solution Explorer with all references including 'robocode']]<br /> <br /> === Creating your first robot class ===<br /> <br /> Now we are ready for creating our main robot class.<br /> In the Solution Explorer, a class named <code>Class1.cs</code> has already been provided and contains some example code.<br /> Delete this class by right-clicking on it in the Solution Explorer, select Delete and press OK on the dialog that follows.<br /> <br /> Now we create a new class for your robot called <code>MyRobot.cs</code>.<br /> Right-click on MyProject in the top of the Solution Explorer and select Add -> Class...<br /> <br /> On the Add New Item dialog that shows up, select 'Class' and set the name of this class in the buttom of the dialog to <code>MyRobot.cs</code> and press Add.<br /> <br /> [[Image:VS2008_Add_New_Item_MyProject.png|Screenshot that shows the Add New 
Item dialog with the Class selected, and where the class name is set to MyRobot.cs]]<br /> <br /> Note: If you need more classes for your robot later, you can add these classes to your solution in the same way.<br /> <br /> After you have added your new class, MyRobot.cs, this source file will be automatically opened in Visual Studio and ready to be edited.<br /> <br /> Your new source file will contain this initial code:<br /> <br /> using System;<br /> using System.Collections.Generic;<br /> using System.Linq;<br /> using System.Text;<br /> <br /> namespace MyProject<br /> {<br /> class MyRobot<br /> {<br /> }<br /> }<br /> <br /> It contains some general 'using' declarations to standard system libraries, and it defines a class named MyRobot within the namespace named MyProject.<br /> We are now ready to put some additional code into our source file so our robot can make some action.<br /> <br /> First we need to add a new 'using' declaration for Robocode, which we can add right after the 'using System' declarations:<br /> <br /> using Robocode;<br /> <br /> This will give your robot access to the public Robocode API.<br /> <br /> Secondly, you should change the namespace so MyProject is replaced with your initials, nickname, handle or similar.<br /> In this example, we use Fnl's initials which is FNL:<br /> <br /> namespace FNL {<br /> <br /> Then we need to specify which robot type your class my inherit from.<br /> In this example we use the simple robot type named Robot:<br /> <br /> class MyRobot : Robot<br /> <br /> The main method of your robot is the <code>Run</code> method. 
In this method you control your robot by calling robot command like e.g., <code>Ahead()</code>, <code>TurnLeft()</code> or <code>TurnGunRight()</code> etc.<br /> <br /> Note that it is important that your Run method runs in a loop, meaning that it should have a loop likes <code>while (true)</code> while will make your robot run forever.<br /> <br /> If you do not have a loop, your robot will stop as soon as the Run method has finished the execution.<br /> <br /> If you need some initialization for your robot that should be executed when your robot is started, you should put this code before your loop.<br /> <br /> // Your Run method must be public, is void, and must override the Run() method from the super class<br /> public override void Run()<br /> {<br /> // Perform your initialization for your robot here<br /> <br /> while (true)<br /> {<br /> // Perform robot logic here calling robot commands etc.<br /> }<br /> }<br /> <br /> Here is an example of how the Run method could look like: /> Your robot is also able to react on events. In order to react on a specific event, you must provide an event handler for the specific event.<br /> <br /> Here is an example of a simple event handler that is specific to the ScannedRobotEvent, which occur when your robot scans another robot:<br /> <br /> // Robot event handler, when the robot sees another robot<br /> public override void OnScannedRobot(ScannedRobotEvent e)<br /> {<br /> // We fire the gun with bullet power = 1<br /> Fire(1);<br /> }<br /> <br /> If this example, the robot fires a bullet as soon as it sees another robot.<br /> <br /> Let us put everything together. 
Our complete robot follows here, and you could copy & paste this code into your MyRobot.cs source file in order to try it out:<br /> <br /> // Access to standard .NET System<br /> using System;<br /> using System.Collections.Generic;<br /> using System.Linq;<br /> using System.Text;<br /> <br /> // Access to the public Robocode API<br /> using Robocode;<br /> <br /> // The namespace with your initials, in this case FNL is the initials<br /> namespace FNL<br /> {<br /> // The name of your robot is MyRobot, and the robot type is Robot<br /> class MyRobot : Robot /> // Robot event handler, when the robot sees another robot<br /> public override void OnScannedRobot(ScannedRobotEvent e)<br /> {<br /> // We fire the gun with bullet power = 1<br /> Fire(1);<br /> }<br /> }<br /> }<br /> <br /> [[Image:VS2008_MyRobot_source_code.png|Screenshot that shows the MyRobot.cs source file in the editor]]<br /> <br /> === The robot naming convention used in Robocode ===<br /> <br /> In Robocode we use a naming convention, where you must put your robot into a namespace,<br /> and this namespace should be somewhat unique and identify you, e.g., your initials, nickname or handle.<br /> <br /> The reasons for this naming convention are:<br /> * Robots can share the same name as long as they live in different namespaces.<br /> * If you create more robots, your initials/nickname should be used for all your robots, and other people will be able to see exactly which robots belong to you.<br /> * It will be easy for you to find your robot(s) in Robocode among lots of other robots, as you just need to look out for your initials/nickname in the 'Packages' in the New Battle dialog in Robocode.<br /> * Robocode needs a namespace to save data. 
The global namespace is not allowed.<br /> <br /> Robot names are typically written in this format:<br /> <br /> <''your initials''>'''.'''<''robot name''>'''_'''<''robot version''><br /> <br /> So if we take the robot from this tutorial and make it version 1.0, its full robot name will be: FNL.MyRobot_1.0<br /> <br /> Transferring this to source code, could be written as:<br /> <br /> namespace FNL { class MyRobot {} }<br /> <br /> === Setting assembly name and default namespace ===<br /> <br /> As mentioned in the beginning of this tutorial, your robot will be assembled into a .dll file.<br /> Hence, you need to give it a name that follows the naming conventions of Robocode.<br /> This will ensure that the name of your .dll will not colide with .dll names from other robots if you ever upload your robot to the web or some robot repository, or if you want to participate in battles with other robots, e.g., tournaments.<br /> <br /> Right-click the MyProject in the Solution Explorer and select Properties.<br /> Make sure the Application pane of the Properties window is the one that is active.<br /> <br /> First enter the full name of your robot in the 'Assembly name' text field, which is <code>FNL.MyRobot_1.0</code> for the robot in this tutorial.<br /> Second, enter the namespace/initials for the robot under the 'Default namespace' text filed, which is FNL in our case.<br /> <br /> [[Image:VS2008_Properties_Assembly_name.png|Screenshot that shows the Application pane in the Properties window for the project where the Assembly name and Default namespace must be set]]<br /> <br /> When you want to create a new version of your robot, you can just change the assembly name to reflect the new version in the Properties window before building your robot.<br /> <br /> Double-click your <code>MyRobot.cs</code> file to return to the source code editor.<br /> <br /> === Running the robot in Robocode ===<br /> <br /> After you have finished your robot, you must build it 
successfully in order to run it in Robocode.<br /> You can build your robot by right-clicking the solution 'Solution 'MySolution' (1 project)' in the Solution Explorer and select 'Build Solution' or simply press the F6 key.<br /> Make sure your robot builds without any errors, if not make the necessary corrections in the source file until it was build successfully.<br /> <br /> When your robot has been build, a .dll file has been created for your robot, which is located in the ..\bin\Release folder.<br /> In our example, that is <code>C:\MyRobot\MyRobot\MyRobot\bin\Release</code> where the output .dll is named 'FNL.MyRobot_1.0.dll',<br /> if the Properties for the robot was set up correctly.<br /> <br /> Now start Robocode and go to the Development Options by selecting Options -> Preferences -> Development Options.<br /> Here press the Add button and browse into your Release folder of your robot (e.g. C:\MySolution\MyProject\bin\Release)<br /> and select Open, and then press Finish on the Development Options.<br /> <br /> [[Image:Robocode_Development_Options.png|Screenshot from Development Options in Robocode, where the C:\MySolution\MyProject\bin\Release path has been added]]<br /> <br /> By adding the file path to where your output robot .dll is located, Robocode will be able to find and use your robot in the battle.<br /> This becomes handy if you keep both Visual Studio and Robocode open when developing robot, as you can just rebuild your robot and start a new battle to see how your robot behaves with your changes.<br /> <br /> When you have added the file path to where your robot .dll is located, you can start a new battle with your robot.<br /> Select Battle -> New from the Robocode menu or press ''Ctrl+N'' to open the New Battle dialog.<br /> Your robot should show up under 'Packages' with namespace you use for your robot, e.g., FNL.<br /> After this package is selected, the (class) name of your robot will be shown up under 'Robots' section.<br /> <br /> 
Select your robot, and press 'Add ->' to add your robot to the new battle. You have to select at least one opponent<br /> for your robot, which could be your own robot, or one of the sample robot that comes with Robocode, e.g., sample.Corners.<br /> Press 'Start Battle' to start the battle.<br /> <br /> === Documentation ===<br /> <br /> The documentation of the entire Robot API is available as a help file named 'RobotAPI.chm', which is located in the root of your robocode home folder.<br /> The Robot API is also available as [ HTML on the web].<br /> <br /> Currently, the .NET feature of Robocode is totally new, and hence there is no documentation available for how to develop robots specific to .NET.<br /> However, there is lots of documentation available here at the [ RoboWiki]. Currently, all documentation about developing robots assumes you develop your robot in Java. But this should not scare you off, as it should be easy to translate the Java code into .NET code.<br /> <br /> The Robot API for .NET is very similar to the one available [ for Java].<br /> The main difference is that method names starts with a small capital letter in Java, and the Robot API for .NET makes use of properties instead of get and set methods in Java. 
Hence it is possible to translate source code from Java to .NET and vice versa.<br /> <br /> === Debugging ===<br /> <br /> If you are interested in debugging your robot in order to get rid of a bug, or just to see how your robot behaves, you should have a look on the tutorial describing how to debug a .NET robot in Visual Studio [[Robocode/.NET/Debug a .NET robot in Visual Studio|here]].<br /> <br /> === Known issues and solutions ===<br /> <br /> If you experience problems with starting up Robocode, it could be caused by a non-existing development path,<br /> e.g., if you have added a path previously, and then deleted this later without first removing it using the Development Options in Robocode.<br /> <br /> To fix this problem, you should open the <code>robocode.properties</code> file located under the <code>\config</code> folder of in your Robocode home dir with a plain text editor, e.g., Notepad or similar.<br /> Remove the file paths after the equal sign (=) with the line starting with <code>robocode.options.development.path</code>.<br /> Now you should be able to start to start up Robocode again. 
But you'll need to add the file path(s) to your robot .dll(s) again.<br /> <br /> == See also ==<br /> <br /> === Related tutorials ===<br /> * [[Robocode/.NET/Debug a .NET robot in Visual Studio|Debug a .NET robot in Visual Studio]]<br /> <br /> === Robocode API ===<br /> * [ Robocode .NET API]<br /> * [ Robocode Java API]<br /> <br /> === Tutorials ===<br /> * [[Robocode/System Requirements|System Requirements for Robocode]]<br /> * [[Robocode/Download|How to download and install Robocode]]<br /> * [[Robocode/Robot Anatomy|The anatomy of a robot]]<br /> * [[Robocode/Game Physics|Robocode Game Physics]]<br /> * [[Robocode/FAQ|Frequently Asked Questions (FAQ)]]<br /> * [[Robocode/Getting Started|Getting started with Robocode]]<br /> * [[Robocode/My First Robot|My First Robot Tutorial]]<br /> * [[Robocode/Scoring|Scoring in Robocode]]<br /> * [[Robocode/Articles|Articles about Robocode]]<br /> * [[Robocode/Robot Console|Using the robot console]]<br /> * [[Robocode/Downloading_Robots|Downloading other robots]]<br /> * [[Robocode/Learning from Robots|Learning from other robots]]<br /> * [[Robocode/Console Usage|Starting Robocode from the command line]]<br /> <br /> === News and Releases ===<br /> * [ RSS Feeds for the Robocode project]<br /> * [ Robocode file releases]<br /> <br /> === Home pages ===<br /> * [ Classic homepage]<br /> * [ Robocode project at SourceForge]<br /> * [ Robocode Repository]<br /> * [[wikipedia:Robocode|Wikipedia entry for Robocode]]<br /> <br /> [[Category:Robocode Documentation]]<br /> [[Category:Tutorials]]<br /> [[Category:.NET]]<br /> [[Category:Visual Studio]]</div> Robingrindrod
I've tried using the Ping class on Windows Phone to check the status of the hosted server, but the System.Net namespace cannot be used on Windows Phone. My equivalent Android code:

    System.Net.NetworkInformation.Ping _pingobj = new System.Net.NetworkInformation.Ping();
    System.Net.NetworkInformation.PingReply _pingreply = _pingobj.Send(edittextIP.Text);

This code works fine for Android but not for Windows. Please suggest something that fulfills my requirement w.r.t. CP development.

If you go down to Platforms, you'll notice Windows Phone is absent. The API documentation specifies which classes are available for which platforms. There's a GUI Ping app project here that has a Ping class you can use in your own apps. I haven't used it on the phones, so it might require some adaptation, but it looks to be a pretty usable Ping implementation. Here's another implementation.
Making Shared Activities

Introduction

One of the distinctive features of Sugar is how many Activities support being used by more than one person at a time. More and more, computers are being used as a communications medium. The latest computer games don't just pit the player against the computer; they create a world where players compete against each other. Websites like Facebook are increasingly popular because they allow people to interact with each other and even play games. It is only natural that educational software should support these kinds of interactions.

I have a niece who is an enthusiastic member of the Club Penguin website created by Disney. When I gave her Sugar on a Stick Blueberry as an extra Christmas gift I demonstrated the Neighborhood view and told her that Sugar would make her whole computer like Club Penguin. She thought that was a pretty cool idea. I felt pretty cool saying it.

Running Sugar As More Than One User

Before you write any piece of software you need to give some thought to how you will test it. In the case of a shared Activity you might think you'd need more than one computer available to do testing, but those who designed Sugar did give some thought to testing shared Activities and gave us ways to test them using only one computer. These methods have been evolving, so there are slight variations in how you test depending on the version of Sugar you're using. The first thing you have to know is how to run multiple copies of Sugar as different users.

Fedora 10 (Sugar .82)

In Sugar .82 there is a handy way to run multiple copies of sugar-emulator and have each copy be a different user, without having to be logged into your Linux box as more than one user. On the command line for each additional user you want, add a SUGAR_PROFILE environment variable like this:

    SUGAR_PROFILE=austen sugar-emulator

When you do this, sugar-emulator will create a directory named austen under ~/.sugar to store profile information, etc.
You will be prompted to enter a name and select colors for your icon. Every time you launch using the SUGAR_PROFILE of austen you will be this user. If you launch with no SUGAR_PROFILE you will be the regular user you set up before.

Fedora 11 (Sugar .84)

As handy as using SUGAR_PROFILE is, the developers of Sugar decided it had limitations, so with version .84 and later it no longer works. With .84 and later you need to create a second Linux user and run your sugar-emulators as two separate Linux users. In the GNOME environment there is an option Users and Groups in the Administration submenu of the System menu which will enable you to set up a second user. Before it comes up it will prompt you for the administrative password you created when you first set up Linux.

Creating the second user is simple enough, but how do you go about being logged in as two different users at the same time? It's actually pretty simple. You need to open a terminal window and type this:

    ssh -XY jausten@localhost

where "jausten" is the userid of the second user. You will be asked to verify that the computer at "localhost" should be trusted. Since "localhost" just means that you are using the network to connect to another account on the same computer, it is safe to answer "yes". Then you will be prompted to enter her password, and from then on everything you do in that terminal window will be done as her. You can launch sugar-emulator from that terminal, and the first time you do it will prompt you for a name and icon colors.

sugar-jhbuild

With sugar-jhbuild (the latest version of Sugar) things are a bit different again. You will use the method of logging in as multiple Linux users, as you did in .84, but you won't get prompted for a name. Instead, the name associated with the userid you're running under will be the name you use in Sugar. You won't be able to change it, but you will be able to choose your icon colors as before.
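The SUGAR_PROFILE mechanism described under Fedora 10 above boils down to selecting a per-profile directory from an environment variable. Here is a minimal plain-Python sketch of that lookup; this is an illustration of the idea, not Sugar's actual code, and the 'default' fallback name is an assumption:

```python
import os

def profile_dir(base='~/.sugar'):
    # Sugar .82 keeps each profile's data in its own directory under
    # ~/.sugar; 'default' is assumed here for the unset case.
    profile = os.environ.get('SUGAR_PROFILE', 'default')
    return os.path.expanduser(os.path.join(base, profile))

os.environ['SUGAR_PROFILE'] = 'austen'
print(profile_dir())  # ends with .sugar/austen
```

Launching with a different SUGAR_PROFILE value therefore gives you a completely separate identity, Journal, and settings, which is exactly what makes single-machine collaboration testing possible.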
You will need a separate install of sugar-jhbuild for each user. These additional installs will go quickly because you installed all the dependencies the first time.

Connecting To Other Users

Sugar uses software called Telepathy that implements an instant messaging protocol called XMPP (eXtensible Messaging and Presence Protocol). This protocol used to be called Jabber. In essence, Telepathy lets you put an instant messaging client in your Activity. You can use this to send messages from user to user, execute methods remotely, and do file transfers. There are actually two ways that Sugar users can join together in a network:

Salut

If two computer users are connected to the same segment of a network they should be able to find each other and share Activities. If you have a home network where everyone uses the same router you can share with others on that network. This is sometimes called Link-Local XMPP. The Telepathy software that makes this possible is called Salut. The XO laptop has special hardware and software to support Mesh Networking, where XO laptops that are near each other can automatically start networking with each other without needing a router. As far as Sugar is concerned, it doesn't matter what kind of network you have. Wired or wireless, Mesh or not, they all work.

Jabber Server

The other way to connect to other users is by going through a Jabber server. The advantage of using a Jabber server is that you can contact and share Activities with people outside your own network. These people might even be on the other side of the world. Jabber allows Activities in different networks to connect even when both networks are protected by firewalls. The part of Telepathy that works with a Jabber server is called Gabble.

Generally you should use Salut for testing if at all possible. This simplifies testing and doesn't use up resources on a Jabber server. It does not matter if your Activity connects to others using Gabble or Salut.
In fact, the Activity has no idea which it is using. Those details are hidden from the Activity by Telepathy. Any Activity that works with Salut will work with Gabble, and vice versa.

To set up sugar-emulator to use Salut, go to the Sugar control panel: in Sugar .82 this menu option is Control Panel; in later versions it is My Settings. Click on the Network icon. The Server field in this screen should be empty to use Salut. You can use the backspace key to remove any entry there. You will need to follow these steps for every Sugar user that will take part in your test. If for some reason you wish to test your Activity using a Jabber server, the OLPC Wiki maintains a list of publicly available servers.

Once you have either Salut or a Jabber server set up in both instances of Sugar that you are running, you should look at the Neighborhood view of both to see if they can detect each other, and perhaps try out the Chat Activity between the two. If you have that working you're ready to try programming a shared Activity.

The MiniChat Activity

Just as we took the Read Etexts Activity and stripped it down to the basics, we're going to do the same to the Chat Activity to create a new Activity called MiniChat. The real Chat Activity has a number of features that we don't need to demonstrate shared Activity messaging:

- It has the ability to load its source code into Pippy for viewing. This was a feature that all Activities on the XO were supposed to have, but Chat is one of the few that implemented it. Personally, if I want to see an Activity's code I prefer to go to git.sugarlabs.org, where I can see old versions of the code as well as the latest.
- Chat can connect one to one with a conventional XMPP client. This may be useful for Chat but would not be needed or desirable for most shared Activities.
- If you include a URL in a Chat message the user interface enables you to click on the URL to make a Journal entry for that URL.
You can then use the Journal to open it with the Browse Activity. (This is necessary because Activities cannot launch each other.) Pretty cool, but not needed to demonstrate how to make a shared Activity.
- The chat session is stored in the Journal. When you resume a Chat entry from the Journal it restores the messages from your previous chat session into the user interface. We already know how to save things to the Journal and restore things from the Journal, so MiniChat won't do this.

The resulting code is about half as long as the original. I made a few other changes too:

- The text entry field is above the chat messages, instead of below. This makes it easier to do partial screenshots of the Activity in action.
- I removed the new style toolbar and added an old style toolbar, so I could test it in Fedora 10 and 11, which don't support the new toolbars.
- I took the class TextChannelWrapper and put it in its own file. I did this because the class looked like it might be useful for other projects.

The code and all supporting files for MiniChat are in the MiniChat directory of the Git repository. You'll need to run ./setup.py dev on the project to make it ready to test.
The activity.info looks like this:

    [Activity]
    name = Mini Chat
    service_name = net.flossmanuals.MiniChat
    icon = chat
    exec = sugar-activity minichat.MiniChat
    show_launcher = yes
    activity_version = 1
    license = GPLv2+

Here is the code for textchannel.py:

    import logging

    from telepathy.client import Connection, Channel
    from telepathy.interfaces import (
        CHANNEL_INTERFACE, CHANNEL_INTERFACE_GROUP,
        CHANNEL_TYPE_TEXT, CONN_INTERFACE_ALIASING)
    from telepathy.constants import (
        CHANNEL_GROUP_FLAG_CHANNEL_SPECIFIC_HANDLES,
        CHANNEL_TEXT_MESSAGE_TYPE_NORMAL)

    class TextChannelWrapper(object):
        """Wrap a telepathy Text Channel to make usage simpler."""
        def __init__(self, text_chan, conn):
            """Connect to the text channel"""
            self._activity_cb = None
            self._activity_close_cb = None
            self._text_chan = text_chan
            self._conn = conn
            self._logger = logging.getLogger(
                'minichat-activity.TextChannelWrapper')
            self._signal_matches = []
            m = self._text_chan[CHANNEL_INTERFACE].connect_to_signal(
                'Closed', self._closed_cb)
            self._signal_matches.append(m)

        def send(self, text):
            """Send text over the Telepathy text channel."""
            # XXX Implement CHANNEL_TEXT_MESSAGE_TYPE_ACTION
            if self._text_chan is not None:
                self._text_chan[CHANNEL_TYPE_TEXT].Send(
                    CHANNEL_TEXT_MESSAGE_TYPE_NORMAL, text)

        def close(self):
            """Close the text channel."""
            self._logger.debug('Closing text channel')
            try:
                self._text_chan[CHANNEL_INTERFACE].Close()
            except:
                self._logger.debug('Channel disappeared!')
                self._closed_cb()

        def _closed_cb(self):
            """Clean up text channel."""
            self._logger.debug('Text channel closed.')
            for match in self._signal_matches:
                match.remove()
            self._signal_matches = []
            self._text_chan = None
            if self._activity_close_cb is not None:
                self._activity_close_cb()

        def set_received_callback(self, callback):
            """Connect the function callback to the signal.

            callback -- callback function taking buddy and text args
            """
            if self._text_chan is None:
                return
            self._activity_cb = callback
            m = self._text_chan[CHANNEL_TYPE_TEXT].connect_to_signal(
                'Received', self._received_cb)
            self._signal_matches.append(m)

        def handle_pending_messages(self):
            """Get pending messages and show them as received."""
            for id, timestamp, sender, type, flags, text in \
                self._text_chan[
                    CHANNEL_TYPE_TEXT].ListPendingMessages(False):
                self._received_cb(id, timestamp, sender, type,
                    flags, text)

        def _received_cb(self, id, timestamp, sender, type,
                flags, text):
            """Handle received text from the text channel.

            Converts sender to a Buddy.
            Calls self._activity_cb which is a callback to the
            activity.
            """
            if self._activity_cb:
                buddy = self._get_buddy(sender)
                self._activity_cb(buddy, text)
                self._text_chan[
                    CHANNEL_TYPE_TEXT].AcknowledgePendingMessages([id])
            else:
                self._logger.debug(
                    'Throwing received message on the floor'
                    ' since there is no callback connected. See '
                    'set_received_callback')

        def set_closed_callback(self, callback):
            """Connect a callback for when the text channel is
            closed.

            callback -- callback function taking no args
            """
            self._activity_close_cb = callback

        def _get_buddy(self, cs_handle):
            """Get a Buddy from a (possibly channel-specific)
            handle."""
            # XXX This will be made redundant once Presence
            # Service provides buddy resolution
            from sugar.presence import presenceservice
            # Get the Presence Service
            pservice = presenceservice.get_instance()
            # Get the Telepathy Connection
            tp_name, tp_path = \
                pservice.get_preferred_connection()
            conn = Connection(tp_name, tp_path)
            group = self._text_chan[CHANNEL_INTERFACE_GROUP]
            my_csh = group.GetSelfHandle()
            if my_csh == cs_handle:
                handle = conn.GetSelfHandle()
            elif group.GetGroupFlags() & \
                CHANNEL_GROUP_FLAG_CHANNEL_SPECIFIC_HANDLES:
                handle = group.GetHandleOwners([cs_handle])[0]
            else:
                handle = cs_handle
            # XXX: deal with failure to get the handle owner
            assert handle != 0
            return pservice.get_buddy_by_telepathy_handle(
                tp_name, tp_path, handle)

Here is the code for minichat.py:

    from gettext import gettext as _

    import hippo
    import gtk
    import pango
    import logging

    from sugar.activity.activity import (Activity,
        ActivityToolbox, SCOPE_PRIVATE)
    from sugar.graphics.alert import NotifyAlert
    from sugar.graphics.style import (Color, COLOR_BLACK,
        COLOR_WHITE, COLOR_BUTTON_GREY, FONT_BOLD, FONT_NORMAL)
    from sugar.graphics.roundbox import CanvasRoundBox
    from sugar.graphics.xocolor import XoColor
    from sugar.graphics.palette import Palette, CanvasInvoker
    from textchannel import TextChannelWrapper

    logger = logging.getLogger('minichat-activity')

    class MiniChat(Activity):
        def __init__(self, handle):
            Activity.__init__(self, handle)

            root = self.make_root()
            self.set_canvas(root)
            root.show_all()
            self.entry.grab_focus()

            toolbox = ActivityToolbox(self)
            activity_toolbar = toolbox.get_activity_toolbar()
            activity_toolbar.keep.props.visible = False
            self.set_toolbox(toolbox)
            toolbox.show()

            self.owner = self._pservice.get_owner()
            # Auto vs manual scrolling:
            self._scroll_auto = True
            self._scroll_value = 0.0
            # Track last message, to combine several messages:
            self._last_msg = None
            self._last_msg_sender = None
            self.text_channel = None

        def _joined_cb(self, activity):
            """Joined a shared activity."""
            if not self._shared_activity:
                return
            logger.debug('Joined a shared chat')
            for buddy in \
                self._shared_activity.get_joined_buddies():
                self._buddy_already_exists(buddy)
            self._setup()

        def _received_cb(self, buddy, text):
            """Show message that was received."""
            if buddy:
                nick = buddy.props.nick
            else:
                nick = '???'
            logger.debug(
                'Received message from %s: %s', nick, text)
            self.add_text(buddy, text)

        def _alert(self, title, text=None):
            alert = NotifyAlert(timeout=5)
            alert.props.title = title
            alert.props.msg = text
            self.add_alert(alert)
            alert.connect('response', self._alert_cancel_cb)
            alert.show()

        def _alert_cancel_cb(self, alert, response_id):
            self.remove_alert(alert)

        def _buddy_joined_cb(self, activity, buddy):
            """Show a buddy who joined"""
            if buddy == self.owner:
                return
            if buddy:
                nick = buddy.props.nick
            else:
                nick = '???'
            self.add_text(buddy,
                buddy.props.nick + ' ' + _('joined the chat'),
                status_message=True)

        def _buddy_left_cb(self, activity, buddy):
            """Show a buddy who left"""
            if buddy == self.owner:
                return
            if buddy:
                nick = buddy.props.nick
            else:
                nick = '???'
            self.add_text(buddy,
                buddy.props.nick + ' ' + _('left the chat'),
                status_message=True)

        def _buddy_already_exists(self, buddy):
            """Show a buddy already in the chat."""
            if buddy == self.owner:
                return
            if buddy:
                nick = buddy.props.nick
            else:
                nick = '???'
            self.add_text(buddy,
                buddy.props.nick + ' ' + _('is here'),
                status_message=True)

        def make_root(self):
            conversation = hippo.CanvasBox(
                spacing=0,
                background_color=COLOR_WHITE.get_int())
            self.conversation = conversation

            entry = gtk.Entry()
            entry.modify_bg(gtk.STATE_INSENSITIVE,
                COLOR_WHITE.get_gdk_color())
            entry.modify_base(gtk.STATE_INSENSITIVE,
                COLOR_WHITE.get_gdk_color())
            entry.set_sensitive(False)
            entry.connect('activate', self.entry_activate_cb)
            self.entry = entry

            hbox = gtk.HBox()
            hbox.add(entry)

            sw = hippo.CanvasScrollbars()
            sw.set_policy(hippo.ORIENTATION_HORIZONTAL,
                hippo.SCROLLBAR_NEVER)
            sw.set_root(conversation)
            self.scrolled_window = sw

            vadj = self.scrolled_window.props.widget.\
                get_vadjustment()
            vadj.connect('changed', self.rescroll)
            vadj.connect('value-changed',
                self.scroll_value_changed_cb)

            canvas = hippo.Canvas()
            canvas.set_root(sw)

            box = gtk.VBox(homogeneous=False)
            box.pack_start(hbox, expand=False)
            box.pack_start(canvas)
            return box

        def rescroll(self, adj, scroll=None):
            """Scroll the chat window to the bottom"""
            if self._scroll_auto:
                adj.set_value(adj.upper - adj.page_size)
                self._scroll_value = adj.get_value()

        def scroll_value_changed_cb(self, adj, scroll=None):
            """Turn auto scrolling on or off.

            If the user scrolled up, turn it off.
            If the user scrolled to the bottom, turn it back on.
            """
            if adj.get_value() < self._scroll_value:
                self._scroll_auto = False
            elif adj.get_value() == adj.upper - adj.page_size:
                self._scroll_auto = True

        def add_text(self, buddy, text, status_message=False):
            """Display text on screen, with name and colors.

            buddy -- buddy object
            text -- string, what the buddy said
            status_message -- boolean
                False: show what buddy said
                True: show what buddy did

            The hippo layout is a rounded box (rb) holding a
            name_vbox with the buddy's nick beside a msg_vbox,
            which stacks one msg_hbox per line of text.
            """
            if buddy:
                nick = buddy.props.nick
                color = buddy.props.color
                try:
                    color_stroke_html, color_fill_html = \
                        color.split(',')
                except ValueError:
                    color_stroke_html, color_fill_html = (
                        '#000000', '#888888')
                # Select text color based on fill color:
                color_fill_rgba = Color(
                    color_fill_html).get_rgba()
                color_fill_gray = (color_fill_rgba[0] +
                    color_fill_rgba[1] + color_fill_rgba[2])/3
                color_stroke = Color(
                    color_stroke_html).get_int()
                color_fill = Color(color_fill_html).get_int()
                if color_fill_gray < 0.5:
                    text_color = COLOR_WHITE.get_int()
                else:
                    text_color = COLOR_BLACK.get_int()
            else:
                nick = '???'  # XXX: should be '' but leave for debugging
                color_stroke = COLOR_BLACK.get_int()
                color_fill = COLOR_WHITE.get_int()
                text_color = COLOR_BLACK.get_int()
                color = '#000000,#FFFFFF'

            # Check for Right-To-Left languages:
            if pango.find_base_dir(nick, -1) == \
                pango.DIRECTION_RTL:
                lang_rtl = True
            else:
                lang_rtl = False

            # Check if new message box or add text to previous:
            new_msg = True
            if self._last_msg_sender:
                if not status_message:
                    if buddy == self._last_msg_sender:
                        # Add text to previous message
                        new_msg = False

            if not new_msg:
                rb = self._last_msg
                msg_vbox = rb.get_children()[1]
                msg_hbox = hippo.CanvasBox(
                    orientation=hippo.ORIENTATION_HORIZONTAL)
                msg_vbox.append(msg_hbox)
            else:
                rb = CanvasRoundBox(
                    background_color=color_fill,
                    border_color=color_stroke,
                    padding=4)
                rb.props.border_color = color_stroke
                self._last_msg = rb
                self._last_msg_sender = buddy
                if not status_message:
                    name = hippo.CanvasText(text=nick + ': ',
                        color=text_color,
                        font_desc=FONT_BOLD.get_pango_desc())
                    name_vbox = hippo.CanvasBox(
                        orientation=hippo.ORIENTATION_VERTICAL)
                    name_vbox.append(name)
                    rb.append(name_vbox)
                msg_vbox = hippo.CanvasBox(
                    orientation=hippo.ORIENTATION_VERTICAL)
                rb.append(msg_vbox)
                msg_hbox = hippo.CanvasBox(
                    orientation=hippo.ORIENTATION_HORIZONTAL)
                msg_vbox.append(msg_hbox)

            if status_message:
                self._last_msg_sender = None

            if text:
                message = hippo.CanvasText(
                    text=text,
                    size_mode=hippo.CANVAS_SIZE_WRAP_WORD,
                    color=text_color,
                    font_desc=FONT_NORMAL.get_pango_desc(),
                    xalign=hippo.ALIGNMENT_START)
                msg_hbox.append(message)

            # Order of boxes for RTL languages:
            if lang_rtl:
                msg_hbox.reverse()
                if new_msg:
                    rb.reverse()

            if new_msg:
                box = hippo.CanvasBox(padding=2)
                box.append(rb)
                self.conversation.append(box)

        def entry_activate_cb(self, entry):
            text = entry.props.text
            logger.debug('Entry: %s' % text)
            if text:
                self.add_text(self.owner, text)
                entry.props.text = ''
                if self.text_channel:
                    self.text_channel.send(text)
                else:
                    logger.debug(
                        'Tried to send message but text '
                        'channel not connected.')

And this is what the Activity looks like in action:

Try launching more than one copy of sugar-emulator, with this Activity installed in each. If you're using Fedora 10 and SUGAR_PROFILE, the Activity does not need to be installed more than once, but if you're using a later version of Sugar that requires separate Linux userids for each instance, you'll need to maintain separate copies of the code for each user. In your own projects, using a central Git repository at git.sugarlabs.org will make this easy. You just do a git push to copy your changes to the central repository and a git pull to copy them to your second userid. The second userid can use the public URL. There's no need to set up SSH for any user other than the primary one.
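The text-color decision in add_text() above, averaging the fill color's red, green, and blue components and picking black or white text for contrast, can be checked in plain Python. This is a standalone illustration of the same arithmetic, not code from the Activity; the helper name is made up:

```python
def text_color_for_fill(fill_html):
    """Pick black or white text for a '#RRGGBB' fill, mirroring
    the gray-level test add_text() performs with sugar's Color."""
    r = int(fill_html[1:3], 16) / 255.0
    g = int(fill_html[3:5], 16) / 255.0
    b = int(fill_html[5:7], 16) / 255.0
    gray = (r + g + b) / 3
    # Dark fills get white text; light fills get black text.
    return '#FFFFFF' if gray < 0.5 else '#000000'

print(text_color_for_fill('#888888'))  # light gray fill -> black text
print(text_color_for_fill('#202080'))  # dark blue fill -> white text
```

This is why every buddy's messages stay readable regardless of which stroke/fill color pair the buddy chose for their XO icon.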
You may have also read that if two users of a shared Activity have different versions of that Activity, the one who has the newer version will automatically update the older. Neither statement is true now, nor is likely to be true in the near future. These ideas are discussed on the mailing lists from time to time, but there are practical difficulties to overcome before anything like that could work, mostly having to do with security. For now, both users of a shared Activity must have the Activity installed. On the other hand, depending on how the Activity is written, two different versions of an Activity may be able to communicate with one another. If the messages they exchange are in the same format there should be no problem.

Once you have both instances of sugar-emulator going, you can launch MiniChat on one and invite the second user to join the Chat session. You can do both with the Neighborhood panes of each instance. Making the invitation looks like this:

Accepting it looks like this:

After you've played with MiniChat for a while, come back and we'll discuss the secrets of using Telepathy to create a shared Activity.

Know who Your Buddies Are

XMPP, as we said before, is the eXtensible Messaging and Presence Protocol. Presence is just what it sounds like; it handles letting you know who is available to share your Activity, as well as what other Activities are available to share. There are two ways to share your Activity. The first one is when you change the Share with: pulldown on the standard toolbar so it reads My Neighborhood instead of Private. That means anyone on the network can share your Activity. The other way to share is to go to the Neighborhood view and invite someone specific to share. The person getting the invitation has no idea whether the invitation was specifically for him or broadcast to the Neighborhood. The technical term for persons sharing your Activity is Buddies.
The place where Buddies meet and collaborate is called a MUC, or Multi User Chatroom.

The code used by our Activity for inviting Buddies and joining the Activity as a Buddy is in the __init__() method and the callbacks it connects:

    def _joined_cb(self, activity):
        """Joined a shared activity."""
        if not self._shared_activity:
            return
        logger.debug('Joined a shared chat')
        for buddy in \
            self._shared_activity.get_joined_buddies():
            self._buddy_already_exists(buddy)
        self._setup()

There are two ways to launch an Activity: as the first user of an Activity, or by joining an existing Activity. The check of the _shared_activity object in __init__() determines whether we are joining or are the first user of the Activity. If we are joining, we ask for the _joined_cb() method to be run when the 'joined' event occurs. This method gets a buddy list from the _shared_activity object and creates messages in the user interface informing the user that these buddies are already in the chat room. Then it runs the _setup() method.

If we are not joining an existing Activity, then we check to see if we are currently sharing the Activity with anyone. If we aren't, we pop up a message telling the user to invite someone to chat. We also request that when the 'shared' event happens the _shared_cb() method should run. This method just runs the _setup() method.

The _setup() method creates a TextChannelWrapper object using the code in textchannel.py. It also tells the _shared_activity object that it wants some callback methods run when new buddies join the Activity and when existing buddies leave the Activity.

Everything you need to know about your buddies can be found in the code above, except how to send messages to them. For that we use the Text Channel. There is no need to learn about the Text Channel in great detail, because the TextChannelWrapper class does everything you'll ever need to do with the Text Channel and hides the details from you.
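Because the wrapper hides Telepathy completely, what the Activity actually sees is just a register-a-callback-then-send() pattern. That pattern can be sketched with a plain-Python stand-in; StubTextChannel below is hypothetical test scaffolding, not part of Sugar or Telepathy:

```python
class StubTextChannel(object):
    """Hypothetical in-memory stand-in for TextChannelWrapper,
    showing the register-callback-then-send pattern."""
    def __init__(self):
        self._received_cb = None

    def set_received_callback(self, callback):
        # callback takes (buddy, text), like TextChannelWrapper's.
        self._received_cb = callback

    def send(self, text):
        # Echo the message straight back, as if a buddy sent it.
        if self._received_cb is not None:
            self._received_cb('some-buddy', text)

received = []
chan = StubTextChannel()
chan.set_received_callback(lambda buddy, text: received.append(text))
chan.send('hello')
print(received)  # ['hello']
```

A stub like this is also a handy way to unit-test an Activity's message handling without a live Telepathy connection.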
    def entry_activate_cb(self, entry):
        text = entry.props.text
        logger.debug('Entry: %s' % text)
        if text:
            self.add_text(self.owner, text)
            entry.props.text = ''
            if self.text_channel:
                self.text_channel.send(text)
            else:
                logger.debug(
                    'Tried to send message but text '
                    'channel not connected.')

The add_text() method is of interest. It takes the owner of the message, figures out what colors belong to that owner, and displays the message in those colors. In the case of messages sent by the Activity, it gets the owner like this in the __init__() method:

    self.owner = self._pservice.get_owner()

In the case of received messages it gets the buddy the message came from:

    def _received_cb(self, buddy, text):
        """Show message that was received."""
        if buddy:
            nick = buddy.props.nick
        else:
            nick = '???'
        logger.debug('Received message from %s: %s', nick, text)
        self.add_text(buddy, text)

But what if we want to do more than just send text messages back and forth? What do we use for that?

It's A Series Of Tubes!

No, not the Internet. Telepathy has a concept called Tubes which describes the way instances of an Activity can communicate together. What Telepathy does is take the Text Channel and build Tubes on top of it. There are two kinds of Tubes:

- D-Bus Tubes
- Stream Tubes

A D-Bus Tube is used to enable one instance of an Activity to call methods in the Buddy instances of the Activity. A Stream Tube is used for sending data over Sockets, for instance for copying a file from one instance of an Activity to another. A Socket is a way of communicating over a network using Internet Protocols. For instance, the HTTP protocol used by the World Wide Web is implemented with Sockets. In the next example we'll use HTTP to transfer books from one instance of Read Etexts III to another.

Read Etexts III, Now with Book Sharing!
The Git repository with the code samples for this book has a file named ReadEtextsActivity3.py in the Making_Shared_Activities directory which looks like this: import sys import os import logging import tempfile import time import zipfile import pygtk import gtk import pango import dbus import gobject import telepathy from sugar.activity import activity from sugar.graphics import style from sugar import network from sugar.datastore import datastore from sugar.graphics.alert import NotifyAlert from toolbar import ReadToolbar, ViewToolbar from gettext import gettext as _ page=0 PAGE_SIZE = 45 TOOLBAR_READ = 2 logger = logging.getLogger('read-etexts2-activity') class ReadHTTPRequestHandler( network.ChunkedGlibHTTPRequestHandler): """HTTP Request Handler for transferring document while collaborating. RequestHandler class that integrates with Glib mainloop. It writes the specified file to the client in chunks, returning control to the mainloop between chunks. """ def translate_path(self, path): """Return the filepath to the shared document.""" return self.server.filepath class ReadHTTPServer(network.GlibTCPServer): """HTTP Server for transferring document while collaborating.""" def __init__(self, server_address, filepath): """Set up the GlibTCPServer with the ReadHTTPRequestHandler. filepath -- path to shared document to be served. 
""" self.filepath = filepath network.GlibTCPServer.__init__(self, server_address, ReadHTTPRequestHandler) class ReadURLDownloader(network.GlibURLDownloader): """URLDownloader that provides content-length and content-type.""" def get_content_length(self): """Return the content-length of the download.""" if self._info is not None: return int(self._info.headers.get( 'Content-Length')) def get_content_type(self): """Return the content-type of the download.""" if self._info is not None: return self._info.headers.get('Content-type') return None READ_STREAM_SERVICE = 'read-etexts-activity-http' class ReadEtextsActivity(activity.Activity): def __init__(self, handle): "The entry point to the Activity" global page activity.Activity.__init__(self, handle) self.fileserver = None self.object_id = handle.object_id toolbox = activity.ActivityToolbox(self) activity_toolbar = toolbox.get_activity_toolbar() activity_toolbar.keep.props.visible = False self.edit_toolbar = activity.EditToolbar() self.edit_toolbar.undo.props.visible = False self.edit_toolbar.redo.props.visible = False self.edit_toolbar.separator.props.visible = False self.edit_toolbar.copy.set_sensitive(False) self.edit_toolbar.copy.connect('clicked', self.edit_toolbar_copy_cb) self.edit_toolbar.paste.props.visible = False toolbox.add_toolbar(_('Edit'), self.edit_toolbar) self.edit_toolbar.show() self.read_toolbar = ReadToolbar() toolbox.add_toolbar(_('Read'), self.read_toolbar) self.read_toolbar.back.connect('clicked', self.go_back_cb) self.read_toolbar.forward.connect('clicked', self.go_forward_cb) self.read_toolbar.num_page_entry.connect('activate', self.num_page_entry_activate_cb) self.read_toolbar.show() self.view_toolbar = ViewToolbar() toolbox.add_toolbar(_('View'), self.view_toolbar) self.view_toolbar.connect('go-fullscreen', self.view_toolbar_go_fullscreen_cb) self.view_toolbar.zoom_in.connect('clicked', self.zoom_in_cb) self.view_toolbar.zoom_out.connect('clicked', self.zoom_out_cb) self.view_toolbar.show() 
self.set_toolbox(toolbox) toolbox.show() self.scrolled_window = gtk.ScrolledWindow() self.scrolled_window.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC) self.scrolled_window.props.shadow_type = \ gtk.SHADOW_NONE self.textview = gtk.TextView() self.textview.set_editable(False) self.textview.set_cursor_visible(False) self.textview.set_left_margin(50) self.textview.connect("key_press_event", self.keypress_cb) self.progressbar = gtk.ProgressBar() self.progressbar.set_orientation( gtk.PROGRESS_LEFT_TO_RIGHT) self.progressbar.set_fraction(0.0) self.scrolled_window.add(self.textview) self.textview.show() self.scrolled_window.show() vbox = gtk.VBox() vbox.pack_start(self.progressbar, False, False, 10) vbox.pack_start(self.scrolled_window) self.set_canvas(vbox) vbox.show() page = 0 self.clipboard = gtk.Clipboard( display=gtk.gdk.display_get_default(), selection="CLIPBOARD") self.textview.grab_focus() self.font_desc = pango.FontDescription("sans %d" % style.zoom(10)) self.textview.modify_font(self.font_desc) buffer = self.textview.get_buffer() self.markset_id = buffer.connect("mark-set", self.mark_set_cb) self.toolbox.set_current_toolbar(TOOLBAR_READ) self.unused_download_tubes = set() self.want_document = True self.download_content_length = 0 self.download_content_type = None # Status of temp file used for write_file: self.tempfile = None self.close_requested = False self.connect("shared", self.shared_cb) self.is_received_document = False if self._shared_activity and \ handle.object_id == None: # We're joining, and we don't already have # the document. 
if self.get_shared():
    # Already joined for some reason, just get the
    # document
    self.joined_cb(self)
else:
    # Wait for a successful join before trying to get
    # the document
    self.connect("joined", self.joined_cb)

def keypress_cb(self, widget, event):
    "Respond when the user presses one of the arrow keys"
    keyname = gtk.gdk.keyval_name(event.keyval)
    ...

def num_page_entry_activate_cb(self, entry):
    global page
    if entry.props.text:
        new_page = int(entry.props.text) - 1
    else:
        new_page = 0
    if new_page >= self.read_toolbar.total_pages:
        new_page = self.read_toolbar.total_pages - 1
    elif new_page < 0:
        new_page = 0
    self.read_toolbar.current_page = new_page
    self.read_toolbar.set_current_page(new_page)
    self.show_page(new_page)
    entry.props.text = str(new_page + 1)
    self.read_toolbar.update_nav_buttons()
    page = new_page

def go_back_cb(self, button):
    self.page_previous()

def go_forward_cb(self, button):
    self.page_next()

def page_previous(self):
    global page
    page = page - 1
    if page < 0:
        page = 0
    self.read_toolbar.set_current_page(page)
    self.show_page(page)
    v_adjustment = self.scrolled_window.get_vadjustment()
    v_adjustment.value = v_adjustment.upper - \
        v_adjustment.page_size

def page_next(self):
    global page
    page = page + 1
    if page >= len(self.page_index):
        page = 0
    self.read_toolbar.set_current_page(page)
    self.show_page(page)
    v_adjustment = self.scrolled_window.get_vadjustment()
    v_adjustment.value = v_adjustment.lower

def zoom_in_cb(self, button):
    self.font_increase()

def zoom_out_cb(self, button):
    self.font_decrease()

def mark_set_cb(self, textbuffer, iter, textmark):
    if textbuffer.get_has_selection():
        begin, end = textbuffer.get_selection_bounds()
        self.edit_toolbar.copy.set_sensitive(True)
    else:
        self.edit_toolbar.copy.set_sensitive(False)

def edit_toolbar_copy_cb(self, button):
    textbuffer = self.textview.get_buffer()
    begin, end = textbuffer.get_selection_bounds()
    copy_text = textbuffer.get_text(begin, end)
    self.clipboard.set_text(copy_text)

def view_toolbar_go_fullscreen_cb(self, view_toolbar):
    self.fullscreen()
def get_saved_page_number(self):
    global page
    title = self.metadata.get('title', '')
    if title == '' or not title[len(title) - 1].isdigit():
        page = 0
    else:
        i = len(title) - 1
        newPage = ''
        while (title[i].isdigit() and i > 0):
            newPage = title[i] + newPage
            i = i - 1
        if title[i] == 'P':
            page = int(newPage) - 1
        else:
            # not a page number; maybe a
            # volume number.
            page = 0

def save_page_number(self):
    global page
    title = self.metadata.get('title', '')
    if title == '' or not title[len(title) - 1].isdigit():
        title = title + ' P' + str(page + 1)
    else:
        i = len(title) - 1
        while (title[i].isdigit() and i > 0):
            i = i - 1
        if title[i] == 'P':
            title = title[0:i] + 'P' + str(page + 1)
        else:
            title = title + ' P' + str(page + 1)
    self.metadata['title'] = title

def read_file(self, filename):
    "Read the Etext file"
    global PAGE_SIZE, page
    tempfile = os.path.join(self.get_activity_root(),
        'instance', 'tmp%i' % time.time())
    os.link(filename, tempfile)
    self.tempfile = tempfile
    if filename.endswith(".zip"):
        ...
        os.remove(currentFileName)
    ...
    self.get_saved_page_number()
    self.show_page(page)
    self.read_toolbar.set_total_pages(pagecount + 1)
    self.read_toolbar.set_current_page(page)
    # We've got the document, so if we're a shared
    # activity, offer it
    if self.get_shared():
        self.watch_for_tubes()
        self.share_document()

def make_new_filename(self, filename):
    partition_tuple = filename.rpartition('/')
    return partition_tuple[2]

def write_file(self, filename):
    "Save meta data for the file."
    if self.is_received_document:
        # This document was given to us by someone, so
        # we have to save it to the Journal.
        self.etext_file.seek(0)
        filebytes = self.etext_file.read()
        f = open(filename, 'wb')
        try:
            f.write(filebytes)
        finally:
            f.close()
    elif self.tempfile:
        if self.close_requested:
            os.link(self.tempfile, filename)
            logger.debug("Removing temp file %s because we "
                "will close", self.tempfile)
            os.unlink(self.tempfile)
            self.tempfile = None
    else:
        # skip saving empty file
        raise NotImplementedError
    self.metadata['activity'] = self.get_bundle_id()
    self.save_page_number()

def can_close(self):
    self.close_requested = True
    return True

def set_downloaded_bytes(self, bytes, total):
    fraction = float(bytes) / float(total)
    self.progressbar.set_fraction(fraction)
    logger.debug("Downloaded percent %f", fraction)

def clear_downloaded_bytes(self):
    self.progressbar.set_fraction(0.0)
    logger.debug("Cleared download bytes")

def alert(self, title, text=None):
    alert = NotifyAlert(timeout=20)
    alert.props.title = title
    alert.props.msg = text
    self.add_alert(alert)
    alert.connect('response', self.alert_cancel_cb)
    alert.show()

def alert_cancel_cb(self, alert, response_id):
    self.remove_alert(alert)
    self.textview.grab_focus()

The contents of activity.info are these lines:

[Activity]
name = Read Etexts III
service_name = net.flossmanuals.ReadEtextsActivity
icon = read-etexts
exec = sugar-activity ReadEtextsActivity3.ReadEtextsActivity
show_launcher = no
activity_version = 1
mime_types = text/plain;application/zip
license = GPLv2+

To try it out, download a Project Gutenberg book to the Journal, open it with this latest Read Etexts III, then share it with a second user who has the program installed but not running. She should accept the invitation to join that appears in her Neighborhood view. When she does, Read Etexts III will start up, copy the book from the first user over the network, and load it. The Activity will first show a blank screen, but then a progress bar will appear just under the toolbar and indicate the progress of the copying. When it is finished the first page of the book will appear.
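What that progress bar is reflecting is a transfer that arrives in chunks, with a callback fired after each chunk. Here is a minimal sketch of that pattern in plain Python; the function name and chunk size are assumptions for illustration, not the Activity's actual download code:

```python
# Illustrative sketch: drive a progress callback from a chunked
# copy, similar in spirit to what the download progress bar does.
import io

CHUNK_SIZE = 4096  # assumed chunk size for this sketch

def copy_with_progress(src, dst, total, progress_cb):
    """Copy src to dst in chunks, reporting a 0.0-1.0 fraction
    after every chunk."""
    done = 0
    while True:
        chunk = src.read(CHUNK_SIZE)
        if not chunk:
            break
        dst.write(chunk)
        done += len(chunk)
        progress_cb(float(done) / float(total))

data = b'x' * 10000
fractions = []
src = io.BytesIO(data)
dst = io.BytesIO()
copy_with_progress(src, dst, len(data), fractions.append)
print(dst.getvalue() == data)   # True
print(fractions[-1])            # 1.0
```

In the real Activity the callback updates a gtk.ProgressBar instead of appending to a list, but the shape of the loop is the same.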
So how does it work? Let's look at the code. The first points of interest are the class definitions that appear at the beginning: ReadHTTPRequestHandler, ReadHTTPServer, and ReadURLDownloader. These three classes extend (that is to say, inherit code from) classes provided by the sugar.network package. These classes provide an HTTP client for receiving the book and an HTTP server for sending the book.

This is the code used to send a book:

...

You will notice that a hash of the _activity_id is used to get a port number. That port is used for the HTTP server and is passed to Telepathy, which offers it as a Stream Tube.

On the receiving side we have this code:

...

Telepathy gives us the address and port number associated with a Stream Tube and we set up the HTTP client to read from it. The client reads the file in chunks and calls download_progress_cb() after every chunk so we can update a progress bar to show the user how the download is progressing. There are also callback methods for when there is a download error and for when the download is finished.

The ReadURLDownloader class is not only useful for transferring files over Stream Tubes; it can also be used to interact with websites and web services. My Activity Get Internet Archive Books uses this class for that purpose.

The one remaining piece is the code which handles getting Stream Tubes to download the book from. In this code, adapted from the Read Activity, as soon as an instance of an Activity receives a book it turns around and offers to share it, thus the Activity may have several possible Tubes it could get the book from:

READ_STREAM_SERVICE = 'read-etexts-activity-http'
...

The READ_STREAM_SERVICE constant is defined near the top of the source file.

Using D-Bus Tubes

D-Bus is a method of supporting IPC, or Inter-Process Communication, that was created for the GNOME desktop environment. The idea of IPC is to allow two running programs to communicate with each other and execute each other's code.
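Stripped of all transport details, the core IPC idea — one program asking another to run one of its methods by name — can be sketched like this. The ToyBus class and its method names are invented for illustration; real D-Bus works through a system daemon and object proxies:

```python
# A toy, in-process sketch of the D-Bus idea (assumed names, not
# a real D-Bus API): objects register under a bus name, and other
# code invokes their methods by name.
class ToyBus(object):
    def __init__(self):
        self._objects = {}

    def register(self, bus_name, obj):
        # Like claiming a well-known bus name such as
        # 'org.ceibaljam.BatallaNaval'.
        self._objects[bus_name] = obj

    def call(self, bus_name, method, *args):
        # Dispatch a "remote" method call to the registered
        # object and hand back its return value.
        return getattr(self._objects[bus_name], method)(*args)

class Greeter(object):
    def hello(self, who):
        return 'Hello, %s!' % who

bus = ToyBus()
bus.register('org.example.Greeter', Greeter())
print(bus.call('org.example.Greeter', 'hello', 'world'))
# Hello, world!
```

The caller never holds a direct reference to the Greeter object; it only knows a name, which is the essential trick D-Bus performs between separate processes.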
GNOME uses D-Bus to provide communication between the desktop environment and the programs running in it, and also between GNOME and the operating system. A D-Bus Tube is how Telepathy makes it possible for an instance of an Activity running on one computer to execute methods in another instance of the same Activity running on a different computer. Instead of just sending simple text messages back and forth or doing file transfers, your Activities can be truly shared. That is, your Activity can allow many people to work on the same task together.

I have never written an Activity that uses D-Bus Tubes myself, but many others have. We're going to take a look at code from two of them: Scribble, by Sayamindu Dasgupta, and Batalla Naval, by Gerard J. Cerchio and Andrés Ambrois, which was written for the Ceibal Jam.

Scribble is a drawing program that allows many people to work on the same drawing at the same time. Instead of allowing you to choose what colors you will draw with, it uses the background and foreground colors of your Buddy icon (the XO stick figure) to draw with. That way, with many people drawing shapes, it's easy to know who drew what. If you join the Activity in progress Scribble will update your screen so your drawing matches everyone else's screen. Scribble in action looks like this:

Batalla Naval is a version of the classic game Battleship. Each player has two grids: one for placing his own ships (actually the computer places the ships for you) and another blank grid representing the area where your opponent's ships are. You can't see his ships and he can't see yours. You click on the opponent's grid (on the right) to indicate where you want to aim an artillery shell. When you do, the corresponding square will light up in both your grid and your opponent's own ship grid. If the square you picked corresponds to a square where your opponent has placed a ship, that square will show up in a different color.
The object is to find the squares containing your opponent's ships before he finds yours. The game in action looks like this:

I suggest that you download the latest code for these two Activities from Gitorious using these commands:

mkdir scribble
cd scribble
git clone git://git.sugarlabs.org/scribble/mainline.git
cd ..
mkdir batallanaval
cd batallanaval
git clone git://git.sugarlabs.org/batalla-naval/mainline.git

You'll need to do some setup work to get these running in sugar-emulator. Scribble requires the goocanvas GTK component and the Python bindings that go with it. These were not installed by default in Fedora 10, but I was able to install them using Add/Remove Programs from the System menu in GNOME. Batalla Naval is missing setup.py, but that's easily fixed since every setup.py is identical. Copy the one from the book examples into the mainline/BatallaNaval.activity directory and run ./setup.py dev on both Activities.

These Activities use different strategies for collaboration. Scribble creates lines of Python code which it passes to all Buddies, and the Buddies use exec to run the commands. This is the code used for drawing a circle:

def process_item_finalize(self, x, y):
    if self.tool == 'circle':
        self.cmd = "goocanvas.Ellipse(parent=self._root, " \
            "center_x=%d, center_y=%d, " \
            "radius_x=%d, radius_y=%d, " \
            "fill_color_rgba=%d, stroke_color_rgba=%d, " \
            "title='%s')" % (
            self.item.props.center_x,
            self.item.props.center_y,
            self.item.props.radius_x,
            self.item.props.radius_y,
            self._fill_color, self._stroke_color,
            self.item_id)

...

def process_cmd(self, cmd):
    #print 'Processing cmd :' + cmd
    exec(cmd)
    # FIXME: Ugly hack, but I'm too lazy to
    # do this nicely
    if len(self.cmd_list) > 0:
        self.cmd_list += (';' + cmd)
    else:
        self.cmd_list = cmd

The cmd_list variable is used to create a long string containing all of the commands executed so far.
When a new Buddy joins the Activity she is sent this variable to execute so that her drawing area has the same contents as the other Buddies have. This is an interesting approach, but you could do the same thing with the TextChannel, so it isn't the best use of D-Bus Tubes. Batalla Naval's use of D-Bus is more typical.

How D-Bus Tubes Work, More Or Less

D-Bus enables you to have two running programs send messages to each other. The programs have to be running on the same computer. Sending a message is sort of a roundabout way of having one program run code in another. A program defines the kinds of messages it is willing to receive and act on. In the case of Batalla Naval it defines a message "tell me what square you want to fire a shell at and I'll figure out if part of one of my ships is in that square and tell you." The first program doesn't actually run code in the second one, but the end result is similar. D-Bus Tubes are a way of making D-Bus able to send messages like this to a program running on another computer.

Think for a minute about how you might make a program on one computer run code in a running program on a different computer. You'd have to use the network, of course. Everyone is familiar with sending data over a network, but in this case you would have to send a request to run code over the network. You would need to be able to tell the running program on the second computer what code you wanted it to run. You would have to send it a method call and all the parameters you needed to pass into the method, and you'd need a way to get a return value back. Isn't that kind of like what Scribble is doing in the code we just looked at? Maybe we could make our code do something like that?

Of course, if you did that, then every program you wanted to run code in remotely would have to be written to deal with that. If you had a bunch of programs you wanted to do that with, you'd have to have some way of letting each program know which requests were meant for it.
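The do-it-yourself scheme just described — send a method name and its parameters over the network, run the method on the far side, and send the return value back — can be sketched in a few lines of Python. This is purely illustrative; the class and function names are my own, and the "network" here is just a byte string:

```python
import json

# Hypothetical sketch of marshalling a method call: the caller
# encodes the method name and arguments as bytes, the receiver
# decodes them, runs the method, and encodes the return value.
class Calculator(object):
    def add(self, a, b):
        return a + b

def encode_call(method, args):
    # What the caller would put on the wire.
    return json.dumps({'method': method,
                       'args': args}).encode('utf-8')

def handle_call(obj, payload):
    # What the far side would do with the received bytes.
    msg = json.loads(payload.decode('utf-8'))
    result = getattr(obj, msg['method'])(*msg['args'])
    return json.dumps({'result': result}).encode('utf-8')

wire = encode_call('add', [2, 3])        # crosses the "network"
reply = handle_call(Calculator(), wire)  # comes back the same way
print(json.loads(reply.decode('utf-8'))['result'])  # 5
```

D-Bus does essentially this for you, with a real wire format, type checking, and routing, which is what the next paragraphs describe.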
It would be nice if there was a program running on each machine that dealt with making the network connections, converting method calls to data that could be sent over the network, and then converting the data back into method calls and running them, plus sending any return values back. This program would know which program you wanted to run code in and see that the method call is run there. It would run all the time, and it would be really good if it made running a method on a remote program as simple as running a method in your own program.

As you might guess, what I've just described is more or less what D-Bus Tubes are. There are articles explaining how it works in detail, but it is not necessary to know how it works to use it. You do need to know about a few things, though.

First, you need to know how to use D-Bus Tubes to make objects in your Activity available for use by other instances of that Activity running elsewhere. An Activity that needs to use D-Bus Tubes needs to define what sorts of messages it is willing to act on, in effect which specific methods in the program are available for this use. All Activities that use D-Bus Tubes have constants like this:

SERVICE = "org.randomink.sayamindu.Scribble"
IFACE = SERVICE
PATH = "/org/randomink/sayamindu/Scribble"

These are the constants used for the Scribble Activity. The first constant, named SERVICE, represents the bus name of the Activity. This is also called a well-known name because it uses a reversed domain name as part of the name. In this case Sayamindu Dasgupta has a website, and he reverses the dot-separated words of its URL to create the first part of his bus name. It is not necessary to own a domain name before you can create a bus name. You can use org.sugarlabs.ActivityName if you like. The point is that the bus name must be unique, and by convention this is made easier by starting with a reversed domain name.

The PATH constant represents the object path.
It looks like the bus name with slashes separating the words rather than periods. For most Activities that is exactly what it should be, but it is possible for an application to expose more than one object to D-Bus, and in that case each object exposed would have its own unique name, by convention words separated by slashes.

The third constant is IFACE, which is the interface name. An interface is a collection of related methods and signals, identified by a name that uses the same convention used by the bus name. In the example above, and probably in most Activities using a D-Bus Tube, the interface name and the bus name are identical.

So what is a signal? A signal is like a method call, but instead of one running program calling a method in one other running program, a signal is broadcast. In other words, instead of executing a method in just one program it executes the same method in every running program connected through the D-Bus that has that method. A signal can pass data into a method call but it can't receive anything back as a return value. It's like a radio station that broadcasts music to anyone that is tuned in. The flow of information is one way only. Of course, a radio station often receives phone calls from its listeners. A disc jockey might play a new song and invite listeners to call the station and say what they thought about it. The phone call is two-way communication between the disc jockey and the listener, but it was initiated by a request that was broadcast to all listeners. In the same way, your Activity might use a signal to invite all listeners (Buddies) to use a method to call it back, and that method can both supply and receive information.

In D-Bus, methods and signals have signatures. A signature is a description of the parameters passed into a method or signal, including their data types. Python is not a strongly typed language.
In a strongly typed language every variable has a data type that limits what it can do. Data types include such things as strings, integers, long integers, floating point numbers, booleans, etc. Each one can be used for a specific purpose only. For instance, a boolean can only hold the values True and False, nothing else. A string can hold a string of characters, but even if those characters represent a number you cannot use a string for calculations. Instead you need to convert the string into one of the numeric data types. An integer can hold integers up to a certain size, and a long integer can hold much larger integers. A floating point number is a number with a decimal point, stored in a form of scientific notation. It is almost useless for business arithmetic, which requires rounded results. In Python you can put anything into any variable and the language itself will figure out how to deal with it.

To make Python work with D-Bus, which requires strongly typed variables that Python doesn't have, you need a way to tell D-Bus what types the variables you pass into a method should have. You do this by using a signature string as an argument to the method or signal. Methods have two strings: an in_signature and an out_signature. Signals just have a signature parameter. Some examples of signature strings: '' means no parameters at all, 'ii' means two integers (the signature of the Play() signal we will see later), and 'a(ssiii)' means an array of structs each containing two strings and three integers (the signature used by the World() method). There is more information on signature strings in the dbus-python tutorial.

Introducing Hello Mesh And Friends

If you study the source code of a few shared Activities you'll conclude that many of them contain nearly identical methods, as if they were all copied from the same source. In fact, more likely than not, they were. The Activity Hello Mesh was created to be an example of how to use D-Bus Tubes in a shared Activity. It is traditional in programming textbooks to have the first example program be something that just prints the words "Hello World" to the console or displays the same words in a window.
In that tradition, Hello Mesh is a program that doesn't do all that much. You can find the code in Gitorious. Hello Mesh is widely copied because it demonstrates how to do things that all shared Activities need to do. When you have a shared Activity you need to be able to do two things:

- Send information or commands to other instances of your Activity.
- Give Buddies joining your Activity a copy of the current state of the Activity.

It does this using two signals and one method:

- A signal called Hello() that someone joining the Activity sends to all participants. The Hello() signal takes no parameters.
- A method called World() which instances of the Activity receiving Hello() send back to the sender. This method takes a text string as an argument, which is meant to represent the current state of the Activity.
- Another signal called SendText() which sends a text string to all participants. This represents updating the state of the shared Activity. In the case of Scribble this would be informing the others that this instance has just drawn a new shape.

Rather than study Hello Mesh itself, I'd like to look at the code derived from it used in Batalla Naval. I have taken the liberty of running the comments, originally in Spanish, through Google Translate to make everything in English. I have also removed some commented-out lines of code.

This Activity does something clever to make it possible to run it either as a Sugar Activity or as a standalone Python program. The standalone program does not support sharing at all, and it runs in a Window.
The class Activity is a subclass of Window, so when the code is run standalone the init() function in BatallaNaval.py gets a Window, and when the same code is run as an Activity the instance of class BatallaNavalActivity is passed to init():

from sugar.activity.activity import Activity, ActivityToolbox
import BatallaNaval
from Collaboration import CollaborationWrapper

class BatallaNavalActivity(Activity):
    ''' The Sugar class called when you run this program as
    an Activity. The name of this class is marked in the
    activity/activity.info file.'''

    def __init__(self, handle):
        Activity.__init__(self, handle)
        self.gamename = 'BatallaNaval'

        # Create the basic Sugar toolbar
        toolbox = ActivityToolbox(self)
        self.set_toolbox(toolbox)
        toolbox.show()

        # Create an instance of the CollaborationWrapper
        # so you can share the activity.
        self.colaboracion = CollaborationWrapper(self)

        # The activity is a subclass of Window, so it
        # passes itself to the init function
        BatallaNaval.init(False, self)

The other clever thing going on here is that all the collaboration code is placed in its own CollaborationWrapper class which takes the instance of the BatallaNavalActivity class in its constructor. This separates the collaboration code from the rest of the program. Here is the code in CollaborationWrapper.py:

import logging
from sugar.presence import presenceservice
import telepathy
from dbus.service import method, signal

# In build 656 Sugar lacks sugartubeconn
try:
    from sugar.presence.sugartubeconn import \
        SugarTubeConnection
except:
    from sugar.presence.tubeconn import TubeConnection as \
        SugarTubeConnection

from dbus.gobject_service import ExportedGObject

''' In all collaborative Activities in Sugar we are made aware when a player enters or leaves. So that everyone knows the state of the Activity we use the methods Hello and World.
When a participant enters Hello sends a signal that reaches all participants and the participants respond directly using the method "World", which retrieves the current state of the Activity. After the updates are given then the signal Play is used by each participant to make his move. In short this module encapsulates the logic of "collaboration" with the following effect: - When someone enters the collaboration the Hello signal is sent. - Whoever receives the Hello signal responds with World - Every time someone makes a move he uses the method Play giving a signal which communicates to each participant what his move was. ''' SERVICE = "org.ceibaljam.BatallaNaval" IFACE = SERVICE PATH = "/org/ceibaljam/BatallaNaval" logger = logging.getLogger('BatallaNaval') logger.setLevel(logging.DEBUG) class CollaborationWrapper(ExportedGObject): ''' A wrapper for the collaboration methods. Get the activity and the necessary callbacks. ''' def __init__(self, activity): self.activity = activity self.presence_service = \ presenceservice.get_instance() self.owner = \ self.presence_service.get_owner() def set_up(self, buddy_joined_cb, buddy_left_cb, World_cb, Play_cb, my_boats): self.activity.connect('shared', self._shared_cb) if self.activity._shared_activity: # We are joining the activity self.activity.connect('joined', self._joined_cb) if self.activity.get_shared(): # We've already joined self._joined_cb() self.buddy_joined = buddy_joined_cb self.buddy_left = buddy_left_cb self.World_cb = World_cb # Called when someone passes the board state. self.Play_cb = Play_cb # Called when someone makes a move. 
# Submitted by making World on a new partner self.my_boats = [(b.nombre, b.orientacion, b.largo, b.pos[0], b.pos[1]) for b in my_boats] self.world = False self.entered = False def _shared_cb(self, activity): self._sharing_setup() self.tubes_chan[telepathy.CHANNEL_TYPE_TUBES].\ OfferDBusTube( SERVICE, {}) self.is_initiator = True def _joined_cb(self, activity): self._sharing_setup() self.tubes_chan[telepathy.CHANNEL_TYPE_TUBES].\ ListTubes( reply_handler=self._list_tubes_reply_cb, error_handler=self._list_tubes_error_cb) self.is_initiator = False def _sharing_setup(self): if self.activity._shared_activity is None: logger.error( 'Failed to share or join activity') return self.conn = \ self.activity._shared_activity.telepathy_conn self.tubes_chan = \ self.activity._shared_activity.telepathy_tubes_chan self.text_chan = \ self.activity._shared_activity.telepathy_text_chan self.tubes_chan[telepathy.CHANNEL_TYPE_TUBES].\ connect_to_signal( 'NewTube', self._new_tube_cb) self.activity._shared_activity.connect( 'buddy-joined', self._buddy_joined_cb) self.activity._shared_activity.connect( 'buddy-left', self._buddy_left_cb) # Optional - included for example: # Find out who's already in the shared activity: for buddy in \ self.activity._shared_activity.\ get_joined_buddies(): logger.debug( 'Buddy %s is already in the activity', buddy.props.nick) def participant_change_cb(self, added, removed): logger.debug( 'Tube: Added participants: %r', added) logger.debug( 'Tube: Removed participants: %r', removed) for handle, bus_name in added: buddy = self._get_buddy(handle) if buddy is not None: logger.debug( 'Tube: Handle %u (Buddy %s) was added', handle, buddy.props.nick) for handle in removed: buddy = self._get_buddy(handle) if buddy is not None: logger.debug('Buddy %s was removed' % buddy.props.nick) if not self.entered: if self.is_initiator: logger.debug( "I'm initiating the tube, " "will watch for hellos.") self.add_hello_handler() else: logger.debug( 'Hello, everyone! 
What did I miss?')
            self.Hello()
        self.entered = True

    # This is sent to all participants whenever we
    # join an activity
    @signal(dbus_interface=IFACE, signature='')
    def Hello(self):
        """Say Hello to whoever else is in the tube."""
        logger.debug('I said Hello.')

    @signal(dbus_interface=IFACE, signature='ii')
    def Play(self, x, y):
        """Tell whoever else is in the tube about my move."""
        logger.debug('Running remote play: %s x %s.', x, y)

    def add_hello_handler(self):
        logger.debug('Adding hello handler.')
        self.tube.add_signal_receiver(self.hello_signal_cb,
            'Hello', IFACE, path=PATH,
            sender_keyword='sender')
        self.tube.add_signal_receiver(self.play_signal_cb,
            'Play', IFACE, path=PATH,
            sender_keyword='sender')

    def hello_signal_cb(self, sender=None):
        ...
        # Share my boats with the newcomer and
        # get theirs in return
        enemy_boats = self.tube.get_object(self.other,
            PATH).World(self.my_boats, dbus_interface=IFACE)
        # I call the callback World, to load the enemy ships
        self.World_cb(enemy_boats)

    def play_signal_cb(self, x, y, sender=None):
        # In theory, no matter who sent him
        ...
        self.Play_cb(x, y)

    def _list_tubes_error_cb(self, e):
        logger.error('ListTubes() failed: %s', e)

    def _list_tubes_reply_cb(self, tubes):
        for tube_info in tubes:
            self._new_tube_cb(*tube_info)

    def _new_tube_cb(self, id, initiator, type, service,
            params, state):
        logger.debug('New tube: ID=%d initiator=%d '
            'type=%d service=%s params=%r state=%d',
            id, initiator, type, service, params, state)
        if (type == telepathy.TUBE_TYPE_DBUS and
                service == SERVICE):
            if state == telepathy.TUBE_STATE_LOCAL_PENDING:
                self.tubes_chan[
                    telepathy.CHANNEL_TYPE_TUBES].\
                    AcceptDBusTube(id)
            self.tube = SugarTubeConnection(self.conn,
                self.tubes_chan[
                    telepathy.CHANNEL_TYPE_TUBES],
                id,
                group_iface=self.text_chan[
                    telepathy.CHANNEL_INTERFACE_GROUP])
            super(CollaborationWrapper, self).__init__(
                self.tube, PATH)
            self.tube.watch_participants(
                self.participant_change_cb)

    def _buddy_joined_cb(self, activity, buddy):
        """Called when a buddy joins the shared activity.
""" logger.debug( 'Buddy %s joined', buddy.props.nick) if self.buddy_joined: self.buddy_joined(buddy) def _buddy_left_cb (self, activity, buddy): """Called when a buddy leaves the shared activity. """ if self.buddy_left: self.buddy_left(buddy) def _get_buddy(self, cs_handle): """Get a Buddy from a channel specific handle.""" logger.debug('Trying to find owner of handle %u...', cs_handle) group = self.text_chan[telepathy.\ CHANNEL_INTERFACE_GROUP] my_csh = group.GetSelfHandle() logger.debug( 'My handle in that group is %u', my_csh) if my_csh == cs_handle: handle = self.conn.GetSelfHandle() logger.debug('CS handle %u belongs to me, %u', cs_handle, handle) elif group.GetGroupFlags() & \ telepathy.\ CHANNEL_GROUP_FLAG_CHANNEL_SPECIFIC_HANDLES: handle = group.GetHandleOwners([cs_handle])[0] logger.debug('CS handle %u belongs to %u', cs_handle, handle) else: handle = cs_handle logger.debug('non-CS handle %u belongs to itself', handle) # XXX: deal with failure to get the handle owner assert handle != 0 return self.presence_service.\ get_buddy_by_telepathy_handle( self.conn.service_name, self.conn.object_path, handle) Most of the code above is similar to what we've seen in the other examples, and most of it can be used as is in any Activity that needs to make D-Bus calls. For this reason we'll focus on the code that is specific to using D-Bus. The logical place to start is the Hello() method. There is of course nothing magic about the name "Hello". Hello Mesh is meant to be a "Hello World" program for using D-Bus Tubes, so by convention the words "Hello" and "World" had to be used for something. The Hello() method is broadcast to all instances of the Activity to inform them that a new instance is ready to receive information about the current state of the shared Activity. 
Your own Activity will probably need something similar, but you should feel free to name it something else, and if you're writing the code for a school assignment you should definitely name it something else:

# This is sent to all participants whenever we
# join an activity
@signal(dbus_interface=IFACE, signature='')
def Hello(self):
    """Say Hello to whoever else is in the tube."""
    logger.debug('I said Hello.')

def add_hello_handler(self):
    logger.debug('Adding hello handler.')
    self.tube.add_signal_receiver(self.hello_signal_cb,
        'Hello', IFACE, path=PATH,
        sender_keyword='sender')
    ...

def hello_signal_cb(self, sender=None):
    ...
    # Share my boats with the newcomer and
    # get theirs in return
    enemy_boats = self.tube.get_object(self.other,
        PATH).World(self.my_boats, dbus_interface=IFACE)
    # I call the callback World, to load the enemy ships
    self.World_cb(enemy_boats)

The most interesting thing about this code is this line, which Python calls a decorator:

@signal(dbus_interface=IFACE, signature='')

When you put @signal in front of a method it wraps the method so that invoking it no longer makes a normal local method call but instead emits a D-Bus signal on the given interface. The signature parameter is an empty string, indicating that the signal carries no parameters. The Hello() method does nothing at all locally, but when it is received by the other instances of the Activity it causes them to execute the World() method, which sends back the locations of their boats and gets the new participant's boats in return.

Batalla Naval is apparently meant to be a demonstration program. Battleship is a game for two players, but there is nothing in the code to prevent more players from joining and no way to handle it if they do. Ideally you would want code to make only the first joiner an actual player and make the rest only spectators.

Next we'll look at the World() method:

@method(dbus_interface=IFACE, in_signature='a(ssiii)',
        out_signature='a(ssiii)')
def World(self, boats):
    ...

There is another decorator here, this one converting the World() method into a D-Bus method call. The signature is more interesting than the one Hello() had.
It means an array of tuples where each tuple contains two strings and three integers. Each element in the array represents one ship and its attributes. World_cb is set to point to a method in BatallaNaval.py (and so is Play_cb). If you study the init() code in BatallaNaval.py you'll see how this happens. World() is called in the hello_signal_cb() method we just looked at. It is sent to the joiner who sent Hello() to us.

Finally we'll look at the Play() signal:

@signal(dbus_interface=IFACE, signature='ii')
def Play(self, x, y):
    """Tell whoever else is in the tube about my move."""
    logger.debug('Running remote play: %s x %s.', x, y)

def add_hello_handler(self):
    ...
    self.tube.add_signal_receiver(self.play_signal_cb,
        'Play', IFACE, path=PATH,
        sender_keyword='sender')
    ...

This is a signal, so there is only one signature string, this one indicating that the input parameters are two integers.

There are several ways you could improve this Activity. When playing against the computer in non-sharing mode the game just makes random moves. The game does not limit the players to two and make the rest of the joiners spectators. It does not make the players take turns. When a player succeeds in sinking all the other player's ships, nothing happens to mark the event. Finally, gettext() is not used for the text strings displayed by the Activity, so it cannot be translated into languages other than Spanish. In the tradition of textbooks everywhere, I will leave making these improvements as an exercise for the student.
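To see the Hello/World/Play protocol without any Telepathy or D-Bus plumbing, here is a plain-Python model of it. The Peer class is invented for illustration; broadcasts become loops over a list of peers, and the boat tuples follow the (name, orientation, length, x, y) shape of the 'a(ssiii)' signature:

```python
# Illustrative model (no D-Bus): Hello is a broadcast, World is a
# direct reply carrying board state, Play broadcasts a move.
class Peer(object):
    def __init__(self, name, boats, peers):
        self.name = name
        self.boats = boats        # (name, orient, len, x, y)
        self.enemy_boats = None
        self.shots = []
        self.peers = peers
        peers.append(self)

    def hello(self):
        # Broadcast: every other peer answers with world().
        for peer in self.peers:
            if peer is not self:
                self.enemy_boats = peer.world(self)

    def world(self, newcomer):
        # Direct reply: record the newcomer's boats, return mine.
        self.enemy_boats = newcomer.boats
        return self.boats

    def play(self, x, y):
        # Broadcast a move to all participants, like Play().
        for peer in self.peers:
            peer.shots.append((x, y))

peers = []
a = Peer('a', [('sub', 'h', 2, 0, 0)], peers)
b = Peer('b', [('carrier', 'v', 5, 3, 3)], peers)
b.hello()                   # b joins and says Hello
print(b.enemy_boats[0][0])  # sub
print(a.enemy_boats[0][0])  # carrier
a.play(3, 3)
print(b.shots)              # [(3, 3)]
```

The real Activity replaces the loop in hello() with a D-Bus signal and the world() call with a method invocation on a tube object, but the exchange of messages is the same.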
EmberZclGroupEntry_t Struct Reference

#include <zcl-core-types.h>

This structure holds a group entry that represents membership of an endpoint in a group.

Field Documentation
- An array containing the group name.
- Address assignment mode, as passed in the "add group" command, or 0xFF if no address is assigned.
- The endpoint identifier of the group entry.
- The group identifier of the group entry.
- The group IPv6 address, or the flag/scope bits from the "add group" command.
- The length of the group name.
- The UDP port the group is listening on: EMBER_COAP_PORT, or whatever was passed in "add group".

The documentation for this struct was generated from the following file: zcl-core-types.h
The idea is that this is supposed to be a benefit to their existing customers, not a "free ad" for everyone. Still lame, but I get the idea.

Discussions

Oh no, there's still something to report. Legal or not, these "IMSI catchers" (if that's what these are) are still something most people don't know exist, and a security concern to say the least. If they are regularly being used around military bases, that's information worth knowing, and we certainly should be asking why and by whom. While it could be the military monitoring its own base, it just as easily could be "the bad guys" trying to spy on our military. Even if it's the military, don't you think we should know that they are doing it? There's plenty of news here. I just think too many people (possibly even the reporters) are jumping to conclusions. Heck, the articles are written so poorly that even I may be totally wrong and there may well be giant cell towers erected here. I just find that unlikely for all of the reasons already pointed out.

2 hours ago, JohnAskew wrote *snip* Is it safe to assume that M# will be merged into C#? New namespaces? And Midori perhaps replacing or merging into the Windows kernel?

It's unlikely M# will be merged into C#. The kinds of things they are doing don't sound like they'd be backwards compatible (i.e. existing C# code might not compile with the M# compiler). IOW, this would be like merging C++ into C.

Nothing really new in that article. The map indicates where "towers" were found by the CryptoPhone. It doesn't say that the physical towers are located there (hard to word this so it makes sense). Each one of those pins could be a location where a backpack-sized box was placed on top of the tallest building in the area. No FCC or FAA violations. If you watch the video @blowdart posted you'll see this demonstrated with a rig that is 100% legal (so the presenter claims) so long as the operator has a HAM radio license.
I'm not claiming any of this is good or that we shouldn't be concerned and investigating. But the idea that somehow giant towers have been erected just so the government can spy on us sounds like a massive overreach based only on what we do know.

There's towers and there's "towers". :) You know that gigantic FM radio tower regulated by the FCC with lights mandated by the FAA? Here's the equivalent in a device that costs around $10 and is a couple of inches in size. The only practical difference between this and the tower is the distance over which it can transmit. Cell towers are more sophisticated, but not overly much. I know nothing about them, but it's easy to believe boxes exist that are small enough to fit in your backpack and can act as a cell tower within a fairly large radius that could be detected by these CryptoPhones. The article is calling them "cell towers", but I believe what @blowdart is saying is that much smaller devices, like the Stingray, can appear to cell phones to be a cell tower. The CryptoPhone just detects fake "towers"; it doesn't somehow actually verify that it is a physical tower. I know nothing about this other than what I've just Googled, but I presume these devices are small enough to fit in a backpack and cheap enough to build or buy for hackers to play with. I can imagine military bases utilizing such devices to try and locate possible terrorist activity in or around the base. Whether such a practice is legal or a good idea is certainly up for debate, but it doesn't sound like it's a concerted effort on the part of the government to snoop on its citizens.

@felix9, you don't post often enough. You really should start a tech blog for this stuff where you could make money on your scoops. I look forward to every time you post on here.

Such bad terminology used. What's meant by synchronous and asynchronous? Each of those calls is made asynchronously, but they are made one after the other in a synchronous fashion.
Probably not necessary, and you could improve things by not awaiting until all of the calls have been made. Regardless, nothing "blocks" here. We don't see the rest of the code, but I'd assume the View is displayed with no data until the ViewModel is finally populated by all of these async calls. It would be far better to make these calls internally in the ViewModel. Regardless, one would hope that the UI is reporting that we're busy during all of this. Several bad practices all in a single statement. Gotta love it.

Microsoft demoed windowed Metro apps at //Build, so I'd say it's a bit more than rumor.

"There is no middle ground between devices that have the Modern UI and devices that have the desktop. You either have the desktop or you have the Modern UI, you cannot have both. As reported around a month ago, the Start Menu for desktop users in Windows Threshold can 'act' like a full screen Start Menu however, meaning if you want that functionality you can have it on the desktop. This will be helpful for devices like the Surface Pro 3."

Hugely stupid. It may make sense to remove the Start Screen for obviously desktop devices (though I do know a number of non-techie people who like the Start Screen on desktop computers). However, for hybrid devices it makes no sense not to support both. Full screen or not, a Start Menu desktop approach would stink when using a Surface Pro as a tablet/consumption device. I use my Pro for both scenarios, and appreciate Win8 as it is today for enabling me to do either efficiently. Making a hard distinction in the OS here will kill hybrid devices like the Surface Pro.
I have the following Java code that reads a JSON file from HDFS and outputs it as a Hive view using Spark.

package org.apache.spark.examples.sql.hive;

import java.io.File;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
// $example off:spark_hive$

public class JavaSparkHiveExample {
  public static void main(String[] args) {
    // $example on:spark_hive$
    SparkSession spark = SparkSession
      .builder()
      .appName("Java Spark Hive Example")
      .master("local[*]")
      .config("hive.metastore.uris", "thrift://localhost:9083")
      .enableHiveSupport()
      .getOrCreate();

    Dataset<Row> jsonTest = spark.read().json("/tmp/testJSON.json");
    jsonTest.createOrReplaceTempView("jsonTest");
    Dataset<Row> showAll = spark.sql("SELECT * FROM jsonTest");
    showAll.show();
    spark.stop();
  }
}

I would like to change it so the JSON file is read from the local file system instead of HDFS (for instance, from the same location where the program is executed). Furthermore, how could I remake it to INSERT the JSON into table test1 instead of just making a view out of it? Help is very appreciated!

Created 01-24-2018 07:08 AM

Spark by default looks for files in HDFS, but if for some reason you want to load a file from the local filesystem, you need to prepend "file://" to the file path. So your code will be:

Dataset<Row> jsonTest = spark.read().json("file:///tmp/testJSON.json");

However, this will be a problem when you are submitting in cluster mode, since cluster mode will execute on the worker nodes. All the worker nodes are expected to have that file at that exact path, so it will fail. To overcome this, you can pass the file path in the --files parameter while running spark-submit, which will put the file on the classpath so you can refer to the file by simply using the file name alone.
For example, if you submitted the following way:

> spark-submit --master <your_master> --files /tmp/testJSON.json --deploy-mode cluster --class <main_class> <application_jar>

then you can simply read the file the following way:

Dataset<Row> jsonTest = spark.read().json("testJSON.json");
Written by Peter Ekene Eze✏️

Animations in React
Animations have evolved into very complicated UI element manipulations. They are used to increase interactivity on web pages and to give users an engaging experience while using websites. Developers are constantly looking for better ways to implement animations without causing a major performance bottleneck. Animation effects are applied on the UI thread, which is called frequently; as a result, adding certain animations or animation libraries can have a negative impact on your site. This is why we have chosen to discuss React Spring as a tool you should consider for animating your React apps.

React Spring
React Spring is a spring-physics based animation library that powers most UI-related animation in React. Given the performance considerations of animation libraries, React Spring is the best of two worlds. It bridges the two existing React animation libraries, React Motion and Animated: it inherits Animated's powerful interpolations and performance while maintaining React Motion's ease of use.

Having understood what React Spring is and what it offers, let's take a look at how we can use it to build seamless animations in React applications. We'll explore its features to better understand its strengths.

Prerequisites
Before we go any further, this article assumes the following:
- Node.js ≥v6 is installed on your machine
- npm is installed on your machine
- You have a basic understanding of React.js
- You have a basic understanding of React hooks

Getting started with React Spring
The best way to add React Spring to your application is via a package manager. Simply open a terminal window in your project's root directory and run the installation command below:

npm install react-spring
# OR
yarn add react-spring

This makes React Spring available in your application's node_modules folder, where you can import it from.
import { useSpring, animated } from 'react-spring'

With the introduction of hooks in React, you can add state to functional components. React Spring takes this up a notch by providing a hook-based API which allows you to define and convert data that you would generally pass as props into animated values. To better demonstrate some of the features of React Spring, let's take a closer look at the hooks available in the react-spring module. There are five major hooks available in React Spring at the moment:

useSpring — A single spring, moves data from a to b
useSprings — Multiple springs, mainly for lists, where each spring moves data from a -> b
useTrail — Multiple springs with a single data set; one spring follows or trails behind the other
useTransition — For mount/unmount transitions (lists where items are added/removed/updated)
useChain — To queue or chain multiple animations together

For each of these hooks there are several animation effects you can implement; it's limitless and goes as far as your imagination will take you. We'll look at some use cases for useSpring, useSprings and useTrail to demonstrate how you can implement them in your React applications.

useSpring
useSpring is one of the simplest React Spring hooks. It turns defined values into animated values. It does this in one of two ways: either by overwriting the existing props with a different set of props on component re-render, or by passing an updater function that returns a different set of props that is then used to update the props using set.

Import it into the needed component like so (we'll be using the hooks API in this explanation):

import {useSpring, animated} from 'react-spring'

Here are the two methods for using the useSpring hook:

1) Overwriting existing props

const props = useSpring({opacity: toggle ? 1 : 0})

Here, if you re-render the component with changed props, the animation will update automatically.

2) Passing an updater function

In this case, there is no re-rendering.
This method is mostly applied to fast-occurring updates. It also has an optional argument (stop) to stop the animation.

const [props, set, stop] = useSpring(() => ({opacity: 1}))

// Update spring with new props
set({opacity: toggle ? 1 : 0})
// Stop animation
stop()

Since we are animating, we will be moving data from one state to another. A spring naturally comes with two props, from and to, for the initial position and the end position of the animation. We will discuss this further when explaining the render-props API.

Now, to get a feel of how the useSpring hook API works, here's a small demo that shows a simple animated greeting card for a landing page. On CodeSandbox:

In the demo above, the first few lines of code express the initial state and the final position of the box we are trying to animate:

const contentProps = useSpring({
  opacity: greetingStatus ? 1 : 0,
  marginTop: greetingStatus ? 0 : -500
});

In this example, the content slides in from the top of the page down to the center. A marginTop value of -500 positions the box off-screen, and together with an opacity of 0 it makes up our values for the from prop. These values are assigned to contentProps, which we then pass as props to animated.div like so:

<a.div className="box" style={contentProps}>
  <h1>Hey there ! React Spring is awesome.</h1>
</a.div>

useSprings
useSprings is just like useSpring, the only difference being that it is used to create multiple springs, each with its own config. It is mostly used for lists, where each spring moves data from an initial state to a final state. Since we are working with multiple values, this method works in two forms.

Overwrite values to change the animation
Here, the animation is updated on each element by triggering a props change.
It is simply achieved like this:

const springs = useSprings(number, items.map(item => ({ opacity: item.opacity })))

From the snippet above, we can see that the list items are mapped so that the spring config is applied to each element. That way, we can trigger the animation on each element.

Pass a function that returns values, and update using set
You will get an updater function back. It will not cause the component to re-render like an overwrite would (the animation will still execute, of course). Handling updates like this is most useful for fast-occurring updates.

const [springs, set, stop] = useSprings(number, index => ({opacity: 1}))

// Update springs with new props
set(index => ({opacity: 0}))
// Stop all springs
stop()

How do we use this? Imagine we have a list of people and we want the user to know exactly which person is being selected. A cool way to bring more life to this is to explore this demonstration by Paul Henschel. On CodeSandbox:

useTrail
useTrail enables us to create multiple springs with a single configuration. It has almost the same configuration as useSpring, with a variation in the implementation. It animates the first item of a list of elements while the rest of the elements form a natural trail, following their previous sibling:

return trail.map(props => <animated.div style={props} />)

It takes a list of items of any type and their keys. The latter defaults to item => item; if your items are self-sufficient as a key, that will often be good enough. On CodeSandbox:

const config = { mass: 5, tension: 2000, friction: 200 };

The above line in the demo uses the common spring API to set the default values of the physics parameters.

const trail = useTrail(items.length, {
  config,
  opacity: toggle ? 1 : 0,
  x: toggle ? 0 : 20,
  height: toggle ? 80 : 0,
  from: { opacity: 0, x: 20, height: 0 }
});

The above snippet uses the listed props to set the initial and final conditions of the elements, using ternary operators to switch between them.
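The { mass, tension, friction } numbers map onto a damped harmonic oscillator. The toy integrator below is a hand-rolled sketch (not react-spring's actual internals) showing why a spring animation settles on its target value instead of running for a fixed duration:

```javascript
// Toy damped-spring integrator, illustrative only, not react-spring's code.
// Each step: acceleration = (tension * displacement - friction * velocity) / mass
function springSettle(from, to, { mass = 5, tension = 2000, friction = 200 } = {}) {
  let position = from;
  let velocity = 0;
  const dt = 1 / 60;                        // one 60fps frame
  for (let i = 0; i < 10000; i++) {
    const acceleration = (tension * (to - position) - friction * velocity) / mass;
    velocity += acceleration * dt;
    position += velocity * dt;
    if (Math.abs(to - position) < 1e-4 && Math.abs(velocity) < 1e-4) {
      return { position, frames: i + 1 };   // settled on the target
    }
  }
  return { position, frames: 10000 };
}
```

Raising mass or lowering tension increases the number of frames before the spring settles, which is exactly what tweaking the config object in the demos above changes.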
This advantage is usually noticed when dealing with nested routes or charts. For more information on the implementation of specific performance boosts, check out the official documentation. Conclusion In this article, we have analyzed some use cases using React Spring’s Hooks API and also explored the class component equivalents. Seeing the performance advantage and simple syntax of React Spring, I would strongly recommend using this in your projects as smooth animations help in ensuring awesome user experiences. Implementing animations in React with React Spring appeared first on LogRocket Blog. Discussion (4) That was a great article, but how come no useTransition()? I'm just stuck on that one right now 🤔 I can get all the rest working in my project but that one. For now... Hi, Brian, Would you check CodeSandbox sandboxes? They seemed to fail to show the demos. If the demos still fail to show, the sandboxes do work properly here: blog.logrocket.com/animations-with.... Thanks for pointing that out. Demos look great~ Thanks, Brian 🙋♂️
File I/O With C++ Fstream

Intro
File handling is as simple as writing in a book, and much easier to modify and search. It's so simple people get confused by it :-). Welcome to the world of file handling. We will use the C++ fstream classes to do our file handling.

So, what is a file? A file is just a bunch of bytes stored on a hard disk. Some have a specific structure, others don't. Files are used to save info so that it can be retrieved later for use. [I don't think you will want to keep 100 people's addresses in memory, will you?]

Types of Files
Actually there are only two: text files and binary files. In text files, data is stored as readable chars; in binary files it is in machine language. So if you output abc123 to a text file you will see abc123, but in a binary file you may see only a bunch of black blocks if you use Notepad. Binary files are smaller in size.

Fstream.h
fstream.h provides simultaneous input and output through ifstream, ofstream and fstream.

ifstream - open the file for input
ofstream - open the file for output
fstream - open the file for input/output/both

Writing to a file
Relatively very simple. Steps:
- Declare an ofstream var.
- Open a file with it.
- Write to the file (there are a couple of ways).
- Close it.

Eg Program 1.1
#include <fstream.h>
void main()
{
  ofstream file;
  file.open("file.txt");          // open a file
  file << "Hello file\n" << 75;   // write to it
  file.close();                   // close it
}

Methods for Writing to a file
The fstream classes are derived from the iostream classes, so you can use ofstream variables exactly how you use cout: with the insertion (<<) operator and put().

Usages:
file << "string\n";
file.put('c');

Reading from a file
Almost the same.
- Declare an ifstream var.
- Open a file with it.
- Read from the file (there are a couple of ways).
- Close it.
Eg Program 1.2
#include <fstream.h>
void main()
{
  ifstream file;
  char output[100];
  int x;
  file.open("file.txt");   // open the file written by Program 1.1
  file >> output;          // reads one whitespace-delimited word
  cout << output;          // result = Hello
  file >> output;
  cout << output;          // result = file
  file >> x;
  cout << x;               // result = 75
  file.close();            // close it
}

Methods for Reading a file
The fstream classes are derived from the iostream classes, so you can use ifstream variables the way you use cin: with the extraction (>>) operator and get().

Usages:
file >> char*;                  // i.e. an array
file >> char;                   // single char
file.get(char);                 // single char
file.get(char*, int);           // read a string
file.getline(char*, int sz);
file.getline(char*, int sz, char eol);

Notes:
A file can also be opened through the constructor:
ofstream file("fl.txt");
ifstream file("fl.txt");
I would not recommend this, not because it works less well, but because calling open() explicitly improves code clarity and prevents errors when you are handling multiple files. If a file handle is used more than once without calling close() in between, there will be errors. This is just provided for info's sake, if you need it in a hurry or in something small.

Never ever declare an fstream variable globally. It is a bad habit. If you forget to close it, the next time you run the program it will show access errors to the C: drive (or whichever drive you use) and you will have to restart the computer. Declare them within functions or classes and close them when their use is over.

If you are doing databases, or any file handling for that matter, never put file I/O into classes. It will simply complicate debugging, and you may also open a single file multiple times with different objects at the same time, which is definitely not what we want. Classes should only be used when you have minimal file I/O, and even then I recommend plain functions.

ifstream stands for input file stream and can only be used for input. ofstream stands for output file stream and can only be used for output.
Any variable declared with ifstream, ofstream or fstream is called a file handle.

That wraps up the very simple stuff. You will not use it much unless you are working with text files or on a small project. Now we will move on to fstream, which is more flexible and will be used the most. It's easy if you look at it logically.

Dynamic file access
The file streams we discussed have a limitation: they can only do input or output, one at a time. fstream provides us with a way to read and write randomly without having to close and reopen the file. It has great flexibility; it involves a bit more work, with returns being tenfold. Hey, you can't have everything for free (what fun would it be if everything was handed to you?).

Let's look at how a file can be opened with fstream.

Program Example 1.3
void main()
{
  fstream file;
  file.open("file.ext", ios::in | ios::out);
  // do an input or output here
  file.close();
}

Notice anything new? The ios::... values are attributes which define how a file should be opened.

List of Attributes
ios::in        open file for reading
ios::out       open file for writing
ios::app       open for writing, add to end of file (append)
ios::binary    binary file
ios::nocreate  do not create the file, open only if it exists
ios::noreplace open and create a new file only if the specified file does not exist
ios::trunc     open a file and empty it (boom, all the data is gone, if any)
ios::ate       go to the end of the file instead of the beginning

Notes
The default mode is text mode when a file is opened.
By default, if a file does not exist, a new one is created.
Multiple attributes can be specified at a time, separated by |.
All attributes start with ios:: (as if you didn't notice it, but still let me hammer it in).
These attributes can be used with ifstream and ofstream too, but of course ios::in won't work with ofstream, and similarly ios::out won't work with ifstream.
File byte locations start from zero (like in arrays).

Now we know how to open a file with fstream. Cool, now come read and write.
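As a quick sanity check of the attributes above (ios::trunc versus ios::app), here is a small sketch written against the modern standard headers (<fstream> and std::) rather than the old fstream.h; the file name is arbitrary:

```cpp
// A quick check of ios::trunc vs ios::app from the attribute list above.
#include <fstream>
#include <string>

std::string append_and_read()
{
    { std::ofstream f("modes.txt", std::ios::trunc); f << "one\n"; }  // empty the file first
    { std::ofstream f("modes.txt", std::ios::app);   f << "two\n"; }  // append goes to the end
    std::ifstream in("modes.txt");
    std::string all, line;
    while (std::getline(in, line))
        all += line + "|";
    return all;   // "one|two|"
}
```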
Reading and Writing with fstream
The two functions are exactly similar in usage and simple to understand:

file.write(char*, int);  // for writing
file.read(char*, int);   // for reading (gee... :D)

Until now we have only been able to write and read strings and ints. Most databases will want to store data in structures or classes. If we had to write a separate function to split and write each member... brrr, horrors. C++ is very nice and provides us with a way to write entire classes or structures with little work:

struct x
{
  int i;
  char a;
  char s[10];
} data;

file.write((char*)&data, sizeof(x));
file.read((char*)&data, sizeof(x));

Hold on, what the heck is the char* thingy? That's called typecasting. Quite an interesting subject, actually. It means converting one data type to another. Here the struct's address is cast to a char pointer and passed to the function. If anybody wants to know more, well, tell! The rest is simple: you pass the size of the structure, which can be found using sizeof().

Eg:
cout << "\nInt:"   << sizeof(int);
cout << "\nFloat:" << sizeof(float);
cout << "\nChar:"  << sizeof(char);

Now, instead of going more yack yack, I'll show you an example which should explain a lot more.
Sigh. "A picture is worth a thousand words, source is worth a million bytes."

Example Program 1.4
#include <fstream.h>

struct contact
{
  char name[10];
  int age;
};

struct address
{
  char city[10];
  char country[10];
};

class DtbRec
{
  private:
    contact cnt;
    address adr;
  public:
    void getdata();
    void dispdata();
};

/* I will give you a bit of work: write the functions getdata() and
   dispdata(). It's easy and not worth me bothering with now :-} */

void DtbRec::getdata()   // get user input
{
}

void DtbRec::dispdata()  // display to screen
{
}

// This program is not tested, have fun fixing errors, if any
// I am a taos programmer so dont expect anything major
// Typing mistakes are not errors
// This was done off the cuff and not even compiled once
void main()
{
  DtbRec xRec;    // temp rec
  fstream fl_h;   // file handle
  char ch;
  int num;

  fl_h.open("database.dtb", ios::in | ios::out | ios::binary);
  do
  {
    cout << "\n\nFstream Dtb\n"
         << "\n1.Add records"
         << "\n2.View records"
         << "\n3.Modify records"
         << "\n4.Exit"
         << "\n\tEnter Choice:";
    cin >> ch;

    if (ch == '1')               // we are dealing with chars, not ints
    {
      // Adding a rec
      fl_h.seekp(0, ios::end);   // will discuss this later; this sets the file
                                 // write pointer to the end of the file
      xRec.getdata();            // get some data from the user
      fl_h.write((char*)&xRec, sizeof(DtbRec));
    }
    else if (ch == '2')
    {
      // View recs
      fl_h.seekg(0, ios::beg);   // will discuss this later; this sets the file
                                 // read pointer to the beginning of the file
      num = 0;
      while (!fl_h.eof())        // will discuss this later; it checks if the
      {                          // file's end has been reached
        num++;
        fl_h.read((char*)&xRec, sizeof(DtbRec));
        cout << "\nRecord No[" << num << "]\n";
        xRec.dispdata();         // show the user all the data present
      }
    }
    else if (ch == '3')
    {
      // Modify me colors ;-)
      cout << "Enter the record no(starts at 0):";
      cin >> num;
      fl_h.seekg(num * sizeof(DtbRec), ios::beg);  // move the read pointer to where the rec is
      fl_h.read((char*)&xRec, sizeof(DtbRec));     // read it
      xRec.dispdata();                             // show the info
      xRec.getdata();                              // let the user change the info
      fl_h.seekp(num * sizeof(DtbRec), ios::beg);  // move the write pointer this time
      fl_h.write((char*)&xRec, sizeof(DtbRec));    // overwrite with new info
      // yahoo, modification done. I have seen too many people who just can't
      // get modification. It's so simple, I just can't get why they can't ;-)
    }
  } while (ch != '4');

  fl_h.close();   // close the file
  cout << "\nEnd of Program";
}

Got more doubts now than were cleared? Good, doubts are the first step towards knowledge. (If you happen to be of the opinion that a teacher should explain everything, sorry, it's just not my style. I want thinking students, not tape recorders.)

Note:
Records start at byte zero. I said this once and I will say it again. (Don't forget user-friendliness though.)
read() and write() can be used with both classes and structures in the same way.
Always know where you are before you start reading or writing to a file, or move the pointer to the area of work.
The read pointer and write pointer are separate, so move the correct pointer if you want fstream to work.
Always close the file when its use is over.
Always perform checks when necessary or appropriate. fstream has a number of built-in ones for you :-)
Never open a new file with a file handle without calling close().
Remember, with structs and classes the size is always fixed, so finding the exact location of a record can be done mathematically.
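Here is a condensed, standard-C++ sketch of Program 1.4's core trick: dump a struct to disk with write((char*)&rec, sizeof rec) and read it straight back. The struct and file name are stand-ins, not the DtbRec above, and it uses <fstream> and std:: rather than the old fstream.h:

```cpp
// Write a record, read it straight back, and return the age that came
// off disk: demonstrates the struct round-trip used in Program 1.4.
#include <fstream>
#include <cstring>

struct Rec { char name[10]; int age; };

int roundtrip(const char* who, int age)
{
    Rec out{};                                   // zero-fill, padding included
    std::strncpy(out.name, who, sizeof(out.name) - 1);
    out.age = age;

    std::ofstream w("demo.dtb", std::ios::binary);
    w.write(reinterpret_cast<char*>(&out), sizeof(Rec));
    w.close();

    Rec in{};
    std::ifstream r("demo.dtb", std::ios::binary);
    r.read(reinterpret_cast<char*>(&in), sizeof(Rec));
    return in.age;                               // the age read back from disk
}
```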
** ------------------------- PART II -------------------------**

Random File Access
With fstream we work with two file pointers, read and write. By moving these pointers we can access any part of the file at random. See below:

Reading
1. seekg();  // move the read pointer, in bytes
2. tellg();  // returns where the read pointer is, in bytes

Writing
1. seekp();  // move the write pointer, in bytes
2. tellp();  // returns where the write pointer is, in bytes

Both tellp() and tellg() take no parameters and return the pointer location in bytes.

Eg.
int rp = file.tellg();
int wp = file.tellp();

The seek functions take two parameters.

Eg.
file.seekg(0, ios::beg);

List of seek attributes
ios::beg  move from the beginning of the file
ios::end  move from the end of the file
ios::cur  move from the current location in the file

Note: ios::beg is the default. I recommend you still pass it wherever you use it. Negative numbers can be used with ios::cur and ios::end (not ios::beg; figure out why).

Part II
Databases & Security (Based on a true story of a database)

Intro
This is another area people find difficult. I figured it out myself without any help whatsoever. I also learned file handling a year before my classmates by looking at an elder's text which contained only vague descriptions, so I guess my eagerness to learn had something to do with it. I have always loved computers and programming. I fell in love with QBasic the moment I saw it. C++ stole my heart then. Ah, programming is a part of my soul. And even when I got my own textbook, nothing whatsoever was mentioned about random file access except a short and vague description of the two functions and the options. There was no reference whatsoever about locating individual records. I can't guess why! Reminds me of a saying: "when old, one forgets how it is to be young". That may be a cause :D. [Oh, this reminds me: almost every comp guy/gal I know works late at night. I do too when I don't have much time in the mornings.
We should stop this. Never work in the dark without enough light, nor stay up too late. If your eyes feel slightly sleepy, SLEEP. No later than 1:30 AM at the very most. I am thankful I have been able to do most of this.]

Locating Individual Records
The problem area. Let's start at the beginning. Ever seen a graph paper? It has a lot of tiny, small, medium and large squares, each built up from the tiniest square. What's that got to do with files? Well, when you use databases you will almost always use structures, and they are always the same size (in that particular database). That means, like in the program in Part I, all the records are held in that one class, and when you write it to a file it will always be the same size whether you filled it or not. It's like your water bottle: whether it's full, half full or even empty, it will always take up the same space in your bag. Get it?

How do you find the size of any data type? Easy:

Size_Of_Data_Type = sizeof(Data_Type);  // sizeof is a reserved keyword in C++

OK, how will that help us find a record? Well, all records are the same size, and they start at zero. They can be thought of as an array, and the concept is similar to how you use normal pointers.

Eg. A struct is 20 bytes in size. The first rec starts at byte zero, the second at byte 20, the next at byte 40... and so on.

Eg. Program 2.1
file.seekg(Rec_no * sizeof(Data_Type), ios::beg);
file.read((char*)&rec_var, sizeof(Data_Type));

file.seekp(Rec_no * sizeof(Data_Type), ios::beg);
file.write((char*)&rec_var, sizeof(Data_Type));

Simple as that.

Note: This can be used with ios::beg, ios::cur and ios::end. When read() or write() is called, the appropriate pointer is moved automatically to the next record, or rather by the number of bytes passed to it (that's the sizeof(Data_Type)).

I think now you get random access. How about finding the total number of records in a database? Eg.
Program 2.2
file.seekg(0,ios::end); //move to the end of the file
int fl_sz = file.tellg(); //tell me where you are, partner ( pointer )!
int total_no_rec = fl_sz/sizeof(Data_Type);
file.seekg(0,ios::beg); //move to the beg of the file
cout<<"There are "<<total_no_rec<<" record(s) in this database ";

We divide the total file size by the size of the structure to find the number of records. Simple maths, eh? That's it — you have full knowledge of how to handle fstream. Now all you need is creativity, so use it. I won't tell you all, so think. Knowledge can be gained from others but wisdom only from yourself (and from God). Ah, I love this. Here is some work for you.

- Re-write the example program to include random reading, writing, and modification on multiple databases. [Put in error checks and don't allow the user to read or modify a record which does not exist.]
- Find a way to write a whole single-dimensional array of a struct to a file with just one write statement. [No hints — re-read this paper if you have to.]
- Deletion. Add a way to allow a person to delete an existing record. [One way to do it: copy all the records except the one to be deleted to a temp file, open the database with ios::trunc and copy the records back from the temp file.]

Error Checking

These are some functions that help to keep track of errors in your database, if any.

Error Functions
good() returns true if the file has been opened without problems
bad() returns true if there were errors when opening the file
eof() returns true if the end of the file has been reached

All are member functions and are used as file_handle.error_function()

Eg.
char ch;
ifstream file("kool.cpp",ios::in);
if(file.good())
cout<<"The file has been opened without problems";
else
cout<<"An error has happened on opening the file";
while(!file.eof())
{
file>>ch;
cout<<ch;
}

You should have understood what those functions are for. Implement them when you do your file handling.
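Pulling Programs 2.1 and 2.2 together, here is a compilable sketch of record-level random access. The names (Rec, write_rec, read_rec, total_recs) are mine, chosen for illustration, not from the original text:

```cpp
#include <fstream>
#include <cstring>

// A fixed-size record, as in the tutorial's databases.
struct Rec {
    char name[20];
    int  age;
};

// Program 2.1's write: jump to record rec_no's byte offset, then write.
void write_rec(std::fstream& f, long rec_no, const Rec& r) {
    f.seekp(rec_no * sizeof(Rec), std::ios::beg);
    f.write(reinterpret_cast<const char*>(&r), sizeof(Rec));
}

// Program 2.1's read: jump to the offset, then read one record.
void read_rec(std::fstream& f, long rec_no, Rec& r) {
    f.seekg(rec_no * sizeof(Rec), std::ios::beg);
    f.read(reinterpret_cast<char*>(&r), sizeof(Rec));
}

// Program 2.2's arithmetic: file size divided by record size.
long total_recs(std::fstream& f) {
    f.seekg(0, std::ios::end);
    return static_cast<long>(f.tellg()) / static_cast<long>(sizeof(Rec));
}
```

Because every Rec occupies exactly sizeof(Rec) bytes on disk, record N always lives at byte N * sizeof(Rec), which is the whole basis of random access.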
good() should be called after opening a file to see if it has been opened properly. bad() is the opposite of good().

Encryption

Aha — any decent database program must encrypt its files. Just open up any of the files from the example programs with Notepad and you will be able to see the text. We definitely don't want people to see that. With encryption you can scramble the text so people can't read it. There are a lot of encryption schemes and a lot of other methods for protecting data; I will just show you a couple of methods.

Binary Shift
The simplest of all. In this you increase or decrease each char by a number. (To decrypt, shift back by the same amount.)

void b_shift(char *s,int n)
{
for(int i=0;s[i]!=0;i++)
{
s[i] += n;
}
}

XOR
Another type of encryption. Exclusive-OR encryption is hard to break by brute force alone, though it is susceptible to patterns — a weakness you can reduce by first compressing the file (so as to remove patterns). While extremely simple, it is more than strong enough for our purposes. (Note: the function can't actually be named xor — that is a reserved alternative token in C++ — so it is named xor_str here.)

int xor_str(char *string, char *key)
{
int l=0;
for(int x=0; string[x]!='\0'; x++)
{
string[x]=string[x]^key[l];
l++;
if(l>=strlen(key))
l=0;
}
return 0;
}

Encrypt an entire structure
(A struct's bytes may legitimately contain zeros, so we must loop over all sz bytes rather than stop at '\0'.)

void struct_enc_xor(char *str,char *key,int sz)
{
int l=0;
for(int x=0; x<sz; x++)
{
str[x]=str[x]^key[l];
l++;
if(l>=strlen(key))
l=0;
}
}

Use it exactly how you used read().
Eg.
struct_enc_xor((char*)&cRec,"password",sizeof(DtbRec));

Exit
Encryption is a very interesting topic. A lot of research has gone into it. If you are interested, there are lots of pages on the web which describe encryption and various schemes. Then there is steganography, the art of hiding info in any file — like text within a bitmap or an MP3. Have fun researching.
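Since XOR is its own inverse, running the routine a second time with the same key decrypts. Here is a byte-wise version — my own sketch, folding the string and struct variants above into one function — that makes the round trip explicit:

```cpp
#include <cstring>
#include <cstddef>

// Sketch: XOR sz bytes against a repeating key. Works on raw bytes, so it
// handles whole structs (whose bytes may contain zeros) as well as strings.
// Calling it twice with the same key restores the original data.
void xor_buf(char* buf, const char* key, std::size_t sz) {
    std::size_t klen = std::strlen(key);
    for (std::size_t i = 0; i < sz; ++i)
        buf[i] ^= key[i % klen];  // cycle through the key
}
```

Encrypt a record with xor_buf before write() and decrypt with the same call after read(), and the file on disk stays scrambled.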
https://www.daniweb.com/software-development/cpp/threads/6542/fstream-tutorial
We are developing an example application in five steps. We completed the first step in Part 1, the second step in Part 2, the third step in Part 3, and the fourth step in Part 4. Today we complete the final step, “Connect the Components.” Then we will deploy the application and test it. Here is the roadmap for the entire development process: - Create a Search Application Page. In this step, we design the user interface for the page that the modal dialog displays, and we write code to perform the search. - Add JavaScript for the Modal Dialog. In this step, we create a JavaScript file with code to open the search application page in a popup modal dialog. The function that opens the modal dialog specifies a callback function that returns information about the document that the user has selected to the page that opened the dialog. - Design the InfoPath Form. In this step, we create a form template that includes a Search button, and we write form code that notifies the form’s host when a user clicks the button. - Create a Web Part to Host the Form. In this step, we write a custom Web Part that uses an XMLFormView control to host the InfoPath form. - Connect the Components. In this step, we bolt everything together. We write Web Part code that handles the NotifyHost event, code that invokes the modal dialog, and code that uses the dialog result to update the InfoPath form. Step 5: Connect the Components At this point we have successfully created a browser-enabled InfoPath form, a Web Part to host the form, a search application page to find a document, and a modal dialog to display the search page. The only task left is to bolt these components together. We can start by writing code that makes the XMLFormView control respond to NotifyHost events raised by the form that it hosts. A NotifyHost event occurs when the form’s Search button is clicked. We want the host to respond by popping up the modal dialog that displays the search application page. 
Then, after the dialog closes and the result of the user's search is posted back to the Web Part page, additional Web Part code should access the main data source for the InfoPath form and update the hyperlink field by setting its value to the URL of the document that the user has selected. Handling the NotifyHost event When we created the InfoPath form, we gave it a Search button, and we wrote a button click event handler that calls the XmlForm object's NotifyHost method. The NotifyHost method accepts a string argument, and we used this argument to pass the XPath for the form field that we want to update. What we need to do now is to get the form's host ready to listen for NotifyHost events and respond by displaying a Search dialog. Under the hood, we also want to extract the XPath for the form field and put it aside for later use. We will start by giving the XMLFormView control a NotifyHost event handler. In the handler, we want to do two things:
- Extract the XPath from the Notification property of the NotifyHostEventArgs object and save it in the Web Part's view state.
- Add JavaScript to the page so that the next time the page renders in the browser our modal dialog pops up.
To add a NotifyHost event handler In Visual Studio, open the InfoPathFormHost.cs file in the ModalHost.WebParts project. Navigate to the CreateChildControls method. Immediately before the line of code that adds the XMLFormView control to the Web Part controls collection, copy and paste the following code: // Add a handler for the NotifyHost event. this.xmlFormView.NotifyHost += new EventHandler<NotifyHostEventArgs>(xmlFormView_NotifyHost); Just below the CreateChildControls method, define the xmlFormView_NotifyHost method by copying and pasting the following code: // Handles the event that fires when the button // in the InfoPath form is clicked. void xmlFormView_NotifyHost(object sender, NotifyHostEventArgs e) { try { // Check if the argument contains the xpath.
if (!string.IsNullOrEmpty(e.Notification)) { // Save the InfoPath field xpath in the view state so it can be used later. ViewState["fieldXPath"] = e.Notification; // Construct a JavaScript function to invoke the modal dialog. StringBuilder functionSyntax = new StringBuilder(); functionSyntax.AppendLine("function popupparams() {"); // Pass the current SharePoint web url as an argument. functionSyntax.AppendLine("var url ='" + SPContext.Current.Web.Url + "';"); // Call the JavaScript function to popup the modal dialog. functionSyntax.AppendLine("popupmodalui(url);}"); //Ensure the function popupparams is called after the UI is finished loading. functionSyntax.AppendLine("_spBodyOnLoadFunctionNames.push('popupparams');"); //Register the script on the page. Page.ClientScript.RegisterClientScriptBlock(typeof(Page), "ModalHostScript", functionSyntax.ToString(), true); } } catch { throw; } } In the event handler we create and register a script block that defines the JavaScript function popupparams. This function calls the popupmodalui function that we defined in Step 2 and passes in the URL for the current page. Then the event handler ensures that popupparams executes when the page body loads by adding it to the array _spBodyOnLoadFunctionNames. Note: _spBodyOnLoadFunctionNames is a global variable that is defined in INIT.debug.js as an array of functions that are registered when a SharePoint page loads. You cannot directly add a function to the onload event for the body of the page because content pages in SharePoint are based on a master page that contains the "body" element. In order to work around this limitation, SharePoint provides indirect access to the onload event through the "_spBodyOnLoadFunctionNames" array. When the body of the page is loaded, the onload event handler executes each function whose name is contained in this array. To finish wiring up the event handler, we need to register ModalHost.js on the page.
ModalHost.js calls into the JQuery library, so we must register that as well. A good place to do the registration is in the handler for the Web Part's OnLoad event. To register the script files In the InfoPathFormHost class, just above the CreateChildControls method, paste the following code that overrides the OnLoad event handler: protected override void OnLoad(EventArgs e) { base.OnLoad(e); //Register scripts to load and display the modal dialog. ScriptLink.Register(this.Page, "ModalHost.ApplicationPages/scripts/jquery-1.5.min.js", false); ScriptLink.Register(this.Page, "ModalHost.ApplicationPages/scripts/ModalHost.js", false); } Updating the Field in the InfoPath Form The final task is to set the value of the URL field in the form to the URL of the document that the user selects before closing the modal Search dialog. It may help if we first review the event flow: - A user clicks the Search button on the InfoPath form, raising the NotifyHost event on the host Web Part. - Code for the Web Part responds to the event by storing the XPath for the form field in the view state and then invokes JavaScript that causes the modal dialog to pop up. - The modal dialog displays an application page with an interface that allows the user to search for and select a document. - When the user selects a document, logic in the code behind the search application page places the URL for the selected document in a hidden text box on the application page and enables the dialog's OK button. - The user clicks OK. The handler for the button click event (ModalOk_click in ModalHost.js) extracts the URL from the hidden text box and passes it as an argument in a call to SP.UI.ModalDialog.commonModalDialogClose. - The commonModalDialogClose method closes the dialog and passes the URL along to the dialog's callback function (closecallback in ModalHost.js). - The callback function stores the URL in a hidden text box on the Web Part.
Therefore, to complete the final task we need code that retrieves the URL from the hidden text box on the Web Part, retrieves the XPath for the field from the view state, and uses both pieces of information to update the value of the field. If the user exits the modal dialog by clicking the OK button, the callback function forces a postback that refreshes the page that hosts the form. Therefore, a good place to put code that updates the form is in the Web Part’s OnPreRender method. To add code that updates the URL field In Solution Explorer, right-click the node for the ModalHost.WebParts project. Then click Add Reference. In the Add Reference dialog, browse to :\Program Files\Microsoft Office Servers\14.0\Bin\Microsoft.Office.InfoPath.dll. Then click OK. If Visual Studio prompts a warning message, click Yes and continue adding the reference to the project. Open InfoPathFormHost.cs and add the following using statements at the top of the file: using System.Xml; using System.Xml.XPath; Navigate to the OnPreRender method. Add the following code immediately after the line that sets the XMLFormView control’s EditingStatus property: // Retrieve the return value from the modal dialog. this.receivedValue = this.hiddenText.Text; // Check if the received value is not empty. if (!string.IsNullOrEmpty(this.receivedValue)) { //Update the form datasource with the new value. this.UpdateFormMainDataSource(); } Define the UpdateFormMainDataSource method by pasting the following code after the OnPreRender method: // Updates the form's datasource with the received value. private void UpdateFormMainDataSource() { if (ViewState["fieldXPath"] != null && ViewState["fieldXPath"].ToString() != string.Empty && !string.IsNullOrEmpty(this.receivedValue)) { // Ensure the XMlFormView has access to the form's underlying datasource. this.xmlFormView.DataBind(); // Retrieve values from the ; separated string. int index = this.receivedValue.IndexOf(";"); if (index != -1) { // Get the document url. 
string url = this.receivedValue.Substring(0, index); // Get the document name. string text = this.receivedValue.Substring(index + 1); // Pass the target InfoPath field xpath and the received values. this.SetFormFieldValue(ViewState["fieldXPath"].ToString(), url, text); } } } Define the SetFormFieldValue method by pasting the following code after the UpdateFormMainDataSource method: // Sets the target InfoPath form field value with the received value. private void SetFormFieldValue(string xpath, string url, string value) { // Create an XPathNavigator positioned at the root of the form's main data source. XPathNavigator xNavMain = this.xmlFormView.XmlForm.MainDataSource.CreateNavigator(); // Create an XmlNamespaceManager. XmlNamespaceManager xNameSpace = new XmlNamespaceManager(new NameTable()); // Add the "my" namespace alias from the form's main data source. // Note: Replace the second argument with the correct namespace from the form template that you are using. xNameSpace.AddNamespace("my", ""); // Create an XPathNavigator positioned on the target form field. XPathNavigator formfield = xNavMain.SelectSingleNode(xpath, xNameSpace); // Set the form's hyperlink field to the received document url. formfield.SetValue(url); if (formfield.HasAttributes) { // Set the hyperlink's display text with document title. formfield.MoveToFirstAttribute(); formfield.SetValue(value); } } Replace the second argument passed to the AddNamespace method with the correct namespace for your form template. To get the correct value, open the form template in Design mode, click Show Fields on the ribbon to display the Fields task pane, right-click myFields, and then click Details. Press F5 to build and deploy the solution. This concludes the final phase of application development. Testing the Application Now it is time to be sure that everything works as expected. 
To test the application Navigate to your SharePoint website in your browser, and open the Web Part page where you previously added the InfoPathFormHost Web Part. The form should display. On the Contoso Resource Planning form, click Search. A modal dialog should open. The OK button should be disabled. Type a search term in the text box, and then click the magnifying glass. The names of any matching documents should fill the results grid. Select a document by clicking Select. The row containing the selected document should be bold, and the OK button should be enabled. Click OK. The dialog should close, and a link to the selected document should appear in the URL field of the form.

"Thank you very much for this great post, it saved me. Thanks."
"Hello, can you invoke an event when the data sent from the popup reaches InfoPath?"
https://blogs.msdn.microsoft.com/sharepointdev/2011/05/21/open-a-sharepoint-modal-dialog-from-an-infopath-form-part-5-of-5-vivek-soni/
The discussion is about multi-statement lambdas, but I don't want to discuss this specific issue. What's more interesting is the discussion of language as a user interface (an interface to what, you might ask), the underlying assumption that languages have character (e.g., Pythonicity), and the integrated view of semantics and syntax of language constructs when thinking about language usability. I only wonder why nobody has yet complained about functions with several statements (as surely are common in Python), when multi-statement *anonymous* functions are considered unusable. In the following, "a", "b" and "c" are arbitrary statements.

(defun foo (x) a b c)
(lambda (x) a b c)

Can you tell the difference in complexity, as well as in usability, for the average user, for the weak user, and for the expert user? The problem is interactions between whitespace-sensitive syntax and anonymous functions. A lambda is an expression, so it's expected to be used within another expression. Then you need to close off the other expression, so you get code like this:

map(lambda point: print "Point.x = " + point.x
print "Point.y = " + point.y
, list_of_points)

The dangling comma and further arguments are just plain ugly, and would be confusing to any beginner. Plus, if you dedent too far you break the indentation of the containing block and end up with code you didn't really mean (but which might be valid anyway!) Goodbye Python, hello Fortran. This is always a problem with whitespace-significant syntax, but without lambda you don't have cases where the next character is a single comma all alone, which is easier to miss than an identifier. By contrast, a def is always a statement, so it doesn't break up any expression. When you dedent, you start a new statement.
Nesting is natural, there are no ambiguity problems, you don't have to break your train of thought to wonder "What is this block doing?", and you don't get dangling tokens. You don't get these problems with Scheme (or C or Java or Ruby) because they have explicit terminators for the lambda. Even then, it's still somewhat awkward: I hate looking at anonymous inner classes in Java and seeing }, value).methodCall() at the end. As much as I like lambda expressions, Guido has a point. That is, the syntax and semantics of a language have a lot of influence over what constructs are reasonable in that language. If you think about it, this is similar to what the Lisp/Scheme people have been saying all along: Scheme and Lisp are so flexible simply because they have essentially no syntax. This has both positives and negatives. The negatives are obvious: harder to get used to, everything looks the same (hard to tell different constructs from each other). The positives are: everything looks the same (you don't have to remember 10 different syntaxes for 10 different constructs), meta-programming is easy, and as a result, the syntax is extremely flexible. When I say that the syntax is flexible, I mean that it easily allows new constructs without damaging readability, from a syntactic point of view. Every new construct looks the same in Lisp/Scheme, unlike in Python, or even Java, where there are different methods of determining block structure (Python has parentheses and whitespace, Java has parentheses and curly braces). One of the lessons learned is: if you make syntax a major part of your language, you have to design all constructs into your language very carefully. In the case of Python, lambdas were added later because programmers wanted them, but the syntax of lambdas wasn't designed into the language very well. Guido could design lambdas to be more Pythonic if he wanted to, but I get the feeling that he doesn't really want to.
In the end, it's not a big loss for Python if Guido keeps/improves its support for the creation of lexical closures (unfortunately, code locality will suffer). "When I say that the syntax is flexible, I mean that it easily allows new constructs without damaging readability, from a syntactic point of view." In other words, damage may never take syntactic readability below 0. :-) "If you make syntax a major part of your language, you have to design all constructs into your language very carefully." For sure. In Nickle, our anonymous λ looks like e.g. (int func(int x){return x + 1;}) where func is a keyword. This syntax is verbose, ugly, and fits reasonably into a language that is supposed to look syntactically like C. It took us several turns to get there. I'm not sure if you're joking or not, but I don't find this syntax that ugly — quite simple, actually: even a beginner can understand it, thanks to your use of func instead of lambda as the keyword. Maybe I'm too used to C though :-) Actually I was sort of half-joking, as usual. I think func makes way more sense than lambda as a keyword in Nickle. But with the return keyword and all that syntactic noise, it makes for pretty verbose anonymous functions. Of course, I was writing in the explicitly static-typed variant of Nickle. If you omit the types (which also suppresses static typechecking—no type inference), you get the simpler-looking lambda (func(x) { return x + 1; }) All in all, though, let me just say that if that syntax looked natural to you as a C programmer, I'm proud: that was one of the explicit Nickle goals. Check out our first-class continuations sometime, or our threads, or our twixt(;) statement. We're trying really hard to help C programmers learn how to get at some of the cool stuff from the last 20 years. So thank you for the positive feedback! It is almost JavaScript, with func instead of function.
Picture some Becket-esque play featuring a CLR, a JVM, Perl, Bash, Parrot, Emacs(?!?) etc., and give them dialogue to match, based on the feel of sample code written in them. Just six weeks remaining to 01Apr, so sharpen those burritos! The runtime, and, ultimately, the operating system? I don't think so... Try again! How about processes, because programming is ultimately about us (programmers) expressing and creating the processes in we create in our minds (the machine and/or OS is, ultimately, irrelevant). "part of the language's user interface" I never thought about that, but i think it's well put: a programmer (user) interfaces with a language via its keywords and syntatic constructs. A user interface, indeed. "This is also the reason why Python will never have continuations, and even why I'm uninterested in optimizing tail recursion." so, no true lambda, no real proper lexical scoping, no continuations and no proper recursion... well, except for lexical scoping, C++ doesn't feature any of this either, and seems to be doing fine. But, then again, C++ is a far more flexible language than Python... at least, Ruby features them... Since when Python has no lexical scoping? it's broken - the following code (to generate a "counters" / "generators") doesn't work: def make_counter(): x = 0 def counter(): x = x + 1 return x return counter i understand that is because you only have write access to the outermost (global) and innermost scope (in this case x = x+1 fails because it cannot update the x in the enclosing scope). the above may seem obscure if you're not used to using closures, but it's a basic pattern in, say, scheme. as someone has said already, i think, this doesn't matter in that objects encapsulate state in a very similar way to closures, and python has objects, but if you try to do "functional" programming in python you get bitten by the above very quickly. this doesn't work: a = 5 def foo(): print a a = 10 foo() # prints 10! 
_lexical_ scoping means the scoping is the textual area where things were first defined! foo was defined when a was 5, thus it should print the value of a when it got defined. It's hands down broken... I guess Scheme is broken too, then:

(define a 5)
(define (foo) (display a))
(define a 10)
(foo) ; -> prints 10

Arguably, the Scheme version is worse, as it looks like defining a new variable rather than assigning to an existing one. Sorry, but this is nonsense. The global variable a does not turn into a constant just because you access it in foo. It remains a name in the global scope and you can rebind it freely there. The binding of a to foo is a name binding, not a value binding. It is also irrelevant to the compiler whether a is already defined somewhere. Since a is not defined in foo's scope, the compiler concludes that it has to load a global. This can be shown easily by analysing the disassembled code:

import dis
dis.dis(foo)
2 0 LOAD_GLOBAL 0 (a)
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE

I never really gave much thought to the subtleties of static lexical scoping, and I guess Haskell's staticness and immutability have affected me somewhat as of late...

Prelude> let add2 x y = x + y + z
:1:23: Not in scope: `z'
Prelude> let z = 10
Prelude> z
10
Prelude> let add2 x y = x + y + z
Prelude> add2 1 2
13
Prelude> let z = 20
Prelude> add2 1 2
13

The definition of add2 closes over z (i.e. it has its own environment with z=10 in it). It's independent of any other bindings of z. The definition of add2 is not in the scope of let z = 20, so it should not be affected by it. Just in case this is also an issue: the let statements bind variables; they do not assign them, so the second z is a new, fresh variable. To put it another way, in let z = 20 in add2 1 2 you should be able to alpha-rename z to something else (replacing all — in this case, no — occurrences in the body). Thus this code should behave the same as let quux = 20 in add2 1 2.
This all makes sense; you don't want to have to know what variable names a function used so that you don't accidentally rebind them. This is why dynamic scoping (which would have resulted in 23 in the above example) is usually considered a very bad idea as a default scoping convention. Sorry, I should have been clearer about the fact that I actually really like the behaviour I posted about :-) but am still always confused about how there are examples which, to me, appear to contradict lexical scoping. Like what is going on with the Scheme example above? (I manage to confuse myself easily - slow loading link-with-anchor.) Assuming you are referring to "Mutability v. Scope", the issue there is that Scheme defines 'define' to essentially be set! (although the first use is binding). It's becoming more and more clear to everybody that having an explicit reference type (or just being more explicit about what is a variable v. a reference) helps clarify this and many other issues (e.g. co- and contra-variance with subtyping). There is some confusion between the behavior of define vs let in Scheme—let always binds, define (which should only be used once) binds only on first use. My personal opinion is that define should throw an error if used twice on the same name, as its intended use is to bind global values (procedures or global vars), for example:

(define a 5)
(define (foo) (display a))
(define a 10) ; <- throws an error
; but these are okay:
(let ((a 10)) (foo))
(define (bar a) (display a))

The problem, of course, is that define is binding a value in the global environment. When redefining a value, the meaning is unclear. Since the old value isn't simply masked by a closer environment, the Scheme example makes a little more sense. Redefining doesn't mask the old value: it replaces it. Perhaps there should be a thread explaining scope, binding, and assignment and their implementation in various languages?
That would truly be a god-send, assuming one could eventually get all the bickering down to a concise set of explanations / definitions :-) :-) I think the programming community is pretty well agreed on these issues. If the community is so agreed, then why are there such long threads on things like LtU? I'm just saying that while there might be a core truth, it appears to me to very often be obfuscated. And, actually, who can even give me a solid definition of referential transparency that will meet with only the chirping of crickets? I, for one, believe that argument on a forum such as LtU, although sometimes frustrating, is generally a good thing. If you can wade through all of the discussion, you will end up with a broader understanding of the topic than if you only heard one side of the issue. You see both the positives and negatives of different viewpoints, instead of getting the one-sided viewpoint you sometimes see in a textbook. (Not that textbooks are intentionally one-sided.) Obviously argument for the sake of argument isn't good, but argument for the sake of realizing a better definition or description is good. I'd agree; I always learn a lot from the discussions (although sometimes they go on way past what I can muster parsing). Sometimes I even learn that I'm not the only one who is - apparently - easily confused by terms like 'referential transparency' and 'scoping'. :-) Section 4.5.2 of CTM points out three moments of a variable's life. It doesn't address the idea of scope, but it's probably useful for a discussion on the subject of binding and assignment. The upshot in Oz (and Alice) is that the three moments can be separated. I would think the reason for Scheme's behavior in this situation is incremental development. It's nice to just be able to rebind an erroneous function when you discover it, and not have to recompile the whole program.
I'm not sure if this is possible in Scheme, but in Common Lisp, when you encounter an error you can sometimes just rebind some variables and continue the program from the point of the error. This could save a lot of time if you have to run the program for a while before the error shows up. Note also that neither SML nor Haskell has an incremental development environment, and therefore wouldn't benefit from this functionality. (I wonder what happens in OCaml?) If you are talking about a REPL, then there are several SML implementations that provide one. Probably the best known is SML/NJ. I usually draft new SML code incrementally in XEmacs, sending individual definitions to the inferior SML process running SML/NJ. You can find some of my comments about O'Caml and Tuareg Mode for EMACS here. Briefly, O'Caml is quite unusual among statically-typed languages inasmuch as it has a "toploop" (REPL), a bytecode compiler with a time-travel debugger, and a native-code compiler. So it's an everyday experience for an O'Caml programmer to have the same "feel" of trying out snippets of code in real time, as with Python or Ruby or Lisp or Smalltalk, without the nasty edit-compile-link-test-crash-debug cycle commonly associated with statically-typed languages. However, the compilers are still there waiting for you if/when you want them, and as I pointed out in the other thread, you can even use omake -P to do continuous builds as you save source files. After having put far too much energy and verbiage into far too many static/dynamic flame wars, I've finally become convinced that interactive development vs. batch is the real killer app with respect to productivity in all but the most stringent "assured software" environments. Although there are other statically-typed languages with interactive modes (e.g. GHC's ghci), O'Caml's toolset seems to be unusually consistent in its treatment of modules, separate compilation, scoping and binding rules, etc.
so that moving from the interactive environment to the compiled environment is effectively trivial. For example, it would be hard for me to overstate the extent to which O'Caml's separate compilation system being based on the module system is a win: signatures and modules in the toploop really do simply become the contents of .mli and .ml files, without change. Couple that with omake's automatic dependency tracking, and it's just ridiculously easy to turn your off-the-cuff exploration into a standalone bytecode or native binary. Great stuff and highly recommended. I agree that interactive development does boost productivity. The basic reason for this is that you get feedback much faster. Unit tests have the same effect. Without unit tests, it can take days, weeks or even months before you get feedback (bug reports from daily builds, smoke tests, internal QA or customers) on the unintended changes to the behavior of the (legacy) program you are working on. Perhaps even more importantly, unit tests serve as formal documentation of how parts of the program are supposed to work. My recent experience, working on a largish legacy application written in a language that does indeed support interactive development, suggests that interactive development and interactive testing are no substitute for thorough automated unit testing. It probably works (in the sense that new features can be developed quickly) as long as the original developers are working on the project. After they are gone, the knowledge they should have put into the unit tests, but decided to test only interactively, is also gone, and the pace of development slows down dramatically. I'm not implying that you would have necessarily suggested otherwise. I'm also not implying that you couldn't use both interactive development and unit tests. However, interactive testing is, in my experience, like debugging, a time sink. It probably feels productive, because it keeps your mind busy, but that is a false belief.
I would think the reason for Scheme's behavior in this situation is incremental development. Oh, I'm sure it is. If Scheme started throwing errors on redefinition of a term (like I suggested), I'm sure it wouldn't take long for everybody—including me :-)—to want the old behavior back. There's no question about it: for development purposes, using define twice is perfectly reasonable, but it shouldn't be misused as a source of side effects. It's one of those things: when you know what you're doing, it's nice to redefine a value when developing; however, when you make a mistake and redefine a value by accident, you wish it had given you an error. It's one of those things: when you know what you're doing, it's nice to redefine a value when developing; however, when you make a mistake and redefine a value by accident, you wish it had given you an error. The way some Schemes deal with this is to allow duplicate defines at the "top level", but not inside modules. This means that you can rely on the relaxed behavior when working in a REPL or in scripts, but when you reach the point of dividing a program into well-defined modules, the compiler will complain about duplicate definitions. This works very well in practice. The R6RS library proposal codifies this behavior, saying that within a library, "No identifier can be imported multiple times, defined multiple times, or both defined and imported." It's good to know that there are people out there who have already solved the problem. :-) I haven't used Scheme modules yet, but to me it looks like this is a pretty compelling reason to use them. "haven't used Scheme modules yet" That's because modules are not a standardized feature of Scheme, and each implementation has its own way to do it. R6RS should fix that.
In SML the code you gave would be:

- val z = 10;
val z : int = 10
- fun add2 x y = x + y + z;
val add2 : int -> int -> int = fn
- add2 1 2;
val it : int = 13
- val z = 20;
val z : int = 20
- add2 1 2;
val it : int = 13

To get the behavior that most non-FP languages have, you'd have to use a reference (which IIRC don't exist in Haskell):

- val z = ref 10;
val z : int ref = ref 10
- fun add2 x y = x + y + !z;
val add2 : int -> int -> int = fn
- add2 1 2;
val it : int = 13
- z := 20;
val it : unit = ()
- add2 1 2;
val it : int = 23

In the first case, there are two values that were named z - though the first z value becomes shadowed after the second one is declared. In the second case, there is one value z that is a reference. The gotcha here is that you can't modify bindings of names defined in outer scopes. Therefore you actually create a new local variable x in the scope of counter. This is dissimilar to Scheme, where such things are possible. But this is unrelated to the question of whether the scope is fixed at compile-time or not.

import dis
counter = make_counter()
dis.dis(counter)
 55   0 LOAD_FAST    0 (x)
      3 LOAD_CONST   1 (1)
      6 BINARY_ADD
      7 STORE_FAST   0 (x)
 56  10 LOAD_FAST    0 (x)
     13 RETURN_VALUE

As someone has said already, I think, this doesn't matter in that objects encapsulate state in a very similar way to closures, and Python has objects, but if you try to do "functional" programming in Python you get bitten by the above very quickly. Right. Python is very asymmetric in this respect. Since it promotes objects as 'native' types, it feels no responsibility for supporting modifiable encapsulated state by means of closures. The obvious equivalent solution to your example is creating a generator:

def counter():
    x = 0
    while 1:
        yield x

Kay

Your suggestion can only be used like so:

for x in counter():
    dostuff(x)

No idea what use the grandfather poster had in mind, though.
This could be fatal, because counter produces a generator that hangs in an endless loop if not treated carefully. You will most likely use the generic next method that each generator obtains. First I have to correct counter in order to produce non-zero values :)

def counter(start=0):
    x = start
    while 1:
        yield x
        x += 1

>>> inc = counter()
>>> inc.next.__doc__
'x.next() -> the next value, or raise StopIteration'
>>> inc.next()
0
>>> inc.next()
1

I think this is covered by the "There should be one-- and preferably only one --obvious way to do it." bit of the Zen of Python[1]. As you said, it can be done with objects, so providing two ways of doing things isn't really needed and is a "bad thing" in scripting languages. I would be interested in seeing an example that doesn't map well into objects, if one exists.

class count:
    def __init__(this):
        this.x = 0
    def inc(this):
        this.x = this.x + 1
        return this.x
    def finc(this):
        return this.inc

def make_counter():
    return count().finc()

Calling make_counter() makes you a counter; calling the result of make_counter increments the counter and returns the new value. I'm not sure what this proves, apart from showing that you can abuse any language if you try hard enough. Discussing the readability of the code above vs. your example is an interesting topic by itself, but yours is probably more readable.

[1] The Zen of Python is a hint of the thinking behind Python; try running python -c "import this" at a command prompt.

I think this is covered by the "There should be one-- and preferably only one --obvious way to do it." bit of the Zen of Python.
Could be, but, well, here's another way -- the one that strikes me as being the closest Python way to "fix" andrew's code (i.e., it's the code that looks the most like andrew's, but does what he was expecting):

def make_counter():
    x = [0]
    def counter():
        x[0] += 1
        return x[0]
    return counter

It's not an uncommon idiom in Python, and its use seems to me like a straightforward consequence of the way assignment (rebinding) works in Python. But maybe that just means that I've been using Python for too long.

You can't do this in Haskell either, or any purely functional language (for the same reason you can't change any other variable). I don't see how you can say this is a hindrance to functional programming when the whole point of the above code is to trigger a side effect from the function call!

This works in GHC, though you might need -fglasgow-exts to use STRefs:

type Counter s = ST s Integer

makeCounter :: ST s (Counter s)
makeCounter = do
    ref <- newSTRef 0
    return $ modifySTRef ref (+ 1) >> readSTRef ref

Also see the Haskell entry on C2. And it's certainly leaving a purely functional style. If you don't require that you should be able to rebind an actual variable in the enclosing scope, Python can do this too:

def makeCounter():
    count = [0]
    def f():
        count[0] += 1
        return count[0]
    return f

However, neither has anything to do with functional programming (after all, we're creating non-idempotent functions).

You can't do this in Haskell either, or any purely functional language (for the same reason you can't change any other variable).

Python is not a purely functional language. Purely functional languages make certain tradeoffs; assignments for laziness is one of them. Python has made no such tradeoff. Assignments are perfectly acceptable in Python.

I don't see how you can say this is a hindrance to functional programming when the whole point of the above code is to trigger a side effect from the function call!
The discussion wasn't concerned with purely functional programming, but with lexical scoping. Haskell captures the essence of lexical scoping properly. It treats captured variables exactly the same as other variables, just like Scheme, ML, etc. It's simply that Haskell, by default, forbids assignment to any variable. Python is broken because, unlike Haskell, ML, and Scheme, it treats captured variables differently than other variables. That is the complaint raised here. Is Python broken? Possibly. Weird? Certainly. Different from Scheme, ML, Haskell, and how lexical scoping is normally done? Yep. I think the problem is that Python is conflating assignment and variable introduction. That always results in strange scoping behavior. Python's original scoping rules were very weird and, like a lot of Perl, work tolerably well only as long as you don't think about them too much. Along about version 2.2, as I recall, the current behavior was introduced, which acts more or less lexically for reads but continues to introduce a new variable on the first assignment in a scope. Since assignment and variable introduction are the same operation, you simply can't spell writing to a captured variable. I was too harsh in concluding that Python was broken, when it's simply confusing. Since assignment and variable introduction are the same operation, you simply can't spell writing to a captured variable. Yes, this is the essence of the problem (and confusion). Variable binding should be made explicit; this would result in much less confusion. (Weird and seemingly arbitrary scoping rules confuse new users who expect lexical scoping.) About rebinding of names in enclosing scopes, from PEP 227: […] Regards, Kay I was objecting to the phrase "but if you try to do "functional" programming in python you get bitten by the above very quickly." in the comment I originally replied to. Claiming this is a hindrance to functional programming is clearly wrong.
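A minimal demonstration (my own sketch, not from the thread) of the conflation described above: a nested function may freely read an enclosing variable, but an assignment anywhere in its body makes that name local to the whole body, so the write can never reach the captured variable:

```python
def outer():
    x = 0

    def read():
        return x          # fine: x resolves in the enclosing scope

    def write():
        x = x + 1         # 'x = ...' makes x local to write(), so the
        return x          # read on the right-hand side finds it unbound

    assert read() == 0
    try:
        write()
    except UnboundLocalError:
        return "UnboundLocalError"
    return "no error"

print(outer())  # UnboundLocalError
```

There is no way to spell "assign to the x captured from outer" here; assignment and variable introduction are the same syntactic act.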
I'd agree that it is a wart that you can't rebind variables in enclosing scopes, though I think "broken" is a bit strong. The reason is that Python has no difference between assignment and declaration: (define x 5) and (set! x 5) are both written "x=5" in Python, hence "x=5" in an inner scope creates a new variable that shadows the one in the enclosing scope, rather than rebinding it. This is not an inconsistency in lexical scoping; it's just a missing feature - Python lacks an operator exactly equivalent to Scheme's set!.

The problem is that you're one of those Lisp geniuses. Of course it doesn't look any different to you! Those with the mental capacity only to handle Python, though, know that forcing one to name multi-line functions before passing them as a functional argument is so much clearer. For instance - which is clearer:

(map 'list
     #'(lambda (x y)
         (a-very-long-procedure x)
         (another-very-long-procedure y)
         (+ x y))
     a b)

or:

(defun do-stuff-and-add (x y)
  (a-very-long-procedure x)
  (another-very-long-procedure y)
  (+ x y))

(map 'list #'do-stuff-and-add a b)

Everyone who uses Python would prefer the second because the extraneous name makes it clearer what the code is doing, you silly Lisp genius!

The bigger problem here is not whether there's a distinction between a named or anonymous function. The real question is whether Python fully supports the concept of higher-order functions. For example, can I treat the definition of a function as data construction? Can I curry functions? Can I combine or compose functions? Can I get closures? Python has gone a way towards higher-order functions, but Guido's attempt to eliminate lambda was more concerning to those who appreciate functional programming for the power that it gives us.
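As a concrete aside (my own sketch, not code from the thread), currying, composition, functions-as-data, and closures can all be expressed in Python with nothing beyond the standard library, whatever one thinks of the lambda syntax:

```python
from functools import partial, reduce

def compose(*fs):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

def add(x, y):
    return x + y

inc = partial(add, 1)               # currying via partial application
double = lambda x: x * 2
inc_then_double = compose(double, inc)  # combining functions

def make_counter():
    count = [0]                     # closure over (mutable) state
    def counter():
        count[0] += 1
        return count[0]
    return counter

c = make_counter()
print(inc_then_double(3))           # 8
print(c(), c())                     # 1 2
```

None of this depends on the lambda keyword; the same code works with named inner defs throughout.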
To those who view functions as data, it would be like saying that you couldn't use a constant value in a statement unless it was defined:

s = "Hello World"

versus:

def helloWorld():
    return "Hello World"

s = helloWorld()

Now which one of these is clearer? By the logic used by the Pythonistas, the second is much preferred. If I said the constant was 3.14 (pi), then I'd agree that giving that constant a name would be good programming practice. Yet, it would be a real pain to name every constant used in every program that you write. Well, in the same way, functions that are reused frequently should be named. But for those used to functional programming, naming every function that you encounter is about as much pain as naming every constant. Naming things is one of the harder things about programming, as we try to avoid namespace pollution. The more meaningful names we have to come up with, the harder it can be to write. Of course, this idea of minimizing names can lead to hard-to-read code - just as having too many names can make it hard to read. Writing elegant code is not something that can be had by forbidding certain programming styles - though it can be nudged along by such things as offside rules.

The real question is whether Python fully supports the concept of higher order functions. For example, can I treat the definition of a function as data construction? Can I curry functions? Can I combine or compose functions? Can I get closures?

If that's what you care about, the answer is yes to all of the above. The syntax (and even the existence) of the lambda expression has no bearing on the availability of any of this functionality. It will always be there. But it isn't the only "real question".

I disagree. The second is less clear, because of code locality. I have no idea what do-stuff-and-add means unless it either does something very simple and is well named, or I have access to the definition, so I can see directly what it does.
The easiest way to have access to the definition is to put it right there, i.e., use a lambda. If I am maintaining this code, I'll have to go find the definition of do-stuff-and-add to understand what the map is doing. With the lambda this isn't the case. Now, obviously, if do-stuff-and-add is defined as a function local to the function in which the map is being used, then it isn't a huge issue.

In the end it all comes down to syntax and semantics. The Lisp example is rather silly. It's silly partly because the code didn't get indented properly, making it harder to read than it would actually be, but mostly because lambdas work really well in Lisp and Scheme but don't in Python. So Python programmers would prefer the second, not because they can't parse the first (I'm quite confident they could—it's really not that difficult of a concept), but rather because they are programming in Python, and Python's lambdas are syntactically broken. And everyone with any sense would prefer some saner indentation! It seems this is beyond Python's capabilities?

(flet ((do-stuff-and-add (x y)
         (a-very-long-procedure x)
         (another-very-long-procedure y)
         (+ x y)))
  (map 'list #'do-stuff-and-add a b))

That seems to address both of the concerns of naming the function and keeping the definition of the function local.

Let me start by saying I'm not a Pythonista (I've only ever used Python for the Python Challenge, up into the 20s.) However, I admire Guido's disciplined approach to the design of his language. As he points out in that thread, any design is about balance and trade-offs, and trying to overload too many desiderata into the problem space inevitably leads to wishy-washy, neither-fish-nor-fowl solutions that can weaken the overall coherence of the design. The PLs that try to be all things to all people (no need for names ;-)) are usually the ones derided as overly-complex mish-mashes.
Guido has made it clear he does not favour a full-blown functional style; I'm not sure why so many people think he should change his design to suit such a style. Those of us who favour a functional style have plenty of other options to choose from. (Or can design our own...) GvR seems to consider Python a meta-concept about programming that evolves within some core entities in order to acquire new facilities. Note his remark on the double colon: ...completely arbitrary and doesn't resemble anything else in Python...There's also no analogy to the use of :: in other languages -- in C++ (and Perl) it's a scoping operator... This is the user interface he's talking about. The focus here is on a syntactic element, but I gather it also applies to semantic elements. In his view, a user has some notion of what '::' can or cannot stand for. Here's another syntactic pythonic element: ...I find any solution unacceptable that embeds an indentation-based block in the middle of an expression... However, all this backpedaling is due to one reason only: And there's the rub: there's no way to make a Rube Goldberg language feature appear simple. The people who want the multi-statement lambda think it would be a great thing to have. According to GvR, it's not particularly useful, and unless it comes dirt cheap, he's not going to clutter the user interface with it. There's nothing deep here, really. (an interface to what, you might ask) This is a very deep question, answers to which might characterize some very fundamental differences in how people understand (or imagine) programming. Should we understand a programming language as being primarily an interface to a particular physical machine? Some kind of abstract machine? An interface to a world of platonic forms? There are lots of other options, too... (Incidentally, I agree that it's also a far more interesting question than statements vs. expressions in Python syntax.)
My idea of what the interface of a language should be is a cognitive model of ORDINARY human activity. This leads to a backward chaining control structure with multiple types of rules. The basic rules seem to be EVENT, FORMAL, and ERASE. This doesn't involve any cognitive science. All we need to do is ask a few questions about ordinary behavior. We humans make mistakes all the time, but we don't throw errors or crash, and we don't do any static type checking either. Actually our error handling is so smooth and natural we are never aware of it. Only backtracking works that way. When we make an error in some activity such as bending a nail, we simply back up, straighten out the nail, and continue on. Looping is another example. Instead of indexes and arrays we think of a basket of objects to process: take one object out of the basket and process it, back up to the basket and get another object, and so on. No array bounds checking! Languages like Prolog use backtracking, but there is too much logic theory for Prolog to be a natural interface. I have in mind something logically less powerful and able to handle both rules and procedures (i.e. terms and expressions). The test of this idea seems to be how easy it is to learn, especially for people who don't know any programming. I wonder what LtU readers think. Thanks. Rules, events, backward chaining? Sounds interesting. Could you explain in more detail? My idea of what the interface of a language should be is a cognitive model of ORDINARY human activity. This leads to a backward chaining control structure with multiple types of rules. That's odd. Most cognitive models that I know of (Soar, ACT-R, EPIC, etc.) that are based on rules seem to prefer forward chaining for most things. What is "ordinary" human activity? Actually our error handling is so smooth and natural we are never aware of it. I'm painfully aware of mine on occasion. I do not see forward and backward chaining as separate competing methods.
But I do think that backward chaining is more general and expressive. Forward systems consist of two languages: an IF-clause language and a scripting language used in THEN clauses. When the IF clause is true, the script in the THEN clause is executed. A single more general-purpose language can do both at once. For instance, we can write a forward system in Prolog and use it as a part of the overall system. I show how to do this using my own proposal here. But the issue is more general than this. Forward, backward, and most other programming models are all closely related. The well-known CTM (Concepts, Techniques, and Models of Computer Programming) has made this very clear. The issue of this thread is interfaces. I like my proposal, but as always it is all there for you to choose from. But I do think that backward chaining is more general and expressive. Forward systems consist of two languages: an IF-clause language and a scripting language used in THEN clauses. This doesn't have to be the case. You can represent rules as e.g. Horn clauses and then do forward chaining or backward chaining over them. There is nothing about forward chaining which ties it to having (imperative) actions as the consequence of rules. I don't think it really makes much sense to say that some inference procedure is more expressive than another — the underlying language may be identical. You can make distinctions on soundness, completeness, tractability, etc. For instance we can write a forward system in Prolog and use it as a part of the overall system. Yes, and you can also do goal-directed processing in a forward-chaining system. Which to choose depends on what you are intending to do. You can even separate out the language (representation of rules, facts) from the inference procedure and have some meta-level system decide which to apply.
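The point that the same Horn-clause rules can be driven either forward or backward is easy to make concrete. Here is a deliberately tiny propositional sketch of my own (the rule names are invented for illustration, and the backward chainer does no cycle detection):

```python
# Propositional Horn rules: each rule is (head, [body atoms]).
RULES = [
    ("wet", ["raining", "outside"]),
    ("cold", ["winter"]),
    ("miserable", ["wet", "cold"]),
]

def forward_chain(facts, rules):
    """Data-driven: fire rules on known facts until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: reduce a goal to subgoals recursively."""
    if goal in facts:
        return True
    return any(all(backward_chain(b, facts, rules) for b in body)
               for head, body in rules if head == goal)

facts = {"raining", "outside", "winter"}
print("miserable" in forward_chain(facts, RULES))  # True
print(backward_chain("miserable", facts, RULES))   # True
```

The rule representation is identical in both cases; only the inference procedure differs, which is exactly the separation being argued for above.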
My point was that appeals to intuition about human cognition don't really justify a choice of backward chaining rules as a general mechanism for computation. My point was that appeals to intuition about human cognition don't really justify a choice of backward chaining rules as a general mechanism for computation. I wish I had a better justification! For me it really does come down to intuition. When I try to explain behavior, it usually takes a backward form. The same thing happens in ordinary functional programming. A set of nested functions is really part of a backward chain, except that we can't back up if it fails. A lot of pragmatic literature gets into discussion that sounds like backtracking. Forward chaining usually seems to belong at a very high level. If the system is idle and something happens, where do we start? What is the first goal? In general I find goal representation very difficult in forward chaining. At the same time, a goal representation in backward chaining may be all that is needed. Forward chaining usually seems to belong at a very high level. I usually see things the other way round. Looking from an "intelligent agents" design perspective, forward chaining gives a natural way to do reactive-style behaviours where you want to respond to events occurring in some environment. Backward chaining is more usually associated with higher-level deliberative behaviour, pursuing long-term goals. Of course, that's a crude characterisation, as few systems are purely reactive or purely deliberative. In general I find goal representation very difficult in forward chaining. There are various ways to represent goals in a forward-chaining system. One simple way is to just assert the current goal as a fact in working memory and then have rules match against the current goal in their antecedents. That's just a simple way to prune the currently applicable rules to just those relevant to the current goal, though you are still doing forward chaining.
You can go further and write a backward chaining rule interpreter in terms of forward chaining rules (not too unlike doing the reverse in Prolog), but my personal preference would probably be to write both forward and backward chaining (and the whole rule system) in a language like Scheme (or Haskell, ML, etc). Yes, I like forward systems, but what about the forward system as an interface for general-purpose programming? How does it make programming more accessible and understandable for more people, even including non-programmers? Can we agree in general that rules are a way to do this? Rules are already widely used by non-programmers in business and as a type of systems analysis tool. It is not too much of a stretch to come up with a system that works for everybody. My idea of what the interface of a language should be is a cognitive model of ORDINARY human activity. The relationship between PL's and human cognition is very interesting, but also very complex. I think there are two major reasons why mimicking cognition with a PL might fail, or should be implemented with great care. First, the relationship is bidirectional. Humans are very flexible, and our environment, including the PL's we use, has an impact on how we think. If you were to design a PL for people of the Stone Age, would you use their none/one/many way of calculation, or instead teach them to use numbers? I could argue that if there is such a thing as a perfect programming language, we are not yet clever enough to use it. Second, programming is not only about thinking. If you look at modern programming environments, like J2EE, the design largely boils down to social and economic issues: avoiding vendor lock-in, dividing responsibilities between different developer roles, allowing for integration even when it is not technically motivated (company mergers etc.), preventing uninformed programmers from doing stupid things, etc. I agree that cognition is too complex to use directly.
My strategy is to use theory common in engineering for studying systems as a part of their environment. Nowadays this seems to be called systems theory, formerly Cybernetics. If you try to computerize these ideas you get a form of representation almost equivalent to ordinary rules and facts as commonly used. The difference is that you also need a concept of ongoing change or situation and named action. Taken together, I like to call this activity theory. Activity theory is an implied foundation of a theory of cognition because it is fundamental. Systems theory is thought to apply to systems of any kind, whether biological or mechanical. A closely related way of looking at things would be pragmatic philosophy. Other programmers, of course. The secondary (and vastly simpler) use case is interfacing with a CPU/OS/runtime stack. Any moron can write a language designed for interfacing with a machine, and too often it seems like most of them have. Designing a language for interfacing with other computer programmers, over the course of years, as enterprises, platforms, teams, and recruiting pools all change, well, that's an unsolved problem. The trick, it seems, is coming up with a language that can both explicate the architecture design decisions of a 20-person, 5-year development effort, and still run. Nobody has any idea how to do that, and sadly it seems like nobody is trying to. (Before anyone chimes in, Haskell ain't it.) Other programmers, of course. Bingo! Give that man a large stuffed python... Dave, are you a time-traveller from C2 Wiki of the past? :-) To me a programming language is the user interface of a compiler, and the language spec is a manual for the compiler. A good language description is one that tells me how to make the compiler do what I want. The UI goof that annoys me most is when lots of things are specified as "undefined" and you're prevented from trusting your observations and experiments. That is bad UI. My $0.02.
s/compiler/compiler or interpreter/g s/compiler/compiler or interpreter/g There seems to be a strong focus in your mind on programming as a solitary endeavor. You see this a lot in the scripting community. To each his own, I suppose. I personally am not interested in problems small enough to be coded by a single developer, even myself. I wish to build cathedrals, with the trained teams of architects, engineers, craftsmen, and managers that such enormous endeavors imply. Such teams need languages to communicate their designs in, so that they can be clearly, quickly, and accurately understood, even as teams and technologies change. The only practical way we have found to make such languages sufficiently rich and our communications sufficiently rigorous is to use languages that are actually runnable by a computer, but that's somewhat incidental, and has both good and bad side effects. Saying that a programming language is an interface to a compiler sounds like saying that a typewriter is an interface to a piece of paper. It's true, in its way, but only at the cost of eliminating all sense of purpose or context from the discussion. I would be curious to know which software cathedrals you admire as paradigms of good design and successful implementation. (As a show of good faith, two of my favourites are Unix and Emacs.)

Favorites?

GCC - amazing insights into the limits of portability
Boost libraries - pushing a horrible language far beyond what anyone thought were its limits, and arriving somewhere beautiful
The J2SE class libraries - breadth and quality far beyond the expectations of a least-common-denominator standard library. Its sheer completeness managed to create a programming system with hardly any "gotchas".
IntelliJ IDEA - architecturally nothing amazing, but the exuberant blossomings of amazing functionality in unexpected places constantly enthrall
Opera, BeOS - commercially insane, but with enough technological sweetness to make you simply want to give them money to continue
BSD - the first programmer's OS
SABRE - uptime at all costs
iTunes - architecture perfectly tuned to create rabid customers

I think we're now sufficiently off topic...

the original Smalltalk environment - far beyond anything anyone would have believed was technologically possible for the time
Doom - in order to build this breakthrough game, first we need to build a multi-tasking OS for it to run on...
Google - the first software to actually change the way that people think

"Cathedrals" seems appropriate for these. Your taste seems excellent to me, but I don't see many of these as cathedrals. Weren't gcc, BSD, Smalltalk, Doom, and Google (the web search engine) all designed by small groups of largely self-directed programmers who were focused on making something run? In a substantial project, more of the developers' effort is with the program than with the machine/OS/etc., and that's where the language provides an interface: between the team and the program they're developing or maintaining. "a language that can both explicate the architecture design decisions of a 20-person, 5-year development effort, and still run" Experience casts doubt on the achievability of this admirable goal. In current practice, other, non-programming languages are used for software architecture. Therefore, architectural communication among programmers is not particularly relevant to the design of a language like Python, which has no pretensions to being cutting-edge for 100 person-year projects. In current practice, other, non-programming languages are used for software architecture. Which is one of the primary reasons for the sorry state of the software industry. Hi Ehud, are you sure about the "sorry state of the SW industry"?
You might check out this (unfortunately not free) article that was published in IEEE Software, May/June 2005. The link doesn't work; try Robert L. Glass, "IT Failure Rates--70% or 10-15%?," IEEE Software, vol. 22, no. 3, pp. 112, 110-111, May/June 2005. In this editorial the author questions the oft-cited failure rate of 70% for software projects. This figure goes back to the 1994 Standish Group Chaos report. Apparently more recent reports show some improvement, but Glass remains sceptical of these figures, as do others. The average 189% cost overrun is also put in doubt.

Yes, I am aware of these publications. They are not enough to make me change my mind... One can argue against the specific statistics, but I think it is enough to visit most software development shops, and see their practices and the quality of the product. Try to read about the state of the art as regards "software development methodologies" and tell me if you still think talking about the "sorry state of SE" is an overstatement...

"tell me if you still think talking about the 'sorry state of SE' is an overstatement..."

Not sure if you're addressing me, as I didn't express an opinion. From what little exposure I have to real-world (as some call it) software engineering, it still seems an immature discipline to me.

Well, Kay was the first to mention these studies, not you... But anyway, I wasn't addressing anyone, just ranting...

"Glass remains sceptical of these figures, as do others."

Actually, the Infoworld article ends by saying: Probably the news with the most damaging implications for IT projects is not the number of those that were abandoned, rather it's those that were completed but offer fewer features and functions than originally specified...

This is pacifying to hear. I work (partially) against specs that are written by ETSI, and it needs a lawyer to interpret them. Many features are just there for political reasons, i.e. company agreements. Not to talk about historical concessions.
Others bloat the system and are excessively complicated. Some features are never used. To blame software developers for creating bad products is quite convenient and part of their folklore - I guess that's because they have to read the source code of other people, not because their Windows system crashes once a week (the latter did not happen to me for years - kudos to Microsoft). Nevertheless, most products (in the small sample of my own experience) are on budget, simply because they do not reinvent the wheel or suffer from the "second-system syndrome". When people do everything from scratch and do not acquire skilled architects, developers, supporters, and project managers from the beginning, they are likely going to fail - but this social fact is quite trivial and not in any way distinct from other engineering domains. Alistair Cockburn once had something interesting to say about this with regard to processes. I'm too busy at the moment to look for it on the Web.

I am not sure I understand what you are trying to say...

"To blame software developers for creating bad products is quite convenient and part of their folklore"

I, for one, didn't blame developers. In fact, I think talking about developers as if such a beast exists and is well defined is a symptom of the general problem I was referring to.

So you believe that once the role X becomes well defined, the "software crisis" has a chance to evaporate? This is too close to magical thinking for me to agree. On the other hand, some people argued here on LtU that the "software crisis" would be solved once nobody writes software anymore but just specware, designware, and docuware. With this I agree, of course ;)

No, it's the other way around. Once the "software crisis" is "solved", the role X will also be clarified...

I took this as an example of the natural human tendency to always desire more than we can deliver.
I don't think it's necessarily a bad thing that most software projects eventually scale back their ambitions; after all, isn't that something that most of us have to go through when growing up? Perhaps it's a sign that the field is "growing up" as well. There's probably also an element of "Software gives me God-like powers, so I can do anything!" in this, without realizing that doing "anything" requires a whole lot of work. As mentioned above, most features in software have very little utility for the average user. Sometimes, the finished project is better off without them.

That's what I found interesting about the "agile revolution". Instead of talking about magic technologies that will solve all our problems once everyone uses them, its proponents focussed on communication. How can one complex social system (a SW development team) practically realize conviviality with its environment, i.e. users and customers? As long as we believe that this is a command-control process guided by some input logic, software will inevitably fail to fit the "requirements". The requirements, the design, and the software are a social relationship that emerges.

Guido, Larry, Matz, etc. are community leaders. So they mediate forces and channel requirements according to their design ideas that attract the masses. There is some deep reciprocity. I can imagine liberating Python from the singularity of a continuously active chief designer, such as happened with Lisp. But this causes the dual problem of standardisation and preserving "character". As a developer on my own, I sometimes have to defend my SW architecture from the wishes of other people - for their own sake. Only three wishes are free. If you as a developer have a mature user/customer, you are a lucky person, and vice versa. Ripeness is all, as Shakespeare said.

The rates of cost overrun and time overrun have little to do with the sorry state of software development.
They might just mean that upper management is getting used to just how incredibly expensive and time-consuming most software is to build, and how often quality problems arise, and has factored them into its expectations. The underlying suckiness of development outputs may not have changed in the slightest.

Out of curiosity, what do you advocate for describing software architecture? If you hate/despise UML, well, count me in. But what is the alternative? Are you saying that a programming language (which one?) is appropriate for describing at a sufficiently high level the architecture of an application? Or more globally, the architecture of an information system? Or did you use the word "architecture" while thinking more of the detailed design of an application?

"Out of curiosity, what do you advocate for describing software architecture? If you hate/despise UML, well, count me in."

Think of it this way: Why do we have build scripts? Why do we have installer creation scripts? There's no reason that all of the architectural and design information necessary to drive build and deployment (dependencies, versioning, modularization, configuration) couldn't be declaratively exposed in the implementation language. There, you get all the good stuff we're used to when working with an advanced implementation language: error checking, high-quality navigation tools, code assistance. Except in this case the error checking would be showing you places where your implementation violates your architectural declarations, navigation would help you move seamlessly between architecture and implementation, and code assistance would help you either infer architectural annotations from implementation or generate implementation stubs from architecture.

"Are you saying that a programming language (which one?) is appropriate for describing at a sufficiently high level the architecture of an application?"
I think you are making the assumption that we are talking about a single software application, implemented in a single programming language. In my post, I was trying to stress the difference of perspective between the small scale (a single application) and the large to very large scale (say, the global systems and software architecture of an IT department, with numerous applications and languages).

That being said, what you describe is a common and well-known problem with modeling tools, and there have been various attempts at keeping the code and the modeling language in sync, although much/most remains to be done. That's what MDA is tackling in its own way, but don't count me among the believers.

That's true, but that leaves us at the "baby steps" stage and with the same question: how can (future) programming languages describe the architecture of a software application at the right conceptual level? And is that going to help us in expressing / figuring out the global architecture of the 10000 applications used by a large IT shop?

You have to learn to walk before you can run. Improvement is a lot slower than you might like because it has to move through a complex social consensus system rather than simply work by idealised mandate, but that doesn't mean it isn't happening. Object-oriented design was a (small) step in the general direction of allowing design and architecture to be made more explicit at the level of the implementation language. It also helps at the larger conceptual level by providing at least some framework for understanding how different OO-based applications might fit together. Having a common VM language, be it the JVM, Microsoft's CLR, or something like Parrot, is another (small) step that provides the ability for a wider array of applications, potentially in different languages, to all share a common object/design/architecture description, making the higher-level multi-application description a little easier.
We're still a long way from what could be considered "good" - I am personally ambivalent as to whether OO is the right long-term answer - but over the last 30 or 40 years the general consensus view has shifted, and we now have better methods for explicating design information, and are currently in the process of slowly migrating further in that direction. We still need people at the vanguard yelling for everyone to catch up, but we also need people back in the pack cajoling the stragglers and helping to move the bulk of the group in the right direction. The latter are the baby steps, and they're quite necessary if the mob is ever going to make it to the lofty heights you're shouting from.

You might take a look at ADLs (architecture description languages). I don't know if they were yet on topic here on LtU? Ehud!

Yep, I know about ADLs, but last time I looked, it reminded me of a graveyard, i.e. mostly dead projects or projects with very low traction. Sad. It's like everyone has moved to UML or given up on the idea (whichever is worse).

Check out the ArchJava project. It's a (fairly) small and (mostly) understandable Java extension with a lot of interesting architectural and design information embedded in it.

UML as terminator of ADLs would definitely be a "sorry state in software architecture". To make it clear: I expect from ADLs that I can communicate visual representations with product managers who use PowerPoint for such issues! I want to graphically distinguish different types of hardware and software entities. I want to pronounce that certain communication is wireless and encrypted, that entities are client- or server-side, etc. I also want to describe a certain workflow and have a visual environment that animates it. It should be possible to extract data from a production system and feed it into ADL scripts (in realtime?) so that it is possible to monitor certain aspects of the system. The ADL should be powerful enough to write test scripts without pain.
I want it to be way cool, not a waste product of OO detail design.

"Experience casts doubt on the achievability of this admirable goal."

I would disagree, mostly because there's so much that manifestly could be done at the language level, but isn't. Some examples:

Now those prone to complain about verbosity are probably polishing their pitchforks, as a language that supported all of that would end up with fifteen lines of specification and structural code for every line of algorithmic code. Count in unit and functional tests, and you're probably talking about thirty lines of support code for every line of product algorithm.

"Now those prone to complain about verbosity are probably polishing their pitchforks, as a language that supported all of that would end up with fifteen lines of specification and structural code for every line of algorithmic code."

I think these sorts of arguments come down to an issue of "the right tool for the job". There are plenty of projects where the extra verbosity you suggest is paid back many times over by the increased maintainability. If you don't spend much time working on such projects, then it looks like a lot of wasted effort (and may well be for the projects you work on). As long as people are looking through the lens of the sort of work they personally do, there will continue to be stupid debates.

I tried to address some of this in a post about static and dynamic typing - another perennial debate, and to me pretty much the same one: decent specification is simply the next step up from static types. It provides much the same sorts of benefits for correctness and maintainability for much the same sorts of costs in terms of flexibility and potential verbosity. When people can start seeing dynamic types, static types, design by contract, and full specification as a continuous scale, have a decent understanding of the costs and benefits of various points on the scale, and select the level most suitable for the project at hand...
then I think we'll finally see some progress.

Treehouse vs. house vs. bridge is a perfect analogy for what I'm talking about, and the sliding scale you talk about with DBC between static typing and formal specification fits perfectly. I will be reusing those. Thanks.

...to the underlying semantic calculus, which defined ways to separate the world into addressable and unaddressable parts, and further divides the speakable parts into the explicit (i.e. those with names) and the implicit (i.e. those that are embodied in reduction rules without their own identity). The explicitly speakable parts are then exposed as interface-level concepts ("variables", "loops"); we then build the vocabulary via libraries (and copy/pasting), which forms a certain world-view horizon partially shared by fellow practitioners. The implicit and unaddressable parts are almost never brought up, except in discussions with practitioners of other languages with a dissimilar set of prejudices. Hermeneutics is fun. :-)

"...to the underlying semantic calculus, which defined ways to separate the world into addressable and unaddressable parts"

Hmmm... Tao Te King?

Actually, I was thinking about Wittgenstein, though Laozi would fit the profile as well. :-)

Ah, now I get it. But I'm sure the semantic calculus is incomplete, so that we can also make sense of those entities that are not addressable. Interface design issues might refer to aesthetic values that we can't talk about in clear terms according to the Tractatus. The interface description is hypothetical: I can imagine a super-language of Python that has no connection to the real but nevertheless feels "Pythonic". The underlying calculus/machinery is just a machinery as-if. In the end we can try to separate the style, the surface, and the vocabulary from the referent. The language becomes ghostly, i.e. an idea. The interface is totalitarian. It encloses the user within a fixed mindset and does not leave any space for criticism.
That's why we will always have several incommensurable languages (cultures, religions, social systems, etc.). Strangely enough, from this perspective the rationalist Guido van Rossum and the postmodernist Larry Wall achieve exactly the same despite their philosophical differences. I'm not sure who failed more: the postmodernist, who tries to enclose people in a non-design design that liberates them from design but thereby encloses them in several awkward conventions and practices, or the modernist, who limits himself by his wish to create a consistent whole? The failure of the postmodernist is paradoxical and ironic. The failure of the modernist is tragic.

Well, I find this point weird, but if you look at failure, you'll notice that the postmodernists have been trying for a long time to recreate Perl in Perl6 without much success or, I might say, much interest from many: those who want a "better Perl 5" can go to Ruby now, whereas Python backers don't really feel the need to reinvent Python. Now, given their number of users, you could say exactly the opposite: both have won (and Perl more than Python).
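The continuous scale discussed in this thread (dynamic types, static types, design by contract, full specification) is easy to illustrate with a few lines of runnable code. The sketch below is a hand-rolled design-by-contract decorator in Python; the decorator name, the conditions, and the example function are invented for illustration and are not taken from any post above or from any particular contracts library.

```python
# A minimal design-by-contract sketch: runtime-checked preconditions and
# postconditions. This sits between static types (which could not express
# "the list must be non-empty") and a full formal specification.

def contract(pre=None, post=None):
    """Wrap a function with optional precondition and postcondition checks."""
    def wrap(fn):
        def checked(*args, **kwargs):
            if pre is not None:
                # Precondition: checked against the arguments before the call.
                assert pre(*args, **kwargs), f"precondition failed for {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                # Postcondition: checked against the result after the call.
                assert post(result), f"postcondition failed for {fn.__name__}"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
def mean_abs(xs):
    """Mean of absolute values; the contract documents what a type alone cannot."""
    return sum(abs(x) for x in xs) / len(xs)
```

Here the contract captures more than a static type signature (the input must be non-empty, the result non-negative) but far less than a full specification, which is exactly the middle of the scale described above.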
http://lambda-the-ultimate.org/node/1298
crclib 2.0.0

Collection of cyclic redundancy check (CRC) routines as Dart converters.

Use this package as a library

Depend on it

Run this command:

With Dart:

$ dart pub add crclib

With Flutter:

$ flutter pub add crclib

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  crclib: ^2.0.0

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now, in your Dart code, you can use:

import 'package:crclib/crclib.dart';
https://pub.dev/packages/crclib/install
IDE Tutorial for AWS Cloud9

In this tutorial, you set up an AWS Cloud9 development environment and then tour the AWS Cloud9 integrated development environment (IDE). Along the way, you use the IDE to code, run, and debug your first app.

Note
Completing this tutorial might result in charges to your AWS account. These include possible charges for Amazon EC2. For more information, see Amazon EC2 Pricing.

Prerequisites

To successfully complete this tutorial, you must first complete the steps in Getting Started with AWS Cloud9.

Step 1: Create an Environment

In this step, you use the AWS Cloud9 console to create and then open an AWS Cloud9 development environment. If you already have an environment, open it, and then skip ahead to Step 2: Tour the IDE.

In AWS Cloud9, a development environment (or just environment) is a place where you store your development project's files and where you run the tools to develop your apps. In this tutorial, you create a special kind of environment called an EC2 environment. For this kind of environment, AWS Cloud9 creates and manages a new Amazon EC2 instance running Amazon Linux or Ubuntu Server, creates the environment, and then connects the environment to the newly-created instance. When you open the environment, AWS Cloud9 displays the AWS Cloud9 IDE that enables you to work with the files and tools in that environment.

You can create a blank EC2 environment with the console or the AWS CLI.

Note
When you create an EC2 environment, the environment doesn't contain any sample code by default. To create an environment along with sample code, see one of the following topics instead. After you create the environment, skip ahead to Step 2: Tour the IDE.

Create an EC2 Environment with the Console

If you're the only individual using your AWS account or you are an IAM user in a single AWS account, sign in to the AWS Cloud9 console. If your organization uses AWS Single Sign-On (AWS SSO), see your AWS account administrator for sign-in instructions.
If you're using an AWS Educate Starter Account, see Step 2: Sign In to the AWS Cloud9 Console in Individual Student Signup. If you're a student in a classroom, see your instructor for sign-in instructions.

After you sign in to the AWS Cloud9 console, in the top navigation bar, choose an AWS Region to create the environment in. For a list of available AWS Regions, see AWS Cloud9 in the Amazon Web Services General Reference.

If a welcome page is displayed, for New AWS Cloud9 environment, choose Create environment. Otherwise, choose Create environment.

On the Name environment page, for Name, type a name for your environment. In this tutorial, we use the name my-demo-environment. If you use a different environment name, substitute it throughout this tutorial.

For Description, type something about your environment. For example: This environment is for the AWS Cloud9 tutorial.

Choose Next step.

On the Configure settings page, for Environment type, leave the default choice of Create a new instance for environment (EC2). Choosing Create a new instance for environment (EC2) means you want AWS Cloud9 to create a new Amazon EC2 instance and then connect the environment to the newly-created instance. To use an existing cloud compute instance or your own server instead (which we call an SSH environment), see Creating an Environment in AWS Cloud9.

Note
Choosing Create a new instance for environment (EC2) might result in possible charges to your AWS account for Amazon EC2.

For Instance type, leave the default choice. This choice has relatively low RAM and vCPUs, which is sufficient for this tutorial.

Note
Choosing instance types with more RAM and vCPUs might result in additional charges to your AWS account for Amazon EC2.

For Platform, choose the type of Amazon EC2 instance that AWS Cloud9 will create and then connect to this environment: Amazon Linux or Ubuntu.

Expand Network settings (advanced).
AWS Cloud9 uses Amazon Virtual Private Cloud (Amazon VPC) to communicate with the newly-created Amazon EC2 instance. Depending on how Amazon VPC is set up, do one of the following. For more information, see VPC Settings for AWS Cloud9 Development Environments.

For Cost-saving setting, choose the amount of time until AWS Cloud9 shuts down the Amazon EC2 instance for the environment after all web browser instances that are connected to the IDE for the environment have been closed. Or leave the default choice.

Note
Choosing a shorter time period might result in fewer charges to your AWS account. Likewise, choosing a longer time might result in more charges.

Choose Next step.

On the Review choices page, choose Create environment. Wait while AWS Cloud9 creates your environment. This can take several minutes. Please be patient. After your environment is created, the AWS Cloud9 IDE is displayed. You'll learn about the AWS Cloud9 IDE in the next step.

Skip ahead to Step 2: Tour the IDE.

Create an EC2 Environment with the AWS CLI

Note
Currently, you cannot use the AWS CLI to create an Ubuntu Server-based EC2 environment, only Amazon Linux. Support for Ubuntu Server is expected in the future.

Install and configure the AWS CLI, if you have not done so already. To do this, see the following in the AWS Command Line Interface User Guide.

We recommend you configure the AWS CLI using credentials for one of the following:

- The IAM user you created in Team Setup for AWS Cloud9.
- An IAM administrator user in your AWS account, if you will be working regularly with AWS Cloud9 resources for multiple users across the account. If you cannot configure the AWS CLI as an IAM administrator user, check with your AWS account administrator. For more information, see Creating Your First IAM Admin User and Group in the IAM User Guide.
- An AWS account root user, but only if you will always be the only one using your own AWS account, and you don't need to share your environments with anyone else.
For more information, see Creating, Disabling, and Deleting Access Keys for Your AWS Account in the Amazon Web Services General Reference. For other options, see your AWS account administrator or classroom instructor.

Run the AWS Cloud9 create-environment-ec2 command:

aws cloud9 create-environment-ec2 --name my-demo-environment --description "This environment is for the AWS Cloud9 tutorial." --instance-type t2.micro --region us-east-1 --subnet-id subnet-12a3456b

In the preceding command:

- --name represents the name of the environment. In this tutorial, we use the name my-demo-environment. If you use a different environment name, substitute it throughout this tutorial.
- --description represents an optional description for the environment.
- --instance-type represents the type of Amazon EC2 instance AWS Cloud9 will launch and connect to the new environment. This example specifies t2.micro, which has relatively low RAM and vCPUs and is sufficient for this tutorial. Specifying instance types with more RAM and vCPUs might result in additional charges to your AWS account for Amazon EC2. For a list of available instance types, see the create environment wizard in the AWS Cloud9 console.
- --region represents the ID of the AWS Region for AWS Cloud9 to create the environment in. For a list of available AWS Regions, see AWS Cloud9 in the Amazon Web Services General Reference.
- --subnet-id represents the subnet you want AWS Cloud9 to use. Replace subnet-12a3456b with the ID of the subnet, which must be compatible with AWS Cloud9. For more information, see VPC Settings for AWS Cloud9 Development Environments.

By default, AWS Cloud9 shuts down the Amazon EC2 instance for the environment 30 minutes after all web browser instances that are connected to the IDE for the environment have been closed. To change this, add --automatic-stop-time-minutes along with the number of minutes. A shorter time period might result in fewer charges to your AWS account.
Likewise, a longer time might result in more charges.

By default, the entity that calls this command owns the environment. To change this, add --owner-id along with the Amazon Resource Name (ARN) of the owning entity.

After you successfully run this command, open the AWS Cloud9 IDE for the newly-created environment. To do this, see Opening an Environment in AWS Cloud9. Then return to this topic and continue on with Step 2: Tour the IDE to learn how to use the AWS Cloud9 IDE to work with your new environment.

Step 2: Tour the IDE

In the previous step, you created an environment, and the AWS Cloud9 IDE is now displayed. In this step, you'll learn how to use the IDE.

The AWS Cloud9 IDE is a collection of tools you use to code, build, run, test, debug, and release software in the cloud. In this step, you experiment with the most common of these tools. Toward the end of this tour, you use these tools to code, run, and debug your first app.

Topics
- Step 2.1: Menu Bar
- Step 2.2: Dashboard
- Step 2.3: Environment Window
- Step 2.4: Editor, Tabs, and Panes
- Step 2.5: Console
- Step 2.6: Open Files Section
- Step 2.7: Gutter
- Step 2.8: Status Bar
- Step 2.9: Outline Window
- Step 2.10: Go Window
- Step 2.11: Immediate Tab
- Step 2.12: Process List
- Step 2.13: Preferences
- Step 2.14: Terminal
- Step 2.15: Debugger Window

Step 2.2: Dashboard

The dashboard gives you quick access to each of your environments. From the dashboard, you can create, open, and change the settings for an environment.

To open the dashboard, on the menu bar, choose AWS Cloud9, Go To Your Dashboard, as follows. To view the settings for your environment, choose the title inside of the my-demo-environment card. To return to the IDE for your environment, do one of the following.

Choose your web browser's back button, and then choose Open IDE inside of the my-demo-environment card.
In the navigation breadcrumb, choose Your environments, and then choose Open IDE inside of the my-demo-environment card.

Note
It can take a few moments for the IDE to display again. Please be patient.

Step 2.3: Environment Window

The Environment window shows a list of your folders and files in the environment. You can also show different types of files, such as hidden files.

To hide the Environment window and the Environment button, choose Window, Environment on the menu bar. To show the Environment button again, choose Window, Environment again. To show the Environment window, choose the Environment button.

To show hidden files, in the Environment window, choose the gear icon, and then choose Show Hidden Files, as follows. To hide hidden files, choose the gear icon again, and then choose Show Hidden Files again.

Step 2.4: Editor, Tabs, and Panes

To hide tabs, choose View, Tab Buttons on the menu bar. To show tabs again, choose View, Tab Buttons again.

Step 2.5: Console

The console is an alternate place for creating and managing tabs, as follows. You can also change the console's display so that it takes over the entire IDE.

To hide the console, choose View, Console on the menu bar. To show the console again, choose View, Console again. To expand the console, choose the resize icon, which is at the edge of the console, as follows. To shrink the console, choose the resize icon again.

Step 2.6: Open Files Section

The Open Files section shows a list of all files that are currently open in the editor. Open Files is part of the Environment window, as follows.

To open the Open Files section, choose View, Open Files on the menu bar. To switch between open files, choose fish.txt and then cat.txt in the Open Files section. To hide the Open Files section, choose View, Open Files again.

Step 2.7: Gutter

The gutter, at the edge of each file in the editor, shows things like line numbers and contextual symbols as you work with files, as follows.
To hide the gutter, choose View, Gutter on the menu bar. To show the gutter again, choose View, Gutter again.

Step 2.8: Status Bar

The status bar, at the edge of each file in the editor, shows things like line and character numbers, file type preference, space and tab settings, and related editor settings, as follows.

To hide the status bar, choose View, Status Bar on the menu bar. To show the status bar, choose View, Status Bar again. To go to a specific line number, choose a tab such as cat.txt if it's not already selected.

Step 2.9: Outline Window

You can use the Outline window to quickly go to a specific file location.

To hide the Outline window and Outline button, choose Window, Outline on the menu bar. To show the Outline button again, choose Window, Outline again. To show the Outline window, choose the Outline button.

To see how the Outline window works, create a file named hello.rb. Copy the following code into the file:

def say_hello(i)
  puts "Hello!"
  puts "i is #{i}"
end

def say_goodbye(i)
  puts "i is now #{i}"
  puts "Goodbye!"
end

i = 1
say_hello(i)
i += 1
say_goodbye(i)

Then, in the Outline window, choose say_hello(i), and then choose say_goodbye(i), as follows.

Step 2.10: Go Window

You can use the Go window to open a file in the editor, go to a symbol's definition, run a command, or go to a line in the active file in the editor.

To hide the Go window and Go button (the magnifying glass icon), choose Window, Go on the menu bar. To show the Go button again, choose Window, Go again. To show the Go window, choose the Go button (the magnifying glass).

Step 2.14: Terminal

You can run one or more terminal sessions in the IDE. To start a terminal session, choose Window, New Terminal on the menu bar.

You can try running a command in the terminal. For example, in the terminal, type echo $PATH (to print the value of the PATH environment variable), and then press Enter.
You can also try running additional commands. For example, try commands such as the following:

- pwd to print the path to the current directory.
- aws --version to print version information about the AWS CLI.
- ls -l to print information about the current directory.

Step 2.15: Debugger Window

You can use the Debugger window to debug your code. For example, you can step through running code a portion at a time, watch the values of variables over time, and explore the call stack.

To hide the Debugger window and Debugger button, choose Window, Debugger on the menu bar. To show the Debugger button again, choose Window, Debugger again. To show the Debugger window, choose the Debugger button.

You can experiment with using the Debugger window and some JavaScript code. To try this, do the following.

Prepare to use the Debugger window to debug JavaScript code by installing Node.js into your environment, if it isn't already installed. To confirm whether your environment has Node.js installed, run the node --version command. If Node.js is installed, the Node.js version number is output, and you can skip ahead to step 3 in this procedure to write some JavaScript code. Start a new terminal session, and then run this. Show the Debugger window, if it's not already displayed.

Step 3: Clean Up

To prevent ongoing charges to your AWS account related to this tutorial, you should delete the environment.

Warning
Deleting an environment cannot be undone.

You can delete the environment with the AWS Cloud9 console or with the AWS CLI.

Delete the Environment with the AWS Cloud9 Console

Open the dashboard. To do this, on the menu bar in the IDE, choose AWS Cloud9, Go To Your Dashboard.

Do one of the following:

- Choose the title inside of the my-demo-environment card, and then choose Delete.
- Select the my-demo-environment card, and then choose Delete.

In the Delete dialog box, type Delete, and then choose Delete.
Skip ahead to Next Steps. Delete the Environment with the AWS CLI Run the AWS Cloud9 delete-environment command, specifying the ID of the environment to delete. aws cloud9 delete-environment --environment-id 12a34567b8cd9012345ef67abcd890e1 In the preceding command, replace 12a34567b8cd9012345ef67abcd890e1 with the ID of the environment to delete. Next Steps Explore any or all of the following topics to continue getting familiar with AWS Cloud9. To get help with AWS Cloud9 from the community, see the AWS Cloud9 Discussion Forum. (When you enter this forum, AWS might require you to sign in.) To get help with AWS Cloud9 directly from AWS, see the support options on the AWS Support page.
https://docs.aws.amazon.com/cloud9/latest/user-guide/tutorial.html
CC-MAIN-2019-18
refinedweb
2,826
66.64
When performing data analysis tasks in pandas, we might end up in a situation where not all the data is displayed. There could be too many rows of data, and not all of the content of each column is displayed due to the limit on the column's maximum width; or, the float values may not be displayed with the precision that we want. There are various options in pandas that you can use to overwrite these settings to accommodate your needs. In this shot, we will look at three of the most common settings that we can modify.

Often, we work with huge datasets that can contain many rows of data. While viewing this data in pandas, you might not be able to see all the rows (maybe you'll see the top 30 rows and last 30 rows with a ... in between). But what if we want to view all the data? To do this, we need to change the default setting of the maximum number of rows to be displayed. Have a look at the code snippets below to understand this better.

import pandas as pd

drinks = pd.read_csv('')
print(drinks[['country','beer_servings']])

max_rows = pd.get_option('display.max_rows')
print("Maximum rows that can be displayed: ", max_rows)

Explanation:

DisplayMaxRowsDefault tab: ... has been printed in between the data, and incomplete data has been shown. Use the get_option() function and pass the parameter display.max_rows to see, by default, how many rows can be displayed. We see the default value printed, and so we will want to change this setting.

DisplayMaxRowsCustom tab: Set display.max_rows to None to display all the rows in the data. Use reset_option() to reset any setting back to its default value.

While viewing the data, you might have observed that many columns do not print all the content in a certain cell. This is due to the maximum column width property. Let's see how this setting can be changed.
import pandas as pd

train = pd.read_csv('')
print(train[['Name','Sex']])

max_colwidth = pd.get_option('display.max_colwidth')
print("Maximum column width is: ", max_colwidth)

Explanation:

DisplayMaxColWidthDefault tab: The default display.max_colwidth is printed, and the content of the column Name is not displayed completely.

DisplayMaxColWidthCustom tab: Set display.max_colwidth to 1000.

Many times, there are float values in our data that we will want to display two or three digits after the decimal point. Take a look at the code snippet below to see how this problem can be solved.

import pandas as pd

train = pd.read_csv('')
print(train[['Name','Sex', 'Fare']])

max_precision = pd.get_option('display.precision')
print("Maximum precision is: ", max_precision)

Explanation:

DisplayMaxPrecisionDefault tab: There is a column, Fare, that contains the float values. The precision is high (there are many numbers after the decimal); so, we will need to change the default precision.

DisplayMaxPrecisionCustom tab: Set display.precision to 2, meaning it will only print 2 digits after the decimal point.

In this way, you can change the default display settings in pandas as per your needs.
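Taken together, the three settings above can be exercised in a short, self-contained sketch. Note that it uses a small inline DataFrame instead of the CSV files from the article, so no file URLs are needed:

```python
import pandas as pd

# A toy DataFrame standing in for the article's train/drinks data
df = pd.DataFrame({'Fare': [7.25, 71.2833, 8.05]})

# Raise the row limit so nothing is elided with '...'
pd.set_option('display.max_rows', None)

# Widen the column display so long cell contents are not truncated
pd.set_option('display.max_colwidth', 1000)

# Show only 2 digits after the decimal point
pd.set_option('display.precision', 2)
print(df)

# Read a setting back, then restore everything to the defaults
print(pd.get_option('display.precision'))  # 2
pd.reset_option('display.max_rows')
pd.reset_option('display.max_colwidth')
pd.reset_option('display.precision')
```

Each `reset_option()` call returns that single setting to its library default, which is usually preferable to hard-coding the default values yourself.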
https://www.educative.io/answers/how-to-change-display-options-in-pandas
CC-MAIN-2022-33
refinedweb
473
58.38
On Wed, Sep 19, 2007 at 03:17:05AM +0100, Daniel P. Berrange wrote:
> On Tue, Sep 18, 2007 at 04:13:46AM -0400, Daniel Veillard wrote:
> > On Tue, Sep 18, 2007 at 03:50:18AM +0100, Daniel P. Berrange wrote:
> > > This is a serious patch at supporting Avahi advertisement of the libvirtd
> > > service.
> > >
> > > - configure by default will probe to see if avahi is available and if
> > > found will enable appropriate code.
> > >
> > > --with-avahi will force error if not found
> > > --without-avahi will disable it with no checking
> > >
> > > - HAVE_AVAHI is defined in config.h if avahi is found & used to conditionally
> > > enable some code in qemud/qemud.c
> > >
> > > - HAVE_AVAHI is also a Makefile conditional to enable compilation of the
> > > mdns.c and mdns.h files. A little makefile rearrangement was needed to
> > > make sure variables like EXTRA_DIST were defined before we appended to
> > > them with +=
> > >
> > > - The code in mdns.c contains all the support for dealing with the Avahi
> > > APIs.
> > >
> > > - The primary Avahi API is horribily complicated for day-to-day
> > > use in application code, exposing far too much of the event loop and
> > > state machine. So we expose a simplified API to libvirt in mdns.h
> >
> > Heh, did you tell the Avahi devels ?
>
> No, but perhaps I should. I don;t know if glib has a higher level API for accessing
> the Avahi stuff. I know there's a out-of-box glib event loop impl for it.
>
> > > - The Avahi client library is basically a shim which talks to the Avahi
> > > daemon using DBus system daemon. The DBus stuff doesn't leak out of
> > > the Avahi APIs - it is loosely couple - all we need do is provide Avahi
> > > with an event loop implementation which was surprisingly easy. The
> > > libvirtd daemon does now indirectly link with DBus, but I don't see any
> > > problem with this.
Don't want it - then use --without-avahi
> >
> > That's fine, as long as the extensions don't decrease portability I
> > don't see this as a problem.
>
> Ok I tweaked the configure.in checks I did for avahi slightly. Now, if you don't
> have 'pkg-config' available, it'll simply disable the avahi feature instead of
> failing the entire script. It'll auto-detect by default, and can be overriden
> with the --with/--without-avahi flags.
>
> > > - We advertise a service name of _libvirt._tcp The IETF draft recommends
> > > use of the name from /etc/services associated with your app. There is a
> > > way to register official Avahi services names. We don't have an /etc/service
> > > name registered either though.
> >
> > I rememember we looked at the IANA stuff for registering a port number
> >
> > I suggested using
> > "10 " What SHORT name (14 CHARACTER MAXIMUM) do you want associated with this port number?"
> >
> > libvirt"
> >
> > But we never did that registration step.
>
> Ok, well our mDNS name matches. So guess we should look at actually doing a
> application to register both the mDNS & service name.
>
> > > - This patch does not advertise any per-VM VNC server instances, but I have
> > > prepared the APIs in mdns.h to be ready to support that with minimal effort.
> > > A pre-requisite for this is an extension to the driver API to get async
> > > signals when VMs start & stop, since making the daemon poll hypervisors
> > > will suck big time.
> > >
> > > When implemented each VM will be its own mdns 'group' and the VNC server
> > > associated with it will be the 'service' advertised in that group.
> > >
> > > Having applied this patch & started the daemon, if /etc/init.d/avahi-daemon
> > > is running, you should see the service advertised on the LAN. As mentioned
> > > earlier if you start Avahi daemon after libvirt it should detect this too.
> >
> > Sounds excellent !
>
> I've now tested it on Fedora 7, 8 and RHEL-5 & its working very nicely.
> > > These are all mildy abusing mdns / zeroconf, but then x509 certificates don't
> > > really fit into the model of 'zero conf' in the first place. If people want
> > > true zero-conf then the (SSH) tunnel is better (and always available), but
> > > if they've setup certificates they should still be allowed to use zero-conf
> > > to at least locate hosts. So mildly abusing the rules is reasonable IMHO.
> >
> > Maybe suggesting that application developpers default to SSH when using
> > a server autodetected with Avahi is the most practical ATM assuming we don't
> > find a way to advertise the FQDN. Unless we can find the domain from the
> > locally installed certificate, after all if people want to use the certificate
> > they should have some installed locally and then we can probably guess the
> > right domain name, no ?
>
> Yeah, I reckon recommending SSH is a good general rule for this. I'll keep
> thinking about a better way to handle x509 validation by the client - we can
> easily add more TXT records to the mDNS stuff increemntally if needed.
>
> > > +dnl Avahi library
> > > +AC_ARG_WITH(avahi,
> > > + [ --with-avahi use avahi to advertise remote daemon],
> > > + [],
> > > + [with_avahi=check])
> > > +
> > > +if test "$with_avahi" = "check"; then
> > > + PKG_CHECK_EXISTS(avahi-client >= $AVAHI_REQUIRED, [with_avahi=yes], [with_avahi=no])
> > > +fi
> > > +
> > > +if test "$with_avahi" = "yes"; then
> > > + PKG_CHECK_MODULES(AVAHI, avahi-client >= $AVAHI_REQUIRED)
> > > + AC_DEFINE_UNQUOTED(HAVE_AVAHI, 1, [whether Avahi is used to broadcast server presense])
> > > +else
> > > + AVAHI_CFLAGS=
> > > + AVAHI_LIBS=
> > > +fi
> > > +AM_CONDITIONAL(HAVE_AVAHI, [test "$with_avahi" = "yes"])
> > > +AC_SUBST(AVAHI_CFLAGS)
> > > +AC_SUBST(AVAHI_LIBS)
> >
> > I assume that if an OS has Avahi, then it has pkg-check, in that case it
> > really should not be a problem.
>
> As mentioned above, I also tweaked it to disable Avahi if pkg-config is missing
> completely.
> > > > +
> > > +if HAVE_AVAHI
> > > +libvirtd_SOURCES += mdns.c mdns.h
> > > +libvirtd_CFLAGS += $(AVAHI_CFLAGS)
> > > +libvirtd_LDADD += $(AVAHI_LIBS)
> > > +else
> > > +EXTRA_DIST += mdns.c mdns.h
> > > +endif
> >
> > Wouldn't adding them to EXTRA_DIST in any case be good enough ?
> > if SOURCES and EXTRA_DIST carry the same is that really a problem when
> > building the archive ?
>
> Yep, I made this change - adding them to EXTRA_DIST unconditionally.
>
> > Patch is surprizingly small, looks good, I would commit to CVS, no pending
> > issue looks like a potential real problem for users, and more testing would
> > be good.
>
> Ok, its committed!
>
> Also added the BuildRequires to the spec file & docs on the two new config
> file options.

Excellent, thanks ! Builds the RPMs updated on one F-7 box and after restarting
the daemon the service is now advertized on my LAN, fantastic !

[root paphio ~]# avahi-browse -a
+ eth0 IPv6 Virtualization Host test2 _libvirt._tcp local
+ eth0 IPv4 Virtualization Host test2 _libvirt._tcp local
[...]

Daniel

--
Red Hat Virtualization group
Daniel Veillard | virtualization library
veillard redhat com | libxml GNOME XML XSLT toolkit
| Rpmfind RPM search engine
https://www.redhat.com/archives/libvir-list/2007-September/msg00176.html
CC-MAIN-2014-15
refinedweb
1,080
63.29
NAME

strptime - convert a string representation of time to a time tm structure

SYNOPSIS

#define _XOPEN_SOURCE /* glibc2 needs this */
#include <time.h>

char *strptime(const char *s, const char *format, struct tm *tm);

DESCRIPTION

The strptime() function is the converse function to strftime() and converts the character string pointed to by s to values which are stored in the tm structure pointed to by tm, using the format specified by format. The format string consists of ordinary characters, which must match the input, plus conversion specifications introduced by a '%' character; for example, %% matches a literal '%' in the input.

RETURN VALUE

The return value of the function is a pointer to the first character not processed by this call; if the whole input string is consumed, it points to the NUL byte at the end of the string. If strptime() fails to match all of the format string and therefore an error occurred, the function returns NULL.

CONFORMING TO

XPG4, SUSv2, POSIX 1003.1-2001.

EXAMPLE

The following example demonstrates the use of strptime() and strftime().

GNU EXTENSIONS

For reasons of symmetry, glibc tries to support for strptime the same format characters as for strftime, including:

- %z - An RFC-822/ISO 8601 standard timezone specification.
- %Z - The timezone name.

Similarly, because of GNU extensions to strftime, %k is accepted as a synonym for %H, %l should be accepted as a synonym for %I, and %P is accepted as a synonym for %p. Finally:

- %s - The number of seconds since the epoch, i.e., since 1970-01-01 00:00:00 UTC. Leap seconds are not counted unless leap second support is available.

The GNU libc implementation does not require whitespace between two field descriptors.

SEE ALSO

time(2), getdate(3), scanf(3), setlocale(3), strftime(3)

Important: Use the man command (% man) to see how a command is used on your particular computer.
http://linux.about.com/library/cmd/blcmdl3_strptime.htm
crawl-002
refinedweb
249
55.74
Last few posts I've been building a WPF client against ADO.NET Data Services, if you missed them:

- Using ADO.NET Data Services
- ADO.NET Data Services – Building a WPF Client
- ADO.NET Data Services – Enforcing FK Associations and a Fix for Deleting Entities

Today I want to show you how we can add validation or any other extra processing when data is queried from the data service as well as when we attempt to make changes to the data.

Ways of Querying an ADO.NET Data Service

First let's recap how we can query a data service. In our WPF client we've been taking advantage of LINQ but that's not the only way to send queries to the service. Because we're using the data service client framework we can write LINQ queries against the service and the client handles translating the queries into HTTP GETs. But we could also easily specify the raw URIs to send to the service. For instance, when we want to fill a combobox of categories ordered by CategoryName we could do this instead:

Imports System.Data.Services.Client
. .

'Use the untyped DataServiceContext and pass URIs.
Dim ctx As New DataServiceContext(New Uri(""))

'Explicitly execute the HTTP GET
Dim cats = ctx.Execute(Of Category)(New Uri("Categories?$orderby=CategoryName", UriKind.Relative))

'Display results in a combobox
Me.cboCategoryLookup.ItemsSource = cats.ToList()

In the code above the query is explicitly executed on the second line. You can achieve the same response by typing in the address bar of your browser. However one of the benefits of adding a service reference to our client is that it generates a proxy that inherits from the DataServiceContext that allows typed access to the entity sets that we're exposing from our service. This makes our code cleaner so we can write a LINQ query to do the same job instead:
Imports WpfClient.NorthwindService
. .

Dim ctx As New NorthwindEntities(New Uri(""))

'Use LINQ to query the service instead
Dim cats = From c In ctx.Categories _
           Order By c.CategoryName

'Display results in a combobox.
'The query is executed when we access the results (calling ToList())
Me.cboCategoryLookup.ItemsSource = cats.ToList()

Notice however that there is a subtle difference in this code that you should be aware of. When you write a LINQ query it never executes unless you access the results. In this case we're accessing the results when we call ToList() on the query. This is called deferred execution and it's something to be aware of. But if we look in Fiddler then we can see that the exact same HTTP GET is sent to the service.

This also means that not every LINQ query can be translated into an HTTP GET. For instance, what if we just wanted to display a couple properties of the category in the combobox? You would think we'd be able to do something like this:

Dim cats = From c In ctx.Categories _
           Order By c.CategoryName _
           Select c.CategoryName, c.Description

Unfortunately if you try to do this you'll get a NotSupportedException thrown at you by the client framework because it can't translate the query into an HTTP GET. In this case you need to pull down the category entities you want first and then you can project over those to create a list of anonymous types with only the properties you specify. But remember that you need to "execute" the query against the service first and then query over the list that is returned. Here's a way to do this:

Dim cats = From c In ctx.Categories _
           Order By c.CategoryName

'Now project only the properties we want.
' Call ToList() on the first query to execute the service call
Dim results = From c In cats.ToList() _
              Select c.CategoryName, c.Description

For more details on what is and isn't supported refer to this article.

Intercepting Queries

Now that we understand how queries work we can start messing with how they execute.
:-)

Say we want to query only the Products in Northwind that are not Discontinued. We could write the query against our data service that specifies the filter like so:

Dim products = From p In ctx.Products _
               Where p.Discontinued = False _
               Order By p.ProductName

Or we could write the raw URI:

Dim productURI As New Uri("Products()?$filter=Discontinued%20eq%20false&$orderby=ProductName", _
                          UriKind.Relative)
Dim products = ctx.Execute(Of Product)(productURI)

But what if it was a new requirement of the entire system that nowhere should we be displaying discontinued products? If this is the case we should be enforcing this on our data service instead. This can be done using query interceptors on the service. The way we create these is we annotate a method on our service with the QueryInterceptor attribute. The method you write must follow these rules:

- The method must have public scope and be annotated with the QueryInterceptorAttribute, taking the name of an entity set as a parameter.
- The method must accept no parameters.
- The method must return an expression of type System.Linq.Expressions.Expression(Of Func(Of T, Boolean)) that is the filter to be composed for the entity set.

The first two requirements are easy; the third may be confusing if you've never played with lambda expressions. Basically what happens is you specify additional filtering to apply onto the incoming query via this lambda.
So in order to append our condition that we should only be returning products that are not discontinued we can add this method to our data service:

Imports System.Data.Services
Imports System.Linq
Imports System.ServiceModel.Web
Imports System.Linq.Expressions

<QueryInterceptor("Products")> _
Public Function FilterProducts() As Expression(Of Func(Of Product, Boolean))
    'Only return products that are not discontinued
    Return Function(p) p.Discontinued = False
End Function

End Class

Now we can write our queries without specifying the additional filter on discontinued and this will not be sent to the service from our client in the HTTP GET but will be executed against our database. The query interceptor will execute regardless of whether we write a LINQ query or feed it the raw URI.

LINQ Query:

Dim products = From p In ctx.Products Order By p.ProductName
Me.ListView1.ItemsSource = products.ToList()

URI:

Dim productURI As New Uri("Products()?$orderby=ProductName", UriKind.Relative)
Dim products = ctx.Execute(Of Product)(productURI)
Me.ListView1.ItemsSource = products.ToList()

Pretty slick. You could of course do other processing here first. And you can also specify your own additional service operations as well by attributing them with a <WebGet> attribute. More on those in a later post.
We could write a method in our data service like so:

<ChangeInterceptor("Products")> _
Public Sub OnChangeProducts(ByVal p As Product, ByVal ops As UpdateOperations)
    If ops = UpdateOperations.Add OrElse ops = UpdateOperations.Change Then
        'Do not allow products with empty names
        If p.ProductName = "" Then
            Throw New DataServiceException(400, "Product name cannot be empty")
        End If
    End If
End Sub

When we throw a DataServiceException we can specify the HTTP status code and the message to return to the client. 400 indicates "Bad Request" and we pass the message on what the problem was. So if we try to submit a new or existing product with no product name we will get an HTTP error as seen in Fiddler.

This is what's happening on the wire when we call SaveChanges and the exception is caught on the client. This prevents our data from being invalid no matter what client it's coming from. However this isn't that user-friendly to say the least. If we're building a smart client it's much better to put this type of validation on the client as well. We can do this in our WPF client by extending the Product partial class, implementing IDataErrorInfo and adding our validation.

On the WPF client create a new class called Product and place it in the same exact namespace as the NorthwindEntities data service client proxy that is generated for us when we add the service reference. It's called NorthwindService in our case. Then we can overwrite the partial method OnProductNameChanging to do the client-side validation. This method is called from the ProductName property setter in the generated entity on the client. Here's an example of how we can collect validation messages on the Product.
Imports WpfClient.NorthwindService
Imports System.ComponentModel

Namespace NorthwindService

    Partial Public Class Product
        Implements IDataErrorInfo

        Private Sub OnProductNameChanging(ByVal value As String)
            If value Is Nothing OrElse value.Trim = "" Then
                Me.AddError("ProductName", "Product name cannot be empty")
            Else
                Me.RemoveError("ProductName")
            End If
        End Sub

#Region "IDataErrorInfo Members"

        Private m_validationErrors As New Dictionary(Of String, String)

        Private Sub AddError(ByVal columnName As String, ByVal msg As String)
            If Not m_validationErrors.ContainsKey(columnName) Then
                m_validationErrors.Add(columnName, msg)
            End If
        End Sub

        Private Sub RemoveError(ByVal columnName As String)
            If m_validationErrors.ContainsKey(columnName) Then
                m_validationErrors.Remove(columnName)
            End If
        End Sub

        Friend ReadOnly Property HasErrors() As Boolean
            Get
                Return (Me.Error IsNot Nothing)
            End Get
        End Property

        Friend ReadOnly Property [Error]() As String _
            Implements System.ComponentModel.IDataErrorInfo.Error
            Get
                If m_validationErrors.Count > 0 Then
                    Return "Product Data is invalid"
                Else
                    Return Nothing
                End If
            End Get
        End Property

        Default Friend ReadOnly Property Item(ByVal columnName As String) As String _
            Implements System.ComponentModel.IDataErrorInfo.Item
            Get
                If m_validationErrors.ContainsKey(columnName) Then
                    Return m_validationErrors(columnName).ToString
                Else
                    Return Nothing
                End If
            End Get
        End Property

#End Region

    End Class

End Namespace

Notice that the client-side IDataErrorInfo properties are declared as Friend (internal) so that they are not serialized back up to the server. Next we need to make sure the bindings in the XAML of our ProductDetail form are set up to display the error.

<TextBox Text="{Binding Path=ProductName, ValidatesOnDataErrors=True}" Height="25" Name="TextBox1" Width="180" Margin="3" HorizontalAlignment="Left" />

You can also add a validation ErrorTemplate if you like. I've shown this validation technique with WPF here before.
So when we don't enter the ProductName on a product we can display the problem to the user right away without bothering our data service.

Check out the updated sample on Code Gallery; in there I also implement IEditableObject so that users can cancel out of editing of the products. However, the fact that we have to put rules in two locations in our code is a total drag. If we could type-share the entity partial classes on the server and the client then we could write this code in one place and run it in both the client and the server. This is why if you have complex business rules you're probably better off creating your own DataContracts and implementing your own WCF services. However, applications like this that have simple validation and heavy CRUD requirements make it a perfect candidate to use ADO.NET Data Services.

Next post I'll show how we can query and edit tabular data inside of an Excel client and post changes back to the data service. Enjoy!

Join the conversation

When I try and test your project I get the following error when trying to save the changes. "Error processing request stream. The property name 'Error' specified for type 'NorthwindModel.Product' is not valid." This is a great example except for this error that I am getting.

Hi Patrick, Good catch. I should have declared those client-side properties as Friend so they don't serialize back up to the server. I fixed the code. Thanks! -B

Thanks. Wonderful article. I faced the same issue as Patrick faced with that serialize issue with the Error properties. And I don't know VB — so what should the access specifier be in C# instead of Friend?

Beth, I spent the night looking for an eloquent solution to implementing validation before saving the data back to my Data Service. This totally fits the bill. Thanks!

I'm wondering when it will be possible to use group by with ADO.NET Data Services. It would be really nice!
https://blogs.msdn.microsoft.com/bethmassi/2009/01/21/ado-net-data-services-intercepting-queries-and-adding-validation/
CC-MAIN-2016-36
refinedweb
2,124
56.66
Groovy Goodness: Defining Public Accessible Constant Fields

There is a catch when we define a constant field in Groovy. Rob Fletcher blogged about this in the post Groovy the public keyword a while ago. When we omit the public keyword for a method, the method is still accessible as a public method, because Groovy makes the method public when the class is compiled. When we leave out the public keyword for fields, Groovy creates a getter and setter method for the field at compile time and turns it into a property that follows the Java Bean specification. This is also true if the field is static. So if we define a constant value as static final, we must keep in mind that Groovy will generate a getter method, so the constant value is a read-only property according to the Java Bean specification.

Let's create a simple class with a constant field DEFAULT, a property message and a message method. We leave out any public keyword:

// File: sample.groovy
// Groovy makes class public.
class Sample {
    // Groovy adds getDEFAULT and no setDEFAULT.
    static final String DEFAULT = 'default'

    // Groovy adds setMessage/getMessage
    String message

    // Groovy makes method public.
    void message(final String newMessage) {
        this.message = newMessage
    }
}

If we compile this class we get the following methods and fields (using javap to inspect the class):

$ javap -p -constants Sample
Compiled from "sample.groovy"
public class Sample implements groovy.lang.GroovyObject {
  private static final java.lang.String DEFAULT = "default";
  private java.lang.String message;
  ...
  public void message(java.lang.String);
  ...
  public static final java.lang.String getDEFAULT();
  public java.lang.String getMessage();
  public void setMessage(java.lang.String);
}
To overcome this we simply add public to our constant field definition. This way Groovy will leave the field unchanged and in the generated class file it is still public. Then from Java we can use Sample.DEFAULT to access the constant value. Let's see the output of javap when we make the DEFAULT field public:

$ javap -p -constants Sample
Compiled from "sample.groovy"
public class Sample implements groovy.lang.GroovyObject {
  public static final java.lang.String DEFAULT = "default";
  private java.lang.String message;
  ...
  public void message(java.lang.String);
  ...
  public java.lang.String getMessage();
  public void setMessage(java.lang.String);
}

This also helps an IDE, like IntelliJ IDEA, to do a proper import static based on the constant field.

Written with Groovy 2.4.4.
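As an aside, the difference the two javap listings show can be mimicked in plain Java — a hand-written sketch of the two compiled shapes, not anything Groovy actually generates (class and member names here are illustrative):

```java
public class SampleInterop {
    // Mimics what the Groovy compiler produces for a non-public
    // `static final` field: the field becomes private and is exposed
    // through a public getter, so Java callers must use the getter.
    private static final String HIDDEN = "default";

    public static String getHIDDEN() {
        return HIDDEN;
    }

    // An explicitly `public static final` field is left untouched,
    // so Java callers can read SampleInterop.DEFAULT directly.
    public static final String DEFAULT = "default";

    public static void main(String[] args) {
        System.out.println(getHIDDEN());  // getter access
        System.out.println(DEFAULT);      // direct field access
    }
}
```

The two forms hold the same value; the only difference is the access path a Java caller has to use.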
https://blog.jdriven.com/2015/09/groovy-goodness-defining-public-accessible-constant-fields/
CC-MAIN-2021-43
refinedweb
443
68.36
Today we will look into Builder pattern in java. Builder design pattern is a creational design pattern like Factory Pattern and Abstract Factory Pattern.

Builder Design Pattern

Builder pattern was introduced to solve some of the problems with Factory and Abstract Factory design patterns when the Object contains a lot of attributes. There are three major issues with Factory and Abstract Factory design patterns when the Object contains a lot of attributes.

- Too many arguments to pass from the client program to the Factory class can be error prone, because most of the time the types of the arguments are the same and from the client side it's hard to maintain the order of the arguments.
- Some of the parameters might be optional, but in the Factory pattern we are forced to send all the parameters, and optional parameters need to be sent as NULL.
- If the object is heavy and its creation is complex, then all that complexity will be part of the Factory classes, which is confusing.

We can solve the issues with a large number of parameters by providing a constructor with required parameters and then different setter methods to set the optional parameters. The problem with this approach is that the Object state will be inconsistent until all the attributes are set explicitly.

Builder pattern solves the issue with a large number of optional parameters and inconsistent state by providing a way to build the object step-by-step and providing a method that will actually return the final Object.

Builder Design Pattern in Java

Let's see how we can implement builder design pattern in java.

- First of all you need to create a static nested class and then copy all the arguments from the outer class to the Builder class. We should follow the naming convention and if the class name is Computer then the builder class should be named as ComputerBuilder.
- The builder class should expose a method to build and return the Object needed by the client program. For this we need to have a private constructor in the Class with the Builder class as argument.
Here is the sample builder pattern example code where we have a Computer class and a ComputerBuilder class to build it.

package com.journaldev.design.builder;

public class Computer {

    //required parameters
    private String HDD;
    private String RAM;

    //optional parameters
    private boolean isGraphicsCardEnabled;
    private boolean isBluetoothEnabled;

    public String getHDD() {
        return HDD;
    }

    public String getRAM() {
        return RAM;
    }

    public boolean isGraphicsCardEnabled() {
        return isGraphicsCardEnabled;
    }

    public boolean isBluetoothEnabled() {
        return isBluetoothEnabled;
    }

    private Computer(ComputerBuilder builder) {
        this.HDD = builder.HDD;
        this.RAM = builder.RAM;
        this.isGraphicsCardEnabled = builder.isGraphicsCardEnabled;
        this.isBluetoothEnabled = builder.isBluetoothEnabled;
    }

    //Builder Class
    public static class ComputerBuilder {

        // required parameters
        private String HDD;
        private String RAM;

        // optional parameters
        private boolean isGraphicsCardEnabled;
        private boolean isBluetoothEnabled;

        public ComputerBuilder(String hdd, String ram) {
            this.HDD = hdd;
            this.RAM = ram;
        }

        public ComputerBuilder setGraphicsCardEnabled(boolean isGraphicsCardEnabled) {
            this.isGraphicsCardEnabled = isGraphicsCardEnabled;
            return this;
        }

        public ComputerBuilder setBluetoothEnabled(boolean isBluetoothEnabled) {
            this.isBluetoothEnabled = isBluetoothEnabled;
            return this;
        }

        public Computer build() {
            return new Computer(this);
        }
    }

}

Notice that the Computer class has only getter methods and no public constructor. So the only way to get a Computer object is through the ComputerBuilder class. Here is a builder pattern example test program showing how to use the Builder class to get the object.
package com.journaldev.design.test;

import com.journaldev.design.builder.Computer;

public class TestBuilderPattern {

    public static void main(String[] args) {
        //Using builder to get the object in a single line of code and
        //without any inconsistent state or arguments management issues
        Computer comp = new Computer.ComputerBuilder("500 GB", "2 GB")
                .setBluetoothEnabled(true)
                .setGraphicsCardEnabled(true)
                .build();
    }
}

Builder Design Pattern Video Tutorial

Recently I uploaded a YouTube video for Builder Design Pattern. I have also explained why I think the builder pattern defined on WikiPedia using Director classes is not a very good Object Oriented approach, and how we can achieve the same level of abstraction using a different approach and with one class. Note that this is my point of view; I feel design patterns are to guide us, but ultimately we have to decide if it's really beneficial to implement them in our project or not. I am a firm believer in the KISS principle. If you like the video, please do share it, like it and subscribe to my channel. If you think I am mistaken or you have any comments or feedback so that I can improve my videos in future, please let me know through comments here or on the YouTube video page.

Builder Design Pattern Example in JDK

Some of the builder pattern examples in Java classes are:

- java.lang.StringBuilder#append() (unsynchronized)
- java.lang.StringBuffer#append() (synchronized)

That's all for builder design pattern in java.

Why are we using inner class here? We can create a constructor of Computer like ComputerBuilder and set the optional fields by setters, as you are doing with ComputerBuilder.

Hi Pankaj, Can you explain "The problem with this approach is that the Object state will be inconsistent until unless all the attributes are set explicitly." this line in more detail.
Hello, thank you for your presentation. I see something wrong with your example: you said it resolved the issue about having a lot of arguments, but I don't see how your example keeps the benefits of Factory and Abstract Factory? Thanks

Not sure what you are trying to say, can you please elaborate?

While reading the book written by the GoF, I saw that the Builder pattern structure includes three parts: Director, Builder, ConcreteBuilder. However, I haven't seen the Director in your code; what's more, the book also mentioned that a Builder may have several ConcreteBuilders, which I couldn't understand.

If Computer is a component interface, say Apple's Mac, and you want to create its instance, is it possible for you to assemble the hard disk, monitor, etc.? For component interfaces or classes whose object creation is very complex we use the Factory Method design pattern, but it has limitations and is suitable for less data; if you want to populate the object with a lot of data then the builder pattern is the best one, and for that you have to give all the complex data to the builder class.

Hey man, thanks for this nice video/article. I have a question: what if you are working with a POJO that you need to persist with JPA/Hibernate? Do we need the setters for that?

Hibernate requires a no-args constructor as well as getters and setters for the fields. So yes, they will be required to work with Hibernate.

Since we have to invoke the constructors and setter methods explicitly anyway, can't we do it directly on the Computer class via a public constructor? Why did we even need the inner class?

Once you build the object it is immutable. We can achieve that only by providing setter methods on the nested class. Otherwise, we would expose public setters on the class, causing mutability of the object.

This is a different version of the Builder pattern from the one I had come to know, but it's definitely a valid way of implementing things. Thanks for sharing this with the code.
If you are copying the same data from the Computer class to the ComputerBuilder class and passing those parameters from the client to the builder, why are we doing this? Why can't we directly instantiate an object of Computer from the client? What is the use of the builder pattern? Because you anyway have a constructor in the builder class for which the client needs to send parameters; in the same way we could declare a constructor in the main class itself and pass parameters from the client. Your example does not make clear why to use the builder pattern.

I have the same question.

Immutability is the key. Once the object is built, it cannot be changed since there are no setters.

A question regarding "We should follow the naming convention and if the class name is Computer then builder class should be named as ComputerBuilder": since our builder is a nested class, it will usually be addressed by something like ContainingClass.NestedClass, right? For example ClassToBeBuilt.Builder. Why does the convention suggest to use ClassToBeBuilt.ClassToBeBuiltBuilder instead?

It's not a requirement to have the Builder class as a nested class. Just for simplicity of coding, I have created it as a nested class. If you have a lot of POJOs and their builder classes, then the best strategy is to have a dedicated package for builder classes. For example, com.journaldev.java.pojo for POJOs and com.journaldev.java.builders for Builder classes. However, note that in this case the POJO classes' constructor can't be private, and they can be instantiated directly.

Font is too big to read. Please make it one unit smaller to boost readability.

The Builder pattern requires a Director; I don't see it here. Just calling build() at the end does not make it the builder pattern.

Continuing: the Builder pattern must build a complete object in parts, i.e. there must be an abstract builder, and then concrete builders must be responsible for building one part each. StringBuilder in Java is not a right example of the Builder pattern, and neither is HttpSecurityBuilder in Spring.
The Builder pattern I have explained here is the same as defined by Joshua Bloch in Effective Java. I don't like the Builder pattern implementation provided on Wikipedia. It defeats the purpose of having variables. According to the Wikipedia implementation, every Director class will be able to create only one variant of an object. So if we have Director classes for "Car" and we want to have 5 different colours, it will result in classes like BlackCarDirector, RedCarDirector, GreenCarDirector. I hope you understand where I am going with this. Again, it's my point of view; design patterns provide us a way to do things properly, but it doesn't mean we have to follow them strictly as-is. We can make certain modifications based on our project requirements to make things even better with code simplicity.

Pankaj, Joshua Bloch is a really good guy, but it does not mean his approach is the only and always the best one; I have studied his approaches for the last 15 years. In fact, the builder pattern as implemented in Java following Joshua's approach has completely inefficient duplication of data members. It is very easy to create the product with private setters and create the product object first (it would be returned by the build method if called). We need only delegate setter calls from the builder class without keeping all the duplicates. That of course is under the condition that the product has a no-parameter constructor, as it should as per the requirement of many frameworks. Well, if the condition is not fulfilled, then we have to defer product creation to the point when all constructor parameters are provided, and duplication would exist just for that part.

BTW, it is irrelevant whether you like the canonical builder pattern by the Gang of Four or not. It is what it is, and limitations of a technology (like the missing "friendship" semantics from C++) are a problem of that technology and not of the pattern. I keep asking questions about patterns in interviews, and I am not going to ask Java-specific questions when it comes to GoF patterns.
There is a reason for this, and I suggest people study the motivation section more carefully (one of my typical questions, especially for senior positions).

The append method of StringBuilder simply adds more characters to the existing string. There are no new objects created (a basic function of the builder pattern) and there is no design pattern involved. It's a single method call on a single object instance. Please correct it.

Well explained. I see the Builder pattern being used in Spring Security, something like this. Is this a reference to the pattern being used?

public final class HttpSecurity extends AbstractConfiguredSecurityBuilder implements SecurityBuilder, HttpSecurityBuilder {

It's too complex for me to grasp easily in a Spring custom class. However, this tutorial example makes the basic idea of using this Builder pattern clear and easy to implement.

@Override
public void configure(HttpSecurity http) throws Exception {
    http.anonymous().disable()
        .requestMatchers().antMatchers("/user/**")
        .and().authorizeRequests()
        .antMatchers("/user/**").access("hasRole('ADMIN')")
        .and().exceptionHandling().accessDeniedHandler(new OAuth2AccessDeniedHandler());
}

Thank you Pankaj / JournalDev.

Please correct this syntax which is used in the builder pattern example. Your code:

Computer comp = new Computer.ComputerBuilder("500 GB", "2 GB").setBluetoothEnabled(true)
    .setGraphicsCardEnabled(true).build();

The call statement "new Computer.ComputerBuilder" is wrong.

Please see the full call; the build method is returning us the Computer object.

Very nice article for design patterns!! A single point of contact for all design patterns for beginners.

It seems the only problem addressed here is when there are a lot of input arguments to the constructor and not all of them are necessary. The solution specified above is just one use case of the builder pattern. I don't know if it is correct; please refer to Effective Java for the above-specified problem, Chapter 2 Item 2.
This is not the definition of the Builder pattern described in GoF. However, it's a good solution for the problem mentioned at the beginning of the article, which was addressed in the Effective Java book.

I have one question: we are returning this in the set() methods. I know it is very important to return this in the Builder pattern, but can you tell me what actually happens when we return this and how it is executed? I have some confusion about how the compiler deals with calling the set() methods of different fields multiple times in the same line of code while creating an object, and how the object reference is picked up before build().

The Builder pattern described here is completely different from other websites' implementations. Here it is shown with a static inner class, but elsewhere it's not. Not sure which one is correct.

This blog was really nicely articulated and helpful. Thanks.

We can also create the object this way. Can you please explain the difference between using return this and the way below?

public class TestBuilderPattern {

    public static void main(String[] args) {
        // Using builder to get the object in a single line of code and
        // without any inconsistent state or arguments management issues
        Computer.ComputerBuilder comp = new Computer.ComputerBuilder("500 GB", "2 GB");
        comp.setBluetoothEnabled(true);
        comp.setGraphicsCardEnabled(true);
        comp.build();
        System.out.println(comp);
    }

    public void setGraphicsCardEnabled(boolean isGraphicsCardEnabled) {
        this.isGraphicsCardEnabled = isGraphicsCardEnabled;
        // return this;
    }

    public void setBluetoothEnabled(boolean isBluetoothEnabled) {
        this.isBluetoothEnabled = isBluetoothEnabled;
        // return this;
    }
}

What is the need of getter and setter methods in the outer class in the above example if we are providing a private constructor and no outside code can access the outer class?

That's correct as per my knowledge as well. There is no need to have getter and setter methods in the outer class.
Rishi: There are no setters in the outer class, only getters. Secondly, you do need those getter methods, otherwise how are you going to utilize the properties set in the outer class? Having just a private constructor doesn't mean you cannot access the outer class:

Computer comp = new Computer.ComputerBuilder("500 GB", "2 GB").setBluetoothEnabled(true)
    .setGraphicsCardEnabled(true).build(); // This is creating the outer object
System.out.println(comp.isBluetoothEnabled()); // This is use of a getter method

Can you please share/post the UML design for the example explained above? Thanks, Ashakant

Hi Pankaj, there is still an issue with this approach. We are still expecting the client code to invoke the setters explicitly to ensure that the object state is consistent before the final object is returned upon invocation of the build() method. We could invoke the build method without even calling the setters; the boolean variables would be initialized to false by default, and the object creation would still succeed with a "consistent" state. I think another approach could be to provide a series of overloaded constructors with various combinations of optional parameters which invoke each other in sequence until all the parameters are set and the final object is created. In this case, we would not even require a build method.

You can easily avoid that by using Boolean rather than the primitive type boolean; the object default value is null. Also, you should have a constructor with all the mandatory parameters, so that the client is forced to provide values for them. Creating multiple constructors with optional parameters will not help, and you would not be using the Builder pattern then.

Hello Pankaj, I have a question: why have you used only a static nested class and not a normal/regular inner class? I know the differences, but why did we use a static nested class in this example? Please reply!
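As an aside, Pankaj's suggestion above about using the wrapper Boolean instead of the primitive boolean works because a wrapper field defaults to null, so "never set" is distinguishable from "set to false". A minimal sketch of the idea (the class and method names are illustrative, not from the article):

```java
public class OptionalFlagDemo {
    // Wrapper type: stays null until the client explicitly sets it.
    private Boolean bluetoothEnabled;

    public void setBluetoothEnabled(boolean value) {
        this.bluetoothEnabled = value;
    }

    // With a primitive boolean this check would be impossible:
    // the field would silently default to false.
    public boolean isConfigured() {
        return bluetoothEnabled != null;
    }
}
```

Setting the flag to false still counts as "configured", which is exactly the distinction the primitive type cannot express.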
The primary reason for using a static nested inner class here is to create an inner class object without instantiating the enclosing outer class. Instead of creating multiple overloaded constructors and/or various setters for the Computer class and trying to construct an object in multiple steps (an object created this way might be inconsistent until all the needed fields are set!), we construct the Computer object using the static nested inner class ComputerBuilder.

Good article. Why do the functions below need to return "this" when setting a parameter? Can you please give a pointer on this?

return this; is used to return the current object; this is how the Builder pattern works, and it's key to it. Otherwise new Computer.ComputerBuilder("500 GB", "2 GB").setBluetoothEnabled(true).setGraphicsCardEnabled(true).build(); would not work.

Good article. Simple, lucid and very specific. It brings all the initialization complexity into the inner class, which keeps your outer class clean; love this pattern. Is it that the builder pattern cannot be implemented on classes whose attributes keep shifting from mandatory to optional and vice versa, unless it's designed to take every attribute through the setter methods of the inner class?

Yes, if the attributes keep changing from mandatory to optional and vice versa, we will have to change the inner class, which will break the pattern. So the implementation should be done based on clear requirements.
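To close out the return this; discussion above: each setter hands back the very same builder object, which then becomes the receiver of the next call in the chain. A stripped-down illustration of the mechanics (the class name is mine, not from the article):

```java
public class FluentDemo {
    private final StringBuilder parts = new StringBuilder();

    public FluentDemo add(String part) {
        parts.append(part);
        return this; // the same object comes back, so calls can be chained
    }

    public String build() {
        return parts.toString();
    }
}
```

A chained call like new FluentDemo().add("a").add("b").build() evaluates left to right: every add returns the same instance, and build() finally produces the result.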
https://www.journaldev.com/1425/builder-design-pattern-in-java
README

👉 wskdebug has moved to Apache OpenWhisk

New project:

- NPM: @openwhisk/wskdebug
- Github: apache/openwhisk-wskdebug

Note that the npm name has changed. Install using:

npm install -g @openwhisk/wskdebug --unsafe-perm=true

All new development will happen at apache/openwhisk-wskdebug. This repository is archived and the @adobe/wskdebug npm package deprecated.

wskdebug

Debugging and live development for Apache OpenWhisk. CLI tool written in Node.js and depending on a local Docker. Integrates easily into IDEs such as Visual Studio Code.

Live development of a web action using wskdebug:

Contents

Installation

wskdebug requires Node.js (version 10+), npm and a local Docker environment. To install or update, run:

npm install -g @adobe/wskdebug

Uninstall

npm uninstall -g @adobe/wskdebug

About

wskdebug is a command line tool for debugging and live development of OpenWhisk actions. Currently, Node.js actions are supported out of the box. For others, basic debugging can be configured on the command line, while automatic code reloading needs an extension in wskdebug.

Note on timeouts

Usage

The action to debug (e.g. myaction) must already be deployed.

- Node.js: Visual Studio Code
- Node.js: Multiple actions
- Node.js: Plain usage
- Node.js: Chrome DevTools
- Node.js: node-inspect command line
- Unsupported action kinds
- Source mounting
- Live reloading
- Hit condition
- Custom build step
- Help output

Node.js: Visual Studio Code

Node.js: Multiple actions

Node.js: Plain usage
Node.js: Chrome DevTools

Run Node.js: Plain usage and then:

- Open Chrome
- Enter about:inspect
- You should see a remote target app.js
- Click on "Open dedicated DevTools for Node" (but not on "inspect" under Target)
- This should open a new window
- Go to Sources > Node
- Find the runner.js
- Set a breakpoint on the line thisRunner.userScriptMain(args) inside this.run() (around line 97)
- Invoke the action
- Debugger should hit the breakpoint
- Then step into the function; it should now show the action sources in a tab named like VM201 (the OpenWhisk nodejs runtime evals() the script, hence it's not directly listed as a source file)

Node.js: node-inspect command line

Run Node.js: Plain usage and then use the command line Node debugger node-inspect:

node-inspect 127.0.0.1:9229

Unsupported action kinds

Source mounting

Live reloading

There are 3 different live reload mechanisms possible that will trigger something when sources are modified. Any of them enables the hot reloading of code on any new activation.

- Browser LiveReload using -l: works with LiveReload browser extensions (though we noticed only Chrome worked reliably) that will automatically reload the web page. Great for web actions that render HTML to browsers.
- Action invocation using .
- Arbitrary shell command using .

Hit condition

Custom build step

Help output

wskdebug <action> [source-path]

Debug an ngrok.com for agent forwarding. [boolean]
--ngrok-region  Ngrok region to use. Defaults to 'us'. [string]

Options:
  -v, --verbose  Verbose output.
Logs activation parameters and result [boolean]
  --version      Show version number [boolean]
  -h, --help     Show help [boolean]

Troubleshooting

Cannot install globally

If you get an error during npm install -g @adobe/wskdebug like this:

ngrok - downloading binary
ngrok - error storing binary to local file [Error: EACCES: permission denied, open ''] { errno: -13, code: 'EACCES', syscall: 'open', path: '' }

run the command below before trying the install again:

sudo chown -R $(whoami) /usr/{lib/node_modules}

The dependency ngrok requires full write permission in /usr/local/lib/node_modules during its custom install phase. This is a known ngrok issue.

Does not work, namespace shows as undefined

Your ~/.wskprops must include the correct NAMESPACE field. See issue #3.

No invocations visible in wskdebug

- Is.
- Wait a bit and try again. Restart (CTRL+C, then start wskdebug again), wait a bit and try again. Catching the invocations is not 100% perfect.

Port is already allocated

Restore action

How it works

ngrok localhost forwarding. It must be manually selected using --ngrok on the command line. This works even without an ngrok account.

Development

Extending wskdebug for other kinds

For automatic code reloading for other languages, wskdebug needs to be extended to support these kinds. This happens inside src/kinds.

- Mapping of kinds to docker images
- Custom debug kind
- Default debug ports and commands
- Support code reloading
- Available variables

Mapping of kinds to docker images

To change the mapping of kinds to docker images (based on runtimes.json from OpenWhisk), change src/kinds/kinds.js.

Custom debug kind
Default debug ports and commands

To just add default debug ports and a docker command for a kind, add a custom debug kind and export an object with description, port and command fields. Optionally dockerArgs for extra docker arguments (such as passing in environment variables using -e if necessary).

Support code reloading

Available variables

See also invoker.js. Note that some of these might not be set yet; for example invoker.debug.port is not yet available when port() is invoked. The raw CLI args are usually available as invoker.<cli-arg>.

Contributing

Contributions are welcomed! Read the Contributing Guide for more information.

Licensing

This project is licensed under the Apache V2 License. See LICENSE for more information.
https://www.skypack.dev/view/@adobe/wskdebug
Introduction: Raspberry Pi - GPIOs, Graphical Interface, Python, Math, and Electronics

Hello people! This instructable aims at playing with some of the Raspberry Pi's features. I won my Raspberry Pi from one of my friends some time ago, and at the beginning I had no idea how to use it. I was already a bit familiar with Arduino, but didn't know how the Raspberry works. Since then, it has helped me understand a little more about Linux, get started with the Python language, and also play a little with Python's Pygame module. So, now you have your pretty Raspberry Pi running Raspbian and ready for coding? Let's see what we can do.

Step 1: Getting Started

I presume that you have a Raspberry running Raspbian and connected to a screen. If you don't, don't wait too long: there are a lot of good instructables teaching how to burn Raspbian onto an SD card. You won't need to download anything, so there is no need to connect your Raspberry to the internet, if you were worried (I only have wifi in my accommodation). Raspbian fortunately already comes with Pygame and RPi.GPIO, all the software that we need for this task. However, to test your GPIOs you will need additional electronics. I used an RGB LED (with resistors), but you can use any LED you want, or other pieces of electronics you are already comfortable with (LED displays, motor drivers...).

Step 2: Running Python Codes

I generally use Python's IDLE to code; to open it go to the Raspbian menu, then Programming, then Python 3 (or 2). I don't know exactly all the differences between Python 2 and Python 3, except that I know that print is different:

print("Hello World!") #Python 3
print "Hello World!"  #Python 2

After clicking the Python icon, the Python Shell opens; here we can input commands one at a time. For example:

>>>2+2
4
>>>

or:

>>>print("Nothing")
Nothing
>>>

OK, but I generally use the editor to write code. To open the editor, go to the File menu, then 'New window'. You will see the following window; use it to type your codes. Oh, I almost forgot.
Differently from C and Arduino, in Python we use indentation to delimit code blocks. See the difference below:

C syntax:

if(i==5){
    printf("equal to five");
}

Python syntax:

if i==5:
    print("equal to five")

After finishing your code, save it in the '.py' format. I recommend you not change the save location, as the home folder is the first place opened in the terminal. To run your codes, open LXTerminal and type:

sudo python3 nameofthescript.py

Also, I recommend this site to study Python.

Step 3: Raspberry GPIO Numbering

There are two kinds of numbering on the Raspberry board, the main chip one (BCM) and the board one. Personally, I prefer the board numbering, but anyway I always have to google a picture hehehe, so both are good; choose the one you prefer, but remember to stick with it (if you mix the two, it can be a bad time). You can see which GPIO is connected to a pin in this link from Wikipedia. Just remember that if you choose the board numbering, you will use the pin numbers. However, if you choose the BCM numbering, you will have to use the GPIO number connected to the pin you want to use. For example: GPIO4 is connected to physical board pin 7.

If you choose the board numbering:

GPIO.setmode(GPIO.BOARD)
GPIO.setup(7,GPIO.OUT)

If you choose the BCM numbering:

GPIO.setmode(GPIO.BCM)
GPIO.setup(4,GPIO.OUT)

These two pieces of code do the same thing, although you can hear me whisper that the board numbering is easier to use ;).

Step 4: Understanding How Raspberry GPIOs Work

Raspberry GPIOs work not so differently from the Arduino ones. The sequence is the same: first you define the direction of the ports (input or output), then you read or write the state of the port. Differently from Arduino, though, some settings have to be done before you can use them, but you will get them easily. Sometimes we take pieces of code that other programmers kindly share, to make programming easier.
These pieces of code are called libraries; with Arduino we use '#include' to import a library into our project. With Python, we use 'import library' or 'import library as something', so we can use them. To use GPIOs with Python we follow some simple steps.

The first step is to import the RPi.GPIO library:

import RPi.GPIO as GPIO

Next, you will need to set the numbering that you want to use:

GPIO.setmode(GPIO.BOARD)

or:

GPIO.setmode(GPIO.BCM)

Then, we have to set the direction of the pin (output in this case):

GPIO.setup(pin,direction) # equivalent to Arduino 'pinMode'

example:

GPIO.setup(3,GPIO.OUT)

Finally, we set the state of the pin (high or low):

GPIO.output(pin,state) # equivalent to Arduino 'digitalWrite'

example:

GPIO.output(3,GPIO.HIGH)

Step 5: Simple Blink Program

Let's see how to run the 'hello world' of electronics on the Raspberry. Arduino has the delay() function that lets us wait some time before the next line of code. However, on the Raspberry we need to import the time library to have a form of delay. Then, to delay we use 'time.sleep(t)', where t is the time in seconds, differently from Arduino where we use 'delay(t)' with t given in milliseconds. Continuing, below is the code to make an LED connected to board pin 3 blink:

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)
GPIO.setup(3,GPIO.OUT)

while True:
    GPIO.output(3,GPIO.HIGH)
    time.sleep(1)
    GPIO.output(3,GPIO.LOW)
    time.sleep(1)

Step 6: What Is Pygame?

Pygame is a Python module used to make games, although you can use it just to draw things on the screen, as well as to receive input from the keyboard and mouse. I will show and explain how to use it to control the GPIOs through a simple graphical interface.
To use the Pygame library, you will have to import it first:

import pygame

Then, you will have to start the display to be able to draw on screen:

pygame.init()
screen=pygame.display.set_mode((480,160))

The second line creates a surface where we can draw. The arguments inside the two parentheses are the size of our canvas. I used 480x160 to best fit my code. When you get used to it, you will find the best size for your project.

Step 7: Cartesian Plane

Your computer's screen is formed by pixels organized in a grid. It is like a Cartesian plane, but its origin (0,0) is at the top left corner. I think I read somewhere that this is because of how old computers' screens worked: the lines were drawn from left to right, one by one, from the top to the bottom of the monitor. Please see the picture below with some points marked.

The points are referenced by their x and y coordinates, where x is the horizontal distance and y is the vertical distance. The first coordinate is x and the second is y, in the form (x,y). That said, the points in the picture above are:

A=(2,2)
B=(9,1)
C=(7,4)
D=(11,3)
E=(1,9)
F=(12,9)

So, when we want to draw something on the screen, we have to think how far from the origin it has to be drawn. For example, if we are using a screen size of 320x240 and want to draw a circle in the middle, we draw its center at (160,120). See, it is easy, is it not? Remember that, differently from school, there are no negative values.

Step 8: Drawing Objects

To draw shapes on the screen, we have to know the syntax of each one. The syntaxes for rectangles, circles, arcs, lines and others can be found at this link. For example, to draw the circles I used as buttons:

pygame.draw.circle(screen,red,(80,80),60,0)

This code draws the first circle in the image above. The first parameter is 'screen', the surface we created some steps ago. 'red' is the color for the circle, but we create these colors ourselves.
The first two numbers (80,80) are the location of the center of the circle (remember the Cartesian plane) and the last two are the radius and thickness (leave thickness at zero to fill the circle completely). Ah look, to define a new color we use:

color=(R,G,B)

ex:

red=(255,0,0)
purple=(255,0,255)
black=(0,0,0)
white=(255,255,255)

We just mix red, green and blue to make the colors we want to use. The values for red, green and blue can be anything between 0 and 255, where 255 is the brightest. You will need to update the screen whenever you draw a new object on it; that is simple:

pygame.display.update()

Step 9: Events

In programming, an event can change the behavior of an algorithm; for example, a mouse click can make the program run a particular piece of code. I used an event in my code to change the state of LEDs attached to GPIOs when the inside of a circle is clicked. To check if there was a mouse click, first we check the events happening; if there was any mouse click, we get the position of the pointer:

ev=pygame.event.get()
for event in ev:
    if event.type == pygame.MOUSEBUTTONUP:
        mpos=pygame.mouse.get_pos()

Step 10: Pythagoras' Theorem

What does Pythagoras have to do with Python and Raspberry? Simple: suppose we need to verify whether the inside of a circle was clicked. How do we proceed? Well, first we need the position of the click; we already know from the last step how to get this. Also, we already have the center of the circle, as we defined it when drawing. Then, we just need to know if the distance from the clicked point to the center of the circle is smaller than or equal to the radius of the circle. Please see the image above. The green point represents a click outside the circle. We can see that the distance from the center of the circle to the clicked point is bigger than the radius of the circle. The blue point, however, is inside the circle, and we can see that its distance from the center of the circle is smaller than the radius.
From Pythagoras's theorem, it is known that the square of the largest side of a right triangle is equal to the sum of the squares of the two smaller sides. So, to calculate the largest side we do a simple operation: the largest side of the right triangle is the square root of the sum of the squares of the two smaller sides (confusing? See the image above). Then, we get that the distance from the circle center is:

distance = square root((x-xcenter)^2+(y-ycenter)^2)

where x and y are the coordinates of the clicked point, and xcenter and ycenter are the coordinates of the center of the circle. This distance must be smaller than the radius of the circle if the inside of the circle was clicked.

Block of code in Python:

if math.sqrt(math.pow(mpos[0]-240,2)+math.pow(mpos[1]-80,2))<60:

The math.sqrt method takes the square root of the argument, which is the sum of the squares. mpos[0] and mpos[1] are the mouse click coordinates. 240 and 80 are the coordinates of the center of the circle. math.pow(x,n) computes x^n, the square in this case.

Step 11: RGB LEDs

I connected an RGB LED to pins 3, 5 and 7 on my Raspberry Pi. The circles inside the program control each color of the LED. An RGB LED is like three LEDs together: putting a voltage on each pin lights up red, green or blue. Activating more than one terminal mixes the colors and generates new ones. For example, mixing red and green creates yellow, blue and green generates teal, and mixing red, green and blue generates white. However, high-brightness RGB LEDs don't mix the colors well; we need something to diffuse the light, and that's why I used that cute hat you can see in the last picture above. Don't forget LEDs need resistors; without resistors they can burn out or even damage your Raspberry Pi. I used 1k resistors because they were the only ones I had here, but they are higher than the value needed, so no problem. You can calculate the value for your resistors here.

Step 12: Whole Code
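Before the whole code, a small aside: the distance test from Step 10 can be wrapped in a reusable helper, which makes it easy to check any number of buttons. This is my own sketch, not part of the original script:

```python
import math

def is_inside_circle(point, center, radius):
    # True if the distance from point to center is at most the radius
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    return math.sqrt(dx * dx + dy * dy) <= radius
```

For example, a click at (250, 90) is inside the green button centered at (240, 80) with radius 60, while a click at (0, 0) is not.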
Below is the code that I wrote; also, if you wish, you can download it below. Remember that to run Python scripts on the Raspberry we need to open the terminal and type:

sudo python3 nameofthescript.py

Just leave a comment if you have any problem running the code ;)

Whole code:

import RPi.GPIO as GPIO
import time
import pygame
import math

pygame.init()
screen=pygame.display.set_mode((480,160))
mpos=(0,0)

GPIO.setmode(GPIO.BOARD)
GPIO.setup(3,GPIO.OUT) #Red
GPIO.setup(5,GPIO.OUT) #Green
GPIO.setup(7,GPIO.OUT) #Blue

white=(255,255,255)
red=(255,0,0)
green=(0,255,0)
blue=(0,0,255)

screen.fill(white)
pygame.draw.circle(screen,red,(80,80),60,0)
pygame.draw.circle(screen,green,(240,80),60,0)
pygame.draw.circle(screen,blue,(400,80),60,0)
pygame.display.update()

GPIO.output(3,GPIO.LOW)
GPIO.output(5,GPIO.LOW)
GPIO.output(7,GPIO.LOW)

while True:
    ev=pygame.event.get()
    for event in ev:
        if event.type == pygame.MOUSEBUTTONUP:
            mpos=pygame.mouse.get_pos()
            print(mpos)
            if math.sqrt(math.pow(mpos[0]-80,2)+math.pow(mpos[1]-80,2))<60:
                print("red")
                GPIO.output(3,1^GPIO.input(3))
            if math.sqrt(math.pow(mpos[0]-240,2)+math.pow(mpos[1]-80,2))<60:
                print("green")
                GPIO.output(5,1^GPIO.input(5))
            if math.sqrt(math.pow(mpos[0]-400,2)+math.pow(mpos[1]-80,2))<60:
                print("blue")
                GPIO.output(7,1^GPIO.input(7))
    pygame.event.pump()

Step 13: Thank You!

That is it! I hope that you had as much fun as I had. If you have any doubt or problem, or think that this instructable could be better, or anything else, just say so in the comments. See you in another instructable. o/

8 Discussions

Great post, sir. As mentioned, GPIO.output acts like Arduino's digitalWrite; in the same way, what is the syntax for digitalRead?

Hi, PradipS17. You can read GPIOs with GPIO.input(pin), where pin is the number of the pin you want to read.

Dear sir, please tell me: can we do it like this, i.e. can we assign a variable to store the result of this input pin, and can we use this variable later in another line to check this pin's status? Example:
int status;

void setup()
  (pin setup)

void loop()
  status = digitalRead(pin);
  print(status);

Now we can use the status variable wherever we want. This works in embedded C, but can we have such syntax in Python? If we can, please suggest how. And thank you, sir, for your helpful comment.

Nice! Thank you!

Thanks for reading!

Thanks for reading!

I've been searching for documentation on the Raspberry Pi's GPIOs for so long, and finally somebody makes a well-structured and documented guide. Thanks man :) love it.

You're welcome! Don't forget to check my blog too.
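To the question above: in Python you can indeed store GPIO.input(pin) in a variable and reuse it, and the sketch's toggle expression `1 ^ GPIO.input(pin)` builds on exactly that. A plain-Python sketch of the logic (a made-up `pins` dict stands in for RPi.GPIO here, so it runs without a Pi):

```python
# A fake pin table stands in for RPi.GPIO so the logic runs anywhere.
pins = {3: 0, 5: 0, 7: 0}

def gpio_input(pin):
    return pins[pin]          # like GPIO.input(pin) in the sketch

def gpio_output(pin, value):
    pins[pin] = value         # like GPIO.output(pin, value)

status = gpio_input(3)        # store the pin state in a variable, as asked above
gpio_output(3, 1 ^ status)    # 1 ^ x turns 0 into 1 and 1 into 0: a toggle
gpio_output(3, 1 ^ gpio_input(3))
print(pins[3])                # toggled twice -> back to 0
```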
https://www.instructables.com/id/Raspberry-Pi-GPIOs-graphical-interface-pyhton-math/
Edit: This texture sampling bug only seems to occur on Linux. On both Windows 10 and Android, I can set point sampling with ((PGraphicsOpenGL)g).textureSampling(2); and get clean images, but on Linux, that line has no effect. Is this a bug in JOGL?

Edit 2: Apparently this is a known bug from two years ago. We need to get textureSampling added to the Processing API so it can be officially supported (and fixed).

Edit 3: I hackily solved this problem by using NVIDIA's X Server Settings application to "Override Application Setting" for "Anisotropic Filtering" under the "Antialiasing Settings". Along with textureSampling(2), my mis-colored tile edges go away and the images now look the same on Linux as they do on Windows and Android.

Original post:

I want to be able to pass an array of data to a shader and sample it random-access to render tiles. Ultimately, the program will run on Android, so I'm stuck using OpenGL ES shader version 1 and can only pass the data as texture images. Passing and reading the data seems to work fine, except along the edges of the tiles, where I'm getting weird sampling artifacts that I can't understand.

The code below draws a tiled surface in which each tile's color is sampled pseudo-randomly from a 1-D n x 1 texture image. Sampling the texture data array looks okay in the interiors, but gives artifacts along the edges of the tiles, as shown in the top half of this picture. On the left half of the picture, tile colors are sampled from the "data" texture discontiguously both horizontally and vertically. If I sample the data texture more contiguously, as is done on the right half of the image, the artifacts between the now horizontally adjacent texels go away (or at least aren't noticeable) except where the last (brown) and first (black) tiles abut. On the bottom half of the picture, the tiles are colored using computed values from within the shader and so show no artifacts at all, discounting the jaggies that I'll deal with separately.
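To make the filtering difference concrete, here is a toy Python model (not Processing code; a short list stands in for the 1-D data texture, with texel centers at (i + 0.5) / n). Linear filtering blends the two texels whose centers straddle the lookup position, which is exactly the mis-colored-edge symptom; point (nearest) sampling always returns one texel:

```python
import math

def sample_nearest(texels, u):
    """Point sampling: always returns exactly one texel's value."""
    n = len(texels)
    return float(texels[min(max(int(u * n), 0), n - 1)])

def sample_linear(texels, u):
    """Linear filtering: blends the two texels whose centers straddle u."""
    n = len(texels)
    x = u * n - 0.5                      # texel i's center sits at x == i
    i0 = min(max(math.floor(x), 0), n - 1)
    i1 = min(i0 + 1, n - 1)
    t = min(max(x - i0, 0.0), 1.0)
    return (1 - t) * texels[i0] + t * texels[i1]

data = [0, 255]                      # a black texel next to a white one
u_edge = 0.5                         # lookup exactly on the boundary between them
print(sample_nearest(data, u_edge))  # one texel, no bleeding
print(sample_linear(data, u_edge))   # a blended gray "artifact" color
```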
Where are these artifacts coming from? I've tried adding hints to disable mipmapping and to set the textureSampling to point sampling, but neither one made any difference. On the other hand, I do NOT get these artifacts on Android where, for now, I'm guessing textures are always point-sampled. How can I ensure my data texture is always point-sampled on desktop Processing? Is that even what is causing the artifacts?

Here's the code. Mouse drag on the left edge to zoom or elsewhere to pan the image.

enum UIState { NEUTRAL, PANNING, SCALING; }
UIState uiState;
float viewX=0.0, viewY=0.0, viewScale;

void mousePressed() {
  if( mouseX < 0.1 * width ) uiState = UIState.SCALING;
  else uiState = UIState.PANNING;
}

void mouseDragged() {
  if( uiState == UIState.PANNING ) {
    viewX += (pmouseX - mouseX) / viewScale;
    viewY += (pmouseY - mouseY) / viewScale;
  } else if( uiState == UIState.SCALING )
    viewScale *= 1.0 + float(pmouseY - mouseY) / height * 2.0;
}

void mouseReleased() { uiState = UIState.NEUTRAL; }

PShader fragshader;

void setup() {
  //fullScreen(P2D);
  size(640, 480, P2D);
  noSmooth();
  hint(DISABLE_DEPTH_MASK);
  hint(DISABLE_TEXTURE_MIPMAPS);
  ((PGraphicsOpenGL)g).textureSampling(2);
  fragshader = loadShader( "texture.frag" );
  fragshader.set( "screenRes", (float)width, (float)height );
  int nCells = 16;
  fragshader.set( "nCells", (float)nCells );
  PImage data = createImage( nCells, 1, ARGB );
  data.loadPixels();
  data.pixels[0] = color( 0 );
  for( int ic=1; ic<nCells; ic++ ) {
    data.pixels[ic] = color( random(256), random(256), random(256) );
  }
  data.pixels[6] = color( 255 );
  data.pixels[9] = color( 255 );
  data.updatePixels();
  fragshader.set( "data", data );
  viewScale = min( width, height ) / 8.0;
}

void draw() {
  background( 0 );
  noStroke();
  fragshader.set( "viewPos", viewX, viewY );
  fragshader.set( "viewScale", viewScale );
  filter( fragshader );
  // shader( fragshader );
  // rectMode( CORNER );
  // rect( 0, 0, width, height );
}

and the shader (save as "texture.frag" in the data/
directory):

#ifdef GL_ES
precision highp float;
precision mediump int;
#endif

uniform vec2 screenRes;
uniform float nCells;
uniform sampler2D data;
uniform vec2 viewPos;
uniform float viewScale;

vec4 localColor( float idx ) {
  return vec4( fract( idx*0.618 ), fract( idx*0.781 ), fract( idx*0.933 ), 1. );
}

void main() {
  vec2 p = gl_FragCoord.xy;
  p.y = screenRes.y - p.y - 1.;
  p -= screenRes/2.;
  vec2 q = p;
  /**/
  p /= screenRes.y;
  float r = length(p);
  float a = atan( p.y, p.x );
  r = tan(r);
  p = vec2( cos(a), sin(a) ) * r;
  p *= screenRes.y;
  /**/
  p /= viewScale;
  p += viewPos+0.5;
  vec2 ij = floor( p );
  float idx = mod( mod( 5.*ij.y+ij.x, nCells )*7., nCells );
  if( q.x > 0. ) idx = mod( 5.*ij.y+ij.x, nCells );
  vec4 tileColor = texture2D( data, vec2( (idx+0.5)/nCells, 0.5 ) );
  if( q.y > 0. ) tileColor = localColor( idx );
  gl_FragColor = vec4( tileColor );
}
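The mis-colored edges only show where the index arithmetic lands adjacent tiles on distant texels. The shader's expression mod(mod(5.*ij.y+ij.x, nCells)*7., nCells) can be checked in a few lines of Python (plain integers stand in for the GLSL floats):

```python
n_cells = 16  # matches the sketch's nCells

def tile_index(i, j, contiguous=False):
    """Python version of the shader's tile-index arithmetic."""
    idx = (5 * j + i) % n_cells            # mod(5.*ij.y+ij.x, nCells)
    return idx if contiguous else (idx * 7) % n_cells

# One row of tiles: the scrambled indices jump by 7 (mod 16) per tile,
# so horizontally adjacent tiles sample distant texels.
print([tile_index(i, 0) for i in range(8)])        # -> [0, 7, 14, 5, 12, 3, 10, 1]
print([tile_index(i, 0, True) for i in range(8)])  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```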
https://discourse.processing.org/t/solved-edge-artifacts-when-random-access-sampling-a-texture-in-a-shader-on-linux/8382
Created on 2011-12-18 13:06 by naif, last changed 2011-12-22 09:05 by pitrou. This issue is now closed.

Python SSL doesn't support DH ciphers in all versions tested. This is a serious security issue, because it's not possible to use, as a server or client, the Perfect Forward Secrecy [1] provided by DHE and ECDH ciphers.

In order to enable DH ciphers, the SSL implementation in the file Modules/_ssl.c must issue a DH_generate_parameters() call if a cipher is DH.

For example, for PHP's handling of DH ciphers, look at php-5.3.8/ext/openssl/openssl.c:

#if !defined(NO_DH)
case OPENSSL_KEYTYPE_DH:
    {
        DH *dhpar = DH_generate_parameters(req->priv_key_bits, 2, NULL, NULL);
        int codes = 0;

        if (dhpar) {
            DH_set_method(dhpar, DH_get_default_method());
            if (DH_check(dhpar, &codes) && codes == 0 && DH_generate_key(dhpar)) {
                if (EVP_PKEY_assign_DH(req->priv_key, dhpar)) {
                    return_val = req->priv_key;
                }
            } else {
                DH_free(dhpar);
            }
        }
    }
    break;
#endif
default:

An important security fix, to support and enable DH ciphers by default, has to be done.

[1]

Other example for DH and ECC from:

#ifndef OPENSSL_NO_DH
static int init_dh(SSL_CTX *ctx, const char *cert) {
    DH *dh;
    BIO *bio;

    assert(cert);

    bio = BIO_new_file(cert, "r");
    if (!bio) {
        ERR_print_errors_fp(stderr);
        return -1;
    }

    dh = PEM_read_bio_DHparams(bio, NULL, NULL, NULL);
    BIO_free(bio);
    if (!dh) {
        ERR("{core} Note: no DH parameters found in %s\n", cert);
        return -1;
    }

    LOG("{core} Using DH parameters from %s\n", cert);
    SSL_CTX_set_tmp_dh(ctx, dh);
    LOG("{core} DH initialized with %d bit key\n", 8*DH_size(dh));
    DH_free(dh);

    return 0;
}
#endif /* OPENSSL_NO_DH */

The ssl module doesn't directly handle keys, it just gives a PEM file to OpenSSL's ssl functions. So I don't understand what should be done precisely here, or even if something has to be done at all.
Please look at how PHP implements the feature. It doesn't use any PEM or key file, but just initializes the DH parameters. Stud, instead, asks the user to generate the DH parameters "offline" and save them into the PEM file.

I think that the PHP approach is better than the stud one: it does not require any file or key to generate DH parameters.

This is the way to get support for ciphers such as DHE-RSA-AES256-SHA ( ) that now cannot be used, because the Python SSL binding doesn't initialize the DH parameters.

Well, the OpenSSL docs say "DH_generate_parameters() may run for several hours before finding a suitable prime", which sounds like a good reason not to do it every time your program is run. Anyway, SSL_CTX_set_tmp_dh() should allow us to set DH parameters on an SSL context, PEM_read_DHparams() to read them from a PEM file, and OpenSSL's source tree has a couple of PEM files with "strong" DH parameters for various key sizes.

Wow, I saw your patch for ECC SSL ciphers on . Do you think we can use the same method/concept as ssl.OP_SINGLE_ECDH_USE, but ssl.OP_SINGLE_DH_USE for DH?

> Wow, i saw your patch for ECC SSL ciphers on .
>
> Do you think we can use the same method/concept as
> ssl.OP_SINGLE_ECDH_USE but ssl.OP_SINGLE_DH_USE for DH?

Of course.

In the meantime I added two other tickets on security and performance improvements of Python SSL support, to make it really complete and comparable to Apache/Dovecot/PHP in terms of configuration and capability:

Python SSL stack doesn't support ordering of Ciphers
Python SSL stack doesn't support Compression configuration

Here is a patch adding the load_dh_params method on SSL contexts, and the OP_SINGLE_DH_USE option flag.

Per the Red Hat problems in issue13627, I just tried this patch on Fedora 16. Everything built just fine. However, the patch doesn't apply cleanly to tip any longer:

[meadori@motherbrain cpython]$ patch -p1 < ../patches/dh.patch
patching file Doc/library/ssl.rst
Hunk #2 succeeded at 715 (offset 27 lines).
patching file Lib/ssl.py
Hunk #1 succeeded at 68 with fuzz 2.
patching file Lib/test/dh512.pem
patching file Lib/test/ssl_servers.py
Hunk #1 succeeded at 180 (offset 1 line).
Hunk #2 succeeded at 194 (offset 1 line).
patching file Lib/test/test_ssl.py
Hunk #2 succeeded at 101 with fuzz 2.
Hunk #3 succeeded at 541 (offset 3 lines).
Hunk #4 FAILED at 1200.
Hunk #5 succeeded at 1858 with fuzz 2 (offset 29 lines).
1 out of 5 hunks FAILED -- saving rejects to file Lib/test/test_ssl.py.rej
patching file Modules/_ssl.c
Hunk #1 succeeded at 1922 (offset 20 lines).
Hunk #2 succeeded at 2082 (offset 22 lines).
Hunk #3 succeeded at 2539 with fuzz 2 (offset 24 lines).

After fixing the unit test hunk, everything builds and the SSL unit tests pass.

New changeset 33dea851f918 by Antoine Pitrou in branch 'default':
Issue #13626: Add support for SSL Diffie-Hellman key exchange, through the

Thank you, Meador. I've committed an updated patch.
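For reference, the committed API can be exercised as follows. load_dh_params() and OP_SINGLE_DH_USE are the names the patch added; "server.pem" and "dh2048.pem" are hypothetical files (generate the DH parameters offline, e.g. with `openssl dhparam -out dh2048.pem 2048`, precisely because DH_generate_parameters() is far too slow to run at startup):

```python
import ssl

# Server-side context (PROTOCOL_TLS_SERVER is the modern spelling).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Hypothetical files -- a real server would load its certificate/key pair
# and DH parameters generated offline with `openssl dhparam`:
# ctx.load_cert_chain("server.pem")
# ctx.load_dh_params("dh2048.pem")

# Request a fresh ephemeral DH key per handshake (forward secrecy);
# on modern OpenSSL this behavior is always on and the flag is a no-op.
ctx.options |= ssl.OP_SINGLE_DH_USE

print(hasattr(ctx, "load_dh_params"))   # the method added by the patch
print(hasattr(ssl, "OP_SINGLE_DH_USE")) # the flag added by the patch
```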
http://bugs.python.org/issue13626
2013-03-12 Meeting Notes

Attendees: John Neumann, Norbert Lindenberg, Allen Wirfs-Brock, Rick Waldron, Waldemar Horwat, Eric Ferraiuolo, Erik Arvidsson, Luke Hoban, Matt Sweeney, Doug Crockford, Yehuda Katz, Brendan Eich, Sam Tobin-Hochstadt, Alex Russell, Dave Herman, Adam Klein, Edward Yang, Dan Stefan, Bernd Mathiske, John Pampuch, Avik Chaudhuri, Edward O'Connor, Rick Hudson, Andreas Rossberg, Rafael Weinstein, Mark Miller

Opening
Introduction
Logistics

Adoption of Agenda
Mixed discussion regarding scheduling over the course of the 3 days.
Approved.

Approval of January 2013 Meeting Notes
Approved.

Adobe
John Pampuch: Here to help accelerate the ES6 spec, positive motivation. Excited about Modules, concurrency, debugging and profiling specifications.
Bernd Mathiske: Background as trained language designers and implementors, and here to help.
John Pampuch: Also excited about asm.js
Bernd Mathiske: Not sure about the spec status/prospects of asm.js.

Edit (2013-03-22): blogs.adobe.com/standards/2013/03/21/adobes-ecma-tc-39-involvement

4.9 JSON, IETF changes (Presented by Doug Crockford)

Currently, JSON is an RFC, informational; the IETF version will be an internet standard, and there is a minor correction that affects ECMAScript: the use of "should" in 15.12.2.

Alex Russell: What is the motivation of the change?
Doug Crockford: The change involves the mistake of using "should" w/r to the multiple same-named keys error. Multiple same-named keys are invalid and must throw an error (vs. "should" throw an error)
Luke Hoban: This is a breaking change
Dave Herman: The worst being the use case of multiple, same-named keys as comments
Doug Crockford: That's stupid
Yehuda Katz: That's based on your recommendation to use a keyed entry as a comment, so people naturally used the same key, knowing they'd be ignored.
Doug Crockford: I would certainly never recommend that practice
Yehuda Katz: It was a side-effect
Alex Russell: Which key is used now?
Allen Wirfs-Brock: The last one wins.
Alex Russell: Is that the root of the security vector?
Doug Crockford: Not in ES, but in other encodings
Alex Russell: Order matters, unescaped content that follows...
Doug Crockford: The current spec says "[they should not]", but will say "[they must now]"
Yehuda Katz: Let's define an ordering and make it cryptographically secure.
Doug Crockford: (recapping to Mark Miller, who just arrived)
Mark Miller: You can't do that. (laughs)
Mark Miller: You can't change "should" to "must"
Yehuda Katz: Agreed, you cannot change JSON, there are too many JSON documents in existence.
Mark Miller: Agreed.
Alex Russell: It's possible to ignore this change?
Doug Crockford: Yes
Dave Herman: Then why are we creating a dead letter?
Mark Miller: ES has a grammatical specification for validating and parsing JSON. Anything that is not conformant JSON would not parse. This change loses that property.
Doug Crockford: Or we don't change the spec
Mark Miller: The way that you properly reject our favorite fixes, I think you should apply to your favorite fixes
Doug Crockford: I'll consider that
Alex Russell: There is considerable opposition to this change
Doug Crockford: Two choices...
- Make it an error
- Continue to take the last one
Doug Crockford: Decoders have license to do what they want with non-conformant material. Encoders must be conformant to new changes.
Mark Miller: Our current encoder conforms...
Allen Wirfs-Brock: I don't think it does... reviver/replacer
Mark Miller: No, can only apply objects instead of the original objects.
Alex Russell: Did not realize the production/consumption distinction of this change.
Waldemar Horwat: Supports this change. ECMAScript is already conformant because it never generates duplicate keys.
Mark Miller: Doug Crockford has made a final decision.
- FTR Majority opposition, no consensus.
4.12 StopIteration/Generator (Presented by Dave Herman) dherman/iteration-protocols Dave Herman: ...Confirms that there is lack of understanding for Generator "instances" Mark Miller: Clarify terminology Dave Herman:: { next() -> { done: false , value: any } | { done: true[, value: any] } } b/c generators can return an argument, if you're using a return value Mark Miller: Requires allocation for every iteration? Dave Herman: Yes, will still need the object allocation, but Waldemar Horwat: Does next return a fresh object? or can reuse the same? Dave Herman: Can reuse Allen Wirfs-Brock: For every iterator in the spec, we need to specify a fresh or reused object? Dave Herman: Yes. Yehuda Katz: The current API, able to do yield ...returns a promise... Dave Herman: Can still do that, this change is internal and w/r to performance, this should be highly optimizable. Allen Wirfs-Brock: Anything that uses a method based implementation, will be more optimizable through calls vs exception. Dave Herman: I've never seen an iterator API that didn't have some performance issues Allen Wirfs-Brock: (refutes) Any method based approach can be better optimized over exception based approaches. Dave Herman: I don't have a solid performance story, but the feedback I'm getting is that there is negative concern about the StopIteration approach, whereas this approach mitigates these concerns. Issues arise when dealing with multiple iterators Waldemar Horwat: If you try throwing StopIteration across iterators, it will be caught Allen Wirfs-Brock: Or it won't Erik Arvidsson: Surprising: If any function throws a StopIteration, it will jump out of the for-of. Allen Wirfs-Brock: I noticed this in the examples shown in the github repo Waldemar Horwat:uke Hoban:? Alex Russell: No implementation wants to ship something that will potentially be slow Luke Hoban: Of course, but StopIteration has to go. 
Mark Miller: One allocation per loop Waldemar Horwat: So is this Mark Miller: Only if you reuse the record Luke Hoban/Waldemar Horwat: Of course and that's what you want Mark Miller: Then, as Allen said we need to specify this Dave Herman: My inclination would be to use a fresh object each time Allen Wirfs-Brock: ...you know the first time, because it's the first time that next is called, Mark Miller: My proposal is that you provide stop token as a parameter of next(stop), every time. next(stop) would return either the next value or the stop token. Dave Herman: (clarifying) "iteration" is one time around the loop. "loop" is the entire the operation. Waldemar Horwat:.] Waldemar Horwat: This would create funky failures if, for example, you had an iterator that did a deep traversal of an object tree and said tree happened to include the iterator instance. Mark Miller: In order to not allocate on every iteration, you have specify (???) Mark Miller: A new stop token would be generated per loop. Waldemar Horwat: What's a loop? This gets confusing when you have iteration adaptors. Allen Wirfs-Brock: If the client passes in the token on next(), then it's the client's burden Mark Miller: Anything that's unforgable, unique, or itself side affectable. Dave Herman: Is there support for Mark's API? Rick Hudson: If you use Mark's API, overtime... Mark Miller: My API reuses the object for the iterations of the loop, by passing it back in as an argument to next() Rick Hudson: To avoid the cost of allocation? Mark Miller: Yes, but only as a token Erik Arvidsson: You can have a return value in a generator so the object passed in needs to be mutated to include the return value in case of end of iteration. Mark Miller: That is a downside of this proposal, where StopIteration would carry the value. 
Dave Herman: (examples of the two proposals)

Dave's:
{ next() -> { done: false, value: any } | { done: true[, value: any] } }

Mark's:
{ next(X) -> any | X }

Allen Wirfs-Brock: (suggests an alternative: pass an object to next, on which next sets the result)
Sam Tobin-Hochstadt: ...is hostile to implementors and user code.
Andreas Rossberg: That's the C-style of doing it.
Waldemar Horwat: Suppose the iterator takes an object and returns all the properties, but calls on itself?
Dave Herman: Mark's proposal is broken, because it doesn't work with return values of generators.
Mark Miller: Agreed.
Dave Herman: Don't think that we're approaching consensus, but don't let your idea of perfect cloud your judgement. I'm asking engine implementors if this is appealing. The concern over StopIteration is real.
Allen Wirfs-Brock: This is certainly better than the current plan of record
Alex Russell: Agree.
Bernd Mathiske/John Pampuch/Avik Chaudhuri: Agree
Bernd Mathiske: This is also future proof and works well with concurrency and the semantics are sound. It's also easy to implement and optimize.
Allen Wirfs-Brock: All spec iterators/generators must specify a reused iterator or fresh
Mark Miller: (further support for Allen Wirfs-Brock's claim)
Dave Herman: Not sure if we're trading short term wins for long term losses. Are there long terms
Andreas Rossberg: There is another secondary effect that it encourages better GC
Allen Wirfs-Brock: This shouldn't be a problem for a good GC
Mark Miller: I'm really only concerned about the loop aspect
Alex Russell: We have the tools to work with hot loops
Waldemar Horwat: Alex's point about the escape valve is key
Dave Herman: Not discounting the needs of the developers/user code. The method API is appealing, vs. StopIteration
Rick Waldron: Agree.
Dave Herman: (Shows example of C#)
Allen Wirfs-Brock: The third use case that Luke gave, using an iterator, and the fourth use case, creating an iterator.
...This API is more complex for user code
...More ways for client code to go wrong
Bernd Mathiske: Disagree, this is safer.
Dave Herman: Don't get out of sync. Argue that the Java and C# API are too error prone.
Bernd Mathiske: Agree, this is actually superior to the Java and C# APIs
Andreas Rossberg: This is actually the path you'd want in typed languages, minimizes state space
Dave Herman: I want to see a better overall interface for iterators and generators, without jeopardizing the acceptance.
Mark Miller: In favor of this API, if the implementors are not objecting. Although I don't like the API itself.
Dave Herman: Agree, I prefer the pure, stateless proposal
Allen Wirfs-Brock: If an object is passed as an argument, the object is used as the value bucket.
Dave Herman: Still mutates the object
Allen Wirfs-Brock: But it mutates an object that you've explicitly provided
Bernd Mathiske: The issue is not the allocation, but that you have to go to heap at all.
Andreas Rossberg: If you do this pre-allocation thing, it might be observable
Bernd Mathiske: But that's the case either way
Dave Herman: Is the mutable version going to harm optimization?
Andreas Rossberg: Yes: the object may be shared, in which case mutation may become observable from a distance, and cannot be optimized away
Rick Hudson: If the object being mutated escapes to an old generation, this kills potential optimizations.
Dave Herman: Seems like consensus on the pure, stateless version of this:
{ next() -> { done: false, value: any } | { done: true[, value: any] } }
John Pampuch: "more" vs. "done"?
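The record-based shape being converged on is easy to model outside JS; a Python sketch (`Counter` is a made-up example iterator) of an object whose next() returns { done, value } records instead of throwing:

```python
class Counter:
    """Toy iterator following the proposed { done, value } record protocol."""
    def __init__(self, limit):
        self.i = 0
        self.limit = limit

    def next(self):
        if self.i < self.limit:
            self.i += 1
            # A fresh record per call, as proposed for built-in iterators.
            return {"done": False, "value": self.i}
        # "value" is always an own property, even when done.
        return {"done": True, "value": None}

it = Counter(3)
values = []
while True:
    record = it.next()      # no exception marks the end of iteration
    if record["done"]:
        break
    values.append(record["value"])
print(values)  # -> [1, 2, 3]
```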
(can discuss further)

Conclusion/Resolution
- Rough agreement (between those present) on the pure, stateless version of:
  { next() -> { done: false, value: any } | { done: true[, value: any] } }
  ...to replace StopIteration
- Always has an own property called "value":
  var i1 = (function *f() { return; })();
  "value" in i1.next();
  var i2 = (function *g() { })();
  "value" in i2.next();
  var i3 = (function* h() { return (void 0); })();
  "value" in i3.next();
- Built-in iterators should be specified as returning a fresh value.
- See gist.github.com/dherman/5145925
- Without Brendan, a champion of iterators and generators, don't have full consensus

4.2 Modules (Presented by Dave Herman, Sam Tobin-Hochstadt, Yehuda Katz)

See: gist.github.com/wycats/51c96e3adcdb3a68cbc3
Slides (PDF, will prompt download): meetings:modules-march-2013.pdf

Dave Herman: We're committed to making this happen for ES6, it's too important to miss and I'm going to do whatever it takes. Please remember that you can always bring specific issues directly to us (Dave Herman, Sam Tobin-Hochstadt, Yehuda Katz).

module "libs/string" {
  export function capitalize(str) {
    return (make the string capitalized)
  };
}

module "app" {
  import { capitalize } from "libs/string";
}

The registry corresponds to a Loader which creates a Realm

Mark Miller: But you can have more than one Realm
Dave Herman: Think of Loader as a virtual "iframe", as a concrete way of describing it. When you create an "iframe", you get a whole new global object with its own DOM. A Loader creates a "sandbox" global that can share system intrinsics.

The System loader:

var capitalize = System.get('libs/string').capitalize;
var app = System.get('app').app;

Custom loader:

System.ondemand({
  "": "jquery",
  "backbone.js": [ "backbone/events", "backbone/model" ]
});

...is sugar for...
System.resolve = function(path) {
  switch (path) {
    case "jquery":
      return "";
    case "backbone/events":
    case "backbone/model":
      return { name: "backbone.js", type: "script" };
  }
};

Mark Miller: This is changing the behavior only at the system Loader?
Dave Herman: Yes.
Sam Tobin-Hochstadt: This is the API that several people indicated was important during the last meeting. Use Case: ...?

Use Case: Compile To JS

System.translate = function(src, options) {
  if (!options.path.match(/\.coffee$/)) {
    return;
  }
  return CoffeeScript.translate(source);
}

Luke Hoban: Is this updated on the Wiki?
Dave Herman: Will update. Much of the changes are about refining API and making common things easy and most things possible

Use Case: Custom AMD

Creating custom translations for extensions... data = "${escaped}"`; } // fall-through for default behavior }

Waldemar Horwat: Why would you want to do it this strange way (escape text only to then eval it) instead of just letting the text be? [It feels kind of like the folks doing eval("p." + field) instead of p[field]].
Dave Herman: (explains James Burke's summary of static asset loading)

Use Case: Importing Legacy Libraries

(Specifically, not libraries that use CommonJS or AMD, but libraries that mutate the global object)

var legacy = [ "jquery", "backbone", "underscore" ];

System.resolve = function(path, options) {
  if (legacy.indexOf(path) >= -1) {
    return { name: path, metadata: { type: "legacy" } };
  } else {
    return { name: path, metadata: { type: "es6" } };
  }
};

function extractExports(loader, original) {
  var original = `var exports = {};
    (function(window) { ${original}; })(exports);
    exports;`
  return loader.eval(original);
}

System.link = function(source, options) {
  if (options.metadata.type === 'legacy') {
    return new Module(extractExports(this, source));
  }
  // fall-through for default
}

Luke Hoban: Once we ship this, we want people to start using modules as soon as possible. How?
Yehuda Katz: Realistically, a "plugin" for something like require.js will have to provide an ES6 "shimming" mechanism.
Luke Hoban: To paraphrase, we're providing the primitives that make the common cases easy to overcome. What about the legacy libraries that won't be brought up to date? Can we provide a simple mechanism?
Dave Herman: No, legacy libs that just expose themselves to the global object, without any sort of shimming mechanism, are out of reach
Luke Hoban: Thank you, that's a sufficient answer

Use Case: Import AMD style modules and Node style modules.

Effectively, ES6 module importing from non-ES6 module. There is no way to tell

System.link = function(source, options) {
  if (options.metadata.type !== "amd") { return; }

  let loader = new Loader();
  let [ imports, factory ] = loader.eval(`
    let dependencies, factory;
    function define(dependencies, factory) {
      imports = dependencies;
      factory = factory;
    }
    ${source};
    [ imports, factory ];
  `);

  var exportsPosition = imports.indexOf("exports");
  imports.splice(exportsPosition, 1);

  function execute(...args) {
    let exports = {};
    args.splice(exportsPosition, 0, [exports]);
    factory(...args);
    return new Module(exports);
  }

  return { imports: imports, execute: execute };
};
The link hook can return - undefined for the default behavior - A Module instance, where everything is done and complete - An object, with list of deps and factory function to execute at some later time (eg. when all deps are downloaded and ready) Yehuda Katz: Explains that a two phase system is required whether you're using node, AMD or anything. Now you can use ES6 provided hook. Bernd Mathiske: Optionally specify the list of exports? Dave Herman: Yes. Conversation about specific example. Mark Miller: Clarify... noting that the positional args is similar to AMD positional args Dave Herman: Yes. Andreas Rossberg: No static checking for non-ES6 modules? Dave Herman: Yes, it's a hole that can't be filled if we want interop from AMD->ES6 and ES6->AMD (or node) Andreas Rossberg: Concern about having two imports, checked and unchecked. (implementation complexity concern) Bernd Mathiske: The alternative is to not support AMD and provide only one imports Sam Tobin-Hochstadt/Rick Waldron: This is an option, but a worse option. ...Discussion re: static checking for non-ES6 modules Andreas Rossberg: Every single construct, import, loading etc now has two different semantics to support. Bernd Mathiske: Forces users into thinking about which they need... optimizing for module authors, not module users. The wrong case... otherwise enforce static checking for all module code Alex Russell/Sam Tobin-Hochstadt: Not possible for all existing code Sam Tobin-Hochstadt: (whiteboard) Indirection via dynamic object makes static checking impossible. For example, if you write the code: import { a } from "some.js" ... a ... where "some.js" is an AMD library, then there's no static checking, but if you refactor "some.js" to be an ES6 module, you automatically get static checking. But if you don't support this use case, then there's indirection: import { exports } from "some.js" ... exports.a ... And changing "some.js" to use ES6 never results in new static semantics. 
...Mixed discussion re: dynamic checks vs static checks. Bernd Mathiske: Was under the impression that the dynamic checks might be too late, but it has now become clear that they happen early enough Sam Tobin-Hochstadt: Cannot create references across Module instances to dynamic module code. Mark Miller: the world of JS uses feature detection, on the AMD side... can AMD code feature test? Sam Tobin-Hochstadt: (refers to an early slide, which shows example of importing module as single object, properties can then be tested for) Mark Miller: (confirms that Sam Tobin-Hochstadt answers Q) Dave Herman: Pause on the Question of dynamic import/export (Returns to the pipeline diagram) ...The "fetch" hook is the part where you go get the bits Dave Herman: (Slide: 1. Load and Process Dependencies Diagram) (Slide: 2. Link) Alex Russell/Dave Herman: Note that browsers can provide their own plugin points for the Fetch step Mark Miller: All of the hooks have been executed and there is no user code? If this fails, there are no side effects? Dave Herman: Correct Andreas Rossberg/Sam Tobin-Hochstadt: During Dave Herman: Modified the registry, but there is an inflight loading happening, when the inflight is finished, it will pave the changes to registry. (last op wins) Andreas Rossberg: When you evaluate a script that imports module Foo, which runs hooks that in turn import Foo into the module registry, what happens? Allen Wirfs-Brock: Why are they operating in the same Realm? Dave Herman: It sounds like an objection to modifying the list of modules in the registry by downloading code that modifies the list of modules in the registry... Sam Tobin-Hochstadt: Imagine we didn't have loader hooks, all you could do was eval and had two XHRs that fetched and eval'ed. We'd still have the same issues that we'd have with loader hooks, it's a problem with mutation and concurrency. 
Andreas Rossberg: Agree that the fundamental problem will always be there, but have a problem with shared global object for all modules.
Dave Herman: If the same module is attempted to be defined in two places, that's an error and is a bug.
Andreas Rossberg: Only when within the same compilation stage, silent overwriting otherwise.
Waldemar Horwat: What if module A depends on both B and C and the initialization of B fails?
Dave Herman: C remains uninitialized but present in the registry
Waldemar Horwat: This breaks the model. It's not C's fault that its initializer didn't run.
Allen Wirfs-Brock: Mark C as never having its initializer attempt to run and run it the next time it's imported.

Dave Herman: Moving to next slide (Slide: 3. Execute)
Produces "Result"

Note that each step in the 3 parts has an Error path:
- load/syntax error
- link error
- runtime exception

...Mixed discussion re: execution despite exceptions
...Mixed discussion clarifying fetch semantics (1. Load and Process)
re: dynamically building URLs to batch load?
re: browser knowledge of sources?

Luke Hoban: What does the synchronous timeline of Slide 1 look like?
Dave Herman: All normalize hooks first (need names and locations), then all resolve hooks

Use Case: Importing into Node

System.resolve = function(path, options) {
  if (node.indexOf(path) > -1) {
    return { name: path, metadata: { type: 'node' } };
  } else {
    return { name: path, metadata: { type: 'es6' } };
  }
};

function extractNodeExports(loader, source) {
  var loader = new Loader();
  return loader.eval(`
    var exports = {};
    ${source};
    exports;
  `);
}

System.link = function(source, options) {
  if (options.metadata.type === 'node') {
    return new Module(extractNodeExports(this, source));
  }
}

Use Case: Single Export Modules

Dave Herman: Brought this up 2 meetings ago, had a proposal that wasn't ready, it was shot down. This is something that I'm being told is very important and I agree with them.
We can accommodate single exports via design protocols, but the developer community may not like it.
Dave Herman/Yehuda Katz: (walk through the System.link implementation)
Dave Herman: Can, should do better. Goal: Simple syntactic sugar. It's important, we will address it and we will do so with syntactic sugar. We will create a means by providing an "anonymous export". We will review the "sugar" at the next face-to-face meeting.
...Recognizes the community frustration regarding lack of single/anonymous exports.
...No dissent.
Luke Hoban: (Questions about how this works with the previously shown use cases)
...
Yehuda Katz: (Shares anecdotal experience from developing the ES6 transpiler that was adopted by Square. Positive experience.)
Sam Tobin-Hochstadt: Updated/removed junk from wiki
Luke Hoban: Can imports be in scripts?
Sam Tobin-Hochstadt: Yes
Dave Herman: There was originally a use case that involved jQuery, we can't satisfy this without breaking everything (there is no way to be ES 3, 5, 6 at the same time) But...

if (...some detection...) {
  System.set("jquery", ...);
}
<!-- once this is loaded, the jQuery module is registered and available for all scripts -->
<script src="jquery.js"></script>
<!-- which means all future scripts may have this: -->
<script>
import { $ } from "jquery";
</script>

Luke Hoban: What about concatenation cases?
Dave Herman: (whiteboards example of System.ondemand)

System.ondemand({ "all.js": [ "a", "b", "c" ] });

Allen Wirfs-Brock/Sam Tobin-Hochstadt: (whiteboard)

m.js: module "m" { export let a = 1; }
n.js: module "n" { export let b = 2; }

Needs: System.ondemand({ "m.js": "m", "n.js": "n" });

If you concatenate? m.js + n.js = concat.js...

Needs: System.ondemand({ "concat.js": [ "m", "n" ] });

Arrays for files that contain multiple things
...
Andreas Rossberg: We're over-prioritizing for concatenation. The language shouldn't be hostile, but should stop at good enough.
We shouldn't optimize the design of the language around a secondary concept
Allen Wirfs-Brock: modules are a concrete concept in the language, we need to focus on these as a building block
Luke Hoban:
Sam Tobin-Hochstadt: The claim that concatenation is going to become a not-important part of the web is ludicrous
Andreas Rossberg: I think that mid-term concatenation will harm performance
Yehuda Katz: Do you think that concatenation will go away?
Andreas Rossberg: In the long term, it might
Yehuda Katz/Sam Tobin-Hochstadt: This is what is ludicrous
...Mixed discussion re: library vs. language
Allen Wirfs-Brock: There is a standard loader, defined by the language
...From Arv:
Alex Russell: Joining files to optimize download and compilation
Sam Tobin-Hochstadt: YUI optimized for reality and found that concatting is important
Yehuda Katz: Should Ember ship 100 files?
Alex Russell: Any modern library has a lot of files. Apps/libraries are making trade-offs to get good performance.
Doug Crockford: Caching is not working. Browser will get better.
Alex Russell: SPDY will make things better
Yehuda Katz: Even with SPDY, there is a lot of IO
Andreas Rossberg: It is perfectly fine to depend on a tool for concat
Erik Arvidsson: We are designing based on concatenation. We should take that out of the picture. We can always write compilers that do the linking.
Andreas Rossberg/Luke Hoban: With a compiler you can do linking/bundling and existing and future tools can do this.
Sam Tobin-Hochstadt/Dave Herman: There will be holes in these.
Luke Hoban: module "a" { ... } is leading developers down the wrong path
Sam Tobin-Hochstadt: Recommend doing modules the node style, where each file is a module
Yehuda Katz: AMD developers use a build system that adds the name to the define(). They don't name the modules in their source. The build system names the modules.
Mark Miller: AMD experience speaks in favor of a concatenator.
Sam Tobin-Hochstadt: You will get a compile time error if you import a file that contains a module.
...
Andreas Rossberg: How about adding a way to just register a module as a string containing the source of its body as if it was a file.
Alex Russell: Then you have to allocate the string
...
Allen Wirfs-Brock: Wants to use module "abc" { ... }. It is a good way to structure code. And you don't want to tie this to your configuration management
...
Sam Tobin-Hochstadt: The strength of the system is that it supports both
Andreas Rossberg: The approach Allen wants is not well supported because it lacks lexical scoping
Alex Russell: If we use a string literal we cannot check the code to do prefetching etc
Andreas Rossberg: It is a string so the string only needs to be lexed, then the parsing etc can be parallelized, not so with embedded module declaration
...
Andreas Rossberg: There is no way to not modify the global registry when defining a module.
Dave Herman: The file system (in SML) is also a shared registry. The module registry is no different
Andreas Rossberg: Disagree. There is no way to create a local module here
Sam Tobin-Hochstadt: JS has a lot of ways to structure code: functions, classes etc and modules do not need to fill this role
Andreas Rossberg: More interested in preventing accidents due to name clashes.
...Mixed discussion of module syntax related concerns
Dave Herman: Ability to prevent people from using module syntax?
Mark Miller: Yes
Sam Tobin-Hochstadt: For Andreas' concern, look for the names of module declaration strings, check the registry and if any already exist, error.
...Defining a loader with right hook, prevent the mutation of the registry by anyone that does not have access to the loader
Mark Miller: Satisfied from a security perspective.
Andreas Rossberg: Would prefer for the default behavior to error, need to be explicit if you want module to override in an imperative manner.
Dave Herman: Not opposed to moving towards scoped modules in the future. Just worried about complexities.
Andreas Rossberg: Only concerned about import scope semantics
Sam Tobin-Hochstadt: concern is that polyfills have to use eval and then System.set
Andreas Rossberg: good to make it clear that polyfills are doing something special
Dave Herman: agree with Andreas Rossberg about polyfills
Sam Tobin-Hochstadt: This is something to be deferred without blocking progress, but ok with changing to error to achieve consensus.
Yehuda Katz: agree with Sam Tobin-Hochstadt about consensus, but potentially concerned.

Conclusion/Resolution
- Default declarative form of a module is an error if a module of the same name already exists in the module registry.
- Using System.set to overwrite an existing module is not an error.
- Behavior of errors during module initialization (when some module initializers don't even get started) is still unresolved.
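The resolution above can be illustrated with a toy model (hypothetical Python, not spec text or loader implementation code): a declarative-style declare() errors on a duplicate name, while an imperative, System.set-style overwrite silently wins.

```python
class ModuleRegistry:
    """Toy model of the registry semantics agreed in the notes above."""

    def __init__(self):
        self._modules = {}

    def declare(self, name, module):
        # Declarative form: redefining an existing name is an error.
        if name in self._modules:
            raise KeyError("module %r already exists in the registry" % name)
        self._modules[name] = module

    def set(self, name, module):
        # Imperative System.set-style form: overwriting is not an error.
        self._modules[name] = module

    def get(self, name):
        return self._modules[name]


reg = ModuleRegistry()
reg.declare("jquery", {"$": "v1"})
reg.set("jquery", {"$": "v2"})      # allowed: imperative overwrite
print(reg.get("jquery"))            # {'$': 'v2'}
```

Note this sketch deliberately leaves out everything else (loading, linking, delta semantics); it only captures the declare-vs-set distinction from the resolution.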
https://esdiscuss.org/notes/2013-03-12
The objective of this post is to explain how to detect when a user presses one of the micro:bit buttons and react to those events, using the JavaScript Code Blocks editor.

Introduction

In this simple example, we will toggle some LEDs when the button pressed events are detected.

The code

In order to be able to detect button pressed events, we can use the onButtonPressed function of the input namespace. This function receives as parameters the button and a handling function that is executed when the click event on that button is detected.

In our board we have two buttons that we can use as user inputs, which are defined as button A and button B. Button A is the one on the left, above pin 0. Button B is the one on the right, above the GND pin.

So, for the first argument of the onButtonPressed function, the buttons are defined in the Button enum. For button A we use Button.A and for button B we use Button.B.

In the case of the second argument, we will specify the handling function as a JavaScript arrow function, which leads to a more compact syntax than when defining a named function.

Note that although we also have functions to check the state of each button (pressed or not) at a given time, we would need to use a technique called polling, which consists of periodically checking the state of the button. Although in certain program architectures it is feasible to do it, it is much more efficient to use a callback function (like we are doing) that is triggered when the event occurs, leaving the CPU free to do other tasks when no click event is detected.

Our arrow function will simply toggle a specific LED when it executes. You can check more about LED controlling here. We will toggle the LED at coordinates (0,0) on a click on button A, and the LED at coordinates (4,0) on a click on button B.
You can check below the full source code.

input.onButtonPressed(Button.A, () => {
    led.toggle(0, 0);
});

input.onButtonPressed(Button.B, () => {
    led.toggle(4, 0);
});

Testing the code

To test the code, simply click the download button on the bottom left corner and drag the file to the micro:bit drive that should be available on your computer. After the procedure finishes, the code automatically runs and you can test it by clicking the buttons.

As usual, you can also do the tests in simulation mode. Figure 1 shows both LEDs in the on state in simulation mode, resulting from pressing the buttons.

Figure 1 – LEDs toggled from button clicks, on simulation mode.
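The event-driven pattern the tutorial relies on can be sketched in plain Python (a hypothetical stand-in for the micro:bit runtime, not MakeCode code): handlers are registered once and are only invoked when the runtime fires the event, instead of the program polling the button state in a loop.

```python
class ButtonInput:
    """Toy dispatcher modeling input.onButtonPressed semantics."""

    def __init__(self):
        self._handlers = {}

    def on_button_pressed(self, button, handler):
        # Analogous to input.onButtonPressed(Button.A, ...):
        # register once, the runtime calls us back on the event.
        self._handlers[button] = handler

    def fire(self, button):
        # Stand-in for the hardware event; the real runtime does this.
        handler = self._handlers.get(button)
        if handler:
            handler()


toggled = []
inp = ButtonInput()
inp.on_button_pressed("A", lambda: toggled.append((0, 0)))
inp.on_button_pressed("B", lambda: toggled.append((4, 0)))
inp.fire("A")
inp.fire("B")
print(toggled)  # [(0, 0), (4, 0)]
```

The point of the sketch is the inversion of control: between events nothing runs, which is why the callback approach leaves the CPU free compared to a polling loop.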
https://techtutorialsx.com/2017/10/03/microbit-javascript-blocks-editor-detecting-button-click-events/
Hi, On Wed, Feb 25, 2015 at 12:52 PM, James Almer <jamrial at gmail.com> wrote: > On 25/02/15 2:36 PM, Ronald S. Bultje wrote: > > Hi James, > > > > On Wed, Feb 25, 2015 at 12:25 PM, James Almer <jamrial at gmail.com> wrote: > > > >> On 25/02/15 9:41 AM, Ronald S. Bultje wrote: > >>> Hi, > >>> > >>> On Tue, Feb 24, 2015 at 8:05 PM, James Almer <jamrial at gmail.com> > wrote: > >>>> > >>>> +#if HAVE_FAST_POPCNT > >>>> +#if AV_GCC_VERSION_AT_LEAST(4,5) > >>>> +#ifndef av_popcount > >>>> + #define av_popcount __builtin_popcount > >>>> +#endif /* av_popcount */ > >>>> +#if HAVE_FAST_64BIT > >>>> +#ifndef av_popcount64 > >>>> + #define av_popcount64 __builtin_popcountll > >>>> +#endif /* av_popcount64 */ > >>>> +#endif /* HAVE_FAST_64BIT */ > >>>> +#endif /* AV_GCC_VERSION_AT_LEAST(4,5) */ > >>>> +#endif /* HAVE_FAST_POPCNT */ > >>>> > >>> > >>> Is this just to get the sse4 popcnt instruction if we compile with > >>> -mcpu=sse4? The slightly odd thing is that we're using a built-in, yet > >>> configure still does an arch/cpu check. I'd expect the > built-in/compiler > >> to > >>> do that for us based on -mcpu, and we could always unconditionally use > >> this > >>> (as long as gcc >= 4.5); alternatively, you could use inline asm and > then > >>> have the configure check (HAVE_FAST_POPCNT). But doing both seems a > >> little > >>> odd. I have no objection to it, patch is still fine, just odd. > >>> > >>> Ronald > >> > >> I purposely made the checks for gcc 4.5 and in configure for cpus that > >> support popcnt > >> because otherwise __builtin_popcount (at least gcc's) is slower than our > >> generic > >> av_popcount_c function from lavu/common.h. > >> When the CPU supports popcnt the builtin becomes a single inlined > >> instruction. > >> > >> I tried the __asm__ approach, but the code generated by the builtin > seemed > >> better. > > > > > > That's interesting, can you show the code you tried? 
> > > > Ronald > > If instead of builtins i use > > +#if HAVE_INLINE_ASM > + > +#ifdef __POPCNT__ > + > +#define av_popcount av_popcount_abm > +static av_always_inline av_const int av_popcount_abm(uint32_t a) > +{ > + int x; > + > + __asm__ ("popcnt %1, %0" : "=r" (x) : "rm" (a)); > + return x; > +} > + > +#if ARCH_X86_64 > +#define av_popcount64 av_popcount64_abm > +static av_always_inline av_const int av_popcount64_abm(uint64_t a) > +{ > + uint64_t x; > + > + __asm__ ("popcnt %1, %0" : "=r" (x) : "rm" (a)); > + return x; > +} > +#endif > + > +#endif /* __POPCNT__ */ > + > +#endif /* HAVE_INLINE_ASM */ > > in the x86/ header from the second version i sent, on x86_32 av_cpu_count > from > libavutil/cpu.o on Windows (Will not work with other OSes because of > HAVE_GETPROCESSAFFINITYMASK) generates > > 1af: f3 0f b8 5c 24 18 popcnt ebx,DWORD PTR [esp+0x18] > 1b5: 31 c0 xor eax,eax > 1b7: f3 0f b8 c0 popcnt eax,eax > 1bb: 01 c3 add ebx,eax > > Whereas with the builtins i instead get > > 1af: f3 0f b8 5c 24 18 popcnt ebx,DWORD PTR [esp+0x18] > > This is probably because of av_popcount64_c (Used unconditionally on > x86_32) calling > av_popcount twice, and proc_aff being DWORD_PTR type means it's 4 bytes in > x86_32. > The builtin seems to know proc_aff can't be right shifted 32 bits so it > generates > a single popcnt. It seems to me rather than the second version is defined as "pure" (or whatever the keyword was), i.e. "we don't depend on external state" -> "if you call this with a constant, we can calculate this at compile-time". Ronald
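For reference, the generic fallback being compared against the builtin in this thread (FFmpeg's av_popcount_c style function) is a classic parallel bit count. A Python sketch of that bit-twiddling follows — the masks are the standard SWAR constants, not code copied from any particular FFmpeg version:

```python
def popcount32(x):
    """Count set bits in a 32-bit value without a popcnt instruction."""
    x -= (x >> 1) & 0x55555555                       # pairwise 2-bit sums
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)   # 4-bit sums
    x = (x + (x >> 4)) & 0x0F0F0F0F                  # 8-bit sums
    # Multiply adds the four byte sums into the top byte; mask emulates
    # 32-bit overflow, which Python's big integers would otherwise avoid.
    return ((x * 0x01010101) & 0xFFFFFFFF) >> 24

def popcount64(x):
    # Mirrors the av_popcount64_c approach: two 32-bit halves.
    return popcount32(x & 0xFFFFFFFF) + popcount32(x >> 32)

print(popcount32(0b1011))  # 3
```

The discussion above is about when the compiler can legally collapse all of this into a single popcnt instruction, and why the builtin (being known pure/const) gives the optimizer more room than inline asm.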
http://ffmpeg.org/pipermail/ffmpeg-devel/2015-February/169312.html
I am new to Xmonad (just installed it yesterday), and since I have never used haskell before, I found configuration a little bit confusing. I somehow got xmobar and trayer to work, but I have no idea how I might make the multimedia keys adjust the volume. Can anyone help with that?

Additional question: How do you manage your volume in xmonad? Do you use a tray icon, or other things like that?

Here is my xmonad configuration:

import XMonad
import XMonad.Hooks.DynamicLog
import XMonad.Hooks.ManageDocks
import XMonad.Util.EZConfig(additionalKeys)
import System.IO

main = xmonad =<< statusBar myBar myPP toggleStrutKey myConfig

-- Command to launch the bar
myBar = "xmobar"

-- Custom PP, it determines what is written to the bar
myPP = xmobarPP { ppCurrent = xmobarColor "#429942" "" . wrap "<" ">" }

-- Key bindings to toggle the gap for the bar
toggleStrutKey XConfig {XMonad.modMask = modMask} = (modMask, xK_b)

myConfig = defaultConfig
    { manageHook = manageDocks <+> manageHook defaultConfig,
      layoutHook = avoidStruts $ layoutHook defaultConfig,
      modMask = mod4Mask -- Rebind Mod to windows key
    } `additionalKeys`
    [ ((mod4Mask .|. shiftMask, xK_z), spawn "xscreensaver-command -lock") ]

Use 'xev' and tap the multimedia keys to discover their names. One might be 'XF86XK_AudioMute'. Then look at the contents of '/usr/include/X11/XF86keysym.h' and look for the name. On my system, 'XF86XK_AudioMute' is '0x1008FF12'. Drop that where you would put a key in your config file. It might look like this:

import XMonad
import XMonad.Hooks.DynamicLog
import XMonad.Hooks.ManageDocks
import XMonad.Util.EZConfig(additionalKeys)
import System.IO
-CUT-
    } `additionalKeys`
    [ ((mod4Mask .|. shiftMask, xK_z), spawn "xscreensaver-command -lock"),
      ((0 , 0x1008FF11), spawn "amixer set Master 2-"),
      ((0 , 0x1008FF13), spawn "amixer set Master 2+"),
      ((0 , 0x1008FF12), spawn "amixer set Master toggle")
    ]

'amixer' will set your volume.
The '0' replacing mod4Mask allows you to tap the multimedia key without holding your mod key.

If amixer set Master 2- does not work, try amixer -D pulse set Master 2- instead. Also, 2%- and 2%+ will change the volume by 2 percent, which may be easier to use. You can test these commands in the terminal to adjust them to your liking before you put them in your xmonad config file.

If you're using pulseaudio, pactl should also work:

, ((0 , xF86XK_AudioRaiseVolume), spawn "pactl set-sink-volume 0 +1.5%")
, ((0 , xF86XK_AudioLowerVolume), spawn "pactl set-sink-volume 0 -- -1.5%")
, ((0 , xF86XK_AudioMute), spawn "pactl set-sink-mute 0 toggle")
]

Here 0 is the sink id. pactl list short sinks will show the sink list, and pactl stat|grep 'Default Sink' | cut -f2 -d':' will show the current default sink. You can use the sink name instead of the numeric id. The double dash -- tells pactl 'this is not an option (like -h), just a value'.

See Graphics.X11.ExtraTypes.XF86 for the keys you want and add them to your config file:

import Graphics.X11.ExtraTypes.XF86

myKeys conf@(XConfig {XMonad.modMask = modm}) = M.fromList $
[ ...
, ((0, xF86XK_AudioLowerVolume ), spawn "amixer set Master 2-")
, ((0, xF86XK_AudioRaiseVolume ), spawn "amixer set Master 2+")
, ((0, xF86XK_AudioMute ), spawn "amixer set Master toggle")
...]
http://superuser.com/questions/389737/how-do-you-make-volume-keys-and-mute-key-work-in-xmonad?answertab=active
22 July 2009 05:06 [Source: ICIS news]

By Mahua Chakravarty

SINGAPORE (ICIS news)--Asian benzene may remain volatile in the coming weeks with no firm direction from crude, and with price support from tight supply being negated by expectations that its major downstream styrene market would weaken in August, industry sources said on Wednesday.

“It’s very difficult to judge the direction of the market. Hence, we are not taking any big positions,” said a key regional trader.

Benzene was largely stable with a tendency for weakness this week, market sources said. It was trading at $840-850/tonne (€588-595/tonne) FOB (free on board)

Crude oil movement will largely dictate the benzene market, industry sources said. NYMEX light sweet crude was trading at $65/bbl levels on Wednesday, slightly easing down in

“The market will depend on crude and could remain stable around the current $850/tonne FOB (free on board)

“Price-wise the market will likely fluctuate more,” said a Japanese producer.

While strong crude and tight benzene supply support prices, the anticipated slowdown of demand from the key downstream styrene monomer (SM) market in end-July or August could put some downward pressure on benzene, market sources said. SM is the largest downstream in

Some regional SM producers could be considering cutting operating rates from next month due to the poor outlook of derivative styrenics, said producers and traders. The recent uptrend in feedstock benzene prices was also putting pressure on SM production margins, they added. SM prices were hovering at about $1,060-1,080/tonne CFR (cost and freight)

Some market players, however, maintained that demand from regional SM producers could hold up in the coming month.

Demand from another downstream segment - phenol - was also weak, with phenol production margins also under pressure, said some producers and traders.
The tight benzene supply in Asia would be the key support for the market in the meantime, said traders and producers. “Prompt cargoes for first-half and second-half August is quite tight,” said a Japanese trader. “Many producers don’t have the flexibility of [increasing] supply at present,” said a Japanese producer, citing that refining margins were getting squeezed. Benzene supply in Asia has been tight in the past month as refineries in Some of the regional crackers were also using LPG (liquefied petroleum gas) as secondary feedstock which is known to reduce aromatics output, said traders and producers. But this tight supply scenario could change in the coming weeks as some toluene disproportionation (TDP) producers could be planning to restart their plants on the back of improved margins. An Indonesia-based producer was also expected to restart production, said two Japanese traders. ($1 = €0.70)
http://www.icis.com/Articles/2009/07/22/9234034/asia-benzene-to-stay-volatile-near-term-on-fluctuating-crude.html
Co-simulation with Verilog¶

Introduction¶

One of the most exciting possibilities of MyHDL is to use it as a hardware verification language (HVL). A HVL is a language used to write test benches and verification environments, and to control simulations.

Nowadays, it is generally acknowledged that HVLs should be equipped with modern software techniques, such as object orientation. The reason is that verification is the most complex and time-consuming task of the design process. Consequently, every useful technique is welcome. Moreover, test benches are not required to be implementable. Therefore, unlike with synthesizable code, there are no constraints on creativity.

Technically, verification of a design implemented in another language requires co-simulation. MyHDL is enabled for co-simulation with any HDL simulator that has a procedural language interface (PLI). The MyHDL side is designed to be independent of a particular simulator. On the other hand, for each HDL simulator a specific PLI module will have to be written in C. Currently, the MyHDL release contains a PLI module for two Verilog simulators: Icarus and Cver.

The HDL side¶

To introduce co-simulation, we will continue to use the Gray encoder example from the previous chapters. Suppose that we want to synthesize it and write it in Verilog for that purpose. Clearly we would like to reuse our unit test verification environment. To start, let's recall what the Gray encoder in MyHDL looks like:

from myhdl import block, always_comb

@block
def bin2gray(B, G):
    """ Gray encoder.

    B -- binary input
    G -- Gray encoded output
    """

    @always_comb
    def logic():
        G.next = (B>>1) ^ B

    return logic

To show the co-simulation flow, we don't need the Verilog implementation yet, but only the interface. Our Gray encoder in Verilog would have the following interface:

module bin2gray(B, G);

   parameter width = 8;
   input [width-1:0]  B;
   output [width-1:0] G;
   ....
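The Gray-code properties that the unit tests exercise (successive codewords differ in exactly one bit, and all codewords are unique) can be checked for the (B>>1) ^ B encoding with a few lines of plain Python — a standalone sketch, not the actual test_gray.py from the MyHDL distribution:

```python
def bin2gray_int(b):
    # Same combinational logic as the MyHDL and Verilog versions above.
    return (b >> 1) ^ b

width = 4
codes = [bin2gray_int(b) for b in range(2 ** width)]

# Property 1: all codewords occur exactly once.
assert len(set(codes)) == len(codes)

# Property 2: successive codewords differ in exactly one bit.
for prev, cur in zip(codes, codes[1:]):
    assert bin(prev ^ cur).count("1") == 1

print(codes[:4])  # [0, 1, 3, 2]
```

This is exactly the kind of check that stays unchanged whether bin2gray is the MyHDL generator or the Verilog implementation behind a Cosimulation object.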
To write a test bench, one creates a new module that instantiates the design under test (DUT). The test bench declares nets and regs (or signals in VHDL) that are attached to the DUT, and to stimulus generators and response checkers. In an all-HDL flow, the generators and checkers are written in the HDL itself, but we will want to write them in MyHDL. To make the connection, we need to declare which regs & nets are driven and read by the MyHDL simulator. For our example, this is done as follows:

module dut_bin2gray;

   reg [`width-1:0] B;
   wire [`width-1:0] G;

   initial begin
      $from_myhdl(B);
      $to_myhdl(G);
   end

   bin2gray dut (.B(B), .G(G));
   defparam dut.width = `width;

endmodule

The $from_myhdl task call declares which regs are driven by MyHDL, and the $to_myhdl task call which regs & nets are read by it. These tasks take an arbitrary number of arguments. They are defined in a PLI module written in C and made available in a simulator-dependent manner. In Icarus Verilog, the tasks are defined in a myhdl.vpi module that is compiled from C source code.

The MyHDL side¶

MyHDL supports co-simulation by a Cosimulation object. A Cosimulation object must know how to run a HDL simulation. Therefore, the first argument to its constructor is a command string to execute a simulation. The way to generate and run a simulation executable is simulator dependent. For example, in Icarus Verilog, a simulation executable for our example can be obtained by running the iverilog compiler as follows:

% iverilog -o bin2gray -Dwidth=4 bin2gray.v dut_bin2gray.v

This generates a bin2gray executable for a parameter width of 4, by compiling the contributing Verilog files. The simulation itself is run by the vvp command:

% vvp -m ./myhdl.vpi bin2gray

This runs the bin2gray simulation, and specifies to use the myhdl.vpi PLI module present in the current directory.
(This is just a command line usage example; actually simulating with the myhdl.vpi module is only meaningful from a Cosimulation object.)

We can use a Cosimulation object to provide a HDL version of a design to the MyHDL simulator. Instead of a generator function, we write a function that returns a Cosimulation object. For our example and the Icarus Verilog simulator, this is done as follows:

import os

from myhdl import Cosimulation

cmd = "iverilog -o bin2gray.o -Dwidth=%s " + \
      "../../test/verilog/bin2gray.v " + \
      "../../test/verilog/dut_bin2gray.v "

def bin2gray(B, G):
    width = len(B)
    os.system(cmd % width)
    return Cosimulation("vvp -m ../myhdl.vpi bin2gray.o", B=B, G=G)

After the executable command argument, the Cosimulation constructor takes an arbitrary number of keyword arguments. Those arguments make the link between MyHDL Signals and HDL nets, regs, or signals, by named association. The keyword is the name of an argument in a $to_myhdl or $from_myhdl call; the argument is a MyHDL Signal.

With all this in place, we can now use the existing unit test to verify the Verilog implementation. Note that we kept the same name and parameters for the bin2gray function: all we need to do is to provide this alternative definition to the existing unit test.

Let's try it on the Verilog design:

module bin2gray(B, G);

   parameter width = 8;
   input [width-1:0] B;
   output [width-1:0] G;

   assign G = (B >> 1) ^ B;

endmodule // bin2gray

When we run our unit tests, we get:

% python test_gray.py
testSingleBitChange (test_gray_properties.TestGrayCodeProperties)
Check that only one bit changes in successive codewords. ... ok
testUniqueCodeWords (test_gray_properties.TestGrayCodeProperties)
Check that all codewords occur exactly once. ... ok
testOriginalGrayCode (test_gray_original.TestOriginalGrayCode)
Check that the code is an original Gray code. ...
ok

----------------------------------------------------------------------
Ran 3 tests in 0.706s

OK

Restrictions¶

In the ideal case, it should be possible to simulate any HDL description seamlessly with MyHDL. Moreover, the communicating signals at each side should act transparently as a single one, enabling fully race free operation.

For various reasons, it may not be possible or desirable to achieve full generality. As anyone who has developed applications with the Verilog PLI can testify, the restrictions in a particular simulator, and the differences over various simulators, can be quite frustrating. Moreover, full generality may require a disproportionate amount of development work compared to a slightly less general solution that may be sufficient for the target application.

Consequently, I have tried to achieve a solution which is simple enough so that one can reasonably expect that any PLI-enabled simulator can support it, and that is relatively easy to verify and maintain. At the same time, the solution is sufficiently general to cover the target application space. The result is a compromise that places certain restrictions on the HDL code. In this section, these restrictions are presented.

Only passive HDL can be co-simulated¶

The most important restriction of the MyHDL co-simulation solution is that only "passive" HDL can be co-simulated. This means that the HDL code should not contain any statements with time delays. In other words, the MyHDL simulator should be the master of time; in particular, any clock signal should be generated at the MyHDL side.

At first this may seem like an important restriction, but if one considers the target application for co-simulation, it probably isn't. MyHDL supports co-simulation so that test benches for HDL designs can be written in Python. Let's consider the nature of the target HDL designs.
For high-level, behavioral models that are not intended for implementation, it should come as no surprise that I would recommend to write them in MyHDL directly; that is one of the goals of the MyHDL effort. Likewise, gate level designs with annotated timing are not the target application: static timing analysis is a much better verification method for such designs.

Rather, the targeted HDL designs are naturally models that are intended for implementation, most likely through synthesis. As time delays are meaningless in synthesizable code, the restriction is compatible with the target application.

Race sensitivity issues¶

In typical RTL code, some events cause other events to occur in the same time step. For example, when a clock signal triggers some signals may change in the same time step. For race-free operation, an HDL must differentiate between such events within a time step. This is done by the concept of "delta" cycles. In a fully general, race free co-simulation, the co-simulators would communicate at the level of delta cycles. However, in MyHDL co-simulation, this is not entirely the case.

Delta cycles from the MyHDL simulator toward the HDL co-simulator are preserved. However, in the opposite direction, they are not. The signal changes are only returned to the MyHDL simulator after all delta cycles have been performed in the HDL co-simulator.

What does this mean? Let's start with the good news. As explained in the previous section, the concept behind MyHDL co-simulation implies that clocks are generated at the MyHDL side. When using a MyHDL clock and its corresponding HDL signal directly as a clock, co-simulation is race free. In other words, the case that most closely reflects the MyHDL co-simulation approach is race free.

The situation is different when you want to use a signal driven by the HDL (and the corresponding MyHDL signal) as a clock. Communication triggered by such a clock is not race free.
The solution is to treat such an interface as a chip interface instead of an RTL interface. For example, when data is triggered at positive clock edges, it can safely be sampled at negative clock edges. Alternatively, the MyHDL data signals can be declared with a delay value, so that they are guaranteed to change after the clock edge.

Implementation notes¶

This section requires some knowledge of PLI terminology.

Enabling a simulator for co-simulation requires a PLI module written in C. In Verilog, the PLI is part of the "standard". However, different simulators implement different versions and portions of the standard. Worse yet, the behavior of certain PLI callbacks is not defined on some essential points. As a result, one should plan to write or at least customize a specific PLI module for any simulator. The release contains a PLI module for the open source Icarus and Cver simulators.

This section documents the current approach and status of the PLI module implementation and some reflections on future implementations.

Icarus Verilog¶

Delta cycle implementation¶

To make co-simulation work, a specific type of PLI callback is needed. The callback should be run when all pending events have been processed, while allowing the creation of new events in the current time step (e.g. by the MyHDL simulator). In some Verilog simulators, the cbReadWriteSync callback does exactly that. However, in others, including Icarus, it does not. The callback's behavior is not fully standardized; some simulators run the callback before non-blocking assignment events have been processed. Consequently, I had to look for a workaround.

One half of the solution is to use the cbReadOnlySync callback. This callback runs after all pending events have been processed. However, it does not permit one to create new events in the current time step. The second half of the solution is to map MyHDL delta cycles onto real Verilog time steps.
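This delta-cycle-to-time-step mapping (using the factor of 1000 described below) can be sketched as a hypothetical Python helper — an illustration of the scheme, not the actual PLI C code:

```python
GRANULARITY = 1000        # Verilog time steps per MyHDL time step
MAX_DELTAS = GRANULARITY - 1

def to_verilog_time(myhdl_time, delta):
    """Map a (MyHDL time step, delta cycle) pair onto the finer scale."""
    if delta > MAX_DELTAS:
        # Exceeding the limit almost certainly indicates a design error;
        # the real module checks this at run-time.
        raise OverflowError("too many delta cycles in one MyHDL time step")
    return myhdl_time * GRANULARITY + delta

def from_verilog_time(verilog_time):
    """Split the fine-grained time back into (real time, delta cycle)."""
    return divmod(verilog_time, GRANULARITY)

print(to_verilog_time(5, 3))     # 5003
print(from_verilog_time(5003))   # (5, 3)
```

The factor 1000 also shows why "real" time is easy to read off when printing Verilog time: the delta cycle appears in the last three digits.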
Note that fortunately I had some freedom here because of the restriction that only passive HDL code can be co-simulated. I chose to make the time granularity in the Verilog simulator 1000 times finer than in the MyHDL simulator. For each MyHDL time step, 1000 Verilog time steps are available for MyHDL delta cycles. In practice, only a few delta cycles per time step should be needed. Exceeding this limit almost certainly indicates a design error; the limit is checked at run-time. The factor 1000 also makes it easy to distinguish "real" time from delta cycle time when printing out the Verilog time.

Passive Verilog check¶

As explained before, co-simulated Verilog should not contain delay statements. Ideally, there should be a run-time check to flag non-compliant code. However, there is currently no such check in the Icarus module. The check can be written using the cbNextSimTime VPI callback in Verilog. However, Icarus 0.7 doesn't support this callback. In the meantime, support for it has been added to the Icarus development branch. When Icarus 0.8 is released, a check will be added. In the meantime, just don't do this. It may appear to "work" but it really won't, as events will be missed over the co-simulation interface.

Cver¶

MyHDL co-simulation is supported with the open source Verilog simulator Cver. The PLI module is based on the one for Icarus and basically has the same functionality. Only some cosmetic modifications were required.

Other Verilog simulators¶

The Icarus module is written with VPI calls, which are provided by the most recent generation of the Verilog PLI. Some simulators may only support TF/ACC calls, requiring a complete redesign of the interface module. If the simulator supports VPI, the Icarus module should be reusable to a large extent. However, it may be possible to improve on it. The workaround to support delta cycles described in Section Delta cycle implementation may not be necessary.
In some simulators, the cbReadWriteSync callback occurs after all events (including non-blocking assignments) have been processed. In that case, the functionality can be supported without a finer time granularity in the Verilog simulator.

There are also Verilog standardization efforts underway to resolve the ambiguity of the cbReadWriteSync callback. The solution will be to introduce new, well-defined callbacks. From reading some proposals, I conclude that the cbEndOfSimTime callback would provide the required functionality.

The MyHDL project currently has no access to commercial Verilog simulators, so progress in co-simulation support depends on external interest and participation. Users have reported that they are using MyHDL co-simulation with the simulators from Aldec and Modelsim.

Interrupted system calls

The PLI module uses read and write system calls to communicate between the co-simulators. The implementation assumes that these calls are restarted automatically by the operating system when interrupted. This is apparently what happens on the Linux box on which MyHDL is developed.

It is known how non-restarted interrupted system calls should be handled, but currently such code cannot be tested on the MyHDL development platform. Also, it is not clear whether this is still a relevant issue with modern operating systems. Therefore, this issue has not been addressed at this moment. However, assertions have been included that should trigger when this situation occurs. Whenever an assertion fires in the PLI module, please report it. The same holds for Python exceptions that cannot be easily explained.

What about VHDL?

It would be nice to have an interface to VHDL simulators such as the Modelsim VHDL simulator. Let us summarize the requirements to accomplish that:

- We need a procedural interface to the internals of the simulator.
- The procedural interface should be a widely used industry standard, so that we can reuse the work in several simulators.
- MyHDL is an open-source project, and therefore there should also be an open-source simulator that implements the procedural interface.

VPI for Verilog matches these requirements. It is a widely used standard and is supported by the open-source Verilog simulators Icarus and Cver. However, for VHDL the situation is different. While there exists a standard called VHPI, it is much less popular than VPI. Also, to our knowledge there is only one credible open-source VHDL simulator (GHDL), and it is unclear whether it has VHPI capabilities that are powerful enough for MyHDL's purposes. Consequently, the development of co-simulation for VHDL is currently on hold. For some applications, there is an alternative: see Conversion of test benches.
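As a footnote to the "Interrupted system calls" section above: the conventional way to handle system calls that are not restarted automatically is a retry loop on EINTR. The sketch below shows that generic pattern; it is not the PLI module's actual code.

```c
#include <errno.h>
#include <unistd.h>

/* Retry a read() that was interrupted by a signal (errno == EINTR)
 * instead of assuming the OS restarts it automatically.  The same
 * wrapper pattern applies to write(). */
ssize_t read_retry(int fd, void *buf, size_t count)
{
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}
```

On platforms where SA_RESTART is in effect for the relevant signals, the loop simply never triggers, so the wrapper is harmless to use unconditionally.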
Standard 16×2 LCD Display: The Arduino software includes built-in functions (the LiquidCrystal library) to display text on an LCD, clear it, and move data around. Interfacing an LCD module to an Arduino is very easy; here you will learn how to interface an LCD with an Arduino.

Components Required:

- Arduino UNO R3 board
- Variable resistor (potentiometer), 10k
- Resistor, 220 ohm
- 16×2 LCD module
- Mini breadboard
- A few wires and a cable for power

Circuit connections:

- RS pin of the LCD connects to digital pin 12 of the Arduino
- E (Enable) pin connects to digital pin 11
- D4 pin connects to digital pin 5
- D5 pin connects to digital pin 4
- D6 pin connects to digital pin 3
- D7 pin connects to digital pin 2
- R/W pin connects to GND
- Pin 1 and pin 4 of the LCD connect to GND
- Pin 2 of the LCD connects to +VCC
- A 10k ohm potentiometer controls the brightness of the panel:
  - its second (middle) pin connects to pin 3 of the LCD
  - its first pin connects to +VCC
  - its third pin connects to GND
  - the first and third pins of the potentiometer can be interchanged

Working of the code:

#include <LiquidCrystal.h>

int i = 0;
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

In the line above, a LiquidCrystal object called lcd is created, listing the Arduino digital pins that the LCD pins are connected to.

void setup() {
  // set the number of columns and rows of the LCD
  lcd.begin(16, 2);
  // print a message on the LCD
  lcd.print("Hackatronic.com ");
}

void loop() {
  // lcd.print(micros() / 1000);  // this would print a running count
  lcd.print("^^^");
  lcd.print(" ");
  delay(100);
}

Complete Arduino LCD program:

#include <LiquidCrystal.h>

int i = 0;
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);
  lcd.print("Hackatronic.com ");
}

void loop() {
  // print the number of seconds since the last reset:
  // lcd.print(micros() / 1000);
  lcd.print("^^^");
  lcd.print(" ");
  delay(100);
}
This post is an attempt to reduce the number of times I need to explain things in Stack Overflow comments. You may well be reading it via a link from Stack Overflow – I intend to refer to this post frequently in comments. Note that this post is mostly not about text handling – see my post on common mistakes in date/time formatting and parsing for more details on that.

There are few classes which cause so many similar questions on Stack Overflow as java.util.Date. There are four causes for this:

- Date and time work is fundamentally quite complicated and full of corner cases. It's manageable, but you do need to put some time into understanding it.
- The java.util.Date class is awful in many ways (details given below).
- It's poorly understood by developers in general.
- It's been badly abused by library authors, adding further to the confusion.

TL;DR: java.util.Date in a nutshell

The most important things to know about java.util.Date are:

- You should avoid it if you possibly can. Use java.time.* if possible, or the ThreeTen-Backport (java.time for older versions, basically) or Joda Time if you're not on Java 8 yet.
- If you're forced to use it, avoid the deprecated members. Most of them have been deprecated for nearly 20 years, and for good reason.
- If you really, really feel you have to use the deprecated members, make sure you really understand them.
- A Date instance represents an instant in time, not a date. Importantly, that means:
  - It doesn't have a time zone.
  - It doesn't have a format.
  - It doesn't have a calendar system.

Now, onto the details...

What's wrong with java.util.Date?

java.util.Date (just Date from now on) is a terrible type, which explains why so much of it was deprecated in Java 1.1 (but is still being used, unfortunately). Design flaws include:

- Its name is misleading: it doesn't represent a date, it represents an instant in time. So it should be called Instant – as its java.time equivalent is.
- It's non-final: that encourages poor uses of inheritance, such as java.sql.Date (which is meant to represent a date, and is also confusing due to having the same short name).
- It's mutable: date/time types are natural values which are usefully modeled by immutable types. The fact that Date is mutable (e.g. via the setTime method) means diligent developers end up creating defensive copies all over the place.
- It implicitly uses the system-local time zone in many places – including toString() – which confuses many developers. More on this in the "What's an instant" section.
- Its month numbering is 0-based, copied from C. This has led to many, many off-by-one errors.
- Its year numbering is 1900-based, also copied from C. Surely by the time Java came out we had an idea that this was bad for readability?
- Its methods are unclearly named: getDate() returns the day-of-month, and getDay() returns the day-of-week. How hard would it have been to give those more descriptive names?
- It's ambiguous about whether or not it supports leap seconds: "A second is represented by an integer from 0 to 61; the values 60 and 61 occur only for leap seconds and even then only in Java implementations that actually track leap seconds correctly." I strongly suspect that most developers (including myself) have made plenty of assumptions that the range for getSeconds() is actually in the range 0-59 inclusive.
- It's lenient for no obvious reason: "In all cases, arguments given to methods for these purposes need not fall within the indicated ranges; for example, a date may be specified as January 32 and is interpreted as meaning February 1." How often is that useful?

I could find more problems, but they would be getting pickier. That's a plentiful list to be going on with. On the plus side:

- It unambiguously represents a single value: an instant in time, with no associated calendar system, time zone or text format, to a precision of milliseconds.
Unfortunately even this one "good aspect" is poorly understood by developers. Let's unpack it...

What's an "instant in time"?

Note: I'm ignoring relativity and leap seconds for the whole of the rest of this post. They're very important to some people, but for most readers they would just introduce more confusion.

When I talk about an "instant" I'm talking about the sort of concept that could be used to identify when something happened. (It could be in the future, but it's easiest to think about in terms of a past occurrence.) It's independent of time zone and calendar system, so multiple people using their "local" time representations could talk about it in different ways.

Let's use a very concrete example of something that happened somewhere that doesn't use any time zones we're familiar with: Neil Armstrong walking on the moon. The moon walk started at a particular instant in time – if multiple people from around the world were watching at the same time, they'd all (pretty much) say "I can see it happening now" simultaneously.

If you were watching from mission control in Houston, you might have thought of that instant as "July 20th 1969, 9:56:20 pm CDT". If you were watching from London, you might have thought of that instant as "July 21st 1969, 3:56:20 am BST". If you were watching from Riyadh, you might have thought of that instant as "Jumādá 7th 1389, 5:56:20 am (+03)" (using the Umm al-Qura calendar). Even though different observers would see different times on their clocks – and even different years – they would still be considering the same instant. They'd just be applying different time zones and calendar systems to convert from the instant into a more human-centric concept.

So how do computers represent instants? They typically store an amount of time before or after a particular instant which is effectively an origin.
Many systems use the Unix epoch, which is the instant represented in the Gregorian calendar in UTC as midnight at the start of January 1st 1970. That doesn't mean the epoch is inherently "in" UTC – the Unix epoch could equally well be defined as "the instant at which it was 7pm on December 31st 1969 in New York".

The Date class uses "milliseconds since the Unix epoch" – that's the value returned by getTime(), and set by either the Date(long) constructor or the setTime() method. As the moon walk occurred before the Unix epoch, the value is negative: it's actually -14159020000.

To demonstrate how Date interacts with the system time zone, let's show the three time zones mentioned before – Houston (America/Chicago), London (Europe/London) and Riyadh (Asia/Riyadh). It doesn't matter what the system time zone is when we construct the date from its epoch-millis value – that doesn't depend on the local time zone at all. But if we use Date.toString(), that converts to the current default time zone to display the result. Changing the default time zone does not change the Date value at all. The internal state of the object is exactly the same. It still represents the same instant, but methods like toString(), getMonth() and getDate() will be affected.

Here's sample code to show that:

import java.util.Date;
import java.util.TimeZone;

public class Test {
    public static void main(String[] args) {
        // The default time zone makes no difference when constructing
        // a Date from a milliseconds-since-Unix-epoch value
        Date date = new Date(-14159020000L);

        // Display the instant in three different time zones
        TimeZone.setDefault(TimeZone.getTimeZone("America/Chicago"));
        System.out.println(date);

        TimeZone.setDefault(TimeZone.getTimeZone("Europe/London"));
        System.out.println(date);

        TimeZone.setDefault(TimeZone.getTimeZone("Asia/Riyadh"));
        System.out.println(date);

        // Prove that the instant hasn't changed...
        System.out.println(date.getTime());
    }
}

The output is as follows:

Sun Jul 20 21:56:20 CDT 1969
Mon Jul 21 03:56:20 GMT 1969
Mon Jul 21 05:56:20 AST 1969
-14159020000

The "GMT" and "AST" abbreviations in the output here are highly unfortunate – java.util.TimeZone doesn't have the right names for pre-1970 values in all cases. The times are right though.

Common questions

How do I convert a Date to a different time zone?

You don't – because a Date doesn't have a time zone. It's an instant in time. Don't be fooled by the output of toString(). That's showing you the instant in the default time zone. It's not part of the value. If your code takes a Date as an input, any conversion from a "local time and time zone" to an instant has already occurred. (Hopefully it was done correctly...) If you start writing a method with a signature like this, you're not helping yourself:

// A method like this is always wrong
Date convertTimeZone(Date input, TimeZone fromZone, TimeZone toZone)

How do I convert a Date to a different format?

You don't – because a Date doesn't have a format. Don't be fooled by the output of toString(). That always uses the same format, as described by the documentation. To format a Date in a particular way, use a suitable DateFormat (potentially a SimpleDateFormat) – remembering to set the time zone to the appropriate zone for your use.

Reader comment: Good article. The concrete example of the moon walk being an instant in time helps a lot. One suggestion would be to extend that with a more relative example, for example, dealing with a Date that was constructed from System.currentTimeMillis(). I think the point to get across would be that the time zone is irrelevant here as well, as long as the underlying system clock that the JVM uses is accurate; and by accurate, I mean the correct time/date for whatever time zone the system thinks it's in (and maybe internally, it is always UTC). I'd like to hear a bit more about that aspect.
For example, when I boot my laptop into Linux, and then back into Windows, the time is messed up for a bit, until it resyncs with the time service.
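Picking up the post's formatting advice: the format object, not the Date, carries the zone and pattern. Here is a small sketch of that; the pattern string and zone choices are arbitrary, and with java.time, Instant.ofEpochMilli(-14159020000L) models the same instant more directly.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class FormatDemo {
    public static void main(String[] args) {
        Date moonWalk = new Date(-14159020000L);

        // The DateFormat, not the Date, carries the time zone and pattern.
        SimpleDateFormat format =
            new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT);

        format.setTimeZone(TimeZone.getTimeZone("America/Chicago"));
        System.out.println(format.format(moonWalk)); // 1969-07-20 21:56:20

        format.setTimeZone(TimeZone.getTimeZone("Europe/London"));
        System.out.println(format.format(moonWalk)); // 1969-07-21 03:56:20
    }
}
```

Note that the Date value itself is never touched: only the formatter's zone changes between the two calls.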
using System;
using System.Speech.Synthesis;

namespace Testing
{
    class Program
    {
        static void Main(string[] args)
        {
            // construct a message to say:
            var user = "Danny";
            var daypart = string.Empty;
            var now = DateTime.Now;
            if (now.Hour >= 6 && now.Hour <= 12)
            {
                daypart = " good morning";
            }
            else if (now.Hour > 12 && now.Hour <= 18)
            {
                daypart = " good afternoon";
            }
            else if (now.Hour > 18 && now.Hour <= 24)
            {
                daypart = " good evening";
            }
            var message = "Hello " + user + daypart;

            // start to speak
            var synth = new SpeechSynthesizer();
            synth.Speak(message);
            Console.WriteLine(message);
            Console.ReadKey();
        }
    }
}
Can someone help me? My script changes the value of my cycle button, but does not execute it.

Reply: Seems like it executes very well, but only delayed, because you are changing a combobox/dropdown instead of the slider value itself. Try modifying the sliders instead of the combobox/dropdown for fewer delays.

EDIT: I forgot that those sliders might also just be numerical IDs, so it may be inaccurate as well in terms of being able to handle the data being fed in, and it may change depending on the constraint build-up. I guess you can't do anything other than deal with the delay. Try keeping the time slider running while you switch that button back and forth, to see whether it updates every time something changes in the scene. Usually it's just a 1-frame delay, or whenever something is updated. Alternatively, try to use c4d.EventAdd() after each if-else statement.

Hi @pyxelrigger, thanks for reaching out. The main issue is that, to update properly, you need to send the message MSG_DESCRIPTION_COMMAND. This is what the UI does: it sets the parameter, then sends this message.

But adding this call in the execute method of the Python tag is not enough. The message is processed by the constraint tag and adds an Undo, and, as said multiple times, it is not possible to add an undo in a threaded environment (see "Python Tags - Shouldn't they really be able to change the scene?"), so this fails. The workaround is to not send the message during the execution of the scene, but when the value is set in the User Data: that way you react to a GUI interaction (main thread), where you are free to do what you want.
import c4d
# Welcome to the world of Python

def message(msgId, msgData):
    # React to the change within the UI. Since it came from the UI, this is
    # executed in the Main Thread, so you can add an Undo or an Event.
    if msgId == c4d.MSG_DESCRIPTION_POSTSETPARAMETER:
        # Check if it's a User Data ID
        if msgData['descid'].IsPartOf(c4d.DESCID_DYNAMICSUB):
            # Check if it's the User Data ID 1 that changed
            if msgData['descid'][-1].id == 1:
                constraintTag = op.GetObject().GetTag(1019364)
                # This will update the stuff and create an Undo
                constraintTag.Message(c4d.MSG_DESCRIPTION_COMMAND,
                    {"id": c4d.ID_CA_CONSTRAINT_TAG_PARENT_LOCALTRANFORM_UPDATEBUTTON})
                c4d.EventAdd()
    return True

def main():
    option = op[c4d.ID_USERDATA, 1]
    constraintTag = op.GetObject().GetTag(1019364)
    # Defines the new value (we could do it in the message method as well,
    # but this shows that Execute is called before)
    constraintTag[c4d.ID_CA_CONSTRAINT_TAG_PARENT_LOCALTRANFORM_UPDATEBUTTON] = option + 1

(Attached: test_scene.c4d)

Cheers,
Maxime.

Thanks! Seems to work well! The problem is that apparently it only runs on the tag; if I set the option as a UserData of my object, it doesn't work.
GR-SAKURA Special Page: Sketch on Web Compiler

Overview

This page introduces how to sketch on the Web Compiler for GR-SAKURA. Both Windows and Mac are available.

Preparation

GR-SAKURA and a USB cable (mini-B type).

Procedure

1. Log in to the Web Compiler

Log in to the Web Compiler from the top page of GADGET RENESAS. When you click the image below, the page is displayed in another tab. Then click [Log In] or [Guest Log In]. If you choose the guest login, your sketch is removed when you log out. If you entered from [Log In], fill in your MyRenesas account and click [Sign in].

2. Create a new project

To create a project, click the button shown below. If you are logging in for the first time, skip this step.

3. Select a template

Select an appropriate template, such as GR-SAKURA_Sketch_Vxxx.zip, where xxx indicates the version.

4. Build the code

Since the code shown on the screen is a working sample, you don't need to debug it. To build the code, click the "Execute Build" icon on the right navigation menu shown below. This should show you "The Compilation Is Complete" pop-up. Note that it takes a little time to build the first time, because all source files are compiled. When the build is OK, the pop-up screen shows "Make process completed" at the bottom of the screen. Then click the close button.

5. Download the sketch

When the build is successfully completed, the Web Compiler creates the binary file "sakura_sketch.bin" under the .CPP file shown below. Move the mouse onto the bin file and right-click - that shows the pop-up menu below. Then click the "Download" menu.

6. Connect GR-SAKURA

Connect GR-SAKURA to the PC with the USB cable, and then push the Reset button on GR-SAKURA. GR-SAKURA is displayed as a USB storage device.

7. Flashing (copy the program to GR-SAKURA)

Copy & paste the sakura_sketch.bin to the storage. Now the LED on GR-SAKURA is flashing, isn't it?

Sample code for LED flashing like a hotaru (firefly)

Try the sample below by copying & pasting it into your sketch on the Web Compiler.
#include <Arduino.h>

void setup() {
}

void loop() {
    for (int i = 0; i < 256; i++) {
        analogWrite(PIN_LED0, i);
        delay(5);
    }
    for (int i = 0; i < 256; i++) {
        analogWrite(PIN_LED0, 255 - i);
        delay(5);
    }
}
I was wondering if anyone knows why I am getting the error 'Warning: File <> could not be written'. I am trying to export BVH from a second character in the scene. I used FBX import to bring it into the scene. The original one that I did not import seems to plot and then export BVH fine. The second character is the same as the first, reimported again.

Edit: I just wanted to add that it does not seem to matter whether it is just in a T-pose with no animation, or I have it animated with effectors and such; same message.

Hi CooperN,

The BVH format is fairly rigid in requiring that the joint naming and hierarchy fit the standard. If you are trying to export BVH from a non-BVH skeleton, or have modified the hierarchy or naming of joints, you may see this error. Also, if you are working with multiple characters in a scene, check whether you used a Namespace to protect naming when you merged - otherwise the joints will be renamed with a +1 suffix each time a duplicate is merged.

Best Regards,
Lee

Thank you Lee! Like it said, it was the same character used twice. The namespace is what I was missing. Thanks!

Problem here also :-( I have 2 female characters, mother and daughter, and I have tried naming them backwards and forwards and managed to get the daughter BVH to finally export, but the mother is being stubborn. Using these examples, could you give me a naming convention please? Is it:

Mesh:daughter
Hip:daughter
Mesh:mother
Hip:mother
Character:daughter
Character:mother
etc?

I keep coming up with +1's on characters, control rig, etc. Thanks in advance.

Spicey
Use WebEngine without WebView
user12956197, Feb 21, 2013 5:41 AM

Can WebEngine be used without WebView? I want to use it as a headless browser to test web pages with JavaScript on Linux without an X server. Is it possible? Thanks.

1. Re: use WebEngine without WebView
e-dubkendo, Feb 21, 2013 6:08 AM (in response to user12956197)

Not to avoid your question or steer anyone away from JavaFX, but PhantomJS runs without an X server since 1.5. You could use this, or something like casper.js (which builds on it) to do what you want, unless you specifically need a Java/JavaFX solution. I'm actually interested in the answer to your question myself for other potential uses, so I hope someone who can answer it chimes in.

2. Re: use WebEngine without WebView
jsmith, Feb 21, 2013 6:47 PM (in response to user12956197)

WebEngine can work without WebView, as demonstrated in the following sample. Whether you could "use it as a headless browser to test web pages with JavaScript on Linux without an X server", I don't know, and I don't have the appropriate environment to test it - you would have to try it and see.

import javafx.application.Application;
import javafx.beans.value.*;
import javafx.scene.Scene;
import javafx.scene.control.TextArea;
import javafx.scene.web.WebEngine;
import javafx.stage.Stage;
import org.w3c.dom.Document;

public class HeadlessWebEngine extends Application {
    @Override
    public void start(Stage stage) {
        final TextArea docString = new TextArea();
        docString.setEditable(false);
        final WebEngine engine = new WebEngine();
        engine.documentProperty().addListener(new ChangeListener<Document>() {
            @Override
            public void changed(ObservableValue<? extends Document> observable,
                                Document oldDoc, Document newDoc) {
                docString.setText((String) engine.executeScript(
                    "document.documentElement.innerHTML"));
            }
        });
        engine.load("");
        stage.setScene(new Scene(docString, 800, 600));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

3. Re: use WebEngine without WebView
b9a8eb1b-1939-49c4-972e-a18a0b94153e, Jul 26, 2013 11:52 PM (in response to jsmith)

While your example does not use WebView, it does try to create a window with stage.show(). I've tested, and it does error on a server without an X server. Actually, even if you comment out stage.show() it still errors. I've been trying to find a way to run headless myself; I'm not sure if it is possible.

4. Re: use WebEngine without WebView
jsmith, Jul 27, 2013 2:06 AM (in response to b9a8eb1b-1939-49c4-972e-a18a0b94153e)

You might be interested in RT-20494, "Headless Glass toolkit needs to be connected to Quantum and Prism unit tests". It's one of those things currently scheduled for Lombard (Java 8) implementation that I think will be pushed from that release, because the release is past its feature-complete date. But maybe there is some headless toolkit already implemented out there that you could just enable if you knew the appropriate command line switch; I don't know. Though I doubt it - you can only read so much into obscure text in JIRA issues.

Similar to RT-31175, "permit headless creation of charts": you could create an issue request for this. If you're lucky, it would get scheduled for an 8.1 release timeframe (e.g. maybe 6-12 months after Java 8 is released). It's not an uncommon feature request, and it makes some sense, so there is probably a chance it would make it into 8.1.
cylon

A JavaScript robotics framework using Node.js

npm install cylon

Cylon.js is a JavaScript framework for robotics and physical computing using Node.js.

Examples:

Basic Arduino with an LED, using the Firmata protocol. The example below connects to an Arduino, and every second turns the LED either on or off. The example requires that the Arduino has the Firmata sketch installed, and that it is connected on the port /dev/ttyACM0. You need to install Firmata on your Arduino, and to change the port parameter to match the port that your system is actually using. Make sure to upload the "Standard Firmata" sketch or an equivalent Firmata sketch to your Arduino first. Without that code running on the Arduino, Firmata can't communicate with Cylon. You can find the example sketch in your Arduino software under "Examples > Firmata > StandardFirmata".

var Cylon = require("cylon");

// Initialize the robot
var robot = Cylon.robot({
  // Change the port to the correct port for your Arduino.
  connection: { name: 'arduino', adaptor: 'firmata', port: '/dev/ttyACM0' },
  device: { name: 'led', driver: 'led', pin: 13 },

  work: function(my) {
    // we do our thing here
    every((1).second(), function() {
      my.led.toggle();
    });
  }
});

// start working
robot.start();

Parrot ARDrone 2.0

Hardware Support

Cylon.js has an extensible system for connecting to hardware devices.
The following robotics, physical computing, or software platforms are currently supported:

- Ardrone <==> Adaptor/Drivers
- Arduino <==> Adaptor
- Beaglebone Black <==> Adaptor
- Crazyflie <==> Adaptor/Driver
- Digispark <==> Adaptor
- Joystick <==> Adaptor/Driver
- Keyboard <==> Adaptor/Driver
- Leap Motion <==> Adaptor/Driver
- Neurosky <==> Adaptor/Driver
- OpenCV <==> Adaptor/Drivers
- Pebble <==> Adaptor/Driver
- Rapiro <==> Adaptor/Driver
- Raspberry Pi <==> Adaptor
- Salesforce <==> Adaptor/Driver
- Skynet <==> Adaptor
- Spark <==> Adaptor
- Sphero <==> Adaptor/Driver
- Tessel <==> Adaptor/Driver

Many devices that use General Purpose Input/Output (GPIO) share a set of drivers provided by the cylon-gpio module:

- Analog Sensor
- Button
- Continuous Servo
- IR Rangefinder
- LED
- MakeyButton
- Motor
- Maxbotix Ultrasonic Range Finder
- Servo

Devices that use Inter-Integrated Circuit (I2C) share a set of drivers provided by the cylon-i2c module.

More platforms and drivers are coming soon... follow us on Twitter @cylonjs for the latest updates.

Getting Started

Installation

All you need to get started is the cylon module:

npm install cylon

Then install modules for whatever hardware support you want to use from your robot. For the example below, an Arduino using the Firmata protocol:

npm install cylon-firmata

CLI

Cylon has a Command Line Interface (CLI) so you can access important features right from the command line.

Usage: cylon [command] [options]

Commands:
  generate <name>    Generates a new adaptor

Options:
  -h, --help         output usage information
  -V, --version      output the version number

Generator

Want to integrate a hardware device we don't have Cylon support for yet? There's a generator for that! You can easily generate a new skeleton Cylon adaptor to help you get started.
Simply run the cylon generate command, and the generator will create a new directory with all of the files in place for your new adaptor module.

$ cylon generate awesome_device
Creating cylon-awesome_device adaptor.
Compiling templates.

$ ls ./cylon-awesome_device
Gruntfile.js  LICENSE  README.md  dist/  package.json  src/  test/

Documentation

We're busy adding documentation to our web site - please check there as we continue to work on Cylon.js. If you want to help us with some documentation on the site, you can go to the cylonjs.com branch and then follow the instructions.

Release history

- Version 0.13.1 - Add API authentication and HTTPS support
- Version 0.13.0 - Set minimum Node version to 0.10.20, add utils to global namespace and improve initialization routines
- Version 0.12.0 - Extraction of CLI tooling
- Version 0.11.2 - Bugfixes
- Version 0.11.0 - Refactor into pure JavaScript
- Version 0.10.4 - Add JS helper functions
- Version 0.10.3 - Fix dependency issue
- Version 0.10.2 - Create connections convenience vars, refactor config loading
- Version 0.10.1 - Updates required for test-driven robotics, update Robeaux version, bugfixes
- Version 0.10.0 - Use Robeaux UX, add CLI commands for helping connect to devices, bugfixes
- Version 0.9.0 - Add AngularJS web interface to API, extensible commands for CLI
- Version 0.8.0 - Refactored Adaptor and Driver into proper base classes for easier authoring of new modules
- Version 0.7.0 - cylon command for generating new adaptors, support code for better GPIO support, literate examples
- Version 0.6.0 - API exposes robot commands, fixes issues in driver/adaptor init
- Version 0.5.0 - Improve API, add GPIO support for reuse in adaptors
- Version 0.4.0 - Refactor proxy in Cylon.Basestar, improve API
- Version 0.3.0 - Improved Cylon.Basestar, and added API
- Version 0.2.0 - Cylon.Basestar to help develop external adaptors/drivers
- Version 0.1.0 - Initial release for ongoing development

License

Copyright (c) 2013-2014 The Hybrid Group. Licensed under the Apache 2.0 license.
I have a list of the following class:

public class LinqTest
{
    public int id { get; set; }
    public string value { get; set; }
}

List<LinqTest> myList = new List<LinqTest>();
myList.Add(new LinqTest() { id = 1, value = "a" });
myList.Add(new LinqTest() { id = 1, value = "b" });
myList.Add(new LinqTest() { id = 2, value = "c" });

I want to select only the elements with distinct ids, keeping the first element for each id. For the list above, the expected result is [{id=1, value="a"}, {id=2, value="c"}]. In general, given

id  value
1   a
1   b
2   c
3   d
3   e

the result should be

id  value
1   a
2   c
3   d

i.e. one element per distinct id.

Answer: group by id, then take the first element of each group:

myList.GroupBy(test => test.id)
      .Select(grp => grp.First());

Edit: as getting this IEnumerable<> into a List<> seems to be a mystery to many people, you can simply write:

var result = myList.GroupBy(test => test.id)
                   .Select(grp => grp.First())
                   .ToList();

But one is often better off working with the IEnumerable rather than IList, as the Linq above is lazily evaluated: it doesn't actually do all of the work until the enumerable is iterated. When you call ToList, it actually walks the entire enumerable, forcing all of the work to be done up front. (And that may take a little while if your enumerable is infinitely long.) The flipside to this advice is that each time you enumerate such an IEnumerable, the work to evaluate it has to be done afresh. So you need to decide for each case whether it is better to work with the lazily evaluated IEnumerable or to realize it into a List, Set, Dictionary or whatnot.
https://codedump.io/share/kawifMivz8hX/1/select-distinct-using-linq
Keeping up with the React libraries (excluding component libraries, I track those here) worth talking about, on a single page.

Table of Contents
- 3D
- Accessibility
- Animation
- Browser Features
- Data Fetching Libraries
- Data Visualisation
- Forms
- Mocking APIs
- State Management
- Testing
- Videos

3D

react-three-fiber is a React renderer for threejs on the web and react-native. Chances are, if you've seen really cool 3D animations on a website, it was probably built in three.js; react-three-fiber gives you a way to tap into React while building your 3D scenes. There's also a Next.js + Tailwind starter worth looking into.

Accessibility

React Aria provides you with Hooks that provide accessibility for your components, so all you need to worry about is design and styling. Particularly useful for those building design systems. Example usage - useButton:

import { useButton } from '@react-aria/button';

function Button(props) {
  let ref = React.useRef();
  let { buttonProps } = useButton(props, ref);
  return (
    <button {...buttonProps} ref={ref}>
      {props.children}
    </button>
  );
}

<Button onPress={() => alert('Button pressed!')}>Press me</Button>;

Animation

Animation adds soul to otherwise boring things. These libraries let you build the web app equivalent of Pixar's intro animation, but in React. Both libraries have similar APIs and support spring physics over time-based animation, though Framer Motion seems to be used more often on GitHub.

Framer Motion is an animation and gesture library built by Framer. The added benefit of Framer Motion is that your designers can build animations in Framer, then hand off designs to be accurately implemented using Framer's own library.

React Spring uses spring physics rather than time-based animation to animate your components. Relative to Framer Motion, React Spring has been under development for longer, with a greater number of open-source contributors.
Browser Features

Ever been asked to implement random features that someone on the product team saw on another website and thought was cool? These libraries save you time on building those features.

useClippy is a React hook that lets you read and write to your user's clipboard. Particularly useful for improving UX, letting you save your users from double clicking on your data fields, by providing them a button to copy values.

ReactPlayer is an awesome library that lets you embed video from major sources (YouTube, Facebook, Twitch, SoundCloud, and more), and define your own controls to the video.

React Toastify allows you to add fancy in-app notifications (like the "Message Sent" notification in Gmail) to your React app with only four additional lines of code.

Data Fetching Libraries

You might be wondering why you'd even need a data fetching library, when you could use useEffect and fetch(). The short answer is that these libraries also handle caching, loading and error states, avoiding redundant network requests, and much more. You could spend hundreds of hours implementing these features in a Redux-like state manager, or just install one of these libraries.

React Query lets you request data from the same API endpoint (for example api/users/1) across multiple components, without resulting in multiple network requests.

Similar to React Query (in fact, based on the same premise, see this issue for more info), SWR is another data fetching library worth checking out. SWR has the added security of being used by Vercel in production as part of their platform.

Data Visualisation

"Everyone wants to have beautiful charts, but nobody wants to learn no complicated-ass D3" - Ronnie Coleman, probably

If you've ever used a popular charting library such as Recharts or Charts.js, you'll know it's surprisingly easy to reach the limits of what a charting library can do for you.
visx is different, in that it provides you with lower-level React components that are much closer to D3 than other charting libraries. This makes it easier to build your own re-usable charting library, or hyper-customised charts.

Forms

Forms suck. Take it from someone who once had to build a "smart" form with 26 possible fields to fill out: you want to pass off as much as possible to your form library, leaving you with only quick field names to enter.

React Hook Form is different to other form libraries, in that by default, you're not building controlled components and watching their state. This means your app's performance won't get slower as you add more fields to your form. On top of that, it's probably one of the best documented libraries out there: every example has a CodeSandbox, making it easy to fork and try out your particular use case.

Mocking APIs

You could probably argue mocking libraries belong more under testing utilities, but they have added benefits during development too. While not specifically tied to React, they can be extremely useful when you want dynamic responses from your mocked endpoints.

MSW is a library that lets you mock both REST and GraphQL APIs, and it uses service workers to generate actual network requests. Why is that cool? You can actually use MSW as part of your development workflow regardless of technology choice, not only while testing! It's so cool that Kent C. Dodds wrote a whole article about it. In short, you can use MSW to experiment with building frontend features before the backend is ready AND mock your backend for tests.

State Management

There's been a fair bit of innovation in state management since the early days of Redux; it's worth taking a look again if you're interested in using global state.

Jotai describes itself as a primitive state management solution for React, and a competitor to Recoil.
It's quite minimalist, meaning less API to learn, and if you understand React's useState hook, you'll probably understand Jotai's useAtom hook.

Mobx is a global state management library that requires a bit of a mental model shift to get used to. Once you get it though, Mobx provides a solution to state management and change propagation without the boilerplate.

Recoil is a state management library: think Redux meets React Hooks, minus the boilerplate.

Redux Toolkit (or RTK) is the official, opinionated way to manage your state using Redux. RTK greatly reduces the amount of boilerplate necessary for using Redux, provides sensible defaults, and keeps the same immutable update logic that we know and love.

XState is a library that lets you formalise your React app as a finite state machine. State machines aren't a particularly new concept, but developers have only recently started to realise that maybe our apps could be less buggy if we explicitly define the states they can be in, and the inputs required to transition between states. XState also generates charts for you based on your app's xstate configuration, meaning your documentation will stay up to date as you code.

Zustand is another state management library that aims to be simple and un-opinionated. The main difference is that you use hooks as the main way to fetch state, and it doesn't require peppering your app with context providers.

Testing

React Testing Library (RTL) has become kind of a big deal for testing in React (and other libraries/frameworks). It's now the default testing library that comes with create-react-app.

RTL replaces Enzyme in your testing stack. While both libraries provide methods for rendering React components in tests, RTL exposes functions that nudge developers away from testing implementation details, and towards testing functionality.

Videos

Videos?! You ask? Why yes, dear reader. There are now libraries for making videos in React.
Remotion lets you make videos of your React components rendering - whether that involves fetching data from an API and displaying it, or showing cool animations.
https://maxrozen.com/keeping-up-with-react-libraries
Wanted: an immutable, hashable dict-like type, something like collections.namedtuple but supporting keys, values, get, in, and for.

Python doesn't have a builtin frozendict type. It turns out this wouldn't be useful too often (though it would still probably be useful more often than frozenset is). The most common reason to want such a type is when memoizing function calls for functions with unknown arguments. The most common solution to store a hashable equivalent of a dict (where the values are hashable) is something like tuple(sorted(kwargs.iteritems())). This depends on the sorting not being a bit insane. Python cannot positively promise sorting will result in something reasonable here. (But it can't promise much else, so don't sweat it too much.)

You could easily enough make some sort of wrapper that works much like a dict. It might look something like:

import collections

class FrozenDict(collections.Mapping):
    """Don't forget the docstrings!!"""

    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)
        self._hash = None

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)

    def __getitem__(self, key):
        return self._d[key]

    def __hash__(self):
        # It would have been simpler and maybe more obvious to
        # use hash(tuple(sorted(self._d.iteritems()))) from this discussion
        # so far, but this solution is O(n). I don't know what kind of
        # n we are going to run into, but sometimes it's hard to resist the
        # urge to optimize when it will gain improved algorithmic performance.
        if self._hash is None:
            self._hash = 0
            for pair in self.iteritems():
                self._hash ^= hash(pair)
        return self._hash

It should work great:

>>> x = FrozenDict(a=1, b=2)
>>> y = FrozenDict(a=1, b=2)
>>> x is y
False
>>> x == y
True
>>> x == {'a': 1, 'b': 2}
True
>>> d = {x: 'foo'}
>>> d[y]
'foo'
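The memoization use case mentioned above, a hashable stand-in for kwargs via tuple(sorted(...)), can be sketched as a small decorator. This is a Python 3 illustration of that idea (using items() rather than the Python 2 iteritems() in the answer); the memoize name is just for this example:

```python
import functools

def memoize(func):
    """Cache results keyed on positional args plus a hashable view of kwargs."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # tuple(sorted(...)) gives a hashable, order-independent stand-in
        # for the kwargs dict (assuming all values are themselves hashable).
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapper

calls = []  # records each time the real function body actually runs

@memoize
def power(base, exp=2):
    calls.append((base, exp))
    return base ** exp

power(3, exp=2)
power(3, exp=2)  # second call is served from the cache
print(len(calls))  # 1
```

This is exactly the pattern a FrozenDict would tidy up: the cache key could then simply be (args, FrozenDict(kwargs)). The standard library's functools.lru_cache takes a similar approach internally for keyword arguments.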
https://codedump.io/share/2O9W8jC6WKbO/1/what-would-a-quotfrozen-dictquot-be