Just returned from a 2-night stay with our dog, had a relaxing pampered time. The room we stayed in had a sea view and was gorgeous, even had a blanket, bowl and towel for the dog! Our dog was made to feel more than welcome and Cristine went out of her way to make us not feel awkward or a bother because we had him. Dunoon is picturesque although a little quiet, and having a dog makes it more difficult to find places that accommodate. All in all we would definitely recommend this hotel (with or without your dog). Thanks Cristine, Colin and staff, we will return (but without Harvey)!
https://www.tripadvisor.com/ShowUserReviews-g190750-d193991-r138876348-Abbot_s_Brae_Hotel-Dunoon_Cowal_Peninsula_Argyll_and_Bute_Scotland.html
I am trying to plot the morphology of a neuron with a different color for each section, using PlotShape (really nice that it is working with matplotlib now). The code I am running is below. I tried changing the order of ax = ps.plot(pyplot) and using the color_list function as well (commented out).

Code:

from neuron import h
from matplotlib import pyplot

mosinit_hoc = '/global/homes/r/roybens/Cori/NeuronInverter/NeuronStuff/hoc_templates/L5_TTPC1_cADpyr232_1/mosinit.hoc'

def plot_shape():
    h.load_file(mosinit_hoc)
    ps = h.PlotShape(False)  # False tells h.PlotShape not to use NEURON's gui
    # ps.color_list(h.cell.apical, 3)
    for curr_sec in h.cell.apical:
        ps.color(curr_sec, 4)
    ax = ps.plot(pyplot)
    pyplot.show()

ps = plot_shape()

But it seems like none of the sections are colored (although the section list is not empty). What am I doing wrong? Thanks!
https://www.neuron.yale.edu/phpBB2/viewtopic.php?f=2&t=4314&p=18667&sid=882c01abc44b66508bada1ff21a2648a
C:\> imp

Import: Release 9.2.0.6.0 - Production on Thu Mar 29 15:07:43 2007
Username: SYSTEM
Password: password
Connected to: Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
Import file: expdat.dmp > /mention/path/of/dumpFile/includingFileName.dmp
Enter insert buffer size (minimum is 8192) 30720> (press enter to accept default)
Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
List contents of import file only (yes/no): no > press enter
Ignore create error due to object existence (yes/no): no > press enter
Import grants (yes/no): yes > press enter
Import table data (yes/no): yes > press enter
Import entire export file (yes/no): no > press enter or type no
Username: give the userName for which you want the data to be imported
Enter table(T) or partition(T:P) names. Null list means all tables for user
Enter table(T) or partition(T:P) name or . if done: press enter
. importing TST_001_V2's objects into TST_001_V2

Sorry Ashish for disturbing again... I have tried that on the DOS prompt, but now I am stuck with an Oracle version problem. When I tried to import the database I got an error like:

Export file created by EXPORT:V10.02.01 via conventional path

So can I upgrade this? Or is there any patch for this? Please help me.
http://www.orafaq.com/forum/t/150484/2/
You are to develop a simple Python program that will prompt the user to enter the name of a movie or TV show and then display a list of the actors in the movie. You will get the data by sending a request, per the published API, to IMDbPY. Although the data returned may be in a variety of formats (JSON might be simplest), please have it return XML data. You may want to experiment with other ways to retrieve the data using the API, but also use XML. XML is important enough that a little practice working with it seems important. Then pass the data to a parser such as BeautifulSoup, xml.dom.minidom or xml.etree.ElementTree. If you use BeautifulSoup, see also Parsing XML with BeautifulSoup. Due to BeautifulSoup's requirement of lxml to parse XML, xml.dom.minidom may be easiest to use. The lxml module is difficult to install in Windows. See the xml.dom.minidom documentation. IMDbPY may be installed with easy_install. Here is some starter code:

from imdb import IMDb
import xml.dom.minidom

ia = IMDb()
the_matrix = ia.get_movie('0133093')
folks = the_matrix.getAsXML('cast')
dom = xml.dom.minidom.parseString(folks)
people = dom.getElementsByTagName('person')
for peep in people:
    for node in peep.childNodes:
        if node.nodeName == u'name':
            for n in node.childNodes:
                if n.nodeType == n.TEXT_NODE:
                    print n.data
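If you go the xml.etree.ElementTree route instead, the traversal is shorter. Below is a hedged sketch: the sample XML is a hand-written stand-in whose layout merely mimics the <person><name>...</name></person> shape assumed by the starter code above, not actual IMDbPY output.

```python
import xml.etree.ElementTree as ET

# Hand-written stand-in for the XML returned by getAsXML('cast');
# the exact element layout here is an assumption for illustration.
sample = """
<cast>
  <person><name>Keanu Reeves</name></person>
  <person><name>Carrie-Anne Moss</name></person>
</cast>
"""

def cast_names(xml_text):
    # findall('.//person') matches <person> elements at any depth;
    # findtext('name') returns the text of the first <name> child.
    root = ET.fromstring(xml_text)
    return [p.findtext('name') for p in root.findall('.//person')]

print(cast_names(sample))  # → ['Keanu Reeves', 'Carrie-Anne Moss']
```

The same two-call pattern (findall, then findtext) replaces the three nested loops over childNodes in the minidom version.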
http://faculty.salina.k-state.edu/tim/NPstudy_guide/web/prog4API.html
What's the Fuss?

Knowing Your Tool

There is an old Chinese proverb that says "工欲善其事,必先利其器", which I interpret as "one needs to know one's tool well in order to do a good job". So, you will start by getting to know the basics of your tool — the Leap Motion controller.

What's a Leap Motion Controller?

It's a sensor device that detects and captures hand and finger motions in the air as input to VR/AR applications.

System Architecture

Besides the Leap Motion controller hardware, you also need to install the Leap Motion SDK software on the computer that interfaces with the controller. The Leap Motion SDK runs as a background process on your computer. It receives motion tracking data about your hands and fingers in the real world via the USB-connected Leap Motion controller. The motion tracking data are presented to your application as a series of snapshots called frames. Each frame contains the coordinates, directions, and other information about the hands and fingers detected in that frame. Each frame is represented as a Frame object in the Leap Motion APIs. The Frame object is essentially the root of Leap Motion's tracking model. Your software application can then access the Frame object via one of the two APIs provided by the Leap Motion SDK: the Native Interface and the WebSocket Interface.

Native Interface

The native interface is a dynamic library that you can use to create Leap-enabled desktop applications in a variety of languages and technologies: C#, C++, Java, Python, Objective-C, Unity, and Unreal.

WebSocket Interface

The WebSocket interface, on the other hand, allows you to create Leap-enabled web applications that work in conventional web browsers out of the box.
It provides motion tracking data in the form of JSON-formatted messages which are consumed by a JavaScript library, which in turn makes them available to your web applications as regular JavaScript objects for further processing.

Leap Motion Coordinates

The Leap Motion controller provides right-handed coordinates in units of real-world millimeters within the Leap Motion frame of reference, the origin (0, 0, 0) of which is the centre of the Leap Motion controller device.

Interaction Box

The Leap Motion controller can detect and track the movement of your hand or finger only if it is within its field of view, an invisible inverted pyramid sitting on the device. To take away much of the guesswork, Leap Motion further provides a virtual Interaction Box to help your hand or finger stay in range. An Interaction Box in Leap Motion defines a box-shaped region completely within the field of view of the Leap Motion controller. It is Leap Motion's way of assuring users that their hands or fingers will be tracked as long as they stay within this box.

Making Things Happen

Having learned the basic mechanism of a Leap Motion controller, you are now ready to get your hands dirty cooking up code that makes use of the motion tracking data received from the controller. Let's do it...

Setting the Stage

First things first, you have to set up your computer so that it can interface with the Leap Motion controller. Follow this online setup guide to download and install the desktop developer SDK for your machine. As part of the installation, you may be prompted to upgrade the display driver; just ignore it as it is not required for this exploration trip. With the SDK installed, plug the Leap Motion controller into your computer via USB. Next, create an HTML page with the following code and save it as, say, index.html.
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Leaping into Motion</title>
  <script src=""></script>
</head>
<body>
</body>
</html>

Included in the index.html page is the Leap Motion JavaScript library, as shown:

<script src=""></script>

This Leap Motion JavaScript library receives motion tracking data from the WebSocket interface and makes the data available to index.html for consumption by JavaScript code.

Getting Connected

With the Leap Motion JavaScript library included, you now have access to the Leap Motion API via the Leap global namespace. To start tracking your hands or fingers, you will call the loop() function of the Leap namespace to mediate the connection between your web application and the WebSocket interface, and to invoke a callback function that receives a Frame object at a regular interval. Typically, this interval is set at 60 Frame objects per second. Each Frame object contains motion tracking data of the hands or fingers detected by the Leap Motion controller at a particular instance. The code to implement this is as follows:

Leap.loop(function(frame){
  // Add code to process the frame of tracked data
})

You will add it to index.html as shown:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Leaping into Motion</title>
  <script src=""></script>
</head>
<body>
  <script>
    Leap.loop(function(frame){
      // Add code to process the frame of tracked data
    })
  </script>
</body>
</html>

Creating a Dashboard

In index.html, create an HTML table, furnished with some CSS, for outputting the Frame object data received from the controller.
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Leaping into Motion</title>
  <style>
    th, td { min-width: 300px; text-align: left; vertical-align: top }
  </style>
  <script src=""></script>
</head>
<body>
  <table>
    <tr>
      <th>Frame Data</th><th>Hand Data</th><th>Finger Data</th>
    </tr>
    <tr>
      <td id="frameData">Frame Data</td><td id="handData">Hand Data</td><td id="fingerData">Finger Data</td>
    </tr>
  </table>
  <script>
    Leap.loop(function(frame){
      // Add code to process the frame of tracked data
    })
  </script>
</body>
</html>

Deciphering the Frame

Each Frame object passed to the callback function of Leap.loop() is identified by an id and may contain Hand objects and Finger objects, among other things. The following code gets the id of a Frame object and the respective numbers of Hand objects and Finger objects contained in that Frame object, and displays them in the browser.

<script>
  var frameData = document.getElementById('frameData');
  Leap.loop(function(frame){
    // Get and show frame data
    frameData.innerHTML = "Frame ID: " + frame.id + "<br>" +
      "No of Hands: " + frame.hands.length + "<br>" +
      "No of Fingers: " + frame.fingers.length + "";
  })
</script>

Deciphering the Hands

Each Hand object contained in the Frame object possesses a set of properties, such as id, hand type (left or right hand), palm position, and grab strength, among other things. The following code gets the id, type, palmPosition, grabStrength, and pinchStrength of each Hand object contained in the Frame object, and displays them in the browser.
// Get and show hand data
handData.innerHTML = "";
for(var i = 0; i < frame.hands.length; i++){
  var hand = frame.hands[i];
  handData.innerHTML += "Hand ID: " + hand.id + "<br>" +
    "Hand Type: " + hand.type + "<br>" +
    "Palm Position: " + hand.palmPosition + "<br>" +
    "Grab Strength: " + hand.grabStrength + "<br>" +
    "Pinch Strength: " + hand.pinchStrength + "<br><br>";
}

The hand.palmPosition property returns a 3D vector (x, y, z) indicating the coordinates of the centre position of the palm in millimeters from the Leap origin.

Deciphering the Fingers

Similarly, each Finger object contained in the Frame object possesses a set of properties, such as the id of the Finger object, the id of the Hand object that it belongs to, the fingertip position, and the finger type, among other things. The following code gets the id, handId, tipPosition, and type of each Finger object contained in the Frame object, and displays them in the browser.

// Get and show finger data
fingerData.innerHTML = "";
for(var i = 0; i < frame.fingers.length; i++){
  var finger = frame.fingers[i];
  fingerData.innerHTML += "Finger ID: " + finger.id + "<br>" +
    "Belong to Hand ID: " + finger.handId + "<br>" +
    "Finger Tip Position: " + finger.tipPosition + "<br>" +
    "Finger Type: " + finger.type + "<br>" + "<br>";
}

The finger.tipPosition property returns a 3D vector (x, y, z) indicating the coordinates of the tip position of a finger in millimeters from the Leap origin.

Seeing is Believing

The code discussed above has been created in the Dashboard code section, in which the code is split into HTML, CSS, and JavaScript parts named dashboard.html, dashboard.css, and dashboard.js respectively for ease of cross-reference. Run the code! You should be able to see the constant updating of frame, hand, and finger data in the browser as you move your hands and fingers above the Leap Motion controller. Have fun!
Animating Hands and Fingers

You have written the code to capture and display the constant update of frame, hand, and finger data in the browser. Isn't that a piece of cake? However, trying to make sense of textual data, not to mention data that are changing constantly, is hard. It will be helpful if the data can be animated using some graphical cues. That sounds interesting, right? As a lead, let's create graphical cues for one hand and five fingertips and animate them in the browser based on their position data, i.e. the palm position of the hand and the tip positions of the respective fingers. For simplicity, the hand and each of the fingers is represented graphically by a rounded HTML <div>. Are you ready?

In index.html, add six <div>'s along with their related CSS as shown:

<div id="palm"></div>
<div class="finger"></div>
<div class="finger"></div>
<div class="finger"></div>
<div class="finger"></div>
<div class="finger"></div>

div {
  background-color: red;
  border-radius: 50px;
  position: absolute;
}
div#palm { height: 100px; width: 100px; }
div.finger { height: 20px; width: 20px; }

In the <script> section, assign these <div>'s to some JavaScript variables as shown:

var palmDisplay = document.getElementById('palm');
var fingersDisplay = document.getElementsByClassName('finger');

You are ready to write code to animate one of your hands based on its palm position. The palm position is available as 3D vector coordinates in millimeters from the Leap origin via the palmPosition property of the Hand object. Use the normalizePoint() method of Leap's InteractionBox class to convert these coordinates to normalized coordinates in the range between 0 and 1.
The code to do this is as follows:

var normalizedPalmPosition = frame.interactionBox.normalizePoint(hand.palmPosition);

To convert these normalized coordinates to your application's coordinates, simply multiply the normalized coordinate of each axis by the maximum range of the corresponding axis of the browser screen, ignoring the z axis as it is not required for this exercise. The following code snippet converts the normalized x coordinate, i.e. normalizedPalmPosition[0], to the browser's x coordinate, which becomes the x coordinate of the centre of <div id="palm"></div> by re-positioning.

var palmX = window.innerWidth * normalizedPalmPosition[0] - palmDisplay.offsetWidth / 2;
palmDisplay.style.left = palmX + "px";

Similarly, the following code snippet converts the normalized y coordinate, i.e. normalizedPalmPosition[1], to the browser's y coordinate, which becomes the y coordinate of the centre of <div id="palm"></div> by re-positioning. Note: subtracting the normalized y coordinate from one is needed to convert the upwards-pointing y axis of Leap's coordinate system to the downwards-pointing y axis of the browser's coordinate system.

var palmY = window.innerHeight * (1 - normalizedPalmPosition[1]) - palmDisplay.offsetHeight / 2;
palmDisplay.style.top = palmY + "px";

Where do you put these five lines of code? Add them to the for loop for the Hand object. The code discussed so far has been created in the Animating Hands and Fingers code section, in which the code is split into HTML, CSS, and JavaScript parts named animation.html, animation.css, and animation.js respectively for ease of cross-reference. Run the code! See that the <div id="palm"></div> moves along with one of your hands within the region of the Interaction Box of the Leap Motion controller. Notice that the <div id="palm"></div> can sneak out of your screen.
To confine its movement within the bounds of your screen, you can either write additional code or simply add a true argument to the normalizePoint() method as shown:

var normalizedPalmPosition = frame.interactionBox.normalizePoint(hand.palmPosition, true);

Wait! What about the code to animate the fingers? The answer is that you have just learned it for the hand; how different can it be for the fingers? I shall leave it as your homework. If you have done your homework well, you should be able to see the "fruit of your labour" as shown in this animated gif. Do not stop here; enhance the code to animate two hands and ten fingers.

Going the Extra Mile

Having written the code to animate your hand and finger movements in the browser, the next natural thing to look forward to is being able to use these animated hands or fingers to interact with your web applications. One of the most common interactions is clicking a web element, e.g. a button, to trigger some event using a mouse. Can this be done using a finger in the air in place of the mouse? You bet!

Clicking with Your Finger in the Air

Imagine there is a virtual vertical touch plane above the Interaction Box. The distance between your fingertip and this touch plane can be obtained via the touchDistance property of the Finger object and is available in the range between -1 and 1. As shown in the diagram above, a value between zero and one indicates that the fingertip is in the hovering zone, a value of zero indicates that the fingertip has just touched the touch plane, and a value between zero and minus one indicates that the fingertip is in the touching zone. You can then use the value returned by the touchDistance property of the Finger object in your code to emulate the state that a finger is in, i.e. hovering or touching. However, it is entirely up to you to decide on the threshold value of the touchDistance property that demarcates the hovering state from the touching state.
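Since the browser-coordinate conversion and the touch-zone decision are plain arithmetic, they can be prototyped outside the browser. Below is a small sketch (in Python for brevity; the screen size, element size, and threshold values are assumed for illustration and are not part of the Leap API):

```python
def to_screen(normalized, screen_w, screen_h, elem_w, elem_h):
    # Map a normalized (0..1) InteractionBox coordinate to browser-style
    # pixel coordinates, flipping the y axis (Leap's y points up, the
    # browser's y points down) and centering the element on the point.
    x = screen_w * normalized[0] - elem_w / 2
    y = screen_h * (1 - normalized[1]) - elem_h / 2
    return x, y

def touch_state(touch_distance, threshold=0.0):
    # touch_distance runs from 1 (far away) down to -1 (deep in the
    # touching zone); values below the chosen threshold count as touching.
    return "touching" if touch_distance < threshold else "hovering"

print(to_screen((0.5, 0.5, 0.5), 1280, 720, 100, 100))  # → (590.0, 310.0)
print(touch_state(-0.2))  # → touching
```

The threshold parameter makes the hovering/touching cut-off explicit rather than hard-coding zero.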
In other words, it isn't set in stone that the threshold value always has to be zero. Run it and try out the buttons using your mouse! Your mission is to spare the mouse and use your finger in the air instead to click the respective buttons. Note: The Leap Motion JavaScript library has already been added to this application.

To animate your fingertip on the screen, add a rounded HTML <div> with its related CSS to the Open Sesame code section as shown:

<div id="finger"></div>

div#finger {
  height: 10px;
  width: 10px;
  position: absolute;
  background-color: red;
  border-radius: 5px
}

Next, add the following JavaScript code to animate one of your fingertips on the screen.

var fingerDisplay = document.getElementById("finger");
Leap.loop(function(frame) {
  if (frame.fingers.length > 0) {
    var finger = frame.fingers[0];
    var normalizedFingerPosition = frame.interactionBox.normalizePoint(finger.tipPosition);
    var appX = window.innerWidth * normalizedFingerPosition[0] - fingerDisplay.offsetWidth / 2;
    fingerDisplay.style.left = appX + "px";
    var appY = window.innerHeight * (1 - normalizedFingerPosition[1]) - fingerDisplay.offsetHeight / 2;
    fingerDisplay.style.top = appY + "px";
    // add code to emulate left mouse click
  }
});

Run it and you should see a red dot moving along with one of your fingertips. You have just repeated what you learned in the earlier section. You are now ready to add the code to emulate a left mouse click. Follow me:

- Get the value of the touchDistance property of the finger:

var touchDistance = finger.touchDistance;

- Assume an emulated left mouse click occurs if the touchDistance is less than zero:

if (touchDistance < 0) {
  // code to handle click event
}

- When an emulated left mouse click is detected, you have to identify the HTML element (button or otherwise) beneath the red dot.
fingerDisplay.style.display = "none";
var touchedElement = document.elementFromPoint(appX, appY);
fingerDisplay.style.display = "block";

- If the HTML element beneath is the Open button, activate the click event for btnOpen:

if (touchedElement == btnOpen) {
  btnOpen.click();
}

With the code that you have added, you can now emulate a left mouse click on the Open button using your finger in the air. Check out the action in the Open Sesame 2 code section. The code is far from complete. The missing pieces are as follows:

- The code for activating the click event for btnClose if the HTML element beneath the red dot is the Close button.
- As it is now, there is no way of telling whether your finger enters or leaves the touching zone. The solution is to introduce a different visual cue for each event, such as changing the red dot to blue when a finger enters the touching zone and vice versa when it leaves.
- Last but not least, you will soon notice that the door opens or closes excessively instead of by a fixed amount at each click, owing to the constant firing of the button event while the finger remains in the touching zone at each frame update. To overcome it manually, your finger has to enter and leave the touching zone in quick succession. This is neither user friendly nor palatable. One of the solutions I can offer is to use a flag (true or false) to prevent the firing of the same event from subsequent frames if the finger has not left the touching zone after entering it. In other words, there should be only one firing of the click event for each cycle of entering and leaving the touching zone. Of course, the actual solution hinges upon your application requirements.

I shall leave them as your homework. Go for it!

Dragging with Your Finger in the Air

Having learned the code to emulate a left mouse click using your finger in the air, why not extend it to emulate a mouse drag? Usually, you drag an object on the screen by moving the mouse while holding down its left button, right?
So, a mouse drag is effected through a combination of click, hold, and move actions. Translating this into Leap, a drag occurs when your finger enters the touching zone, remains there, and moves. Got it? Let's add the code to emulate a mouse drag to the partially completed code in the Drag Me Along! code section. Since drag is an extension of click, start by copying the Leap.loop() part of the JavaScript from the Open Sesame 2 code section to the Drag Me Along! code section. Next, zoom in to this part of the code:

if (touchedElement == btnOpen) {
  btnOpen.click();
}

This is the only code that you need to modify in order to implement the drag. However, you can only know how to modify it after finding out the answers to these two questions:

- What is the touchedElement this time?
- How do you make this touchedElement move along with the finger?

I have already explained and demonstrated the code for similar implementations earlier, so I shall not repeat it here.

Tips

Interacting with a web application via a Leap Motion controller is inherently a virtual experience. Since it is done without the feel and sensation of a real touch, users can neither control the pace of interactions nor know the state of their interactions with the web application. To alleviate these problems and make your web application more usable, consider incorporating the following measures in your implementation:

- Always provide feedback to the users on the status and progress of their interactions in the form of visual cues on the screen.
- Regulate the response rate of your code vis-à-vis the user's pace of interaction.

Crossing the Finishing Line...
In this article, you have learned the basic mechanism of a Leap Motion controller, gotten started writing code to implement motion tracking of hands and fingers as well as clicking and dragging of web elements in the browser using your finger in the air via the Leap Motion controller, and picked up some usability tips on using the Leap Motion controller with your web application. Give yourself a pat on the back!

Every Ending is Another Beginning

As the saying goes, "Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime." Now that you have been empowered with the fishing skill, it's up to you to use it well to catch a bigger fish — rotating a wheel with your hand in the air via the Leap Motion controller, as shown in this animated gif. The end of a journey is the beginning of another. I hope you find this one fruitful.

The article Leaping into Motion appeared first on Peter Leow's Code Blog.
https://tech.io/playgrounds/5059/leaping-into-motion
14.6. Manipulating geospatial data with Cartopy

In this recipe, we will see how to load and display geographical data in the Shapefile format. Specifically, we will use data from Natural Earth () to display the countries of Africa, color coded by their population and Gross Domestic Product (GDP). This type of graph is called a choropleth map. Shapefile () is a popular geospatial vector data format for GIS software. It can be read by cartopy, a GIS package in Python.

Getting ready

You need cartopy, available at. You can install it with conda install -c conda-forge cartopy.

How to do it...

1. Let's import the packages:

import io
import requests
import zipfile
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.collections as col
from matplotlib.colors import Normalize
import cartopy.crs as ccrs
from cartopy.feature import ShapelyFeature
import cartopy.io.shapereader as shpreader
%matplotlib inline

2. We download and load the Shapefile that contains geometric and administrative information about all countries in the world (it had been obtained from Natural Earth's website at):

url = ('' 'cookbook-2nd-data/blob/master/' 'africa.zip?raw=true')
r = io.BytesIO(requests.get(url).content)
zipfile.ZipFile(r).extractall('data')
countries = shpreader.Reader('data/ne_10m_admin_0_countries.shp')

3. We keep the African countries:

africa = [c for c in countries.records() if c.attributes['CONTINENT'] == 'Africa']

4. Let's write a function that draws the borders of Africa:

crs = ccrs.PlateCarree()
extent = [-23.03, 55.20, -37.72, 40.58]

def draw_africa(ax):
    ax.set_extent(extent)
    ax.coastlines()

fig, ax = plt.subplots(1, 1, figsize=(6, 8), subplot_kw=dict(projection=crs))
draw_africa(ax)

5. Now, we write a function that displays the countries of Africa with a color that depends on a specific attribute, like the population or GDP:

def choropleth(ax, attr, cmap_name):
    # We need to normalize the values before we can
    # use the colormap.
    values = [c.attributes[attr] for c in africa]
    norm = Normalize(vmin=min(values), vmax=max(values))
    cmap = plt.cm.get_cmap(cmap_name)
    for c in africa:
        v = c.attributes[attr]
        sp = ShapelyFeature(c.geometry, crs, edgecolor='k', facecolor=cmap(norm(v)))
        ax.add_feature(sp)

6. Finally, we display two choropleth maps with the population and GDP of all African countries:

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 16), subplot_kw=dict(projection=crs))
draw_africa(ax1)
choropleth(ax1, 'POP_EST', 'Reds')
ax1.set_title('Population')
draw_africa(ax2)
choropleth(ax2, 'GDP_MD_EST', 'Blues')
ax2.set_title('GDP')

There's more...

The geoplot package, available at, provides high-level tools to draw choropleth maps and other geospatial figures.

See also

- Creating a route planner for a road network
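As an aside on the normalization step: Normalize simply rescales the attribute values linearly onto [0, 1] before the colormap turns them into colors. A minimal sketch of that mapping (plain Python, no matplotlib required; the population numbers are made up for illustration):

```python
def normalize(v, vmin, vmax):
    # Linear rescaling as done by matplotlib.colors.Normalize:
    # vmin maps to 0.0, vmax maps to 1.0, values in between proportionally.
    return (v - vmin) / (vmax - vmin)

# Made-up population values for three hypothetical countries.
pops = [1_000_000, 25_000_000, 100_000_000]
print([round(normalize(p, min(pops), max(pops)), 3) for p in pops])
# → [0.0, 0.242, 1.0]
```

The normalized value is then passed to the colormap, e.g. cmap(norm(v)) in step 5, which returns an RGBA color.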
https://ipython-books.github.io/146-manipulating-geospatial-data-with-cartopy/
Platforms like Heroku give you the freedom to focus on building great applications rather than getting lost setting up and maintaining infrastructure. One of the many great features of working with it is Heroku logs, which enable monitoring your stack and troubleshooting errors. They help speed up the process when things go wrong. In this Heroku tutorial, we'll uncover best practices for making the most of Heroku logs. Let's begin with a survey of Heroku's basic anatomy to provide a clear understanding of the terms and mechanics of Heroku's logging functionality. Feel free to skip to the logging part if you're already familiar.

Heroku Logs Cheat Sheet

Here's a summary of CLI commands that are relevant for Heroku logging, for your reference.

Heroku Basic Architecture

Applications deployed on Heroku live in lightweight Linux containers called Dynos. Dynos can range from holding simple web apps to complex enterprise systems. The scalability of these containers, both vertically and horizontally, is one of the flexible aspects of Heroku that developers leverage. They include the following types:

- Web Dynos are web processes that receive HTTP traffic from routers. (We will demonstrate a Web Dyno in the "Hands On" Python app sample we create later in this resource.)
- Worker Dynos may be any non-web process type, used for background processes, queueing, and cron jobs.
- One-off Dynos are ad-hoc or temporary dynos which can run either attached to or detached from local machines. One-off Dynos are typically used for DB migrations, console sessions, background jobs, and various other administrative tasks, such as processes started by the Heroku Scheduler.

Heroku Logging Basics

Most PaaS systems provide some form of logging. However, Heroku provides some unique features which set it apart. One such unique feature is the Logplex tool, which collects, routes, and collates all log streams from all running processes into a single channel that can be directly observed.
Logs can be sent through a Drain to a third-party logging add-on which specializes in log analytics. For developers, one of the most important tools in Heroku is the command-line interface (CLI). After Heroku is installed locally, developers use the CLI to do everything, including defining Heroku logs, filters, and targets, and querying logs. We will explore the Heroku logging CLI in detail throughout this resource.

Heroku View Logs

The most commonly used CLI command to retrieve logs is:

$ heroku logs

Let's look at the anatomy of a Heroku log. First, enter the following CLI command to display 200 logs:

$ heroku logs -n 200

Heroku would show 100 lines by default without the -n parameter above. Using the -n, or --num, parameter, we can display up to 1500 lines from the log. Here is an example of a typical log entry:

2020-01-02T15:13:02.723498+00:00 heroku[router]: at=info method=GET path="/posts" host=myapp.herokuapp.com fwd="178.68.87.34" dyno=web.1 connect=1ms service=18ms status=200 bytes=975

In the above entry, we can see the following information:

- Timestamp – The precise time when the Dyno generated the log entry, according to the standard RFC5424 format. The default timezone is UTC (see below for how to change the default timezone).
- Source – Web dynos, background workers, and crons generate log entries shown as app. HTTP routers and dyno managers are shown as heroku.
- Dyno – In this example, the Heroku HTTP router is shown as router, and the originating dyno is web.1.
- Message – Contains the content, in this case the status, which is equal to 200, and the byte length. In practice, the message contents typically require smart analytics apps to assist with interpretation.

View Heroku Logs for a Specific Dyno

The filter is another important CLI parameter.
For example, by using the following filter we can choose to display only the log entries originating from a specific Dyno:

$ heroku logs --dyno web.1

View Heroku App Logs

$ heroku logs --source app

View Heroku API Logs

$ heroku logs --source app --dyno api

View Heroku System Logs

$ heroku logs --source heroku

Heroku Log Timezone

Heroku uses the UTC timezone for its logs by default. You can change it, although the recommended approach is to convert to the client's local timezone when displaying the data, for example with a library like Luxon.

To check the current timezone:

$ heroku config:get TZ

To change the timezone for Heroku logs:

$ heroku config:add TZ="America/New_York"

Here's a full list of supported timezone formats.

Log Severity Levels

To help monitor and troubleshoot errors with Heroku faster, let's get familiar with Heroku log levels. Log data can be quantified by level of urgency. Here is the standard set of levels used in Heroku logs, with examples of events for which Heroku Logplex generates a log entry:

Types of Heroku Logs

The Heroku platform maintains four categories of logs. For example, log entries generated by a dependency-related error thrown when running an app are separated from messages about the deployment of new code. Here are summaries of the four Heroku log categories:

- App logs – Entries generated when an app runs and throws an exception, for example, for a missing dependency such as an inaccessible library. The CLI log filter is --source app.
- API logs – Developer administrative actions (such as deploying new code) trigger entries to the API log. Scaling processes and toggling maintenance mode are other examples in this category. These logs can be used to set progressive delays in retrying an API call when one fails. API logs can also be used to catch authentication failures, and issues with push requests.
The CLI filter is --source app --dyno api.

- System logs – Contain information about the hardware and system processes, in other words, infrastructure. When the Heroku platform restarts an app because of an infrastructure issue (e.g. a failed HTTP request), a system log entry will be generated. The CLI filter to query system log entries is --source heroku.
- Add-on logs – Generated by add-ons to the Heroku platform, like Redis Cloud, MongoDB, SendGrid, and MemCachier, each of which produces its own logs.

Heroku Build Logs

Heroku build logs are a special log type contained in the file build.logs, generated by both successful and failed builds. These logs are accessed in your app's activity feed on the Heroku dashboard, or the build logs can be configured with a tool like Coralogix to benchmark errors for each build version. On the dashboard, click "View build log" to see build-related events in the activity feed.

A running app can also write an entry directly to a log. Each coding language will have its own method for writing to Heroku logs. For example, see the Heroku Log Tips and Traps for Ruby further along in this article. Log entries do not live forever, and as we will see later on, the retention time of log entries is determined by log type. This aspect of logging is best managed through the use of a log analytics app which is machine-learning (ML) capable.

Heroku Release Logs

Release logs show the status of each release of an app. This includes failed releases that are pending because of a release command which has not returned a status yet. In the following release log entry, version 45 of an app deployment failed:

v45 Deploy ada5527 release command failed

Use curl to programmatically check the status of a release. Curl the Platform API for specific releases or to list all releases. Release output can also be retrieved programmatically by making a GET request on the release's URL. The output is available under the output_stream_url attribute.
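As a sketch of that programmatic check in Python rather than curl (the app name and token below are placeholders; the endpoint path and Accept header follow the Heroku Platform API's published v3 conventions):

```python
import json
import urllib.request

API_BASE = "https://api.heroku.com"

def build_release_request(app_name, api_token):
    """Build a Platform API request for an app's release list."""
    return urllib.request.Request(
        f"{API_BASE}/apps/{app_name}/releases",
        headers={
            # Version 3 of the Platform API must be requested explicitly
            "Accept": "application/vnd.heroku+json; version=3",
            "Authorization": f"Bearer {api_token}",
        },
    )

def failed_releases(app_name, api_token):
    """Return only the releases whose status is 'failed'."""
    with urllib.request.urlopen(build_release_request(app_name, api_token)) as resp:
        releases = json.load(resp)
    return [r for r in releases if r.get("status") == "failed"]

# Example (requires a real app name and a valid API token):
# print(failed_releases("appName", "your-heroku-api-token"))
```

The same request can be scripted into a CI pipeline to halt a rollout when the latest release reports a failed status.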
Heroku Router Logs

Router logs contain entries about HTTP routing in Heroku's Common Runtime. These represent the entry and exit points for web apps and services running in Heroku Dynos. The runtime manages dynos in a multi-tenant network. Dynos in this network receive connections from the routing layer only. A typical Heroku router log entry looks like this:

2020-08-19T05:24:01.621068+00:00 heroku[router]: at=info method=GET path="/db" host=quiescent-seacaves-75347.herokuapp.com request_id=777528e0-621c-4b6e-8eef-74caa34c1713 fwd="104.163.156.140" dyno=web.1 connect=0ms service=19ms status=301 bytes=786 protocol=https

In the example above, following the timestamp, we see a message beginning with one of the following log levels: at=info, at=warning, or at=error. After the log level, the entry contains additional fields from the following table which describe the issue being logged:

Events which trigger log entries

Ideally, a Heroku log should contain an entry for every useful event in the behavior of an application. When an app is deployed, and while it is running in production, there are many types of events which trigger log entries:

- Authentication, Authorization, and Access: These events include things such as successful and failed authentication and authorizations, system access, data access, and application access.
- Changes: These events include changes to systems or applications, changes to data (creation and destruction), and application installation and changes.
- Availability: Availability events include startup and shutdown of systems and applications, builds and releases, faults and errors that affect application availability, and backup successes and failures.
- Resources: Resource issues to log include exhausted resources, exceeded capacities, and connectivity issues.
- Threats: Common threats worth logging include invalid inputs and security issues known to affect the application.
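To act on such events automatically, entries in the router log format shown earlier can be split into their key=value fields and checked by severity. Here is a minimal sketch; the parsing rules are assumptions based on the sample entries above, not an official Heroku parser:

```python
import re

# key=value pairs; values may be quoted, e.g. path="/db"
PAIR_RE = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_router_entry(line):
    """Split a router log line into timestamp, source[channel], and message fields."""
    timestamp, origin, message = line.split(" ", 2)
    # origin looks like 'heroku[router]:' -> source 'heroku', channel 'router'
    source, channel = origin.rstrip(":").rstrip("]").split("[")
    fields = {k: v.strip('"') for k, v in PAIR_RE.findall(message)}
    return {"timestamp": timestamp, "source": source, "channel": channel,
            "fields": fields}

entry = ('2020-08-19T05:24:01.621068+00:00 heroku[router]: at=error code=H12 '
         'desc="Request timeout" method=GET path="/db" '
         'host=example.herokuapp.com dyno=web.1 service=30000ms status=503 bytes=0')

parsed = parse_router_entry(entry)
if parsed["fields"].get("at") == "error":
    print("availability event:", parsed["fields"].get("desc"))
```

A filter like this can feed an alerting rule, for example counting at=error entries per minute before escalating.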
Log Retention Period

The retention period length we set is important because log data can quickly get out of control. Retaining unnecessary log data can add overhead to analysis; however, discarding log data too early may reduce the opportunity for insights. One useful way of determining which logs should be kept, and for how long, is to ensure we have accurately established the correct Heroku log levels, and to set different retention periods based on specific criteria like the log level, system, and subsystem. This can be accomplished programmatically by yourself or with a 3rd-party tool like the Coralogix usage optimizer.

Managing Sensitive Log Data

Investigation of recent security breaches at giant eCommerce enterprises like Uber and Aeroflot revealed, surprisingly, that the source of the web apps' vulnerability lay in poorly configured and inadequately monitored log streams. Many recent cases involving customer credit card loss and proprietary source code exposure occurred because developers were unaware that their log streams contained OAuth credentials, API secret keys, authentication tokens, and a variety of other confidential data. Cloud platforms generate logs whose default output contains authentication credentials, and log files may not be adequately secured. In many recent security breaches, unauthorized users gained access by reading log entries which contained authentication credentials. Obscuring sensitive data should be done prior to shipping logs, but some tools like the Coralogix parser are capable of removing specific data from logs after the logs have been shipped.

Runtime Metrics Logs

To monitor load and memory usage for apps running in Dynos, Heroku Labs offers a feature called "log-runtime-metrics." The CLI command $ heroku logs --tail can be used to view statistics about memory and swap use, as well as load averages, all of which flow into the app's log stream.
Example runtime metric logs:

Learn more about how to use runtime metrics in the Heroku documentation.

Heroku Log Drains: Centralizing Log Data

In order to understand Drains in Heroku logs, we first need to clarify how Heroku Logplex works. Logplex aggregates log data from all sources in Heroku (including dynos, routers, and runtimes) and routes the log data to predefined endpoints. Logplex receives Syslog messages over TCP and HTTPS streams from the nodes in a cluster, after which log entries are sent to Redis buffers (the most recent 1,500 log entries are retained by default). Heroku then distributes the logs, either for display with $ heroku logs --tail or, for our purposes, for forwarding to Drains.

A Heroku Drain is a buffer on each Logplex node. A Drain collects log data from a Logplex node and then forwards it to user-defined endpoints. For our purposes, Heroku Drains connect to 3rd-party log analytics apps for intelligent monitoring of log data.

Two Types of Heroku Log Drains

The two types of Heroku Drains provide log output to different endpoints:

- Syslog Drains – forward Heroku logs to an external Syslog server
- HTTPS Drains – write original log-processing code to forward logs to a web service

How to set up an Heroku Log Drain

Logplex facilitates collecting logs from apps for forwarding to log archives, to search and query, and also to log analytics add-on apps. To manage how application logs are processed, we can add Drains of the two types mentioned earlier: Syslog drains and HTTPS drains.

1 – Install a log analytics app, preferably one with machine learning analytics capability, and obtain the authorization token to access that app.

2 – Configure a Syslog or HTTPS Heroku Log Drain to send data from an app running in a Heroku Dyno to the add-on analytics app (appName).
Here is the CLI command to start a TLS Syslog drain:

$ heroku drains:add syslog+tls://logs.this-example.com:012 -a appName

And for the same appName, here is the plain text Syslog drain:

$ heroku drains:add syslog://logs.this-example.com -a appName

To configure an HTTPS drain, use:

$ heroku drains:add https://user:password@logs.this-example.com/logs -a appName

3 – Monitor the performance of the app running in the Dyno with the dashboard of visualizations provided by the add-on analytics app. Here is what it looks like while monitoring the live tail of an app with the Coralogix add-on:

Heroku Tail Logs

The Heroku logs --tail option is the real-time tail parameter. Its purpose is to display current log entries while keeping the session alive so that additional entries stream in while the app continues to run in production. This is useful for testing live apps in their working environments.

There are several subtle points to real-time log monitoring. Let's look at some of the fundamentals before we tackle the actual usage of --tail.

Heroku handles logs as time-ordered event streams. If an app is spread across more than one Dyno, then the full log is not contained in *.log files, because each log file only contains a view per container. Because of this, all log files must be aggregated to create a complete log for analysis. Moreover, Heroku's filesystem is ephemeral: whenever a Dyno restarts, all prior logs are lost. Running heroku console or heroku run bash on the Cedar stack does not connect to a running Dyno, but instead creates a new one for the bash command, which is why this is called a "one-off process." So, the log files from other Dynos don't include the HTTP processes for this newly created Dyno.
With this in mind, to view a real-time stream from a running app, use the -t (tail) parameter:

$ heroku logs -t
2020-06-16T15:13:46-07:00 app[web.1]: Processing PostController#list (for 208.39.138.12 at 2020-06-16 15:13:46) [GET]
2020-06-16T15:13:46-07:00 app[web.1]: Rendering template layouts/application
2020-06-16T15:13:46-07:00 heroku[router]: GET myapp.heroku.com/posts queue=0 wait=1ms service=2ms bytes=1975
2020-06-16T15:13:47-07:00 app[worker.12]: 23 jobs processed at 16.6761 j/s, 0 failed
...

In the above log entries, we are observing the behavior of a running app. This is useful for live monitoring. To store the logs for longer periods, and for triggers, alerts, and analysis, we can create a drain to an add-on log analytics app like Coralogix.

Heroku logging with specific languages

Each language has its own built-in logging functionality which can write to Heroku logs. Third-party logging apps are specifically designed to extend built-in logging functions and to compensate for their inadequacies. Ruby was the original language supported by Heroku. As a result, many of the well-known developer shortcuts for making best use of Heroku logs arose from developing and deploying Ruby apps. For example, it's possible for a running app to write entries to logs. In Ruby, this is done with puts:

puts "User clicked twice before callback result, logs!"

The same log entry would be written with Java like this:

System.out.println("User clicked twice before callback result");

In the following sections, we'll explore tips for working with popular programming languages and Heroku.

Heroku Logging with Ruby

Ruby / Rails was the first coding language supported by Heroku, and Rails works without trouble. Nevertheless, there are measures which further optimize Rails app logging with Heroku. Here are several tips which may not be obvious to developers who are just beginning to deploy Rails to Heroku.
- Configure Rails apps to connect to Postgres
- Configure logs to stream to STDOUT
- Enable serving assets for the app in production

Writing to STDOUT

Heroku logs are data streams which flow to STDOUT in Rails. To enable STDOUT logging, add the rails_12factor gem. This measure will also configure the application to serve assets while in production. Add this to the Gemfile:

gem 'rails_12factor', group: :production

In order to write logs from code, as mentioned earlier, use the following command:

puts "User clicked twice before callback result, logs!"

This will send the log entry to STDOUT. Omission of this configuration step will result in a warning when deploying the app, and assets and logs will not function.

Ruby Logging Libraries

Lograge for Rails offers sophisticated log-interpreting functions, including:

- Request and URL endpoints: for GET, POST, or PUT requests
- Request status: the HTTP status codes generated for a completed request and their elapsed response time
- Controller and action: identifies which controller and action the application router dispatched a request to
- Templates and partials: generates log entries about the files required to render web page views for a URL endpoint

Heroku Logging with Node.js

Important log attributes to define before testing a Node.js service on Heroku include:

- Event timestamps
- Log format readable to humans and machines
- Log path to standard output files
- Log priority levels to dynamically select log output

The following are common issues and tips for logging with Heroku and Node.js.

Mismatched Node Versions

A commonly overlooked mistake when deploying Node.js on Heroku can occur from mismatched Node versions. This issue will appear in the Heroku build log. The Node.js version in the production environment should match that of the development environment.
Here is the Heroku CLI command to verify local versions:

$ node --version
$ npm --version

We can compare the results with the engines version in package.json by looking at the Heroku Build Log, which will look like this:

If the versions don't match, be sure to specify the correct version in package.json. In this way you can use Heroku logs to identify build issues when deploying Node.js apps.

Async

Async and callbacks are central to the functionality of Node.js apps. When deploying complex apps, we need tools that go beyond console.log. One obscure detail is that when the Heroku log destination is a local device or file, the console acts synchronously. This prevents messages from getting lost when an app terminates unexpectedly. However, the console acts asynchronously when the log channel is a pipe, which avoids blocking for long periods while output is written.

Node.js Logging Libraries

Many developers will naturally gravitate toward an async logging library like Winston, Morgan, or Bunyan. The quintessential feature of Winston is its support for multiple transports which can be configured at various logging levels. However, problems with Winston include a lack of important details in its default log formatting. An example of missing details is log entry timestamps, which must be added via config. The lack of machine names and process IDs makes it difficult to apply the next layer of third-party smart log analytics apps. However, Heroku add-ons like Coralogix can easily work with Winston and Bunyan.

Heroku Logging with Python

When deploying a Python web app, testing and debugging can be frustrating if log entries are difficult to interpret. Using print() and sys.stdout.write() may not generate meaningful log entries when deploying to the cloud and using the CLI command $ heroku logs to display log entries. Moreover, it is challenging to debug Python runtime errors and exceptions, because the origin of the HTTP request error may not be apparent.
So, how can we write log entries from Python to resolve this issue? The underlying source of this general problem is that while stdout is buffered, stderr is not buffered. One solution is to add sys.stdout.flush() following print statements. Another tip to ensure the HTTP error origin is captured in the log is to verify that the right IP/PORT is monitored. If so, HTTP access entries from GET and index.html should appear in the Heroku log.

Configuring a web framework to run in debug mode will make log entries verbose. Stacktraces should display in the browser's developer console. The setting in Flask to achieve this outcome is:

app.config['DEBUG'] = True

or

app.run(..., debug=True)

Finally, when configuring a Procfile to launch the Python interpreter, use the '-u' option to avoid stdout buffering, in the following way:

python -u script.py

If using Django, use:

import sys
print("hello complicated world!")
sys.stdout.flush()

As mentioned earlier, the Python logging library itself is the standard for Python coding, as opposed to other third-party offerings. The Python developer community provides limitless blogs on tips and traps of Python logging with Heroku.

Logging Libraries for Python

The built-in Python logging functionality is the standard, while third-party offerings such as Loguru (by Delgan) are really intended to simplify the use of built-in Python logging. Here is a logging example with Python using the Loguru library. Loguru uses a global "anonymous" logger object. Import loguru as shown in the code sample below.
Then, use bind() with a name to identify log entries originating from a specific logger, in this way:

from loguru import logger

def sink(message):
    # bind() stores custom values under record["extra"]
    record = message.record
    if record["extra"].get("name") == "your_specific_logger":
        print("Log comes from your specific logger")

logger.add(sink)  # register the sink so it receives log entries
logger = logger.bind(name="your_specific_logger")
logger.info("An entry to write out to the log.")

Heroku Logging with Golang

Go's built-in logging library is called "log," and it writes to stderr by default. For simple error logging, this is the easiest option. Here's a division-by-zero error entry from the built-in "log":

2020/03/28 11:48:00 can't divide by zero

Each programming language supported by Heroku contains nuances, and Golang is no exception. When logging with Golang, in order to avoid a major pitfall while sending log output from a Golang application to Heroku runtime logs, we must be clear about the difference between fmt.Errorf and fmt.Printf. Writing to standard out (stdout) or standard error (stderr) in Go sends an entry to the log, but fmt.Errorf returns an error value instead of writing to standard out, whereas fmt.Printf writes to standard out. Be aware also that Go's built-in log package functions write to stderr by default, and that they add info such as timestamps (and, with the right flags, filenames).

Logging Libraries for Golang

The built-in Golang logging library is called "log." It includes a default logger that writes to standard error. By default, it adds the timestamp. The built-in logger may suffice for quick debugging when rich logs are not required. The "Hello world" of logging is a division-by-zero error, and this is the realm of Golang's built-in logger. For more sophisticated logging there are:

logrus is a library that writes log entries as JSON automatically and inserts typical fields, plus custom fields defined by configuration. See more at Logrus.

glog is specially designed for managing high-volume logs, with flags for limiting volume to configured issues and events.
See more at glog.

To explore Golang tracing, OpenTracing is a library for building a monitoring platform to perform distributed tracing for Golang applications.

Heroku Logging with Java

Let's look at an example app using a REST API with JAX-RS. The purpose of this example is to demonstrate how to write log entries that identify which client request generated the log entry. We accomplish this by using MDCFilter.java and importing MDC with:

import org.slf4j.MDC;

And here is an example use:

@Diagnostic
public class MDCFilter implements ContainerRequestFilter, ContainerResponseFilter {

    private static final String CLIENT_ID = "client-id";

    @Context
    protected HttpServletRequest r;

    @Override
    public void filter(ContainerRequestContext req) throws IOException {
        Optional clientId = Optional.fromNullable(r.getHeader("X-Forwarded-For"));
        MDC.put(CLIENT_ID, clientId.or(defaultClientId()));
    }

    @Override
    public void filter(ContainerRequestContext req, ContainerResponseContext resp) throws IOException {
        MDC.remove(CLIENT_ID);
    }

    private String defaultClientId() {
        return "Direct:" + r.getRemoteAddr();
    }
}

As we will discuss Log4j in our section on Java logging libraries, here is a conversion pattern that includes the client-id value from the MDC:

log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %X{client-id} %m%n

Java Logging Libraries

SLF4J is not actually a library for Java logging, but instead binds to other Java logging libraries at deployment time. The "F" in the name is for "Facade," the implication being that SLF4J makes it easy to import a library of choice at deployment time. Log4j is one such Java logging library. Below are some interesting capabilities of Log4j.
- Set logging behavior at runtime
- Change log format by extending the Layout class
- Thread-safe log implementation
- Appender interface exposes the target of log output
- Capability to import and use other logging facilities

Heroku Logging with PHP

PHP writes and routes log entries in a variety of ways, depending on the configuration of error_log in the php.ini file. To write log output to a file, name the file in error_log. To send the log output to Syslog, simply set error_log to syslog, so that log output will go to the OS logger. In Linux, the newer Rsyslog uses the Syslog protocol, while on Windows it is the Event Log.

The default behavior, in the event that error_log is not set, is the creation of logs with the Server API (SAPI). The SAPI will depend on which PaaS is implemented. For example, a LAMP platform will use the Apache SAPI, and log output will stream to the Apache error logs. This example php.ini enables the maximum log output to file:

; Log all errors
error_reporting = E_ALL
; Don't display any errors in the browser
display_errors = Off
; Write all logs to this file:
error_log = my_file.log

Logging Libraries for PHP

Although the popular frameworks for PHP like Laravel and Symfony have logging solutions built in, there are several logging libraries that are noteworthy. Each has a set of advantages and disadvantages. Let's have a look at the important ones:

- Log4PHP is an Apache Foundation library for PHP logging. It features functionality including logging to multiple destinations and log formatting. With the configuration file, multiple handlers can be set up; these are called "appenders." One disadvantage is that Log4PHP isn't PSR-3 compliant. Another is that it does not support namespacing for classes, so it's difficult to integrate into large apps.
- Monolog is a PSR-3 compliant logging library for PHP.
With integration components for most popular frameworks including Symfony and Laravel, Monolog is the most comprehensive logging solution for PHP. Monolog supports logging to target handlers for the browser console, databases, and messaging apps like Slack. Monolog also integrates with log analytics apps like Coralogix and Loggly.

- Analog is a simplistic logging solution compared to Monolog and does not have features like log formatters. Log handlers for email, database, and files are configured via static access to the Analog class of objects.

Heroku Logging with Scala

Scala Logging is essentially a convenient Scala wrapper around SLF4J, the Java logging facade. SLF4J in turn runs on top of a variety of logging frameworks, such as Log4j or Logback, which we add to our application dependencies.

1) The first step to use SLF4J with Scala is to add the dependency for logging. Open build.sbt in an editor and add scala-logging by including this:

name := "Scala-Logging-101"
version := "0.1"
scalaVersion := "2.12.7"
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.9.0"

2) Now, download the jar files for Logback and include them in the runtime classpath.

3) Next, create a new directory in the project folder and call it "libs." Add libs to the project classpath.

4) Select "Open Module Settings" with a right-click on the project name. Now, select the "Dependencies" tab and add a new directory.

5) Select "Add jars or directories" and IDEA will open the chooser to select the indicated folder.

6) Download the Logback jars and open the archive.

7) Copy logback-classic-(version).jar and logback-core-(version).jar to the libs directory.

8) Now we can run code and direct the output to an Heroku Drain.
import com.typesafe.scalalogging.Logger

object ScalaLogging101 extends App {
  val logger = Logger("Root")
  logger.info("Hello Scala Logger!")
  logger.debug("Hello Scala Logger!")
  logger.error("Hello Scala Logger!")
}

The output will flow through the Heroku Drain already created and look like this:

18:14:29.724 [main] INFO ROOT - Hello Scala Logger!
18:14:29.729 [main] DEBUG ROOT - Hello Scala Logger!
18:14:29.729 [main] ERROR ROOT - Hello Scala Logger!

Heroku Logging with Clojure

Typically, Clojure developers use Log4j when deploying apps with Heroku. As we saw in the earlier sections on logging libraries for specific coding languages, Log4j was developed for Java, but is now used with several other languages as well. The first step is to set up clojure.tools.logging and Log4j to write to Heroku logs. clojure.tools.logging will write to standard output (Heroku-style 12-factor) or Syslog, and also to structured log files which can ultimately be translated by log analytics apps to provide alerts and performance monitoring.

To start writing to logs from Clojure, first add clojure.tools.logging and log4j to the dependencies in project.clj, using the following:

:dependencies [[org.clojure/tools.logging "1.1.0"]
               [log4j/log4j "1.2.17" :exclusions [javax.mail/mail
                                                  javax.jms/jms
                                                  com.sun.jdmk/jmxtools
                                                  com.sun.jmx/jmxri]]
               ;; ...code here
               ]

Next, set up the properties file for Log4j in resources/log4j.properties, for example with a console appender whose pattern is %p @ %l > %m%n:

log4j.rootLogger=ERROR, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%p @ %l > %m%n

To test this implementation, we will run a code snippet that contains errors which will then generate the anticipated log entries. Save the following code to the file src/myApp/core.clj.
(ns myApp.core
  (:require [clojure.tools.logging :as log]))

;; Write log statements at severity levels:
(log/trace "Lowest log level")
(log/debug "Coder info")
(log/warn "Warning")

;; Various log entries:
(log/info "Performance issue happened:" {:name1 12 :name2 :que} "time out.")

;; Exceptions:
(try
  (/ 10 (dec 1)) ;; <-- division by zero.
  (catch Exception e
    (log/error e "Division by zero.")))

This will produce log entries like the following:

2020-02-20 13:18:38.933 | ERROR | nREPL-worker-2 | myApp.core | Division by zero

To remain consistent with best practices in CI/CD, we should consider automating log analytics. The next natural step in deploying Clojure is to use Log4j appenders to send logs to an app such as Coralogix to provide alerts, charting, and other visualizations. For example:

log4j.appender.CORALOGIX=com.coralogix.sdk.appenders.CoralogixLog4j1Appender
log4j.appender.CORALOGIX.companyId=*insert your company ID*
log4j.appender.CORALOGIX.privateKey=*Insert your company private key*
log4j.appender.CORALOGIX.applicationName=*Insert desired Application name*
log4j.appender.CORALOGIX.subsystemName=*Insert desired Subsystem name*
log4j.rootLogger=DEBUG, CORALOGIX, YOUR_LOGGER, YOUR_LOGGER2, YOUR_LOGGER3

Heroku Logging with C#

NLog and its API are easy to set up, as illustrated in the example below. Reviews claim it's faster than log4net. NLog handles structured logging for most popular databases, and we can extend NLog to write logs to any destination. Here is an example that sets up NLog to send logging output to Heroku.
First, configure NLog in code (the programmatic equivalent of its XML configuration file) like this:

var config1 = new NLog.Config.LoggingConfiguration();
var logfile1 = new NLog.Targets.FileTarget("logfile1") { FileName = "file1.txt" };
var logconsole1 = new NLog.Targets.ConsoleTarget("logconsole");
config1.AddRule(LogLevel.Info, LogLevel.Fatal, logconsole1);
config1.AddRule(LogLevel.Debug, LogLevel.Fatal, logfile1);
NLog.LogManager.Configuration = config1;

Now, add a class with a method to write to the log:

class MyClass
{
    private static readonly NLog.Logger _log = NLog.LogManager.GetCurrentClassLogger();

    public void Foo()
    {
        _log.Debug("Logging started");
        _log.Info("Hello {Name}", "Johnathan");
    }
}

Logging Libraries for C#

Like Log4NET, which has always been the .NET logging standard, NLog supports multiple logging targets and logs messages to many types of data stores. As for standard logging practices, both present similar features. ELMAH is a C# logging library that does offer several differences.

- ELMAH, which stands for "Error Logging Modules And Handlers," offers features beyond the standard fare. ELMAH is an open-source library that logs runtime errors in the production environment. A distinctive component of ELMAH, beyond error filtering, is its capability to display error logs on a web page and as RSS feeds. Here's a screenshot of ELMAH displaying an error log as a web page:

Hands-On Example: Troubleshooting Heroku

To illustrate the value and importance of Heroku logs, we will run a sample app and look at some commonly encountered issues. We will start by deploying a simple Python app and watch how Heroku logs an issue when the app runs. Later we will scale the app and introduce a more subtle bug, to see how vast log output ultimately calls for a solution to assist developers in pinpointing bugs.
Note: Be sure you have created a free Heroku account, and that your language of choice is installed:

Step 1: Install Heroku locally

Step 2: Install GIT to run the Heroku CLI

Step 3: Use the Heroku CLI to log in to Heroku

Step 4: Clone a GIT app locally to deploy to a Heroku Dyno

$ git clone https://github.com/heroku/python-getting-started.git
$ cd python-getting-started

Step 5: Create your app on Heroku with $ heroku create, which:

- Creates a remote GIT repository on Heroku, and
- Associates it with your local GIT clone

Then deploy, for example with:

$ git push heroku master

Step 6: Open the app in the browser with the Heroku CLI shortcut:

$ heroku open

At this point, if you're following along with Heroku's example deployment, you can see the Heroku log generated by deploying and opening the app. Let's look at the first obvious app issue:

As you can see, Heroku generated a name and URL for this deployment, and the missing browser tab icon (favicon) instantly appears in the log:

The volume of log output generated by deploying this simple Python app hints at the need for intelligent log monitoring and analytics. As we scale this simple app to reveal the more complex log output generated by Heroku during enterprise-level app development and deployment, the need for machine-learning-capable analytics becomes imperative.

Avoiding a Common Issue with Heroku Logs

From this point on, developers can enter a natural CI/CD cycle of revising code and committing the changes to deployment, configuring a Drain with a logging app, and watching the dashboard for issues. However, for developers just starting out with Heroku, the next steps to deploy the code change from the local GIT repo may present two surprising problems.
After making a code change to the local GIT, Heroku documentation offers these CLI instructions, which should detect the files that changed and deploy them to production, updating our app:

$ git add
$ git commit -m "First code change"
$ git push heroku master

The first problem, when using the CLI command $ git commit, is a GIT message asking who you are. This is part of a first-time authentication setup:

Define your identity with the CLI commands suggested:

$ git config --global user.email "[email protected]"
$ git config --global user.name "marko"

Now, with that done, when deploying the app with $ git push heroku master, another potentially confusing authentication message occurs:

Notice that there is no mention of an API key in the dialog. As shown above, the “marko” id created previously is the correct username sought in this context. However, the “password” in this context is not the Heroku account password, but instead the Heroku API key found on your Heroku account page (you need to be signed in). As shown in the next screenshot, the “password” is the API key. Scroll down the account page and click the reveal button, then copy and paste that key to the CLI in your terminal, depending on your setup:

At this point, with Heroku and GIT both authenticated correctly, the new changes can be deployed from the local GIT repo with this Heroku CLI command:

$ git push heroku master

From this point forward, code changes can be made and committed to deployment easily so that Heroku log streams flow from Logplex to designated endpoints. Now that we have this workflow in place, it’s a simple matter with the Heroku platform to make code changes and commit them to deployment from the CLI.

Finding Memory Leaks with Heroku Log Monitoring

One common frustration for coders occurs because, in spite of automatic garbage collection, memory leaks can appear in logs from applications running in production that seem to have no obvious origin.
It can often be difficult for developers to find the cause in their code; it may seem logically correct, but we often need to look deeper. A few examples include:

- A Loopback app in Node.js deployed to Heroku with a drain to New Relic. Running the app locally does not show increased memory use, but when the app is deployed to Heroku with New Relic, the heap steadily increases, even with no new requests.
- Subtle references to variables, such as closures. These still count as references to variables, and as a result, garbage collection will not release the memory. Various kinds of dependencies may likewise not be obvious when examining app code, but can also result in memory leaks that show up only when the app is deployed.
- Mysterious Gateway Timeout entries appear in the log, but a memory leak was not detected in log analysis. In a desperate attempt to discover the problem, a programmer is likely to wrap the code in Q promises, examine heap sizes and MongoDB payloads, and explore many other avenues before discovering the memory leak in a failing data-streaming method.

The common denominator in all these examples is that memory leak bugs may appear in your Heroku logs while the developer does not recognize them in the app logic. These memory leaks can be extremely frustrating to troubleshoot and can lead coders to believe that the bug is actually in the V8 heap, but more often the bug lies in the app code itself.

Beginner Tip: Memory leaks occur when a program does not release unused memory. When undetected, memory leaks tend to accumulate, reduce app performance, and can even cause failure. The garbage collectors built into many language runtimes automatically discard memory that is no longer referenced, while a language like Rust enforces memory reclamation at compile time through its ownership rules. In manually managed languages such as C++, releasing memory correctly also prevents attacks on discarded (dangling) pointers.
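The closure pitfall described above can be reproduced in a few lines. The following Python sketch is illustrative only (the original examples were Node.js apps, and the class and function names here are made up): a long-lived closure keeps an otherwise-dead object reachable, so the garbage collector never frees it.

```python
import gc
import weakref

class Payload:
    """Stands in for a large request buffer."""
    def __init__(self, size):
        self.data = bytearray(size)

def make_handler():
    payload = Payload(1024 * 1024)  # 1 MB that "should" die with this frame
    ref = weakref.ref(payload)      # lets us observe whether it was freed
    def handler():
        # The closure captures `payload`, so the 1 MB stays reachable
        # for as long as `handler` itself is alive.
        return len(payload.data)
    return handler, ref

handler, ref = make_handler()
gc.collect()
assert ref() is not None  # leak: the closure still pins the payload

del handler               # drop the last reference to the closure
gc.collect()
assert ref() is None      # only now is the payload collectable
```

If `handler` is stored in a module-level registry (a common pattern for route handlers), the payload lives for the life of the process, which is exactly the kind of slow heap growth that surfaces in Heroku logs and New Relic dashboards.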
Heroku Logs For Faster Troubleshooting As we’ve seen, Heroku Logplex generates voluminous log content that contains entries generated by every behavioral aspect of an application’s deployment and runtime. Heroku logs are a vast resource for developers and members concerned with application performance and squashing bugs quickly. However, logging has evolved far beyond debugging software and is now one of many focal points for the use of machine learning techniques to extract all the latent value in Heroku log data. Software development in the context of enterprise CI/CD environments requires substantial automation to ensure high performance and reliable systems. Log management and analysis are critical areas of automation in software development. While Heroku logs are a vast source of data, Heroku log analysis add-ons like Coralogix are needed to extract the full value of your log data goldmine. Technologies that remove logging inefficiencies, reduce data storage cost, and propel problem-solving with automation and machine learning will play a decisive role in determining your organization’s ability to create business value faster.
https://coralogix.com/log-analytics-blog/heroku-logs-the-complete-guide/
Hi Jill, "their names added to the deed of their father's house" means they received the property as a gift. The basis for gifted property is the same as the donor (in your situation, the father) had at the time of gifting - that is mainly the donor's purchase price adjusted by any improvements and some other expenses. Each sibling will report his/her share of the proceeds and basis and should calculate the capital gain on Schedule D. Because they owned the property for more than a year, that will be a long-term capital gain. The father generally should report his share, but because he used the property as a primary residence, his gain is not taxable and generally he doesn't need to report it. Let me know if you need any help. If the house was purchased in 1973 - yes, that purchase price would be the basis. There should be improvements over these years - I suggest you investigate. At least I might guess that the roof was replaced at least twice... Check if windows and/or doors were replaced? Trees were planted? etc... The gift is not taxable income and is not reported on the tax return. There is a separate gift tax return that should be filed by the donor - in your situation, by the father in 2005. A recipient of the gift does not need to claim it as income. Please see for reference IRS publication 525 - The donor - if he/she is a US person - would be required to file a gift tax return (form 709 - ) if the gift is more than $11,000 per person per year (for 2005). There will not be any gift taxes unless the lifetime limit of $1,000,000 is reached. I will repeat my statement from above: they did not need to report the gift, which they received in 2005, as income.
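To make the per-owner arithmetic concrete, here is a small Python sketch. Every figure below is hypothetical (the question does not give the purchase price, improvements, sale price, or the exact ownership split), and it assumes equal shares among the father and three siblings:

```python
# All figures are hypothetical, purely to illustrate the Schedule D math.
purchase_price = 40_000   # father's 1973 cost basis (carries over to the donees)
improvements   = 25_000   # roof, windows, etc. add to basis over the years
sale_price     = 300_000
owners         = 4        # father plus three siblings, assumed equal shares

basis = purchase_price + improvements   # adjusted basis: 65,000
total_gain = sale_price - basis         # long-term capital gain: 235,000
gain_per_owner = total_gain / owners    # each owner's share to report

print(gain_per_owner)  # 58750.0
```

Each sibling would report their share of the proceeds and of the basis on Schedule D; under the assumptions above, the father's equal share may be excludable because the house was his primary residence.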
http://www.justanswer.com/tax/3380f-december-2005-husband-siblings.html
SYNOPSIS

 import guestfs
 g = guestfs.GuestFS (python_return_dict=True)
 g.add_drive_opts ("disk.img", format="raw", readonly=1)
 g.launch ()

DESCRIPTION

Errors from libguestfs functions are mapped into "RuntimeException" with a single string argument which is the error message.

MORE DOCUMENTATION

Type:

 $ python
 >>> import guestfs
 >>> help (guestfs)

EXAMPLE 1: CREATE A DISK IMAGE

 # Create a raw-format sparse disk image, 512 MB in size.
 g.disk_create (output, "raw", 512 * 1024 * 1024);

 # Set the trace flag so that we can see each libguestfs call.
 g.set_trace (1)

 # Attach the disk image to libguestfs.
 g.add_drive_opts (output, format = "raw", readonly = 0)

 # Run the libguestfs back-end.
 g.launch ()

 # Get the list of devices.  Because we only added one drive
 # above, we expect that this list should contain a single
 # element.

 # Create a filesystem on the partition.
 g.mkfs ("ext4", partitions[0])

 # Now mount the filesystem so that we can add files.
 g.mount (partitions[0], "/")

 # reference counting.  You only need to call close
 # if you want to close the handle right away.
 g.close ()

EXAMPLE 2: INSPECT A VIRTUAL MACHINE DISK IMAGE

 # Example showing how to inspect a virtual machine disk.

 import sys
 import guestfs

 assert (len (sys.argv) == 2)

 ...
         raise (Error ("inspect_vm: no operating systems found"))

 def compare (a, b): return len(a) - len(b)
 for device in sorted (mps.keys(), compare):
     try:
         g.mount_ro (mps[device], ...
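One portability note on the example above: `sorted (mps.keys(), compare)` uses the Python 2 comparison-function form of `sorted`, which Python 3 removed. The same shortest-first ordering of mount points (so "/" is mounted before "/usr") is written with a key function instead. A standalone sketch, using a made-up mountpoint mapping rather than a real libguestfs handle:

```python
# Hypothetical mountpoint -> device mapping, standing in for the
# inspect_get_mountpoints() result in the example above.
mps = {"/usr/local": "/dev/sda3", "/": "/dev/sda1", "/usr": "/dev/sda2"}

# Python 2:  sorted(mps.keys(), compare)  with  compare = len(a) - len(b)
# Python 3:  pass the length itself as the sort key.
for mountpoint in sorted(mps.keys(), key=len):
    print(mountpoint, "->", mps[mountpoint])
# prints "/" first, then "/usr", then "/usr/local"
```

Sorting parents before children guarantees that each filesystem is mounted onto a directory that already exists.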
https://manpages.org/guestfs-python/3
Ovidiu Predescu wrote: > > On Mon, 10 Dec 2001 10:56:39 -0500, Berin Loritsch <bloritsch@apache.org> wrote: > > > Berin Loritsch wrote: > > > > > Torsten Curdt wrote: > > > > > > > >>> To be able to incorporate URLs to older points in the processing, the > > >>> send-response() function could return as value an object that contains > > >>> the URL to its continuation. You can pass this URL in the optional > > >>> dictionary, as an additional argument, if you wish so. > > >>> > > >>> function addNumbers(HttpRequest request) > > >>> { > > >>> send-response("first-page.xml", "my-pipeline"); > > >>> first = request.getIntParameter("first"); > > >>> send-response("second-page.xml", "my-pipeline", {"first" = first}); > > >>> second = request.getIntParameter("second"); > > >>> result = first + second; > > >>> send-response("third-page.xml", "my-pipeline", {"result" = result}); > > >>> } > > > > > > Having looked at this example, something doesn't sit right with me. This > > has to do with the fundamental URL contracts. This "flow" is managed within > > one URL. For instance, it could be mounted at "". It > > would behave like this: > > > > URL: > > PAGE: > > > > Enter First Number: _________ > > [Submit] > > > > > > > > URL: > > PAGE: > > > > Enter Second Number: _________ > > [Submit] > > > > > > > > URL: > > PAGE: > > > > The result is: _________ > > [Done] > > Because of the continuations, the URLs for the second and third pages > would probably look more like: > > > > with the last part identifying the continuation object. > This is similar with the way WebObjects does it. I'm open to > suggestions though on a better scheme to encode the continuation > identifier. I was thinking about some more informatively looking URI such as or[4ffds8454mfd90] but Mozilla URL-encodes the square brakets and turnes it into %xx syntax which ruins the entire concept. 
I like this more than appending a slash because, in fact, we are dealing with the same resource of before, we are just using a different entry point. Another solution is faking anchors but looks less appealing to me. Of course, when we have the ability to POST data, we should use alternatively an hidden field in the form. > > While this is a possibility, and might be helpful when you only want one > > entry point for a form process, it does not address options when you have > > multiple entry points. Nor does it address content to url issues. If we > > want the flow to be managed externally, we must have a method to either > > plant the target in the pipeline, or use redirects. Niether are very clean. > > > > In the first approach (embedding targets in the pipeline) we would > > have to add a <parameter name="target" value="second-page.html"/> to > > embed in the form. I find this to be acceptable when you don't want > > to depend on the redirection mechanism in a server--or when it is > > well known what the next page is going to be. This is in cases > > where there is only one "next page". > > > > In the second approach, you must depend on the servers > > implementation of redirects (while supposedly standard, I have run > > into some issues). There are also different type of redirects (a > > host of 30X HTTP response messages). None of them really supports > > the notion of "normal" navigational flow between form pages--though > > _many_ systems (like ColdFusion) routinely use them to manage form > > flow. > > > > For instance, with ColdFusion, it is concidered good practice to > > have two templates per form page. The first template is to display > > the form, and the second to simply process it. When the second is > > done, it simply redirects the user to the next page. I don't like > > this because it artificially polutes the URI space with intermediate > > targets who's sole responsibility is to point you to another > > resource. 
> > > > In Cocoon the only way to manage the URI space as well as processing > > is to use redirects and value substitution. For instance, I would > > much rather have a form explicitly say it's next page if it does not > > depend on user input like this: > > > > <form action="{target}"/> > > > > The target is chosen by the flow manager, and given to the form so > > no redirection has to take place. This is better IMO than forcing > > the use of redirects for these simple cases. > > > > What about multiple targets from one form? In those cases, it is > > much more desirable to use redirections after we service the > > request. Using the psuedo-markup I expressed earlier the concepts > > could be stated like this: > > > > <functions> > > <function name="addNumbers" pipeline="my-pipeline"> > > <send-response > > <parameter name="target" value="second-page.html"/> > > </send-response> > > <call-action > > <validate> > > <on-error> > > <redirect-to "first-page.html"/> > > </on-error> > > </validate> > > <send-response > > <parameter name="first" value="{first}"/> > > <parameter name="target" value="third-page.html"/> > > </send-response> > > <call-action > > <send-response > > <parameter name="result" value="{result}"/> > > </send-response> > > </function> > > </functions> > > > > This is not necessarily clean looking at it though. 
> > If you have a form with multiple entry points, you just split the > function that handles the navigation in multiple functions: > > function first-page(request) > { > send-response("first-page.xml", "my-pipeline"); > second-page(request); > } > > function second-page(request) > { > send-response("second-page.xml", "my-pipeline"); > third-page(request); > } > > function third-page(request) > { > send-response("third-page.xml", "my-pipeline"); > } > > You can then mount each function at different URLs: > > > > serves first-page.xml > > > > serves second-page.xml > > > > serves third-page.xml > > If a user starts filling in the form at the first page, he/she will > see the following sequence of URLs: > > for the first page > for the second page > for the third page > > This is because the navigation started from the first-page() function, > and logically all the computation starts as a result of the initial > invocation of the first-page() function. > > If instead the user starts from the second page in the form, he/she > will see the following sequence: > > for the second page > for the third page > > This is the similar with the first navigation I described above, > except that the entry point in the computation was the second-page() > function. > > I would say that this approach handles both the issues you mentioned > above, the multiple entry points in a form, and the content to URL > mapping. > >. > > Also look closely at get-user() and registration() functions. The idea > is to present the user with the option of registering himself/herself > if they don't have an account. After the user registers, you want the > processing to continue where it was interrupted, e.g. you want the > registration() function to return to get-user(), right after the call > to registration(). 
Since the send-page() function passes to the > processing pipeline in the environment the URL to its continuation, by > clicking on that link the user effectively returns from registration() > back into get-user(). > > Another scenario is when you don't want the user to return to the > current computation. In that case you put links in the generated pages > that point back to other functions. They will be invoked through the > sitemap as usual, starting a new stack frame on the server side. > > Notice how easy is to handle errors with this approach. In get-user() > for example, the user is asked to login into the system. If the user > isn't registered yet, he/she can choose to register. If the user > doesn't successfully register, he/she is sent to a page where he/she > cannot return in the flow of the program (the user can always use the > "Back" button to go back and complete the registration > successfully). There is no need in the buy() function to handle an > unsuccessful registration. 
Yes, I had the taste of the power of continuations when I dived more into the paper you presented (and implemented myself a few scheme examples to get the feeling of it).: > > > > <flow name="addNumbers" start="first-page"> > > <resource name="first-page"> > > <pipeline type="my-pipeline" source="first-page.xml"/> > > <next name="second-page"/> > > </resource> > > <resource name="second-page"> > > <action type="getFirst"> > > <pipeline type="my-pipeline" source="second-page.xml"/> > > <next name="third-page"/> > > </action> > > <redirect-to > > </resource> > > <resource name="third-page"> > > <action type="getSecondResult"> > > <pipeline type="my-pipeline" source="third-page.xml"/> > > <done/> > > </action> > > <redirect-to > > </resource> > > </flow> > > > > b) generally more readable (consider escaping <,>,& or using CDATA) c) easier to learn/use/understand for C/C++/C#/Java/JavaScript people (a very big percentage of the web tech population nowadays) At the same time, for global site-mapping semantics, the XML syntax is still the way to go. > -- > > I think with the new model, there's no need for actions, redirects and > other things that the sitemap currently has. Yes, I've always expressed my feeling that Actions were hacks. >. Interesting enough, what you described in your above example is exactly the kind of 'flowmap' that I've been looking for in order to move away actions from the sitemap. I knew that the intrinsic 'wait for request' behavior of the site forces the sitemap to follow a declarative approach, as much as the 'drive the flow' behavior of a web application would force the flowmap to follow a more procedural approach.. In this vision, Berin's proposal to make pipelines reusable by adding variable substitution might allow both sitemap and flowmap to use sort these pipeline definitions (pipemaps?) and reduce overall verbosity. 
Anyway, I think that by trying to prove the Flowmap concept wrong, Ovidiu gave us the best example of a flowmap in terms of syntax (code-based instead of XML-based) and functionality (continuations-based instead of FSM-based). >) <p>Today is ${date}</p> 2) use XML-namespaced solutions <p>Today is <dxml:date/></p> Again, the semantics are exactly the same, but the first approach if more friendly to code writers, the second to HTML writers. We could even have both and let the user decide which one he/she like the best. > With this model we have the clear separation of concerns. Each layer > does only the things it's supposed to do, in a clean way. Absolutely: flowmap -> handles the statefull needs Add a simple and effective way to pass data from the *maps to the pipelines and into the content XML files and we are
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200112.mbox/%3C3C16124A.DC33B7BC@apache.org%3E
Introduction Integrating external applications with SharePoint data and functionality is pretty easy, but the documentation is scattered, so I thought it might be helpful to provide a complete solution that covers a few main scenarios. I’ve used the SharePoint web services to provide a custom search interface and to populate lookup tables. I’ve also seen it used as a workaround in a few scenarios where the SharePoint Object Model was behaving erratically. This post will focus on how to quickly get started with the web services interface in WSS, and particularly how to access SharePoint data and search results quickly, in a new project. It’s not very polished, but it gets the job done. Does this apply to me? First, when communicating with WSS and MOSS, you have to decide which remote access tool best fits your needs. As Alvin Bruney points out in his book Programming Excel Services (Safari), the web services interface is also “a way to access the resources of the server with reduced risk of instability to the server resources.” Secondly, figure out if you need to do this from scratch at all. Check into LINQ to SharePoint, which can use either the SharePoint Object model or the web services interface. It looks like a really slick and robust way to interface with your existing SharePoint data. On the down side, it appears to not have been updated since November 2007 and Bart De Smet, the project’s author (blog), notes that it is an alpha release and not ready for production. I’ve steered clear of it for that reason and due to client restrictions, but you might save yourself a lot of time if you can use it! Check out this video, from the project’s CodePlex page, for a good quick start: LINQ to SharePoint quick start video (5:37, 3.18MB, WMV) If you decide against LINQ to SharePoint, you can still easily add web references and consume the SharePoint web services in your custom code. 
Walkthrough: Accessing List data As it turns out, the web host for Hands on DC (a volunteer organization with which I work) does some hokey things with NTLM authentication that aren’t supported by WCF. Namely, it seems like they’re doing Transport level authentication over plaintext HTTP instead of HTTP over SSL. I can see why this isn’t supported, for security reasons. Despite Dan Rigby’s excellent post about impersonation in WCF, I couldn’t get anything to work. But alas, with no slight of hand whatsoever, the scenario works in VS2005 and a regular old-school ASMX-based web service proxy (web reference). Here’s a quick (and quite dirty) example, initially based on Ishai Sagi’s (blog) response on MSDN: Video hint: Click the full screen icon (looks like a TV) Download source code (39KB, ZIP file) The query that we send to the list service is written in CAML. When we want to get more complex in our query, such as only pulling back certain columns, check out the U2U CAML Query Builder. It provides an awesome interface into creating your CAML queries. In the WCF world, Kirk Evans has an awesome walkthrough, Calling SharePoint Lists Web Service using WCF, which includes how to streamline the XML access by using an XPathNavigator, and later using a DataTable that reads from an XmlNodeReader. Better than querying the fields directly for sure! Walkthrough: Querying WSS Search Results Search is a little more challenging. As is the case with every other web service, we send the query service XML, and it returns XML. We can take a shortcut and generate the request XML with a tool such as the Search Query Web Service Test Tool for MOSS (or we could also import the schema and generate a C# class from it). Since I’m a big fan of using XSD.EXE to generate a C# class, I chose to do so with the search result support classes. It was mostly productive, although the Results node is free-form (can take any node type) and the generator doesn’t seem to support that. 
In the end, we can use the generated classes to get statistics about the data set, and can navigate the Documents using regular XML methods. Here is a complete walkthrough of adding very basic search results, including a total result count and the first page of results, to our application: Video hint: Click the full screen icon (looks like a TV) Download source code (56KB, ZIP file) Conclusion Use SharePoint web services when you have the need to separate concerns and reduce risk inherent with new code deployments to the SharePoint farm, you want remote access to SharePoint from a non-SharePoint server, or you need to access SharePoint from something other than .NET (such as JavaScript). This post and associated videos walked you through creating a Windows Forms application, from scratch, that pulls data in from SharePoint, including List-based data and search results. The process is similar to any other web service, but there are a few gotchas and pain points that I hope have been cleared up in this resource. I collected some additional links in the process of putting this post together, and added them on delicious with the tag “sharepoint-webservices”.. For the past few months (and 100+ volunteer hours!) I’ve been creating a web application for Hands on DC that calculates volunteers, gallons of paint, and materials for work projects for their annual Work-a-Thon event. After the encouragement of a few coworkers who did some initial work on the project, I committed to using ASP.NET MVC, technology which has been out over a year but just reached a production 1.0 release at Mix 09 this year. Getting up and running with MVC wasn’t an easy task. The project was also my first foray into LINQ to SQL, and really .NET 3.5 in general, so it was a little intimidating at first! There’s not much documentation and it’s split across the many release versions of MVC. 
The main site will get you up doing very basic things (but is seriously lacking content), though Phil Haack’s webcast and Scott Hanselman, et. al.’s free e-Book are helpful. In the process, I discovered some important companion pieces in MvcContrib and jQuery, including the validation plugin and the datatable plugin. I want to highlight work that I did to combine the MvcContrib data grid with the datatable for sorting, paging and filtering. This was something I struggled with for several hours, so I’m hoping there is some value in posting the full example. Figure 1. Example of using MvcContrib with jQuery datatable plugin. Walkthrough Here is a complete from-scratch example. Figure 2. Solution after copying the datatable media folder. <%@ Import Namespace="MvcContrib.UI.Grid" %> <%@ Import Namespace="MvcContrib.UI.Grid.ActionSyntax" %> <%@="../../media/css/demos.css" rel="stylesheet" type="text/css" /> <script src="../../media/js/jquery.js" type="text/javascript"></script> <script src="../../media/js/jquery.dataTables.js" type="text/javascript"></script> </head> public enum Medal { Gold, Silver, Bronze } public class MedalWinner { public string Location { get; set; } public string Year { get; set; } public string Sport { get; set; } public Medal Medal { get; set; } public string Country { get; set; } public string Name { get; set; } public MedalWinner(string l, string y, string s, Medal m, string c, string n) { Location = l; Year = y; Sport = s; Medal = m; Country = c; Name = n; } } public ActionResult Index() { ViewData["Message"] = "Welcome to ASP.NET MVC!"; var medalWinners = new List<MedalWinner>(); medalWinners.Add( new MedalWinner("Athens", "2004", "Handball", Medal.Gold, "Croatia", "LOSERT, Veni")); medalWinners.Add( new MedalWinner("Athens", "2004", "Handball", Medal.Gold, "Croatia", "BALIC, Ivano")); medalWinners.Add( new MedalWinner("Athens", "2004", "Handball", Medal.Gold, "Croatia", "ZRNIC, Vedran")); medalWinners.Add( new MedalWinner("Athens", "2004", 
"Handball", Medal.Silver, "Germany", "JANSEN, Torsten")); medalWinners.Add( new MedalWinner("Athens", "2004", "Handball", Medal.Silver, "Germany", "KRETZSCHMAR, Stefan")); medalWinners.Add( new MedalWinner("Athens", "2004", "Handball", Medal.Silver, "Germany", "VON BEHREN, Frank ")); ViewData["MedalWinners"] = medalWinners; return View(); } <ol> <% foreach (HomeController.MedalWinner winner in (List<HomeController.MedalWinner>)ViewData["MedalWinners"] ) { %> <li><%= winner.Name %>, <%= winner.Country %></li> <% } %> </ol> <% Html.Grid((List<HomeController.MedalWinner>)ViewData["MedalWinners"]) .Columns(column => { column.For(c => c.Year); column.For(c => c.Location); column.For(c => c.Name); column.For(c => c.Country); column.For(c => c.Medal.ToString()); column.For(c => c.Sport); }).Render(); %> <%); }).Render(); %> <script type="text/javascript" charset="utf-8"> $(document).ready(function() { $('#example').dataTable(); }); </script> <style> #example { width: 100%; } #container { width: 600px; } </style> <div id="container"> <%); }).Attributes(id => "example").Render(); %> </div> $(document).ready(function() { $('#example').dataTable({ "iDisplayLength": 25, "aaSorting": [[2, "asc"]], "aoColumns": [{ "bSortable": false }, null, null, null, null, { "bSortable": false}] }); }); [Download complete source code] (394KB, ZIP file) A few years ago I created an article around Reporting Services and dates. It could have been written more generically, because I reference this quite a bit to get common dates like "the beginning of this week", "midnight last night", etc, in my SQL queries. It's a fairly comprehensive list of relative dates that one might want to get in T-SQL for reporting, scheduling, etc. 
It can get pretty complex, such as this function for getting the end of the current week:

CREATE FUNCTION get_week_end (@date datetime)
RETURNS datetime
AS
BEGIN
    return dateadd(yyyy, datepart(yyyy, dateadd(weekday, 7 - datepart(weekday, @date), @date)) - 1900, 0)
         + dateadd(ms, -3, dateadd(dy, datepart(dy, dateadd(weekday, 7 - datepart(weekday, @date), @date)), 0))
END

If you don't find what you need, you can typically use the dateadd function to tweak one of these. Here is the complete list outlined in the article:

I've been working on various forms of displaying status messages from enums, and here's the latest preferred iteration of how to do this. Regurgitated and tweaked from WayneHartman.com.

public enum XmlValidationResult
{
    [Description("Success.")]
    Success,
    [Description("Could not load file.")]
    FileLoadError,
    [Description("Could not load schema.")]
    SchemaLoadError,
    [Description("Form XML did not pass schema validation.")]
    SchemaError
}

private string GetEnumDescription(Enum value)
{
    // Get the Description attribute value for the enum value
    FieldInfo fi = value.GetType().GetField(value.ToString());
    DescriptionAttribute[] attributes =
        (DescriptionAttribute[])fi.GetCustomAttributes(typeof(DescriptionAttribute), false);

    if (attributes.Length > 0)
    {
        return attributes[0].Description;
    }
    else
    {
        return value.ToString();
    }
}

It's possible to do something even cooler like cache the values or add a ToDescription() method (in C# 3.0), but I just wanted a simple, repeatable way to do this.
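As a point of comparison (not part of the original C# post), the same description-lookup idea is compact in Python, where an `Enum` member's value can carry the message directly, so no attribute reflection is needed:

```python
from enum import Enum

class XmlValidationResult(Enum):
    SUCCESS = "Success."
    FILE_LOAD_ERROR = "Could not load file."
    SCHEMA_LOAD_ERROR = "Could not load schema."
    SCHEMA_ERROR = "Form XML did not pass schema validation."

    @property
    def description(self):
        # The member's value doubles as its human-readable message.
        return self.value

print(XmlValidationResult.FILE_LOAD_ERROR.description)
# prints "Could not load file."
```

The trade-off versus the C# attribute approach is that the enum value and its display text are fused; if you need a separate machine-readable value, store a tuple and unpack it in `__init__`.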
http://blogs.msdn.com/paulwhit/
Getting the sum of two double parameters - Java - java

Having a hard time figuring out the code necessary to get the sum of the parameters.

public class SumOfTwo {
    public static void main(String[] args) {
        double number = sumCalc(5.2, 4.2);
        System.out.println("Returned value: " + number);
    }

    // Method here
}

Returned value:

Any help would be greatly appreciated, thank you :)

BTW. I have been trying to work on it for more than an hour now. I tried many different things and I would've continued to try, but I needed to complete this by a certain time. It's not like I'm taking the help and running; I'm saving it so I can get better, so next time I won't have to ask you guys, because apparently you're not friendly to people trying to learn Java. Thank you Codetector.

I would recommend a close look at some intro Java docs... But here is your function:

static double sumCalc(double a, double b) {
    return a + b;
}

Related

Can't Find Symbol in my method instance?

This is a very quick and, I feel, obvious mistake, but I keep getting CANNOT FIND SYMBOL (symbol: method print(int,int)). This would lead me to believe that I'm not giving the method the right data type parameters, however...

public class Test {
    public static void main(String[] args) {
        TestSrv srvObj = new TestSrv();
        srvObj.print(0, 0);
        srvObj.print(1, 1);
        srvObj.print(2, 10);
    }
}

and with this method (what it's meant to do aside), I keep getting errors from the above code for all 3 calls to the print method? I am passing it integers on all 3 occasions?

public class TestSrv {
    public void print(int num, int count) {
        for (int i = 0; i <= count; ++i) {
            System.out.print(num + ". " + "*");
        }
    }
}

Your code should compile. Make sure that you declare both classes in the same package or that you import TestSrv in Test.java.

You almost certainly didn't compile TestSrv after making changes. Using an IDE such as Eclipse or IDEA will take care of much of that detail for you.
While renaming my method and class and such so that it wasn't what I originally named it (as to not confuse anyone), I actually fixed the problem that I had... that is why this compiled for everyone xD. I feel stupid! Thanks again.

Java: Storing sequence of functions run

I have a program in Java, and what I want to do is somehow store all the functions that have been run whilst the program ran, but I cannot seem to find anything on this matter. I also then have to find out which of the functions has been run the most amount of times. My thought was that I could make an array, assign each function a variable with the name of the function, and then every time it is run return that char into the array, and print out the array at the end. But I don't know how to go about storing them in different arr[i]'s every time the same function is run, and I'm also not sure how I would then find the one that was run most. Any help is much appreciated.

What I'd recommend is creating a boolean for each method and, at the beginning of the method, setting the boolean to true. Then create a save method using the java.io classes and save each boolean's name and value to a file.

EDIT: I just realized I put boolean instead of integer. Have an integer for each method and do integer++ for each method run.

AOP is great for something like this. See for an example that uses the Spring AOP library.

I tried a program that may help you or not. I created an Interceptor class which will print out the details you may need. You did not mention any frameworks, so I just gave you an example program with plain old Java. This approach also offers flexibility to print all the details at the end of program execution.
    public class Test {
        public void demoMethod() throws InterruptedException {
            Intercept.printMethodStartTime(System.currentTimeMillis());
            Intercept.printMethodName(Thread.currentThread().getStackTrace()[1].getMethodName());
            Thread.sleep(5000);
            Intercept.printMethodEndTime(System.currentTimeMillis());
        }

        public static void main(String[] args) throws InterruptedException {
            new Test().demoMethod();
        }
    }

    class Intercept {
        private static Long startTime;
        private static Long endTime;

        public static void printMethodName(String methodName) {
            System.out.println("Current method name: " + methodName);
        }

        public static void printMethodStartTime(Long time) {
            startTime = time;
            System.out.println("Method started at " + startTime);
        }

        public static void printMethodEndTime(Long time) {
            endTime = time;
            System.out.println("Method ended at " + endTime);
            printMethodRunTime();
        }

        public static void printMethodRunTime() {
            System.out.println("Method ran for " + ((endTime - startTime) / 1000) + " seconds");
        }
    }

Very simple main where I am stuck adding two numbers

I just want to write a program that adds two numbers, so I wrote this:

    public class Mainclass {
        public static void main(String[] args) {
            addTwoNumbers(5, 3);
        }

        public static int addTwoNumbers(int a, int b) {
            int c;
            c = a + b;
            return c;
        }
    }

What is my problem? I know in Java it always expects a main, and I think that is the point at which the program executes, so I wrote the other function so it can read from that function. Thanks a lot.

That actually works. You are just not seeing it because you are not printing anything. You should use:

    System.out.println(addTwoNumbers(5, 3));

Also, for future reference, please use proper indentation.

Testing Fuzzy Logic [closed]

I have a problem testing my fuzzy logic (written in Java) to determine whether it is right or wrong. Do you have any simple source code to test it? May you share it with me, please? Kindly need your help. Thank you so much.

Best Regards,
Deni Y.
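The question includes no code, so as a general illustration (not the asker's implementation): one common way to test fuzzy logic is to assert known properties of its membership functions — boundary values, the peak, and points halfway up the slope. A minimal sketch in Python, using a triangular membership function; the same right/wrong framing carries over to Java with JUnit assertions.

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 at a and c, rising linearly to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / float(b - a)
    return (c - x) / float(c - b)

# Property-style checks any implementation should satisfy:
assert triangular(5, 0, 5, 10) == 1.0          # peak
assert triangular(0, 0, 5, 10) == 0.0          # left boundary
assert triangular(2.5, 0, 5, 10) == 0.5        # halfway up the slope
assert 0.0 <= triangular(7, 0, 5, 10) <= 1.0   # always within [0, 1]
```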
The codez:

    public class FuzzyLogicTester {
        public static void main(String[] args) {
            int a = 1 + 1; // \u000a\u0061\u002b\u002b;
            System.out.println(a);
        }
    }

How to print answer of 'external' recursive function in GUI?

This is my 2nd recursive function ever (I hope!), only this time I need it to print out in a textField. It prints out "5 x 4 x 3 x 2 x 1" — nothing too fancy. I have a feeling my attempt is terribly wrong since in the program it's underlined a very noticeable ugly shade of red. I'm trying to understand by researching it (not working too well) and I've yet to master the whole 'theoretical thinking' side of things, so any tips or hints would be greatly appreciated!

    public class Main {
        public static String fact(int n) {
            if (n == 1) {
                return "1";
            }
            return n + " x " + (fact(n - 1));
        }

        public static void main(String[] args) {
            System.out.println(fact(5));
        }
    }

    private void itsAButtonActionPerformed(java.awt.event.ActionEvent evt) { // button on GUI
        // some other code that has no significant value to question
        itsATextField.setText("" + return); // only line underlined
    }

The only thing it says when hovered over it is 'illegal start of expression'.

    itsATextField.setText(fact(5));

would be syntactically correct. However, it will not be a complete program with a GUI, of course.
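The compile error comes from using the keyword `return` as if it were a value; as the answer notes, the fix is to pass the function's result — `fact(5)` — to `setText()`. For comparison, the same recursion in Python, with the result held in a variable before handing it to any display code:

```python
def fact_string(n):
    """Build "n x n-1 x ... x 1" recursively, mirroring the Java fact()."""
    if n == 1:
        return "1"
    return "%d x %s" % (n, fact_string(n - 1))

result = fact_string(5)   # keep the value, then pass it to the widget
print(result)             # -> 5 x 4 x 3 x 2 x 1
```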
https://java.develop-bugs.com/article/10000591/Getting+the+sum+of+two+double+parameters+-+Java
program_options

The module program_options is a direct fork of the Boost.ProgramOptions library (Boost V1.70.0). For more information about this library please see here.

In order to be included as an HPX module, the Boost.ProgramOptions library has been moved to the namespace hpx::program_options. We have also replaced all Boost facilities the library depends on with either the equivalent facilities from the standard library or from HPX. As a result, the HPX program_options module is fully interface compatible with Boost.ProgramOptions (sans the hpx namespace and the #include <hpx/modules/program_options.hpp> changes that need to be applied to all code relying on this library).

All credit goes to Vladimir Prus, the author of the excellent Boost.ProgramOptions library. All bugs have been introduced by us. See the API reference of the module for more details.
https://hpx-docs.stellar-group.org/branches/master/html/libs/core/program_options/docs/index.html
toogreat4u 127

Posted August 19, 2008

I have a question as to why I am getting a strange situation in a loop. The program does what it is supposed to do, but because I am looping through the program again, the getline() function somehow finds some input that is still floating around, and I can't figure it out. This is a very simple problem, just confusing to me because I can't figure it out. I would like the program to rerun and ask for the input before reading whether or not the user would like to continue.

Code:

    #include <iostream>
    #include <string>
    #include <cctype>
    #include <cstdlib>

    using namespace std;

    /* Write a program that reads in a line of text and replaces all four-letter words with the word "love".
     * If the four-letter word starts with a capital letter, it should be replaced with "Love", not by "love".
     * A word is any string consisting of the letters of the alphabet and delimited at each end by a blank, endline, !letter.
     * Program should repeat this action until the user says quit.
     */

    void change_phrase(string& p);

    int main()
    {
        string phrase;
        bool stop = false;
        char ans;

        while (stop == false) {
            cout << "Enter in a phrase: \n";
            getline(cin, phrase);
            change_phrase(phrase);
            cout << phrase << endl;
            cout << "Again? (yes/no): \n";
            cin >> ans;
            if (ans == 'n' || ans == 'N')
                stop = true;
        }
        return 0;
    }

    void change_phrase(string& p)
    {
        string temp;
        int index = 0;
        int lc = 0;
        int wc = 0;
        bool spos = false;
        int pos;

        while (index < p.length()) {
            if (isalpha(p[index])) {
                if (spos == false) {
                    pos = index;
                    spos = true;
                }
                lc++;
            }
            if (isspace(p[index]) || ispunct(p[index])) {
                lc = 0;
                spos = false;
                pos = 0;
            }
            if (lc == 4 && !isalpha(p.at(index + 1))) {
                if (islower(p[pos])) {
                    p.erase(pos, 4);
                    p.insert(pos, "love");
                } else {
                    p.erase(pos, 4);
                    p.insert(pos, "Love");
                }
            }
            index++;
        }
    }

Output:

    Enter in a phrase:
    John will run home.
    Love love run love.
    Again? (yes/no):
    y
    Enter in a phrase:
    Again? (yes/no):
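The symptom in the output — the second "Enter in a phrase:" prompt being skipped — is the classic interaction between `cin >> ans` and `getline`: `operator>>` stops extracting at the newline and leaves it in the input buffer, so the next `getline` immediately reads an empty line. The usual fix in the code above is to discard the leftover line after `cin >> ans`, e.g. with `cin.ignore(numeric_limits<streamsize>::max(), '\n');`. The buffering behavior can be simulated in Python:

```python
import io

# A stream holding: the user's "y" answer, then the next phrase.
stream = io.StringIO("y\nJohn will run home.\n")

def read_token(s):
    """Like `cin >> ans`: skip leading whitespace, read one token,
    and leave the terminating whitespace (the newline) in the stream."""
    token = ""
    while True:
        pos = s.tell()
        ch = s.read(1)
        if not ch:
            break
        if ch.isspace():
            if token:
                s.seek(pos)  # put the newline back, as operator>> does
                break
            continue         # skip leading whitespace
        token += ch
    return token

ans = read_token(stream)      # consumes just "y"
line = stream.readline()      # like getline: sees only the leftover "\n"
print(repr(ans), repr(line))  # -> 'y' '\n'  -- hence the "skipped" prompt
```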
https://www.gamedev.net/forums/topic/505523-loop-problem/
JabChapter 8
From WikiContent

Using Messages and Presence

Now that we have a decent grounding in the Jabber protocol and technology, let's put it to work for us. This chapter fits Jabber into solutions for two or three common problems and shows how the technology and features lend themselves very well to application-to-person (A2P) scenarios. By way of introduction, we'll have a look at constructing and sending simple Jabber messages, to effect an "in-your-face" notification mechanism for a version control system. We'll also introduce a usage of the <presence/> element as an availability indicator for connecting systems. Finally, we'll combine the two features (<message/> and the concept of availability) to make the notification mechanism "sensitive" to the presence of the person being notified.

CVS Notification via Jabber

CVS—the Concurrent Versions System—allows you to comfortably create and manage versions of the sources of your project. The most common use for CVS is to create and manage versions of program source code, but it can be readily used for any text files. For example, this book was written using DocBook SGML (), and a CVS repository was used to manage different versions of the manuscript throughout the writing and editing process. CVS allowed us to maintain the original source files for the chapters, to compare those versions against edited files, and served as a place from which older versions could be retrieved. You can find out more about CVS at.

That's the "Versions System" part of CVS's name. The "Concurrent" part means that this facility is given an extra dimension in the form of group collaboration. With CVS, more than one person can share work on a project, and the various chunks of work carried out by each participant are coordinated—automatically, to a large extent—by CVS.
Multiple changes by different people to the same file can be merged by CVS; any unresolvable conflicts (which may for example arise when more than one person changes exactly the same line of source code) are flagged and must be resolved by the participants involved.

The general idea is that you can create a project containing files and directories and have it stored centrally in a CVS repository. Depending on what sort of access is granted to this repository, other project participants can pull down a copy of the project—those files and directories—and work on it independently. In this way, each participant's work is isolated (in time and space) from the others. When the work is done, the work can be sent back to the repository and the changes will be merged into the central copy. After that, those merged changes are available to the rest of the participants.

CVS Watches and Notification

While CVS automatically handles most of the tedious merging process that comes about when more than one person works on a project, it also offers a facility that allows you to set a "watch" on one or more files in the project and be alerted when someone else starts to work on those watched files. This is useful if you wish to preempt any automatic merging process by contacting the other participant and coordinating your editing efforts with him.

There are two CVS commands involved in setting up watches and notifications. There are also a couple of CVS administrative files that determine how the notifications are carried out. Let's look at these commands and files in turn.
CVS commands

The CVS commands cvs watch and cvs notify are used, usually in combination, by project participants to set up the notification mechanism:

cvs watch on|off

Assuming we have a CVS-controlled project called proj1 and we're currently inside a local checked-out copy of the project's files, we first use cvs watch to tell CVS to watch a file ("turn a watch on") that we're interested in, which is file4 in this example:

    yak:~/projects/proj1$ cvs watch on file4

This causes CVS to mark file4 as "watched," which means any time a project participant checks out the file from the central repository, the checked-out working copy is created with read-only attributes. This means the participant is (initially) prevented from saving any changes to that working copy. It is, in effect, a reminder to that participant to use the CVS command cvs edit, specifying file4, before commencing the edit session. Using cvs edit causes CVS to:

1. Remove the read-only attribute for the file
2. Send out notifications (to those who have requested them with cvs watch add) that the participant has commenced editing it

cvs watch add|remove

While running cvs watch on against a file will set a marker causing the file to be replicated with the read-only attribute when checked out (which has the effect of "suggesting" to the participant editing the file that he use the cvs edit command to signal that he's about to start editing), the actual determination of the notification recipients is set up using the cvs watch add command. Running the command:

    yak:~/projects/proj1$ cvs watch add file4

will arrange for the CVS notification to be sent to us when someone else signals their intention (via cvs edit) to edit file4.

CVS administrative files

A number of administrative files used to control how CVS works are kept in the central CVS repository.
Two of these files, notify and users, are used to manage the watch-based notification process:

notify

The standard notify file contains a line like this:

    ALL mail %s -s "CVS notification"

The ALL causes the formula described here to be used for any notification requirements (an alternative to ALL is a regular expression to match the directory name in which the edit causing the notification is being carried out). The rest of the line is the formula to use to send the notification. It is a simple invocation of the mail command, specifying a subject line (-s "CVS notification"). The %s is a placeholder that CVS replaces with the address of the notification's intended recipient. The actual notification text, generated by CVS, is piped into the mail command via STDIN.

users

The users file contains a list of notification recipient addresses:

    dj:dj.adams@pobox.com
    piers:pxharding@ompa.net
    robert:robert@shiels.com
    ...

This is a mapping from the user IDs (dj, piers, and robert) of the CVS participants, local to the host where the CVS repository is stored, to the addresses (dj.adams@pobox.com, pxharding@ompa.net, and robert@shiels.com) that are used to replace the %s in the formula described in the notify file.

The notification

If the contents of the notify and users files have been set up correctly, a typical notification, set up by DJ using the cvs watch on file4 and cvs watch add file4 commands, and triggered by Piers using the cvs edit file4 command, will be received in DJ's inbox looking like the one shown in Example 8-1.
Example 8-1. A typical email CVS notification

    Date: Fri, 8 Jun 2001 13:10:55 +0100
    From: piers@ompa.net
    To: dj.adams@pobox.com
    Subject: CVS notification

    testproject file4
    ---
    Triggered edit watch on /usr/local/cvsroot/testproject
    By piers

CVS Notifications via Jabber

While email-based notifications are useful, we can add value to this process by using a more immediate (and penetrating) form of communication: Jabber. Although mail clients can be configured to check for mail automatically on a regular basis, using an IM-style client has a number of immediately obvious advantages:

- It's likely to take up less screen real estate.
- No amount of tweaking of the mail client's autocheck frequency (which, if available, will log in, check for, and pull emails from the mail server) will match the immediacy of IM-style message push.
- In extreme cases, the higher the autocheck frequency of the mail client, the higher the effect on overall system performance.
- Depending on the configuration, an incoming Jabber message can be made to pop up, with greater effect.
- A Jabber user is more likely to have a Jabber client running permanently than an email client.
- It's more fun!

The design of CVS's notification mechanism is simple and abstract enough for us to put an alternative notification system in place. If we substitute the formula in the notify configuration file with something that will call a Jabber script, we might end up with something like:

    ALL python cvsmsg %s

Like the previous formula, it will be invoked by CVS to send the notification, and the %s will be substituted by the recipient's address determined from the users file. In this case, the Python script cvsmsg is called. However, now that we're sending a notification via Jabber, we need a Jabber address—a JID—instead of an email address. No problem, just edit the users file to reflect the new addresses. Example 8-2 shows what the users file might contain if we were to use JIDs instead of email addresses.
Example 8-2. Matching users to JIDs in the users file

    dj:dj@gnu.pipetree.com
    piers:piers@jabber.org
    robert:shiels@jabber.org

As Jabber user JIDs in their most basic form (i.e., without a resource suffix) resemble email IDs, there doesn't appear to be that much difference. In any case, CVS doesn't really care; it takes the portion following the colon separator and simply passes it to the formula in the notify file.

The cvsmsg Script

Let's now have a look at the script, called cvsmsg. It has to send a notification message, which it receives on STDIN, to a JID, which it receives as an argument passed to the script, as shown in Example 8-3.

Example 8-3. The cvsmsg Python script

    import jabber
    import sys

    Server   = 'gnu.pipetree.com'
    Username = 'cvsmsg'
    Password = 'secret'
    Resource = 'cvsmsg'

    cvsuser = sys.argv[1]

    message = ''
    for line in sys.stdin.readlines():
        message = message + line

    con = jabber.Client(host=Server)
    try:
        con.connect()
    except IOError, e:
        print "Couldn't connect: %s" % e
        sys.exit(0)

    con.auth(Username, Password, Resource)

    con.send(jabber.Message(cvsuser, message, subject="CVS Watch Alarm"))

    con.disconnect()

It's not that long but worth breaking down to examine piece by piece. We're going to use the Jabberpy Python library for Jabber, so the first thing we do in the script is import it. We also import the sys module for reading from STDIN:

    import jabber
    import sys

As the usage of the script will be fairly static, we can get away here with hardcoding a few parameters:

    Server   = 'gnu.pipetree.com'
    Username = 'cvsmsg'
    Password = 'secret'
    Resource = 'cvsmsg'

Specified here are the connection and authentication details for the cvsmsg script itself. If it's to send a message via Jabber, it must itself connect to Jabber. The Server variable specifies which Jabber server to connect to, and the Username, Password, and Resource variables contain the rest of the information for the script's own JID (cvsmsg@gnu.pipetree.com/cvsmsg) and password.

    cvsuser = sys.argv[1]

    message = ''
    for line in sys.stdin.readlines():
        message = message + line

The sys.argv[1] refers to the notification recipient's JID, which will be specified by the CVS notification mechanism, as it is substituted for the %s in the notify file's formula. This is saved in the cvsuser variable.
We then build up the content of our message body we're going to send via Jabber by reading what's available on STDIN. Typically this will look like what we saw in the email message body in Example 8-1:

    testproject file4
    ---
    Triggered edit watch on /usr/local/cvsroot/testproject
    By piers

    con = jabber.Client(host=Server)

Another Jabberpy module, xmlstream, handles the connection to the Jabber server. We don't have to use that module explicitly, however; the jabber module wraps and uses it, shielding us from the details—hence the call to instantiate a new jabber.Client object into con, to lay the way for our connection to the host specified in our Server variable: gnu.pipetree.com. If no port is explicitly specified, the standard port (5222), on which the c2s service listens, is assumed. The instantiation causes a number of parameters and variables to be initialized, and internally an xmlstream.Client object is instantiated; various parameters are passed through from the jabber.Client object (for example, for logging and debugging purposes), and an XML parser object is instantiated. This will be used to parse fragments of XML that come in over the XML stream.

    try:
        con.connect()
    except IOError, e:
        print "Couldn't connect: %s" % e
        sys.exit(0)

A connection is attempted with the connect() method of the connection object in con. This is serviced by the xmlstream.Client object, and an XML stream header, as described in Section 5.3, is sent to gnu.pipetree.com:5222 in an attempt to establish a client connection. An IOError exception is raised if the connection cannot be established; we trap this, after a fashion, with the try: ... except as shown.
Once connected (meaning the client has successfully exchanged XML stream headers with the server) we need to authenticate:

    con.auth(Username, Password, Resource)

The auth method of the jabber.Client object provides us with a simple way of carrying out the authentication negotiation, qualified with the jabber:iq:auth namespace and described in detail in Section 7.3. Although we supply our password here in the script in plaintext (secret), the auth method will use the IQ-get (<iq type='get'...>) to retrieve a list of authentication methods supported by the server. It will try to use the most secure, "gracefully degrading" to the least, until it finds one that is supported. This is shown in Figure 8-1.

Note the presence of Resource in the call. This is required for a successful client authentication regardless of the authentication method. Sending an IQ-set (<iq type='set'...>) in the jabber:iq:auth namespace without specifying a value in a <resource/> tag results in a "Not Acceptable" error 406; see Table 5-3 for a list of standard error codes and texts.

We're connected and authenticated. "The world is now our lobster," as an old friend used to say. We're not necessarily expecting to receive anything at this stage, and even if we did, we wouldn't really want to do anything with what we received anyway. So we don't bother setting up any mechanism for handling elements that might appear on the stream.

    con.send(jabber.Message(cvsuser, message, subject="CVS Watch Alarm"))

The next step is to send the notification message (in message) to the user (in cvsuser). There are actually two calls here.
The innermost call, jabber.Message(), creates a simple message element that looks like this:

    <message to='[value in cvsuser variable]'>
      <subject>CVS Watch Alarm</subject>
      <body>[value in message variable]</body>
    </message>

It takes two positional (and required) parameters; any other information to be passed (such as the subject in this example) must be supplied as key=value pairs. The outermost call, con.send(), sends whatever it is given over the XML stream that the jabber.Client object con represents. In the case of the jabber.Message call, this is the string representation of the object so created (i.e., the <message/> element).

Once the notification message has been sent, the script's work is done. We can therefore disconnect from the server before exiting the script:

    con.disconnect()

Calling the disconnect() method of the jabber.Client sends an unavailable presence element to the server on behalf of the user who is connected:

    <presence type='unavailable'/>

This is sent regardless of whether a <presence/> element was sent during the conversation but does no harm if one wasn't. After sending the unavailable presence information, the XML stream is closed by sending the stream's closing tag:

    </stream:stream>

This signifies to the server that the client wishes to end the conversation. Finally, the socket is closed.

Dialup System Watch

These days, it's becoming increasingly common to have a server at home with a dialup connection to the Internet. Your data, your latest developments, and your mail are stored on there. This works really well when you're telecommuting and pulling those late night hacking sessions at home; you have access to all your information and can connect to the Net. For many people, however, the reality is that it's not just at home where the work gets done. Consultants, freelancers, and people with many customers have their work cut out for them in traveling to different sites to complete jobs.

One of the biggest issues in this respect, especially in Europe where dialup and pay-per-minute connections still outweigh fixed or flat-rate connections, is the accessibility of the information on the server at home, sitting behind a modem. In a lot of cases, the expense of leaving the server dialed up for the duration of the trip is far too great to be justified.
One of the biggest issues in this respect, especially in Europe where dialup and pay-per-minute connections still outweigh fixed or flat-rate connections, is the accessibility of the information on the server at home, sitting behind a modem. In a lot of cases, the expense of leaving the server dialed up for the duration of the trip is far too great to be justified. One solution is to have the server dial up and connect to the Internet at regular intervals, say, every hour or two, and remain connected for 5 or 10 minutes. If you need access to the information or need to log on to your server and run a few tests, you can hold the connection open, once you've connected to it, by running a ping, for example. The problem here, though, is timing. Due to the inevitable synchronization problems between wristwatch and PC clock, eddies in the space-time continuum, and the fact that people simply forget to check the time, the online window of the server's dialup is often missed. The essence of this problem is a presence thing. We need to know about the presence, the availability, of our server at home, with respect to the Internet. Using Jabber as your IM mechanism at work, it's likely that you'll have a Jabber client of some sort on your laptop or desktop at the customer sites. Whether it's WinJab on Windows, Jarl in Command Line Interface (CLI) mode on a remote server over an SSH connection, or any other type of Jabber client and connection, the point is that the client turns out to be an ideal ready-made component for solving the dialup timing problem. Here's how it works: - Get the server to dial up and connect to the Internet regularly. * On connection, start a script that sends Jabber presence to you. * On disconnection, get the script to end. If you add to your roster a JID that represents the server at home, it would be possible to subscribe to the server's presence and know when it was available—connected to the Internet—and when it wasn't. 
The script we're going to write to send Jabber presence is called HostAlive.

Making Preparations for Execution

Before diving into the script, it's necessary to do a bit of preparation. We're going to be using the presence subscription concept, which was described in Chapter 5 and is covered in more detail in the next section in this chapter. We're also going to have to get the script to run, and stay running, when the dialup connection is made and have it stop when the dialup connection is ended.

Presence

Rather than get involved in the nitty-gritty of presence subscriptions right now, let's use the tools that are around us to get things set up. In order for this to work, we need to be subscribed to the presence of the script that will be invoked when the server dials up and connects to the Internet. The script will connect to the Jabber server using a JID with a username that represents the Linux server: myserver@gnu.pipetree.com. My JID in this case is dj@gnu.pipetree.com, so we just use whatever Jabber client happens to be at hand, say, Jabber Instant Messenger (JIM), to effect both sides of the subscription.

Step 1: Create JID myserver@gnu.pipetree.com

We need to create the script's JID if it doesn't already exist. We can use the reguser script we wrote in Section 7.4 to do this:

    [dj@yak dj]$ ./reguser gnu.pipetree.com username=myserver password=secret
    [Attempt] (myserver) Successful registration
    [dj@yak dj]$

Step 2: Add myserver@gnu.pipetree.com to the roster

We start JIM with the JID dj@gnu.pipetree.com and then add myserver@gnu.pipetree.com to the roster. This should automatically send a presence subscription request to the JID. Adding the JID to the roster using JIM is shown in Figure 8-2.

Step 3: Accept presence subscription as myserver

Using the JIM client, we reconnect with the myserver JID and accept the presence subscription request from Step 2, so that dj@gnu.pipetree.com will automatically receive myserver@gnu.pipetree.com's availability information.
Whether or not myserver subscribes to dj's presence is irrelevant in this case, as the script itself is not interested in the availability of anyone at all.

At this stage, the entry in dj@gnu.pipetree.com's roster that represents the Linux server will indicate whether the script run at dialup time is active. If we continue to use the JIM client, we will see that active status is shown by a yellow bulb and inactive by no icon at all.

Starting and stopping the script

The dialup connection is set up using the Point-to-Point Protocol daemon pppd. This uses a program such as chat to talk to the modem and get it to dial the ISP. The pppd mechanism affords us an ideal way to start and stop a script on the respective connection and disconnection of the line. When the connection has been made, the script /etc/ppp/ip-up is invoked and passed a number of connection-related parameters. Similarly, /etc/ppp/ip-down is invoked when the connection is closed. Some implementations of pppd also offer /etc/ppp/ip-up.local and /etc/ppp/ip-down.local, which should be used in place of the ip-up and ip-down scripts if they exist. These .local versions are intended to separate out system-specific connection-related activities from general connection-related activities, in a similar way to how the rc.local file allows system-specific startup activities to be defined in the /etc/rc.d/ Unix System V set of runlevel directories.

So what we want to do is start HostAlive with ip-up[.local] and stop it with ip-down[.local]. What these starter and stopper scripts might look like is shown in Example 8-4 and Example 8-5. They are simply shell scripts that share the process ID (PID) of the Jabber script via a temporary file. The starter starts the Jabber script and writes the PID of that script to a file. The stopper kills the script using the PID.
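The handoff the two scripts in Example 8-4 and Example 8-5 implement — write the child's PID on the way up, kill whatever PID was recorded on the way down — can be sketched and exercised in Python (POSIX semantics assumed; the `sleep` command stands in for the long-running HostAlive process):

```python
import os
import signal
import subprocess

PIDFILE = "/tmp/HostAlive.pid"  # the same handoff file the shell scripts use

def start(cmd):
    """The ip-up half: launch the notifier and record its PID."""
    proc = subprocess.Popen(cmd)
    with open(PIDFILE, "w") as f:
        f.write(str(proc.pid))
    return proc

def stop():
    """The ip-down half: kill whatever PID the starter recorded,
    then remove the PID file."""
    with open(PIDFILE) as f:
        pid = int(f.read())
    os.kill(pid, signal.SIGTERM)
    os.remove(PIDFILE)

proc = start(["sleep", "60"])   # stand-in for HostAlive
stop()
proc.wait()                     # reap the terminated child
```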
Example 8-4. An ip-up starter script

    #!/bin/sh

    # Change to working directory
    cd /jabber/java/

    # Call the Jabber script and put to background
    /usr/java/jdk1.3.1/bin/java -classpath jabberbeans.jar:. HostAlive $5 &

    # Write the running script's PID
    echo $! > /tmp/HostAlive.pid

Example 8-5. An ip-down stopper script

    #!/bin/sh

    # Simply kill the process using the PID written by the starter script
    /bin/kill `cat /tmp/HostAlive.pid`

    # Remove the PID file
    /bin/rm /tmp/HostAlive.pid

Example 8-4 shows that we're passing through one of the parameters that pppd gives to the ip-up script: the remote IP address—by which the server is known during its temporary connection to the Internet—in the $5 variable.[1] This IP address can be passed along as part of the availability information in the <presence/> element, so that the recipient (dj) can see what IP address has been assigned to the server.

The HostAlive Script

As you might have guessed from looking at Example 8-4, we're going to write HostAlive in Java, shown in Example 8-6. We'll use the JabberBeans library; see Section P.4 in the Preface for details of where to get this library and what the requirements are.

Example 8-6. The HostAlive script, written in Java

    import org.jabber.jabberbeans.*;
    import org.jabber.jabberbeans.Extension.*;
    import java.net.InetAddress;

    public class HostAlive {

        public static final String SERVER   = "gnu.pipetree.com";
        public static final String USER     = "myserver";
        public static final String PASSWORD = "secret";
        public static final String RESOURCE = "alive";

        public static void main(String argv[]) {

            ConnectionBean cb = new ConnectionBean();

            PresenceBuilder pb = new PresenceBuilder();
            pb.setStatus(argv[0]);
            try {
                cb.send(pb.build());
            } catch (InstantiationException e) {
                System.out.println("Fatal Error on Presence object build:");
                System.out.println(e.toString());
                return;
            }

            while (true) {
                try {
                    Thread.sleep(9999);
                } catch (InterruptedException e) {
                    System.out.println("timeout!");
                }
            }
        }
    }
We start by importing the libraries (the classes) we would like to use: import org.jabber.jabberbeans.*; import org.jabber.jabberbeans.Extension.*; import java.net.InetAddress; The JabberBeans library is highly modular and designed so we can pick only the features that we need; in this case, however, we're just going to import the whole set of classes within the org.jabber.jabberbeans and org.jabber.jabberbeans.Extension packages, for simplicity. We're also going to be manipulating the Jabber server's hostname, so we pull in the InetAddress class for convenience. The script must connect to the Jabber server on gnu.pipetree.com as the myserver user. We define some constants for this: public class HostAlive { public static final String SERVER = "gnu.pipetree.com"; public static final String USER = "myserver"; public static final String PASSWORD = "secret"; public static final String RESOURCE = "alive"; In the same way as with the Python-based CVS notification script earlier in this chapter, we also start off by building a connection to the Jabber server. As before, it's a two-stage process. The first stage is to create the connection object: public static void main(String argv[]) { ConnectionBean cb=new ConnectionBean(); A ConnectionBean object represents the connection between the script and the Jabber server. All XML fragments (Jabber elements) pass through this object. Then it's time to attempt the socket connection and the exchange of XML stream headers:; } We create an Internet address object in addr from the hostname assigned to the SERVER constant. As the creation of the addr instance may throw an exception (Unknown Host), we combine the instantiation with the connection() call on the ConnectionBean object, which may also throw an exception of its own—if there is a problem connecting. At this stage, we're connected and have successfully exchanged the XML stream headers with the Jabber server. So now we must authenticate:); Yes, that's an awful lot. 
Let's take it bit by bit. Figure 8-3 shows how the objects in this section of code interrelate and represent various parts of what we're trying to do—which is to construct an authorization packet. This takes the form of an IQ-set containing a <query/> tag qualified by the jabber:iq:auth namespace, like this:[2]

```xml
<iq type='set'>
  <query xmlns='jabber:iq:auth'>
    <username>myserver</username>
    <password>secret</password>
    <resource>alive</resource>
  </query>
</iq>
```

Constructing Jabber elements with the JabberBeans library uses so-called builders that allow individual element components to be created separately and then fused together into a final structure. In the code, we use two builders: an InfoQueryBuilder to construct the <iq/> envelope and an IQAuthBuilder to construct the <query/> content.

Taking the code step by step, we create or declare each of the three things, iqb, iq, and iqAuthb:

```java
InfoQueryBuilder iqb = new InfoQueryBuilder();
InfoQuery iq;
IQAuthBuilder iqAuthb = new IQAuthBuilder();
```

- iqb: the builder object with which we can build <iq/> elements.
- iq: the <iq/> element that we're going to build.
- iqAuthb: another builder object with which we can build IQ extensions (<query/> tags) qualified by the jabber:iq:auth namespace.

The process of creating the authorization packet is detailed in Figure 8-3.
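Outside JabberBeans, the same jabber:iq:auth element can be assembled with any XML library. Here's a sketch using Python's standard xml.etree.ElementTree; this is our illustration, not part of the book's Java example:

```python
import xml.etree.ElementTree as ET

def build_auth_iq(username, password, resource):
    """Assemble an IQ-set carrying a jabber:iq:auth query."""
    iq = ET.Element('iq', {'type': 'set'})
    query = ET.SubElement(iq, 'query', {'xmlns': 'jabber:iq:auth'})
    for tag, value in (('username', username),
                       ('password', password),
                       ('resource', resource)):
        ET.SubElement(query, tag).text = value
    return ET.tostring(iq, encoding='unicode')

print(build_auth_iq('myserver', 'secret', 'alive'))
```

The builder pattern in JabberBeans buys you the same thing this helper does by hand: the envelope and the namespace-qualified payload are assembled separately, then fused.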
Figure 8-3 also numbers the steps; these follow what happens in the rest of the authentication preparation:

Step 1: Set the type attribute of the IQ. We call the setType() method on the iqb object that represents the outer IQ envelope to set the value of the type attribute:

```java
iqb.setType("set");
```

Step 2: Set the values for the parts of the authorization section of the element. Having constructed the iqAuthb object, which represents the <query/> portion of the element, we fill in the values with these calls:

```java
iqAuthb.setUsername(USER);
iqAuthb.setPassword(PASSWORD);
iqAuthb.setResource(RESOURCE);
```

Steps 3a and 3b: Generate iqAuthb and add it to the iqb object. Once the values inside the authorization <query/> tag are set, we can call the build() method on the object representing that tag in iqAuthb to generate an extension object (in other words, to assemble the tag) that can then be attached to the iqb object using the addExtension() method:

```java
try
{
    iqb.addExtension(iqAuthb.build());
}
...
```

Step 4: Generate iqb and assign it to the iq object. In the same way that we generated the authorization <query/> tag, we can generate the whole element and assign it to iq:

```java
try
{
    // build the full InfoQuery packet
    iq = (InfoQuery)iqb.build();
}
...
```

Once we've constructed the authorization element, now held as the iq object, we can send it down the stream to the Jabber server with the send() method of the ConnectionBean object cb:

```java
cb.send(iq);
```

Finally, once we've authenticated, we can construct the presence packet and send it using the same technique as before.[3] We construct a new object to represent the presence packet denoting general availability—<presence/>:

```java
PresenceBuilder pb = new PresenceBuilder();
```

In this case, there are no namespace-qualified extensions to add to the <presence/> element, but we do want to add the IP address that was passed into the script and is available in argv[0].
We can use the setStatus() method on the presence object to set the optional <status/> to contain that IP address:

```java
pb.setStatus(argv[0]);
```

After this, we can go ahead and generate the element, which will look like this:

```xml
<presence>
  <status>123.45.67.89</status>
</presence>
```

After the generation with the build() call, we send it down the stream in the same way as the authorization <iq/> element:

```java
try
{
    cb.send(pb.build());
}
catch (InstantiationException e)
{
    System.out.println("Fatal Error on Presence object build:");
    System.out.println(e.toString());
    return;
}
```

As with each of the build() calls, we must trap a possible exception that build() throws if it can't complete (for example, due to lack of information). This is the InstantiationException.

We can see the results of myserver sending such an information-laden <presence/> element to dj in Figure 8-4. As the server connects to the Internet, the Java script is started via the ip-up script, and it relays the assigned IP address, which is shown in Jarl's status bar as the availability information reaches dj's client.

All that remains for the script to do now is to hang around. While the XML stream to the Jabber server remains, and the connection is not broken, its availability will remain as described by the simple <presence/> element we sent. So we simply go into a sort of hibernation. The script has no way of escaping this loop on its own; termination is taken care of by the ip-down script, as described earlier.

```java
while (true)
{
    try
    {
        Thread.sleep(9999);
    }
    catch (InterruptedException e)
    {
        System.out.println("timeout!");
    }
}
```

In fact, when the ip-down script kills the script, the socket connection will be closed, but there was no clean disconnect—no <presence type='unavailable'/> was sent by the script to the Jabber server. In this case, the Jabber server will notice that the socket was closed and generate an unavailable <presence/> element on behalf of the client.
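An alternative to being killed outright would be for the script to catch the termination signal and tidy up itself, sending its own unavailable presence before exiting. A hedged sketch of the pattern in Python; the send_unavailable and disconnect callables stand in for whatever your Jabber library provides and are not part of the book's example:

```python
import signal

class GracefulShutdown:
    """Invoke cleanup callables, in order, when SIGTERM arrives."""
    def __init__(self, *cleanups):
        self.cleanups = cleanups
        self.done = False
        signal.signal(signal.SIGTERM, self._handle)

    def _handle(self, signum, frame):
        # e.g., first send <presence type='unavailable'/>, then close the stream
        for fn in self.cleanups:
            fn()
        self.done = True

# Hypothetical usage, assuming library-provided callables:
# GracefulShutdown(send_unavailable, disconnect)
```

With this in place, the ip-down script's kill would trigger a clean disconnect rather than relying on the server to synthesize the unavailable presence.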
Presence-Sensitive CVS Notification In Section 8.1 early in this chapter, we replaced the email-based CVS notification mechanism with a Jabber-based one. The script used was extremely simple—it connected to the Jabber server specified, authenticated, and sent off the notification message to the recipient JID. What if we wanted to make the script "sensitive"? Jabber's presence concept could help us here; if we extended the mechanism to allow for the building of presence-based relationships between the notification script and the notification recipients, we can make the sending of the notification message dependent on the recipient's availability. "Presence-based relationships" refers to the presence subscription mechanism described in Section 5.4.2.3. Here's how it would work: - Each potential recipient adds the JID used by the CVS notification script to his roster and sends a subscription request to it.[4] - The notification script, called cvsmsg-s ("cvsmsg-sensitive"), on receipt of the presence subscription from a recipient, accepts the request and reciprocates by sending a subscription request back to that recipient. - On receipt of the presence subscription from the notification script, the recipient accepts the request. - When the notification script starts up to send a message, it announces its own availability with a <presence/> element, which causes the availability of the JIDs to which it has a presence subscription to be sent to it. Based on these <presence/> packets received, it can make a decision as to whether to send the notification message or not. - The decision we're going to use here is an arbitrary one: if the recipient is online, we'll send the message, unless he's specified that he doesn't want to be disturbed, with the <show>dnd</show> element. Subscription Relationships This method will result in "balanced" subscription relationships between script and recipients. 
In other words, the script is subscribed to a recipient's presence, and vice versa. Of the two presence subscription "directions," the one where the notification script subscribes to the recipient's presence (as opposed to the one where the recipient subscribes to the notification script's presence) is by far the more important. While it's not critical that the recipients know when the notification script is connected and active, it's essential that the notification script know about a recipient's availability at the time it wants to send a message. So would it be more appropriate to create "unbalanced" subscription relationships? An unbalanced relationship is one where one party knows about the other party's availability but not vice versa. The idea for sensitizing the notification script will work as long as the script can know about the availability of the recipients. Whether or not the opposite is true is largely irrelevant. Nevertheless, it's worth basing the interaction on balanced, or reciprocal, presence subscriptions, primarily for simplicity's sake, and also because most Jabber clients (and most users of these clients) tend to cope well and consistently with balanced subscriptions, whereas the representation and interpretation of unbalanced relationships are dealt with and understood in differing ways. Some clients use a lurker group to classify one-way presence subscriptions from other JIDs (a "lurker" being one that can see you while you can't see it). Far from being nebulous concepts, balanced and unbalanced subscription relationships are characterized technically by values of a certain attribute specified in each item—each JID—in a roster: the subscription attribute of the <item/> tags within the roster. As we progress through the extensions to the CVS notification script, we'll be examining these values at various stages in this recipe description in Section 8.3.3.
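The subscription attribute values just mentioned (none, to, from, both) form a small state machine; here is a toy model of how one party's roster item evolves as subscription events arrive. This is our illustration of the semantics, not jabberd's actual mod_roster logic:

```python
def next_subscription(current, event):
    """Return the new subscription state for a roster item.

    current: 'none', 'to', 'from', or 'both'
    event:   'subscribed_in' -> the other party accepted our request
             'subscribe_in'  -> the other party's request, which we accept
    """
    has_to = current in ('to', 'both')      # we see their presence
    has_from = current in ('from', 'both')  # they see ours
    if event == 'subscribed_in':
        has_to = True
    elif event == 'subscribe_in':
        has_from = True
    return {(False, False): 'none', (True, False): 'to',
            (False, True): 'from', (True, True): 'both'}[(has_to, has_from)]

# dj adds the script: the script accepts, so dj's item goes to 'to'; when dj
# accepts the script's reciprocal request, dj's item ends at 'both'
state = 'none'
state = next_subscription(state, 'subscribed_in')   # 'to'
state = next_subscription(state, 'subscribe_in')    # 'both'
```

The walkthrough below shows the same progression in the actual roster <item/> XML.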
Anthropomorphism

It's worth pointing out at this stage that adding a JID that's used by a script to connect to Jabber is slightly symbolic of the extension of the instant messaging world into the wider arena of A2P messaging. Adding a service JID to your roster and sharing presence information with that service immediately widens the scope of what's possible with a humble instant messaging client, and blurs the boundaries between people and applications.

The cvsmsg-s Script

The script, as it stands in Section 8.1.3, is what we want to extend and make sensitive to presence. Example 8-7 shows the extended script, cvsmsg-s; the sections that follow walk through the additions.

Example 8-7. The cvsmsg-s script

```python
import jabber
import sys
from string import split

Server   = 'gnu.pipetree.com'
Username = 'cvsmsg'
Password = 'secret'
Resource = 'cvsmsg'

cvsuser = sys.argv[1]
message = ''

def presenceCB(con, prs):
    type = prs.getType()
    parts = split(prs.getFrom(), '/')
    who = parts[0]
    if type == None:
        type = 'available'

    # Subscription request:
    # - Accept their subscription
    # - Send request for subscription to their presence
    if type == 'subscribe':
        print "subscribe request from %s" % (who)
        con.send(jabber.Presence(to=who, type='subscribed'))
        con.send(jabber.Presence(to=who, type='subscribe'))

    # Unsubscription request:
    # - Accept their unsubscription
    # - Send request for unsubscription to their presence
    elif type == 'unsubscribe':
        print "unsubscribe request from %s" % (who)
        con.send(jabber.Presence(to=who, type='unsubscribed'))
        con.send(jabber.Presence(to=who, type='unsubscribe'))

    elif type == 'subscribed':
        print "we are now subscribed to %s" % (who)

    elif type == 'unsubscribed':
        print "we are now unsubscribed to %s" % (who)

    elif type == 'available':
        print "%s is available (%s/%s)" % (who, prs.getShow(), prs.getStatus())
        if prs.getShow() != 'dnd' and who == cvsuser:
            con.send(jabber.Message(cvsuser, message, subject="CVS Watch Alarm"))

    elif type == 'unavailable':
        print "%s is unavailable" % (who)

for line in sys.stdin.readlines():
    message = message + line

con = jabber.Client(host=Server)
try:
    con.connect()
except IOError, e:
    print "Couldn't connect: %s" % e
    sys.exit(0)

con.auth(Username, Password, Resource)
con.setPresenceHandler(presenceCB)
con.requestRoster()
con.sendInitPresence()
for i in range(5):
    con.process(1)
con.disconnect()
```

Taking the cvsmsg-s Script Step by Step

Now it's time to examine the script step by step. We'll concentrate mostly on the additions to the original cvsmsg script. We bring in a string function that we'll be needing later in the script to chop up JIDs into their component parts (username, hostname, and resource):

```python
import jabber
import sys
from string import split

Server   = 'gnu.pipetree.com'
Username = 'cvsmsg'
Password = 'secret'
Resource = 'cvsmsg'

cvsuser = sys.argv[1]
message = ''
```

Presence callback

The next addition to the script is a callback to handle <presence/> elements. The callback in this script takes the form of a subroutine called presenceCB() ("presence callback"). Callbacks, in relation to programming with Jabber, are explained in Section 8.3.4. The callback for handling <presence/> elements is the presenceCB() subroutine shown in Example 8-7.

Phew! Let's take it a bit at a time. The first thing to note is what's specified in the subroutine declaration:

```python
def presenceCB(con, prs):
```

As a handler, the subroutine presenceCB() will be passed the connection object in con, and the presence node in prs.
con is the same connection object that is created later in the script (con = jabber.Client(host=Server)) and is passed in for convenience, as it's quite likely we're going to want to use it, say, to send something back over the stream. The presence node in prs is an object representation of the XML fragment that came in over the stream and was parsed into its component parts. The object is an instance of the jabber.Presence class, which is simply a specialization of the more generic jabber.Protocol class, as are the other classes that represent the other two Jabber protocol elements that are to be expected: jabber.Message and jabber.Iq. The jabber.Protocol class represents protocol elements in general.

As such, there are a number of <presence/> element-specific methods we can call on the prs object, such as getShow() and getStatus() (which return the values of the <show/> and <status/> tags—children of the <presence/> element—respectively) and general element methods such as getID(), which returns the value of any id attribute assigned to the element, and setTo(), which can be used to address the element—to set the value of the to attribute.

The first thing the handler does is to call a few of these element methods to determine the type of <presence/> element (presence types are described in Section 5.4.2), and who it's coming from:

```python
type = prs.getType()
parts = split(prs.getFrom(), '/')
who = parts[0]
```

When the notification script is called, the JID found in the CVS users file is substituted for the %s in the formula contained in the CVS notify file. So if the user dj were to be notified, the JID passed to the script would be dj@gnu.pipetree.com. The way JIDs are passed around independently of the context of a Jabber session is usually in the simpler form—username@hostname, that is, without the resource suffix—username@hostname/resource. As described in Chapter 5, the resource is primarily used to distinguish individual sessions belonging to one Jabber user.
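The resource-stripping step done with split() above can be isolated as a tiny helper, shown here independently of the jabber library (the function name is ours):

```python
def bare_jid(jid):
    """Reduce username@hostname/resource to username@hostname.

    JIDs without a resource suffix pass through unchanged.
    """
    return jid.split('/', 1)[0]

print(bare_jid('dj@gnu.pipetree.com/work'))   # dj@gnu.pipetree.com
print(bare_jid('dj@gnu.pipetree.com'))        # dj@gnu.pipetree.com
```

Splitting at most once matters: anything after the first slash is the resource, and resources may themselves contain slashes.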
But when the Jabber library—and subsequently a handler subroutine in the script—receives an element, it contains a from attribute whose value has been stamped by the Jabber server as it passes through. The value represents the session, the connection, from which the <presence/> element was sent and, as such, includes a resource suffix. So in order to properly match up the source JID for any incoming <presence/> element with the JID specified when the script was invoked (contained in the cvsuser variable), we need to strip off this resource suffix. The remaining username@hostname part is captured in the who variable.

There's one more step to determine the presence type. The type attribute is optional; its absence signifies the default presence type, which is available. So we effect this default substitution here to make the subsequent code clearer:

```python
if type == None:
    type = 'available'
```

At this stage, we want to take different actions depending on what sort of presence information has arrived. Recalling the sequence of events in the reciprocal presence subscription exchange described earlier in this chapter, one of the activities is for a potential notification recipient to subscribe to the presence of the script's JID. This subscription request is carried in a <presence/> element, with a type of subscribe. Example 8-8 shows what a typical subscription request would look like.

Example 8-8. A presence subscription request from dj@gnu.pipetree.com

```xml
<presence type='subscribe'
          to='cvsmsg@gnu.pipetree.com'
          from='dj@gnu.pipetree.com/work'/>
```

At this stage, dj@gnu.pipetree.com has just sent a request to subscribe to the script's presence.
The subscription relationship between the two parties is nondescript, and this is reflected in the details of the item in dj's roster that relates to the script's JID:

```xml
<item jid='cvsmsg@gnu.pipetree.com' subscription='none' ask='subscribe'/>
```

The relationship itself is reflected in the subscription attribute, and the current state of the relationship is reflected in the ask attribute.

If a subscription request is received, we want the script to respond by accepting the subscription request. Once the request has been accepted, a presence subscription request is made in return. This incoming subscription request is handled here:

```python
# Subscription request:
# - Accept their subscription
# - Send request for subscription to their presence
if type == 'subscribe':
    print "subscribe request from %s" % (who)
    con.send(jabber.Presence(to=who, type='subscribed'))
    con.send(jabber.Presence(to=who, type='subscribe'))
```

Each call to the jabber.Presence class constructor creates a node representing a <presence/> element. The two parameters passed in the call are fairly self-explanatory: we specify to whom the <presence/> element should be sent, and the type. If the presence subscription request came in from the JID dj@gnu.pipetree.com, then the XML represented by the node created in the first call here (specifying a presence type of subscribed) would look something like that in Example 8-9.

Example 8-9. Acceptance of a presence subscription request from dj@gnu.pipetree.com

```xml
<presence type='subscribed' to='dj@gnu.pipetree.com'/>
```

Addressing <presence/> Elements

It's worth pointing out here that there's a subtle difference between sending <presence/> elements in a presence subscription conversation and sending general "availability" <presence/> elements. In the first case, we use a to attribute, because our conversation is one-to-one.
In the second, we don't; our unaddressed availability information is caught by the server and in turn sent on to those entities that are subscribed to your presence. Although you can send <presence/> elements that convey availability information directly to a JID, it's not normal. However, explicitly addressing the elements in a subscription scenario is essential. There's another situation in which such "directed" (explicitly addressed) <presence/> elements are used—to partake of the services of the availability tracker. This is described in Section 5.4.2.4.

Once constructed, each of the jabber.Presence nodes is sent back along the stream with the con.send() calls.

Now that the script has accepted dj's subscription request, dj's roster item for the script reflects the new relationship:

```xml
<item jid='cvsmsg@gnu.pipetree.com' subscription='to'/>
```

subscription='to' denotes that the subscription relationship is currently one way—dj has a subscription to the script. There's no ask attribute, as there's no current request going from dj to the script.

While dj's roster item for the script shows a subscription value of to, the script's roster item for dj shows a subscription value of from:

```xml
<item jid='dj@gnu.pipetree.com' subscription='from' ask='subscribe'/>
```

which shows that the script has a subscription from dj. Furthermore, remember that the script not only accepts dj's subscription request, it sends a reciprocal one of its own (hence the ask='subscribe' status in the item).
When dj accepts this request, the roster item changes yet again to reflect the balanced relationship:

```xml
<item jid='cvsmsg@gnu.pipetree.com' subscription='both'/>
```

We want the script to handle requests to unsubscribe from its presence in the same way:

```python
# Unsubscription request:
# - Accept their unsubscription
# - Send request for unsubscription to their presence
elif type == 'unsubscribe':
    print "unsubscribe request from %s" % (who)
    con.send(jabber.Presence(to=who, type='unsubscribed'))
    con.send(jabber.Presence(to=who, type='unsubscribe'))
```

The only difference between this section and the previous one is that it deals with requests to unsubscribe as opposed to subscribe to presence. Otherwise it works in exactly the same way. A sequence of <presence/> elements used in an "unsubscription conversation" between dj and the script, and the changes to the roster <item/> tags on each side, is shown in Figure 8-5.

While we must take action on presence types subscribe and unsubscribe, we don't really need to do anything for their acknowledgment counterparts: subscribed and unsubscribed ("I have accepted your request, and you are now subscribed/unsubscribed to my presence").
Nevertheless, just for illustration purposes, we'll include a couple of conditions to show what's going on when the script runs:

```python
elif type == 'subscribed':
    print "we are now subscribed to %s" % (who)

elif type == 'unsubscribed':
    print "we are now unsubscribed to %s" % (who)
```

Apart from the types of <presence/> element covering the presence subscription process, we should also expect the basic availability elements:

```xml
<presence>...</presence>
```

and

```xml
<presence type='unavailable'/>
```

It's an available <presence/> element that the functionality of the script hinges on:

```python
elif type == 'available':
    print "%s is available (%s/%s)" % (who, prs.getShow(), prs.getStatus())
    if prs.getShow() != 'dnd' and who == cvsuser:
        con.send(jabber.Message(cvsuser, message, subject="CVS Watch Alarm"))
```

This presenceCB() subroutine is set up to handle <presence/> elements. In a typical execution scenario, where the script is subscribed to the presence of many potential CVS notification recipients, the subroutine is going to be called to handle the availability information of all recipients who happen to be connected to Jabber at the moment of notification. We're interested in the availability information of only one particular recipient (who == cvsuser), and we want to check on the contents of the <show/> tag. If we get a match, we can send the notification message by creating a jabber.Message node that will look like this:

```xml
<message to='dj@gnu.pipetree.com'>
  <subject>CVS Watch Alarm</subject>
  <body>
testproject file4
--- Triggered edit watch on /usr/local/cvsroot/testproject
By piers
  </body>
</message>
```

As in the cvsmsg script, once created, the node can be sent with the con.send() method call.
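The notification decision itself is a pure predicate, easy to lift out and test in isolation. This is our refactoring, not part of the book's script:

```python
def should_notify(presence_type, show, sender, target):
    """Decide whether the CVS message should go out.

    Notify only when the presence conveys availability, it came from
    the target user, and the target is not in 'do not disturb' mode.
    """
    if presence_type not in (None, 'available'):
        return False
    return sender == target and show != 'dnd'

print(should_notify('available', None, 'dj@x', 'dj@x'))    # True
print(should_notify('available', 'dnd', 'dj@x', 'dj@x'))   # False
print(should_notify('available', None, 'piers@x', 'dj@x')) # False
```

Treating None the same as 'available' mirrors the default substitution the callback performs on the type attribute.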
Like the conditions for the presence subscription and unsubscription acknowledgments, we're including a final condition to deal with the case where a recipient disconnects from the Jabber server during the execution of the script: an unavailable <presence/> element will be sent:

```python
elif type == 'unavailable':
    print "%s is unavailable" % (who)
```

We're simply logging such an event for illustration purposes.

Connection and authentication

Most of the main part of the script is the same as the nonsensitive version from Section 8.1: reading in the notification message, preparing a connection to the Jabber server, and trying to connect:

```python
for line in sys.stdin.readlines():
    message = message + line

con = jabber.Client(host=Server)
try:
    con.connect()
except IOError, e:
    print "Couldn't connect: %s" % e
    sys.exit(0)

con.auth(Username, Password, Resource)
```

Registration of <presence/> handler

While we've defined the presenceCB() subroutine to handle <presence/> packets, we haven't actually told the Jabber library about it. The call to the setPresenceHandler() method of the connection object does this for us, performing the "Register handler" step shown in Figure 8-6. The steps shown in Figure 8-6 are described in Section 8.3.4.

```python
con.setPresenceHandler(presenceCB)
```

Request for roster

It's easy to guess what the next method call does:

```python
con.requestRoster()
```

It makes a request for the roster by sending an IQ-get with a query qualified by the jabber:iq:roster namespace:

```xml
<iq type='get' id='3'>
  <query xmlns='jabber:iq:roster'/>
</iq>
```

to which the server responds with an IQ-result:

```xml
<iq type='result' id='3'>
  <query xmlns='jabber:iq:roster'>
    <item jid='dj@gnu.pipetree.com' subscription='both'/>
    <item jid='piers@jabber.org' subscription='both'/>
    <item jid='shiels@jabber.org' subscription='both'/>
    ...
  </query>
</iq>
```

However, as there are no explicit references to the roster anywhere in the script, it's not as easy to guess why we request the roster in the first place.
We know that the client-side copy is merely a "slave" copy, and, even more relevant here, we know that subscription information in the roster <item/> tags is managed by the server—we as a client don't need to (in fact, shouldn't) do anything to maintain the subscription and ask attributes and keep them up to date. So why do we request it?

Basically, it's because there's a fundamental difference between <presence/> elements used to convey availability information and <presence/> elements used to convey presence subscription information. If John sends Jim availability information in a <presence/> element, whether directly (with a to attribute) or indirectly (through the distribution of that element by the server to Jim as a subscriber to John's presence), and Jim's offline on holiday, it doesn't make sense to store and forward the message to him when he next connects:

Jabber server: "Here's some availability information for John, dated 9 days ago."
Jim: "Who cares?"

The <presence/> elements conveying availability information are not stored and forwarded if they can't be delivered because the intended recipient is offline. What would be the point? However, <presence/> elements that convey subscription information are a different kettle of fish. While it's not important that a user is sent out-of-date availability information when he next connects to his Jabber client, any subscription (or unsubscription) requests or confirmations that were sent to him are important. So they need to be stored and forwarded.

As we've already seen, the presence subscription mechanism and rosters are inextricably linked. And if we look briefly under the covers, we see how this is so. When a presence subscription request is sent to a user, it runs the gauntlet of modules in the JSM (see Section 4.4.4 for details on what these modules are). The roster-handling module mod_roster grabs this request, and, just in case the recipient turns out not to be connected, stores it.
And here's how intertwined the presence subscription mechanism and rosters really are: the request is stored as a cluster of attribute details within an <item/> tag in the roster belonging to the recipient of the presence subscription request. It looks like this:

```xml
<item jid='user@hostname' subscription='none' subscribe='' hidden=''/>
```

On receipt of a presence subscription request, the mod_roster module will create the roster item if it doesn't exist already and then assign the attributes related to presence subscription—subscription='none' and subscribe=''—to it. There's no ask attribute, as this is assigned only to the item on the roster belonging to the sender, not the one belonging to the receiver, of the subscription request.

The subscribe attribute is used to store the reason for the request that, if specified, is carried in the <status/> tag of the <presence/> element that conveys the request. If no reason is given, the value for the attribute is empty, as shown here. Otherwise, it will contain what was stored in the <status/> tag. Example 8-10 shows a presence subscription request that carries a reason.

Example 8-10. A presence subscription request with a reason

```xml
<presence type='subscribe' to='dj@gnu.pipetree.com'>
  <status>I'd like to keep my eye on you!</status>
</presence>
```

The hidden attribute here:

```xml
<item jid='user@hostname' subscription='none' subscribe='' hidden=''/>
```

is used internally by mod_roster to mark the item as nondisplayable; it is effectively a pseudo <item/> that, when brought to life, actually turns out to be a <presence/> element. So when a request for the roster is made, mod_roster makes sure that it doesn't send these "hidden" items. The hidden attribute always has an empty value, as shown here.

After storing the subscription request, mod_roster will actually send the original <presence/> element that conveyed that request to the recipient—that is, if the recipient is online and if the recipient has already made a request for his roster.
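The store-and-forward rule described here — drop availability presence for offline users, but queue subscription-related presence — can be modeled in a few lines. This is a toy model of the behavior, not jabberd's implementation:

```python
class PresenceStore:
    """Queue only subscription-related presence for offline users."""
    QUEUED_TYPES = ('subscribe', 'subscribed', 'unsubscribe', 'unsubscribed')

    def __init__(self):
        self.pending = {}   # bare JID -> list of queued presence types

    def deliver(self, to_jid, ptype, online):
        if online:
            return 'delivered'
        if ptype in self.QUEUED_TYPES:
            self.pending.setdefault(to_jid, []).append(ptype)
            return 'stored'
        return 'dropped'    # availability presence is simply thrown away

    def flush(self, jid):
        """Called when the user requests his roster after connecting."""
        return self.pending.pop(jid, [])

store = PresenceStore()
print(store.deliver('dj@x', 'available', online=False))  # dropped
print(store.deliver('dj@x', 'subscribe', online=False))  # stored
print(store.flush('dj@x'))                               # ['subscribe']
```

The flush() step corresponds to what mod_roster does when the roster request arrives, as the next paragraph describes.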
Just as sending an availability presence packet:

```xml
<presence/>
```

causes the mod_offline module to forward any messages stored offline in that user's absence, so requesting the roster:

```xml
<iq type='get'><query xmlns='jabber:iq:roster'/></iq>
```

causes the mod_roster module to forward any subscription requests stored offline in that user's absence.

Sending of availability information

OK. We've connected, authenticated, defined and registered the callback to handle <presence/> elements, and requested the roster, so mod_roster will send us any presence subscription (or unsubscription) requests. Now we need to make an availability announcement in the form of a simple <presence/> element:

```xml
<presence/>
```

We can do this by calling the sendInitPresence() method on the connection object:

```python
con.sendInitPresence()
```

This availability information will be distributed to all the entities that are subscribed to the script's presence and are online at that moment. It will also signify to the Jabber server that we are properly online—in which case it can forward to us any messages that had been stored up in our absence. We're not really expecting any <message/> elements; indeed, we haven't set up any subroutine to handle them, so they'd just be thrown away by the library anyway. The real reason for sending presence is so that the server will actively go and probe those in a presence subscription relationship with the script and report back on those who are available (who have themselves sent their presence during their current session). This causes <presence/> elements to arrive on the stream and make their way to the presenceCB() handler.

Waiting for packets

Once everything is set up, and the script has announced its presence, it really just needs to sit back and listen to the <presence/> elements that come in.
If one of these is from the intended notification recipient, and the availability state is right (i.e., not in dnd mode), we know that the circumstances are appropriate for sending the notification. But the elements being sent over the stream from the server don't spontaneously get received, parsed, and dispatched; we can control when that happens from the script. This is the nub of the symbiosis between the element events and the procedural routines, and its name is process().

Calling process() will check on the stream to see if any XML fragments have arrived and are waiting to be picked up. If there are any, Steps 3 through 5, shown in Figure 8-6 and described in Section 8.3.4, are executed. The numeric value specified in the call to process() is the number of seconds to wait for incoming fragments if none is currently waiting to be picked up. Specifying no value (or 0) means that the method won't hang around if nothing has arrived. Specifying a value of 30 means that it will wait up to half a minute. We really want something in between, and it turns out that waiting for up to a second for fragments in a finite loop like this:

```python
for i in range(5):
    con.process(1)
```

will allow for a slightly stuttered arrival of the <presence/> elements that are sent to the script as a result of the server-initiated probes.

Finishing up

We're just about done. The <presence/> elements that arrive and find their way to the callback are examined, and the CVS notification message is sent off if appropriate. Once the process() calls have finished, and, implicitly, the (potentially) multiple calls to presenceCB(), there's nothing left to do. So we simply disconnect from the Jabber server, as before:

```python
con.disconnect()
```

Jabber Programming and Callbacks

When programming all but the simplest Jabber scripts, you're going to be using callbacks, as we've seen in this recipe. Callbacks are also known as handlers.
Rather than purely procedural programming ("do this, then do that, then do the other"), we need a different model to cope with the event-based nature of Jabber or, more precisely, the event-based nature of how we converse using the Jabber protocol over an XML stream. Although we control what we send over the XML stream connection that we've established with the Jabber server, we can't control what we receive, and more importantly, we can't control when we receive it. We need an event-based programming model to be able to handle the protocol elements as they arrive.

The libraries available for programming with Jabber offer callback mechanisms. With these callback mechanisms, we can register subroutines with the part of the library that's handling the reception of XML document stream fragments. Then, whenever an element appears on the incoming stream (a fragment in the stream document that the Jabber server is sending to us), the library can pass it to the appropriate subroutine in the script for us to act upon—to be "handled." This passing of elements to be handled by callbacks is referred to as dispatching. Figure 8-6 shows the relationship between the library and script, and the sequence of events surrounding registering a handler and having it called. Here are the steps shown:

Step 1: Register handler. The script uses a library function to register a subroutine—in this case, it's presenceCB()—as a handler with the library. In the registration, the subroutine is assigned as a handler for <presence/> elements.

Step 2: A <presence/> element arrives. An XML fragment arrives on the stream, sent by the Jabber server.

Step 3: Parse, and create node. The fragment is parsed into its component parts by an XML parser, and a node is created. A "node" is simply a term used to describe a succinct XML fragment—containing attributes, data, and child tags—that is usually in the form of an object that is programmatically accessible.
The node creation step is theoretically optional; we could - have the library pass on the fragment in a simple string - representation form, but that would put the onus on the script to - parse the string before being able to manipulate the fragment that the - string represented. - Step 4 - Determine handler - Once parsed, the library looks at what sort of element, or node, the - fragment is and determines what (if any) handler has been registered. - In this case, it's a <presence/> element, and it finds - that the subroutine presenceCB() has been registered as a - handler for <presence/> elements. - Step 5 - Call handler - The library calls the handler presenceCB() in the script, - passing in the node. It may pass in other information too (for - example, the Jabberpy library also passes in the stream - connection object, as we saw earlier, and the Perl library - Net::Jabber also passes in a session ID relating to the - stream).[5]</content></chapter>
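The registration-and-dispatch cycle in Steps 1 through 5 doesn't depend on anything Jabber-specific, so it can be modelled in a few lines of plain Python. This is an illustrative sketch only, not the Jabberpy implementation: the Dispatcher class and its register/dispatch methods are invented for the example.

```python
# A minimal model of the handler registration/dispatch cycle
# described in Steps 1-5. Names here are invented for illustration.

class Dispatcher:
    def __init__(self):
        self.handlers = {}          # element name -> handler subroutine

    def register(self, element_name, handler):
        # Step 1: the script registers a subroutine as a handler
        self.handlers[element_name] = handler

    def dispatch(self, raw_fragment):
        # Step 3: "parse" the fragment into a node (here, a dict)
        name, _, payload = raw_fragment.partition(":")
        node = {"name": name, "payload": payload}
        # Step 4: determine which handler (if any) is registered
        handler = self.handlers.get(name)
        # Step 5: call the handler, passing in the node
        if handler is not None:
            handler(node)

received = []

def presenceCB(node):
    received.append(node["payload"])

d = Dispatcher()
d.register("presence", presenceCB)
d.dispatch("presence:user@example.com/available")  # handled
d.dispatch("message:ignored")                      # no handler registered
print(received)  # ['user@example.com/available']
```

A real library does the same bookkeeping, except that the "fragments" come off the socket and the parsing is done by a genuine XML parser.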
I'm having a puzzling problem when trying to import a module in Python, but only when the script is called from PHP via system or exec. From the Python shell:

    import igraph  # This works.

If the previous line is in a file, say test_module.py, then running python test_module.py from bash works. Within PHP, however, exec("python test_module.py", $output, $retval) fails with $retval = 1. If the script instead contains import math, it runs fine. Has anybody ever dealt with something similar?

One thing to check is sys.path; see what the difference is when the script is called each way.

Is the igraph module in Python's standard module path, or is it in the same directory as your individual script? If so, it's quite possible that PHP is calling the Python file with a different working directory, and it's trying to import things relative to that path instead of the path of the script.

This is happening because the packages were installed under a different user, maybe root, or something else. How I debugged this: I checked the output of sys.path for both cases (the shell, and PHP's exec, which runs as the user www-data by default), then compared them. I noticed the '/root/.local/lib/python2.7/site-packages' path was missing when run from PHP, and that directory contained exactly the missing packages. So I just copied the contents of that folder to '/usr/lib/python2.7/dist-packages/', which solved the issue.
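A quick way to compare the two environments is to have a small script dump its interpreter and module search path, then run it once from the shell and once via PHP's exec and diff the output. The helper name below is invented for the example.

```python
# Print which interpreter is running and where it will look for modules.
# Run this from the shell and from PHP's exec(), then compare the output.
import sys

def interpreter_info():
    return {
        "executable": sys.executable,   # which python binary is running
        "path": list(sys.path),         # module search path for this run
    }

info = interpreter_info()
print(info["executable"])
for entry in info["path"]:
    print(entry)
```

Any directory present in one run but missing in the other (such as a per-user site-packages directory) is a likely home for the "missing" module.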
A small task runner inspired by npm scripts.

Features

Install

    pip install pyxcute

Usage

Basic

Create a cute.py file like this:

    from xcute import cute

    cute(
        hello = 'echo hello xcute!'
    )

then run:

    >cute hello
    hello...
    hello xcute!

"hello" is the task to run. If cute.py is executed without a task name, it will run the "default" task. (If you get a "not a command" error, see How do I make Python scripts executable?)

Provide additional arguments:

    >cute hello 123
    hello...
    hello xcute! 123

The arguments will be passed into the runner, which is xcute.Cmd.__call__ in this case.

Tasks

A task can be a str:

    from xcute import cute

    cute(
        hello = 'echo hello'
    )

If it matches the name of another task, pyxcute will execute that task:

    from xcute import cute

    cute(
        hello = 'world',
        world = 'echo execute world task'
    )

Use a list:

    from xcute import cute

    cute(
        hello = ['echo task1', 'echo task2']
    )

Or anything that is callable:

    from xcute import cute

    cute(
        hello = lambda: print('say hello')
    )

Actually, when you assign a non-callable as a task, pyxcute converts it into a callable according to its type. See xcute.Cmd, xcute.Chain, xcute.Throw, and xcute.Task.

Task chain

Define the workflow with the _pre, _err, _post, and _fin suffixes:

    from xcute import cute

    cute(
        hello_pre = 'echo _pre runs before the task',
        hello = 'echo say hello',
        hello_err = 'echo _err runs if there is an error in the task, i.e. an uncaught exception or a non-zero return code',
        hello_post = 'echo _post runs after the task if the task successfully returned',
        hello_fin = 'echo _fin always runs after _post/_err, just like finally'
    )

When a task is invoked, pyxcute will first try to execute the _pre task, then the task itself, then the _post task. If the task raised an exception, it goes to the _err task. And finally the _fin task.
Pseudo code: run(name + "_pre") try: run(name, args) except Exception: if run(name + "_err") not exist: raise else: run(name + "_post") finally: run(name + "_fin") Format string pyXcute expands format string with xcute.conf dictionary when the task is executed: from xcute import conf, cute conf["my_name"] = "world" def edit_conf(): conf["my_name"] = "bad world" cute( hello_pre = edit_conf, hello = "echo hello {my_name}" ) > cute hello hello_pre... hello... hello bad world Cross-platform utils There are some CLI utils inspired by npm-build-tools, including: - x-clean - x-cat - x-copy - x-pipe Run each command with -h to see the help message. Live example API reference xcute.conf A dictionary used to format string. By the default, it has following keys: - pkg_name - package name. See xcute.cute. - date - datetime.datetime.now(). - tty - a boolean shows if the output is a terminal. - version - version number. Available after Bump task. Also see pkg_name section in xcute.cute. - old_version - version number before bump. Only available after Bump task. - tasks - a dictionary. This is what you send to cute(). - curr_task - str. The name of current task. xcute.cute cute(**tasks) The entry point. Here are some special tasks: pkg_name - when this key is found in tasks, the key is removed and inserted into the conf dictionary. Then, cute() tries to find version number from {pkg_name}/__init__.py, {pkg_name}/__pkginfo__.py. If found, the filename is added to conf["version_file"], and the version is added to conf["version"]. The regex used to match version number is decribed at xcute.split_version. version - if not provided, pyxcute uses Log("{version}") as default. bump - if not provided, pyxcute uses Bump("{version_file}") as default. xcute.exc exc(message=None) Raise an exception. It reraises the last error if message is not provided. from xcute import cute, exc cute( ... task_err = ["handle error...", exc] ) xcute.f f(string) Expand string with xcute.conf dictionary. 
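The pseudo code above can be turned into a small runnable model. This is not pyxcute's actual implementation, just a sketch of the control flow it describes; run_task, run, and the tasks dict are invented for the illustration.

```python
# A runnable model of the _pre / _err / _post / _fin control flow.
# Not pyxcute's real code; names are invented for illustration.

order = []  # records which tasks ran, and in what order

def run(name, tasks):
    task = tasks.get(name)
    if task is None:
        return False            # "task does not exist"
    order.append(name)
    task()                      # may raise, propagating to run_task
    return True

def run_task(name, tasks):
    run(name + "_pre", tasks)
    try:
        run(name, tasks)
    except Exception:
        if not run(name + "_err", tasks):
            raise               # no _err handler: re-raise
    else:
        run(name + "_post", tasks)
    finally:
        run(name + "_fin", tasks)

def boom():
    raise RuntimeError("task failed")

tasks = {
    "hello_pre": lambda: None,
    "hello": boom,              # the task itself fails
    "hello_err": lambda: None,
    "hello_fin": lambda: None,
}
run_task("hello", tasks)
print(order)  # ['hello_pre', 'hello', 'hello_err', 'hello_fin']
```

Because "hello" raises, _post is skipped, _err absorbs the error, and _fin runs regardless, which is exactly the sequencing the pseudo code specifies.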
xcute.log log(items) A print function, but only works if conf["tty"] == False. xcute.noop noop(*args, **kwargs) A noop. xcute.split_version split_version(text) Split text into a (left, verion, right) tuple. The regex pattern used to find version: "__version__ = ['\"]([^'\"]+)" xcute.Bump Bump task can bump version number in a file, using xcute.split_version and semver. from xcute import cute, Bump cute( bump = Bump('path/to/target/file') ) then run cute bump [major|minor|patch|prerelease|build] the argument is optional, default to patch. xcute.Chain This task would run each task inside a task list. Chain(*task_list) Tasks are converted to Chain if they are iterable. xcute.Cmd This task is used to run shell command. Cmd(*shell_command) Tasks are converted to Cmd if they are str. xcute.Log A wrapper to print. It is useless if you can just "echo something". Log(*text) xcute.Task This task executes another task. Task(task_name) Tasks are converted to Task if they are keys of tasks dictionary. xcute.Throw This task throws error. Throw() Throw(error) Throw(exc_cls, message=None) - Reraise last error. - Raise the error. - Raise exc_cls(message) Tasks are converted to Throw if they are subclass or instance of BaseException. xcute.Try This task suppress exception. Try(*task) Changelog - 0.4.1 (Apr 3, 2017) - Better description for x-clean. - Fix broken pipe error in x-pipe. - 0.4.0 (Mar 28, 2017) - Switch to setup.cfg. - Add log, exc, noop, Throw, Try. - Drop Exc, Exit. - Add x-* utils. - 0.3.1 (Mar 23, 2017) - Find version from {pkg_name}/__pkginfo__.py. - 0.3.0 (Jul 21, 2016) - Add pkg_name task. - Add default tasks bump, version. - 0.2.0 (May 14, 2016) - Add _fin tag, which represent finally clause. - Add Exc and Exit tasks. - 0.1.2 (Apr 20, 2016) - Move _pre out of try clause. - 0.1.1 (Apr 20, 2016) - Bump dev status. - 0.1.0 (Apr 20, 2016) - First release. Download Files Download the file for your platform. 
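The version-matching pattern quoted in the split_version reference can be exercised directly with Python's re module. The split helper below is a guess at the documented (left, version, right) behaviour, written for illustration; it is not xcute's own code.

```python
# Demonstrate the version regex quoted in the API reference, plus a
# hypothetical split that mimics the documented (left, version, right)
# tuple. Illustration only, not xcute's implementation.
import re

PATTERN = r"__version__ = ['\"]([^'\"]+)"

def split_version(text):
    match = re.search(PATTERN, text)
    if match is None:
        return None
    return text[:match.start(1)], match.group(1), text[match.end(1):]

source = 'name = "demo"\n__version__ = "1.2.3"\n'
left, version, right = split_version(source)
print(version)  # 1.2.3
```

Rejoining left + version + right reproduces the original text, which is what makes an in-place bump (as in the Bump task) straightforward.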
Bear Bibeault wrote: All you need to do is to change the crop of the overlay image.

Rob Spoor wrote: Right, and now I've tested it a bit. Quick and dirty:

    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;
    import javax.swing.event.*;

    public class Test {
        public static void main(String[] args) throws Exception {
            JFrame frame = new JFrame();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

            JPanel before = new JPanel(new FlowLayout(FlowLayout.LEFT, 0, 0));
            before.add(new JLabel(new ImageIcon(args[0])));
            before.setMinimumSize(new Dimension(0, 0));

            JPanel after = new JPanel(new FlowLayout(FlowLayout.RIGHT, 0, 0));
            after.add(new JLabel(new ImageIcon(args[1])));
            after.setMinimumSize(new Dimension(0, 0));

            JSplitPane split = new JSplitPane(JSplitPane.HORIZONTAL_SPLIT, before, after);
            split.setPreferredSize(before.getPreferredSize());
            split.setDividerLocation(0.5);
            split.setContinuousLayout(true);

            frame.add(split);
            frame.pack();
            frame.setVisible(true);
        }
    }

The minimum sizes are required to allow one panel to be completely hidden by the divider. The setting of the preferred size of the JSplitPane is to prevent the left and right panels from both being shown initially (although this will still occur if the JSplitPane is resized). Just one note: the setDividerLocation(0.5) call doesn't work yet, because "if the split pane is not correctly realized and on screen, this method will have no effect" (quoted from the Javadoc page of JSplitPane). Setting it to an int value based on the new preferred size should work, though.

Jesper de Jong wrote: You could do that with Swing GUI components, such as JSplitPane as Rob showed, or by drawing the images yourself with the 2D graphics API. You can find good tutorials here: Creating a GUI With JFC/Swing, 2D Graphics
:joerg 2005/04/28 06:25:12 PDT
:
:DragonFly src repository
:
:  Modified files:
:    lib/libc/string  strcasecmp.c strnstr.c strpbrk.c strstr.c
:                     strtok.c wcschr.c wcspbrk.c wcsrchr.c
:                     wcsstr.c wmemchr.c
:  Log:
:  DragonFly has decided to depend on char being signed, use it.
:  Use __DECONST for the interface const violations, those are intended.
:  Ansify.

That is going to break things... everything is fine except these lines:

    return (tolower(*us1) - tolower(*--us2));

Working on signed chars is going to break the sign of the return value. tolower() does NOT change the sign of the argument when no conversion is done. Please change it back to unsigned.

-Matt
Matthew Dillon <dillon@xxxxxxxxxxxxx>
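The sign problem Matt describes is easy to demonstrate outside of C. The sketch below (plain Python, merely modelling the two interpretations of a byte, not libc code) shows a high-bit character such as 0xE9 comparing greater than 'a' when chars are unsigned but less than 'a' when chars are signed, flipping the sign of the strcasecmp-style difference.

```python
# Model the difference between signed and unsigned char when a
# high-bit byte (0xE9, Latin-1 'e acute') is compared against 'a'.
# Illustration of the sign-flip problem only, not libc code.

def as_unsigned(byte):
    return byte                                  # range 0..255

def as_signed(byte):
    return byte - 256 if byte > 127 else byte    # range -128..127

high_byte = 0xE9                 # 233 as unsigned char, -23 as signed char
a = ord('a')                     # 97

unsigned_diff = as_unsigned(high_byte) - a   # 233 - 97 = 136  (> 0)
signed_diff = as_signed(high_byte) - a       # -23 - 97 = -120 (< 0)

print(unsigned_diff, signed_diff)  # 136 -120
```

The comparison result changes sign between the two interpretations, which is exactly why strcasecmp-style code is written in terms of unsigned char.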
This is a simple class which enables multiple processes to share the same block of memory. The difference with this class is that the memory can grow/shrink, enabling you to share any amount of data. A common problem when creating a new program is managing memory. The problem worsens when creating multithreaded applications where the use of Critical Sections, mutexes, and/or semaphores is required. But when it comes to multiple processes, there is no simple way of sharing data. When I started this class, I found and read many tutorials about inter-process communication. While they were well-written, none of them addressed my problem. Each article I read, showed how to share a string between processes. Not one showed how to share multiple strings, multiple data types, or even variable sized data. This was what I wanted. Rather than waste time trying to find examples, I decided to create my own method. This class addresses my two concerns, multiple strings and variable sized data. Sharing memory between processes is the simple part. The CreateFileMapping() function does most of the work for us. It can create a file either on the hard drive, or a temporary file in the systems page file. When two or more process want to share memory, all they need to do is call this function with the same filename. But there are limitations. First, the file cannot be resized without closing it first, and second, there is no convenient method to write multiple items to the file. CreateFileMapping() The first problem can be addressed by using a physical file on the disk. By doing this, you can specify a size, the file will grow to the required size when it's opened. The drawback here is that the application must handle the file creation and deletion; also, there is a security risk of having private data stored where anybody can read it. 
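The same file-backed approach exists in higher-level languages. Here is a minimal Python sketch (using the standard mmap module rather than CreateFileMapping) of two views onto one file acting as a shared byte array; the file name and sizes are arbitrary choices for the example.

```python
# Two mmap views over the same temp file behave like the shared byte
# array described above: what one view writes, the other can read.
import mmap
import os
import tempfile

PAGE = 4096

fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, PAGE)               # the size is fixed up front
    writer = mmap.mmap(fd, PAGE)         # first "process" maps the file

    fd2 = os.open(path, os.O_RDWR)       # a second process would do this
    reader = mmap.mmap(fd2, PAGE)        # second view of the same bytes

    writer[0:5] = b"hello"               # write through one view
    shared = bytes(reader[0:5])          # read through the other

    writer.close()
    reader.close()
    os.close(fd2)
finally:
    os.close(fd)
    os.remove(path)

print(shared)  # b'hello'
```

Just as in the article, the mapping cannot grow once created; resizing means closing and re-mapping, which is what motivates the multi-page design described below.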
When using a physical file, it is also possible to use the DeviceIoControl() function to enable a file to become growable, though this will only work on NTFS5 partitions, leaving Win95/98 users out of the loop.

In essence, all a memory mapped file is, is a large byte array. To write data, all we need to do is call the basic memory functions memset(), memcpy(), and memmove(). We can also write to the array as with any other array, by getting the pointer/position of an element and changing it. So to write multiple items, all we need do is write, increase the pointer, and write again. But what about the second process? How does it know where you wrote the data, the size of the data, and, for that matter, whether you wrote the data at all?

The first problem is simple: we write the data in sequence. All the reader needs to do is parse the byte array until it finds the data it's looking for. The second problem can be solved by writing the size of the data alongside the data. The third requires work on the user's part. For each item added to the stream, a unique ID is required. I thought, if a process wants to read shared memory, then obviously it must know something about what it wants to read. In my case, I wanted to read strings that may or may not be there. So, for each item I wanted to add, I had a #define statement containing a unique ID. If I wanted to add items of the same type, I just looped through, using the #define as a base number and adding the counter to it.

    #define SMTP_BODY          0
    #define SMTP_SUBJECT       1
    ...
    #define SMTP_SENDERNAME    9
    ...
    #define SMTP_RECIPIENT     20
    #define SMTP_CCRECIPIENT   30
    ...
    #define SMTP_ATTACHEDNAME  200
    #define SMTP_ATTACHEDFILE  300
    #define SMTP_ATTACHEDTYPE  400

As you can see, for any item where there may be more than one instance, for example SMTP_ATTACHEDNAME, I can simply use a loop, adding i to the value of SMTP_ATTACHEDNAME to create my unique ID.
So now, for each item written to the stream, a further two items are stored. This actually works to our advantage. When reading the stream, all we need to do is read the ID, read the size, and jump to the next ID. Also, we don't need to store anything in a particular order. The class allocates an extra 8 bytes for each item added: 4 for the ID and 4 for the size. This may seem to be a waste, but it gives us more room for the ID and allows for larger items to be added.

    BOOL CMemMap::AddString(LPCTSTR szString, UINT uId)
    {
        // Validate the ID
        if ( uId == 0xFFFFFFFF || uId == 0xFFFFFF00 )
            return FALSE;

        LPBYTE lpBytePos = 0;
        UINT uPage = 0;

        // Check if the id already exists
        if ( FindID(uId,&uPage,&lpBytePos) == TRUE )
            return FALSE;

        // Calc how many bytes we need
        UINT uStrlen = (_tcslen(szString) + 1) * sizeof(TCHAR);

        Write(&uPage, &lpBytePos, 4, &uId);
        Write(&uPage, &lpBytePos, 4, &uStrlen);
        Write(&uPage, &lpBytePos, uStrlen, (LPVOID)szString);
        Write(&uPage, &lpBytePos, 4, DOUBLE_NULL);

        return TRUE;
    }

Just like any string, a special marker 0xFFFFFF00 is used to mark the end of the array. All free space is marked as unallocated with 0xFFFFFFFF. All the IDs and sizes entered will be in the form of an unsigned int, so choosing a marker from the higher range prevents conflicts, though it does prevent those two hex values being used as an ID.

As mentioned above, there are several steps involved when resizing a file, and security considerations to think about. When using the system's pagefile, the data is only temporary. This means that when the handle to the file is closed, the data is lost. I decided to approach this from another angle, taking the pagefile itself as the basis of my ideas. Instead of creating a single file, we create a book of several files, or pages. There are advantages and disadvantages to this. We are no longer dealing with a simple byte array, but several byte arrays.
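The [ID][size][data] record layout is independent of C++. A compact way to see it working is a pure-Python model over a bytearray; the class below is an illustration of the layout described here, not a port of CMemMap, and its names are invented.

```python
# Model of the [ID][size][data] record stream with an end marker,
# as described in the article. 4-byte little-endian ID and size,
# then the payload. Illustrative only, not CMemMap itself.
import struct

END_MARKER = 0xFFFFF00F if False else 0xFFFFFF00  # the article's end marker

class RecordStream:
    def __init__(self):
        # the stream starts out holding only the end marker
        self.buf = bytearray(struct.pack("<I", END_MARKER))

    def add(self, item_id, payload):
        record = struct.pack("<II", item_id, len(payload)) + payload
        self.buf[-4:-4] = record          # insert before the end marker

    def get(self, item_id):
        pos = 0
        while True:
            rec_id = struct.unpack_from("<I", self.buf, pos)[0]
            if rec_id == END_MARKER:
                return None               # reached the end marker
            size = struct.unpack_from("<I", self.buf, pos + 4)[0]
            if rec_id == item_id:
                return bytes(self.buf[pos + 8:pos + 8 + size])
            pos += 8 + size               # jump to the next ID

s = RecordStream()
s.add(20, b"alice@example.com")           # an SMTP_RECIPIENT-style ID
s.add(21, b"bob@example.com")
print(s.get(21))  # b'bob@example.com'
```

The reader never needs an index: it walks ID, size, skip, exactly as the article describes, stopping at the end marker.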
A page can be added at any time, but there is no guarantee that they will be sequential. The reader also needs to know if a page has been added and exactly how many pages there are at any given time. So, the first four bytes of the first page serve as a page count. Any time a reader/writer wants to perform an action, it can quickly adjust its internal page arrays by looking at this value. Because of this, the first page must always exist while the class is in scope. Also, each page must be exactly the same size.

The next problem comes when reading and writing data. If we create a page of 56 K and add a bitmap image of 238 K, it's not going to fit. The answer is to span the page. Reading and writing requires a little more work, but the data can still remain sequential. This sequence of items is what holds the whole structure together. So, any time an item is removed, we can't just wipe the used space and leave a void, or the reader will have trouble jumping the IDs. We instead have to move all the following items down to fill the void. Instead of doing this item by item, which would be slow, we do it memory by memory.

    // uSize      == size of the void
    // uRemaining == size of the data
    // lpDestPos  == start of void
    // lpBytePos  == start of data

    // loop through remaining pages
    while ( 1 )
    {
        // move data into void
        memmove(lpDestPos,lpBytePos,uRemaining);

        // reset pointers
        if ( uPage < m_uPageCount-1 )
        {
            uPage += 1;
            lpBytePos = (LPBYTE)m_pMappedViews[uPage];
            lpDestPos += uRemaining;
        }
        else
        {
            // no more pages
            break;
        }

        // move from next page into void
        memmove(lpDestPos,lpBytePos,uSize);

        // reset the pointers
        lpBytePos += uSize;
        lpDestPos = (LPBYTE)m_pMappedViews[uPage];
    }

Dealing with strings also helps improve the performance. Remember, all we are really dealing with is a byte array, and all strings are null terminated. So to read a string from the file, all we need do is find the start.
This pointer can be used in any string function since the byte array will also store the null value. The only time we can't is when the string spans a page. In this instance, we need to copy each half to a single buffer. LPCTSTR CMemMap::GetString(UINT uId) { // Validate the ID if ( uId == 0xFFFFFFFF || uId == 0xFFFFFF00 ) return NULL; LPTSTR lpString = NULL; // The string to return LPBYTE lpBytePos = 0; // a navigation pointer UINT uPage = 0; // Check if the id already exists if ( FindID(uId,&uPage,&lpBytePos) == FALSE ) return NULL; UINT uLen = 0; Read(&uPage,&lpBytePos,4,NULL); Read(&uPage,&lpBytePos,4,&uLen); // Check if the string is spanned UINT uRemaining = ((UINT)m_pMappedViews[uPage] + MMF_PAGESIZE) - (UINT)lpBytePos; if ( uLen > uRemaining ) { // delete previous buffer if used if ( m_lpReturnBuffer ) delete [] m_lpReturnBuffer; // allocate new buffer m_lpReturnBuffer = new BYTE [uLen]; return (LPTSTR)Read(&uPage,&lpBytePos,uLen,m_lpReturnBuffer); } else return (LPTSTR)Read(&uPage,&lpBytePos,uLen,NULL); } Reading and writing binary data to file works on a similar method, except that when reading the data, it must first be copied to a buffer. The class provides two methods for this: either it writes to a user entered buffer, or writes to an internal buffer and returns a pointer. This, in turn, can be type cast to your data type. I cannot take credit for the mutual exclusion code, it instead comes from another article I found while doing my research. The code was written by Alex Farber, and the article can be found here[^]. In my application, I was reading several items at the same time from several processes. Using a mutex for each call was undesirable and slow, Alex Farber's class enables multiple read processes to read the data, but only a single process to write. It served my needs perfectly. I have left it in the code for convenience, though you may like to use your own methods. #define MMF_PAGESIZE 4096 The size in bytes of each page. 
Use this #define if you wish to change the default page size from 4K to your own. Add the statement to your code before including the header file, otherwise the default value will be used. If you are adding large items to the file, I advice you set this to a higher value as it will decrease the amount of spanned pages and increase the performance. DWORD Create(LPCTSTR szMappedName, DWORD dwWaitTime, ULONG ulMappedSize); This function should be called prior to any reading or writing operation. A unique name for the shared memory must be passed into szMappedName, this name must be the same for all processes wanting to share the memory. dwWaitTime is the timeout in milliseconds for the mutex, this parameter may be INFINITE. ulMappedSize is the initial size in bytes of the shared memory. The value will be rounded up to the MMF_PAGESIZE boundary. If this value is less than MMF_PAGESIZE, the value of MMF_PAGESIZE is used in its place. szMappedName dwWaitTime INFINITE ulMappedSize MMF_PAGESIZE If the shared memory has already been created, the memory size will be that of the already created file. The function returns ERROR_SUCCESS if it successfully created a file, or ERROR_ALREADY_EXISTS if the file was created by another process. On failure, it returns the value from GetLastError(). ERROR_SUCCESS ERROR_ALREADY_EXISTS GetLastError() BOOL Close(); Closes all open handles to the mapped files. The destructor will call this, by default. VOID Vacuum(); When several items are deleted, the open handles remain open. Thus, the shared file size remains the same. Calling this function will close all unused pages, freeing the memory that was used to manage them. BOOL AddString(LPCTSTR szString, UINT uId); Adds a string to the file. The uId parameter must be a unique value. If the ID already exists or the function fails, it will return FALSE. uId FALSE BOOL UpdateString(LPCTSTR szString, UINT uId); Replaces the stored item with the same uId. 
If the ID does not exist, it adds a new item and returns TRUE. If the function fails, it returns FALSE. TRUE UINT GetString(LPCTSTR szString, UINT uLen, UINT uId); Reads uLen bytes into szString. If the szString parameter is NULL, it returns the string length in bytes including the null terminator. szString must be an allocated buffer large enough to hold uLen bytes. uLen szString NULL UINT GetStringLength(UINT uId); Returns the string length of uId in bytes including the null terminator. LPCTSTR GetString(UINT uId); Returns a pointer to a null terminated string. It is recommended you copy this string to your own allocated buffer, as the internal structure of the file is likely to change, causing the pointer to become invalid. BOOL AddBinary(LPVOID lpBin, UINT uSize, UINT uId); Adds binary data (int, long, struct... ) to the file. Specify the size of the data type in the uSize parameter. If the function fails, it returns FALSE. int long struct uSize BOOL UpdateBinary(LPVOID lpBin, UINT uSize, UINT uId); Adds or replaces the data stored at uId. UINT GetBinary(LPVOID lpBin, UINT uSize, UINT uId); Reads the uSize of the binary data into lpBin. If the lpBin parameter is NULL, the function returns the size of the data. If the uSize parameter is larger than that of the stored data, the size of the stored data is used instead. lpBin UINT GetBinarySize(UINT uId); Returns the size in bytes of the binary data. LPVOID GetBinary(UINT uId); Returns a pointer to the binary data. It's recommended that you copy the data because the internal structure of the file is likely to change, causing the pointer to become invalid. BOOL DeleteID(UINT uId); Removes the specified uId from the file. Internal memory is not unallocated. To free any used memory, you must call Vacuum(). Vacuum() UINT Count(); Returns the number of items currently being stored. This function serves little purpose, and is here mainly for debugging reasons. 
UINT64 UsedSize(); Returns the actual used bytes of the internal files. This function serves little purpose, and is here mainly for debugging reasons. BOOL WaitToRead(); Attempts to gain Read access to the shared file. Reading may be shared among other processes. When finished reading, you must call Done(), or you will lock out any process trying to write. Done() BOOL WaitToWrite(); Attempts to gain write access to the file. Write access has priority over any and all readers, and only one process may write to the file at the same time. When finished writing, you must call Done(). BOOL Done(); You must call this after WaitToRead() and WaitToWrite() and after having completed any reading or writing you may have done. This will release the lock, enabling another process to write. WaitToRead() WaitToWrite() I apologise for not providing a demo app, I just cannot think of a suitable demonstration as to what this class can do. If you have any ideas, please let me know, or if you would like to create a demo, I would be happy to include it in the article. The class is pretty straightforward to use as shown in the example below. Before calling any functions, you must call the Create() method. Most errors are returned by the functions, but in rare cases, an exception may be thrown, so it's good practice to wrap the code in try...catch blocks. If you decide to use the internal locking mechanism, be sure to call Done() to release the lock for another process. Failing to do this will not prevent other processes from reading, but it will prevent others from writing. 
Create() try...catch int main() { CMemMap mmp; unsigned int i; double j = -123.456; try { mmp.Create(_T("594855C7-9888-465a-8BC8-D9797874EB9F"),INFINITE,2048); if ( mmp.WaitToWrite() ) { for (i=0; i<3; i++,j*=7.23) { wcout << _T("Adding Binary: ") << j << endl; mmp.AddBinary(&j,sizeof(double),i); } for (i=0,j=0; i<3; i++) { mmp.GetBinary(&j,sizeof(double),i); wcout << _T("GetBinary Returned: ") << j << endl; } for (i=0; i<3; i++,j*=7.23) { wcout << _T("Updating binary to: ") << j << endl; mmp.UpdateBinary(&j,sizeof(double),i); } for (i=0,j=0; i<3; i++) { mmp.GetBinary(&j,sizeof(double),i); wcout << _T("GetBinary Returned: ") << j << endl; } for (i=0; i<3; i++) { wcout << _T("Deleting ID: ") << i << endl; mmp.DeleteID(i); } for (i=0; i<3; i++) { wcout << _T("Adding string \"Hello World!\"") << endl; mmp.AddString(_T("Hello World!"),i); } for (i=0; i<3; i++) { wcout << _T("GetString Size Returned: "); wcout << (UINT)mmp.GetString(0,0,i) << endl; } for (i=0; i<3; i++) { wcout << _T("GetString returned: "); wcout << (LPCTSTR)mmp.GetString(i) << endl; } for (i=0; i<3; i++) { wcout << _T("Deleting ID: ") << i << endl; mmp.DeleteID(i); } wcout << _T("Freeing the memory") << endl; mmp.Vacuum(); wcout << _T("Releasing lock") << endl; mmp.Done(); } } catch (LPCTSTR sz) { wcout << sz << endl; } char c(' '); while (c != 'q' && c != 'Q') { cout << "Press q then enter to quit: "; cin >> c; } return 0; } My latest project called for multiple processes to read/write/store many strings. Some of these strings were up to 10 MB in size (base64 encoded files). When I started the project, the first thing I did was to create a class which handled these strings. At that time, all the data was stored within the class. When the question of multiple processes came to mind, I realised that I couldn't store strings in this manner. So after creating this class, I no longer needed to. 
Instead of calling new to allocate a buffer and then copying the string into it, I could instead store the strings directly into the shared file. Any time a class member wanted to use the string, I simply used the pointer returned by GetString().

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here.
Garth J Lancaster wrote: the MMF_PAGESIZE - I thought there was a system call to get the default page size per machine/os - I'll see if I can find it - GetSystemInfo()

Garth J Lancaster wrote: One thing I'm not sure about is the way you manage what you are storing and keep track of it - sorry, let me re-phrase that, it's not a criticism - just an observation - in your case, each program using the shared memory must have hard-coded in it what the memory looks like/what's stored within it - whereas, I would sacrifice some memory for a catalog of sorts at the start of the block so that anything else could look at the catalog and see what's there
First, thanks a lot to Debasish Ghosh, who kindly accepted to answer my questions and inspect my code too. May he be assured of all my gratitude. Exchanging ideas with such a skilled person was quite an experience and enlightened these times of trouble for me. I hope I have reported all the modifications he suggested; may he forgive me if I did not. I may not yet have found some Master Craftsman in Europe to teach me about actors and functional programming (40 years old may be too... old), but the kind help of this person was a real support.

Back to Pi. An approximate value of pi can be quickly computed based on the Cesaro demonstration that the square of pi is inversely proportional to the probability that two integers chosen at random will have no factors in common (because if so, their gcd is 1). More clearly:

    P(gcd(N1,N2) == 1) = 6 / (π * π)

Quoting the Wikipedia article: "Monte Carlo methods (or Monte Carlo experiments) are a class of computational algorithms that rely on repeated random sampling to compute their results". So running a Monte Carlo experiment in order to compute the number pi comes down to running a gcd calculation on random natural integer values until our result converges to some expected value.

Therefore, random number generation takes place here, if we consider running a gcd function on a series of nearly random integers. In order not to make things too complex, let's assume that the action of generating numbers can be described as the process of creating a string of numbers extrapolated from a suite taking its roots in a seed value. I picture that as a mathematical suite:

    Vn = f(Vn-1), so Vn = f(f(f(...f(V0)...))), where V0 = some_seed

where at each step one needs to cache the newly generated value in order for the following one to be computed, and so on. (OK, my mathematical symbolic vocabulary is limited, but I do not want to copy'n'paste Wikipedia.) This is a typical problem solved with the help of assignable variables. Of course, solutions do exist which do not depend on some external variable, but the result is much more cluttered (have a look there).

Having installed Leiningen (version 1.6.1 is very nice, with repl at its top), I now have all the tools I need to challenge my code with tests. In the Clojure package cesaro.test I created a suite of small tests in a core_spec.clj file. The starting content is:

    (ns cesaro.test.core-spec
      (:use cesaro.core)
      (:use clojure.test))

Basic. As in the cesaro.test package, in the core_spec.clj file content, my namespace is cesaro.test.core-spec. I claim there my intent to use the tools of the clojure.test namespace in order to challenge the functions implemented in the cesaro.core namespace (probably the content of a core.clj file in a cesaro package... got it? :))

What I need is a working gcd function first. So here are the very dumb tests used to create it:

    (deftest gcd-with-matching-known-numbers-should-return-value
      (is (= (gcd 1989 867) 51)))

    (deftest gcd-with-matching-some-other-numbers-should-return-value
      (is (= (gcd 36 27) 9)))

    (deftest gcd-with-prime-numbers-should-return-one
      (is (= (gcd 23 11) 1)))

    (deftest gcd-with-unordered-numbers-should-return-gcd
      (is (= (gcd 11 23) 1)))

The last test appeared later, as I used a Euclid method to compute the gcd and had to face slight problems due to my expectation of ordered parameters :) (told you I was dumb... but still learning). This leads me to:
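For comparison, the Cesaro experiment itself is only a few lines in any language with a gcd at hand. Here is a Python sketch (not part of the Clojure project; estimate_pi is a name invented for the example) that estimates pi from the observed fraction of coprime random pairs.

```python
# Estimate pi via Cesaro's theorem: P(gcd(a, b) == 1) = 6 / pi^2,
# so pi ~= sqrt(6 / p_hat) for the observed coprime fraction p_hat.
import math
import random

def estimate_pi(trials, rng=random):
    coprime = 0
    for _ in range(trials):
        a = rng.randint(1, 10**6)
        b = rng.randint(1, 10**6)
        if math.gcd(a, b) == 1:
            coprime += 1
    p_hat = coprime / trials
    return math.sqrt(6 / p_hat)

random.seed(42)               # fixed seed so the run is repeatable
est = estimate_pi(50_000)
print(est)                    # close to 3.14, within sampling error
```

With 50,000 trials the standard error of the estimate is well under 0.01, so the result lands close to pi on essentially every run.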
Of course some solution does exist which does not depend on some external variable, but the result is much more cluttered (have a look there). Having installed Leiningen (version 1.6.1 is very nice, with a repl on top of it), I now have all the tools I need to challenge my code with tests. In the Clojure package cesaro.test I created a suite of small tests in a core_spec.clj file. The starting content is: (ns cesaro.test.core-spec (:use cesaro.core) (:use clojure.test)) Basic. As in the cesaro.test package, in the core_spec.clj file content, my namespace is cesaro.test.core-spec. There I declare my intent to use the tools of the clojure.test namespace in order to challenge the functions implemented in the cesaro.core namespace (probably the content of a core.clj file in a cesaro package... got it? :)) What I need first is a working gcd function. Here are the very dumb tests used to create it: (deftest gcd-with-matching-known-numbers-should-return-value (is (= (gcd 1989 867) 51))) (deftest gcd-with-matching-some-other-numbers-should-return-value (is (= (gcd 36 27) 9))) (deftest gcd-with-prime-numbers-should-return-one (is (= (gcd 23 11) 1))) (deftest gcd-with-unordered-numbers-should-return-gcd (is (= (gcd 11 23) 1))) The last test appeared later, as I used Euclid's method to compute the gcd and had to face slight problems due to my expectation of ordered parameters :) (told you I was dumb... but still learning). This leads me to: (defn euclide-gcd [numberOne numberTwo] (cond (= numberTwo numberOne) numberOne (= numberTwo 1) 1 :else (let [dif (- numberOne numberTwo)] (if (< dif numberTwo) (recur numberTwo dif) (recur dif numberTwo))))) (defn gcd [x y] (if (< x y) (euclide-gcd y x) (euclide-gcd x y))) The euclide-gcd function expects ordered parameters. I do not like the design, and what I hate most are the cluttered cond/if branches. This is something I intend to avoid in the future. I hate branches, as they remind me of goto. Clojure deserves better than that.
Then I need some random generator: a mechanism that would allow me to generate a seed and the derived random numbers. The tests look like: (deftest seeds-with-two-invocations-should-differ (is (not= (seed) (seed)))) (deftest random-with-two-invocations-should-differ (is (not= (random) (random)))) The implementation I provided: (defn seed [] (rand-int Integer/MAX_VALUE)) (defn marsaglia-first [value] (+ (* 36969 (bit-and value 65535)) (bit-shift-right value 16))) (defn marsaglia-second [value] (+ (* 18000 (bit-and value 65535)) (bit-shift-right value 16))) (defn marsaglia-sum [x y] (+ (bit-shift-left x 16) y)) (def random (let [x (ref (seed)) y (ref (seed))] (defn update [] (dosync (alter x marsaglia-first) (alter y marsaglia-second) (marsaglia-sum @x @y))))) And yes, I adopted a Marsaglia algorithm (see the previous Wikipedia reference). Although quite idiomatic, I find the result not as elegant as the Scheme solution, where the set! form allows for the modification of a variable declared in a let form expression: (define rand (let ((x random-init)) (lambda () (set! x (rand-update x)) x))) The Scheme code shape appears to be more concise. This concision is not due to the fact that I chose a Marsaglia algorithm involving two parameters instead of one; it finds its origin in the fact that, by nature, variables bound by a let form in Clojure are immutable. The only solution I found was to use the idiomatic form of references in Clojure, in order to alter both of the values wrapped by the references. Any suggestion will be welcomed. Frustrated by the two previous pieces of code, I decided to force myself to write a small running Monte Carlo function, with no branching at all.
A dumb test is to provide my upcoming function with always/never failing tests in order to check its limits: (deftest montecarlo-with-always-successfull-simulation-should-return-1 (defn always-successfull [] true) (is (= 1 (montecarlo 10 always-successfull)))) (deftest montecarlo-with-never-successfull-simulation-should-return-0 (defn never-successfull [] false) (is (= 0 (montecarlo 10 never-successfull)))) The montecarlo function accepts two input parameters, the first being the number of attempts and the second the simulation to be applied. The returned result is the ratio of passed simulations versus the whole number of simulations. No branches, then? I would lie saying that this one was a piece of cake for a beginner in functional programming like me. I had to switch to Scala, then come back to Clojure with the following precise implementation: (defn with-result-updates [] (let [increments {true [1 0] false [0 1]}] (fn [status] (increments status)))) (defn run-montecarlo [updates in-simulation] (fn [[passed missed] number-of-essays] (vec (map + (updates (in-simulation)) [passed missed])))) (defn montecarlo [essays in-simulation] (let [runs (run-montecarlo (with-result-updates) in-simulation) from-start [0 0]] (/ (first (reduce runs from-start (range essays))) essays))) Of course the entry point is the montecarlo function. As I know the number of essays to run, all I need is to iterate, and not recur, over a range of essays: (range essays) Meanwhile, at each step, I can run a simulation (the purpose of the run-montecarlo function) that will update a vector of statistics, provided as [0 0] at the very beginning: from-start [0 0] The first element is the number of passed essays and the second the number of failed ones. The reduce form (the equivalent of the Scala foldLeft) aims to produce a vector of statistics. With reduce, one can produce whatever one wants, even another list, from the driving list.
The purpose of the with-result-updates function is to create a lambda function instance (so, a procedure) capable of producing a vector of the deltas to be applied whether a simulation has succeeded or not: - [1 0] on success - [0 1] on failure {true [1 0] false [0 1]} We close over one immutable single instance of the hashmap, embedded into the frame (context of execution) of the created procedure. The same closing principle applies in the montecarlo function, where we close over both the previously described procedure instance and the simulation to be applied: (let [runs (run-montecarlo (with-result-updates) in-simulation) from-start [0 0]] The final test to run to check on pi can be: (deftest square-pi-with-enough-essays-must-be-close-to-9-9 (let [result (square-pi)] (println "square-pi" result) (is (< (- result 9.9) 0.025)))) where we assert on the vicinity of the values of square pi and 9.9. Then comes a natural implementation: (defn cesaro-test [] (let [value (gcd (random) (random))] (= 1 value))) (defn square-pi [] (/ 6 (montecarlo 16192 cesaro-test))) Works nice. What about Scala? Scala helped me to find the no-branch version of the montecarlo method. For the trained Java eye, Scala is a pleasant bridge to take in order to embrace good functional programming habits to be adopted in any other functional programming language. Don't misunderstand me. Clojure and Scheme overwhelm me with thrilling sensations each time I practice them. They also show me I was rambling in the dark before. So, going back again to Scala (did I say I bought the Scala T-shirts and Teddy bear?), I first had to write random number generator tests. This was an opportunity to try Specs2: import org.specs2._ import RNG._ final class RNGTest extends Specification{ def is = "RNG specification" ^ p^ "Seed Generator should" ^ "Generate two different seeds" ! e1^ "Generate positive numbers" ! e2^ p^ "Number Generator should" ^ "Generate two different numbers" !
e3 def e1 = seed should (not (beEqualTo(seed))) def e2 = { val rand = RNG() Range(0, 100).map((value: Int) => rand() should (beGreaterThan(0))) } def e3 = { val rand = RNG() rand() should not (beEqualTo (rand())) } } leading me to : import util.Random import Long._ import scala.math._ class RNG(var seedOne: Int, var seedTwo: Int ) { def apply(): Int = { seedOne = 36969 * (seedOne & 65535) + (seedOne >> 16) seedTwo = 18000 * (seedTwo & 65535) + (seedTwo >> 16); abs((seedOne << 16) + seedTwo) } } object RNG { Random.setSeed(MaxValue) def seed : Int = { abs(Random.nextInt()) } def apply(): RNG = { new RNG(seed, seed) } } where the seed is provided by the native number generator. I also need a GCD so let's test it import org.specs2._ import com.promindis.montecarlo.MathModule._ final class GCDTest extends Specification { def is = "GCD calculus specification" ^ p^ "GCD with macthing numbers should" ^ "Find a first known result" ! e1 ^ "Find a second known result" ! e2 ^ "Find known result with reversed paramteres" ! e3 ^ p^ "GCD with non macthing numbers should" ^ "assert on known mismacth" ! e4 ^ "assert on known mismacth with invert parameters" ! e5 def e1 = gcdOf(1989, 867) should be equalTo (51) def e2 = gcdOf(36, 27) should be equalTo (9) def e3 = gcdOf(27, 36) should be equalTo (9) def e4 = gcdOf(23, 11) should be equalTo (1) def e5 = gcdOf(11, 23) should be equalTo (1) } and then the solution: object MathModule { //......... def sorted(firstNumber: Int, secondNumber: Int): (Int, Int) = { if (firstNumber < secondNumber) (secondNumber, firstNumber) else (firstNumber, secondNumber) } def gcdOf(firstNumber: Int, secondNumber: Int): Int = { def gcdOf(pair: (Int, Int)): Int = { pair match { case (y, x) if (x == y) => x case (y, x) if x == 1 => 1 case (y, x) if x < y => gcdOf(sorted(y - x, x)) case (y, x) if x < y => gcdOf(sorted(x - y, y)) } } gcdOf(sorted(firstNumber, secondNumber)) } } One can admire the beauty of the self expressive pattern matching in Scala. 
Finally I need a Monte Carlo simulator. Writing the tests again with the beautiful specs2: import org.specs2._ final class MontecarloTest extends Specification{ def is = "Montecarlo simulation dshould" ^ p^ "have 100% success with always successful scenario" !e1 ^ "have 0% success with always failing scenario" !e2 def e1 = { def alwaysSuccessful(): Boolean = true val stats = Montecarlo.simulation(alwaysSuccessful, 100) stats._1 should (beEqualTo(100)) stats._2 should (beEqualTo(stats._1)) } def e2 = { def neverSuccessful(): Boolean = false val stats = Montecarlo.simulation(neverSuccessful, 100) stats._3 should (beEqualTo(stats._1)) } } helped me to get to : object Montecarlo { type test = () => Boolean val update: Map[Boolean, List[Int]] = Map[Boolean, List[Int]](true -> List(1,0), false -> (List(0, 1))) def simulation(onRunning: test, essays: Int): (Int, Int, Int) = { val results: List[Int] = Range(0, essays).foldLeft(List(0, 0)) { (stats: List[Int], index: Int) => (stats, update(onRunning())).zipped.map[Int, List[Int]](_ + _) } (essays, results(0), results(1)) } } Here the result is provided as 3-Tuple, returning the number of essays, the number of passed essays , then the number of failed essays. I suffered only on the map method application after zipping the resulting lists. The compiler seemed in need of some help with explicit typing in order to infer the returned type. The astute reader will note that we used the same trick as in Clojure, storing the increments values definitions for the two success/failure scenarii into a Map. 
I have all the tools I need to run a Cesaro test: import org.specs2.mutable.Specification final class PiResolutionTest extends Specification{ "Resolution of Pi" should { "be close to 9.9 " in { val epsilon = (9.9 - MathModule.estimatePiSquare()) epsilon should(beLessThan(0.05)) } } } and challenge it: object MathModule { def forCesaro(random: RNG): () => Boolean = { () => gcdOf(random(), random()) == 1 } def estimatePiSquare(): Double = { val stats = Montecarlo.simulation(forCesaro(RNG()), 10000) println(stats) val ratio: Double = int2double(stats._2) / stats._1.asInstanceOf[Double] println(ratio) println(6.0 / ratio) (6.0 / ratio) } //.............. } What was learnt? Well, living without branches is not easy, but worthwhile, because it helps in learning how to use and reuse functional programming bricks. But I also learnt I have to dig into the RNG code to find a smarter way to generate my numbers in Clojure. Maybe a stream-oriented approach would be better... Nice, got to finish chapter 11 of the Joy of Clojure and read one more chapter in Gul Agha's Actors book. Be seeing you! :) 2 comments: When you talk about two integers chosen at random, I believe you are talking about two integers between 0 and N, uniformly distributed, and then letting N grow to infinity. Right? Or else there's a confusion there. You are right! :) Thank you for noticing that.
From AS3 to C#, Part 20: Preprocessor Directives Today's article continues the series by looking at C#'s preprocessor support, which is like an expanded version of AS3's compile-time constants and conditional compilation. Read on to learn about all the strange (and powerful) #something lines you can put in your C# files. To recap AS3's support for compile-time constants, consider compiling some code like this: mxmlc -define MATH::pi,"3.14159265" MyApp.as That'll define a compile-time constant called MATH::pi which you can use like this: function circleArea(radius:Number): Number { return MATH::pi * radius * radius; } The MATH::pi value is not a variable. Instead, the compiler replaces it with whatever text you passed on the command line for -define. This means that MATH::pi gets replaced in the source code with 3.14159265 before the file is actually compiled. What gets compiled looks like this: function circleArea(radius:Number): Number { return 3.14159265 * radius * radius; } Since this happens before the code is compiled, we call it a "preprocessor" step. In AS3, it can also be used to conditionally remove blocks of code like so: // Only hide the context menu's built-in items in release builds CONFIG::release { var menu:ContextMenu = new ContextMenu(); menu.hideBuiltInItems(); this.contextMenu = menu; } The system, unfortunately, can do little more than this, so the coverage ends here. C#, however, has more functionality. Let's start with defining compile-time constants. C# already has const variables that take care of cases like the MATH::pi above. However, your code can define a boolean value like CONFIG::release above using #define: #define RELEASE This can then be used by a #if/#endif pair to check the value: #if RELEASE Debug.Log("Running in release mode"); #endif If RELEASE is defined by a #define or in the build settings, the Debug.Log line will be compiled. Otherwise, it gets stripped out just like with the AS3 conditional compilation.
As you might have guessed, there is a #else that works just like the normal else: #if RELEASE Debug.Log("Running in release mode"); #else Debug.Log("Running in debug mode"); #endif There is also a #elif: #if RELEASE Debug.Log("Running in release mode"); #elif QA Debug.Log("Running in QA mode"); #else Debug.Log("Running in debug mode"); #endif The #if and #elif directives can contain arbitrary boolean logic, just like the normal if: #if (RELEASE == true && !QA) Debug.Log("Release and not QA"); #endif Another trick is that you can conditionally define (with #define) and un-define (with #undef) these flags: #define RELEASE #if RELEASE // Release mode doesn't show FPS // Un-define in case it was ever defined #undef SHOWFPS #else // Debug mode shows FPS // Define it in case it wasn't defined before #define SHOWFPS #endif // Show FPS if we're supposed to // Note: don't need to know if this is based on debug/release #if SHOWFPS Debug.Log("FPS: " + curFPS); #endif One special case about the #define and #undef directives is that they must occur at the beginning of the file. You can have comments beforehand, but no other code. The #if, #elif, and #endif directives can occur anywhere else in the rest of the file, just like the rest of the preprocessor directives. Speaking of other directives, let's start with #error. This generates a compile-time error with a custom message: #define RELEASE #define SHOWFPS #if (RELEASE && SHOWFPS) #error Release can't show FPS #endif Or if you think an error is too harsh, you can use a #warning to generate a compile-time warning instead: #define RELEASE #define SHOWFPS #if (RELEASE && SHOWFPS) #warning Release shouldn't show FPS #endif Another kind of directive is #pragma, which tells the compiler to do something compiler-specific. For example, Microsoft's C# compiler supports #pragma warning disable X to disable compiler warning X and #pragma warning restore X to restore it.
Think of these like non-standard extensions to the language which may or may not work on specific compilers. The next directive is #line, which allows you to tell the compiler to change the line numbering and file name for the purposes of debugging. Here's how that works: int Add(int a, int b) // line: 1, file: Add { // line: 2, file: Add #line 100 "ThatFileWithAddInIt" int sum = a + b; // line: 100, file: ThatFileWithAddInIt return sum; // line: 101, file: ThatFileWithAddInIt #line default } // line: 7, file: Add You can also use #line to hide particular lines from the debugger: Debug.Log("this line can be debugged"); #line hidden Debug.Log("this line can NOT be debugged"); Debug.Log("this line can be debugged"); Finally, you can use #region and #endregion to mark off areas of the file. This is commonly respected by text editors and IDEs (e.g. Visual Studio, MonoDevelop) by collapsing the document's regions using code folding. For example, it's common to use regions to segment large files or sections of files: #region Usings using System; using System.Collections.Generic; using System.Linq; #endregion public struct Vector2 { #region Variables public float X; public float Y; #endregion #region Constructors public Vector2(float uniform) { X = uniform; Y = uniform; } public Vector2(float x, float y) { X = x; Y = y; } #endregion #region Add functions public void Add(Vector2 vec) { X += vec.X; Y += vec.Y; } public void Add(float val) { X += val; Y += val; } #endregion } Regions that are collapsed with code folding can yield a nice table-of-contents-style overview of the file that can be drilled into by expanding just the region you're interested in. That wraps up today's coverage of the preprocessor in C#. The following side-by-side comparison shows the differences between it and the closest relative in AS3: compile-time definitions.
//////// // C# // //////// // Define compile-time constant #define DEBUG #if (DEBUG || QA) #define SHOWFPS #elif TESTING #define SHOWFPS #else // Undefine compile-time constant #undef SHOWFPS #endif public class FramerateDisplayer { public void Display(float rate) { // Only include if defined by preprocessor #if SHOWFPS #if RELEASE // Trigger compile-time error #error Can't show FPS in release #endif #if QA // Trigger compile-time warning #warning Shouldn't show FPS in QA #endif Debug.Log("FPS: " + rate); #endif } } // Hide next line from the debugger #line hidden int result = 123 + 456; // Change line numbering and file name #line 100 "SomeOtherFile" int masked = 123 + 456; // Restore line numbering #line default // Trigger compiler-specific functionality #pragma warning disable 12345 float x = 3.14f; // Mark a region of the file #region Usings using System; using System.Collections.Generic; using System.Linq; #endregion ///////// // AS3 // ///////// // Define compile-time constant // {only in build settings} // Undefine compile-time constant // {impossible in AS3} public class FramerateDisplayer { public function display(rate:Number): void { // Only include if defined by preprocessor SETTINGS::SHOWFPS { SETTINGS::RELEASE { // Trigger compile-time error // {impossible in AS3} } SETTINGS::QA { // Trigger compile-time warning // {impossible in AS3} } trace("FPS: " + rate); } } } // Hide next line from the debugger // {impossible in AS3} // Change line numbering and file name // {impossible in AS3} // Restore line numbering // {impossible in AS3} // Trigger compiler-specific functionality // {impossible in AS3} // Mark a region of the file // {impossible in AS3} That's all for today. Stay tuned for next week when we'll continue the series with even more exciting new features in C#! Spot a bug? Have a question or suggestion? Post a comment! #1 by Merlin on December 6th, 2014 · | Quote “#region” rather, it is the function of editor.
FlashDevelop has it. #2 by jackson on December 6th, 2014 · | Quote That's a good clarification. It is impossible in AS3, but possible in non-standard language extensions implemented by text editors and IDEs like FlashDevelop.
How to Customize Firefox About:Blank Page Do you want to have a custom Firefox about:blank page on Windows? It's pretty easy if you follow these steps! Steps - 1 First, open your Firefox profile folder and look for a folder named in the pattern [randomtext].[profilename]. For most of us there should only be one folder like this; for others (who created multiple profiles) there will be multiple folders. If there are multiple folders, select the folder corresponding to the profile that you want to customize about:blank for. - 2 Once you are in the [randomtext].[profilename] folder, look for the chrome folder. In that folder, find userContent-example.css and rename it to userContent.css. - 3 Next, open userContent.css in Notepad or any other text editor (Notepad is preferred over something like Word because it is simpler) and add the following code to the end of the file: @namespace url(); @-moz-document url("about:blank") { { background: url('[INSERT IMAGE URL HERE]'); background-color: #000000; background-position: center center; background-attachment: fixed; background-repeat: no-repeat; }} - Where it says [INSERT IMAGE URL HERE], put the URL of either an image from the web or the path of an image on your computer (Ex: Pictures/desert.jpg). - After you have edited the settings as you please, save the file and restart Firefox (if you had it open). You should then see your custom background every time you open a new tab. Sources and Citations - Original source, shared with permission.
Welcome to the Work with Selected Classes from the Java API tutorial offered by Simplilearn. The tutorial is a part of the Java certification course. In this tutorial, we will work with java.util classes, and with classes that allow us to format and work with date and time. In the next section, we will look at the objectives of the Work with Selected Classes from the Java API tutorial. From this tutorial, you will learn to: - Create and manipulate strings - Manipulate data using the StringBuilder class and its methods - Work with the StringBuffer class - Create and manipulate calendar data - Declare and use an ArrayList of a given type - Write a simple Lambda expression that consumes a Lambda Predicate expression Let's begin working with selected classes from the Java API, starting with strings. A string is an object that represents a sequence of character values. An array of char works much the same way as a Java string. For example, given below is a character array, which has a sequence of characters associated with it: char[ ] ch = {'s', 'a', 'm', 'p', 'l', 'e', ' ', 'j', 'a', 'v', 'a'}; String s = new String(ch); Or String s = "sample java"; When we create a string object by saying String s is equal to a new String and pass the character array to it, it converts the character array into a string. In this example, string 's' can either be assigned a direct string value like "sample java" or be converted from a character array into a string. In the next section, we will look at String objects in Java. The java.lang.String class is used to create a string object. String objects are immutable and cannot be modified or changed. This means that, once a string object is created, its data or state cannot be changed; however, a new string object can be created. Consider the example shown.
class Testimmutablestring { public static void main(String args[ ]) { String s = "Abdul"; s.concat("Kalam"); //concat() method appends the string at the end System.out.println(s); //will print Abdul because strings are immutable objects } } Here, we have a simple class with the entry point of a static void main. The string object 's' is equal to "Abdul". Next, we try to append a new value, "Kalam", to the string. The call s.concat("Kalam") means that we are adding the value Kalam after the value Abdul; we then print out the string. We observe that the output is still Abdul. This means that strings are immutable: once a string object is created and a value is assigned to it, that value cannot be modified at the memory location where the string object was allocated. If modification of the string value is required, a new memory location has to be allocated, and only that new object will hold the modified value. Thus we infer that string objects are immutable and cannot be modified or changed at their current memory location. In the next section, we will look at the list of symbols in the regular expression.
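As an aside before moving on: the StringBuilder class listed in the tutorial objectives is the mutable counterpart to String. The following small sketch (the class name is mine, not from the tutorial) contrasts the two behaviors:

```java
// Illustration sketch, not from the Simplilearn tutorial.
class MutabilityContrast {
    public static void main(String[] args) {
        // String: concat() returns a NEW object; the original is untouched
        String s = "Abdul";
        s.concat("Kalam");      // the returned value is discarded
        System.out.println(s);  // prints: Abdul

        // StringBuilder: append() mutates the same underlying object
        StringBuilder sb = new StringBuilder("Abdul");
        sb.append(" Kalam");
        System.out.println(sb); // prints: Abdul Kalam
    }
}
```

This is why StringBuilder is usually preferred when a value is built up in many steps: no new object is allocated per modification.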
The carrot symbol matches the beginning of the line and the dollar matches the end of the line. In the next section let us see an example of using some of the regular expressions along with the matches method of the Java API. In the next section, we will look at the matches() method in Java. Some of the main points under the matches() method include - A matches() method checks, whether the string matches with a specified regular expression. If the string fits in the specified regular expression, then this method returns true. Otherwise, it will return false. Let us consider the example given below. public class MatchesExample{ public static void main(String args[ ]){ String str = new String("Java String Methods"); System.out.print("Regex: (.*)String(.*) matches string? " ); System.out.println(str.matches("(.*)String(.*)")); System.out.print("Regex: (.*)Strings(.*) matches string? " ); System.out.println(str.matches("(.*)Strings(.*)")); System.out.print("Regex: (.*)Methods matches string? " ); System.out.println(str.matches("(.*)Methods")); } } Output: Regex: (.*)String(.*) matches string? true Regex: (.*)Strings(.*) matches string? false Regex: (.*)Methods matches string? true In the above set of codes, String str is equal to a new string. This string provides the java string methods. Next, we want to do a match. Thus, we use the string dot matches dot asterisk string dot asterisk, this means, we're looking for the word ‘String’ if it is present anywhere within the ‘Java String Methods’ string value. If it is a match, it renders true and if you look at the output it says that the string is actually present in this line java string methods. 
In the second matches that we do, we check if the word ‘strings’ with an s is present inside this line now since this is not present it will return false the third match checks if the word methods is actually present in the street and here we see that the word methods is actually present in the line ‘Java String Methods’, now since it is not present, it returns as false. The third match checks if the word methods are present in the ‘Java String Methods’ line, and since the word methods are present, the value returned will be true. In the next section, we will look at the character sequence interface in Java. The character sequence interface used in Java is used to represent a sequence of characters. It is implemented by the String, StringBuffer, and StringBuilder classes available in the Java API. [image] Thus, all three classes implement the character sequence interface. Java string class provides many methods to perform operations on the string, such as - compare() - which is used to compare two strings and checking quality, concat() - if you want to add the value of the two strings together, equals() - if you want to check whether the value that both the strings hold are the same, split() - whether you want to split and break the string into small parts and tokens based on a separator like a comma or space, length() - to give you the length of the string, replace() - if you want to replace one word in the string of one value or character in the string, compareTo(), intern(), substring(), and so on. These are the methods defined in the character sequence interface and hence, most of the string class available in java will give the set of methods available on them. In the next section, we will look at the compareTo() Method in Java. The function of the compareTo() method in Java includes - The compareTo() method compares a given string with the current string lexicographically. 
This means it will actually do a comparison based on the ASCII values of the characters present in the strings. It returns a positive number, a negative number, or 0: if string s1 is greater than s2, it returns a positive number; if string s1 is less than s2, it returns a negative number; and if string s1 is equal to s2, it returns zero. That is: if s1 > s2, it returns a positive number; if s1 < s2, it returns a negative number; if s1 == s2, it returns 0. Let's look at the compareTo() method with an example. Here, we are declaring a set of strings: s1, s2, s3, s4, s5. The first two strings have the same value assigned to them, and the other three strings have different values. public class CompareToExample { public static void main(String args[ ]) { String s1="Hello"; String s2="Hello"; String s3="Meklo"; String s4="Hemlo"; String s5="Flag"; System.out.println(s1.compareTo(s2)); //0 because both are equal System.out.println(s1.compareTo(s3)); //-5 because "H" is 5 less than "M" System.out.println(s1.compareTo(s4)); //-1 because "l" is 1 less than "m" System.out.println(s1.compareTo(s5)); //2 because "H" is 2 greater than "F" }} Output: 0 -5 -1 2 We can infer from the above code that: s1.compareTo(s2) returns zero because both strings have the same value. s1.compareTo(s3) returns minus five because 'H' is 5 less than 'M' in terms of its ASCII code. s1.compareTo(s4) returns minus one because 'l' is 1 less than 'm' in terms of the ASCII code. s1.compareTo(s5), which compares "Hello" with "Flag", returns 2 because 'H' is 2 greater than 'F' from the ASCII code perspective. The output of this particular example is also shown under the code box. In the next section, let us look at the concat() method in Java.
The contact method available as part of the string class combines the specified string to the end of the current string. In short, it simply does an addition or an append operation. It returns a combined stream. Therefore, the method defined below is the public string concat another string; So that it takes another string and appends it to the existing string. public String concat(String anotherString) Next, let us look at an example of the contact method. Let us now look at an example of the concat method. public class ConcatExample { public static void main(String args[ ]) { String s1="java string"; s1.concat(“Sample"); System.out.println(s1); s1=s1.concat(" Sample example for String Concat"); System.out.println(s1); }} Output: Sample Sample example for String Concat In the example shown above, we have a class called ConcatExample, an entry point method called static void main, String s1 with a start value of "java string". To it, we are saying, s1.concat and we're passing the value sample. Now when we print s1, you can observe the output as it is shown under the code snippet. Now, when we say ‘s1=s1.concat(" Sample example for String Concat")’, we observe that since we have now concatenated this new value and stored the return back into s1, and since the strings are immutable there is a new memory location that has been created. Thus, s1 has started pointing to this new memory location. The new memory location where the object s1 has been allocated now holds the new value which says “Sample example for String Concat” and a reference to this new memory location is returned and stored in s1. Hence, when we say s1, we get the new value which was concatenated to the initial value, which is- ‘Sample example for String Concat’. If we hadn’t assigned the reference of the new memory location back to s1, then it wouldn’t have worked and we would have got back the old value because strings are immutable and the value cannot be updated at the original memory location. 
In the next section, let us learn about the equals() method in Java. The String equals() method compares two given strings based on their content. If any character does not match, it returns false; if all characters match, it returns true. It overrides the equals() method of the Object class.

public boolean equals(Object anotherObject)

Let us now look at an example of the equals() method. Here, we declare four strings, each with its own value. Since the values of String s1 and String s2 are the same, the call s1.equals(s2) returns true, as both the content and the casing are the same. String s1 and String s3 have the same letters but different casing, so s1.equals(s3) returns false.

public class EqualsExample {
public static void main(String args[ ]) {
String s1="Sample";
String s2="Sample";
String s3="SAMPLE";
String s4="Java";
System.out.println(s1.equals(s2)); //true because content and case are the same
System.out.println(s1.equals(s3)); //false because case is not the same
System.out.println(s1.equals(s4)); //false because content is not the same
}}

Output:
true
false
false

In this example, we declare four strings. String s1 and s2 have the same value, so s1.equals(s2) returns true because the content and the casing are the same. On the other hand, for s1.equals(s3) the casing is not the same: we have "Sample" and "SAMPLE" in all caps, so the value returned is false. In the last line of the code, s1.equals(s4) tries to compare the word "Sample" with the word "Java" to check whether they have the same value.
It is quite evident that the value returned will be false, because the content held in the two variables is not the same. Let us now move on to the next method available for string operations, which is the split() method. In the split() method, the string is split based on a given regular expression, and an array of strings is returned.

public String[ ] split(String regex)
and,
public String[ ] split(String regex, int limit)

Here, two overloads are available for split(). We can either pass a regex expression, which splits the string wherever it finds a match for that expression, or we can pass a regex expression together with a limit, which caps the number of times the pattern is applied. Let us look at an implementation of this method.

public class SplitExample {
public static void main(String args[ ]) {
String s1="java string split method sample";
String[ ] words=s1.split("\\s"); //splits the string based on whitespace
//using java foreach loop to print elements of string array
for(String w:words) {
System.out.println(w);
}
}}

Output:
java
string
split
method
sample

Here, we have the entry point public static void main and a String s1 with the value "java string split method sample". We then call the split function and pass "\\s", which matches whitespace. So every time a blank or space is encountered within the string, the string is split and the pieces are stored in the array called words. Using a for loop as shown in the code, for every temporary String object w found inside the array words, we print that value. Therefore, w stands for every value existing in the words array of strings.
And, since the split function has split the string into individual values every time it found a space, the words array contains individual words based on the splitting that has been done. Therefore, when we iterate over this words array, it first prints "java"; because of the space after it, "string" becomes a new element in the words array. Once again, the word "split" has a space after it, so it is cut and put in as a separate element in the words array, and the same happens for the words "method" and "sample". The practical use of these methods matters: while working with live data from a database, when you want to split up content that is sent to you so that you can process its individual elements, these methods are really useful. In the next section, let us look at the substring() method in Java. The substring() method is a method on the String class that returns a new string that is a part of the primary string. It has two overloads: it takes a start index, or a start and an end index. Let us now look at an example.

public String substring(int startIndex)
and
public String substring(int startIndex, int endIndex)

Example:
public class SubstringExample {
public static void main(String args[ ]) {
String s1="samplesubstring";
System.out.println(s1.substring(2,4)); //returns mp
System.out.println(s1.substring(2)); //returns mplesubstring
}}

Output:
mp
mplesubstring

We have String s1="samplesubstring". When we say s1.substring(2,4), on a zero-based index, 's' is at zero, 'a' is at one, and 'm' is at two, so it starts at index 2. The end index 4 is exclusive: 'p' is at index 3, so the characters at indexes 2 and 3 are returned, which is "mp".
In the second line of code, we see s1.substring(2), which means it takes the string from index 2 onwards; since it is a zero-based index, 's' is at zero, 'a' is at one, and 'm' is at two. So it starts at 'm', and since we haven't given an end index, it simply returns the entire rest of the string from index 2, which is what is obtained as the output of this program. In the next section, we will learn the format() method in Java. Let us now go to the next method available on the String class, which is the format() method.

public static String format(String format, Object... args)
and,
public static String format(Locale l, String format, Object... args)

The format() method has two overloads. You can pass the format string and some object arguments, or you can additionally pass in locale information. Let us now take a look at an example.

Example:
public class FormatExample {
public static void main(String args[ ]) {
String name="james";
String sf1=String.format("name is %s",name);
String sf2=String.format("value is %f",32.33434);
String sf3=String.format("value is %32.12f",32.33434); //returns 12 char fractional part filling with 0
System.out.println(sf1);
System.out.println(sf2);
System.out.println(sf3);
}}

Output:
name is james
value is 32.334340
value is                  32.334340000000

Here we can observe a string name "james"; we format this name with a %s specifier, so the printed output says "name is james". The next value is formatted with a %f specifier, and the value 32.33434 is printed with the default six-digit fractional part. Consider the third format specifier, %32.12f, which pads the value to a total width of 32 characters with a 12-digit fractional part filled with zeros; the outcome of using these format specifiers can be observed in the output.
Thus, these are the various format specifiers that we can pass to the format() method, and it will format the string based on the specifier that is provided. In the next section, we will look at the StringBuilder class in Java. Let us now work with another selected class from the Java API: StringBuilder. The StringBuilder class, like the String class, has a length() method that returns the length of the character sequence in the builder. It is used to create mutable strings, that is, strings that can be modified in place at the existing memory location they occupy: you can modify the current value at the same location where the original object was created and stored. The objects are like String objects, except that they can be modified at the current location where they are stored in memory. Internally, these objects are treated as variable-length arrays that contain a sequence of characters. The StringBuilder class has a series of methods available with it, described below. In the next section, we will look at an example of the StringBuilder class in Java. In this illustration, we see that when we create a StringBuilder object, we actually get an initial capacity of sixteen characters. If we call sb.append("Greetings"), only nine of the sixteen characters of capacity are occupied.

// creates empty builder, capacity 16
StringBuilder sb = new StringBuilder();
// adds 9 character string at beginning
sb.append("Greetings");

In the next section, we will look at the StringBuilder methods in Java. The StringBuilder class has a method called setLength(int newLength), which sets the length of the character sequence. If newLength is less than the current length, the last characters in the character sequence are truncated. If newLength is greater than the current length, null characters are added at the end of the character sequence.
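The setLength() behavior just described can be seen in a short sketch (a hypothetical example, not part of the original lesson):

```java
public class SetLengthExample {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("Greetings");
        System.out.println(sb.length());  // prints: 9

        sb.setLength(5);                  // newLength < length: trailing characters truncated
        System.out.println(sb);           // prints: Greet

        sb.setLength(7);                  // newLength > length: null characters (code point 0) appended
        System.out.println(sb.length());  // prints: 7
    }
}
```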
The next method available in the class is ensureCapacity(int minimumCapacity), which takes an integer value, the minimum capacity. It ensures that the capacity of the StringBuilder's storage space is at least equal to the specified minimum capacity you have provided. In the next section, let us look at the various StringBuilder methods in Java. There is an additional set of StringBuilder methods available. The append method has multiple overloads, which take booleans, characters, character arrays, floats, integers, long values, and strings. These append the argument you pass to the existing value held in the StringBuilder object, and the data is automatically converted to a string before the append operation takes place. The next methods available are delete(int start, int end), which takes the start and end positions, and deleteCharAt(int index), which takes an index position. The first method deletes the sequence from start up to, but not including, end in the StringBuilder's character sequence. The second method, deleteCharAt, deletes the character located at the index position. insert is different from append: append adds the new value at the end of the string, while with insert you can inject the new value somewhere in the middle of the string, based on the offset value you provide. The insert method on the StringBuilder also has multiple overloads: you can give the offset together with a boolean, character, character array, double, float, or integer value that you want to insert at a particular position inside the StringBuilder. Some points about StringBuffer are given below -

A StringBuffer is a string that can be modified. At any point in time, it contains a particular sequence of characters, but the length and content of the sequence can be changed through certain method calls.
The append method adds the characters at the end of the buffer.
The insert method adds the characters at a specified point.
The principal operations on a StringBuffer are the append and insert methods, which are overloaded so as to accept data of any type. The append method always adds the characters at the end of the buffer; the insert method adds the characters at a specified point.

Let's look at an example of working with StringBuffer.

class StringBufferExample {
public static void main(String args[ ]) {
StringBuffer sb=new StringBuffer("Welcome ");
sb.append("Java"); //now original string is changed
System.out.println(sb); //prints Welcome Java
}
}

This is an example of the append method. Here we see a StringBuffer object with a start value of "Welcome ". The command sb.append("Java") means that we want to add the word "Java" to the existing string "Welcome ". Next, we print out the string; the output obtained is "Welcome Java". This proves that these objects are mutable: the value of the existing object present in memory can be changed and updated at the same location. Next, we will look at an example of the insert method.

class StringBufferExample2 {
public static void main(String args[ ]){
StringBuffer sb=new StringBuffer("Welcome ");
sb.insert(1,"Java"); //now original string is changed
System.out.println(sb); //prints WJavaelcome
}
}

Here, we create a StringBuffer object and assign it the value "Welcome ". Then we call sb.insert(1, "Java"), which inserts "Java" at position 1. Since position 1 is the position of the letter 'e' in the string "Welcome", right after the letter 'W', when we print the object we see the word "Java" has been inserted in the middle of the current string. Thus the output obtained is "WJavaelcome". In the next section, let us look at the comparison between StringBuilder and StringBuffer. Let us now compare StringBuilder and StringBuffer.
In StringBuilder, the data object is mutable, which means that once you store a value at a memory location using a StringBuilder object, you are free to change that value. In StringBuffer, the data object is likewise mutable: the data you store at the memory location using a StringBuffer object can be updated. The key difference is that StringBuffer's methods are synchronized, making it safe to use from multiple threads, while StringBuilder is not synchronized and is therefore faster in single-threaded code. In this section, let us learn in detail about mutable and immutable objects. An object is mutable when you can change its value in place, without creating a new object in memory. For example,

int i=0;
while(i<10) {
System.out.println(i);
i+=1;
}

In this example, we have int i=0. The variable i is mutable: in memory we have a value called i on the stack with an initial value of zero. As the while loop progresses, we can observe that i keeps getting updated with new values, 1, 2, 3, and so on, at the same location where the current value of i is stored. Therefore, it is mutable, as the memory location can be updated with a new value. An object is immutable when you cannot change its value once it is referenced. The only thing that can be done is to re-assign the reference, which means it will start pointing to a new memory location, and the old value is left behind. Some of the classes that are immutable in Java are the wrapper classes like Integer, Float, Double, Character, and Byte. For example,

Integer a=0;
while(a<10) {
System.out.println(a);
a+=1;
}

In this example, we have taken an Integer value a. Since Integer is immutable, each a+=1 does not modify the existing object; autoboxing creates a new Integer object, and a is re-pointed to it. Let's move on to the creation and manipulation of calendar data. We will first look at the java.util.Date class. The java.util.Date class represents a specific instant in time, with millisecond precision. It offers methods and constructors to deal with date and time in Java.
It implements the Serializable, Cloneable, and Comparable<Date> interfaces. It is extended by the following classes:

java.sql.Date
java.sql.Time
java.sql.Timestamp

A java.util.Date object can be created as shown in the example below. Here, we create an object of this class with new java.util.Date, passing the milliseconds to it, and we print it with System.out.println(date) to obtain the output.

Example of printing a date in Java using the java.util.Date class:

long millis=System.currentTimeMillis();
java.util.Date date=new java.util.Date(millis);
System.out.println(date);

Output:
Wed Mar 27 08:22:02 IST 2017

Let us now learn about the SimpleDateFormat class in Java. In Java, SimpleDateFormat is a concrete class that provides methods to format and parse date and time. It inherits the java.text.DateFormat class. Let us now look at an example of formatting a date in Java using the java.text.SimpleDateFormat class. In the example shown below, we import java.text.SimpleDateFormat so that we can use APIs from that package. We also import java.util.Date.

Example:
import java.text.SimpleDateFormat;
import java.util.Date;
public class SimpleDateFormatExample {
public static void main(String[ ] args) {
Date date = new Date();
SimpleDateFormat formatter = new SimpleDateFormat("dd/MM/yyyy");
String strDate= formatter.format(date);
System.out.println(strDate);
}
}

Output:
13/04/2017

Thus, the output obtained is the date formatted in the dd/MM/yyyy pattern that we provided to the SimpleDateFormat class. Some of the main points about java.util.Calendar and GregorianCalendar include -

In Java, the java.util.Calendar class is used to perform date and time arithmetic.
Java only comes with a Gregorian calendar implementation, the java.util.GregorianCalendar class.
GregorianCalendar is a subclass of Calendar, which provides the standard calendar system that is used globally.

Calendar calendar = new GregorianCalendar();

In this code sample, we create an object of type Calendar and associate it with a new GregorianCalendar object. You can create and manipulate calendar data using the following classes:

java.time.LocalDate
java.time.LocalTime
java.time.LocalDateTime
java.time.Period
java.time.format.DateTimeFormatter

Next, we will learn about these classes to understand what they do. LocalDate is an immutable class with the default date format of yyyy-MM-dd. The functionality is similar to the java.sql.Date API.

import java.time.LocalDate;
import java.time.ZoneId;
public class LocalDateExample {
public static void main(String[ ] args) {
LocalDate localDateToday = LocalDate.now();
System.out.println("Today's Date : "+localDateToday);
LocalDate localDateZone = LocalDate.now(ZoneId.of("America/Los_Angeles"));
System.out.println("Today's Date at Zone America/Los_Angeles : "+localDateZone);
}
}

Output:
Today's Date : 2017-06-14
Today's Date at Zone America/Los_Angeles : 2017-06-14

Thus, we can see that the output gives us the date localized to a particular country and city. This is the zone capability: a ZoneId can be supplied when you want to generate a date for a specific time zone. The java.time.LocalTime class is similar to LocalDate. It provides a human-readable time in the format HH:mm:ss.zzz. This class also provides ZoneId support to get the time for a given ZoneId.
import java.time.LocalTime;
import java.time.ZoneId;
public class LocalTimeExample {
public static void main(String[ ] args) {
LocalTime currentTime = LocalTime.now();
System.out.println("Current Time : " + currentTime);
LocalTime localTimeZone = LocalTime.now(ZoneId.of("America/Los_Angeles"));
System.out.println("Current Time at America/Los_Angeles : " + localTimeZone);
}
}

Output:
Current Time : 15:37:00.518
Current Time at America/Los_Angeles : 03:07:00.518

On running this code, the time that is printed is localized to that particular city and country. LocalDateTime is an immutable object that represents a date-time. The default format for the date-time value is yyyy-MM-dd'T'HH:mm:ss.zzz. The LocalDateTime class provides a factory method that takes LocalDate and LocalTime arguments to create a LocalDateTime instance.

import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.ZoneId;
public class LocalDateTimeExample {
public static void main(String[ ] args) {
LocalDateTime localDateTime = LocalDateTime.now();
System.out.println("Current Date Time : " + localDateTime);
LocalDateTime localDateTimeZone = LocalDateTime.now(ZoneId.of("America/Los_Angeles"));
System.out.println("Current Date Time at America/Los_Angeles : " + localDateTimeZone);
}
}

Output:
Current Date Time : 2017-06-14T15:37:00.518
Current Date Time at America/Los_Angeles : 2017-06-14T03:07:00.518

The output obtained is in year, month, day, hours, minutes, seconds, and milliseconds. The moment we create another object of the class and pass the ZoneId to it, it gives the same output with localized time information specific to that particular country and city. Let us now look at the code for the java.time.Period class.
import java.time.LocalDate;
import java.time.Period;
public class PeriodExample {
public static void main(String[ ] args) {
LocalDate localDate1 = LocalDate.of(2016, 06, 16);
LocalDate localDate2 = LocalDate.of(2017, 10, 15);
Period period = Period.between(localDate1, localDate2);
System.out.println("16-June-2016 to 15-October-2017 : Years (" + period.getYears() + "), Months(" + period.getMonths() + "), Days(" + period.getDays() + ")");
}
}

Output:
16-June-2016 to 15-October-2017 : Years (1), Months(3), Days(29)

The java.time.Period class provides the quantity or amount of time in terms of years, months, and days. The older date/time classes in the java.util package are effectively superseded; the new date/time handling classes are part of the java.time package.

import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
public class DateTimeFormatterExample {
public static void main(String[ ] args) {
DateTimeFormatter dateTimeFormatter1 = DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss z");
DateTimeFormatter dateTimeFormatter2 = DateTimeFormatter.ofPattern("yyyy/MM/dd");
DateTimeFormatter dateTimeFormatter3 = DateTimeFormatter.ofPattern("dd/MMM/YYYY");
ZonedDateTime zonedDateTime = ZonedDateTime.now();
String formatter1 = dateTimeFormatter1.format(zonedDateTime);
String formatter2 = dateTimeFormatter2.format(zonedDateTime);
String formatter3 = dateTimeFormatter3.format(zonedDateTime);
System.out.println(formatter1);
System.out.println(formatter2);
System.out.println(formatter3);
}
}

Output:
2017/06/15 15:14:51 IST
2017/06/15
15/Jun/2017

Let us now look at the use of the ArrayList class in the Java API. The Java ArrayList class inherits the AbstractList class and implements the List interface. It uses a dynamic array for storing its elements, which means that it does not have a fixed capacity, length, or size.
Depending on the number of elements that you add to the ArrayList, being a collection class, it keeps growing its size according to the values stored inside it. A raw ArrayList can also hold elements of different types at the same time, unlike an array. The important points about the Java ArrayList are as follows -

It maintains the insertion order, in terms of the sequence in which you insert elements into the list.
It can contain duplicate elements.
It allows random access: like an array, it works on index positions, so you can randomly access elements inside the ArrayList.
It is non-synchronized.
In the Java ArrayList class, manipulation is slow, because a lot of shifting of elements in memory has to occur if any element is removed or deleted from the ArrayList.

In the next section, let us learn about the class declaration and its use. The java.util.ArrayList class provides a resizable array and implements the List interface:

public class ArrayList<E> extends AbstractList<E> implements List<E>, RandomAccess, Cloneable, Serializable

Let us now look at an example of ArrayList.

import java.util.*;
class TestCollection1 {
public static void main(String args[ ]){
ArrayList<String> list=new ArrayList<String>(); //Creating arraylist
list.add("John"); //Adding object in arraylist
list.add("James");
list.add("Mathews");
list.add("Nitin");
//Traversing list through Iterator
Iterator<String> itr=list.iterator();
while(itr.hasNext()) { // while a record is still available in the arraylist
System.out.println(itr.next()); // print the available value
}
}
}

Output:
John
James
Mathews
Nitin

Let us look at predicates with lambda expressions in Java. A Predicate takes an object and returns a boolean, and a lambda expression is a compact way of supplying one; together they can be used to filter a collection, for example, filtering a list of children by age and printing each child.getAge(). A lambda expression allows passing an entire expression as a parameter to a function.
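The predicate-based filtering described above can be sketched as follows. The Child class, its getAge() accessor, and the age cut-off of ten are illustrative assumptions rather than details taken from the original listing:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateExample {
    // Illustrative data class standing in for the tutorial's "database" of children.
    static class Child {
        private final int age;
        Child(int age) { this.age = age; }
        int getAge() { return age; }
    }

    public static void main(String[] args) {
        List<Child> children = new ArrayList<>();
        children.add(new Child(4));
        children.add(new Child(9));
        children.add(new Child(13));

        // The lambda expression is the entire filtering condition, passed as a value.
        Predicate<Child> isUnderTen = child -> child.getAge() < 10;

        List<Child> filtered = children.stream()
                                       .filter(isUnderTen)
                                       .collect(Collectors.toList());

        for (Child child : filtered) {
            System.out.println(child.getAge()); // prints 4, then 9
        }
    }
}
```

Written without the lambda, the same filtering would need an explicit loop with an if condition; the predicate reduces it to a single expression.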
It also gives the capability to write less code for filtering and sorting data, by making use of the arrow operator. Thus we can understand that a lambda expression and a predicate, used together, greatly reduce the effort and the number of lines of code we would otherwise write for the same program using explicit if conditions. Let us now recap what we have learned in this "Work with selected classes from the Java API" tutorial:

StringBuilder objects are like String objects, except that they can be modified. Internally, these objects are treated like variable-length arrays that contain a sequence of characters.
In Java, a string is basically an object that represents a sequence of char values. An array of characters works the same way as a Java string.
You can create and manipulate calendar data using classes such as java.util.Calendar and the java.time classes (LocalDate, LocalTime, LocalDateTime, Period, DateTimeFormatter).
The Java ArrayList class uses a dynamic array for storing its elements. It inherits the AbstractList class and implements the List interface.
A predicate can be written concisely using a lambda expression.

With this, we come to the end of the "Work with selected classes from the Java API" tutorial.
setegid - set the effective group ID

SYNOPSIS
#include <unistd.h>
int setegid(gid_t gid);

DESCRIPTION
If gid is equal to the real group ID or the saved set-group-ID, or if the process has appropriate privileges, setegid() shall set the effective group ID of the calling process to gid; the real group ID, saved set-group-ID, and any supplementary group IDs shall remain unchanged. The setegid() function shall not affect the supplementary group list in any way.

RETURN VALUE
Upon successful completion, 0 shall be returned; otherwise, -1 shall be returned and errno set to indicate the error.

ERRORS
The setegid() function shall fail if:

EINVAL  The value of the gid argument is invalid and is not supported by the implementation.
EPERM   The process does not have appropriate privileges and gid does not match the real group ID or the saved set-group-ID.

The following sections are informative.

EXAMPLES
None.

APPLICATION USAGE
None.

RATIONALE
Refer to the RATIONALE section in setuid().

FUTURE DIRECTIONS
None.

SEE ALSO
exec(), getegid(), geteuid(), getgid(), getuid(), seteuid(), setgid(), setregid(), setreuid(), setuid()
#include <hallo.h>

Dr. Guenter Bechly wrote on Sat Jun 09, 2001 at 03:41:21PM:
> iceme

A person skilled in python coding should adopt this one. There are two bugs as far as I can see (I reported one to the BTS), and the upstream is no longer maintaining the package. If there is no one ready for this task, I would adopt it (unwillingly; I would have to learn python).

Gr{us,eeting}s,
Eduard.
--
> To do is to be (Karl Marx)
> To be is to do (Jean Paul Sartre)
> Do be do be do (Frank Sinatra)
jabbadabbadoo (Fred Feuerstein)
read, pread, pread64 - read from a file

SYNOPSIS
#include <sys/types.h>
#include <unistd.h>

ssize_t pread(int fildes, void *buf, size_t nbyte, off_t offset);
ssize_t pread64(int fildes, void *buf, size_t nbyte, off64_t offset);
ssize_t read(int fildes, void *buf, size_t nbyte);

DESCRIPTION
The read() function attempts to read nbyte bytes from the file associated with the open file descriptor fildes into the buffer pointed to by buf. If the number of bytes requested is 0, read() returns 0 and has no other results.

On files that support seeking, such as regular files, the read starts at the position given by the file offset associated with fildes, and the file offset is incremented by the number of bytes actually read. On files that do not support seeking, such as terminals, reads always occur from the current position. The value of a file offset associated with such a file is undefined.

No data transfer occurs past the current end-of-file. If the starting position is at or after the end-of-file, 0 is returned. If the file refers to a device or special file, the result of subsequent read() requests depends on the device. Reads larger than SSIZE_MAX are unsupported.

When attempting to read from an empty pipe or FIFO:

- If no process has the pipe open for writing, read() returns 0 and indicates end-of-file.
- If some process has the pipe open for writing, and O_NONBLOCK is set, read() returns -1 and sets errno to EAGAIN.
- If some process has the pipe open for writing, and O_NONBLOCK is clear, read() blocks the calling thread until some data is written or the pipe is closed by all processes that had it open for writing.

When attempting to read from a file other than a pipe or FIFO where no data is currently available:

- If O_NONBLOCK is set, read() returns -1 and sets errno to EWOULDBLOCK.
- If O_NONBLOCK is clear, read() blocks the calling thread until some data becomes available.
- The use of the O_NONBLOCK flag has no effect if there is some data available.

The pread() and pread64() functions perform the same action as read(), except that they read from the given position offset in the file without changing the file offset; pread64() takes a 64-bit offset. Upon successful completion, if the number of bytes requested is greater than 0, read() marks for update the st_atime field of the file.

PARAMETERS

- fildes Is the file descriptor that references an open file.
- buf Points to the buffer to place the read information into.
- nbyte Specifies the maximum number of bytes to attempt to read.
- offset Specifies the point in the file at which pread() or pread64() begins reading.

RETURN VALUES
If successful, these functions return a non-negative integer that indicates the number of bytes read. The number of bytes read may be less than the number of bytes requested in any of the following conditions:

- The number of bytes left in the file is less than the requested length.
- The read() was interrupted by a signal.
- The file is a pipe or FIFO or special device and has fewer bytes than requested immediately available for reading.

On error, these functions return -1, and set errno to one of the following values:

- EAGAIN The O_NONBLOCK flag is set for the file descriptor and the process would be delayed.
- EBADF The fildes parameter is not a valid file descriptor open for reading.
- EFAULT The buf parameter is not a valid pointer, or the buffer was overrun during the read() request.
- EINTR The read(), pread(), or pread64() request was interrupted by a signal.
- EIO A physical I/O error occurred.
- EISDIR The fildes parameter refers to a directory. Use readdir() to read from directories.
- ENXIO A request was made of a non-existent device, or the request was outside the capabilities of the device.
- EOVERFLOW For all functions, the file is a regular file, nbyte is greater than 0, the starting position is before the end-of-file, and the starting position is greater than or equal to the offset maximum established in the open file description associated with fildes. For pread() and pread64(), the specified offset would cause a read beyond the 2 GB boundary.
- EWOULDBLOCK The O_NONBLOCK flag is set for the file descriptor and the process would be delayed.

CONFORMANCE
POSIX.1 (1996), with exceptions. UNIX 03, with exceptions.

MULTITHREAD SAFETY LEVEL
Async-signal-safe.

PORTING ISSUES
Refer to File Management in the Windows Concepts chapter of the PTC MKS Toolkit UNIX to Windows Porting Guide for a detailed discussion of file handling, including a discussion of text mode and binary mode for files.
The test which causes the EOVERFLOW error condition to be returned can be disabled by using the While the UNIX 03 specification states that Additionally, although the UNIX 03 specification states that using AVAILABILITY PTC MKS Toolkit for Professional Developers PTC MKS Toolkit for Enterprise Developers PTC MKS Toolkit for Enterprise Developers 64-Bit Edition SEE ALSO - Functions: creat(), dup(), dup2(), fcntl(), ioctl(), lseek(), open(), pipe(), readdir(), readv(), socket(), write(), writev() PTC MKS Toolkit 9.6 patch 1 Documentation Build 5.
Lesson 19. LED BAR + MIC. LightMusic

The purpose of this lesson

Hi! Today we will learn how to light the multi-color LED panels (present only on the M5Stack FIRE), driving them from the frequency components of the audio signal received from the built-in (M5Stack FIRE only) microphone.

Figure 1

This lesson will teach you how to connect and use third-party libraries to work with the fast Fourier transform (FFT) and SK6818 LEDs.

Short help

Color music is an art form based on a person's ability to associate sound sensations with light perceptions; in neurology this phenomenon is called synesthesia. Light music as an art is a derivative of music and is an integral part of it. Its purpose is to reveal the essence of music through visual perception. The main purpose of light music as an art is to study a person's ability to experience the sensations suggested by light images when accompanied by music. Music lovers have long noticed that musical instruments sound much louder and clearer in a well-lit room than in a darkened one. Therefore, when serious music is performed, the light in the hall is usually not extinguished. The connection between hearing and vision was first shown very convincingly by the Russian physicist and physiologist, academician P. P. Lazarev. Details on Wiki: Светомузыка

List of components for the lesson

- PC/MAC;
- M5STACK FIRE;
- USB-C cable from the standard set.

Let's start!

Step 1. Installing the LED bar library

Go to the Led bar library link, open the Download section, and download the example and library files (Fig. 2).

Figure 2

Next, extract the archive into a new sketch folder and delete the file demo1.ino (Fig. 3).

Figure 3

Step 2. Installing the FFT library

Go to the Arduino Library FFT link, open the Download section, and download the library files (Fig. 4).

Figure 4

From the extracted contents, copy the folder to C:\Users\USER_NAME\Documents\Arduino\libraries and rename arduinoFFT-master to arduinoFFT (Fig. 5).
Figure 5

Great! That's all for the libraries :)

Step 3. Writing the sketch

Create a new sketch in the Arduino development environment and save it in the folder where the library files from step 1 are located (Fig. 6).

Figure 6

Immediately include the necessary libraries and create the necessary variables:

#include <M5Stack.h>
#include "arduinoFFT.h"
#include "esp32_digital_led_lib.h"

arduinoFFT FFT = arduinoFFT();

#define SAMPLES 256              // Must be a power of 2
#define SAMPLING_FREQUENCY 10000 // Hz, must be 10000 or less due to ADC conversion time. Determines the maximum frequency that can be analysed by the FFT.
#define amplitude 50

unsigned int sampling_period_us;
unsigned long microseconds;
byte peak[] = {0, 0, 0, 0, 0, 0, 0};
double vReal[SAMPLES];
double vImag[SAMPLES];
unsigned long newTime, oldTime;

strand_t m_sLeds = {.rmtChannel = 0, .gpioNum = 15, .ledType = LED_WS2812B_V3, .brightLimit = 32, .numPixels = 10, .pixels = nullptr, ._stateVars = nullptr};

For simplified access to the LED bar, I suggest using the function void ledBar(int R, int G, int B, int M), where int R, int G, int B are the brightness of RED, GREEN and BLUE respectively (from 0 to 255), and int M is the mode: 0 to 9 selects a single LED by number, 10 lights all LEDs of the left panel, 11 all LEDs of the right panel, and 12 all LEDs. The LEDs are arranged clockwise (Fig. 7).

Figure 7 Figure 7.1

void ledBar(int R, int G, int B, int M)
{
  if ((M < 0) || (M > 12)) return;
  if (M == 11) // right
  {
    for (int i = 0; i < 5; i++)
    {
      m_sLeds.pixels[i] = pixelFromRGBW(R, G, B, 0);
    }
  }
  else if (M == 10) // left
  {
    for (int i = 5; i < 10; i++)
    {
      m_sLeds.pixels[i] = pixelFromRGBW(R, G, B, 0);
    }
  }
  else if (M == 12) // all
  {
    for (int i = 0; i < 10; i++)
    {
      m_sLeds.pixels[i] = pixelFromRGBW(R, G, B, 0);
    }
  }
  else
  {
    m_sLeds.pixels[M] = pixelFromRGBW(R, G, B, 0);
  }
  digitalLeds_updatePixels(&m_sLeds);
}

The main part. Do not forget to call dacWrite(25, 0) so that the speaker does not make strange sounds, and add the following code.
void setup()
{
  M5.begin();
  pinMode(25, OUTPUT);
  pinMode(34, INPUT);
  sampling_period_us = round(1000000 * (1.0 / SAMPLING_FREQUENCY));
  pinMode(15, OUTPUT);
  digitalWrite(15, LOW);
  if (digitalLeds_initStrands(&m_sLeds, 1))
  {
    Serial.println("Can't init LED driver().");
  }
  digitalLeds_resetPixels(&m_sLeds);
}

Note that pin 34 (pinMode(34, INPUT)) is the analog input to which the built-in microphone is connected through the amplifier (Fig. 8), so we set it to INPUT.

Figure 8

void loop()
{
  for (int i = 0; i < SAMPLES; i++)
  {
    newTime = micros();
    vReal[i] = analogRead(34); // A conversion takes about 1 mS
    vImag[i] = 0;
    while (micros() < (newTime + sampling_period_us)); // busy-wait to hold the sampling rate
  }
  FFT.Windowing(vReal, SAMPLES, FFT_WIN_TYP_HAMMING, FFT_FORWARD);
  FFT.Compute(vReal, vImag, SAMPLES, FFT_FORWARD);
  FFT.ComplexToMagnitude(vReal, vImag, SAMPLES);
  dacWrite(25, 0);
  // Don't use sample 0, and only the first SAMPLES/2 are usable.
  // Each array element represents a frequency, and its value the amplitude.
  for (int i = 2; i < (SAMPLES / 2); i++)
  {
    if (vReal[i] > 200) // Add a crude noise filter
    {
      if (i <= 5)               displayBand(0, (int)vReal[i] / amplitude); // 125Hz
      if (i > 5   && i <= 12)   displayBand(1, (int)vReal[i] / amplitude); // 250Hz
      if (i > 12  && i <= 32)   displayBand(2, (int)vReal[i] / amplitude); // 500Hz
      if (i > 32  && i <= 62)   displayBand(3, (int)vReal[i] / amplitude); // 1000Hz
      if (i > 62  && i <= 105)  displayBand(4, (int)vReal[i] / amplitude); // 2000Hz
      if (i > 105 && i <= 120)  displayBand(5, (int)vReal[i] / amplitude); // 4000Hz
      if (i > 120 && i <= 146)  displayBand(6, (int)vReal[i] / amplitude); // 8000Hz
    }
  }
}

The LEDs are lit by the function void displayBand(int band, int dsize), where band is the index of the frequency band in the signal and dsize is the amplitude in that band. You can safely experiment here to achieve the best flashes.
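For reference, FFT output bin i represents a frequency of about i * SAMPLING_FREQUENCY / SAMPLES Hz, which works out to roughly 39 Hz per bin with the values above. The band boundaries in the loop's if-chain can be sanity-checked with a quick calculation (sketched here in JavaScript purely for the arithmetic; this is not part of the Arduino sketch):

```javascript
// Frequency represented by FFT bin i, given the sampling setup in the sketch.
const SAMPLES = 256;
const SAMPLING_FREQUENCY = 10000; // Hz

const binFrequency = (i) => i * SAMPLING_FREQUENCY / SAMPLES;

// Bin 2 is the lowest bin the loop inspects; bin 127 is the highest usable one.
console.log(binFrequency(2));   // 78.125 Hz
console.log(binFrequency(5));   // 195.3125 Hz, the upper edge of the first band
console.log(binFrequency(127)); // 4960.9375 Hz
```

This also shows that the Hz labels in the comments (125Hz, 250Hz, and so on) are approximate names for the bands rather than exact bin frequencies.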
void displayBand(int band, int dsize)
{
  if (band > 1) dsize += 100;
  if (dsize >= 150)
  {
    ledBar(0, 0, 0, 12);
    if (band == 0)
    {
      if (dsize >= 300)
      {
        ledBar(255, 0, 0, 0);
        ledBar(255, 0, 0, 1);
      }
    }
    else if (band == 1)
    {
      if (dsize >= 180)
      {
        ledBar(255, 255, 0, 3);
        ledBar(255, 255, 0, 4);
      }
    }
    else if (band == 2)
    {
      if (dsize >= 170)
      {
        ledBar(0, 255, 0, 5);
        ledBar(0, 255, 0, 6);
      }
    }
    else
    {
      ledBar(0, 0, 255, 8);
      ledBar(0, 0, 255, 9);
    }
  }
}

Final step

That's all :)

Downloads

- Library LED BAR (GitHub):
- Arduino FFT library (GitHub):
https://forum.m5stack.com/topic/402/lesson-19-led-bar-mic-lightmusic
Setting up an ES6 Project Using Babel and webpack

In this article, we're going to look at creating a build setup for handling modern JavaScript (running in web browsers) using Babel and webpack. This is needed to ensure that our modern JavaScript code is made compatible with a wider range of browsers than it might otherwise be.

JavaScript, like most web-related technologies, is evolving all the time. In the good old days, we could drop a couple of <script> tags into a page, maybe include jQuery and a couple of plugins, then be good to go. However, since the introduction of ES6, things have got progressively more complicated. Browser support for newer language features is often patchy, and as JavaScript apps become more ambitious, developers are starting to use modules to organize their code. In turn, this means that if you're writing modern JavaScript today, you'll need to introduce a build step into your process. As you can see from the links beneath, converting down from ES6 to ES5 dramatically increases the number of browsers that we can support.

The purpose of a build system is to automate the workflow needed to get our code ready for browsers and production. This may include steps such as transpiling code to a different standard, compiling Sass to CSS, bundling files, minifying and compressing code, and many others. To ensure these are consistently repeatable, a build system is needed to initiate the steps in a known sequence from a single command.

Prerequisites

In order to follow along, you'll need to have both Node.js and npm installed (they come packaged together). I would recommend using a version manager such as nvm to manage your Node installation (here's how), and if you'd like some help getting to grips with npm, then check out SitePoint's beginner-friendly npm tutorial.

Set Up

Create a root folder somewhere on your computer and navigate into it from your terminal/command line. This will be your <ROOT> folder.
Create a package.json file with this:

npm init -y

Note: The -y flag creates the file with default settings, and means you don't need to complete any of the usual details from the command line. They can be changed in your code editor later if you wish.

Within your <ROOT> folder, make the directories src, src/js, and public. The src/js folder will be where we'll put our unprocessed source code, and the public folder will be where the transpiled code will end up.

Transpiling with Babel

To get ourselves going, we're going to install babel-cli, which provides the ability to transpile ES6 into ES5, and babel-preset-env, which allows us to target specific browser versions with the transpiled code.

npm install babel-cli babel-preset-env --save-dev

You should now see the following in your package.json:

"devDependencies": {
  "babel-cli": "^6.26.0",
  "babel-preset-env": "^1.6.1"
}

Whilst we're in the package.json file, let's change the scripts section to read like this:

"scripts": {
  "build": "babel src -d public"
},

This gives us the ability to call Babel via a script, rather than directly from the terminal every time. If you'd like to find out more about npm scripts and what they can do, check out this SitePoint tutorial.

Lastly, before we can test out whether Babel is doing its thing, we need to create a .babelrc configuration file. This is what our babel-preset-env package will refer to for its transpile parameters. Create a new file in your <ROOT> directory called .babelrc and paste the following into it:

{
  "presets": [
    [
      "env",
      {
        "targets": {
          "browsers": ["last 2 versions", "safari >= 7"]
        }
      }
    ]
  ]
}

This will set up Babel to transpile for the last two versions of each browser, plus Safari at v7 or higher. Other options are available depending on which browsers you need to support.

With that saved, we can now test things out with a sample JavaScript file that uses ES6.
For the purposes of this article, I've modified a copy of leftpad to use ES6 syntax in a number of places: template literals, arrow functions, const and let.

"use strict";

function leftPad(str, len, ch) {
  const cache = ["", " ", "  ", "   ", "    ", "     ", "      ", "       ", "        ", "         "];
  str = str + "";
  len = len - str.length;
  if (len <= 0) return str;
  if (!ch && ch !== 0) ch = " ";
  ch = ch + "";
  if (ch === " " && len < 10) return (() => cache[len] + str)();
  let pad = "";
  while (true) {
    if (len & 1) pad += ch;
    len >>= 1;
    if (len) ch += ch;
    else break;
  }
  return `${pad}${str}`;
}

Save this as src/js/leftpad.js and from your terminal run the following:

npm run build

If all is as intended, in your public folder you should now find a new file called js/leftpad.js. If you open that up, you'll find it no longer contains any ES6 syntax and looks like this:

"use strict";

function leftPad(str, len, ch) {
  var cache = ["", " ", "  ", "   ", "    ", "     ", "      ", "       ", "        ", "         "];
  str = str + "";
  len = len - str.length;
  if (len <= 0) return str;
  if (!ch && ch !== 0) ch = " ";
  ch = ch + "";
  if (ch === " " && len < 10) return (function () {
    return cache[len] + str;
  })();
  var pad = "";
  while (true) {
    if (len & 1) pad += ch;
    len >>= 1;
    if (len) ch += ch;else break;
  }
  return "" + pad + str;
}

Organizing Your Code with ES6 Modules

An ES6 module is a JavaScript file containing functions, objects or primitive values you wish to make available to another JavaScript file. You export from one, and import into the other. Any serious modern JavaScript project should consider using modules. They allow you to break your code into self-contained units and thereby make things easier to maintain; they help you avoid namespace pollution; and they help make your code more portable and reusable.

Whilst the majority of ES6 syntax is widely available in modern browsers, this isn't yet the case with modules.
At the time of writing, they're available in Chrome, Safari (including the latest iOS version) and Edge; they're hidden behind a flag in Firefox and Opera; and they're not available (and likely never will be) in IE11, nor most mobile devices. In the next section, we'll look at how we can integrate modules into our build setup.

Export

The export keyword is what allows us to make our ES6 modules available to other files, and it gives us two options for doing so — named and default. With the named export, you can have multiple exports per module, and with a default export you only have one per module. Named exports are particularly useful where you need to export several values. For example, you may have a module containing a number of utility functions that need to be made available in various places within your apps.

So let's turn our leftPad file into a module, which we can then require in a second file.

Named Export

To create a named export, add the following to the bottom of the leftPad file:

export { leftPad };

We can also remove the "use strict"; declaration from the top of the file, as modules run in strict mode by default.

Default Export

As there's only a single function to be exported in the leftPad file, it might actually be a good candidate for using export default instead:

export default function leftPad(str, len, ch) {
  ...
}

Again, you can remove the "use strict"; declaration from the top of the file.

Import

To make use of exported modules, we now need to import them into the file (module) we wish to use them in. For the export default option, the exported module can be imported under any name you wish to choose.
For example, the leftPad module can be imported like so:

import leftPad from './leftpad';

Or it could be imported under another name, like so:

import pineapple_fritter from './leftpad';

Functionally, both will work exactly the same, but it obviously makes sense to use either the same name as it was exported under, or something that makes the import understandable — perhaps where the exported name would clash with another variable name that already exists in the receiving module.

For the named export option, we must import the module using the same name as it was exported under. For our example module, we'd import it in a similar manner to that we used with the export default syntax, but in this case, we must wrap the imported name with curly braces:

import { leftPad } from './leftpad';

The braces are mandatory with a named export, and the import will fail if they aren't used.

It's possible to change the name of a named export on import if needed, and to do so, we modify our syntax a little using the import { originalName as newName } form. As with export, there's a variety of ways to do this, all of which are detailed on the MDN import page.

import { leftPad as pineapple_fritter } from './leftpad';

Again, the name change is a little nonsensical, but it illustrates the point that names can be changed to anything. You should keep to good naming practices at all times — unless, of course, you're writing routines for preparing fruit-based recipes.

Consuming the Exported Module

To make use of the exported leftPad module, I've created the following index.js file in the src/js folder. Here, I loop through an array of serial numbers and prefix them with zeros to make them into eight-character strings. Later on, we'll make use of this and post them out to an ordered list element on an HTML page.
Note that this example uses the default export syntax:

import leftPad from './leftpad';

const serNos = [6934, 23111, 23114, 1001, 211161];
const strSNos = serNos.map(sn => leftPad(sn, 8, '0'));
console.log(strSNos);

As we did earlier, run the build script from the <ROOT> directory:

npm run build

Babel will now create an index.js file in the public/js directory. As with our leftPad.js file, you should see that Babel has replaced all of the ES6 syntax and left behind only ES5 syntax. You might also notice that it has converted the ES6 module syntax to the Node-based module.exports, meaning we can run it from the command line:

node public/js/index.js
// [ '00006934', '00023111', '00023114', '00001001', '00211161' ]

Your terminal should now log out an array of strings prefixed with zeros to make them all eight characters long. With that done, it's time to take a look at webpack.

Introducing webpack and Integrating it with Babel

As mentioned, ES6 modules allow the JavaScript developer to break their code up into manageable chunks, but the consequence of this is that those chunks have to be served up to the requesting browser, potentially adding dozens of additional HTTP requests back to the server — something we really ought to be looking to avoid. This is where webpack comes in.

webpack is a module bundler. Its primary purpose is to process your application by tracking down all its dependencies, then package them all up into one or more bundles that can be run in the browser. However, it can be far more than that, depending upon how it's configured.

webpack configuration is based around four key components:

- an entry point
- an output location
- loaders
- plugins

Entry: This holds the start point of your application from where webpack can identify its dependencies.

Output: This specifies where you would like the processed bundle to be saved.

Loaders: These are a way of converting one thing as an input and generating something else as an output.
They can be used to extend webpack's capabilities to handle more than just JavaScript files, and therefore convert those into valid modules as well.

Plugins: These are used to extend webpack's capabilities into other tasks beyond bundling — such as minification, linting and optimization.

To install webpack, run the following from your <ROOT> directory:

npm install webpack webpack-cli --save-dev

This installs webpack locally to the project, and also gives the ability to run webpack from the command line through the addition of webpack-cli. You should now see webpack listed in your package.json file. Whilst you're in that file, modify the scripts section as follows, so that it now knows to use webpack instead of Babel directly:

"scripts": {
  "build": "webpack --config webpack.config.js"
},

As you can see, this script is calling on a webpack.config.js file, so let's create that in our <ROOT> directory with the following content:

const path = require("path");

module.exports = {
  mode: 'development',
  entry: "./src/js/index.js",
  output: {
    path: path.resolve(__dirname, "public"),
    filename: "bundle.js"
  }
};

This is more or less the simplest config file you need with webpack. You can see that it uses the entry and output sections described earlier (it could function with these alone), but also contains a mode: 'development' setting.

webpack has the option of using either “development” or “production” modes. Setting mode: 'development' optimizes for build speed and debugging, whereas mode: 'production' optimizes for execution speed at runtime and output file size. There's a good explanation of modes in Tobias Koppers' article “webpack 4: mode and optimization”, should you wish to read more on how they can be configured beyond the default settings.

Next, remove any files from the public/js folder. Then rerun this:

npm run build

You'll see that it now contains a single ./public/bundle.js file. Open up the new file, though, and the two files we started with look rather different.
This is the section of the file that contains the index.js code. Even though it's quite heavily modified from our original, you can still pick out its variable names:

/***/ "./src/js/index.js":
/*!*************************!*\
  !*** ./src/js/index.js ***!
  \*************************/
/*! no exports provided */
/***/ (function(module, __webpack_exports__, __webpack_require__) {

"use strict";
eval("__webpack_require__.r(__webpack_exports__);\n/* harmony import */ var _leftpad__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ./leftpad */ \"./src/js/leftpad.js\");\n\n\nconst serNos = [6934, 23111, 23114, 1001, 211161];\nconst strSNos = serNos.map(sn => Object(_leftpad__WEBPACK_IMPORTED_MODULE_0__[\"default\"])(sn, 8, '0'));\nconsole.log(strSNos);\n\n\n//# sourceURL=webpack:///./src/js/index.js?");

/***/ }),

If you run node public/bundle.js from the <ROOT> folder, you'll see you get the same results as we had previously.

Transpiling

As mentioned earlier, loaders allow us to convert one thing into something else. In this case, we want ES6 converted into ES5. To do that, we'll need a couple more packages:

npm install babel-loader babel-core --save-dev

To utilize them, the webpack.config.js needs a module section added to it after the output section, like so:

module.exports = {
  entry: "./src/js/index.js",
  output: {
    path: path.resolve(__dirname, "public/js"),
    filename: "bundle.js"
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /(node_modules)/,
        use: {
          loader: "babel-loader",
          options: {
            presets: ["babel-preset-env"]
          }
        }
      }
    ]
  }
};

This uses a regex statement to identify the JavaScript files to be transpiled with the babel-loader, whilst excluding anything in the node_modules folder. Lastly, the babel-loader is told to use the babel-preset-env package installed earlier, to establish the transpile parameters set in the .babelrc file.
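The test and exclude fields are ordinary JavaScript regular expressions, so you can check for yourself which file paths a rule will pick up. A quick illustration (the paths here are made-up examples, not part of the project):

```javascript
// How the rule's patterns decide whether a file gets handed to babel-loader.
const test = /\.js$/;
const exclude = /(node_modules)/;

const shouldTranspile = (path) => test.test(path) && !exclude.test(path);

console.log(shouldTranspile("src/js/index.js"));               // true
console.log(shouldTranspile("node_modules/leftpad/index.js")); // false, excluded
console.log(shouldTranspile("src/styles/main.css"));           // false, not a .js file
```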
With that done, you can rerun this:

npm run build

Then check the new public/js/bundle.js and you'll see that all traces of ES6 syntax have gone, but it still produces the same output as previously.

Bringing It to the Browser

Having built a functioning webpack and Babel setup, it's time to bring what we've done to the browser. A small HTML file is needed, and it should be created in the <ROOT> folder as below:

<!DOCTYPE html>
<html>
  <head lang="en">
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Webpack & Babel Demonstration</title>
  </head>
  <body>
    <main>
      <h1>Parts List</h1>
      <ol id="part-list"></ol>
    </main>
    <script src="./public/js/bundle.js" charset="utf-8"></script>
  </body>
</html>

There's nothing complicated in it. The main points to note are the <ol></ol> element, where the array of numbers will go, and the <script></script> element just before the closing </body> tag, linking back to the ./public/js/bundle.js file.

So far, so good. A little more JavaScript is needed to display the list, so let's alter ./src/js/index.js to make that happen:

import leftPad from './leftpad';

const serNos = [6934, 23111, 23114, 1001, 211161];
const partEl = document.getElementById('part-list');
const strList = serNos.reduce(
  (acc, element) => acc += `<li>${leftPad(element, 8, '0')}</li>`,
  ''
);
partEl.innerHTML = strList;

Now, if you open index.html in your browser, you should see an ordered list appear.

Taking it Further

As configured above, our build system is pretty much ready to go. We can now use webpack to bundle our modules and transpile ES6 code down to ES5 with Babel. However, it's a bit of a niggle that, to transpile our ES6 code, we have to run npm run build every time we make a change.
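Before adding a watch, it's worth noting that the string built by the reduce above can be exercised in plain Node, without a browser or the bundle. The snippet below inlines a simplified leftPad (covering only the zero-padding case this app uses) so it is self-contained:

```javascript
// Self-contained check of the list markup produced by the reduce in index.js.
// This simplified leftPad only handles the padding case used by the app.
const leftPad = (str, len, ch) => {
  str = String(str);
  while (str.length < len) str = ch + str;
  return str;
};

const serNos = [6934, 23111, 23114, 1001, 211161];
const strList = serNos.reduce(
  (acc, element) => acc += `<li>${leftPad(element, 8, '0')}</li>`,
  ''
);

console.log(strList);
// <li>00006934</li><li>00023111</li><li>00023114</li><li>00001001</li><li>00211161</li>
```

In the browser, that same string is what gets assigned to partEl.innerHTML.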
Adding a 'watch'

To overcome the need to repeatedly run npm run build, you can set up a 'watch' on your files and have webpack recompile automatically every time it sees a change in one of the files in the ./src folder. To implement that, modify the scripts section of the package.json file, as below:

"scripts": {
  "watch": "webpack --watch",
  "build": "webpack --config webpack.config.js"
},

To check that it's working, run npm run watch from the terminal, and you'll see that it no longer returns to the command prompt. Now go back to src/js/index.js, add an extra value into the serNos array, and save it. Mine now looks like this:

const serNos = [6934, 23111, 23114, 1001, 211161, 'abc'];

If you now check the terminal, you'll see that it's logged out, and that it has re-run the webpack build task. And on going back to the browser and refreshing, you'll see the new value added to the end of the list, having been processed with leftPad.

Refresh the Browser Automatically

It would be really good now if we could get webpack to refresh the browser automatically every time we make a change. Let's do that by installing an additional npm package called webpack-dev-server. Don't forget to Ctrl + C out of the watch task first, though!

npm install webpack-dev-server --save-dev

With that done, let's add a new script to the package.json file to call the new package. The scripts section should now contain this:

"scripts": {
  "watch": "webpack --watch",
  "start": "webpack --watch & webpack-dev-server --open-page 'webpack-dev-server'",
  "build": "webpack --config webpack.config.js"
},

Notice the --open-page flag added to the end of the script. This tells webpack-dev-server to open a specific page in your default browser using its iframe mode. Now run npm start, and a new browser tab should open with the parts list displayed. To show that the 'watch' is working, go to src/js/index.js and add another new value to the end of the serNos array.
When you save your changes, you should notice them reflected almost immediately in the browser.

With this complete, the only thing remaining is for the mode in webpack.config.js to be set to production. Once that is set, webpack will also minify the code it outputs into ./public/js/bundle.js. You should note that if the mode is not set, webpack will default to using the production config.

Conclusion

In this article, you've seen how to set up a build system for modern JavaScript. Initially, this used Babel from the command line to convert ES6 syntax down to ES5. You've then seen how to make use of ES6 modules with the export and import keywords, how to integrate webpack to perform a bundling task, and how to add a watch task to automate running webpack each time changes to a source file are detected. Finally, you've seen how to install webpack-dev-server to refresh the page automatically every time a change is made.

Should you wish to take this further, I'd suggest reading SitePoint's deep dive into webpack and module bundling, as well as researching additional loaders and plugins that will allow webpack to handle Sass and asset compression tasks. Also look at the eslint-loader and the plugin for Prettier too.

Happy bundling …
https://www.sitepoint.com/es6-babel-webpack/?utm_source=rss
In my previous blog post, SAP Fiori Tools – SAPUI5 Freestyle App, I've shown how to set up a SAPUI5 Freestyle App using SAP Fiori Tools. After you have finished building and testing your UI5 app, the next logical thing to do is prepare it for deployment. In this blog post, I will show how to deploy a SAPUI5 App to the ABAP server using SAP Fiori Tools.

Prerequisites

- SAP Business Application Studio
- Visual Studio Code with the SAP Fiori Tools extension installed
- ABAP server (in this demo, I'm using S/4HANA)
- The OData Service to Load Data to the SAPUI5 ABAP Repository is activated

UI5 Tooling

As I've already mentioned in my previous blog post, UI5 Tooling is one of the many tools behind SAP Fiori Tools. UI5 Tooling is also responsible for deploying the UI5 App to multiple target systems, including the ABAP server.

If you are still using the SAPUI5 Tools for Eclipse, then you should read this blog post — SAPUI5 Tools for Eclipse – Now is the Time to Look for Alternatives. In short, SAPUI5 Tools for Eclipse has been officially retired. And while SAP WebIDE can still be used to deploy a UI5 App to the ABAP server, SAP has been promoting the next-generation IDE, SAP Business Application Studio, and therefore its use for deploying UI5 Apps to ABAP. And from SAP Business Application Studio, the tool that is used to deploy the UI5 app is the UI5 Tooling.

To sum it up, we are already in the 3rd generation of SAPUI5 tools in the form of UI5 Tooling. Through this tool, you can work with your preferred development IDE. While it opens up the possibility to be used with the IDE of your choosing, it may not be so straightforward to use for those who are very used to SAP WebIDE. Hence, I've created this blog post to show how it can be used to deploy your UI5 App to the ABAP server.

Deploy to ABAP server

The base project for this demo is the solution from my previous blog post — SAP Fiori Tools – SAPUI5 Freestyle App.
If you followed through the demo in that blog post, then you're good to proceed with the steps detailed here. In case you didn't build the solution, you can find it at the link below:

However, you need to make sure that you have installed the npm modules if you are using Visual Studio Code.

Now, with the base project ready, let's start the deployment process.

1. Inspect the scripts inside the package.json file. The initial state of package.json has the deploy script looking like below:

"deploy": "fiori add deploy-config"

As you can see, if you run the deploy script, it will execute the command and generate the deploy config.

2. Execute the deploy script on the command line:

> npm run deploy

You will be prompted to answer information like the ABAP package and transport request number — see the screenshot below for more details:

Notice that the deploy script in package.json has been updated with:

"deploy": "ui5 build preload --config ui5-deploy.yaml"

Also, a ui5-deploy.yaml file has been created. This serves as the deployment descriptor for the UI5 app.

specVersion: "1.0"
metadata:
  name: "gwsample"
type: application
ui5Theme: sap_fiori_3
builder:
  customTasks:
    - name: deploy-to-abap
      afterTask: generateVersionInfo
      configuration:
        target:
          url:
        app:
          name: gwsample
          package: Z_DEMO_UI5_TOOLING
          transport: <your.transport.request>

3. Update the deployment descriptor, the ui5-deploy.yaml file, with the configuration for your server credentials. Also make sure that the name of the app starts with Z or Y for the customer namespace (every ABAPer should already know this).
specVersion: "1.0"
metadata:
  name: "gwsample"
type: application
ui5Theme: sap_fiori_3
builder:
  customTasks:
    - name: deploy-to-abap
      afterTask: generateVersionInfo
      configuration:
        target:
          url: http://<your server hostname>:<port> # <-- modify this
          client: <development client> # <-- modify this
          auth: basic
        credentials:
          username: env:UI5_USERNAME
          password: env:UI5_PASSWORD
        app:
          name: zgwsample # <-- modify this
          package: Z_DEMO_UI5_TOOLING
          transport: <your transport request from the previous step>

You can follow through the changes from the template above, but keep the credentials section unchanged, because we will define UI5_USERNAME and UI5_PASSWORD as environment variables in the next step. We are doing this because you don't want your username and password to be part of the project files that you commit to git.

4. Create the .env file that will contain the values of the environment variables UI5_USERNAME and UI5_PASSWORD. The file should have the entries below:

UI5_USERNAME=<your username>
UI5_PASSWORD=<your password>

5. Now that the deployment setup is complete, it's time to execute the deploy command again:

> npm run deploy

If everything executed smoothly, you will get a series of logs printed in the terminal, ending with the following messages:

info builder:custom deploy-to-abap Deployment Successful.
info builder:builder Build succeeded in 20 s
info builder:builder Executing cleanup tasks...

Note that before the actual deployment, the tool builds the UI5 project first and does the minification of the UI5 files and the generation of the *preload.js files. So the tool is already taking care of all the best-practice build steps before the actual deployment.

Test the Deployed UI5 App

1. Go to the SICF tcode and search for your app by using the app name as the value for the service name. Once found, test the app by right-clicking on the UI5 app node and clicking on test service. This will launch the URL of your UI5 app, and you should see the results below:

That's it!
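A side note on the credentials: values prefixed with env: in ui5-deploy.yaml are resolved against environment variables, which the .env file populates. Conceptually, the lookup behaves like the sketch below. This is an illustration of the mechanism only, not the actual UI5 Tooling implementation:

```javascript
// Hypothetical sketch of resolving `env:`-prefixed credential values.
// Not the real ui5-tooling code, just an illustration of the idea.
function resolveCredential(value, env) {
  if (typeof value === "string" && value.startsWith("env:")) {
    return env[value.slice(4)]; // strip the "env:" prefix and look it up
  }
  return value; // plain values pass through unchanged
}

// Simulated environment, as the .env file would provide it:
const env = { UI5_USERNAME: "developer", UI5_PASSWORD: "secret" };

console.log(resolveCredential("env:UI5_USERNAME", env)); // developer
console.log(resolveCredential("plain-value", env));      // plain-value
```

This is why the deployment works without the username and password ever appearing in the committed project files.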
You've deployed a SAPUI5 App using SAP Fiori Tools / UI5 Tooling!

Closing

Now you see how easy it is to deploy a SAPUI5 App using the new UI5 Tooling. Hopefully, this blog post can help you with transitioning from your current preferred tool to this new SAPUI5 tooling. The beauty of this tooling is its openness, because you can use it with your preferred IDE. And to further ease your transition, I suggest you gain a basic understanding of the Node.js runtime (if you haven't acquired this yet).

~~~~~~~~~~~~~~~~

I'd appreciate any comments, suggestions, or questions. Cheers!~

Hello Jhodel Cailan, thanks for sharing. While deploying a WebIDE development to an ABAP backend with report /UI5/UI5_REPOSITORY_LOAD, I experienced 2 issues: JNN

Hi Jacques Nomssi Nzali, thanks for the comment! However, your issue is a bit off-topic. Kindly post your issue in the Question and Answer section; from there, somebody might be able to help you with it. Thanks!

Hello Jhodel, sorry, the question I have is: how long does it take for a project with, say, 70 MB to deploy? What is your experience with the UI5 tooling? Best regards, JNN

Hi Jacques Nomssi Nzali, in that case, for 14 MB of project files, it takes roughly 10 seconds (or less) to deploy using UI5 Tooling. And so far, I haven't come across a single UI5 app that is more than 30 MB in build files. Nevertheless, I don't think there will be any issues for your example of 70 MB.

Hi Jhodel, thank you for your blog post. To me it's good to know that I can execute Fiori apps from SICF, without configuring the launchpad. Best regards, Mio

Hi Mio Yasutake, yes — as long as you have an HTML file included in the deployed app, you can run it in standalone mode.

The one blog I was waiting for… I had issues deploying into the ABAP server and even created a discussion about it… let me try this… Thanks Jhodel

Hi Vishnu Pankajakshan Panicker, I hope this blog post can help you fix your issue.
Can u have a look at below thread Hi Vishnu Pankajakshan Panicker Are you sure this is a deployment issue? The command npm start is to start running your app locally. And this command will refer to your ui5.yaml config. From there you can set the ignoreCertError to true. Jhodel Check step 5 in below tutorial Vishnu Pankajakshan Panicker Can you be more specific? Step 5 says npm start Hi, Can we create a adaptation or extension project using the Fiori tools? Hi udayagiri swamy As far as I know, it is not available yet, but my assumption is that it will be available soon, if not Q3 then Q4 this year. Hi, I tried with the given steps. I am getting the below error info builder:custom deploy-to-abap Deployment failed with error Request failed with status code 403bap… And another error after replacing the ui5-deploy.yaml with another ABAP system info builder:custom deploy-to-abap Deployment failed with error self signed certificate deploy-to-abap.. I changed the Please help me to resolve the issue Thanks Hi udayagiri swamy Where did you put that ignoreCertError? at the ui5-deploy.yaml? So far I don’t any support for that function on deploy configuration, and unfortunately, I can’t replicate the same issue you have in my system. in the ui5.yaml. I got this error (self signed certificate ) when i use https protocol. When I use http, below error occured. info builder:custom deploy-to-abap Deployment failed with error Request failed with status code 403bap… I had the same error while using http: By debugging into node_module code, I found related URL: By searching for ABAP_REPOSITORY_SRV, it can be resolved by register service in T-code: /n/iwfnd/maint_service as blog post below: Then the error changed, I got following 400 error: By search Virus in SPRO, it can resolved by disable scan or activating /SCMS/KPRO_CREATE profile. 
Then I still face 400 error, with detailed but useless message: I know ADT is mandatory dev service but not sure, then I try activating service /sap/bc/adt in T-code sicf Then the deployment failed with 500 error at the first time, but will success in the second time. Finally the I can deploy my app successfully using Fiori Tools by command “npm run deploy”
https://blogs.sap.com/2020/08/10/sap-fiori-tools-deploy-to-abap-server/
Hi all, I wonder if anyone could point me in the right direction... I can successfully create a WebView object, and get it to load a url from the www. But the documentation specifically says:

Note: The WebView does not support loading content through the Qt Resource system.

WebView QML Type | Qt WebView 5.9

I am wanting to compose, view, load, edit and save an html file locally, but this suggests to me I can't do it using the WebView object. I can read the html file as text, and load it with the loadHtml() method, but it looks like the 'baseUrl' it expects is an online resource, and I can't get it to display any resources (such as image files). How can I interact with a local html file?

import QtWebView 1.1

App {
    id: app
    width: 500; height: 500

    WebView {
        id: mywebview
        anchors.fill: parent
    }
}

Check the Quick Report template. It supports loading of the webpage for help, both local and remote. Here is the location of the code that deals with loading local HTML files: QuickReport/controls/WebPage.qml
https://community.esri.com/thread/196501-webview-that-displays-a-local-html-file
Pass variable into Laravel {{ route }} helper

I have routes for /admin/login and also for /user/login. I'm looking for both of the views to share the same layout file, and so I'm looking to pass the first segment (admin or user) into the route helper. Is that possible?

So effectively, I'd be looking to do something like:

{{ route($thisIsDynamic.'.login') }}

where $thisIsDynamic would either be admin or user depending on the URL.

3 answers

- answered 2018-02-13 02:55 Kasnady

route() only accepts a route name. If you want to pass a link, you should use URL:: instead:

{{ URL::to($thisIsDynamic.'login') }}

Refer to this: Laravel blade templates, foreach variable inside URL::to?

- answered 2018-02-13 02:57 Jonjie

Based on my understanding, you may try using a wildcard and passing the data into the URL like so:

routes

Route::get('/login/{type}', function($type){
    return view('login', compact('type'));
});

You can now have access to the $type variable in the view. Now, if you want to access the route, just give it a name and then pass the $type to it.

- answered 2018-02-13 05:47 p01ymath

I don't recommend redirection based on route names when they are dynamic. But if you really want to do that, here is how you do it.

Route::get('admin/login', "SomeController@someMethod")->name('admin-login');
Route::get('user/login', "SomeController@someMethod2")->name('user-login');

And, whenever you want to refer to them by name, you can do this: {{ route("$thisIsDynamic-login") }}. But, as I said, this is not the right way to do this. Here is how you can do it the right way.

Web.php

Route::get('{type}/login', function($type){
    return view('login')->with('type', $type);
});

login.blade.php (Just an example of how you do it)

@extends('layout.you.want.to.extend')
@section('content')
    @if($type == 'user')
        // User login form
    @else
        // Admin login form
    @endif
@endsection

I assume this is what you want to do. Let me know if I misunderstood anything or if you have any more queries.
http://quabr.com/48758688/pass-variable-into-laravel-route-helper
File formats may sound mundane, but they can give strategic value to those who control them as a gateway to the data held by people and companies. --Stephen Shankland Read the rest in Google mapping spec now an industry standard | Tech news blog The OpenOffice Project has posted the first beta of OpenOffice 3.0, an open source office suite for Linux, Solaris, and Windows that saves all its files as zipped XML... The." The W3C XML Core Working Group has published the finished recommendation Canonical XML 1.1. This attempts to address some of the weirdnesses of Canonical XML, such as the movement of xml:id attributes from one element to another and breaking of base URLs when canonicalizing. The W3C XML Processing Model Working Group has published a new Working Draft of XProc: An XML Pipeline Language. According to group lead Norm Walsh, changes in this draft are: - Fairly substantial syntax changes. A <p:pipeline> is now just syntactic sugar for a particular <p:declare-step>. -; you have to declare them explicitly if you need them. - Added p:base-uri() and p:resolve-uri() XPath extension functions to support (XPath 1.0) pipelines that need access to the base URI of documents. - Removed ignored namespaces, added <p:pipeinfo>. - Redefined the <p:label-elements> step to use a step-local variable in the XPath context. - Added psvi-required attribute to pipelines. - Changed definition of <p:error> to better address localization issues. The syntax changes, and making <p:pipeline> syntactic sugar for a particular <p:declare-step>, have the effect of making very simple, straight-through pipelines syntactically simple again. Reorganizing some of the option and parameter elements, and adding a variable element, makes the language bigger (in the sense that it has more elements) but I think it has significantly reduced some of the confusing sublty that used to exist around declaration and use of options. In general, I think these are all changes for the better. 
And I think we're done. This is a Last Call working draft in all but name. The changes are significant enough that we thought it would be best to float them in an ordinary working draft first. That will, I hope, save us the embarrassment of having to do more than two last calls. M. Another day, another WordPress security bug. Matt Mullenweg has released Wordpress 2.5.1 an open source (GPL) blog engine based on PHP and MySQL. All users should upgrade. The." The W3C Web API Working Group has posted the last call working draft of The XMLHttpRequest Object. The XMLHttpRequestobject implements an interface exposed by a scripting engine that allows scripts to perform HTTP client functionality, such as submitting form data or loading data from a server. The name of the object is XMLHttpRequestfor. M. XMLMind has released version 3.8.0 of their XML Editor. This $300 payware product features word processor and spreadsheet like views of XML documents. This release adds support for MathML 2 presentation markup. A free-beer hobbled version is also available. The value MUST be interpreted as being from the XHTML vocabulary at. For a list of all roles in the default vocabulary, see [XHTMLVOC). - banner - A region that contains the prime heading or internal title of a page.. - complementary - Any section of the document that supports but is separable from the main content, but is semantically meaningful on its own even when separated from it.. - contentinfo - Meta information about the content on the page or the page as a whole. For example, footnotes, copyrights, links to privacy statements, etc. would belong here. -?)
http://www.cafeconleche.org/
Log message: regen (for new patch in protobuf) Log message: Adjust DEPENDS to reflect current reality and make it compatible with Python 3 too. Bump PKGREVISION Discussed with <khorben> Log message: Add comments to patches. Log message: Updated py-protobuf to 3.3.0. same as protobuf 3.3.0 Log message: Update *protobuf to 3.2.0: 2017-01-23 version 3.2.0 (C++/Java/Python/PHP/Ruby/Objective-C/C#/JavaScript/Lite) General * Added protoc version number to protoc plugin protocol. It can be used by protoc plugin to detect which version of protoc is used with the plugin and mitigate known problems in certain version of protoc. C++ * The default parsing byte size limit has been raised from 64MB to 2GB. * Added rvalue setters for non-arena string fields. * Enabled debug logging for Android. * Fixed a double-free problem when using Reflection::SetAllocatedMessage() with extension fields. * Fixed several deterministic serialization bugs: * MessageLite::SerializeAsString() now respects the global deterministic serialization flag. * Extension fields are serialized deterministically as well. Fixed protocol compiler to correctly report importing-self as an error. * Fixed FileDescriptor::DebugString() to print custom options correctly. * Various performance/codesize optimizations and cleanups. Java * The default parsing byte size limit has been raised from 64MB to 2GB. * Added recursion limit when parsing JSON. * Fixed a bug that enumType.getDescriptor().getOptions() doesn't have custom options. * Fixed generated code to support field numbers up to 2^29-1. Python * You can now assign NumPy scalars/arrays (np.int32, np.int64) to protobuf fields, and assigning other numeric types has been optimized for performance. * Pure-Python: message types are now garbage-collectable. * Python/C++: a lot of internal cleanup/refactoring. PHP (Alpha) * For 64-bit integers type (int64/uint64/sfixed64/fixed64/sint64), use PHP integer on 64-bit environment and PHP string on 32-bit environment. 
* PHP generated code also conforms to PSR-4 now. * Fixed ZTS build for c extension. * Fixed c extension build on Mac. * Fixed c extension build on 32-bit linux. * Fixed the bug that message without namespace is not found in the descriptor pool. (#2240) * Fixed the bug that repeated field is not iterable in c extension. * Message names Empty will be converted to GPBEmpty in generated code. * Added phpdoc in generated files. * The released API is almost stable. Unless there is large problem, we won't change it. See- … -generated for more details. Objective-C * Added support for push/pop of the stream limit on CodedInputStream for anyone doing manual parsing. C# * No changes. Ruby * Message objects now support #respond_to? for field getters/setters. * You can now compare “message == non_message_object” and it will return false instead of throwing an exception. * JRuby: fixed #hashCode to properly reflect the values in the message. Javascript * Deserialization of repeated fields no longer has quadratic performance behavior. * UTF-8 encoding/decoding now properly supports high codepoints. * Added convenience methods for some well-known types: Any, Struct, and Timestamp. These make it easier to convert data between native JavaScript types and the well-known protobuf types. Log message: Add python-3.6 to incompatible versions. Log message: regen (for new patch added to protobuf) Log message: Updated py-protobuf to 3.1.0. 2016-09-23 version 3.1.0 (C++/Java/Python/PHP/Ruby/Objective-C/C#/JavaScript/Lite) Python * JSON support * Fixed some conformance issues.
http://pkgsrc.se/devel/py-protobuf
💬 Easy/Newbie PCB for MySensors... @ElCheekytico - I dont know why they wont update to rev 10. I have uploaded the new revision to openhardware... nothing else I can do, sorry. @pierre1410 - I would suggest you connect the battery, and then the usb/ftdi cable, but only GND, TX and RX fro the usb connection. This will have the node running on the batteries VCC and get you closer to the real deal. Using VCC from the usb adapter might be higher VCC than the batteries, and in worst case scenario it works great with that connected but not when you deploy and only use battery power. Hello I think there is a mistake in your code (on github) to mesure battery. The array have 4 values, but you divide by 3. So you get to high result. On line 77 Everything else is ok in my case the board works well with 2 battery AA, and 47uF cap (not with 4.7uF) .]); @mfalkvidd : I think you have to change the condition in if too : if (batLoop > 1) { With that, batLoop = 2 when entering in the if (so the 3 values of the array are filled : 0, 1, 2), calculating the average and reseting the batLoop to 0. If not modify, the array will be set with batArray[3] just before entering the if (batLoop > 2) { Other way is to set the batArray size to 4 values, and divide by 4 (I'm actually trying @pierre1410 - no stupid questions, simple answer though, no Hello Sundberg84 et al, I purchased Easy Newbie PCB (Rev 10) with the hope of jumping into the IoT rave... I tried to create the Gateway - and it works perfectly. I also found and created an advanced Water Sprinkler MyS project by Pete and Co!... that also worked perfectly. However, I cant seem to get simpler beginner sketches to work with EasyNewbie. I have soldered the need components (the regulated version). I plan to connect to a 4 relay module, but for now, I am simulating the relay with a simple LED attached to D2. 
I have uploaded the following sketch which was originally written for version 1.0 MySensors - but have been corrected using guidelines I found for converting 1.0 - 2.0 version. but still no cheese - the controller (Domoticz) displays the Node, but not the complete child nodes, - the LED connected to pin D2 (or D3, D5..) on the Easy Newbie doesn't light up even when I toggle the switches and devices on controller. Don't know what I might be doing wrong, please help.

// Example sketch showing how to control physical relays.
// This example will remember relay state even after power failure.

// Enable debug prints
#define MY_DEBUG

// Enable and select radio type attached
#define MY_RADIO_NRF24
#define MY_NODE_ID 4 // Set this to fix your Radio ID or use AUTO or 1
#define MY_REGISTRATION_FEATURE // Forece registration
#define MY_REGISTRATION_RETRIES 5

#include <Wire.h>
#include <TimeLib.h>
#include <SPI.h>
#include <MySensors.h>
#include <LCD.h>
#include <LiquidCrystal.h>
#include <LiquidCrystal_I2C.h>

// For Debug
#ifdef DEBUG_ON
#define DEBUG_PRINT(x) Serial.print(x)
#define DEBUG_PRINTLN(x) Serial.println(x)
#else
#define DEBUG_PRINT(x)
#define DEBUG_PRINTLN(x)
#define SERIAL_START(x)
#endif

//#define RELAY_1 2 // Arduino Digital I/O pin number for first relay (second on pin+1 etc)
#define RELAY_PIN 3
#define NUMBER_OF_RELAYS 4 // Total number of attached relays
#define RELAY_ON 0 // GPIO value to write to turn on attached relay
#define RELAY_OFF 1 // GPIO value to write to turn off attached relay

#define SKETCH_NAME "Base Relay Node"
#define SKETCH_VERSION "0.1.2"
#define CHILD_ID 0

MyMessage msg(1,V_LIGHT);

//void before()
void setup() {
  for (int sensor=1, pin=RELAY_PIN; sensor<=NUMBER_OF_RELAYS; sensor++, pin++) {
    // Then set relay pins in output mode
    pinMode(pin, OUTPUT);
    // Set relay to last known state (using eeprom storage)
    digitalWrite(pin, loadState(sensor)?RELAY_ON:RELAY_OFF);
  }
}

void presentation() {
  sendSketchInfo(SKETCH_NAME, SKETCH_VERSION);
  // Fetch relay status
  for (int sensor=1, pin=RELAY_PIN; sensor<=NUMBER_OF_RELAYS; sensor++, pin++) {
    // Register all sensors to gw (they will be created as child devices)
    // present(sensor, S_BINARY);
    present(sensor, S_LIGHT);
    // Then set relay pins in output mode
    pinMode(pin, OUTPUT);
    // Set relay to last known state (using eeprom storage)
    boolean savedState = loadState(sensor);
    digitalWrite(pin, savedState?RELAY_ON:RELAY_OFF);
    send(msg.set(savedState? 1 : 0));
  }
  DEBUG_PRINTLN(F("Sensor Presentation Complete"));
}

void loop() {
  // Alway process incoming messages whenever possible
  // Sleep until interrupt comes in on motion sensor. Send update every two minute.
  // sleep(digitalPinToInterrupt(DIGITAL_INPUT_SENSOR), CHANGE, SLEEP_TIME);
  // update Relays;
  for (int sensor=1, pin=RELAY_PIN; sensor<=NUMBER_OF_RELAYS; sensor++, pin++) {
    // Register all sensors to gw (they will be created as child devices)
    // present(sensor, S_BINARY);
    send(msg.set(sensor).set(false), false);
    wait(50);
  }
}

void incomingMessage(const MyMessage &message) {
  // We only expect one type of message from controller. But we better check anyway.
  if (msg.type==V_STATUS) {
    // Change relay state
    digitalWrite(msg.sensor-1+RELAY_PIN, msg.getBool()?RELAY_ON:RELAY_OFF);
    // Store state in eeprom
    saveState(msg.sensor, msg.getBool());
    // Write some debug info
    Serial.print("Incoming change for sensor:");
    Serial.print(msg.sensor);
    Serial.print(", New status: ");
    Serial.println(msg.getBool());
  }
}

@eme - hi! Im very sorry, code is not my strong side. Do you think its a problem with the hardware? Do you have any logs?

@sundberg84 I have the Domoticz log below. but just incase it doesn't reveal much, do you have a working sketch for EasyNewbie and a 4-relay module block?
2019-04-05 22:28:20.172 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:28:21.229 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:28:22.169 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:28:55.333 Status: MySensors: Node: 3, Sketch Name: Base Relay Node 2019-04-05 22:28:56.459 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:29:00.141 Status: LUA: All based event fired 2019-04-05 22:29:53.736 Status: User: Eme initiated a switch command (29/Light/On) 2019-04-05 22:29:54.297 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:29:57.701 Status: User: Eme initiated a switch command (29/Light/Off) 2019-04-05 22:29:58.206 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:30:00.533 Status: User: Eme initiated a switch command (28/Light/On) 2019-04-05 22:30:00.534 Status: LUA: All based event fired 2019-04-05 22:30:01.729 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:30:07.427 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:30:07.095 Status: User: Eme initiated a switch command (27/Light/On) 2019-04-05 22:30:13.674 Status: User: Eme initiated a switch command (29/Light/On) 2019-04-05 22:30:15.706 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:30:19.773 Status: User: Eme initiated a switch command (26/Light/On) 2019-04-05 22:30:20.138 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:30:55.858 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:30:55.427 Status: User: Eme initiated a switch command (26/Light/Off) 2019-04-05 22:30:59.611 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:30:59.256 Status: User: Eme initiated a switch command (29/Light/Off) 2019-04-05 22:31:00.280 Status: LUA: All based event fired 2019-04-05 22:31:02.813 Status: User: Eme initiated a switch command (27/Light/Off) 2019-04-05 22:31:03.242 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:31:06.944 Status: User: Eme initiated a switch command (28/Light/Off) 2019-04-05 22:31:07.393 (My Sensors Gateway) 
Light/Switch (Light) 2019-04-05 22:31:22.892 (My Sensors Gateway) Light/Switch (Light) 2019-04-05 22:31:22.527 Status: User: Eme initiated a switch command (26/Light/On) 2019-04-05 22:32:00.351 Status: LUA: All based event fired ``` @eme sorry I don't. The most common "issue" is that power is to weak for 4 relays or noice is introduced in the radio from the relay. It's very hard to understand without real logs. Sorry I eventually got it to work. I am not using the EasyNewbie to power the load (its for 24volts swimming pool light). Couldn't get the 4th relay to work - not sure why its not responding to the controller. but I noticed some bug with my batch of 16 PCB - Okay I think I have seen some bugs... my batch of EasyNewbie Rev 10 has some hardware issues - The provisions to attach sensors are not correctly marked. D6 doesn't actually connect to D6 pin on the Arduino (must have failed a via somewhere). I am not sure how many pins are used for the radio and other components, but if you intend to use this for a small project, I will avoid D8 - D13 as they appears to be taken. D1 and D2 are used for the TX & RX so you only have D2 - D6 to play with (more than enough pins for a newbie if you ask me). @eme good you found the issue. First time I ever hear of a comfirmed PCB hardware bug, so its very rare. Very strange the silkscreen were moved or not aligned correctly as well. From where did you order the PCB? The documentation states which pins connect to what on the PCB, so make sure you check that out. D9-D13 are used by the radio as in all MySensors projects. D8 for the extra flash. You can find the MysX documentation here as well if you need to find out more about the pins. .
https://forum.mysensors.org/topic/2740/easy-newbie-pcb-for-mysensors/617
Tax Have a Tax Question? Ask a Tax Expert You or your husband are not required to file any forms with the IRS. Gifts of a present interest (which cash is) between spouses can be unlimited. However, any gifts between nonspouses would require the transferor to file a gift tax return (IRS Form 709) if the gift exceeded $12,000. Congratulations on your marriage. No gift tax return (Form 709) is needed if you and your husband were USA citizens or resident aliens at the time of the gift. There is an unlimited allowable amount of "present interest" (not future interest such as capital gain, etc) for spouses. Be aware that the regular 12000 per year limit for gifts does still stand if either of you were nonresident aliens at the time of the gift. Form 709 would need to be filed to report 108000 as a gift (120000 - 12000 = 108000) by him if this is your situation.
http://www.justanswer.com/tax/0vi0f-one-time-spouse-gift-exemption.html
As we learned in the beginning of the chapter, another custom tool you can build with .NET is a component. A component is similar to a control, but doesn't inherit from the Control class and typically doesn't render any UI. A common use for components is to encapsulate business logic or data access logic. In this section, we will create a simple SalesTax component and demonstrate how to use it in an ASP.NET page. For more on components, refer to the IBuySpy case study in Chapter 14.

To create a component, we simply write a class just as we did with our initial HelloWorld control. However, for a component we don't need to inherit from anything (although we can if we choose to), so for this example just declaring the class is sufficient. The complete listing for our SalesTax component is in Listing 12.15.

namespace ASPNETByExample
{
    public class SalesTax
    {
        static double taxRate = 0.06;

        public static double computeTax(double price)
        {
            return price * taxRate;
        }
    }
}
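The section says the component will be called from an ASP.NET page; a hypothetical page along these lines could use it (the page markup and the literal price are illustrative, not from the book's listing):

```
<%@ Page Language="C#" %>
<%@ Import Namespace="ASPNETByExample" %>
<html>
<body>
  <!-- Hypothetical usage: compute the tax on a $10.00 price -->
  Tax on $10.00: <%= SalesTax.computeTax(10.0) %>
</body>
</html>
```

Because computeTax is static, the page needs no object instantiation; importing the namespace and calling the method directly is enough.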
https://flylib.com/books/en/3.146.1.110/1/
In .NET 4.0, we have a set of new API's to simplify the process of adding parallelism and concurrency to applications. This set of API's is called the "Task Parallel Library (TPL)" and is located in the System.Threading and System.Threading.Tasks namespaces. In the Table shown below, I have mentioned some of the classes used for Parallel programming in .NET 4.0. If you are new to Parallel Tasks in .NET, check this article Introduction to .NET Parallel Task In this article, I have described a scenario where a WPF application is trying to retrieve data using a WCF service and also trying to access data from the local database server. The application dedicates a long running WCF service call to the Task class, so that the call to the service can be made asynchronously. The below diagram explains the scenario: Step 1: Open VS2010 and create a blank solution, name it as ‘CS_Task_Demo’. In this project, add a new WCF Service application project and name it as ‘WCF40_DataService’. Rename IService1.cs to IService.cs and Service1.svc to Service.svc. Step 2: Write the following code in IService.cs with ServiceContract, OperationContract and DataContract. The OperationContract method also applies the WebGet attribute so that the WCF service is published as WCF REST service. Step 3 : Now write the following code in the Service class. This code makes a call to Database and retrieve the Employee records. Note: The above code uses the Thread.Sleep(10000) which waits for 10 seconds to get the data from the Database. I did this to emulate the effect of the Task class. Step 4: Change the code of Service.svc as shown below: Step 5: Make the following changes in Web.Config file which adds the WebHttpBinding for the WCF REST Services. <protocolMapping> <add binding="webHttpBinding" scheme="http"/> </protocolMapping> Step 6: Publish the WCF service on IIS. 
Step 7: In the same solution, add a new WPF project and name it as ‘WPF40_TasksClass’,make sure that the framework for the project as .NET 4.0. Add the following XAML: Step 8: Open MainPage.xaml.cs and add the below classes: Step 9: Add the following code in the GetData click event. This code defines a Task object which initiates an Asynchronous operation to the WCF REST Service. This downloads the XML data by making call to WCF service. During this time of the service call, a call to the local database is made and completes the call. Once the service call is over, the data is processed. The code is as below: The above code creates an instance of the Task class using its Factory property to retrieve a TaskFactory instance that can be used to create task. Note: Please read comments in the code carefully to understand it. Step 10: Run the application and click on the ‘Get Data’ button. You will get the data immediately in the DataGrid. This data is fetched from the local sql server database as below: Click on the ‘OK’ for the message box and wait for some time, you will get data in the DataGrid on the Left hand side which gets the data from the WCF REST service as shown below: Now if you see the above output, the time for the WCF service call is more than 10 Seconds and the local database call takes 0.2 Seconds. The entire source code of this article can be downloaded over here
http://www.dotnetcurry.com/wpf/754/using-task-parallel-library-load-data
#include <OSDataStructures.h>

List of all members.

Assume there are n variables in what follows.

Definition at line 244 of file OSDataStructures.h.

Default constructor.

An alternative constructor.

Default destructor.

bDeleteArrays is true if we delete the arrays in garbage collection; set to true by default.

Definition at line 251 of file OSDataStructures.h.

hessDimension is the number of nonzeros in each array.

Definition at line 256 of file OSDataStructures.h.

hessRowIdx is an integer array of row indices in the range 0, ..., n - 1.

Definition at line 261 of file OSDataStructures.h.

hessColIdx is an integer array of column indices in the range 0, ..., n - 1.

Definition at line 266 of file OSDataStructures.h.

hessValues is a double array of the Hessian values.

Definition at line 271 of file OSDataStructures.h.
http://www.coin-or.org/Doxygen/CoinAll/class_sparse_hessian_matrix.html
It’s common to see forms that have a label displayed inside text fields which then disappear when you click into the field. Let’s go through an example for a login form that demonstrates a method for doing this. First, start with a simple HTML form: <form action="" method="post" class="login"> <ul> <li> <div class="title">Login</div> </li> <li> <label for="username">Username</label> <input id="username" type="text"> </li> <li> <label for="password">Password</label> <input id="password" type="password"> </li> <li> <button type="submit">Login</button> </li> </ul> </form> Next let’s start styling up the form. /* Create a bordered box for the form */ form.login { background-color:#eee; width:300px; padding:10px 20px; border:5px solid #ddd; } /* Set the font for all of the elements */ form.login, form.login input, form.login button { font-family: Helvetica, Arial; font-size:18px; } /* Make our title a bit bigger */ form.login div.title { font-size:24px; font-weight:bold; margin-bottom:10px; } /* Remove the bullets and padding from our list */ form.login ul { list-style-type:none; padding:0; margin:0; } /* Give our input fields a fixed width and a bit of padding */ form.login input { width:280px; padding:5px; margin-bottom:10px; } For the next step we want to have our labels displayed over the top of our input fields. So a bit of extra CSS to do that. /* Make each field container relative. This lets us position the label absolutely inside it. */ form.login ul li { position:relative; } /* Position the labels inside our input fields. */ form.login label { position:absolute; top:8px; left:9px; color:#aaa; } Lastly we need to have the labels disappear when we start editing a text field and reappear again if we leave the text field without entering a value. We will use the jQuery focus() function to handle entering a text field and the blur() function to handle when leaving a field. We’ll also hide the labels for fields that have a pre-populated value. 
$(document).ready(function(){
  // Find each of our input fields
  var fields = $("form.login input");

  // If a field gets focus then hide the label
  // (which is the previous element in the DOM).
  fields.focus(function(){
    $(this).prev().hide();
  });

  // If a field loses focus and nothing has
  // been entered in the field then show the label.
  fields.blur(function(){
    if (!this.value) {
      $(this).prev().show();
    }
  });

  // If the form is pre-populated with some values
  // then immediately hide the corresponding labels.
  fields.each(function(){
    if (this.value) {
      $(this).prev().hide();
    }
  });
});

However jQuery supports function chaining, so perhaps a nicer way to write this may be:

$(document).ready(function(){
  $("form.login input")
    .each(function(){
      if (this.value) {
        $(this).prev().hide();
      }
    })
    .focus(function(){
      $(this).prev().hide();
    })
    .blur(function(){
      if (!this.value) {
        $(this).prev().show();
      }
    });
});

EDIT: Dan G. Switzer, II provided a slightly more succinct version of the jQuery code:

$(document).ready(function(){
  $("form.login input")
    .bind("focus.labelFx", function(){
      $(this).prev().hide();
    })
    .bind("blur.labelFx", function(){
      $(this).prev()[!this.value ? "show" : "hide"]();
    })
    .trigger("blur.labelFx");
});

18 Comments

@Kevan: Instead of using the each() method, you can simplify the code by doing:

$(document).ready(function(){
  $("form.login input")
    .bind("focus.labelFx", function(){
      $(this).prev().hide();
    })
    .bind("blur.labelFx", function(){
      // hide or show the label based on whether we have values in the field
      $(this).prev()[!this.value ? "show" : "hide"]();
    })
    .trigger("blur.labelFx");
});

What I did was change the blur() event to hide or show the label based upon whether or not the value is present. We need to make sure to explicitly use hide() if there's a value in there, because I use the trigger() method to call the blur() event on each field after initialization. This makes sure that all your pre-filled in values get processed correctly.
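The trick in Dan's version, $(this).prev()[!this.value ? "show" : "hide"](), relies on JavaScript's bracket notation to pick which method to call at runtime. A small jQuery-free sketch of the same idiom (the label object here is a stand-in, not part of the tutorial's code):

```javascript
// obj[cond ? "a" : "b"]() selects a method name at runtime, then calls it.
const label = {
  visible: true,
  show() { this.visible = true; },
  hide() { this.visible = false; },
};

function syncLabel(fieldValue) {
  // Same pattern as $(this).prev()[!this.value ? "show" : "hide"]()
  label[!fieldValue ? "show" : "hide"]();
}

syncLabel("");               // empty field → label shown
console.log(label.visible);  // true
syncLabel("user");           // non-empty field → label hidden
console.log(label.visible);  // false
```

This is purely a syntactic convenience over an if/else, but it keeps the show/hide decision to a single expression, which is why it reads as "more succinct" in the jQuery version above.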
Also, you'll notice I gave namespaces to the events. That was so I could explicitly trigger just blur.labelFx. Without namespaces, calling trigger("blur") would run all blur events, and if you end up attaching multiple behaviors to the fields you really don't want to re-run those behaviors more than you need to. The namespace gives you a way to trigger only the events you care about.

@Dan That's outstanding! Thank you.

hello.. I'm new to this.. hope you can help me with this.. I'm using the "Using form labels as text field values" jQuery. Where can I set the email receiver, so that the comment or message is automatically sent to your mail? Thanks.

@ben, to send an email from a web form you'll need to use a server language such as PHP or ColdFusion. Here's a Google search that might help get you started:

Sorry to sound stupid, but I have two questions. Firstly, do you have to create the username and password fields as a list? The reason I ask is that if you were doing this for a whole form, I wonder if there's a way to create a div or something for all fields in the form and then refer to the div. Secondly, where exactly in your code did you put the JavaScript? I can't seem to get it to work on mine. Thanks!

@Benjy, the form fields can definitely be marked up different ways, but you will most likely need to have a separate container for each label/form-field pair (such as a DIV or an LI, as in my example). I had the JavaScript code up within the head of the page. You can see a live demo over here:

Ok, thanks for that. I sort of have it working, but now have a different problem. I can only 'see' the username field. Below are my various bits of code. In the main index file:

Username

From the CSS file:

```css
#LoginPage {
  position: relative;
  float: left;
  margin-right: 3px;
}
#uname {
  position: relative;
}
#pword {
  position: relative;
}
form.login label {
  position: absolute;
  top: 3px;
  left: 10px;
  color: #aaa;
}
```

Any ideas? Thanks!

@Benjy, sorry looks like wordpress ate your html tags.
Feel free to repost. To get it to display properly in the comment wrap your html code with a pre tag: And you can wrap your CSS code with

thanks, needed this. But in my case it did not work correctly due to the prev() function: because I had other DOM elements that were after the label and before the input, I had to use siblings("label"), i.e. replace all instances of prev() with siblings("label"). Another update I had to do was to refine the selector a bit: instead of input, I filtered it further to limit the selector to text input types only:

```js
$("form.login input[type=text]")
```

@johnny, thanks for the note.

Nice writeup. Whenever I need to do something like this, I tend to wrap the label and input in a fieldset. One thing worth adding is handling users without JavaScript turned on, as otherwise the label will appear in the text and it will look very confusing. To do so you can add the following squirt of jQuery:

```js
/* Only apply the positioning style if JS is on */
$('form.login label').addClass('labels-inside-inputs');
```

Then change the CSS selector to be:

```css
form.login label.labels-inside-inputs { ... }
```

@Rich, great tip, thanks!

I'm trying to get the email address above my label to go inside my label…my site I'm working on is How do I get the label to display the email address inside the label? Thanks in advance.

@ed, you'll need to set your container div to have position:relative, then you can set your label to position:absolute and set your top and left values to position the label within the text box. For example:

Hi! Thanks for the great A-B-C instructions for this plugin! I have a slight problem, tho. When you click in one input field, the other labels move position. The address is so you can see what I'm talking about. I used absolute positioning with IDs for each label; they are not a class. Can you suggest a remedy for this? I would greatly appreciate it! Have a good one!
A small issue occurs following this guide: the absolutely positioned label on top of the field doesn't propagate the click to the field, so you have to click on the white space to engage the field.

```js
$("… label").click(function(){
  $(this).next().focus();
});
```

and showing a text cursor with CSS over the label would solve the problem. Cheers!

Thanks for this! By the way, you also need a change handler in case a user fills in the form using the browser's autofill settings: selecting an autofill in one field could automatically fill all the rest of the fields, without any focus/blur events being fired. I got around that by doing this:

```js
.bind("change.labelFx", function(){
  $(this).trigger("blur.labelFx");
})
```

One Trackback

[...] solution is to either use a top-aligned label, or add the form label as a text-field value. The latter requires less vertical real estate, but can be a little annoying if the field [...]
http://blog.stannard.net.au/2011/01/07/creating-a-form-with-labels-inside-text-fields-using-jquery/
Hi all,

I'm a newcomer to this library (started working with it yesterday) and so far I'm greatly impressed with almost every aspect. The documentation is fantastic for an open-source library, the set of functionality is perfect for my needs, the included samples and unit tests are a dream. Great job all around.

That said, I have been spinning around in circles for several hours now on a bizarre issue that is just about to do me in. I cannot, to save my life, initialize an IPAddress (or any object that builds one as part of its initialization) to anything other than a wildcard address - which is great for the server but useless for a test client. Specifically, the exception traces back to passing in an explicit host. For some reason I cannot root out, the constructor gets a bad pointer to the string. Here's my current code:

```cpp
// begin sample
#include "Poco/Foundation.h"
#include "Poco/Net/SocketAddress.h"

using Poco::Net::SocketAddress;

int main(int argc, char* argv)
{
    SocketAddress socketaddress("localhost", 8765);
    return 0;
}
// end sample
```

Pretty simple, right? But it just plain will not work! When I trace in, I see that "localhost" goes through the string constructor as expected, returns the correct value, but when I enter the SocketAddress constructor "addr" claims it's a bad pointer. I'm running Visual Studio 2005 Standard Edition, compiling all of the libraries without issue, and everything links and compiles in my project no problem. If anybody can point me in the direction of something else to try it would be much appreciated. Thanks!

Casey
http://sourceforge.net/p/poco/mailman/poco-develop/?viewmonth=200609&viewday=10
virtiofsd - Man Page

QEMU virtio-fs shared file system daemon

Synopsis

virtiofsd [Options]

Description

- -h, --help Print help.
- -V, --version Print version.
- -d Enable debug output.
- --syslog Print log messages to syslog instead of stderr.
- -o OPTION
  - debug - Enable debug output.
  - flock|no_flock - Enable/disable flock. The default is no_flock.
  - modcaps=CAPLIST - Modify the list of capabilities allowed; CAPLIST is a colon-separated list of capabilities, each preceded by either + or -, e.g. ''.
  - posix_acl|no_posix_acl - Enable/disable posix acl support. Posix ACLs are disabled by default.
- --socket-path=PATH Listen on vhost-user UNIX domain socket at PATH.
- --socket-group=GROUP Set the vhost-user UNIX domain socket gid to GROUP.
- --fd=FDNUM Accept connections from vhost-user UNIX domain socket file descriptor FDNUM. The file descriptor must already be listening for connections.
- --thread-pool-size=NUM Restrict the number of worker threads per request queue to NUM. The default is 64.
- --cache=none|auto|always Select the desired trade-off between coherency and performance. none forbids the FUSE client from caching to achieve best coherency at the cost of performance. auto acts similar to NFS with a 1 second metadata cache timeout. always sets a long cache lifetime at the expense of coherency. The default is auto.

Extended Attribute (Xattr) Mapping

Mapping syntax

Using ':' as the separator, a rule is of the form:

:type:scope:key:prepend:

scope is:

- 'client' - match 'key' against a xattr name from the client for setxattr/getxattr/removexattr
- 'server' - match 'prepend' against a xattr name from the server for listxattr
- 'all' - can be used to make a single rule where both the server and client matches are triggered.

type is one of:

- 'prefix' - is designed to prepend and strip a prefix; the modified attributes then being passed on to the client/server.
- 'ok' - Causes the rule set to be terminated when a match is found while allowing matching xattr's through unchanged. It is intended both as a way of explicitly terminating the list of rules, and to allow some xattr's to skip following rules.
- 'bad' - If a client tries to use a name matching 'key' it's denied using EPERM; when the server passes an attribute name matching 'prepend' it's hidden. In many ways its use is very like 'ok', as either an explicit terminator or for special handling of certain patterns.

key is a string tested as a prefix on an attribute name originating on the client. It may be empty, in which case a 'client' rule will always match on client names.

prepend is a string tested as a prefix on an attribute name originating on the server, and used as a new prefix. It may be empty, in which case a 'server' rule will always match on all names from the server.

e.g.:

:prefix:client:trusted.:user.virtiofs.: will match 'trusted.' attributes in client calls and prefix them before passing them to the server.

:prefix:server::user.virtiofs.: will strip 'user.virtiofs.' from all server replies.

:prefix:all:trusted.:user.virtiofs.: combines the previous two cases into a single rule.

:ok:client:user.:: will allow get/set xattr for 'user.' xattr's and ignore following rules.

:ok:server::security.: will pass 'security.' xattr's in listxattr from the server and ignore following rules.

:ok:all::: will terminate the rule search, passing any remaining attributes in both directions.

:bad:server::security.: would hide 'security.' xattr's in listxattr from the server.

A simpler 'map' type provides a shorter syntax for the common case:

:map:key:prepend:

The 'map' type adds a number of separate rules to add prepend as a prefix to the matched key (or all attributes if key is empty). There may be at most one 'map' rule and it must be the last rule in the set.
Note: When the 'security.capability' xattr is remapped, the daemon has to do extra work to remove it during many operations, which the host kernel normally does itself.

Security considerations

Operating systems typically partition the xattr namespace using well defined name prefixes. Each partition may have different access controls applied. For example, on Linux there are multiple partitions:

- system.* - access varies depending on attribute & filesystem
- security.* - only processes with CAP_SYS_ADMIN
- trusted.* - only processes with CAP_SYS_ADMIN
- user.* - any process granted by file permissions / ownership

Other OSes such as FreeBSD have different name prefixes and access control rules.

When remapping attributes on the host, it is important to ensure that the remapping does not allow a guest user to evade the guest access control rules. Consider if trusted.* from the guest was remapped to user.virtiofs.trusted.* on the host. An unprivileged user in a Linux guest has the ability to write to xattrs under user.*. Thus the user can evade the access control restriction on trusted.* by instead writing to user.virtiofs.trusted.*. As noted above, the partitions used and access controls applied will vary across guest OSes, so it is not wise to try to predict what the guest OS will use. The simplest way to avoid an insecure configuration is to remap all xattrs at once, to a given fixed prefix. This is shown in example (1) below. If selectively mapping only a subset of xattr prefixes, then rules must be added to explicitly block direct access to the target of the remapping. This is shown in example (2) below.

Mapping examples

1. Prefix all attributes with 'user.virtiofs.'

-o xattrmap=":prefix:all::user.virtiofs.::bad:all:::"

This uses two rules, using : as the field separator; the first rule prefixes and strips 'user.virtiofs.', the second rule hides any non-prefixed attributes that the host set.
This is equivalent to the 'map' rule:

-o xattrmap=":map::user.virtiofs.:"

2. Prefix 'trusted.' attributes, allow others through

"/prefix/all/trusted./user.virtiofs./
/bad/server//trusted./
/bad/client/user.virtiofs.//
/ok/all///"

Here there are four rules, using / as the field separator. The first rule performs the prefixing of 'trusted.' and stripping of 'user.virtiofs.'. The second rule hides unprefixed 'trusted.' attributes on the host. The third rule stops a guest from explicitly setting the 'user.virtiofs.' path directly to prevent access control bypass on the target of the earlier prefix remapping. Finally, the fourth rule lets all remaining attributes through.

This is equivalent to the 'map' rule:

-o xattrmap="/map/trusted./user.virtiofs./"

3. Hide 'security.' attributes, and allow everything else

"/bad/all/security./security./
/ok/all///"

The first rule combines what could be separate client and server rules into a single 'all' rule, matching 'security.' in either client arguments or lists returned from the host. This stops the client seeing any 'security.' attributes on the server and stops it setting any.

Examples

Author

Stefan Hajnoczi <stefanha@redhat.com>, Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>

2021, The QEMU Project Developers
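The prefix-rule semantics documented above can be modeled in a few lines. This is a sketch, not virtiofsd code: the function names are made up, but the behaviour follows the ':prefix:client:trusted.:user.virtiofs.:' and ':prefix:server::user.virtiofs.:' rules described earlier.

```python
def client_to_server(name, key="trusted.", prepend="user.virtiofs."):
    """Model of ':prefix:client:trusted.:user.virtiofs.:' --
    prefix matching client xattr names before they reach the server."""
    return prepend + name if name.startswith(key) else name

def server_to_client(name, prepend="user.virtiofs."):
    """Model of ':prefix:server::user.virtiofs.:' --
    strip the prefix from names coming back from the server."""
    return name[len(prepend):] if name.startswith(prepend) else name

print(client_to_server("trusted.foo"))                 # user.virtiofs.trusted.foo
print(server_to_client("user.virtiofs.trusted.foo"))  # trusted.foo
print(client_to_server("user.data"))                   # user.data (no match, unchanged)
```

Note how, without an accompanying 'bad' rule for 'user.virtiofs.', a guest writing directly to user.virtiofs.trusted.foo would land on the same host attribute, which is exactly the bypass the security considerations above warn about.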
https://www.mankier.com/1/virtiofsd
I'm doing a C program that involves printing out a sine wave using the "*" character. The main coding is complete but for some strange reason the "*"'s just won't display on the screen. All I'm getting is blanks. Here's my code:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Declare variables */
    int line_count,           /* Counts the lines in the program */
        no_of_lines;          /* The number of lines to display in the graph */
    double result,            /* The sine result from the calculation */
        initial_step_size,    /* The initial step size in degrees */
        current_step_size,    /* The current step size in degrees */
        sine_count;           /* Counts the values in the sine range */

    /* Prompt user for information */
    printf("\nEnter the initial step-size in degrees: ");
    scanf("%lf", &initial_step_size);
    printf("\nEnter the number of lines to be displayed in the graph: ");
    scanf("%d", &no_of_lines);

    /* Assign to current step size */
    current_step_size = initial_step_size;

    /* Display graph */
    for (line_count = 0; line_count < no_of_lines; line_count++)
    {
        result = sin(current_step_size);
        for (sine_count = -1; sine_count <= 1; sine_count += 0.01)
        {
            if (result == sine_count)
                printf("\n*\n");
            else
                printf(" ");
        }
        /* Increment step size */
        current_step_size += initial_step_size;
    }
    return 0;
}
```

Both for loops run well and displayed the necessary values during tests. The only problem left is the if condition that is supposed to compare the calculated sin result with the sine range value. Basically, if the result matches the sine range value, display a star; otherwise, display a blank. Is it a problem with my code? Or is it possible that it may be a problem with my compiler? Any help will be appreciated. Thanks! :)
http://forum.codecall.net/topic/63090-trying-to-draw-a-sine-wave-in-c/
Comparing Python to Other Languages

Disclaimer: This essay was written sometime in 1997. It shows its age. It is retained here merely as a historical artifact. --Guido van Rossum

Python is often compared to other interpreted languages such as Java, JavaScript, Perl, Tcl, or Smalltalk. Comparisons to C++, Common Lisp and Scheme can also be enlightening. In this section I will briefly compare Python to each of these languages. These comparisons concentrate on language issues only. In practice, the choice of a programming language is often dictated by other real-world constraints such as cost, availability, training, and prior investment, or even emotional attachment. Since these aspects are highly variable, it seems a waste of time to consider them much for this comparison.

JavaScript

Python's "object-based" subset is roughly equivalent to JavaScript. Like JavaScript (and unlike Java), Python supports a programming style that uses simple functions and variables without engaging in class definitions. However, for JavaScript, that's all there is. Python, on the other hand, supports writing much larger programs and better code reuse through a true object-oriented programming style, where classes and inheritance play an important role.

Perl

Python and Perl come from a similar background (Unix scripting, which both have long outgrown), and sport many similar features, but have a different philosophy. Perl emphasizes support for common application-oriented tasks, e.g. by having built-in regular expressions, file scanning and report generating features. Python emphasizes support for common programming methodologies such as data structure design and object-oriented programming, and encourages programmers to write readable (and thus maintainable) code by providing an elegant but not overly cryptic notation.
As a consequence, Python comes close to Perl but rarely beats it in its original application domain; however Python has an applicability well beyond Perl's niche.

Tcl

Like Python, Tcl is usable as an application extension language, as well as a stand-alone programming language. However, Tcl, which traditionally stores all data as strings, is weak on data structures, and executes typical code much slower than Python. Tcl also lacks features needed for writing large programs, such as modular namespaces. Thus, while a "typical" large application using Tcl usually contains Tcl extensions written in C or C++ that are specific to that application, an equivalent Python application can often be written in "pure Python". Of course, pure Python development is much quicker than having to write and debug a C or C++ component. It has been said that Tcl's one redeeming quality is the Tk toolkit. Python has adopted an interface to Tk as its standard GUI component library. Tcl 8.0 addresses the speed issue by providing a bytecode compiler with limited data type support, and adds namespaces. However, it is still a much more cumbersome programming language.

Smalltalk

Perhaps the biggest difference between Python and Smalltalk is Python's more "mainstream" syntax, which gives it a leg up on programmer training. Like Smalltalk, Python has dynamic typing and binding, and everything in Python is an object. However, Python distinguishes built-in object types from user-defined classes, and currently doesn't allow inheritance from built-in types. Smalltalk's standard library of collection data types is more refined, while Python's library has more facilities for dealing with Internet and WWW realities such as email, HTML and FTP. Python has a different philosophy regarding the development environment and distribution of code.
Where Smalltalk traditionally has a monolithic "system image" which comprises both the environment and the user's program, Python stores both standard modules and user modules in individual files which can easily be rearranged or distributed outside the system. One consequence is that there is more than one option for attaching a Graphical User Interface (GUI) to a Python program, since the GUI is not built into the system.

C++

Python shines as a glue language, used to combine components written in C++.

Common Lisp and Scheme

These languages are close to Python in their dynamic semantics, but so different in their approach to syntax that a comparison becomes almost a religious argument: is Lisp's lack of syntax an advantage or a disadvantage? It should be noted that Python has introspective capabilities similar to those of Lisp, and Python programs can construct and execute program fragments on the fly. Usually, real-world properties are decisive: Common Lisp is big (in every sense), and the Scheme world is fragmented between many incompatible versions, where Python has a single, free, compact implementation.
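The on-the-fly code construction mentioned above is easy to demonstrate with ordinary Python; this small example is mine, not the essay's:

```python
# Build a function definition as a string at run time...
source = "def square(n):\n    return n * n\n"

# ...then compile and execute it, capturing the new function
# in a dictionary that serves as its namespace.
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)

print(namespace["square"](7))  # 49
```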
https://www.python.org/doc/essays/comparisons/
I have a string:

s='....'

I print the string to the terminal. It displays fine:

print s

I put the string into a tkinter StringVar:

text_v.set(s)

I fetch the string back from this StringVar:

s=text_v.get()

I try to print this string again. I get an exception:

print s

The exception I get is:

UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1

From what I read in various forums, the reason could be that my terminal does not support the unicode characters used. However, since I could print the string fine in the first place, this can't be the reason here.

I have a self-contained example showing the problem, and wanted to include it in this post, but if I do this, I always get a SQL error. I guess this is also related to the fact that my code contains unicode characters. Hence, to show the example program, I did two things: I uploaded it as an attachment, and to be on the safe side, I also provide a screenshot of the program (just in case the upload also gets messed up). Here it is:

```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
from Tkinter import *

def test():
    global text_v
    print text_v.get()

root=Tk()
text_v=StringVar()
textarea=Entry(root,textvariable=text_v)
s='東京'
print s
text_v.set(s)
##### print text_v.get()
button=Button(root,text='押してよ!',command=test)
textarea.pack()
button.pack()
root.geometry('+70+50')
root.mainloop()
root.destroy()
```

A few notes:

- Line 13 displays the string 東京 correctly on the command line of my terminal
- After the mainloop is entered, the string is shown in the entry field, and the button text is also shown correctly. Doesn't look like an encoding problem to me.
- When I press the button, the exception is thrown in line 7
- When I uncomment line 15 and run the program again, the exception is already thrown in this line, 15.

What's causing this, and how can I remedy it?
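A likely explanation, stated as an assumption since it depends on the terminal and Python 2 setup: the literal `s` is a UTF-8 byte string, which `print` writes to the terminal verbatim, while `StringVar.get()` returns a `unicode` object, which `print` must first encode using stdout's codec. If that codec is ASCII, the two Japanese characters at positions 0-1 cannot be encoded. The failing step can be reproduced in isolation:

```python
# -*- coding: utf-8 -*-
s = u'\u6771\u4eac'    # u'東京': the kind of unicode object StringVar.get() returns

try:
    s.encode('ascii')  # what an ASCII-configured stdout does implicitly on print
except UnicodeEncodeError as exc:
    # reproduces: 'ascii' codec can't encode characters in position 0-1
    print("cannot encode characters in position %d-%d" % (exc.start, exc.end - 1))
```

If that is indeed the cause, encoding explicitly before printing (e.g. `print s.encode('utf-8')` in Python 2) is a common workaround.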
http://www.python-forum.org/viewtopic.php?f=12&t=10833
Trouble with 2 InterruptIn

7 months, 2 weeks ago.

I'm having trouble when using 2 InterruptIn in the same code while using the ublox evk-NINA-B112. I came up with this test code, which I've already tried to compile on other boards, like the FRDM-KL25Z and K64 and the ST Nucleo L152RE, and all worked fine. I don't know if there's a problem with my code, or if this could be a problem with my board. I also changed the pins of the interrupts, tried many combinations, and tried to use both switches on the evk board, but only the first assigned interrupt works...

```cpp
#include "mbed.h"

InterruptIn isr_01(D3);
InterruptIn isr_02(D5);
//InterruptIn isr_01(D3, PullUp);
//InterruptIn isr_02(D5, PullUp);
Serial uart(D1, D0);
int pino_01 = 0;
int pino_02 = 0;

void subida_01(){
    pino_01 = 1;
}
void descida_01(){
    pino_01 = 2;
}
void subida_02(){
    pino_02 = 1;
}
void descida_02(){
    pino_02 = 2;
}

int main(){
    isr_01.rise(&subida_01);
    isr_01.fall(&descida_01);
    isr_02.rise(&subida_02);
    isr_02.fall(&descida_02);
    uart.printf("Inicio\n");
    while(1){
        if(pino_01 == 1){
            uart.printf("subida no pino 1\n");
            pino_01 = 0;
        }
        if(pino_01 == 2){
            uart.printf("descida no pino 1\n");
            pino_01 = 0;
        }
        if(pino_02 == 1){
            uart.printf("subida no pino 2\n");
            pino_02 = 0;
        }
        if(pino_02 == 2){
            uart.printf("descida no pino 2\n");
            pino_02 = 0;
        }
        wait_us(500000);
        uart.printf("0,5 sec\n");
    }
}
```

Any tips?

2 Answers

7 months, 2 weeks ago.

If you look at PinNames.h for that target:

```cpp
D5 = NC, // SWDIO
```

SWDIO of course is one of the programming pins. If you dig into the schematic, that pin on J3 is also labeled SWDIO. So I think you are going to need to pick a different pin.

7 months, 2 weeks ago.
Can you try making one small change to your code:

```cpp
volatile int pino_01 = 0;
volatile int pino_02 = 0;
```

Without the volatile keyword the compiler can perform optimizations which mean that if a variable is changed in an interrupt, that change may not be seen by the main loop. Volatile prevents it from doing this and forces it to check the actual variable value every time. Missing out the volatile keyword when it is needed can result in code that works on some platforms and not on others, or which stops / starts working due to seemingly unrelated changes in other parts of the code.
https://os.mbed.com/questions/87521/Trouble-with-2-InterruptIn/
DARPA to Fund Open Source Security Research

divert writes "Just got an email on the SEC-PROG mailing list that DARPA is looking to fund security research for open source operating systems." Maybe someone should just tell them about OpenBSD, save some time and money.

michael, dude... (Score:2)

This is sooo arrogant, I'm disgusted. Dude, you're talking about DARPA. They funded the development of The Internet. Were it not for them this site wouldn't exist.

Re:openbsd (Score:2)

There are environments where you need performance and security. This is especially true of supercomputing environments where different people with different security levels all have access to the same physical machine(s). Just because you have a firewall doesn't mean you aren't prone to attack. You are certainly less likely to be attacked from the outside world, but who said the attack had to come from the outside world? If you have a person with physical access to a machine you are trying to secure, it should still be extremely difficult for the person to gain entry into it.

Don't bother to submit as an independent. (Score:2)

Yes, it's a very inbred, good-ol-boys type of process, but that's life in military research...

"If I have seen further... (Score:2)

*Real* research is about incremental improvements to the existing base of knowledge.

If they could do this one... (Score:3)

That'd help. Sounds like they have some pretty high goals that require a lot of cooperation between various groups. I wonder how they intend to solicit that cooperation.

Re:DARPA - The government gets involved. (Score:1)

Namely, StackGuard [immunix.org] and several of the other Immunix [immunix.org] technologies were developed under DARPA grants. Wil --

Re:Of course they aren't going to use BSD... (Score:1)

And let's also not forget a little startup that got its start from DARPA: Sun Microsystems.
*** Linux Intrusion Detection System *** (Score:1)

[lids.org] I am running it on a test system and I am extremely impressed. It implements capabilities allowing you to assign least privilege, so if someone gets root on the box they still can't do anything. No longer do you need to open yourself up to attack just because a program needs to bind to a low-numbered port, for example. It's a huge boost to the security of any Linux system. This plus the standard techniques used to secure a box can really lock things down.

Methodology? (Score:1)

I know just the methodology they need: get more people to do the code audit. ___

Re:DARPA Involvement (Score:1)

Re:unix badness (Score:1)

Re:OpenBSD is not a Trusted System (Score:1)

Re:OpenBSD is not a Trusted System (Score:1)

I was at the last CHATS workshop and both OpenBSD and TrustedBSD were present, along with representatives from the Linux community and other open source projects as well as commercial vendors. While OpenBSD may be secure, it is not trusted and will never meet the requirements for a highly trusted system (LSPP/the old B1). This is because Theo's customers don't want it (as I recall it). richard.

Re:openbsd (Score:1)

It has no SMP support, for one thing. Why does this matter? Sure doesn't matter if you're running a firewall.

Re:Of course they aren't going to use BSD... (Score:3)

Who do you think put up the money to develop BSD in the first place? DARPA, of course.

OpenBSD (Score:2)

I've seen OpenBSD folks make a lot of claims, but I've never before seen one claim that all research into secure OSes should come to a halt now that it exists. -

Re:michael, dude... (Score:2)

OBTW, DARPA funded the development of BSD as well.

Re:Then let Open BSD people submit a proposal. (Score:3)

It's a lot easier if you affiliate yourself with a business or academic institution that already does business w/DARPA.

Re:unix badness (Score:1)

a) Accessing the net is fine, but setting up a server is not allowed (helps defeating trojans).
b) Just for safety, my Napster client may only access MP3's on my harddisk. MP3's on my harddisk and the NFS share are accessible to everybody. The Napster client may not access any other file except for its configuration, etc. c) user joe may not run X, only console.

Re:DARPA Involvement (Score:2)

Re:unix badness (Score:2)

I think the point is to push the state of the art ahead, not fiddle with existing systems. I mean your analogy is similar to "Would you rather take a bicycle or a skateboard to fly to the moon" instead of researching how to make rockets.

Re:unix badness (Score:1)

But that's not the choice. It's unix vs. writing a new OS. New OS wins for me.

Re:unix badness (Score:1)

Re:unix badness (Score:1)

Read it again. Stop worshipping at the altar of unix. It is not perfect. Typing from a unix system... damn, still no IE-beating browsers yet!

Re:SUBTERFUGUE (Score:1)

Hacks to the unix security model are nothing new, and are also nothing interesting. Posix ACLs, privilege bits, online tripwire style things, ptrace abusers, are all pretty damn skanky. And I would prefer an elegant unix with lax security that I know the limits of, to a clunky add-on-laden unix with no real coherent security model. Of course, an OS that used a capability model would be better...

Re:uninformed: redefine userspace as app-space? (Score:1)

To be secure, there are lots of user accounts. Each bit runs under a different one. Unfortunately, it's just a hack. To add users you need root access, i.e. ultimate boredom for root. Or package management nightmares with coordinating uids. Maybe this could be solved with a better PAM plugin. However, if you make a new user for every app automatic, kernel checks go like this:

```
userspace:
    system_call(arg1, arg2, ...);

kernelspace:
    user = current_program->user;
    do_check_on_whether_user_is_allowed(user);
    /* this could be: looking through a set of ACLs on a file,
       checking a privilege bit, or checking if the uid is 0 */
    do_the_job();
```

In a cap-based OS:

```
userspace1:
    call(cap, arg1, arg2, ...);

kernelspace call handler:
    dest = get_dest(cap);
    copy_args_to_dest(dest);
    schedule_dest();    /* dest can be kernel or user task */

dest:
    do_the_job();
```

So in a cap-based OS the possession of a cap means you are allowed to do something. No funky checks. The checks are done in userspace (no kernel policy) when you are given the caps. So doing this in a unixy OS would be drastically inefficient if it was done system-wide.

Re:OpenBSD is not the be all and end all... (Score:1)

is system security. I.e. you don't want to get rooted. But to be honest, I don't trust most programs I run with my own files. I don't want the huge unaudited mozilla to be able to write to my thesis. That's where unix cannot be fixed in an efficient way. You need to fundamentally break posix, unfortunately.

Re:OpenBSD is not the be all and end all... (Score:2)

You seem to have got the userspace/kernelspace split mixed up with the root/normal user split. The first is a difference in memory mapping. When you are running a normal program, your own memory is mapped appropriately as some of readable, writable, and executable. The kernel is always mapped non-readable, non-writable, and non-executable. When entering the kernel (e.g. system call, page fault, interrupt), the kernel memory is changed to be readable, writable, and executable. The second is how the kernel responds to system calls. When a system call is called, if it is a privileged operation, the kernel will perform a check to see if the program is allowed to do this. In old unix, this was often just a check to see if the uid in the process control structure was 0. In linux, it is usually a check of a privilege bit (evilly called capabilities by posix and linux). So different processes can have different sets of privileges. So, in unix, you su to root. This doesn't make you run in kernel mode. You are still running just like a normal user.
The only difference is, when you do a system call, the kernel grants you a special privelege to bypass normal security checks. This is wierdo special casing. Not nice. In a capability system, a token is passed along with any other arguments to a system call. This token proves to the kernel that you are allowed to do the call you asked for. No wierd special cases. No acl systems or even the concept of a "user" in the kernel. This can and is being implemented on x86. See eros -

OpenBSD is not the be all and end all... (Score:4)

Any program you run can do anything to every file you have write access to, and can also leak information by default to anyone on the internet. Not good. This means a very large trusted code base, which is a bad thing. The set of code which need to be trusted (ie the kernel and very few programs) should be as small as possible. There are some approaches to improving security. Capabilty models look like the best hope for the future. This comment is too small to hold a reasonable explanation - take a look at [eros-os.org]. Don't get me wrong, OpenBSD is a good firewall and general unix server platform, but its security model is limited by posix compliance.

unix badness (Score:4)

1) All programs you run are trusted with all files you have access to.

2) All programs are also given a default set of actions they can perform, eg open random connections to the internet. This is nice for leaking information. This can be amelorated via so called posix capabilities. These are more properly called privelege bits as in VMS.

3) Global filesystem. Everyone can see the filesystem. Chroot may help. Plan 9 style namespaces are better too. Better would be to take the human namespace out of the kernel and only give it to programs that need it.

probably lots of other things. Basically unix was designed when everything you ran on your computer was written by yourself of someone you knew and trusted. And then commercial unix just got featuritis. It would probably not be good to declare it the one true operating system.

Re:DARPA Involvement (Score:2)

-stax

Bummer. (Score:2)

Incidentally, if we want secure OS's, it's long past time to give up on UNIX. EROS is the way to go.. -jcr

Re:Sigh... (Score:1)

The former is a task easily done in parallel with little or no intra-personnel communication. The later is something which, as Brooks points out, requires more intra-personnel communication as more personnel is added, until the marginal gain in productivity turns into a loss. ...

Re:OpenBSD is not a Trusted System (Score:1)

You sound like a Windows user who just does not have a clue about what a real operating system should be like, and YES I do realize that you are little troll who just learned just what the Internet is!!!! One part of trusted is that a system _MUST NOT_ have any underlying bugs. Don't forget that and just go away.

Re:michael, dude... (Score:2)

They wanted a UNIX. They wanted TCP/IP. They happened to use Berkeley- that's quite different from generally "funding BSD development" Of

One little EROS detail... (Score:1)

For those of you suggesting that EROS may be the way for DARPA to go, you may be on to something. Note this statement on the EROS website [eros-os.org]: Guess that either means that DARPA's gonna funnel more money into EROS, or that EROS wasn't up to some standard, and they're looking for a replacement.

Re:Don't bother to submit as an independent. (Score:1)

The no part is that you don't have to have a months-long lead. In fact, the CHATS BAA came out only a few weeks ago. I could tell you the exact date if I weren't too lazy to check my mail logs. DARPA projects tend to be big, on the order of $500K per year. That means that they expect an effort that involves several people. It also means that they expect fully thought out stuff. (How do I know? I've participated in lots of DARPA submissions and research projects.
I was involved in two potential responses to CHATS, one of which we dropped because we didn't like our own idea. I withdrew from the other because of reasons mildly related to the issue under discussion, mainly that if you get more than $50K per year from DARPA, you have to file a lot of paperwork that my college isn't set up to produce.)

Re:OpenBSD is not a Trusted System (Score:2)

Security (Score:1)

Hold on, don't flame me yet. Open Source has the most vulnerable model available, yes. Anybody who knows how to code can put anything they want into the code. Exploits should be abundant, right? Of course, we all know that there are no exploits for Linux, and dozens of them for Windoze. But what does this mean? I believe that it means Linux has such a great backing in the community that people are watching over each other. But what happens when some malicious person decides to screw with the code? That's right. Disaster. It's on the horizon. Linux has only been around a few years, and it's long overdue for hackers to install some exploits. I admire DARPA for putting money into this boiling pot, and hope that they can defuse the problem before it gets out of hand. Because once Linux has been shown to be unstable and vulnerable (security wise) then Slashdot is no more...

------
That's just the way it is

NSA Linux (Score:2)

Re:unix badness (Score:2)

What planet are *YOU* from? (Score:2)

What basis do you make THIS claim? The 'byline' is "news for nerds, stuff that matters". Slashdot has a BSD section. What reasons do you have for thinking *THIS* site is the #1 advocacy site?

Re:Why DARPA is doing this (Score:2)

Dunno if they ever pursued the project further.

Re:Why DARPA is doing this (Score:2)

Absolutely true. I didn't mean to impugn the project managers at all. I actually reported directly to a project manager at the ISO, and he was astute at political infighting, but his overwhelming passion was the technology behind his project. No doubt about it - there are some very smart, very clearheaded people running projects at DARPA. I also agree with your analysis as to why they'd be delving into Open Source. Many of these program managers are military folks who came in through the military-industrial-govt merry-go-round, but many of them are also essentially hackers who pay attention to things like.. well.. Slashdot.

Why DARPA is doing this (Score:4)

DARPA is interested not in current technology, or even next-generation technology. Their mandate is to fund and evaluate what they call "high-risk, high-payoff" projects. They fully expect that most of their projects will fail to achieve their goals. However, they also realize that even those projects that fail will stimulate advances in other, sometimes unforseen areas. Of course, those projects that succeed become the wonder-technologies of tomorrow.

Another thing to keep in mind is that DARPA is a government agency, and as such has a mandate to diseminate their findings as far as possible within the federal government. I actually worked on a liason project with FEMA, where we were trying to help kick-start FEMA's web-based emergency-mitigation effort. The secondary effect of this mandate to spread the wealth is that it's key for an agency's survival that they be known as the originators of the wealth. That is, when DARPA comes up with something, they sure as hell make sure that every other agency knows it came from DARPA. That way when the budget axe comes along, DARPA isn't first on the chopping block. So DARPA's desire to fund this project probably has a lot more to do with going beyond what's already been done, and taking the credit for it, than it has to do with acknowledging what's already out there.

Re:OpenBSD is not a Trusted System (Score:1)

Then let Open BSD people sumit a proposal. (Score:2)

Re:DARPA - The government gets involved.
(Score:2)

OpenBSD is not a Trusted System (Score:5)

The DARPA program is called Composable High Assurance Trusted Systems (CHATS), which implies that they are interested in Trusted Systems [ncsc.mil], not systems that claim to be secure because a bunch of hackers allegedly have fixed all the buffer overflows. Being "secure" and being a trusted system are completely different things. Maybe micheal meant to mention TrustedBSD [boardwatch.com] which is attempting to become certified as a Trusted System?

Re:unix badness (Score:1)

Uh, actually none of these statements are true. Next time you actually use a *nix system please type "man chmod"

Re:unix badness (Score:1)

You should know then that UNIX systems allow you to change the read/write/execute permissions on any file on the system, and since everything is a file you can use this to control who can use what devices. You can also manipulate the user whose permissions an executable will use to run. Granted, systems often come with stupid default permissions, but that's hardly a reason to write a new OS.

Re:unix badness (Score:1)

I have never seen a program that needs root priveleges to run. There are many that default that way however. Take for example tcpdump: Typically it is run as root, but, this is only because it needs to be able to set the ethernet adapter to promiscuous mode (which by default can only be done by root). We can always change the permissions of eth0 to allow it to be put into promiscuous mode by another user if we want. The statement that there are only two levels of security is completly untrue. You can have as many levels of security as you have users and groups.

What about AtheOS? (Score:2)

Re:OpenBSD not ideal (Score:2)

remember who funded much of the BSD development (Score:1)

Remember that DARPA resources can promote development and improvements in operating systems. After all, that is in part how BSD came into existence in the first place! Much of the OpenBSD code came about as a direct or indirect result from DARPA efforts (via CSRG and friends). A fair amount of BSD code DARPA helped fund found its way into the GNU and Linux efforts as well. If DARPA wants to fund more research and development, let them!

Re:DARPA has been funding OS research for a long t (Score:1)

This is dead wrong. There is not, nor has there ever been, a conflict between public domain and open source. You are probably confusing it with the GNU (Lesser) Public License, which places the additional requirement of passing on source along with any binaries (or ensuring the availability and knowledge of the source).

uninformed: redefine userspace as app-space? (Score:2)

how much of a difference would it make to assign each executable its own "user" space - ie, executables have access to whatever the user has access to, so implement an interface framework to always run executables as their own user (unless directed otherwise by trusted real user)? this would seem to define another layer of security, with all the security checks already in place for users. next implement interface for users to run apps... could then a simple(?) tmp redirect to "user-app" space take care of the global tmp access problem as well? does any of this make sense?

Re:"Secure Linux"? (Score:1)

Regards, Tommy

DARPA Involvement (Score:2)

Re:Why DARPA is doing this (Score:2)

You've got some good (but cynical) points about the overall structure of the agency, but you've left out one major piece. The program managers themselves have a responsibility to find new and interesting projects in their expertise that fulfill this "high-risk, high-payoff" goal. The desire to take credit is quite possibly the motivation of the political appointees at the top of the agency, and the reason why the program was approved and given funds.
The proposal for the program itself probably came from some technically competent program manager who has intrest in and knowledge of open source, and a desire to see what defense applications can come out of it. I'm sure that there are some people in DARPA who are at least as interested in developing cool new technologies as covering their asses. In the document itself, they even say that the primary goal of the program is to achieve "Revolutionary advances in the state-of-the-art [...] improving the security functionality, services, and assurance of existing open source operating systems."

The question is whether the tens of millions of dollars that DARPA is going to spend will do as much good as the millions they spent trying to realize "distributed networking" did for what is now the internet. It probably won't, but it can't be a bad thing for the community, because it's not like they can buy open source and control the means of production of Free Software.

One other thing that might be motivating this study is the increased worrying in the Pentagon about information warfare. They look around and realize that they don't have a fraction of the best hackers. If it comes down to a real war where the existence of the US is threatened, what are they going to do? They can't draft them and expect them to work, and they probably don't have the resources (human or legal -- as a government agency, the DoD is somewhat limited in what they can pay people) to go on an all-out recruiting binge. So how do you use some of the talent that is out there? Maybe you can get some help from what the best are doing for themselves.

BMangneton
----------------
Care for a Spin?

Re:OpenBSD is not the be all and end all... (Score:2)

No, it's a problem with Unix. In Unix, root is god; he has complete control over the system. If root wants to read Joe Shmoe's files, bcc: all incoming and outgoing email to a computer in China, or rm -rf /, then that's what's going to happen. Any exploitable bug- not just buffer overruns but any other kind of problem like a tempfile that depends on user provided information- in a program that's running SUID will let an attacker turn himself into root (and then do anything he wants). This is a problem with the Unix security model, not with the processor architecture.

With a more sophisticated priviledge model- one that gave priviledged programs only enough power to do what they need to do- a broken program would only allow the user to do the same kinds of things that the broken program did. A broken mail program would only let a user do things relevant to moving mail, and not read all the files in /home/jshmoe/private. A broken PPP program would only let you do things about ppp, not rewrite /etc/shadow. There would still be a few programs (like login authentication) truly critical to system security, and a bad program could still cause problems, but the situation wouldn't be as critical.

Re:OpenBSD is not the be all and end all... (Score:3)

I'm not sure that I'd agree that capabilities are necessarily the best hope for the future. At the very least they have to overcome the obstacle that they require a substantial reorientation of people's views toward the way that operating systems behave. I'm not saying that we don't ultimately need to do so, just that it's a substantial obstacle.

The real problem with the Unix model is that it utterly fails to implement any real least priviledge system. Every program that needs any priviledges not available to an ordinary user gets full root priviledge, so that a single security crack in any SUID root program opens up the whole system. That's worse than just account level granularity. There's literally only two levels of operation, peon and god. It's a terrible security model, and only an outrageous level of code auditing has any hope of preserving anything like real world data security.
That people have been willing to go as far as they have in auditing the code is commendable (and, of course, any system can benefit from the level of auditing that OBSD has instituted) but it's not a reliable route to high grade security.

Re:unix badness (Score:1)

---

Re:Go away darpa (Score:1)

A new OS? (Score:1)

Personally, I think it would be rediculous for them to write their own OS, since Linux/BSD, while they have their flaws, are already pretty well suited to what they're trying to do... the only reason I can see them writing their own is if they don't want anyone to have the code.

Re:unix badness (Score:2)

"8 electronic copies"?! (Score:1)

Also notice the Microsoft character [fourmilab.ch] for apostrophe (looks like a question mark on my screen). Slashdot won't let me post that char literally (nice job), so I replaced it with a literal question mark.

Of course they aren't going to use BSD... (Score:1)

Re:Of course they aren't going to use BSD... (Score:1)

Re:Security (Score:2)

Sorry to diagree, but I don't think this guy deserves to be modded back up. He is apparantly one of these guys that thinks open source means a guy like myself can go change the official linux code, and no one will know. His post should be ignored and everyone should move along.

OpenBSD not ideal (Score:4)

Re:unix badness (Score:1)

Problem 1: Letting any user put the NIC into promisc mode isn't a security hazard?
Problem 2: This is just wrong. Read up on ACLs, Capabilities, Mandatory Access Control, Auditing. (From trustedbsd.org [trustedbsd.org]). Here is a good intro to capabilities [eros-os.org].

---
In a hundred-mile march,

Re:unix badness (Score:3)

This is not the point. You basically have two permissions on Unix systems--users and root. In order to get certain things done, programs often need root privileges, which means they can do *anything*. It also means you can't have an 'audit' user who can monitor the system reliably. A bad admin who is root can cover her tracks because root can do anything. (I don't think a tripwire-type solution will work here.) All the files for one user are the same permission-wise. That means you can't jail certain progs to protect things. Groups don't help too much with this, and don't scale well.

Bottom line--Unix has some great applications, especially with its network services. But it was *never* designed as a secure OS. Basically, some guys in a lab and some guys at universities built an OS to do things they wanted to do, working with other guys they trusted. Later some rudimentary security got added in, but this was not a basic element. Maybe, in fact, this is *why* Unix was/is popular--OS's with massive security models tend to suck to use because all that security has a usability tradeoff. Basically, you could get stuff done on Unix, and from time to time you'd figure out how to keep people from messing with the stuff you were working on after something bad happened.

---
In a hundred-mile march,

No short cuts (Score:1)

There's never too much research.

___

DARPA has been funding OS research for a long time (Score:1)

Ted Goranson, who has done research under Darpa grants, often laments that there has been amazing little development in operating systems since the 1970s. The file systems of most of todays operating systems remain primitive and little changed from 30 years ago. Much research has been done and it has lead to many good experimental technologies (file systems that work as databases, instead of being flat). However, these technologies are slow to be incorporated into commercial products, partly because those products labor under the need for backwards compatibility. Goranson remarked that some of the Darpa funded research on OSs was incorporated into the latest OS from Apple, but I'm not sure of the details.
Re:DARPA has been funding OS research for a long t (Score:1)

Obligatory Microsoft Slam (Score:2)

SUBTERFUGUE (Score:1)

(It runs under vanilla Linux 2.4 and a Debian package is available, but it is kind of slow and alpha.) --Mike

Sigh... (Score:1)

DoD has all the fun. (Score:2)

This'll definitely be the wave of the future, I can hear it now: "Hello ladies and gentlemen and welcome to CounterStrike 2002: Judgement Day. I'm Al Micheals along with my lovely co-host Killcreek, who knows a thing or two about pointy weapons, err, I mean "pointing" weapons at people. Tonight's matchup will be Iraq, headed by the "Multikill" master Saddam Hussein versus that tenacious Colt weilding mastermind George W. Bush, who currently leads the United States in terrorist headshots. It's gonna be a winner take all brawl of the century!"

Godlike killing spree's: The Linux Pimp [thelinuxpimp.com]

Re:A chance for a GUI OS come out of this? (Score:1)

Re:openbsd (Score:1)

Re:Go away darpa (Score:1)

Nice try clown. DARPA invented the internet (it used to be ARPAnet).

it's actually pretty sensible (Score:2)

Many people do research on reliability and repair costs before buying a new car and will be reluctant to buy a car from a company with no track record. Even VCs give money preferentially to people with track records (most of them won't even talk to you unless you have been referred--it isn't worth their time). If anything, DARPA seems a bit more open to new ideas and new people.

*BSD isn't research (Score:5)

Perhaps some of this research will be done on top of one of the BSD platforms. Perhaps it will be done on Linux. Perhaps some of it will be completely platform independent. But no matter what it will be done on, there are more interesting research questions to ask about open source, secure operating systems, and heterogeneous environments than whether we can fix a few more bugs in BSD or Linux.

Re:Security (Score:1)

I think that this is a bit of exaggeration.

Definatelly openbsd (Score:1)

According to Netcraft [netcraft.com] that site is running IIS [netcraft.com]

Re:What about AtheOS? (Score:2)

This is neat stuff, and he looks like he is really onto something. The real trick is going to be getting enough 'market saturation' so that drivers and apps are ported to this. Star Office and Mozilla, being OS, are givens. The real trick, far down the line, is getting Adobe to do ports for their 'industry standard' (*sigh*) software to AtheOS. They *almost* committed for BeOS.

A chance for a GUI OS come out of this? (Score:5)

Re:Security (Score:2)

Oh come on (Score:1)

WarGames (Score:1)

I'm sure on some level this is pretty obvious. However, I guess I've always considered script kiddies as pranksters rather than a threat to national security. Does this scare anyone else? -- "Sir, I'm scared."

we've known since '98 that DARPA is evil (Score:1)

You shouldn't be allowed to lord over boxes (Score:2)

Intial Premise: I write a firewall that requires you to specify which ports should be open initially and how often to rotate them. It also allows you block access of information, in-going and out-going, or IP's you don't specify. Then, I allow you to decide the level of access each net-accessing application and external IP may have to your system.*

Concept: This is all done Raymond style, i.e. open source. Any script-kiddie and his uncle can stare at the source. By your conception, allowing this makes my firewall weak.

Environment: Now, naturally, only a person with root priveleges can make alterations to the entailment of the firewall, unless otherwise specified, right? That's obviously yes if you have ever used any firewall worth it's weight in electrons. On top of that, we'll assume you were smart enough to download from MY site, not some third party site, which would put you at risk. You know that already, like most of us, and that's why you're at MY site.
Nothing mentioned so far is abnormal, or even sufficiently outside the realm of what's expected of a super user, i.e. the ability to think.

Paradox: The script-kiddie knows of some really stupid flaw that I didn't think of, oy, well, that happens***. He/She will assume you will initialize ICQ/ICU on its normal port****. Why do you do that? Same reason you wrote this post to begin with. Anywho, they create a portal string through ICQ/ICU. You're not tracking the IP movement because of the pre-mentioned reason. Ditto for why you don't cut&rotate for additional IP-links. Now, how's this script-kiddie going to affect the firewall? He doesn't have the localhost IP or root priveleges. You're thinking, "But he got inside, he can do stuff!" NO HE CAN'T!!! Where have you been!? He doesn't have root priveleges! He has NO user priveleges! THIS IS LINUX!**

Conclusion: Well written, open-source software is more than secure enough*, especially on the right system**. Even if the software has a flaw***, a capable user can take extra precautions to increase it's ability****.

Comment: Hack your own box, but, whatever happens to you will nolonger be my fault:P I will avoid saying, "Class dismissed," only because it's used ATLEAST once a week on Slashdot. Besides, I now have lots of time, because I'm on strike due to an anti-semetic comment in, I think, The Mandrake article. As long as that's up, I have all sorts of extra time to kvetch an jibber. Actually, I'm thinking about making "Dotslash: The Crossfire of the Geeks" text adventure...well, slashdot-facade, but that's all; it'll be like that old commodore 64 game "Portal" but less plot and more "Nonsense", see Jon's Humorix Toys at i-want-a-website.com/about-linux and yes, Jon likes dashes very much. Hmm, I guess I will now be intergrating Nonsense; feh, now Jon will want a copy before I release it. I hope this was informative to you "Open Source Isn't Secure" types.

In fact, just to mention about BSD for a moment: The reason why it seems constantly out of date is because it is constantly being tested for those "flaws" and insecurities. I compliment the effort, but it does cause the appearance of antiquation. Sure, their 3.0 compiler is more stable than your 4.0, but it lacks features and advancement. Their 4.6 firewall is more powerful than your 6.2, but it's not as customizable or as scalable. However, if you would consider OpenBSD, or any for that matter, you would have little in the ways of worries and only the occasional woe. And, every once in awhile...you can get an impressive application that makes us GNU-ists stop and say, "Woah!" ^_^ Now...about that anti-semetic AnonCow, could someone do something...NOW-ish?

Down the road... (Score:2)

Can you see it? Someday, all transactions on digital networks will require secure p2p operation such as this would provide. Meaning, that companis would only do business with you if they can be assured you won't take advantage of them. This would be a very marketable product in the future. Wouldn't the MPAA love it when all television sets in the future run this future OS? It would assure them that your TV is who it says it is, and would make sure those silly kids aren't trying to record a TV shows... God forbid.

DARPA - The government gets involved. (Score:2)

One reason that commercial companies are reluctant to use OSS is that they do not like to relinquish control to unknown elements. We all know the standard rebuttals to this point, but the military could be worse. The military and security agencies are incompatibvle in terms of ethos with the OSS atmosphere. Will they give outside developers, like Joe Bloggs from Birmingham, UK, or Pu Kong Yon from Bangkok, the same access to internal information and the same time of day as external developers? I fear, very much, that there could be difficult times ahead in this project. I am hedging my bets as to the outcome.
You know exactly what to do-
Your kiss, your fingers on my thigh-

There's still room for research (Score:3)

Maybe someone should just tell them about OpenBSD, save some time and money. Maybe someone shuld just tell Michael about EROS [eros-os.org], a GPL'd x86 capabilities OS currently under development. Read more on capabilities [eros-os.org] and why they're important to OS security. A capabilities system is relatively resistant to a lot of the big security issues that plague other types of systems. For example, even if buffer overruns do occur, the damage that can be done is very limited. This is a really cool project.

Re:If they could do this one... (Score:2)

Dare to dream... :-)

Re:Go away darpa (Score:3)

Too bad that DARPA INVENTED the Internet! Back when they were still ARPA (Advanced Research Projects Agency). Now they've become DARPA by throwing a Defense in front of the ARPA. So as Mr. T would say, "Cut that jibba-jabba, fool! Internet wuzn't no creation of the free-market!"

Don't beat up the good guys - and deadline's soon (Score:4)

DARPA is trying to advance what's already available - and advances in security would be great. I suspect they will be able to make advances, since they're planning to spend $10 million on the winning proposals. As has been noted, OpenBSD is not a perfect solution - its packages are often quite old and it has many functionality limits (e.g., no support for SMP). It also doesn't meet the principle of "least privilege" - root is still all-powerful, programs can do anything their owners can, etc.

The deadline is soon for those interested in submitting a proposal. The full proposal (all copies) must be submitted in time to reach DARPA by 4:00 PM (U.S. Eastern Time) Monday, March 5, 2001, in order to be considered; it CANNOT be sent by email or fax (they REQUIRE PHYSICAL COPIES). People interested in submitting a proposal should also read the Proposer Information Pamphlet (PIP) [darpa.mil], which isn't easy to find unless you know where it is.
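The ambient-authority-versus-capability distinction that recurs throughout the thread above (uid checks on every call versus "possession of a cap means you are allowed to do something") can be reduced to a toy model. This is purely illustrative JavaScript with made-up function names; it resembles no real kernel:

```javascript
// Ambient authority (Unix-style): every privileged operation
// re-checks the caller's identity, and uid 0 bypasses every check.
function unixWrite(processUid, fileOwnerUid, data, log) {
  if (processUid !== 0 && processUid !== fileOwnerUid) {
    throw new Error('EACCES')
  }
  log.push(data) // root ("god") reaches here no matter who owns the file
}

// Capability style: the check happened when the capability was handed
// out; the call itself performs no identity lookup at all.
function makeWriteCap(log) {
  return (data) => log.push(data) // holding this closure IS the permission
}
```

In the first model, a compromised program acts with the full power of whatever uid it runs under; in the second, it can only exercise the specific capabilities it was explicitly handed.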
https://slashdot.org/story/01/02/28/2148223/darpa-to-fund-open-source-security-research
Create a Next.js App with DatoCMS and Deploy It with Vercel

Deploy your Next.js and DatoCMS app with Vercel in a serverless environment.

This guide walks you through creating a blog using Next.js' Static Generation feature and DatoCMS through the DatoCMS GraphQL API, and deploying the project with Vercel.

For the frontend of this project, you will be using Next.js – the production-ready React framework. For the backend you will be using DatoCMS – a powerful headless CMS that allows you to rapidly create, manage, and distribute content. Two of the biggest selling points for DatoCMS are:

- a GraphQL API for powerful developer tools and complete control over the data your website downloads;
- out-of-the-box support for responsive, progressive, lazy-loaded images, thanks to the fact that its GraphQL API already exposes pre-computed low-quality image placeholders (LQIP, or blur-up placeholders), allowing webpages to get everything they need in a single request and avoiding content reflows.

Preview of the website, complete with blur-up image placeholders.

Follow this guide to create a starting point for you to get your own blog up and running.

Step 1: Create your content

First, create an account on DatoCMS. After creating an account, create a new project from the dashboard. You can select a Blank Project. Once created, enter the project.

Create an Author model

Click the Settings tab, and choose the Models option. Click on the plus icon, and create a new Model called Author. Next, add these fields (you don't have to modify the settings):

- Name: field of type Single-line string (under the Text group)
- Picture: field of type Single asset (under the Media group)

Follow these steps to create the Author model.

Create a Post model

Repeat the process to create a new model: this time call it Post.
Next, add these fields (you don't have to modify the settings unless specified):

- Title: field of type Single-line string (under the Text group)
- Content: field of type Multiple-paragraph text (under the Text group)
- Excerpt: field of type Single-line string (under the Text group)
- Cover image: field of type Single asset (under the Media group)
- Date: field of type Date (under the Date and time group)
- Author: field of type Single link (under the Links group). From the "Validations" tab, under "Accept only specified model", select Author
- Slug: field of type Slug (under the SEO group). From the "Validations" tab, under "Reference field", select Title

Create a Blog model

The last model needed is called Blog. Make sure to check the "Single instance?" option for this, as you only want to create a single record of this type that will hold the SEO information for the blog page. Next, add these fields (you don't have to modify the settings unless specified):

- SEO: field of type SEO meta tags (under the SEO group)

Populate content

From the Content menu at the top, select Author and then create a new record. In this case, you can use dummy data for the text and download an image from Unsplash.

Follow these steps to populate your content.

Then create a couple of Post records. You can write markdown for the Content field, and for the Author you can pick one of the authors you created earlier.

The last step is to fill in some SEO meta tags for your blog. Go to the Blog section, insert a title and a description, and click "Save".

That's it for creating content! In general, you can edit both Content and Models at any time, giving you complete flexibility over your content.

Next, create a set of API tokens to be used in your app; these will allow the DatoCMS client to request your posts from the DatoCMS API.

Step 2: Create a read-only API key

Click the Settings tab, and choose the API tokens option, then click on the plus icon to create a new API token.
Follow these steps to create a read-only API Token for DatoCMS GraphQL API. Make a note of the newly created access token, as it will be used later on. That's all the setup required for DatoCMS! Within only a few minutes you have managed to create a Content Model, add content, and generate an API token. Step 3: Create Your Next.js App To display your new blog and content, create a new Next.js application using create-next-app, then run the following command and follow the wizard: npm init next-app my-datocms-project Bootstrap a Next.js application from your command line. Enter inside the project directory, install the graphql-request and react-datocms package, and start the development server: cd my-datocms-project && yarn add graphql-request react-datocms && yarn dev Moving to your project, installing dependencies, and starting a local development server from the command line. You'll need to setup a GraphQL client pointing to the API of your DatoCMS project. Create a new lib directory for that, and inside of it create a file called datocms.js: import { GraphQLClient } from 'graphql-request' export function request({ query, variables, preview }) { const endpoint = preview ? `` : `` const client = new GraphQLClient(endpoint, { headers: { authorization: `Bearer ${process.env.DATOCMS_API_TOKEN}`, }, }) return client.request(query, variables) } Creating a function in lib/datocms.js to get data from DatoCMS. Then, you can set the environment variable inside a .env.local file: echo 'DATOCMS_API_TOKEN=YOUR-API-TOKEN' >> .env.local Creating an environment variable in a .env.local file for your API token. 
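The request helper above branches on a preview flag to choose its GraphQL endpoint, but the actual endpoint URLs are elided in the snippet. As a rough sketch of just that branch — the two URLs below are placeholders of my own, not DatoCMS's real endpoints:

```javascript
// Placeholder endpoints; substitute the real DatoCMS URLs from your project.
const MAIN_ENDPOINT = 'https://example-graphql-endpoint.invalid/';
const PREVIEW_ENDPOINT = 'https://example-graphql-endpoint.invalid/preview';

function pickEndpoint(preview) {
  // Draft content is only served from the preview endpoint;
  // any falsy value falls back to the published-content endpoint.
  return preview ? PREVIEW_ENDPOINT : MAIN_ENDPOINT;
}
```

Keeping this branch inside `lib/datocms.js` means page components only pass a boolean through and never hard-code an endpoint.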
Next, go to pages/index.js — that is, the component that renders the homepage of the project — and replace its contents the following code: import { request } from '../lib/datocms' import { Image, renderMetaTags } from 'react-datocms' import Head from 'next/head' const HOMEPAGE_QUERY = ` query HomePage($limit: IntType) { site: _site { favicon: faviconMetaTags { attributes content tag } } blog { seo: _seoMetaTags { attributes content tag } } allPosts(first: $limit) { id title excerpt date author { name } coverImage { responsiveImage(imgixParams: { fit: crop, w: 300, h: 300, auto: format }) { srcSet webpSrcSet sizes src width height aspectRatio alt title base64 } } } }` export async function getStaticProps() { const data = await request({ query: HOMEPAGE_QUERY, variables: { limit: 10 }, }) return { props: { data, }, } } export default function Home({ data }) { return ( <div> <Head>{renderMetaTags(data.blog.seo.concat(data.site.favicon))}</Head> {data.allPosts.map((blogPost) => ( <article key={blogPost.id}> <ImageFigure data={blogPost.coverImage.responsiveImage} /> <h6>{blogPost.title}</h6> </article> ))} </div> ) } An example Next.js index.js file for use with DatoCMS. What the index.js Achieves This index.js example requires the GraphQL client you created previously. Then, inside the getStaticProps function, a GraphQL request is performed, so that Next.js will pre-render this page at build time using the props returned by it. The responsiveImage part of the GraphQL query returns image attributes that will help you set up responsive images in your frontend without any additional manipulation. The page component passes this data to the <Image/> component of the react-datocms package to render a lazy-loaded image with a low-quality image placeholder. Similarly to what DatoCMS offers with responsive images, both the faviconMetaTags and _seoMetaTags parts of the query return pre-computed meta tags based on the content you insert inside DatoCMS. 
You can easily append such meta tags to the head of your page using Next.js's <Head/> component and the renderMetaTags helper, which is also part of the react-datocms package. Setup Next.js Preview Mode When using getStaticProps, the props will be generated at build time, which is great from a performance point of view, but not ideal when you’re writing a draft on DatoCMS. In this case, you want to preview the draft immediately on your page. For solving that problem, Next.js has the feature called Preview Mode. Create a preview API route This API route file can have any name - e.g. pages/api/preview.js, thought it must be within the pages/api directory. In this API route, you will call setPreviewData on the response object. The argument for setPreviewData should be an object, and this can be used by getStaticProps (more on this later). export default (req, res) => { res.setPreviewData({}) res.writeHead(307, { Location: '/' }) res.end() } The contents of a preview API route in Next.js. To test this, manually access the route from your browser by heading to. You’ll notice that you'll be redirected to the homepage with two cookies set: __prerender_bypass and __next_preview_data. Update res.setPreviewData. In this case, use the endpoint to access records at their latest version available, instead of only the currently published. Both endpoints offer exactly the same queries, the only thing that will change will be the returned content: export async function getStaticProps(context) { const data = await request({ query: HOMEPAGE_QUERY, variables: { limit: 10 }, preview: context.preview, }) return { props: { data }, } } A Next.js getStaticProps function requesting preview content as specified in lib/datocms.js. Step 4: Deploy For a complete look at what you will deploy, see the example GitHub repository. To deploy your Next.js + DatoCMS site with a Vercel for Git, make sure it has been pushed to a Git repository. 
During the import process, you will need to add the following environment variable: DATOCMS_API_TOKEN Next.js + DatoCMS site with a few clicks using the Deploy button, and create a Git repository for it in the process for automatic deployments for your updates.
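One piece the guide does not cover: Next.js also lets you leave Preview Mode by clearing the two preview cookies with `res.clearPreviewData()`. A minimal sketch of such an exit route follows — the route path and redirect target are my own choices, not from the guide:

```javascript
// Hypothetical pages/api/exit-preview.js (file name is an assumption).
// In a real Next.js API route this function would be the default export.
function exitPreview(req, res) {
  // Removes the __prerender_bypass and __next_preview_data cookies.
  res.clearPreviewData();
  // Send the visitor back to the homepage with a temporary redirect.
  res.writeHead(307, { Location: '/' });
  res.end();
}
```

Visiting the route then returns the site to serving only published, statically generated content.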
https://vercel.com/guides/deploying-next-datocms-with-vercel
Hi folks on the Internet! Today I am presenting you another problem! (Yay?)

I am using webpack with ts-loader to compile TypeScript code. However, when I import angular like this:

    import * as angular from "angular";
    angular.module("app", []);

the build output shows both index.js and angular.js from the angular package ending up in the bundle:

    [18:11:21] Starting 'build'...
    ts-loader: Using typescript@2.0.6 and C:\testProject\tsconfig.json
    [18:11:24] [webpack] Hash: 155db0dc394ae32ae9e6
    Version: webpack 1.13.2
    Time: 2845ms
       Asset     Size  Chunks             Chunk Names
      app.js  3.11 MB       0  [emitted]  main
    chunk    {0} app.js (main) 1.19 MB [rendered]
        [0] ./app/app.module.ts 2.26 kB {0} [built]
        [1] ./~/angular/index.js 48 bytes {0} [built]
        [2] ./~/angular/angular.js 1.19 MB {0} [built]

webpack.config.js:

    entry: "./app/app.module.ts",
    output: {
        publicPath: "/lib/",
        path: path.join(__dirname, "lib"),
        filename: "app.js"
    },
    // source map
    devtool: "#inline-source-map",
    module: {
        loaders: [
            {
                test: /\.ts$/,
                // Exclude node modules
                exclude: [/node_modules/],
                loader: 'ts-loader'
            },
            {
                test: /\.html$/,
                // Exclude node modules
                exclude: [/node_modules/],
                loader: 'raw-loader'
            }
        ]
    }

Answer:

I think you misunderstand how webpack works. All modules are executed once, no matter how many times you require them. For example, if you do:

    var angular = require('angular');
    var anotherAngular = require('angular');

the angular script will only really execute once, and the result is "cached" for all subsequent calls to require.

In your case what you are seeing is perfectly normal. When you load angular from an npm package, the npm package uses the index.js, which looks like:

    require('./angular');
    module.exports = angular;

It is common in npm packages to have a minimal index.js that just re-exports another script. When you are loading with webpack, webpack will load index.js, which will in turn load angular.js and return the result. This shouldn't cause you any problems, and nothing is really getting loaded twice.
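The "executed once, then cached" behavior the answer describes can be sketched with a toy module registry. This is only an illustration of the caching idea, not webpack's actual implementation:

```javascript
// Toy require cache: a module's factory runs once; later calls reuse the result.
let executions = 0;

const cache = {};
const factories = {
  angular: () => {
    executions += 1; // count how many times the module body actually runs
    return { module: () => ({}) };
  },
};

function toyRequire(id) {
  if (!(id in cache)) {
    cache[id] = factories[id](); // module body executes only on first require
  }
  return cache[id]; // every later require returns the cached export
}
```

Requiring `angular` twice hands back the same object, and the factory body runs exactly once — which is why seeing both `index.js` and `angular.js` listed as built modules does not mean angular executes twice.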
https://codedump.io/share/RmO00Ket5Ytv/1/importing-angular-in-typescript-using-webpack-load
Guide for making a custom starter template... Guide for making a custom starter template... Is there any guides on starter templates? I've started toying around with the sencha command and using my own starter template. And I wanted to know if there is some hidden features we can use with sencha command to make our templates more flexible to what we need. Thanks, Ron The starter templates use the x-generate Ant command under-the-hood, so in your template you should be able to use any of the x-generate features, which you can read about here. The main feature of course, is that files with a ".tpl" extension are rendered using the XTemplate engine. You can then use any of the parameters supplied by Sencha Cmd inside your template files. I'm not using the latest version of Sencha Cmd, but here's the available parameters for Sencha Cmd 3.1.2: Code: <param name="name" value="${args.name}"/> <param name="appName" value="${args.name}"/> <param name="library" value="all"/> <!-- These are needed for the theme template--> <param name="themeName" value="${args.themeName}"/> <param name="controllerName" value="${args.controllerName}"/> <param name="controllerFileName" value="${args.controllerName}"/> <param name="viewName" value="${args.viewName}"/> <param name="viewFileName" value="${args.viewName}"/> <param name="frameworkName" value="${framework.name}"/> <param name="frameworkPath" value="${framework.path}"/> <param name="packagesRelPath" value="${packages.extract.path}"/> <param name="senchadir" value="${senchadir}"/> <param name="uniqueId" value="${app.id}"/> <!-- placeholders for mvc structures --> <param name="appModels" value=""/> <param name="appViews" value=""/> <param name="appControllers" value=""/> <param name="appStores" value=""/> <param name="controllerNamespace" value="${args.name}.controller"/> <param name="modelNamespace" value="${args.name}.model"/> <param name="viewNamespace" value="${args.name}.view"/> @burnnat: Thanks for the reply but I'm using Sencha 
Command 4.0.1 and started making my own template folder with needed skeleton files. I'm using Sencha Command 4.0.1 with Ext JS 4.2.2. I'd like to make a template for our other developers at my company with a mix of our architecture code and Ext JS 4.2.2 that follows the sencha generate app structure for app building purposes. I couldn't find much about them other than what I've been toying around with. Thanks, Ron Everything I've said should apply to Sencha Cmd 4.0 as well as 3.1. I just checked against a copy of Sencha Cmd 4.0 and the list of available template parameters is the same as what I've listed above. Assuming you're using something like sencha generate app --starter path/to/template to perform the generation, you should be able to use any or all of these parameters in your template. Hi, Wondering if you got anywhere with this? Am wondering how the files are structured under the starter path, I'm guessing it's an app.js.tpl at least? It is possible to alter the templates for controllers, views, models etc, perhaps by adding some sub-directories containing them? Looking at those variables, I still think I'm going to need some manual updating, since I want to add a "namespaces" value to my generate application. Cheers, Westy We have been able to customize packages so it auto creates our template for us. We aren't using app's because of how much views we have we are sharing common files and not doing the app build all in one js file. Inside the Sencha CMD install directory you will see templates which has the app's {senchadir} folder that tells sencha command to change that folder name to .sencha. If you want to make a custom app look inside the plugins folder in the sencha cmd install directory. We are using Ext JS so the folder is ext. You can setup any template changes you want there. 
For packages everything needed is under the templates folder the package folder will let you do custom lines in the sencha.cfg and the starter-package folder is the skeleton for any package we have customized a lot in here for our needs. Just make sure if you add any files you follow the naming scheme. {pkgName}-app.js.tpl.merge will let the parser know to change any variables with { } around them to the value they are set to. Let me know if you have any other questions. Ahh, I see. I was looking for templates under Ext somewhere, but they're in Cmd. That makes sense I suppose. Thanks for the pointers, I'll have a poke around.
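As a rough illustration of how a starter-template file might look — this is a hypothetical sketch, not an actual file shipped with Sencha Cmd — a file saved with a `.tpl` extension is run through the XTemplate engine at generate time, so it can reference the parameters listed earlier (`appName`, `frameworkName`, and so on) in braces:

```
/* app.js.tpl — hypothetical starter-template file.
   ".tpl" files are rendered with XTemplate, so {appName} etc.
   are substituted when `sencha generate app` runs. */
Ext.application({
    name: '{appName}',
    // {frameworkName} would expand to e.g. "ext" or "touch":
    // framework: '{frameworkName}',
    launch: function () {
        console.log('Generated by Sencha Cmd for {appName}');
    }
});
```

Dropping a file like this into your template directory lets every developer on the team start from the same skeleton while the parameters keep each generated app's names correct.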
http://www.sencha.com/forum/showthread.php?279447-Guide-for-making-a-custom-starter-template..
malte Wrote:
Hmmm, I don't really like this idea. It will be a lot of work and I don't think that it will be of interest for many users. I would have to look into the zip file during import, store the information about which file contains which games, and unzip it before launching the game. I will think about it again if there is an easier way to implement it, but I don't think this will happen. There are some emulators that handle zipped games (zsnes for example) but I think it will only work with one game per archive.

darknior Wrote:
Hi everybody. I write here to ask if this fantastic XBMC script can be used on Xbox with the ResX Xtras? Because on Xbox there are lots of emulators working fine ...... And all the Xtras with XMV videos are almost finished. Please respond.

wimpy Wrote:
Malte will need to answer that XBOX question.

jpschouten Wrote:
when i try to open a rom with your script i get a black screen then a cursor XBMC login:.....

    import xbmc

    os.system(cmd)

    #this minimizes xbmc some apps seems to need it
    xbmc.executehttpapi("Action(199)")
    os.system(cmd)
    #this brings xbmc back
    xbmc.executehttpapi("Action(199)")

    <emulatorCmd>uae {-%I% "%ROM%"}</emulatorCmd>

jpschouten Wrote:
The last comment and tip that you made seems to do the trick for zsnes. It minimizes xbmc and maximizes zsnes. Exiting zsnes maximizes xbmc again. I had a few crashes in RCB and when this happens xbmc starts minimized when i reboot my asrock.

Paybac Wrote:
I've been having a play with trying to get this to work on the xbox. My problem is this part here:

    <emulatorCmd>uae {-%I% "%ROM%"}</emulatorCmd>

And what i need to change it to to launch a default.xbe:

    xbmc.executebuiltin("XBMC.Runxbe(%s)" %cmd)
    <emulatorCmd>full path to default.xbe</emulatorCmd>
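The `%ROM%` placeholder in `emulatorCmd` is what gets swapped for the selected file before the command is handed to `os.system`. A rough, standalone sketch of that substitution step — the function name and the exact template are my own, and the real script may handle more placeholders such as `%I%`:

```python
def build_emulator_cmd(template, rom_path):
    """Replace the %ROM% placeholder with the chosen ROM's path.

    The templates in this thread already wrap %ROM% in double quotes,
    so paths with spaces survive the shell without extra quoting here.
    """
    return template.replace("%ROM%", rom_path)

# Assumed template for illustration; real emulatorCmd values differ per emulator.
cmd = build_emulator_cmd('uae "%ROM%"', "/games/turrican.adf")
```

Once built, `cmd` is what would be passed to `os.system`, optionally bracketed by the minimize/restore calls discussed above.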
http://forum.xbmc.org/showthread.php?tid=70115&page=4
Jan 03, 2009 05:24 PM|dwebster84|LINK Question has been resolved. reference here to incomplete link mentioned previous is deleted. -Danny Jan 03, 2009 06:40 PM|dwebster84|LINK OK, here's another link -- possibly better, and I'll be working to make a full example from it: (this reference was very helpful--I combined it with previously written program to make the complete example provided below). Some code used comes from: -Danny Jan 03, 2009 07:22 PM|dwebster84|LINK <%@ Page <html xmlns=""> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <p>Please upload your picture: <asp:FileUpload <asp:Button<br /> </p> <asp:Image<br /> </form> </body> </html> Jan 03, 2009 07:23 PM|dwebster84|LINK using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Drawing; //Bitmap needed this using System.Drawing.Imaging; //ImageFormat needed this using System.IO; // .delete needed this public partial class Color2BW : System.Web.UI.Page { public static Bitmap Grayscale(Bitmap bitmap) { //Declare myBitmap as a new Bitmap with the same Width & Height Bitmap myBitmap = new Bitmap(bitmap.Width, bitmap.Height); for (int i = 0; i < bitmap.Width; i++) { for (int x = 0; x < bitmap.Height; x++) { //Get the Pixel Color BitmapColor = bitmap.GetPixel(i, x); //I want to come back here at some point and understand, then change, the constants //Declare grayScale as the Grayscale Pixel int grayScale = (int)((BitmapColor.R * 0.3) + (BitmapColor.G * 0.59) + (BitmapColor.B * 0.11)); //Declare myColor as a Grayscale Color Color myColor = Color.FromArgb(grayScale, grayScale, grayScale); //Set the Grayscale Pixel myBitmap.SetPixel(i, x, myColor); } } return myBitmap; } protected void AddPictureButton_Click(object sender, EventArgs e) { if (PictureUploadControl.HasFile) { PictureUploadControl.SaveAs(Server.MapPath("~/images/ORIGINAL") + PictureUploadControl.FileName); Bitmap 
oldBitmap = new Bitmap(Server.MapPath("~/images/ORIGINAL") + PictureUploadControl.FileName, false); Bitmap newBitmap = Grayscale(new Bitmap(oldBitmap)); string name = "grayscale"; newBitmap.Save(Server.MapPath("~/images/") + name + ".jpg", ImageFormat.Jpeg); oldBitmap.Dispose(); //we will delete the old File.Delete(Server.MapPath("~/images/ORIGINAL") + PictureUploadControl.FileName); GrayscaledPhoto.ImageUrl = "images/" + name + ".jpg"; } } } Jan 03, 2009 07:26 PM|dwebster84|LINK (I have two codes now, Color2BandW.aspx, and Color2BW which has improvements). I look forward to hearing back from you. I'm also thinking about trying some other stuff with this--also I need to do some digging on what happened in public static Grayscale(). -Danny (wow--this thing cycles through the picture and does work on every pixel -- Prabu, I've had it in the back of my mind for years that I wanted to do that. Jan 03, 2009 07:35 PM|dwebster84|LINK I played around with the code and found out you can't go into the public static Bitmap Grayscale(Bitmap bitmap) and add things like sending text to an div (.InnerHtml) or a textbox (.Text). However, you can copy the iteration loop to another place and loop through a bit and do things like printing out BitmapColor.R, BitmapColor.G, BitmapColor.B. I'm thinking of doing some other experimentation. -Danny Jan 05, 2009 06:05 AM|dwebster84|LINK This thread is answered. After working several hours, a complete solution was prepared. The person who wrote the question has not replied and is considered MIA. I hope nothing bad has happened to him. If anyone else comes along, and has any questions, please feel free to ask. I enjoy helping people and this was an interesting question. 
-Danny Feb 05, 2009 05:12 AM|NiravVyas|LINK HI Danny I am really amazed at your dedication.I am streching my hair from last three days for a purpose which now seems to be very difficult.While Googling I came across this thread.Here is my problem:I allow user to upload a image of his choice, then I resize it to a 45 X 45,Then make a copy of that in transparent(I used ur code for that).Till here all thing were going fine. Now I want it to cut from lower diagonal(The upper diagonale should get transparent).Then After I need to convert it to cursor file.Isn't it difficult. What do you have to say..?? Feb 05, 2009 07:12 AM|dwebster84|LINK I will need to look up the exact answer but a friend of mine who used my code to help someone else said something about this code may look like it is only working in RGB (red, green, blue) but if you look closer it is working with A too. I think the trick is that if you only feed it three numbers then it assumes (in our case, correctly) that it was given R, G and B. So we just feed it four variables. You modify the code to decide what to put in for A. A is alpha and I forget if it is 0 is completely transparent or if 255 is completely transparent--it is one or the other. Now when you are (I can't decide whether to say rastering or enumerating) looping through the dots if you use 4 variables and put in either 255 or 0 for the A, the resultant picture has transparency. This can be proven by showing the picture on a background with a small pattern. I'm going on what my friend told me and my memory of the phone call is several weeks old. Let me know if you find the above to be puzzling and I'll ask him to cough up the entire example. 
Feb 06, 2009 02:47 AM|NiravVyas|LINK HI Danny Thanks for reply.But I think I need to explain my problem in more detail.I dont have to deal with trasnsparency,as I had already got that.Only question is "How can I cut Upper diagonal from Image and lower diagonal should remain in its original form" Actually I need to have six diferent format of image from one which user uploads.Transparent was one of them, this called as half image is another and .cur(cursor file) is again another.So please go through my question again have try for it. Feb 06, 2009 06:11 AM|dwebster84|LINK Hmmm... I can't visualize what you mean by cut an upper diagonal from an image. Can you explain this operation further? Feb 06, 2009 07:36 AM|NiravVyas|LINK Sure Consider two right angled triangle.Join both of them so as to make one square/rectangle. I just want the one triangle you joined to get parted from the final image.Same is my case.Here it is a square(image user uploads) and I need to dug the above right angled triangle.from that sqaure. Hope you got my point if not provide me you e-mail I would mail you the images.Thanks for reply Feb 06, 2009 09:58 PM|dwebster84|LINK Oh!!!! I can sort of see the math (Murphy's Law, I explain how to cut off a diagonal one way but you need the other) Assume you have a picture that is 10 wide and 100 high. You want to cut the first 9 from the first row, the first 8 from the next row, and so on. As we enumerate across rows and down columns of the image, as long as the value of the row is less than the width of the image (in our example that would mean we were in rows 1 through 9) we would need to do cutting. The row number is part of the calculation and if the column index < or = (imagewidth - row number) then the pixel gets made transparent. 
Thus:
the top row 1: 10 - 1 = 9, the first 9 pixels go transparent
next row 2: 10 - 2 = 8, the first 8 pixels go transparent
next row 3: 10 - 3 = 7, the first 7 pixels go transparent

I will need to go back and see if the indices that enumerate width and height start at 0 or 1, so this could throw off my math by one, but I think you will still see the idea.

To cut the other diagonal, make transparent if column index > row number:
top row: pixels 2 through 10 go transparent
next row: pixels 3 through 10 go transparent

Feb 08, 2009 11:46 PM|NiravVyas|LINK
Hi Danny, thanks a lot for your reply. It helped a lot; in fact I almost got my solution. At present I am able to get rid of the upper diagonal by making it transparent using your logic. But now the only problem is I want that upper diagonal to be "opaque", i.e. it should be such that if I place it on some other control with some background image, the part behind the upper diagonal should be visible. I am giving a try to this; let's see if I can figure it out before your reply. Once again thanks for your help.

Feb 09, 2009 06:29 PM|dwebster84|LINK
Oh, you want to "flip flop" it. OK, once again, I think (but am not sure) that 255 for A is opaque and 0 is transparent. If I have this backwards you can switch the two and fix the problem in less than 90 seconds. Can you configure your code to automatically put in 0 for A for everything, except that if something passes the test for a diagonal, A gets a 255? I'm thinking you can take it from there, but I also realize and respect that this sort of thing is abstract. So, if you want me to show you an example, post the code that you have right now, and I'll do surgery. I have a very sharp scalpel.
;-) Best regards, -Danny Feb 09, 2009 11:24 PM|NiravVyas|LINK HI Danny.I tried it yesterday but it didnt work.Below is my code, Public Shared Function Grayscale1(ByVal bitmap As Bitmap) As Bitmap 'Declare myBitmap as a new Bitmap with the same Width & Height Dim myBitmap As New Bitmap(bitmap.Width, bitmap.Height) For i As Integer = 0 To bitmap.Width - 1 For x As Integer = 0 To bitmap.Height - 1 'Get the Pixel If (x >= bitmap.Height - 1 - i) Then Dim BitmapColor As Color = bitmap.GetPixel(i, x) 'I want to come back here at some point and understand, then change, the constants 'Declare grayScale as the Grayscale Pixel Dim grayScale As Integer = CInt(((BitmapColor.R * 0.3) + (BitmapColor.G * 0.59) + (BitmapColor.B * 0.11))) 'Declare myColor as a Grayscale Color Dim myColor As Color = Color.FromArgb(grayScale, grayScale, grayScale) 'Set the Grayscale Pixel myBitmap.SetPixel(i, x, BitmapColor) Else Dim myColor As Color = Color.FromArgb(255, 255, 255, 255)'(r,g,b,a) myBitmap.SetPixel(i, x, myColor) End If Return myBitmap End Function Protected Sub AddPictureButton_Click(ByVal sender As Object, ByVal e As EventArgs) If PictureUploadControl.HasFile Then PictureUploadControl.SaveAs(Server.MapPath("~/images/original") + PictureUploadControl.FileName) Dim oldBitmap As New Bitmap(Server.MapPath("~/images/original") + PictureUploadControl.FileName, False) Dim newBitmap As Bitmap = Grayscale1(New Bitmap(oldBitmap)) Dim name As String = "grayscale" newBitmap.Save(Server.MapPath("~/images/") + name & ".gif", ImageFormat.Gif) newBitmap.Save(Server.MapPath("~/images/") + name & ".png", ImageFormat.Png) oldBitmap.Dispose() 'we will delete the old File.Delete(Server.MapPath("~/images/original") + PictureUploadControl.FileName) GrayscaledPhoto.ImageUrl = "images/" & name & ".png" End If End Sub ''Note that In my case all images would be of 45 X 45 17 replies Last post Feb 09, 2009 11:24 PM by NiravVyas
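The pixel logic being debated in the thread — luminance from 0.3·R + 0.59·G + 0.11·B, and the `x >= height - 1 - i` test for the lower diagonal — is language-neutral. Here is a small Python sketch of the same math on plain integers; it illustrates only the formulas, not the System.Drawing code above. Note that in GDI+'s `Color.FromArgb`, alpha 255 is fully opaque and 0 is fully transparent, which is the convention assumed here:

```python
def grayscale_value(r, g, b):
    # Same weighted luminance used in the thread's Grayscale method.
    return int(r * 0.3 + g * 0.59 + b * 0.11)

def diagonal_alpha(i, x, width, height):
    """Alpha for column i, row x: keep the lower diagonal opaque (255),
    make the upper diagonal fully transparent (0).

    Mirrors the VB test `If (x >= bitmap.Height - 1 - i)`.
    """
    return 255 if x >= height - 1 - i else 0
```

In the VB code above, the fix the thread is circling toward is to use this alpha in the `Else` branch (`Color.FromArgb(0, ...)`) instead of painting an opaque white pixel.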
https://forums.asp.net/t/1367168.aspx?Change+Color+Image+to+Balck+and+White
On Tue, Mar 13, 2012 at 9:10 AM, Vivek Goyal <vgoyal@redhat.com> wrote:
> On Mon, Mar 12, 2012 at 04:04:16PM -0700, Tejun Heo wrote:
>> On Mon, Mar 12, 2012 at 11:44:01PM +0100,
>>
>> Yeah, the great pain of full hierarchy support is one of the reasons
>> why I keep thinking about supporting mapping to flat hierarchy. Full
>> hierarchy could be too painful and not useful enough for some
>> controllers. Then again, cpu and memcg already have it and according
>> to Vivek blkcg also had a proposed implementation, so maybe it's okay.
>> Let's see.
>
> Implementing hierarchy is a pain and is expensive at run time. Supporting
> flat structure will provide path for smooth transition.
>
> We had some RFC patches for blkcg hierarchy and that made things even more
> complicated and we might not gain much. So why to complicate the code
> until and unless we have a good use case.

how about ditching the idea of an FS altogether?

the `mkdir` creates-and-nests has always felt awkward to me. maybe
instead we flatten everything out, and bind to the process tree, but
enable a tag-like system to "mark" processes, and attach meaning to
them. akin to marking+processing packets (netfilter), or maybe like
sysfs tags(?).

maybe a trivial example, but bear with me here ...
other controllers are bound to a `name` controller ...

# my pid?
$ echo $$
123

# what controllers are available for this process?
$ cat /proc/self/tags/TYPE

# create a new `name` base controller
$ touch /proc/self/tags/admin

# create a new `name` base controller
$ touch /proc/self/tags/users

# begin tracking cpu shares at some default level
$ touch /proc/self/tags/admin.cpuacct.cpu.shares

# explicit assign `admin` 150 shares
$ echo 150 > /proc/self/tags/admin.cpuacct.cpu.shares

# explicit assign `users` 50 shares
$ echo 50 > /proc/self/tags/users.cpuacct.cpu.shares

# tag will propagate to children
$ echo 1 > /proc/self/tags/admin.cpuacct.cpu.PERSISTENT

# `name`'s priority relative to sibling `name` groups (like shares)
$ echo 100 > /proc/self/tags/admin.cpuacct.cpu.PRIORITY

[... system ...]

# what controllers are available system-wide?
$ cat /sys/fs/cgroup/TYPE
cpuacct = monitor resources
memory = monitor memory
blkio = io stuffs
[...]

# what knobs are available?
$ cat /sys/fs/cgroup/cpuacct.TYPE
shares = relative assignment of resources
stat = some stats
[...]

# how many total shares requested (system)
$ cat /sys/fs/cgroup/cpuacct.cpu.shares
200

# how many total shares requested (admin)
$ cat /sys/fs/cgroup/admin.cpuacct.cpu.shares
150

# how many total shares requested (users)
$ cat /sys/fs/cgroup/users.cpuacct.cpu.shares
50

# *all* processes
$ cat /sys/fs/cgroup/TASKS
1
123
[...]

# which processes have `admin` tag?
$ cat /sys/fs/cgroup/cpuacct/admin.TASKS
123

# which processes have `users` tag?
$ cat /sys/fs/cgroup/cpuacct/users.TASKS
123

# link to pid
$ readlink -f /sys/fs/cgroup/cpuacct/users.TASKS.123
/proc/123

# which user owns `users` tag?
$ cat /sys/fs/cgroup/cpuacct/users.UID
1000

# default mode for `user` controls?
$ cat /sys/fs/cgroup/users.MODE
0664

# default mode for `user` cpuacct controls?
$ cat /sys/fs/cgroup/users.cpuacct.MODE
0600

# mask some controllers to `users` tag?
$ echo -e "cpuacct\nmemory" > /sys/fs/cgroup/users.MASK

# ... did the above work? (look at last call to TYPE above)
$ cat /sys/fs/cgroup/users.TYPE
blkio
[...]

# assign a whitelist instead
$ echo -e "cpu\nmemory" > /sys/fs/cgroup/users.TYPE

# mask some knobs to `users` tag
$ echo -e "shares" > /sys/fs/cgroup/users.cpuacct.MASK

# ... did the above work?
$ cat /sys/fs/cgroup/users.cpuacct.TYPE
stat = some stats
[...]

... in this way there is still a sort of hierarchy, but each
controller is free to choose:

) if there is any meaning to multiple `names` per process
) ... or if only one should be allowed
) how to combine laterally
) how to combine descendants
) ... maybe even assignable strategies!
) controller semantics independent of other controllers

when a new pid namespace is created, the `tags` dir is "cleared out"
and that person can assign new values (or maybe a directory is created
in `tags`?). the effective value is the union of both, and identical
to whatever the process would have had *without* a namespace (no
difference, on visibility).

thus, cgroupfs becomes a simple mount that has aggregate stats and
system-wide settings.

recap:
) bound to process hierarchy
) ... but control space is flat
) does not force every controller to use same paradigm (eg, "you must
behave like a directory tree")
) ... but orthogonal multiplexing of a controller is possible if the
controller allows it
) allows same permission-based ACL
) easy to see all controls affecting a process or `name` group with a
simple `ls -l`
) additional possibilities that didn't exist with directory/arbitrary
mounts paradigm

does this make sense? makes much more to me at least, and i think it
allows greater flexibility with less complexity (if my experience with
FUSE is any indication) ...

... or is this the same wolf in sheep's skin?

-- C Anthony
http://lkml.org/lkml/2012/3/13/419
Submitted To: Er. Gunjan Oberoi (Deptt. Of CSE)
Submitted By: Nitish Kamal, D3 CSE (A2), 95065/90370305200

INDEX

S.no   Name of Practical
01     To Execute Various Queries Using Commands Of Sql
02     To Use A Loop In Sql To Find Out The Area Of A Circle When Radius Is Given
03     Introduction To Views
04     Program To Create And Drop An Index
05     Introduction To Packages
06     Introduction To Triggers
07     Write A Program To Find Salary Grade Using Cursor
09(a)  Create Or Replace Procedure My_Proc As
09(b)  Create Table Mytable (Num_Col Number, Char_Col Varchar2(60));

PRACTICAL NO. 1
TO EXECUTE VARIOUS QUERIES USING COMMANDS OF SQL

(1) To create an employee table with the fields: employee name, department name, salary.

You must create your tables before you can enter data into them. Use the Create Table command.

Syntax:
Create table tablename using filename
(fieldname fieldtype(length),
 fieldname fieldtype(length),
 fieldname fieldtype(length));
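Following that syntax, a concrete employee table for this practical could be created like below. The column names and sizes are illustrative choices of mine, not taken from the practical's screenshots:

```sql
-- Illustrative only: one possible employee table for Practical 1
CREATE TABLE employees (
    emp_name   VARCHAR2(30),
    dept_name  VARCHAR2(20),
    salary     NUMBER(8, 2)
);
```

Each field gets a type and a length, exactly as the generic syntax above requires.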
'10/10/2005') When using SQL INSERT INTO you might not want to enter values for all columns and in this case you have to specify the list of columns you are entering values for. 20. Here is how you can insert a new row into the Weather table. 20. AverageTemperature. with a slightly modified SQL INSERT INTO syntax: INSERT INTO Weather VALUES ('Los Angeles'. The SQL INSERT INTO clause facilitates the process of inserting data into a SQL table.(4) To insert the values in employee table. '10/10/2005') You can produce the same result. If you do not enter values 7 . using SQL INSERT INTO: INSERT INTO Weather (City. Date) VALUES ('Los Angeles'. The following SQL INSERT example enters only 2 of the 3 columns in the Weather table: INSERT INTO Weather (City.for all columns. we can also insert a bulk of data into d tables by specifying the columns of the table and creating an input interface like the following: 8 . Instead of using insert into command again and again. '10/10/2005') (5) To insert various records into employee table without repeating insert command. Date) VALUES ('Boston'. then the columns you have omitted must allow NULL values or at least have a default value defined. And the data values inserted there will be inserted into the table as below: (6) To update the records into employees table where employee name is “neha” 9 . … (7) To increase the salary of all the employees by Rs. 10 . Column2 = Value2.12000. The SQL UPDATE clause basic syntax looks like this: UPDATE Table1 SET Column1 = Value1.The SQL UPDATE clause serves to update data in database table. Hence the querry that will use will execute will raise the salary of all the employees in equal amount. the amount is raised by Rs.The salary of the employees in tha table can be increased by adding the equal amount to the salary column of each employee. 11 . 12000 and the corresponding changes are shown by again showing all the contents of the table. (8) To delete a particular record from employees table. 
Here in this particular querry. 12 .The SQL DELETE clause is used to delete data from a database table. The simplest SQL DELETE syntax looks like this: DELETE FROM Table1 The SQL DELETE statement above will delete all data from the Table1 table. Most of the time we will want to delete only table rows satisfying certain search criteria defined in the SQL WHERE clause. 13 .(9) To delete all the records from employee table. that means the query will delete all the data from the table. Hence this statement will delete all the data contained in the table. we can do that. it will not be deleted. But the structure f the table will remain stored . In the case we want to again enter the data values in the same table. but the structure will still remain preserved in the database. The delete command will delete the columns in the table. Here we use delete employees. as we can use the DROP TABLE command. Fortunately. it would be problematic if we cannot do so because this could create a maintenance nightmare for the DBA's.(10) To drop the employee table. The syntax for DROP TABLE is DROP TABLE "table_name" So. In fact. if we wanted to drop the table called customer that we created in the CREATE TABLE section. we simply type DROP TABLE customer. 14 . Sometimes we may decide that we need to get rid of a table in the database for some reason. SQL allows us to do it. After we have dropped the table. the query will give an error saying that the table doesn’t exist. so if we will perform any transaction on deleted table. the data as well as the structure of the table is deleted.(11) To view all the records of employee table.. 15 . After the command returns. 16 . Changes that have not been moved to the table are not committed. whether they were made through Oracle OLAP or through another form of access (such as SQL) to the database. The COMMIT command only affects changes in workspaces that you have attached in read/write access mode. The COMMIT command executes a SQL COMMIT command. 
UPDATE moves changes from a temporary work area to the database table in which the workspace is stored. then you must first update the workspace using the UPDATE command.(12) To make changes permanent to the database. all committed changes are visible to other users who subsequently attach the workspace. When you want changes that you have made in a workspace to be committed when you execute the COMMIT command. All changes made in your database session are committed. 17 .(13) To undo the uncommitted change in the database. The ROLLBACK aborts the current transaction. (14) To give access rights to a user on a particular database. Syntax ROLLBACK [WORK] Description ROLLBACK rolls back the current transaction and causes all the updates made by the transaction to be discarded. WORK has no special sense and is supported for compatibility with SQL standards only. 18 . Here's the syntax of the statement: GRANT <permissions> [ON <table>] TO <user/role> [WITH GRANT OPTION] (15) To take rights from a user on a table. it's time to begin strengthening security by adding permissions. We'll accomplish this through the use of the SQL GRANT statement. Our first step will be to grant appropriate database permissions to our users.Once we've added users to our database. it often proves necessary to revoke them at a later date. Here's the syntax: REVOKE [GRANT OPTION FOR] <permissions> ON <table> FROM <user/role> (16) To add not null constraint on the salary column of the table. Fortunately. SQL provides us with the REVOKE command to remove previously granted permissions. 19 .Once we've granted permissions. The NOT NULL constraint enforces a column to NOT accept NULL values. a table column can hold NULL values. or update a record without adding a value to this field. 20 .By default. This means that you cannot insert a new record. The NOT NULL constraint enforces a field to always contain a value. 2 21 . 
PRACTICAL NO.Now whenever we try to leave the coloumn where not null constraint is applied. put_line(‘loop exited as the value of I has reached ‘ || to_char(i)). end.TO USE A LOOP IN SQL Declare i number :=0 . Begin Loop i:=i+2. exit when i>10. Output:- 22 . end loop. dbms_output. 3 23 .PRACTICAL NO. 14. begin radius :=3. radius number(5).2) := 3. end.2). end loop.TO FIND OUT THE AREA OF A CIRCLE WHEN RADIUS IS GIVEN create table areas(radius number(5). area number(14. Output: 24 .2)).2). declare pi constant number(4. area number(14.area). while radius<=7 loop area:=pi*power(radius. radius:= radius+1. insert into areas values(radius. PRACTICAL NO 4 25 . INTRODUCTION TO VIEWS VIEW:.acct_fd_no.’sb432’.A view is a virtual table which provides access to a subset of columns from one or more tables. Create a view: create view v_nominees as select nominee_no. The dynamic result of one or more relational operations operating on base relations to produce new relation. It is a query stored as an object in the database which does not have its own data.’ram’). Inserting values in a view: • insert into v_nominees values (‘n100’. 26 . A view is a list of columns or a series of records retrieved from one or more existing tables or as a combination of one or more views and one or more tables.name from from nominee_mstr. 27 .Displaying view: Updating values in view: • update v_nominees set name=’vaishali’ where name=’sharan’. deleting values from view:• delete from v_nominees where name=’vaishali’. 28 .Dropping view:drop view v_nominees. 29 . <column name2>). example: create index idxtransAcctno ON trans_mstr (trans_no. 30 . composite index: create index <index name> ON <table name> (<column name1>. example: create index idxveri Empno ON acct_mstr (veri_emp_no).PRACTICAL NO 5 PROGRAM TO CREATE AND DROP AN INDEX theory: there are two types of indexes: simple index and composite index simple index: create index <index name> ON <table name> (<column name>). acctNo). 
select <index name> from <cursor indexes> unique index: create unique index <index name> ON <table name> (<column name >. Function based index: Create index <index name> ON <table name> (<function> (<column name>)). <column name>). Example: create index idx_name ON cust_mstr (upper(fname)). Alter index <index name> rebuild no reverse. Dropping an index: Drop index<index name>. alter index: example: if reverse index re built into normal index. reverse key index: create index <index name> ON <table name> (<column name>) reverse. 31 . PRACTICAL NO 6 INTRODUCTION TO PACKAGES 32 . sal number. job varchar. sal. Name out varchar. Dno number) is Begin Insert into emp (empno. If SQL%Found Then Return (‘y’). amount number) Return number is N number. End retrieve. sal into name. deptno) values (Eno. Dno). End if. Procedure retrieve (Eno in number. 33 . End insert_oper.Create or replace package body operation as procedure insert_oper (Eno number. name varchar. sal out number) is Begin Select ename. ename. Begin Update emp set sal=sal + amount where deptno = Dno. name. Function update_oper( dno number. Else Return(‘n’). N:= SQL%RowCount. End update_oper. Return(N). Function Delete_oper (Eno number) return char is Begin Delete emp where empno=Eno. job. job. sal from emp where empno=Eno. sal. Output:- PRACTICAL NO 7 INTRODUCTION TO TRIGGERS 34 . End operation.End delete_oper. losal number(10).’salary’||to_char(:new.Ename). job_classification number(10)). End. Create table salgrade ( grade number(5). hisal number(10).job. If (:new.Program to explain the working of a trigger.job||’for employee’||:new. Maxsal number(10). job ON emp99 for each row Declare Minsal number(10).sal > maxsal) then Raise salary_out_of_range. Salary_out_of_range exception. Exception When salary_out_of_range then Raise_application_error(-20300. When no_data_found then Raise _application_error(-20322. Begin Select losal. maxsal from salgrade where job_classification =:new. 35 . hisal into minsal. 
Create or replace trigger salary_check Before insert or update of sal. Endif.’invalid job classification’||:new.sal)||’out of range for’||’job classification’||:new.sal < minsal or :new.job_classification). OUTPUT: PRACTICAL NO 8 Write a program to find salary grade using cursor Declare 36 . Esal emp. If Esal > 2500 and esal < 3500 then grade=”C”. Loop exit when csal%not found. esal.sal%type. Begin Epen csal. If Esal > 3500 and esal < 4500 then grade=”B”. sal from emp.Cursor csal is Select empno. 37 . Grade varchar(2). endif dbms_output. endif If Esal > 4500 and esal < 5500 then grade=”A”.empno% type. endif If Esal > 5500 then grade=”A+”. Fetch csal into eno.put_line(eno + ’has’ + grade + ’grade’). endif. Eno emp. OUTPUT:- PRACTICAL NO 9(a) CREATE OR REPLACE PROCEDURE MY_PROC AS begin dbms_output.put_line(‘hello world’). end. 38 .endloop. OUTPUT PRACTICAL NO 9 (b) CREATE TABLE MYTABLE (NUM_COL NUMBER. begin my_proc. create or replace procedure insertIntoTemp as v_num 1 number:=1. CHAR_COL VARCHAR2(60)). v_string1 varchar2(50):= ‘hello world!’. begin 39 .end my_proc. v_outputstr varchar2(50). end. OUTPUT:- 40 .put_line(v_output str). select char_col into v_output str from mytable where num_col=v_num1. end insertIntoTemp.insert into mytable(num_col. char_col) values (v_num1. v_string1). end. begin insertIntoTemp. dbms_output. 41 .
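The create/insert/update/delete sequence of Practical 1 can also be exercised end to end from Python's standard-library sqlite3 module. This is an illustration alongside the Oracle/MySQL syntax above, not part of the practical file itself; the table and column names below are simplified stand-ins:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Practical 1(1): create the employee table
cur.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")

# (4)/(5): insert records, without repeating the insert command
cur.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                [("neha", "sales", 20000), ("ram", "hr", 25000)])

# (7): raise every salary by Rs. 12000
cur.execute("UPDATE employees SET salary = salary + 12000")

# (8): delete a particular record
cur.execute("DELETE FROM employees WHERE name = 'ram'")

# (11): view all the records
print(cur.execute("SELECT * FROM employees").fetchall())
# [('neha', 'sales', 32000)]

# (12): make changes permanent
con.commit()
```

The same life cycle applies: the UPDATE touches every row because it has no WHERE clause, while the DELETE removes only the row matching its condition.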
https://www.scribd.com/doc/85187889/Reference-Practicle-file-for-RDBMS-II
Raymond

-------- how we currently localize method access with name mangling ------

class A:
    def __m(self):
        print 'A.__m'
    def am(self):
        self.__m()

class B(A):
    def __m(self):
        print 'B.__m'
    def bm(self):
        self.__m()

m = B()
m.am()   # prints 'A.__m'
m.bm()   # prints 'B.__m'

-------- how I would like to localize method access with a decorator ------

class A:
    @localmethod
    def m(self):
        print 'A.m'
    def am(self):
        self.m()

class B(A):
    @localmethod
    def m(self):
        print 'B.m'
    def bm(self):
        self.m()

m = B()
m.am()   # prints 'A.m'
m.bm()   # prints 'B.m'

---------------------

P.S. Here's a link to the descriptor how-to:

I had some time this lunchtime and I needed to calm my nerves so I took up your challenge :) Here is my poor effort. I'm sure lots of things are wrong with it but I'm not sure I'll look at it again.

from types import MethodType, FunctionType

# The suggested localmethod decorator
class localmethod(object):
    def __init__(self, f):
        self.f = f
        self.defclass = None
        self.nextmethod = None
    def __get__(self, obj, objtype=None):
        callobj = obj or objtype
        if callobj.callerclass == self.defclass:
            return MethodType(self.f, obj, objtype)
        elif self.nextmethod:
            return self.nextmethod.__get__(obj, objtype)
        else:
            raise AttributeError

class BoundMethod(object):
    def __init__(self, meth, callobj, callerclass):
        self.meth = meth
        self.callobj = callobj
        self.callerclass = callerclass
    def __call__(self, *args, **kwargs):
        callobj = self.callobj
        try:
            callobj.callerclass = self.callerclass
            return self.meth(*args, **kwargs)
        finally:
            callobj.callerclass = None

# A 'normal' method decorator is needed as well
class method(object):
    def __init__(self, f):
        self.f = f
        self.defclass = None
    def __get__(self, obj, objtype=None):
        callobj = obj or objtype
        return BoundMethod(MethodType(self.f, obj, objtype), callobj, self.defclass)

class Type(type):
    def __new__(self, name, bases, attrs):
        for attr, val in attrs.items():
            if type(val) == FunctionType:
                attrs[attr] = method(val)
        return type.__new__(self, name, bases, attrs)
    def __init__(self, name, bases, attrs):
        for attr, val in attrs.iteritems():
            if type(val) == localmethod:
                val.defclass = self
                for base in self.mro()[1:]:
                    if attr in base.__dict__:
                        nextmethod = base.__dict__[attr]
                        val.nextmethod = nextmethod
                        break
            elif type(val) == method:
                val.defclass = self

class Object(object):
    __metaclass__ = Type
    # Note: any class or object has to have a callerclass attribute for
    # this to work.  That makes it thread-incompatible I guess.
    callerclass = None

# Here is your example code
class A(Object):
    @localmethod
    def m(self):
        print 'A.m'
    def am(self):
        self.m()

class B(A):
    @localmethod
    def m(self):
        print 'B.m'
    def bm(self):
        self.m()

# Untested beyond this particular example!

Arnaud

Well in fact I couldn't help but try to improve it a bit. Objects now don't need a callerclass attribute; instead all necessary info is stored in a global __callerclass__. Bits that didn't work now do. So here is my final attempt, promised. The awkward bits are:

* how to find out where a method is called from
* how to resume method resolution once it has been established a local method has to be bypassed, as I don't know how to interfere directly with the mro

Feedback of any form is welcome (though I prefer when it's polite :)

--------------------

from types import MethodType, FunctionType

class IdDict(object):
    def __init__(self):
        self.objects = {}
    def __getitem__(self, obj):
        return self.objects.get(id(obj), None)
    def __setitem__(self, obj, callerclass):
        self.objects[id(obj)] = callerclass
    def __delitem__(self, obj):
        del self.objects[id(obj)]

# This stores the information about from what class an object is calling a
# method.  It is decoupled from the object, better than previous version.
# Maybe that makes it easier to use with threads?
__callerclass__ = IdDict()

# The purpose of this class is to update __callerclass__ just before and
# after a method is called
class BoundMethod(object):
    def __init__(self, meth, callobj, callerclass):
        self.values = meth, callobj, callerclass
    def __call__(self, *args, **kwargs):
        meth, callobj, callerclass = self.values
        if callobj is None and args:
            callobj = args[0]
        try:
            __callerclass__[callobj] = callerclass
            return meth(*args, **kwargs)
        finally:
            del __callerclass__[callobj]

# A 'normal' method decorator is needed as well
class method(object):
    def __init__(self, f):
        self.f = f
    def __get__(self, obj, objtype=None):
        return BoundMethod(MethodType(self.f, obj, objtype), obj, self.defclass)

class LocalMethodError(AttributeError):
    pass

# The suggested localmethod decorator
class localmethod(method):
    def __get__(self, obj, objtype=None):
        callobj = obj or objtype
        defclass = self.defclass
        if __callerclass__[callobj] is defclass:
            return MethodType(self.f, obj, objtype)
        else:
            # The caller method is from a different class, so look for the
            # next candidate.  Skip all classes up to the localmethod's class.
            mro = iter((obj and type(obj) or objtype).mro())
            for c in mro:
                if c == defclass:
                    break
            name = self.name
            for base in mro:
                if name in base.__dict__:
                    try:
                        return base.__dict__[name].__get__(obj, objtype)
                    except LocalMethodError:
                        continue
            raise LocalMethodError, "localmethod '%s' is not accessible outside object '%s'" % (self.name, self.defclass.__name__)

class Type(type):
    def __new__(self, name, bases, attrs):
        # decorate all function attributes with 'method'
        for attr, val in attrs.items():
            if type(val) == FunctionType:
                attrs[attr] = method(val)
        return type.__new__(self, name, bases, attrs)
    def __init__(self, name, bases, attrs):
        for attr, val in attrs.iteritems():
            # Inform methods of what class they are created in
            if isinstance(val, method):
                val.defclass = self
            # Inform localmethods of their name (in case they have to be bypassed)
            if isinstance(val, localmethod):
                val.name = attr

class Object(object):
    __metaclass__ = Type

# Here is your example code
class A(Object):
    @localmethod
    def m(self):
        print 'A.m'
    def am(self):
        self.m()

class B(A):
    @localmethod
    def m(self):
        print 'B.m'
    def bm(self):
        self.m()

# Added:
m = B()
B.am(m)  # prints 'A.m'
B.bm(m)  # prints 'B.m'
m.m()    # LocalMethodError (which descends from AttributeError)

# Untested beyond this particular example!

--------------------

Arnaud

What would the semantics be if m is decorated as local only in A or only in B?

George

OK, that wasn't really thought through. Because I changed the design mid-way through writing it, __callerclass__ wasn't doing the right thing. I've sorted the issues I could see and made it (hopefully) thread-safe. I'm not going to pollute this list again with my code so I've put it at the following address:

The problem is that all normal functions need to be decorated with '@function' for it to work completely: if I understand correctly, the snippet below should raise an exception. It only does so if 'f' is decorated with '@function' as below.

@function
def f(x):
    x.l()

class C(Object):
    @localmethod
    def l(self):
        print "Shouldn't get here"
    def v(self):
        return f(self)

C().v()  # Raises LocalMethod exception

----------

PS: in fact I'm not sure it's a good idea to decorate local methods: what about local attributes which are not methods? They have to be treated differently, as only functions can be decorated. What about functions / classes which are local to a module?

Arnaud

The goal is to as closely as possible emulate the semantics of under-under name mangling.

Raymond

I fooled around with this a bit, and even when using different techniques than Arnaud (namely stack inspection and/or class decorators) it ends up looking the same. In order to pull this off you need to

1) mark the localmethods as special (@localmethod works here)
2) mark all non-localmethods as not special (metaclasses, class decorators, or module-level stack inspection)
3) keep track of the call stack (can be implemented as part of #1 and #2 but adds overhead regardless)

Double underscore names take care of #1 at compile time, and by definition anything not name-mangled falls into the non-special class of #2. After that the special/non-special calls are given different names, so the native call semantics take care of the call stack for you.

With some bytecode manipulation it should be possible to fake the same name mangling by using just @localmethod decorators and adding some overhead to always checking the current class (eg, A.m() for the function am()) and falling back on doing the normal call of self.m(). This could be made more exact by doing inspection after the fact with any of metaclasses/class decorators/module inspection, because then we could inspect what is a @localmethod and what isn't all the way through the class tree.

I could be wrong on the byte-hack part as I've only recently learned to read the chicken bones that are byte codes (thanks to PyCon sprints I got to spend time picking python-dev's brains over burritos). It seems plausible, if fragile.

-Jack

So you are asking for a serious hack, right? As soon as I saw your challenge I thought "That's difficult. Very difficult. No way I can solve that with a simple descriptor/decorator. I need more POWER. Time to look at the byteplay module." The byteplay module by Noam Raphael (svn/trunk/byteplay.py) seems to exist just to make possible spectacular hacks, so I took your challenge as an opportunity to explore a bit of its secrets. In doing so, I have also decided to break the rules and not to solve your problem, but a different one, which is the one I am more interested in ;)

Basically, I am happy with the double underscores, but I am unhappy with the fact that a method does not know the class where it is defined, so that you have to repeat the name of the class in ``super``. With some bytecode + metaclass hackery it is possible to make the methods smart enough to recognize the class where they are defined, so your example could be solved as follows:

from currentclass import CurrentClassEnabled

class A(CurrentClassEnabled):
    def m(self):
        print 'A.m'
    def am(self):
        CurrentClass.m(self)  # the same as A.m(self)

class B(A):
    def m(self):
        print 'B.m'
    def bm(self):
        CurrentClass.m(self)  # the same as B.m(self)
    def superm(self):
        super(CurrentClass, self).m()  # the same as super(B, self).m()

m = B()
m.superm()  # prints 'A.m'

As you see, as a byproduct, double underscores are not needed anymore, since methods are called directly from the CurrentClass. The approach also works for ordinary attributes which are not methods.

The code to enable recognition of CurrentClass is short enough to be included here, but I will qualify it as five-star-level hackery:

$ cat currentclass.py
# requires byteplay by Noam Raphael
# see
from types import FunctionType
from byteplay import Code, LOAD_GLOBAL, STORE_FAST, LOAD_FAST

def addlocalvar(f, locname, globname):
    if locname not in f.func_code.co_names:
        return f  # do nothing
    c = Code.from_code(f.func_code)
    c.code[1:1] = [(LOAD_GLOBAL, globname), (STORE_FAST, locname)]
    for i, (opcode, value) in enumerate(c.code[2:]):
        if opcode == LOAD_GLOBAL and value == locname:
            c.code[i+2] = (LOAD_FAST, locname)
    f.func_code = c.to_code()
    return f

class _CurrentClassEnabled(type):
    def __new__(mcl, name, bases, dic):
        for n, v in dic.iteritems():
            if isinstance(v, FunctionType):
                dic[n] = addlocalvar(v, 'CurrentClass', name)
        return super(_CurrentClassEnabled, mcl).__new__(mcl, name, bases, dic)

class CurrentClassEnabled:
    __metaclass__ = _CurrentClassEnabled

Enjoy!

Michele Simionato

> The code to enable recognition of CurrentClass is short enough to be
> included here, but I will qualify it as five-star-level hackery

You forgot the standard disclaimer: "This is extremely dangerous stuff, only highly trained professionals can do that! Kids, never try this at home!"

Gabriel Genellina ;)

Yep, the only good use case for this kind of game is for prototyping in current Python features that are candidates for inclusion in future versions of Python, but this was what Raymond asked. However, there is also the fun of it ;)

Michele Simionato
https://groups.google.com/g/comp.lang.python/c/pgEMdJSHG7E?hl=en
Dear fellow monks,

recently, perhaps due to some slight change in my programming, I have frequently found myself debugging mysterious bugs where, after three or four careful readings of the source code, there are seemingly none. Most of these have boiled down to misunderstanding precedence rules. Example:

sub done {
    my $self = shift;
    return not $self->foo and not $self->bar;  # gotcha!
}

Due to the low precedence of and, the Perl compiler parses this as

(return not $self->foo) and not $self->bar;

which will always return not $self->foo and never evaluate not $self->bar, let alone perform the and. Using the alternative logical operators fixes everything:

sub done {
    my $self = shift;
    return !$self->foo && !$self->bar;
}

Now, the expression is parsed as return ((!$self->foo) && (!$self->bar)), which is what I mean.

Further surprises of a similar kind include the following line:

my $bool = 1 and 0;  # $bool == 1 and $_ == 0; may not be what you expect!

What is your way of avoiding mistakes such as this? Do you avoid using and et al. unless inside parentheses or in if-else? Some other method of discipline? Peer review? Linting? Agile methods?

-- say "Just Another Perl Hacker";

In reply to Burned by precedence rules by w-ber
http://www.perlmonks.org/index.pl?parent=732286;node_id=3333
The official blog of the Microsoft SharePoint Product Group

A couple of days ago, I reviewed how our test teams have been doing to extend out the configurations we validate as we move closer to the next beta. Beyond per-feature validation, the teams collaborate to produce a "config of the week" that combines several different topology elements into a deployment that's then the basis for both focused and ad-hoc validation that week. Each week the team rolls a new config, with the previous week's config left up for a while so the developers can debug, test fixes, etc.

In the earlier phases of the project, the testing was very directed: essentially try one thing and make sure it works, ADFS support or SQL auth for instance. Then it's a case of "Lather, rinse, repeat" for existing and new config options. Once the stuff works in isolation, we move on to simple combinations, then more complex ones, etc.

Last week's "config of the week" was an extranet deployment with fully qualified domain names, SSL, simultaneous vanilla LDAP & Windows auth (on separate URL namespaces, both hosting the same base content), inter-farm federation (between two medium farms), a mix of 32-bit and 64-bit builds, all running a non-English language. This week they switched to an English build and added multiple shared service providers (our new way of hosting different search, user profile, and other services for different portals all on the same farm), so one portal on the farm got its user profile info from an LDAP-based directory while another synchronized with the Active Directory. Finally, they drastically dialed back the privileges for all the service accounts to the lowest privilege level.

Now as a program manager on the team, I am of course supremely confident in our engineering methodology and dev/test prowess. But even I would have given even odds that the first time all this stuff was turned on at once, it would have taken a couple days to get pages to render reliably.
(On my more cynical days, I might have predicted needing a haz-mat crew to wipe down what was left of the lab.) We did find a bunch of bugs, but all the pages on all the portals in both farms rendered in both auth contexts. They'd never admit it, but I think the test guys were a little disappointed. Obviously, there are still miles to go on config scale-up/scale-out, but I like the progress.

--Jonathan Kauffman, Group Program Manager, SharePoint Portal Server

>>> Any tests in French? ;+)
EROL

Yes, we are testing right now in French. And a host of other languages as well. We do most of our testing on pseudo-localized builds to make sure we handle general globalization cases. Pseudo-localization means we create a "language" made out of exotic analogues of boring western characters (e.g. à -> å, c -> ç, n -> ñ), add a bunch of extra characters to all strings, etc. This way, the UI is readable by everyone on the test team but it breaks in all the ways that actual localized builds break. In parallel, we're translating and localizing in real languages (like French) and producing builds. We build out both full localized builds as well as language packs that allow a single SharePoint deployment to support multiple language sites simultaneously.

I would love some information on how to deploy SharePoint Portal Server 2007 into a non-Active Directory environment. The configuration wizard is giving me an error.
http://blogs.msdn.com/sharepoint/archive/2006/01/27/518483.aspx
Hello,

I'm fairly new to MSSQL and I need help designing a stored procedure. What I want to do is create a stored procedure with three parameters (startDateTime, endDateTime, and interval) that generates a count for the number of rows in my table. Here is some sample data in the date_time column that I want to use for grouping:

2010-11-02 07:56:34.083
2010-11-02 07:56:34.617
2010-11-02 08:21:05.567
2010-11-02 08:21:05.997
2010-11-02 08:33:04.080

Here is what I want the stored procedure to return (when the interval param is set to 10, this would be in minutes):

dateTime                   cnt
2010-11-02 07:56:34.083    123
2010-11-02 08:06:34.083    37
2010-11-02 08:16:34.083    145

Is this something that can be done in a stored procedure, or does my application need to do the grouping? Here is my table definition:

CREATE TABLE [dbo].[wsaxis2_rlog](
  [id] [int] IDENTITY(1,1) NOT NULL,
  [date_time] [datetime] NULL,
  [thread]

?

ALTER PROCEDURE
  @CompanyID
  ,@DepartmentID
  ,@ApplicationID
  ,@UserID
  ,[UserID]
  ,[QueryName]
  ,[CreateDate]
  ,[UpdateDate]
  ,[ExpirationDate]
  ,[LastAccessedDate]
WHERE [UserID] = @UserID

Can anyone please tell me how to call the stored procedure in SQL Server?

Hi friends, I am very much a newbie. I have created a stored procedure and a student database, and also an ASP.NET application, but the ASP.NET page could not find the stored procedure. What is the mistake? Actually I don't know; please help me.
Warm Regards, Durga

Hi; When I try to call CallableStatement.setString(name, value) it throws an exception. Is there something I have to do to either change the parameter names or set them up? I use the full "@Beginning_Date" as the name.
thanks - dave

Hey guys, I am creating a buddy list and I need to select my entire buddy list and then the ones that are online. I have two queries; one is your standard FROM Buddies, the other is a bit more complicated:

SELECT Buddy as Online FROM Buddies
WHERE EXISTS (SELECT Buddy FROM online WHERE online.Buddy = Buddies.Buddy)

I can't use the union command since I create another column. What I would like is to append the column to my original result so I can access it from some VB script using the SqlClient.DataReader.
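Whatever engine ends up doing the work, the grouping rule the first poster describes is just flooring each timestamp onto an interval grid anchored at startDateTime (in T-SQL this is typically expressed with DATEDIFF and DATEADD). A small Python sketch of the bucketing logic, with made-up sample data rather than the poster's real table, shows the idea:

```python
from datetime import datetime, timedelta
from collections import Counter

def bucket(ts, start, minutes):
    # Floor ts onto the interval grid anchored at `start`.
    k = int((ts - start).total_seconds() // (minutes * 60))
    return start + timedelta(minutes=k * minutes)

samples = [
    datetime(2010, 11, 2, 7, 56, 34),
    datetime(2010, 11, 2, 7, 56, 34),
    datetime(2010, 11, 2, 8, 21, 5),
    datetime(2010, 11, 2, 8, 21, 5),
    datetime(2010, 11, 2, 8, 33, 4),
]
start = datetime(2010, 11, 2, 7, 56, 34)

# Count rows per 10-minute bucket, mirroring the desired dateTime/cnt output.
counts = Counter(bucket(t, start, 10) for t in samples)
for when, cnt in sorted(counts.items()):
    print(when, cnt)
```

Each output row corresponds to one interval starting point, matching the bucket labels in the poster's desired result (07:56:34, 08:06:34, 08:16:34, ...).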
http://www.dotnetspark.com/links/41814-mssql-2005-jdbc-driver--multiple-select.aspx
Hi everyone.. This code was from a book and I was trying to compile it with VC++ Express. And this is the error I get. I probably didn't include necessary files. I am new at C++ and I am still learning my way to libraries and such.

Code:
#include <cstdio>
#include <cstdlib>
#include <iostream>
using namespace std;

int main (int nNumberofArgs, char * pszArgs[]);
{
    /ln;
    //wait until user is ready before terminating program to allow the
    //user to see the program results
    system ("PAUSE");
    return 0;
}

Thanks...

Code:
'temp.exe': Loaded 'C:\Documents and Settings\Tangog\My Documents\Visual Studio 2005\Projects\temp\debug\temp.exe', Binary was not built with debug information.
'temp.exe': Loaded 'C:\WINDOWS\system32\ntdll.dll', No symbols loaded.
'temp.exe': Loaded 'C:\WINDOWS\system32\kernel32.dll', No symbols loaded.
The program '[1556] temp.exe: Native' has exited with code 0 (0x0).
https://cboard.cprogramming.com/cplusplus-programming/94023-program-wont-run-vcplusplus-express.html
In my recent quest to quick cram as much Flex knowledge possible into my head, I thought I’d take a look at injecting Flash or pure actionscript content into Flex. For those who’ve been using Flex, this will be old hat. For those such as myself, though, who are just making their way into Flex from a Flash development point of view, this may just save a bit of frustration and aggravation. Probably the easiest way to add a little Flash to a Flex project is to load in a compiled .swf file using the SWFLoader component. Let’s begin with a basic rotating cube made with Actionscript and Papervision3D. I’ll add this to a package named com.onebyonedesign.cube which will later be in the Flex project’s classpath, but for a moment, I’ll just be using good ol’ Flash. CubeExample.as Now, let’s open up Flash and compile a 400×400 pixel .swf containing the CubeExample using the following document class: Now, fire up a new Flex project however you normally do so (personally I use the FlashDevelop Actionscript editor and the free Flex SDK). Make sure the .swf you just generated above (we’ll call it “cube.swf”) is in your output directory (usually named “bin”) and create the following Main.mxml file: Now, after building that file you should wind up with a nice rotating red cube in a black panel. All good and well, but what if you want to simply use Actionscript to do the work for you rather than precompiling a .swf file in Flash. It’s a little trickier but not much. In order to use the addChild() method of Flex components it’s necessary for your display object to either extend the mx.core.UIComponent object or implement the IUIComponent interface. Since it’s much easier to extend than implement (at least in this case), we’ll go that route. 
First create a new package com.onebyonedesign.components and add to that package a slightly modified version of our original document class:

As you may have noticed, it's essentially the same as the document class we used to compile our cube in Flash, but now extends the mx.core.UIComponent class rather than the flash.display.Sprite class. And, believe it or not, that is all we need. Now we can add our CubeComponent instance using a teensy bit of ActionScript which we'll call once the application fires its applicationComplete event. Here's the new .mxml file:

And, now that our CubeComponent is a bonafide Flex component, we can take this one step further (or at least one step over). If we add a new namespace to our application tag, we can actually add our cube component in .mxml like so:

Regardless of which one of the above methods you use, the final product will end up looking something like this.
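The CubeComponent change described — swapping Sprite for UIComponent — reduces to something like this (a sketch; the Papervision3D cube setup itself is elided, since the original listing is not reproduced here):

```actionscript
package com.onebyonedesign.components {
    import mx.core.UIComponent;

    // Sketch: identical cube logic to the Flash version, but extending
    // UIComponent so Flex's addChild() (and MXML) will accept it.
    public class CubeComponent extends UIComponent {
        public function CubeComponent() {
            super();
            // ... same Papervision3D cube setup as before ...
        }
    }
}
```

With `xmlns:` declared on the application tag, the component can then be written declaratively as `<components:CubeComponent/>`.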
Session State

Session State is a way to share variables between reruns, for each user session. In addition to the ability to store and persist state, Streamlit also exposes the ability to manipulate state using Callbacks.

Check out this Session State basics tutorial video by Streamlit Developer Advocate Dr. Marisa Smith to get started:

Initialize values in Session State

The Session State API follows a field-based API, which is very similar to Python dictionaries:

```python
# Initialization
if 'key' not in st.session_state:
    st.session_state['key'] = 'value'

# Session State also supports attribute based syntax
if 'key' not in st.session_state:
    st.session_state.key = 'value'
```

Reads and updates

Read the value of an item in Session State and display it by passing to st.write:

```python
# Read
st.write(st.session_state.key)

# Outputs: value
```

Update an item in Session State by assigning it a value:

```python
st.session_state.key = 'value2'     # Attribute API
st.session_state['key'] = 'value2'  # Dictionary like API
```

Curious about what is in Session State? Use st.write or magic:

```python
st.write(st.session_state)

# With magic:
st.session_state
```

Streamlit throws a handy exception if an uninitialized variable is accessed:

```python
st.write(st.session_state['value'])  # Throws an exception!
```

Delete items

Delete items in Session State using the syntax to delete items in any Python dictionary:

```python
# Delete a single key-value pair
del st.session_state['key']

# Delete all the items in Session state
for key in st.session_state.keys():
    del st.session_state[key]
```

Session State can also be cleared by going to Settings → Clear Cache, followed by rerunning the app.
Session State and Widget State association

Every widget with a key is automatically added to Session State:

```python
st.text_input("Your name", key="name")

# This exists now:
st.session_state.name
```

Use Callbacks to update Session State

A callback is a python function which gets called when an input widget changes.

Order of execution: When updating Session State in response to events, a callback function gets executed first, and then the app is executed from top to bottom.

Callbacks can be used with widgets using the parameters on_change (or on_click), args, and kwargs:

Parameters

- on_change or on_click - The function name to be used as a callback
- args (tuple) - List of arguments to be passed to the callback function
- kwargs (dict) - Named arguments to be passed to the callback function

Widgets which support the on_change event: st.checkbox, st.color_picker, st.date_input, st.multiselect, st.number_input, st.radio, st.select_slider, st.selectbox, st.slider, st.text_area, st.text_input, st.time_input, st.file_uploader

Widgets which support the on_click event: st.button, st.download_button, st.form_submit_button

To add a callback, define a callback function above the widget declaration and pass it to the widget via the on_change (or on_click) parameter.

Forms and Callbacks

Widgets inside a form can have their values be accessed and set via the Session State API. st.form_submit_button can have a callback associated with it. The callback gets executed upon clicking on the submit button.
For example:

```python
def form_callback():
    st.write(st.session_state.my_slider)
    st.write(st.session_state.my_checkbox)

with st.form(key='my_form'):
    slider_input = st.slider('My slider', 0, 10, 5, key='my_slider')
    checkbox_input = st.checkbox('Yes or No', key='my_checkbox')
    submit_button = st.form_submit_button(label='Submit', on_click=form_callback)
```

Caveats and limitations

- Only the st.form_submit_button has a callback in forms. Other widgets inside a form are not allowed to have callbacks.
- on_change and on_click events are only supported on input type widgets.
- Modifying the value of a widget via the Session State API, after instantiating it, is not allowed and will raise a StreamlitAPIException. For example:

```python
slider = st.slider(
    label='My Slider', min_value=1,
    max_value=10, value=5, key='my_slider')

st.session_state.my_slider = 7  # Throws an exception!
```

- Setting the widget state via the Session State API and using the value parameter in the widget declaration is not recommended, and will throw a warning on the first run. For example:

```python
st.session_state.my_slider = 7

slider = st.slider(
    label='Choose a Value', min_value=1,
    max_value=10, value=5, key='my_slider')
```

- Setting the state of button-like widgets (st.button, st.download_button, and st.file_uploader) via the Session State API is not allowed. Such widgets are False by default and have ephemeral True states which are only valid for a single run. For example:

```python
if 'my_button' not in st.session_state:
    st.session_state.my_button = True

st.button('My button', key='my_button')  # Throws an exception!
```
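The callback-first, rerun-second ordering can be illustrated without Streamlit at all — below, a plain dict stands in for st.session_state and a function stands in for one top-to-bottom run of the script (this is a simplified simulation, not Streamlit's actual API):

```python
# Simplified simulation of Streamlit's event model: a dict stands in for
# st.session_state; run_script() stands in for one top-to-bottom rerun.
session_state = {"count": 0}
log = []

def increment():  # the callback: runs BEFORE the rerun
    session_state["count"] += 1
    log.append(f"callback saw count={session_state['count']}")

def run_script():  # one full rerun of the app script
    log.append(f"script saw count={session_state['count']}")

def click_button(on_click):
    on_click()    # 1. callback executes first...
    run_script()  # 2. ...then the app reruns top to bottom

run_script()             # initial run
click_button(increment)  # user clicks; callback fires, then rerun
print(log)
# → ['script saw count=0', 'callback saw count=1', 'script saw count=1']
```

The last log entry shows why the pattern works: by the time the script reruns, the callback has already updated the state the widgets read from.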
Agenda
See also: IRC log

<jkomoros> ScribeNick: jkomoros
<ArtB> ScribeNick: jkomoros
<ArtB> Scribe: Alex
<scribe> Chair: Dimitri_Glazkov
<scribe> Meeting: Styling Issues in Shadow DOM and CSS
<divya> i think we need coffee

DG: Hoping that the main focus of this meeting will be primarily around CSS + Shadow DOM
... we had one original idea, but developers trying to use it gave feedback that it wasn't exactly the right "knobs"
... there are people here who are "Browser Vendors", and there are people who are the "web developers"
... a bunch of folks in the latter group here are from Polymer; Daniel Buchner (who should join at some point) represents x-tags
... and then spec folks, fantasai and tabatkins ... who aren't here yet.
dbaron: Blake Kaplan and William Chen (?) have been working on Shadow DOM at Mozilla
... and I've been talking with them

[by the way, we all took a coffee break]
[break over]

DG: The general idea of Shadow DOM is that you have the ability to create trees, like before, but connected for rendering purposes: they render in place of nodes in the document
<rniwa> wasn't there an explainer somewhere?
DG: this existed in many different systems before. It allows composability (one tree vs multiple)
<rniwa> is still up to date?
DG: if I can replace the rendering of a node, what happens to its children?
<slightlyoff> rniwa: or, also:
DG: the general overview gets trickier and trickier, but we have converged on a solution in today's Shadow DOM spec
[dglazkov draws a diagram on the board]
<rniwa> Also see:
scribe: every node that has children, you can associate (off to the right) with a shadowRoot: a DocumentFragment with extra stuff in it
<slightlyoff> rniwa: this loads for me:
scribe: extra stuff is effectively a subclass of DocumentFragment. Things like getElementById, querySelector. Stuff that has migrated into Document mainly anyway
dbaron: So those just query what's in the Shadow DOM?
<rniwa> slightlyoff: oh oops, yeah.
i guess it doesn't have an ordinary index.html > / rewrite :/
DG: think of the line connecting the ShadowRoot as not a normal connection--it's a separate tree
... insertion points can be any elements inside the tree. They're called <content>
... we use a rhombus for insertion points
... the <content> name comes from XBL2
... you can have more than 1 content
... content can have a select attribute, which takes a narrow subset of CSS selectors
... that match against children of the parent node.
... currently limited to ID, tagname, attributes, and class
... no combinators.
... that's the conceptual model. But actually a node can have MULTIPLE shadow roots
... the method on the node is "createShadowRoot"
... there's an ordering.
... Sometimes the element already has a shadow tree (like InputElement or TextArea)
... they're basically the same as how the native implementation might be done
... it's actually a stack of trees. New ones go on top of old ones; the newest one is the visible one. The ones underneath don't render
... there's a concept of older and younger shadow trees
... the youngest one is the one that gets rendered
... sometimes you want to use parts of the older shadow tree
... which is why there's an insertion point called <shadow>
... when you put it in a shadow root, it will show whatever is in the older shadow root
... it allows the youngest guy to channel the older guy
... explicit children can only go to one insertion point.
... there's an idea conceived by Jan on the polymer-dev list, the shadow acting as a function (?)
... but as of now, there is an order, only selected once
... this allows developers to take existing elements, and adorn them with existing stuff from older shadow trees
... if there's nothing in the older shadow tree, it works as the last content element--whatever hasn't
... been distributed
... the whole point of the Shadow DOM spec is distribution. That's the majority of the spec
... how are they distributed, what's the effect
... things like focus, events, and rendering/styling
... the latter is what I want to talk about today
... the others we have mostly figured out
dbaron: I was involved in the XBL RCC thing in 2004 (?) so these concepts are not all new to me
DG: now we get into style
... this is where things get interesting
<dbaron> (also XBL1 :-)
DG: if the shadow root is a document fragment, what does that mean from a styling perspective?
... if I'm distributing a text node into a content element, what is its style?
rniwa: What does the current spec say about style?
[I missed about 30 seconds :-( ]
dbaron: I think it's worth separating selector matching and inheritance
[esprehn draws a diagram on the board]
es: When you attach the shadow root, content doesn't render. But in this shadow, the content is "teleported" as though it was there when rendering
... so you get styles from where you came from, and styles from where you're going
... there's a way to reset styles at the shadow boundary
<divya> es: when the tree gets flattened out, conceptually it gets flattened out
[es draws the "composed" result on the board for clarity]
scribe: we use "composed" tree to mean the thing with all of the things teleported
... don't use "flattened" tree
dg: Although at some point we might, depending on if there are multiple trees
rniwa: If you have a style in the distributed content, that follows the hierarchy in the original content
... and merges in with shadow styles
... I'm not sure that even in a complex widget that makes sense
... things get really wonky
sorvell: It's just inherited styles that work this way
es: One special case is that if you have a style element inside of the shadow root, it's automatically scoped to the shadow root
dbaron: So the selectors in the scoped style only match things in the shadow tree, NOT stuff that gets "teleported" there
sorvell: This is one part of the spec as a developer that makes total sense
... allows you to worry just about this shadow root.
... it works really well in practice
sjmiles: Occasionally you have to pierce through that barrier; that's when it gets harder
<divya> sjmiles: as a practical matter WE haven't run into that problem ... (confused styles)
dbaron: By pierce through, do you mean that sometimes you want the explicit children of the node to inherit from what came before, as opposed to from the shadow dom?
... is there a way to say that, in that case, the span should inherit font size but not color
sjmiles: no, it's all or nothing
... (basically)
... we haven't run into that need in practice yet
... that's based on empirical data with n points, where n is a relatively small number in the grand scheme of things
... it's possible at some point in the future someone will need it, but we don't now
dg: let's enumerate the cool styling hooks that we have today, then figure out which ones are missing
... 1) Style scoped.
... it's actually a close cousin of shadowRoot. It's very similar scoping behavior
... but it's a scoping NODE, and style scoped is a scoping ELEMENT
... but they have similar abilities, except none of the styles from the document (outside the SR) apply down.
... style scoped in isolation, essentially
dbaron: So nothing from the author style sheets matches in the SR. But UA styles still do
DG: we have applyAuthorStyles
... that allows the component to explicitly allow outside styles to come inside
... user styles are treated like Author styles
eo: It's problematic that user styles by default get blocked
dg: Actually, we don't know what we do here, we need to check
tabatkins: It's reasonable to say, yeah, User styles apply by default
dbaron: How selector matching works is interesting
dg: If you say applyAuthorStyles, there's still a weird relationship, where even though a child of the shadowRoot might LOOK like it's a child of the host
... it's actually not. The selector matching can either be fully in the host, or in the SR
[I think I got that right?]
scribe: so in this example, div > div will not match
... but if you just do `div` and have applyAuthorStyles it will match both divs
sjmiles: If I put div class=foo, and foo is defined in the document, it won't see that
... as a user I go, applyAuthorStyles will make it work, but it won't
dg: No, that will work
es: Basically, the selector must COMPLETELY match outside, or completely inside. There's no boundary crossing
eo: What about a boundary crossing combinator?
<dbaron> db: yeah
dg: That's what we want to talk about today: :-)
... There's another flag on SR that says resetStyleInheritance
... it's very powerful
... everything inside of the SR, when you flip to true, will look like its initial styling
dbaron: Kind of like you had a parent in between with all:initial
dg: a similar thing exists on insertion points
... so that you can have the styles in the SR not go into the composed children
... that's all the styling machinery (minus any boundary-piercing things)
... but this isn't enough. How do you make the subtrees interact with the doc?
... similarly, sorvell wants to be able to style inside the shadow tree, and style the composed children as well
... like say in a tab strip, styling the active children
... you want to be able to let SOME stuff in from the document into the SR
... similar for content
dbaron: The XBL solution to that is that you have a separate binding for the active tab that is different; point to that instead
dg: We didn't want the content of the host to not know what's happening to it (?)
... we had two solutions, both of which have strengths and weaknesses
... 1) let CSS Variables bleed through the SR boundary. So you could specify a CSS variable in the doc, and catch it inside the SR
tabatkins: in the Style WG, we decided that variable resetting isn't covered in "all".
... You'd need to explicitly say "vars" as well (syntax I probably got wrong)
dbaron: I don't like describing inheritance blocking as the 'all' property
fantasai: You probably do want the ability to jump inheritance over the shadows
dbaron: I'm nervous about cutting off inheritance from stuff outside the SR to stuff inside. I'm less nervous about inheriting from the shadow into the children
tabatkins: that turns out to be extremely popular for writing components in the real world
scribe: like components in jquery have to go through and manually reset everything
... they want a consistent, predictable starting point--even if they allow poking in after that
fantasai: But imagine we're using this to rearrange list items into a new structure. The expectation of the author is that setting a font on the root of the doc sets it everywhere. But if you do cut off inheritance, then those list items will have the UA default
tabatkins: That's why it's a flag. Component authors can decide if it works
es: Actually, the default is to allow
sjmiles: So if you turn it off, the component author did it on purpose
dbaron: So in the cases where you have a binding with lots of content inside, like say a tab widget. You probably want the inheritance through to your big piece of content. But there's some little content where you don't want it
dg: Think about the disqus use case. They mostly want it to match the blog they're embedded in. But if you're building an app, you might want a certain style that's very particular no matter where it is
... like the G+ share widget, as an example, that wants to have complete control over exactly what's inside it
sjmiles: db makes a good point
tabatkins: within the shadow, if you want to block it only in some places, the 'all' property exists
sjmiles: Or make a component for just the parts where you want to reset it
sorvell: We don't use resetting much in practice, it's such a blunt tool
... generally we want to control a small number of properties
rniwa: In the disqus use case, you want to be able to read the background color of the surroundings, but decide how to interpret that
tabatkins: Today you can do that. Use 'all' to reset all, then 'inherit' for the other properties you want to allow in. Or the other way around, use 'initial'
es: Like in a facebook button as an example, you want to force the font, but don't care about the size of it
dg: Let's keep going with explaining the tools
... those variables are cool but not enough
... we see this a lot with WebKit's internal input elements
... you want access to a specific element to style arbitrarily
... leads to:
... 2) Custom pseudo elements
... you define a pseudo attribute on an element
... then you can use it with standard ::foo syntax in selectors
... so like <outer-element>::x-foo
dbaron: Like functionality, but want a function
eo: agreed
dg: Agreed, at the time when we proposed this dave hyatt didn't want it to be a functional syntax, but we can revisit
es: One of the goals of the project is to explain lower-level magic
eo: I don't agree with that goal, for what it's worth
es: If we switch to functional syntax, we miss out on explaining the ::foo magic
rniwa: We could change the syntax for pseudos like that if we want, only blink and webkit do this
dbaron: If implementations want to implement web platform features that have pseudos, they can have their own versions that don't use the functional syntax
... I'm SLIGHTLY sympathetic to wanting to explain the magic. But some of them are things we really don't want to freeze
... like if we had done styling of form controls "right" back in 2000/2002, it wouldn't have been web-compatible to make the form controls used on iOS and Android
... because the web would have depended on fixed structures that work on desktop, but don't make sense on mobile devices
dg: There is a larger debate here. I want to table that for now.
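The reset-then-opt-back-in pattern tabatkins describes looks roughly like this inside a component's stylesheet (a sketch; the class name is made up):

```css
/* Sketch: block everything inherited from the page, then opt back in
   to the few properties the component wants to pick up. */
.widget-root {
  all: initial;         /* reset every inherited/cascaded property */
  font-family: inherit; /* ...then selectively let these back in */
  color: inherit;
}
```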
... Keep it in mind today, but avoid engaging today
eo: We'll only be able to make so much progress without it
dg: So even with this second knob, it doesn't complete all of the use cases from developers
<dbaron> (I think he used "put it aside" rather than "table" (which is en-GB/en-US ambiguous).)
dg: now I'm crossing the threshold to the boundary-crossing thing
... I'll first describe things the way they WERE/ARE
divya: What do you mean by "functional syntax"?
dg: things like ::shadow(foo)
dbaron: the advantage is, there's a rule in the selector spec that says rules the UA doesn't understand get dropped
... pseudo elements/classes are part of that rule. WebKit/Blink don't do that correctly
... all other browsers drop the entire rule, but WebKit/Blink retain those
es: That was willful; we can fix it
tabatkins: But people do use it today already :-(
dg: In querySelector, incidentally, we don't violate the spec
... onto the new things that we're thinking of
... in order to select things that are distributed into an insertion point, we invented the distributed pseudo-element function
...: :distributed(---------)
... where ----- allows combinators
... on an insertion point, inside of a SR, it matches the element that was distributed into it and that matches the inner selector in the function
es: example: content::distributed(span) { border: ________ }
... in the example we diagrammed, that style that earlier didn't match, now matches
tabatkins: Remember, this is current junky stuff that we don't like
es: Essentially content has a list of things that have been distributed in; the selector inside the parens says which in that list to select
sorvell: It's relative
dg: It's relative to the virtual node that represents the thing that envelops all distributed elements
sorvell: use case: I want to style all children, not all descendants
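Fleshing out es's board example a little (a sketch of the syntax as discussed in the meeting, not a shipped feature):

```css
/* Inside the shadow root's stylesheet: style the light-DOM spans that
   were distributed into this insertion point. */
content::distributed(span) {
  border: 1px solid red;
}

/* With a leading combinator inside the parens: only elements that are
   direct children of the host. */
content::distributed(> span) {
  font-weight: bold;
}
```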
... so you can do like content::distributed( > span)
es: It's like find() on content
dbaron: I don't know if I like leading combinators yet
fantasai: I have reservations, but I think at this point we have to go with it, everyone expects it to work that way
tabatkins: jquery has used it for years, and now it's documented in selectors level 4. It's a small section
dbaron: leading combinators only work when there's an implicit node being targeted
sjmiles: This is very necessary in our experience
divya: Can I have a class on the content element and use that in the selector?
dg: yes
es: Although the content element itself is NOT stylable
... which I don't like. I wish that I could style the content to, say, display:none it
... currently it has no effect
... it's bizarre
dg: I agree
<stearns> +1 to styling content nodes
dg: I would like to explain content as display:contents
es: In the current model, it's easy to distribute two things, but if you want to hide it, you need ANOTHER wrapper
... styles targeted at <content> don't inherit down; it's unstylable, no render node
fantasai: If it's an intermediary, it makes, for example, nested uls and lis not work
dg: I hope we can solve it by having <content> have display:contents on it
fantasai: but that doesn't address the ul/li use case
<fantasai> also mentioned :nth-child
<fantasai> ul/li shouldn't be a problem
dg: let's talk about @host
<fantasai> otherwise
dg: we want to get rid of this
... when you put a SR inside a tree, I want to be able to apply borders on the component, for example
... works like @host { [selector] { border: 1px solid red }}
... the inner selector matches only the host
dbaron: and what would anything other than * do
eo: You can imagine a case where you want to embed a widget in two different places, and you only want one (regarding why you'd want something other than *)
tabatkins: And because of the is attribute, you could have one component with different tag-names
dbaron: So @host lets the SR influence the containing box
... I think that in XBL1 we replace the outer box, but I might be misremembering
dg: This is the entire family of styling stuff. Now we want to get rid of many of these
<fantasai> side discussion of pseudo-element syntax vs. combinators vs. @rule
<fantasai> ::distributed() matches pattern of ::cue() and ::region(), seems we're aligning on that
<fantasai> ScribeNick: fantasai
dg: @host did not solve the body class=light case, and having components be able to see that
sjmiles: And we never used this @host { * {}}
fantasai: You probably want ::shadow, ::light, and ::context (to reach out)
dbaron: or a combinator to jump out
fantasai: Issue with a combinator is that it breaks the rule where combinators limit the matched set as you go
... so if you did bar <magic combinator> foo, suddenly you're selecting a different set of foos
dbaron: What I was thinking of was a combinator that would let you get to the scoped root from the selector that's selecting inside it
... which is adding restrictions, right?
... in the dark theme use case
tabatkins: yeah, that works with a combinator
... eventually we rejected that idea
[tab opens a Google Doc to show this idea off. He will share a link here]
[we will share the doc later]
docs.google.com/document/d/19fpRugyOO8kZfVVfdN1vkorwv8rmKztfNOEVLzIU2pU/edit?usp=sharing
scribe: does that work?
... no :-(
tabatkins: host element and shadow have equal claim
... we need to pretend the host element is in the root of the shadow tree
... so if you want to select on it, all you do is [writes in doc]
... this example will target the host element outside
^ that link works
tabatkins: you want to be able to select based on the content further up in the document. Like the theme use case, or modernizr up higher
... but you don't want to allow arbitrary selectors above
<jkomoros> can you see this?
<dbaron> ScribeNick: jkomoros
<singhalpriyank> yes
tabatkins: If you have the outer document followed by a shadow element, and inside of that another component
... the outer document you would see includes the shadow tree of the outer component.
... that breaks encapsulation, allows developers to depend on details of components outside
... we still need a communication channel to outside
... we think we have a simple thing that satisfies the use cases
... here's an example. The context pseudo-class is placed on the root element (host element)
... it matches if something in the compound selector matches in the fully composed ancestor list (?)
... including stuff in other boundaries
... because maybe you're applying a theme inside of one of the parent components above
... it allows some information to be piped through, but not enough to allow a fragile dependency (we hope)
... the list of elements checked starts with the host element itself, goes up to the root, through any of the composed shadow trees
... that's the only way to select up outside
... going the other way, we still use ::distributed
... works the same way
... we think this solves all the use cases we know of
... and it's convenient and easy, not the contorted tree hopping of @host and everything else
dbaron: What is removed?
tabatkins: What is removed is the @host (in favor of moving the host element into the shadow tree for styling purposes, and using the context pseudo-class to select up)
rniwa: So if you have multiple composed layers, it selects each composited layer (?)
tabatkins: no, the fully flattened tree up above
... you do see the shadow dom of things up the tree... but not very much
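tabatkins's context pseudo-class, as described here, would be used something like this (a sketch; the class names are made up, and this is the meeting's proposal rather than shipped syntax — the idea later evolved into :host-context()):

```css
/* Sketch: inside a component's shadow stylesheet, restyle the component
   when some composed ancestor (e.g. a dark-theme class on <body>) matches. */
:context(body.dark-theme) .label {
  color: #eee;
}
```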
... so at any point you can inject information in
rniwa: So a shadow DOM A, inside shadow DOM B
tabatkins: we haven't changed the way normal selectors work. In a shadow style sheet, they still only match within that shadow tree
... the only thing that I think we might want to change, as a result of the recent discussion around the region pseudo-element (as opposed to rule)
... the problem is this distributed pseudo-element isn't compatible with any nesting mechanisms we might add in the future
... like, if you had foo bar baz {} as foo { @nest bar baz { ... }}, the distributed pseudo-element wouldn't let you do the nested selectors
... the same problem applies to regions, because regions often have complex selectors inside of the regions
... a possible alternate syntax is to have content selected and inside have a @distributed rule that takes ...
... content { @distributed { :scope > foo {}}}
... behaves similarly, but more future-compatible
... it's an @-rule inside of a declaration block
... we agreed to use it in error handling rules. This would be the first other use of it
es: I like the ::distributed
sjmiles: Yeah, that one is easier to type
fantasai: what about ::distributed <space> <other stuff>?
tabatkins: the problem is about jumping sub-trees, not narrowing matching
... if that's not a problem, then maybe that's fine
es: That space one requires a deeper architectural change to selector matching
sorvell: I don't think the notion of limiting across selectors is something web developers know or care about
dbaron: I don't think it violates it, although ::distributed is the wrong name in this formulation
fantasai: light?
sjmiles: well, one person's light is another person's shade
es: As an implementor I don't like it
dg: We have the same basic thing with pseudo elements already
... we take this linked list and grab and swap it around at the end
<fantasai> fantasai: It's just a syntactic difference; the implementation can store it in whatever structures it wants
es: yeah, but this would come at the end
es: Why is this not an @ rule?
scribe: like @teleport
<dbaron> I'd rather have content::back-to-light-dom > .foo { ... }
tabatkins: You don't want to accidentally select hidden things in shadows above you
dbaron: I think that we want selectors that continue to the right of pseudo elements. And the implementation model there is to treat them like you treat pseudo-elements today, where you match the thing to the left first, and then you
... say, oh, pseudo element, do this other stuff
... I think it makes sense without parens
es: But selector matching starts from the right side
dbaron: not really, not with pseudo-elements. You have to start from just to the left of the pseudo-element
<stearns> the key is that we're combining two selectors. You can still use right-to-left evaluation on each piece
dbaron: now we're going to allow more stuff to the right of ::, but still the same model
es: why is the other thing not good as a functional syntax but this is?
dbaron: Because it's a singular thing (?)
es: The current distributed thing matches cue
dbaron: Tab doesn't want functional syntax because nested syntax will come along in the future
es: I'm not comfortable with rewriting the whole selector checker
dg: You just do it when parsing rules
tabatkins: Find the first pseudo element, run the part before it, then ... [didn't get]
es: But it's not "at end", it's a nesting relationship. It's more complicated
tabatkins: exactly like a b is not all b's, just b's inside of a's
dbaron: I agree it's hard. I think we want implementation experience on the concept before we commit to using it
... but it's the same concept we've come up with in multiple places already (like overflow fragments, here, regions)
fantasai: cue?
es: cue currently works like distributed does
... distributed is consistent with that
... the inner selector in there is not even HTML, it's a totally different world
tabatkins: But different constraints: the document exposed is completely flat
... whereas this will expose more complex things inside the parens
<dbaron> It needs to be called the see-you-eee element (the "cue" element) and not the queue element (the "q" element)
dfreedm: I've hit this before. Nesting would be great
es: What people are arguing for is a "reuse this selector" ability in CSS, like a #define for selectors
dbaron: But with that, you'd end up having something with an unmatched paren in your #define
tabatkins: yeah, that would be painful
sjmiles: Looking at the multiple {} solution, if I write that rule, I might be tempted to ask, can I put stuff to the right that is different than what's to the left? (?)
... as a developer, it's just getting in my way. Confusing.
es: In this syntax, how do I match stuff that is a sibling of the stuff that's distributed
... example: content::distributed(> .foo) + span {}
... what does that do?
tabatkins: That doesn't do anything in the current syntax
... given the assumptions of the functional syntax, we're doing that because "only one pseudo element, and at the end of the rule". So this is nonsense, because it comes after the pseudo-element
... but content::distributed > .foo {} is also nonsensical
dbaron: So you want a pseudo-class instead of a pseudo-element?
es: I want to style the heading element that immediately follows the first heading element
tabatkins: The general use case of dropping down to jump back up (?)
is a generic discussion not limited to this discussion es: I'll reserve judgement, but I don't know what happens if you have two distributed tabatkins: You can never have a double distributed, because content nodes are gone dg: yes you can <fantasai> content:matches(!::distributed > .foo) + span dg: imagine that you're inside of a tree that's inside of a shadow tree tabatkins: But as far as you can tell, you're not in a distributed tree (?) ... content::distributed > .foo::region p content {} will never match anything es: ... no? tabatkins: remember, the :context selects on flattened tree ... below you, any contents you contain you can't access content es: That's not how it's currently specced. It's currently specced that elements are distributed, but not that <content> is gone tabatkins: But it doesn't matter for the purpose of this example dg: What he's proposing works the same way as one with parens, just no parens es: So in <an example> you could have interleaved with multiple implied parens dg: Positive impression from CSS people around dropping parens? ... what about people who implement? dbaron: It doesn't seem easy, I think we should get implementation experience before we commit, but I think it's probably the right thing tabatkins: So we leave spec as it is right now, we add notes with paren-less version, that says implement and give feedback, if it does work then we use it scribe: someone has to solve these similar problems (e.g. in regions) first that leads the solution 
and see how much he screams eo: my rule of thumb is to let dbaron do his experiment and see what happens <scribe> ACTION: tabatkins to update the spec to the paren-less version of the :distributed, with a note that we will use that syntax if implementors don't scream after experimenting with implementation [recorded in] <trackbot> Error finding 'tabatkins'. You can review and register nicknames at <>. es: It's definitely implementable, it's a question if the implementation cost justifies the developer confusion benefit dbaron: Remember, either of these solutions is hard dg: so instead of a list, it's a tree ... so what will happen is that at parsing you'll have to be aware that when you see this pseudo-element you change what you've seen already into a tree and parse selector again es: One of these was described in a grammar. But the current proposal can't be done in a grammar; it's context sensitive ... so it makes it harder to implement. The parser has to be made more complex (in the action above) dg: context is confusing, because it looks like content es: what about "projection" rniwa: That's too complicated dbaron: Let's get rid of context entirely es: what about ":path" dg: ":composed" dfreedm: I prefer path es: "has" looks closer to "matches" sjmiles: Front end developers want everything to be as short as possible rniwa: What about ":host" since we got rid of @host? sjmiles: not bad dbaron: agreed es: But this could be arbitrary levels above rniwa: I like ancestor eo/sjmiles: I find it confusing fantasai: I like host best es: What about :inside? sjmiles: that's the opposite of how we think about it as developers rniwa: yeah, I'd expect that to be OUTSIDE es: If we do distributed shenanigans, why don't we do same thing here? dbaron: This is intentionally limited <dfreedm> too late dbaron: I think we're moving to :host? tabatkins: not bad es: but x is the host here, and the theme is on body ... 
path makes sense, like a traversal path fantasai: What if you allowed host element to be matched in host tabatkins: you can: :host(*) sjmiles: Is there a way to avoid me having to write "x" all the time for my placeholder ... now we don't use the name, we just use @host tabatkins: no need to worry about it. will only match pseudo element es: is there a way to reference your host without explicit tag name? tabatkins: you can: :host() es: how does that differ from :scope tabatkins: no, because things aren't actually scoped ... the shadow is not actually a scoped style sheet ... it happens to be scoped, but it isn't technically a scoped style es: I think we should go with :host() tabatkins: We can omit empty parens rniwa: I like :host proposed resolution: :host(<simple selector on ancestor path>), or :host, which is equivalent to :host(*) fantasai: What's the specificity tabatkins: I think we can add the specificity inside the host (?) <dbaron> I think s/simple selector/chain of simple selectors without combinators/ es: what about like [data-foo]:host(.dark) CONCLUSION: :host(<chain of simple selectors without combinators on ancestor path>), or :host, which is equivalent to :host(*) sjmiles: This is a different concept than ancestor. It has some similarity to "something that's above me" ... it feels a bit weird to put what would be on the left side in the parens to the RIGHT whoops, sorry strike that resolution I misunderstood what "resolution" meant in this context thanks for the information! I'll get the hang of this some day :-) sjmiles: To be clear, this syntax is fine, but ultimately developers would have wanted something similar: .dark goes on left, then some host, then wormhole ... but this is fine, given all the constraints. <dbaron> just wait until I propose :上(.dark) tabatkins: earlier we had a ^ combinator which said (jump boundary), but it was weird because you could have only one simple thing on the left <dbaron> or :下(.dark)? 
Not sure which makes more sense. tabatkins: I think it's easier to internalize the restrictions that things inside the parens play by different rules than the things on the left of that magic combinator rniwa: I agree the ^ is weirder than :host [discussion has gotten disorganized; scribe has been unable to keep up for the past 2 minutes] fantasai: We should agree that words should either be plural or not plural <fantasai> fantasai: And CSS already has 'content' in some places CONCLUSION: rename ::distributed to ::content sjmiles: you don't need to have the content ahead of the ::content fantasai: This helps people remember what it means, what concepts it's connected to <fantasai> (because the tag name is content -- distributed is just out of the blue) rniwa: Ideally the developer needs to know the minimum of terms <dbaron> so 'content: contents' in should be 'content: content' ? <fantasai> either that or something else, yeah [lunch break] [lunch break over] dg: Any more CSS-related topics? dbaron: I don't know if we agreed if custom pseudos should be functional or not? eo: Yes es: I think it would be sad to sacrifice explanatory effect dg: I agree with that eo: I think it makes sense; it calls out that pseudos you come across are from a component somewhere, as opposed to from the system ... there's a clarity that gets sacrificed by confusing the two tabatkins: Even if we use no (), we still need to have a prefix (?) ... even the explanatory power of a non-functional syntax is still limited because custom ones would have to be named in a way that wouldn't conflict with new ones es: Okay, I won't fight for it dg: dbaron mentioned a solution I'm okay with ... have a switch in shadow DOM spec that says that UA can define pseudos if they want es: Long ago, the idea that in this future world all the quirks of the bedrock go away, everything is components and who cares ... but we're leaving warts all over the place sjmiles: There's a tension. 
Sometimes it's a clarity concept, but sometimes user doesn't need to know eo: In the future with lots of components, you can clearly tell when you're dealing with a part of a component es: Why need a prefix? Why can't you implement ::placeholder tabatkins: We want to avoid namespaces that are unchecked, so that we don't have to worry about compat checks for new keywords in the future es: Why not expose things that are actually inside of shadow roots, why aren't they just a namespace open to anyone? ... placeholder is specced ... you'd need to say ::part(placeholder) (?) eo: you wouldn't want to expose it as a shadow root; it's an implementation detail es: But every browser implements placeholder in the same way ... and this argument is, what if some hypothetical browser wants to do it some other way? dbaron: Placeholder in gecko uses native anonymous content ... that's different from XBL, which is different from web components es: Gecko creates divs inside of an input element, right? dbaron: The inside of a select box is more complicated. There are things in there that have boxes but no content nodes for those boxes ... which is different from Native Anonymous Content ... because NAC is where we construct actual content nodes and construct boxes for those ... for a select we just make boxes es: I don't understand the harm in claiming that <input> uses component model ... like, keep ::placeholder. But why not also allow the new syntax to address the same thing? eo: Because the implementors can choose how to implement it! dg: The question is not, whether it's implemented with Shadow DOM, but rather how it interacts with rest of content ... so you don't have all of these weird behaviors and edge cases. It's all described by semantics of shadow DOM dbaron: You assume shadow dom will be web compatible dg: We have implemented almost all crazy elements using Shadow DOM machinery ... everything except select, which we have plans to switch over as well ... 
it's been for two years now that we've been using it es: If I have input::part(placeholder) and it was a normal input, now I inherit it, I shouldn't have to change the CSS that targets it; I should just have a placeholder pseudo in my new shadow DOM [murmurs of agreement from sjmiles, dg] dg: yeah, big fan ... part() comes from an old discussion dbaron: There are three sorts: pseudos that are not tree like, some that are leaf like, some are tree-like and contain real elements ... CSS doesn't define any in the third category. But there's been discussion about various ones es: ::first-line can't possibly be implemented as an element, for example ... it's different from something that can be exposed as an API surface to style dfreedman: I don't understand why it has to be different eo: part() takes an author-defined ident sjmiles: I think it's a strong point: this is a specific kind of styling we're doing here--it's a node. It's the same for web components and implemented pieces ... it seems this makes sense. It makes sense that ::first-line is different dg: it's hard to explain ::backdrop in terms of these primitives ... so that would stay the same es: like there are some things that are clearly defined by what's inside ... like scrollbars eo: I disagree about scrollbars dbaron: We plan to get rid of all internals to scrollbars es: What kinds of scrollbars would you not want to build in the div rniwa: That's for the scrollbars that have been released to date; in the future they might not eo: Currently in WebKit if you use legacy pseudos for scrollbars, you go to legacy scrollbar mode. If you don't, then you get the special new ones es: What I'm saying is that if you don't want the API surface, don't expose it. In this case, it's already been exposed sjmiles: I think it's reasonable to say, for this thing, the way we implement it we shouldn't reveal it eo: I want to make a distinction between UA-implemented things, and things built in WC world. 
We want that distinction es: But why? If the standard says ::placeholder, it cannot be removed entirely ... so for new ones, it should use part() so that others can override later. But if it's not exposed, doesn't matter dg: it gives a clear message to author of how to reason about it--it's just like Shadow DOM. ... the UA is acting as an author in those cases where the guts are exposed like this rniwa: We don't want to be adding APIs that limit the ways in which we can tweak UIs dg: agreed. I don't think it matters. ::placeholder behaves like it's implemented as Shadow DOM. So it should behave that way fully eo: The entire existing platform prior to the feature means that user-land pseudos and UA pseudos are different es: today in WebKit, ::-webkit-spin-button IS a div today. That's an API surface ... right now it has a prefix ... but if a spec adds a non-prefixed api surface, it's now a real API surface and it should behave like the other non-magic things eo: as an author, I want to be able to send a bug report to the right person dbaron: Authors aren't the users. End users are the user. Web platforms are designed for both of those constituencies. Sometimes you don't want to give control es: But we're talking about, for things that ARE already exposed dbaron: Some of the motivation for it, is there's stuff that might be the same now, but we don't want to be stuck with it forever eo: we're stuck with legacy scrollbar controls forever. :-( dbaron: mobile use cases are a great example. I'm glad we didn't commit to select's having a dropdown button ... I think you're being conservative about what to expose es: You seem to be arguing that ::placeholder was a mistake and we should remove immediately? dbaron: I think freezing everything about form controls at this moment is a bad thing. dg: I think we're talking about two different things sjmiles: UAs should be able to choose what to expose. 
But, IF you have chosen what to expose, like ::placeholder, then what you expose is Shadow DOM dg: ::placeholder already behaves, as specced today, as a custom pseudo-element ... as if it was a custom pseudo-element rniwa: Spec doesn't say that ::placeholder needs to be text inside a Shadow DOM. Just that color and whatever can be controlled ... this allows UAs to match the convention of the platform ... placeholder is okay, but in case of like scrollbars, we have to show legacy scroll bars even on platforms that have other scrollbars --it's confusing to users es: I don't care so much about scroll bars; mozilla doesn't want to implement them ... I'm saying when it's a standardized API surface, it should be exposed with Shadow DOM eo: take input type=password ... users before authors, etc ... it would be reasonable for an implementer to say that input type=password cannot be overridden by components es: that's fine, the standard should say that for those cases dbaron: I think there's another answer to this requirement ... your requirement is about re-implementing existing parts of the platform ... I'm okay with the component model having a way to match an existing part of the web platform ... that's different from saying that you can extend the pseudo-element space arbitrarily es: To clarify, I'm arguing we should do ::part(). And that we should fix the platform to fit this eo: Dbaron is saying that I want to implement a new component and react to ::placeholder. Because it's an existing platform feature, you should have a way to say, this thing is my ::placeholder ... and then if you add a new pseudo, it goes in the box where user-land new pseudos go sjmiles: Eliot's proposal does everything yours says, but simpler, from our perspective es: We all seem to be talking from different contexts sjmiles: I think we're closer than we think we are dbaron: eliot wants ::part(placeholder) to react to all the same things that ::placeholder does. 
I think they should be two different namespaces, and Shadow DOM should be able to get to both of them sjmiles: But you're telling Bob Web Developer that in case A you use this syntax, but in case B use a different one. But there's no difference ... why does it matter what comes from UA and what comes from a component? rniwa: That is an API surface. It's a namespace issue es: It doesn't matter if you don't reproject dbaron: the reason I don't like them being the same, that goes back to the assumption you'd implement all of the platform in Shadow DOM sjmiles: I think rniwa has a good point about namespaces *namespace rniwa: We could have collisions with new formalized pseudos if they share the same namespace es: ::part(placeholder) is exactly like a method that gets stomped on by some new web platform feature adding a new method to a type of node ... those types of collisions happen all the time. ... You're trying to protect yourself from one, particular type of namespace collision. There are lots of them! dbaron: There's a different error-handling rule for pseudos, and WebKit implemented something else, and didn't come back to the standards groups ... now you come back and say you want to build a new feature on top of that broken feature es: We agreed on that part--we should move to ::part(placeholder) ... isn't this like saying, custom elements are bad idea eo: yes! I think you should only do it if it's a good reason dfreedman: but how do you know in advance if they'll have a good reason? es: I'm saying, all platform widgets, however they're implemented (jquery, etc) they should have the same API surface ... not if you use Shadow DOM surface. dfreedman: part(placeholder) just describes the same thing as ::placeholder dg: eo is saying that part defines user-defined pseudos dbaron: I think I'm okay with either formulation (?) rniwa: think about for example, translation attribute ... 
we shipped and broke a lot of libraries in India eo: We have compat constraints we need to take into account rniwa: We want to avoid breaking existing websites es: If we go there, and say part is magical only userland, and every pseudo in user space should start with data- so you don't conflict ... so you're saying that every method in say polymer has to be polymer_foo() sjmiles: We tried something at the beginning, where we presented a different public face. Developer users HATED it ... developers want more power AND backwards compatibility. It's hard. rniwa: MediaWiki added a special style attribute to work around a WebKit bug ... we fixed it, but now we can't remove it because mediawiki content is deployed with the old stuff eo: Quickly: this is the same argument from last F2F: the idea if we should allow custom element names over the wire ... is=foo nicely delineates which is user land dg: it's solved with the - requirement rniwa: I'm less concerned if authors can only extend HTMLElement es: Either developers are responsible for avoiding namespace collisions everywhere, or nowhere sjmiles: Seems like everyone agrees the ::part() is a good idea in many cases CONCLUSION: ::part() is a good solution for user-land pseudo elements ... although the group doesn't necessarily agree about existing pseudo elements at this point dg: ::part and ::pseudo is fine (?) 
<dbaron> (by either formulation -- one formulation is that ::pseudo and ::part(foo) are just two different namespaces, shadow DOM can't extend the first namespace, but it can create things that match either -- the other formulation is that ::part(foo) is an extensible namespace and future Web features that could be represented as such a part (whether or not they are implemented in terms of shadow DOM) plus ::placeholder should have such parts done as ::part(foo), and <dbaron> that ::part(placeholder) should be an alias for ::placeholder for consistency <dbaron> ) CONCLUSION: the attribute to register these custom pseudos is "part" (not pseudo as it currently is in spec) The CSS portion of this discussion is over. We'll stop taking notes here <MikeSmith> .win 15
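[An illustrative sketch of the syntax resolved above — not taken from the minutes. The x-panel element, the part name "label", and the class names are invented for the example.]

```css
/* Inside the shadow stylesheet of a hypothetical <x-panel> component: */

/* ::content (renamed from ::distributed per the resolution) styles
   distributed light-DOM children of a <content> insertion point. */
content::content > .title {
  font-weight: bold;
}

/* :host(.dark) matches when an element on the host's ancestor path
   matches .dark; bare :host is equivalent to :host(*). */
:host(.dark) {
  color: white;
}

/* From a page stylesheet: style a node the component registered
   with a part attribute of "label". */
x-panel::part(label) {
  text-transform: uppercase;
}
```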
setvbuf man page

Prolog
    This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

Name
    setvbuf — assign buffering to a stream

Synopsis
    #include <stdio.h>

    int setvbuf(FILE *restrict stream, char *restrict buf, int type,
        size_t size);

Description
    The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.

See Also
    Section 2.5, Standard I/O Streams, fopen(), setbuf()

    cat(1p), setbuf(3p), stdin(3p), stdio.h(0p)
Are you sure? This action might not be possible to undo. Are you sure you want to continue? FC-SW FieldCommander JavaScript Refererence Guide Table of contents About this manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1. The Javascript language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 1.10 1.11 1.12 toLocaleString()) CER International bv 4 . This chapter starts with a quick introduction of the language. 1. This will create an entry in the diagnostics log file.jse.1. It uses the ECMAScript standard for Javascript. you might want to skip to chapter 3 which discusses the differences between Javascript and C.").1 Javascript quick start This section provides examples and information to get started with Javascript Keep in mind that Javascript scripts may be written as simple scripts. 1. When you are comfortable with the C language.1. function main() { writeLog("third. the following fragment: writeLog("first "). Any text editor may be used to work with script files. To execute this script. Assume that the single line of this example has been saved to a file named simple. writeLog('A simple script') The writeLog() function is a simple way to get feedback from the script.FieldCommander JavaScript Refererence Guide 1. ECMAScript is the core version of Javascript which has been defined by the European Computer Manufacturers Association and is the only standardization of Javascript. Then follow the functions and other programming concepts of the Javascript language. The Javascript language Javascript is one of the most popular scripting language in today's world. results in the following output in the diagnostic log: first second third. it shares characteristics of both batch and program scripts.1 Simple script The following line is a simple and complete script. in which lines of code execute sequentially. 
or they may be written as structured programs. much like simple batch files.2 Date and time display The following fragment: var d = new Date writeLog(d. 1. upload it FieldCommander and start the script from the System configuration. } writeLog("second "). Refer to the User's Guide for details. For example. When a script has code outside of functions and code inside of functions. if it exists. However. then the first batch script would likely be the best choice. date. is the first function to be executed in a script. as a new instance of a Date object. The following fragment is another variation that produces the same result.FieldCommander produces output similar to the following. Fri Oct 23 10:29:05 1998 JavaScript Refererence Guide The first line creates a variable d as a new Date object. namely.toLocaleString()). All of the fragments accomplish the same thing. shown after it. then one of the program scripts would be the best choice. which follow a batch style. produces the exact same result as the first two lines. one in which the display of the date and time was only a small part. This batch script could be written as a program script as shown in the following fragment. the first fragment shown consists of two lines of code written as a simple batch script. The following fragment is yet another variation. Remember. The second line uses the Date toLocaleString() method of the Date object to display local date and time information. CER International bv 5 . and time information. writeLog(d. All fragments work equally well. function main() { writeLog(d. This script. using a structured programming style. var d = new Date. Javascript scripts may be as simple or as powerful as a user chooses. } function DisplayTime() { var d = new Date. } To repeat. writeLog(d. if a user wanted to write a more involved program.toLocaleString()). What are the differences? If a user wanted a simple script to display date and time information. are all written as program scripts. 
function main() { var d = new Date. } The main() function. or more accurately. displaying local day. function main() { DisplayTime(). } Remember that lines of script outside of functions are executed before the main() function.toLocaleString()). The fragments. which holds information about the current date and time that can be retrieved in various formats. but often scripts need to be able to pass data or information to a function which then works with different data when called for differing reasons in a script. is the same for all objects. In this script. . several variations of scripts were presented showing different ways to accomplish the same result. Notice that the variable. do not have to have the same names as the parameters to which they are passed. that are available to all Date objects that are created as in this example.getDay() == 6) { var FirstLine = "It is Saturday. Saturday. then the variable. The Date object has many methods. This behavior. However. . such as FirstLine. The purpose of the fragment is to write a custom message to the diagnostics if the day of the week is Saturday.3 Function with parameters In the section above on date and time display. and each one may use all the methods of the Date object. does not have the same name as the parameter. Many times such functions are used. of constructing an object which is insulated from operations within other instances of the same type of Object.". // Sun == 0 . The variable dat is only one instance of a Date object. When DisplayTime() was called. if date information is altered in one instance. Sunday is the first day of the week and is zero. The variable FirstLine did not have to be created at all. Then the function WriteMessage() is called with the variable as the first parameter of the function. no parameters. LineOne. the date information in the other instances is not affected. Arguments. the only date information used is the day of the week.1. The third line tests. 
The third line of the script calls the method Date getDay() which returns the day of the week as a number. A script can create or construct as many Date objects as desired. } // The rest of the script follows writeLog("The program is continuing. If the day is 6. with an if statement. A detailed explanation follows. whether the current day is day number 6. in this case. were passed to the function. The function WriteMessage() uses the information passed to it in its parameters: LineOne. } function WriteMessage(LineOne) { writeLog(LineOne). var dat = new Date(). The last variation shown defined the function DisplayTime() which was called from the main() function. The function WriteMessage() could have been called with a literal strings instead of a variable CER International bv 6 . See the section on passing information to functions for more information about arguments and parameters.FieldCommander JavaScript Refererence Guide 1. such as getDay(). that is. Sat == 6 if (dat. The following script fragment illustrates the use of a function with parameters. The first line creates a new Date object. WriteMessage(FirstLine). FirstLine. FirstLine is created with string information in it. no information or arguments. LineOne. not just Date objects."). The line displaying typeof n displays a number in both cases. typeof n and typeof(n) are the same. CER International bv 7 . in front of the method name. calls parseFloat() as a method. but both fragments are identical in behavior. The term function is used for functions of the global object and functions that a user defines that are not attached to a specific object. the term routine refers to a function or procedure that may be called in a program. The typeof operator returns the type of data of the value following it. For example. a little explanation of terminology might help. The term procedure is not used. such as C. But in general.1. A procedure is a routine that does something but does not return a value. 
FieldCommander JavaScript Refererence Guide

The use of variables in the if statement makes the code easier to read and to alter. Without the variables, the call to WriteMessage() would have been:

   WriteMessage("It is Saturday.")

But such a line could become too long. Such functions are actually methods of the global object. The methods of the global object may be called without placing global. in front of the method name, and they look like and act like plain functions in other languages. The following fragment has a user defined function, MyFunction(), that is called like a function and then as a method.

   function MyFunction()
   {
      writeLog("My function has been called.")
   }
   MyFunction()
   global.MyFunction()

Both calls to MyFunction() are identical in behavior.

1.1.4 Terminology

One problem with terminology is that it has developed over the years and is not used uniformly. In Javascript, the term routine is a general term used for functions and methods (and procedures, though this term is not used), and these terms do not make the distinction between a function that returns a value and one that does not. A function is a routine that returns a value; said another way, a procedure is a function that does not return a value. In Javascript, functions may do things and return values. The term method is normally used for a function that has been attached as a property of an object.

In the current Javascript manual, the following distinctions generally are followed. The term routine is generally used for functions and methods. The term function is used for methods of the global object, that is, for methods that do not require an object name or name of an instance of an object to precede the method name. The term method is used for methods that require an object name or name of an instance of an object. The Date getDay() method, which was used above in the section about a function with parameters, is an example of such a method.

In the current discussion, the terms used are methods and functions. Thus, the function parseFloat() is actually a method of the global object. The following fragment calls parseFloat() like a function.

   var n = parseFloat("3.21")
   writeLog(n)
   writeLog(typeof n)

The typeof operator may be invoked with "()". The following fragment, which is the same as the one above with the addition of global., calls parseFloat() as a method of the global object.

   var n = global.parseFloat("3.21")
   writeLog(n)
   writeLog(typeof n)

Thus, parseFloat() may be referred to as a function, reflecting these calling conventions.

1.1.5 Function with a return

Functions may simply do something, as the function ExitOnError() above does, or they may return a value to a calling routine. The following fragment illustrates a function that returns a value.

   function Cubed(n)
   {
      return n * n * n
   } //Cubed
   var CubedNumber = Cubed(3)
   writeLog(CubedNumber)

The function Cubed() simply receives a number as parameter n, multiplies the number times itself three times, and returns the result. The variable CubedNumber is assigned the return value from the function Cubed(), and CubedNumber is written to the diagnostics file. In this example, the number 27 is displayed.

1.2 Basics of Javascript

1.2.1 Case sensitivity

Javascript is case sensitive. For example, the following code fragment defines two separate variables:

   var testvar = 5
   var TestVar = "five"

All identifiers in Javascript are case sensitive. A variable named "testvar" is a different variable than one named "TestVar", and both of them can exist in a script at the same time. Control statements and preprocessor directives are also case sensitive. For example, to display the word "dog" on the screen, the writeLog() method could be used:

   writeLog("dog")

But, if the capitalization is changed to something like the following,

   WriteLog("dog")

then the Javascript interpreter generates an error message. The statement while is valid, but the word While is not. The directive #if works, but the letters #IF fail.

1.2.2 White space characters

White space characters, that is, space, tab, carriage-return and new-line, govern the spacing and placement of text. White space makes code more readable for humans, but is ignored by the interpreter. Lines of script end with a carriage-return, and each line is usually a separate statement. (Technically, in many editors, lines end with a carriage-return and linefeed pair, "\r\n".) Since the interpreter usually sees one or more white space characters between identifiers as simply white space, the following Javascript statements are equivalent to each other:

   var x=a+b
   var x = a + b
   var x =
   a + b

White space separates identifiers into separate entities. Thus, "ab" is one variable name, and "a b" is two. The fragment, var ab = 2, is valid, but var a b = 2 is not. Many programmers use all spaces and no tabs, because tab size settings vary from editor to editor and programmer to programmer. By using spaces only, the format of a script will look the same on all editors.

1.2.3 Comments

A comment is text in a script to be read by humans and not the interpreter, which skips over comments. Comments help people to understand the purpose and program flow of a program. Good comments, which explain lines of code well, help people alter code that they have written in the past or that was written by someone else.

There are two formats for comments: end of line comments and block comments. End of line comments begin with two slash characters, "//". Any text after two consecutive slash characters is ignored to the end of the current line. The interpreter begins interpreting text as code on the next line. Block comments are enclosed within a beginning block comment, "/*", and an end of block comment, "*/". Any text between these markers is a comment, even if the comment extends over multiple lines. Block comments may not be nested within block comments, but end of line comments can exist within block comments. The following code fragments are examples of valid comments:

   // this is an end of line comment

   /* this is a block comment
      This is one big comment block.
      // this comment is okay inside the block
      Isn't it pretty?
   */

   var FavoriteAnimal = "dog" // except for poodles

   //This line is a comment but
   var TestStr = "this line is not a comment"

1.2.4 Expressions, statements, and blocks

An expression or statement is any sequence of code that performs a computation or an action, such as the code

   var TestSum = 4 + 3

which computes a sum and assigns it to a variable. Javascript code is executed one statement at a time in the order in which it is read. Each statement is usually written on a separate line, with or without semicolons. Many programmers put semicolons at the end of statements, although they are not required, to make scripts easier to read and edit.

A statement block is a group of statements enclosed in curly braces, "{}", which indicate that the enclosed individual statements are a group and are to be treated as one statement. A block can be used anywhere that a single statement can. Statements within blocks are often indented for easier reading. A while statement causes the statement after it to be executed in a loop. By enclosing multiple statements in curly braces, they are treated as one statement and are executed in the while loop. The following fragment illustrates:

   while( ThereAreUncalledNamesOnTheList() == true)
   {
      var name = GetNameFromTheList();
      CallthePerson(name);
      LeaveTheMessage();
   }

All three lines after the while statement are treated as a unit. With the braces, the script goes through all names on the list until everyone on the list has been called. If the braces were omitted, the while loop would only apply to the first line: without the braces, the script goes through all names on the list, but only the last one is called. Two very different procedures.

1.3 Identifiers

Identifiers are merely names for variables and functions. Programmers must know the names of built in variables and functions to use them in scripts and must know some rules about identifiers to define their own variables and functions.

CER International bv 8
The following rules are simple and intuitive. Identifiers may be as long as a programmer needs. Identifiers may use only ASCII letters, upper or lower case, digits, the underscore, "_", and the dollar sign, "$". That is, they may use only characters from the following sets of characters:

   "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
   "abcdefghijklmnopqrstuvwxyz"
   "0123456789"
   "_$"

Identifiers may not use the following characters:

   + - < > & | = ! * / % ^ ~ ? : { } , . ( ) [ ] ; ' " ` #

Identifiers may not have white space in them, since white space separates identifiers for the interpreter. Identifiers must begin with a letter, underscore, "_", or dollar sign, "$", but may have digits anywhere else.

The following identifiers, variables and functions, are valid:

   Sid
   Nancy7436
   annualReport
   sid_and_nancy_prepared_the_annualReport
   $alice
   CalculateTotal()
   $SubtractLess()
   _Divide$All()

The following identifiers, variables and functions, are not valid:

   1sid
   2nancy
   this&that
   Sid and Nancy
   ratsAndCats?
   (Minus)()
   Add Both Figures()

1.3.1 Prohibited identifiers

The following words have special meaning for the interpreter and cannot be used as identifiers, variables and functions.

1.3.2 Variables

A variable is an identifier to which data may be assigned. Variables are used to store and represent information in a script. Variables may change their values, but literals may not. For example, if programmers want to display a name literally, they must use something like the following fragment multiple times:

   writeLog("Rumpelstiltskin Henry Constantinople")

But they could use a variable to make their task easier, as in the following:

   var Name = "Rumpelstiltskin Henry Constantinople"
   writeLog(Name)

Then they can use shorter lines of code for display and use the same lines of code repeatedly by simply changing the contents of the variable Name.

1.3.3 Variable scope

Variables in Javascript may be either global or local. Global variables may be accessed and modified from anywhere in a script. Local variables may only be accessed from the functions in which they are created. To make a local variable, declare it in a function using the var keyword:

   var perfectNumber

A value may be assigned to a variable when it is declared:

   var perfectNumber = 28

The default behavior of Javascript is that variables declared outside of any function, or inside a function without the var keyword, are global variables. However, this behavior can be changed by the DefaultLocalVars and RequireVarKeyword settings of the #option preprocessor directive. This directive is explained in the section on preprocessing. To illustrate variable scope, consider the following code fragment:

   var a = 1;
   function main()
   {
      b = 1;
      var d = 3;
      someFunction(d);
   }
   function someFunction(e)
   {
      var c = 2
   }

In this example, a and b are both global variables, since a is declared outside of a function and b is defined without the var keyword. The variables, d and c, are both local, since they are defined within functions with the var keyword. The variable d may be used in the main() function and is explicitly passed as an argument to someFunction() as the parameter e. The variable c may not be used in the main() function, since it is undefined in the scope of that function. The following lines show which variables are available to the two functions:

   main(): a, b, d
   someFunction(): a, b, c, e

It is possible, though not usually a good idea, to have local and global variables with the same name. In such a case, a global variable must be referenced as a property of the global object, and the variable name used by itself refers to the local variable. In the fragment above, the global variable a can be referenced anywhere in its script by using:

   global.a

There are no absolute rules for preferring or using global or local variables. In general, programmers prefer to use local variables when reasonable, since they facilitate modular code that is easier to alter and develop over time. It is generally easier to understand how local variables are used in a single function than how global variables are used throughout an entire program. Further, local variables conserve system resources.
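The scope rules above can be exercised in a small standalone sketch. The sketch below assumes Node.js (where the global object also happens to be named global) and stubs FieldCommander's writeLog() with console.log; both are assumptions for the purpose of running the example outside FieldCommander.

```javascript
// Global vs. local scope, following the main()/someFunction() example above.
var writeLog = console.log;   // stub for FieldCommander's writeLog()

global.a = 1;                 // a global variable, reachable as global.a

function main() {
  global.b = 1;               // a variable made global explicitly
  var d = 3;                  // d is local to main()
  return someFunction(d);
}

function someFunction(e) {
  var c = 2;                  // c is local to someFunction()
  // a and b are visible everywhere; c and e only inside this function
  return global.a + global.b + c + e;
}

var total = main();
writeLog(total);              // 1 + 1 + 2 + 3 = 7
```

Trying to read c or d outside their functions would raise an error, which is the practical meaning of "local scope".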
1.3.4 Function identifier

Functions are identified by names that follow the same rules for identifiers as variable names do. Functions perform script operations, and variables store data. Functions do the work of a script and will be discussed in more detail later. The reason they are mentioned here is simply to point out that they have identifiers, as variables do.

1.3.5 Function scope

Functions are all global in scope, much like global variables. A function may not be declared within another function so that its scope is merely within a certain function or section of a script. All functions may be called from anywhere in a script. If it is helpful, think of functions as methods of the global object. The following two code fragments do exactly the same thing. The first calls a function, SumTwo(), as a function, and the second calls SumTwo() as a method of the global object.

   // fragment one
   function SumTwo(a, b)
   {
      return a + b
   }
   writeLog(SumTwo(3, 4))

   // fragment two
   function SumTwo(a, b)
   {
      return a + b
   }
   writeLog(global.SumTwo(3, 4))

In the fragment above, which defines and uses the function SumTwo(), the literals, 3 and 4, are passed as arguments to the function SumTwo(), which has corresponding parameters, a and b. The parameters, a and b, are variables for the function that hold the literal values that were passed to it.

1.4 Data types

Data types in Javascript can be classified into three groupings: primitive, composite, and special. Data types need to be understood in terms of their literal representations in a script and of their characteristics as variables. In a script, data can be represented by literals or variables. The following lines illustrate variables and literals:

   var TestVar = 14
   var aString = "test string"

Data, in literal or variable form, is assigned to a variable with an assignment operator, which is often merely an equal sign, "=", as the lines above illustrate. The variable TestVar is assigned the literal 14, and the variable aString is assigned the literal "test string". After these assignments of literal values to variables, the variables can be used anywhere in a script where the literal values could be used.

The first time a variable is used, its type is determined by the interpreter, and the type remains until a later assignment changes the type automatically.

   var happyVariable = 7
   var joyfulVariable = "free chocolate"
   var theWorldIsFlat = true
   var happyToo = happyVariable

The example above creates three variables, each of a different type, and a fourth that receives a copy of the first. The first is a number, the second is a string, and the third is a boolean variable. Since Javascript automatically converts variables from one type to another when needed, programmers normally do not have to worry about type conversions as they do in strongly typed languages, such as C.

1.4.1 Primitive data types

Variables that have primitive data types pass their data by value, that is, by actually copying the data to the new location. The primitive data types are: Number, Boolean, and String. The following fragment illustrates:

   var a = "abc";
   var b = ReturnValue(a);
   function ReturnValue(c)
   {
      return c;
   }

After "abc" is assigned to variable a, two copies of the string "abc" exist: the original literal and the copy in the variable a. While the function ReturnValue is active, the parameter/variable c has a copy, and three copies of "abc" exist. If c were to be changed in such a function, variable a, which was passed as an argument to the function, would remain unchanged. After the function ReturnValue is finished, a copy of "abc" is in the variable b, but the copy in the variable c in the function is gone because the function is finished. During the execution of the fragment, as many as three copies of "abc" exist at one time, counting the original string literal.

Number type

Integer

Integers are whole numbers, such as 1 or 10. Javascript has three notations for integers: decimal, hexadecimal, and octal.

Decimal

Decimal notation is the way people write numbers in everyday life and uses base 10 digits from the set of 0-9. Decimal integers are the most common numbers encountered in daily life. Examples are: 1, 10, and 999.

   var a = 101

Hexadecimal

Hexadecimal notation uses base 16 digits from the sets of 0-9, A-F, and a-f. These digits are preceded by 0x. Javascript is not case sensitive when it comes to hexadecimal numbers. Examples are: 0x1, 0x01, 0x1F, 0x1f, 0x100, and 0xABCD.

   var a = 0x1b2E

Octal

Octal notation uses base 8 digits from the set of 0-7. These digits are preceded by 0. Examples are: 00, 05, and 077.

   var a = 0143

Floating point

Floating point numbers are numbers with fractional parts, which are often indicated by a period. Floating point numbers are often referred to as floats.

Decimal floats

Decimal floats use the same digits as decimal integers but allow a period to indicate a fractional part. Examples are: 0.32, 1.44, and 99.45.

   var a = 100.55 + .45

Scientific floats

Scientific floats are often used in the scientific community for very large or small numbers. They use the same digits as decimals plus exponential notation. Scientific notation is sometimes referred to as exponential notation. Examples are: 4.33, 4.087e2, 4.087E2, 4.087e+2, 4.087E-2, and 1.333e-2.

   var a = 4.321e33 + 9.333e-2

String type

A String is a series of characters linked together. A string is written using quotation marks, for example: "I am a string", 'so am I', `me too`. Strings, though classified as a primitive, are actually a hybrid type that shares characteristics of primitive and composite data types. The string "344" is different from the number 344. The first is an array of characters, and the second is a value that may be used in numerical calculations. If a number is used in a string context, it is converted to a string. If a string is used in a number context, it is converted to a numeric value. Javascript automatically converts strings to numbers and numbers to strings, depending on context. Automatic type conversion is discussed more fully in a later section. Strings are discussed more fully in a later section.

Boolean type

Booleans may have only one of two possible values: false or true. When a Boolean is used in a numeric context, it is converted to 0 if it is false, and 1 if it is true. Booleans can be used as they are in languages such as C, namely, false is zero, and true is non-zero. A script is more precise when it uses the actual Javascript values, false and true, but it will work using the concepts of zero and not zero.
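The integer and float notations above all produce ordinary numbers, which the sketch below demonstrates. Node.js is assumed, with writeLog() stubbed by console.log; note that modern JavaScript spells octal literals with a 0o prefix (the bare leading 0 shown in the manual is a legacy form that strict-mode engines reject).

```javascript
// Numeric literal notations and boolean-to-number conversion.
var writeLog = console.log;   // stub for FieldCommander's writeLog()

var dec = 101;        // decimal
var hex = 0x1F;       // hexadecimal: 1*16 + 15 = 31
var oct = 0o17;       // octal: 1*8 + 7 = 15 (older engines wrote this 017)
var sci = 4.087e2;    // scientific float: 408.7

writeLog(hex);        // 31
writeLog(oct);        // 15
writeLog(sci);        // 408.7

// Booleans convert in numeric contexts: false to 0, true to 1.
writeLog(true + 1);   // 2
writeLog(false + 1);  // 1
```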
1.4.2 Composite data types

Whereas primitive types are passed by value, composite types are passed by reference. When a composite type is assigned to a variable or passed to a parameter, only a reference that points to its data is passed. The composite data types are: Object and Array.

Object type

An object is a compound data type, consisting of one or more pieces of data of any type which are grouped together in an object. Data that are part of an object are called properties of the object. The Object data type is similar to the structure data type in C and in previous versions of Javascript. The object data type also allows functions, called methods, to be used as object properties. Indeed, in Javascript, functions are considered to be like variables. But for practical programming, think of objects as having methods, which are functions, and properties, which are variables and constants. The following fragment illustrates:

   var AnObj = new Object;
   AnObj.name = "Joe";
   AnObj.old = ReturnName(AnObj)
   function ReturnName(CurObj)
   {
      return CurObj.name
   }

After the object AnObj is created, the string "Joe" is assigned, by value since a property is a variable within an Object, to the property AnObj.name, and a copy of the string "Joe" is transferred to the property. When AnObj is passed to the function ReturnName, it is passed by reference. CurObj does not receive a copy of the Object, but only a reference to the Object. With this reference, CurObj can access every property and method of the original. If CurObj.name were to be changed while the function was executing, then AnObj.name would be changed at the same time. When AnObj.old receives the return from the function, the return is assigned by value, and a copy of the string "Joe" is transferred to the property. Thus, AnObj holds two copies of the string "Joe": one in the property .name and one in the property .old. Three total copies of "Joe" exist, counting the original string literal. Objects and their characteristics are discussed more fully in a later section.

Array type

An array is a series of data stored in a variable that is accessed using index numbers that indicate particular data. The following fragments illustrate the storage of the data in separate variables or in one array variable:

   var Test0 = "one";
   var Test1 = "two";
   var Test2 = "three";

   var Test = new Array;
   Test[0] = "one";
   Test[1] = "two";
   Test[2] = "three";

After either fragment is executed, the three strings are stored for later use. In the first fragment, three separate variables have the three separate strings. These variables must be used separately. In the second fragment, one variable holds all three strings. This array variable can be used as one unit, and the strings can be accessed individually. Arrays may be considered as a data type of their own. Arrays and Objects are both objects in Javascript with different notations for accessing properties. The similarities, in grouping, between Arrays and Objects are more than slight. Arrays and their characteristics are discussed more fully in a later section.
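The by-value versus by-reference distinction above can be seen directly by modifying a parameter inside a function. The sketch assumes Node.js, with writeLog() stubbed by console.log.

```javascript
// Primitives copy their data; objects pass a reference, so changes made
// through a parameter are visible to the caller.
var writeLog = console.log;   // stub for FieldCommander's writeLog()

function changeName(curObj) {
  curObj.name = "Moe";        // follows the reference: changes the caller's object
}

function changeString(s) {
  s = "Moe";                  // changes only the local copy
  return s;
}

var anObj = { name: "Joe" };
changeName(anObj);
writeLog(anObj.name);         // "Moe" -- the object was passed by reference

var aString = "Joe";
changeString(aString);
writeLog(aString);            // "Joe" -- the string was passed by value
```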
1.4.3 Special values

undefined

If a variable is created or accessed with nothing assigned to it, it is of type undefined. An undefined variable merely occupies space until a value is assigned to it. When a variable is assigned a value, it is assigned a type according to the value assigned. Though variables may be of type undefined, there is no literal representation for undefined. Consider the following invalid fragment:

   var test;
   if (test == undefined)
      writeLog("test is undefined")

After var test is declared, it is undefined since no value has been assigned to it. But the test, test == undefined, is invalid because there is no way to literally represent undefined.

null

The value null is a special data type that indicates that a variable is empty, a condition that is different from being undefined. A null variable holds no value, though it might have previously. The value null is an internal standard ECMAScript value, and the null type is represented literally by the identifier, null. Since null has a literal representation, assignments like the following are valid:

   var test = null;

The code fragment above will work if undefined is changed to null. Any variable that has been assigned a value of null can be compared to the null literal, as shown in the following:

   var test = null;
   if (test == null)
      writeLog("test is undefined")

The values undefined and null are two separate values. Because of automatic conversion in Javascript, the two values often, but not always, operate alike. In fact, the value NULL is defined as 0 and is used in some scripts as it is found in C based documentation. For practical programming, null is both useful and versatile.

NaN

The NaN type means "Not a Number"; NaN is an acronym for the phrase. NaN does not have a literal representation. To test for NaN, the function, global.isNaN(), must be used. When the global.parseInt() function tries to parse the string "a string" into an integer, it returns NaN, since "a string" does not represent a number like the string "22" does. The following fragment illustrates:

   var Test = "a string";
   if (isNaN(parseInt(Test)))
      writeLog("Test is Not a Number");

Number constants

Several numeric constants can be accessed as properties of the Number object.

   Constant                    Value                      Description
   Number.MAX_VALUE            1.7976931348623157e+308    Largest number (positive)
   Number.MIN_VALUE            2.2250738585072014e-308    Smallest number (positive)
   Number.NaN                  NaN                        Not a Number
   Number.POSITIVE_INFINITY    Infinity                   Number above MAX_VALUE
   Number.NEGATIVE_INFINITY    -Infinity                  Number below MIN_VALUE
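The special values above behave as described, with one caveat for running the sketch outside FieldCommander: modern JavaScript does allow the word undefined to be written literally, which the dialect documented here lacks, so the sketch tests with typeof instead. Node.js is assumed, with writeLog() stubbed by console.log.

```javascript
// undefined, null and NaN in practice.
var writeLog = console.log;    // stub for FieldCommander's writeLog()

var test;                      // declared, nothing assigned yet
writeLog(typeof test);         // "undefined"

test = null;                   // null has a literal representation
writeLog(test === null);       // true

var n = parseInt("a string");  // cannot be parsed as a number
writeLog(isNaN(n));            // true
writeLog(n === n);             // false -- NaN is not equal even to itself,
                               // which is why isNaN() must be used
```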
1.5 Automatic type conversion

When a variable is used in a context where it makes sense to convert it to a different type, Javascript automatically converts the variable to the appropriate type. Such conversions most commonly happen with numbers and strings. For example:

   "dog" + "house" == "doghouse"   // two strings are joined
   "dog" + 4 == "dog4"             // a number is converted to a string
   4 + "4" == "44"                 // a number is converted to a string
   4 + 4 == 8                      // two numbers are added
   23 - "17" == 6                  // a string is converted to a number

While subtracting a string from a number, or a number from a string, converts the string to a number and subtracts the two, adding the two converts the number to a string and concatenates them. Converting numbers to strings is fairly straightforward. However, when converting strings to numbers there are several limitations. Strings always convert to a base 10 number and must not contain any characters other than digits. The string "110n" will not convert to a number, because the Javascript interpreter does not know what to make of the "n" character. You can specify more stringent conversions by using the global.parseInt() and global.parseFloat() methods, functions that are not part of the ECMAScript standard. Javascript has many global functions to cast data as a specific type. These functions are described in the section on global functions that are specific to Javascript.

1.6 Properties and methods of basic data types

The basic data types, such as Number and String, have properties and methods assigned to them that may be used with any variable of that type. The properties and methods of the basic data types are retrieved in the same way as from objects. Many methods are instance methods, that is, they are used with instances of an object instead of the object itself. An instance method is not used with an object itself but only with instances of an object. For example, all String variables may use all String methods. The String substring() method is an instance method of the String object; it is never used with the String object as String.substring(). The Math.abs() method, in contrast, is a static method: it is used directly with the Math object, not an instance of the Math object. It may be accessed as follows:

   var AbsNum = Math.abs(-3)

The variable AbsNum now equals 3. Why? It is assigned the number 3, which is the return of the Math.abs() method. The variable AbsNum is an instance of the Number object.

The following two methods are common to all variables and data types.

1.6.1 toString()

This method returns the value of a variable expressed as a string. Every data type has toString() as a method. Thus, toString() is documented here and not in every conceivable place that it might be used. For example, if you have a numeric variable and you want to convert it to a string, you can use the toString() method as illustrated in the following fragment:

   var n = 5
   var s = n.toString()

After this fragment executes, the variable n contains the number 5 and the variable s contains the string "5".

1.6.2 valueOf()

This method returns the value of a variable. Every data type has valueOf() as a method. Thus, valueOf() is documented here and not in every conceivable place that it might be used. For the most part, these methods are used internally by the interpreter, but you may use them if you choose.

1.7 Operators

1.7.1 Object operator

The object operator is a period, ".". This operator allows properties and methods of an object to be accessed and used. A method or property is simply attached to an appropriate identifier using the object operator. The following fragment declares and initializes a string variable, s, which is an instance of the String object, and then uses the String substring() method with this instance by using the object operator:
   var s = "One Two Three";
   var NewString = s.substring(4, 7);

The variable NewString now equals "Two" and is also an instance of the String object, since the String substring() method returns a string. The main point here is that the period, ".", is an object operator that may be used with both static and instance methods and properties. Square brackets, "[]", also act as an object operator: the interpreter evaluates the information inside square brackets, which then accesses the method or property.

1.7.2 Mathematical operators

Mathematical operators are used to make calculations using mathematical data. The following sections illustrate the mathematical operators in Javascript.

Basic arithmetic

The arithmetic operators in Javascript are pretty standard.

   =   assignment       assigns a value to a variable
   +   addition         adds two numbers
   -   subtraction      subtracts a number from another
   *   multiplication   multiplies two numbers
   /   division         divides a number by another
   %   modulo           returns a remainder after division

The following are examples using variables and arithmetic operators. In the following examples, the comments, such as "(2+3)", are summaries of calculations provided with these examples and not part of the calculations.

   var i
   i = 2       // i is now 2
   i = i + 3   // i is now 5, (2+3)
   i = i - 3   // i is now 2, (5-3)
   i = i * 5   // i is now 10, (2*5)
   i = i / 3   // i is now 3, (10/3) (remainder is ignored)
   i = 10      // i is now 10
   i = i % 3   // i is now 1, (10%3)

Expressions may be grouped to affect the sequence of processing. Expressions inside parentheses are processed first, before other calculations. All multiplications and divisions are calculated for an expression before additions and subtractions, unless parentheses are used to override the normal order. Notice that:

   4 * 7 - (5 * 3)       [28 - 15 = 13]

has the same meaning, due to the order of precedence, as:

   (4 * 7) - (5 * 3)     [28 - 15 = 13]

but has a different meaning than:

   4 * (7 - 5) * 3       [4 * 2 * 3 = 24]

which is still different from:

   4 * (7 - (5 * 3))     [4 * -8 = -32]

The use of parentheses is recommended in all cases where there may be confusion about how the expression is to be evaluated.

Assignment arithmetic

Each of the above operators can be combined with the assignment operator, =, as a shortcut for performing operations: +=, -=, *=, /=, and %=. Such assignments use the value to the right of the assignment operator to perform an operation with the value to the left. The result of the operation is then assigned to the value on the left.

   var i
   i = 2      // i is now 2
   i += 3     // i is now 5, (2+3), same as i = i + 3
   i -= 3     // i is now 2, (5-3), same as i = i - 3
   i *= 5     // i is now 10, (2*5), same as i = i * 5
   i /= 3     // i is now 3, (10/3), same as i = i / 3
   i = 10     // i is now 10
   i %= 3     // i is now 1, (10%3), same as i = i % 3

Auto-increment (++) and auto-decrement (--)

To add or subtract one, to or from a variable, use the auto-increment, ++, or auto-decrement, --, operator. These operators add or subtract 1 from the value to which they are applied. Thus, i++ is a shortcut for i += 1, which is a shortcut for i = i + 1.

These operators can be used before, as a prefix operator, or after, as a postfix operator, their variables. If they are used before a variable, it is altered before it is used in the statement, and if used after, the variable is altered after it is used in the statement. The following lines demonstrate prefix and postfix operations.

   var i, j
   i = 4      // i is 4
   j = ++i    // j is 5, i is 5 (i was incremented before use)
   j = i++    // j is 5, i is 6 (i was incremented after use)
   j = --i    // j is 5, i is 5 (i was decremented before use)
   j = i--    // j is 5, i is 4 (i was decremented after use)
   i++        // i is 5 (i was incremented)

1.7.3 Bit operators

Javascript contains many operators for operating directly on the bits in a byte or an integer. Bit operations require a knowledge of bits, bytes, integers, binary numbers, and hexadecimal numbers. Not every programmer needs to or will choose to use bit operators.

   <<     shift left                          i = i << 2
   <<=    assignment shift left               i <<= 2
   >>     shift right                         i = i >> 2
   >>=    assignment shift right              i >>= 2
   >>>    shift right with zeros              i = i >>> 2
   >>>=   assignment shift right with zeros   i >>>= 2
   &      bitwise and                         i = i & 1
   &=     assignment bitwise and              i &= 1
   |      bitwise or                          i = i | 1
   |=     assignment bitwise or               i |= 1
   ^      bitwise xor, exclusive or           i = i ^ 1
   ^=     assignment bitwise xor              i ^= 1
   ~      bitwise not, complement             i = ~i

1.7.4 Logical operators and conditional expressions

Logical operators compare two values and evaluate whether the resulting expression is false or true. A variable, or any other expression, may be false or true, that is, zero or non-zero. The value false is zero, which is only one value, and true is not false, that is, anything not zero: basically, everything except 0, which can be many values. It is often safer to make comparisons based on false, rather than to true.

An expression that does a comparison is called a conditional expression. Logical operators are used to make decisions about which statements in a script will be executed, based on how a conditional expression evaluates. Expressions can be combined with logic operators to make complex true/false decisions.

As an example, suppose that you are designing a simple guessing game. The computer thinks of a number between 1 and 100, and you guess what it is. The computer tells you if you are right or not and whether your guess is higher or lower than the target number. This procedure uses the if statement, which is introduced in the next section. Basically, if the conditional expression in the parenthesis following an if statement is true, the statement block following the if statement is executed. If false, the statement block is ignored, and the computer continues executing the script at the next statement after the ignored block. The script might have a structure similar to the one below, in which GetTheGuess() is a function that gets your guess.

   var guess = GetTheGuess(); //get the user input
   if (guess > target_number) {
      ...guess is too high...
   }
   if (guess < target_number) {
      ...guess is too low...
   }
   if (guess == target_number) {
      ...you guessed the number!...
   }

This example is simple, but it illustrates how logical operators can be used to make decisions in Javascript. The logical operators are:

   !    not            reverses an expression. If (a+b) is true, then !(a+b) is false.
   &&   and            true if, and only if, both expressions are true, else false. Since both
                       expressions must be true for the statement as a whole to be true, if the
                       first expression is false, there is no need to evaluate the second, since
                       the whole expression is false.
   ||   or             true if either expression is true, else false. Since only one of the
                       expressions in the or statement needs to be true for the expression to
                       evaluate as true, if the first expression evaluates as true, the
                       interpreter returns true and does not bother with evaluating the second.
   ==   equality       true if the values are equal, else false.
   !=   inequality     true if the values are not equal, else false.
   ===  identity       true if the values are identical or strictly equal. No type conversions
                       are performed as with the equality operator.
   !==  non-identity   true if the values are not identical or not strictly equal. No type
                       conversions are performed as with the inequality operator.
   <    less than      a < b is true if a is less than b.
   >    greater than   a > b is true if a is greater than b.
   <=   less than or equal to      a <= b is true if a is less than or equal to b.
   >=   greater than or equal to   a >= b is true if a is greater than or equal to b.
the variable result is set to a string that is represents the variable's type: "undefined". ==. the second line displays true. the assignment operator. six:666}. five:555. "two". 1. determines if a variable is an instance of a particular object.7 typeof operator The typeof operator. This is a common pitfall. and an object with three properties: four.6 instanceof operator The instanceof operator. "number". Any memory cleanup is handled by normal garbage collection. CER International bv 24 .five). is different than the equality operator. =. Since the variable s is created as an instance of the String object in the following code fragment. "boolean". var o = {four:444. var a = {"one". delete(a[1]). with or without parentheses. Deleted properties and arrays are actually undefined. // Displays true The second line could also be written as: writeLog(s instanceof(String)). your script will not function the way you want it to. 1. which also may used as instanceof(). and there is no way the computer can differentiate them by context. or "function". and 2. It then deletes the middle. If you use one equal sign when you intend two. delete(o. The following fragment defines an array with three elements: 0. element of the array and property of the object. var result = typeof variable var result = typeof(variable) After either line. provides a way to determine and to test the data type of a variable and may use either of the following notations. "object".7. 8. } } else if ( goo > 10 ) { writeLog("goo is greater than 10"). } 1. } CER International bv 25 . it looks like the following. It allows you to tell your program to do something else if the condition in the if statement was found to be false. if ( goo < 10 ) { writeLog("goo is less than 10"). else can be combined with if to match one out of a number of possible conditions.8. if ( goo < 0 ) { writeLog("goo is negative. so it's less than 10"). It allows you to test a condition and act on it. In Javascript code. 1. 
1.8 Flow decisions statements

This section describes statements that control the flow of a program. Use these statements to make decisions and to repeatedly execute statement blocks.

1.8.1 if

The if statement is the most commonly used mechanism for making decisions in a program. It allows you to test a condition and act on it. If an if statement finds the condition you test to be true, the statement or statement block following it is executed. In Javascript code, it looks like the following.

   if ( goo < 10 ) {
      writeLog("goo is smaller than 10");
   }

1.8.2 else

The else statement is an extension of the if statement. It allows you to tell your program to do something else if the condition in the if statement was found to be false. The following fragment illustrates using else with if.

   if ( goo < 10 ) {
      writeLog("goo is smaller than 10");
   } else {
      writeLog("goo is not smaller than 10");
   }

To make more complex decisions, else can be combined with if to match one out of a number of possible conditions.

   if ( goo < 10 ) {
      if ( goo < 0 ) {
         writeLog("goo is negative, so it's less than 10");
      } else {
         writeLog("goo is less than 10");
      }
   } else if ( goo > 10 ) {
      writeLog("goo is greater than 10");
   } else {
      writeLog("goo is 10");
   }
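A compact way to see which branch runs is to wrap the chain above in a function (a hypothetical classify helper, standard JavaScript) and call it with different values of goo:

```javascript
// The else-if chain above, wrapped in a function so each branch can be
// checked for a given value of goo.
function classify(goo) {
   if (goo < 10) {
      if (goo < 0) {
         return "negative";
      } else {
         return "less than 10";
      }
   } else if (goo > 10) {
      return "greater than 10";
   } else {
      return "10";
   }
}
```

Calling classify(-3), classify(5), classify(11), and classify(10) exercises each of the four branches in turn.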
1.8.3 while

The while statement is used to execute a particular section of code, over and over again, until an expression evaluates as false.

   while (expression) {
      DoSomething();
   }

When the interpreter comes across a while statement, it first tests to see whether the expression is true or not. If the expression is true, the interpreter carries out the statement or statement block following it. Then the interpreter tests the expression again. A while loop repeats until the test expression evaluates to false, whereupon the program continues after the code associated with the while statement. The following fragment illustrates a while statement with two lines of code in a statement block.

   while( ThereAreUncalledNamesOnTheList() != false) {
      var name = GetNameFromTheList();
      sendMail("mymessage.txt", "Alert", "address@domain.com");
   }

1.8.4 do {...} while

The do statement is different from the while statement in that the code block is executed at least once, before the test condition is checked.

   var value = 0;
   do {
      value++;
      ProcessData(value);
   } while( value < 100 );

The code used to demonstrate the while statement could also be written as the following fragment.

   do {
      var name = GetNameFromTheList();
      sendMail("mymessage.txt", "Alert", "address@domain.com");
   } while (name != TheLastNameOnTheList());

Of course, if there are no names on the list, the script will run into problems!

1.8.5 for

The for statement is a special looping statement. It allows for more precise control of the number of times a section of code is executed. The for statement has the following form.
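The warning above about an empty list can be made concrete. In standard JavaScript (the empty list and the two counters here are illustrative stand-ins), a while loop with a false condition never runs its body, but a do block always runs at least once:

```javascript
// Contrast: while tests before the body, do/while tests after it.
var list = [];            // an empty list - nothing to process
var whileRuns = 0;
var doWhileRuns = 0;

var i = 0;
while (i < list.length) { // condition is false at entry: body never runs
   whileRuns++;
   i++;
}

var j = 0;
do {                      // body runs once before the condition is tested
   doWhileRuns++;
   j++;
} while (j < list.length);
```

This is why a do/while version of the mailing loop needs a guard when the list may be empty.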
If the result is true or if there is no conditional expression. and do {. switch.. You can use an empty for statement to create an endless loop: for(. while. ) { var name=GetNameFromTheList(). Then the loop_expression is executed. there is no need to have an initialization or loop_expression statement.. beginning the loop again. If the expression evaluates as false. and default The switch statement makes a decision based on the value of a variable or statement. .8 switch. beginning: writeLog("Enter a number less than 2:") var x = getche(). The following code fragment continuously prompts for a number until a number less than 2 is entered. CER International bv 28 .7 continue The continue statement ends the current iteration of a loop and begins the next. value2. case value2: statement2 break. the default statement is executed. if the break statement after the writeLog("B") statement were omitted.. The statement or statements following the matched case are executed until the end of the switch block is reached or until a break statement exits the switch block.FieldCommander JavaScript Refererence Guide 1. case.. If no match is found. Any conditional expressions are reevaluated before the loop reiterates. since the interpreter executes commands until a break statement is encountered. A common mistake is to omit a break statement to end each case. The switch statement follows the following format: switch( switch_variable ) { case value1: statement1 break.8. In the preceding example. .9 goto and labels You may jump to any location within a function block by using the goto statement. The continue statement works with the same loops as the break statement. 1.8. if there is one. . default) until a match is found. //get a value for x if (a >= 2) goto beginning. and then it is compared to all of the values in the case statements (value1. . 1. the computer would print both "B" and "C". . The syntax is: goto LABEL.8. where label is an identifier followed by a colon (:). 
default: default_statement } The variable switch_variable is evaluated. The following fragment illustrates the use of the conditional operator. If all exceptions have been handled when execution reaches the finally statement block. then expression_if_true is evaluated. Testing for simple errors and unwanted results is usually handled most easily with familiar if or switch statements.FieldCommander writeLog(a). It is harder to read than conventional if statements. the final code is executed. The syntax is: test_expression ? expression_if_true : expression_if_false First. If test_expression is false.10 Conditional operator The conditional operator. the exception thrown is caught and handled in a catch statement block. and the value of the entire expression replaced by the value of expression_if_true. A function has code in it to detect unusual results and to throw an exception. the rest of the code in a function is ignored.8.8. 1. catch. If there is a problem in the function. provides a shorthand method for writing if statements. and the value of the entire expression is that of expression_if_false. Remember these execution guides: When a throw statement executes. 1.11 Exception handling Exception handling statements consist of: throw. since explanation and illustration are the goals. foo = ( 5 < 6 ) ? 100 : 200. try. // foo is set to 100 writeLog("Name is " + ((null==name) ? "unknown" : name)). the discussion and examples deal with simple situations. In this section. but do not lose sight of the fact that they are very powerful and elegant in real world programming where error recovery can be very complex and require much code when using traditional statements. The function is called from inside a try statement block which tries to run the function successfully. here is some generalized phrasing that might help working with exception handling statements. since they make it difficult to track program flow. and finally. then expression_if_false is evaluated. 
If test_expression is non-zero. JavaScript Refererence Guide As a rule. Exception handling that uses the try related statements is most useful with complex error handling and recovery. and so is generally used when the expressions in the if statements are brief. Another advantage of using try related exception handling is that much of the error trapping code may be in a function rather than in the all the places that call a function. test_expression is evaluated. goto statements should be used sparingly. Before getting to specifics. true. and the function does not return a value. A program continues in the next catch statement block after the try statement block in CER International bv 29 . The exception handling statements might seem clumsy or bulky here. The concept of exception handling includes dealing with unusual results in a function and with errors and recovery from them. "? :". it is fixed. For example. as an argument. } catch (err) { // Catch the exception info that was thrown by the function. then the number is squared and returned. } finally { // Finally.FieldCommander JavaScript Refererence Guide which an exception occurred. assume that an odd number being passed to SquareEven() is an error or extraordinary event. The following simple script illustrates all exception handling statements. When the throw statement executes. as shown. For purposes of this illustration. If you change rtn = SquareEven(4) to rtn = SquareEven(3). make even. If an even number is passed to the function.rtn). have been caught and handled. the script below. // If odd. If an odd number is passed. which throws an exception if an odd number is passed to it. writeLog("We caught odd and squared even. // "throw an exception" to be caught by caller if ((num % 2) != 0) { CER International bv 30 . and an exception is thrown. A program executes a finally statement block if all exceptions. function main() { var rtn. 
display this line after normal processing // or exceptions have been caught. with information for the catch statement to use. that have been thrown. The try block calls SquareEven()."). // No display here if number is odd writeLog(rtn). the display is: Fixed odd number to next higher even. try { rtn = SquareEven(4). // In this case. writeLog(err.. and any value thrown is caught in a parameter in the catch statement. } } // Check for odd integers.msg + err. The example script below does not actually handle errors. catch. simplistic by adding 1 // Square even number function SquareEven(num) { // Catch an odd number and fix it. the info was returned in an object. and finally statement blocks. The main() function has try. Its purpose is to illustrate how exception handling statements work. displays: 16 We caught odd and squared even. it passes an object. 16 We caught odd and squared even. 1. The interpreter uses the function nearest the end of the script.9 Functions A function is an independent section of code that receives information from a program and performs some action with it. using descriptive function names helps you keep track of what is going on with your script. Just call the function. the last function to load is the one that to be executed when the function name is called. } // Normal return for an even number. ".jsh files. We could have thrown a primitive. "()". // We would have to alter the catch statement to expect whatever data // type is used. You only need to know what information the function needs to receive. Javascript allows you to have two functions with the same name. By taking advantage of this behavior. meaning it has no return value.9. writeLog is a void function. Any code in a function following the execution of a return statement is not executed. throw {msg:"Fixed odd number to next higher even. you do not have to think again about how to perform the operations in it. following their names. 
You can use a function anywhere you can use a variable. Once a function has been written. writeLog() is an example of a function which provides an easy way to write formatted text to the Diagnostic Log. evaluating to whatever the function's return value is. you can write functions that supersede the ones included in the interpreter or .FieldCommander JavaScript Refererence Guide num += 1. These functions are described in this manual. return num * num. rtn:num * num}. } 1. the parameters. functions are declared with the "function" keyword.1 Function return statement The return statement passes a value back to the function that called it. and let it handle the work for you. It receives a string from the function that called it and writes the string to the Log. In Javascript. that is. functions are considered a data type. and whether it returns a value to the statement that called it. // We throw an object here. Several sets of built-in functions are included as part of the Javascript interpreter. such as: // throw("Caught and odd"). Data to be passed to a function is included within these parentheses. and functions have the function operator. function DoubleAndDivideBy5(a) { return (a*2)/5 } CER International bv 31 . that is. Two things set functions apart from the other variable types: instead of being declared with the "var" keyword. Like comments. Any valid variable name may be used as a function name. They are internal to the interpreter and may be used at any time. the values of variables is: num1 == 5 num2 == 4 num3 == 5 The variable num1 was passed by reference to parameter n1. If a function changes one of these variables. the data that is passed to a function are called arguments. Instead of passing the value of the object. num3. are passed by value. num2 remained CER International bv 32 . Thus. the changes will not be visible outside of the function where the change took place. Composite types. are passed by reference. 
may be put in front of one or more of its parameters. var b = DoubleAndDivideBy5(20). an argument. n2. The reference indicates where in a computer's memory that values of an object's properties are stored. The variable num2 was passed by value to parameter n2. JavaScript Refererence Guide 1. is passed by reference instead of by value. When a function is defined. In Javascript it is possible to pass primitive types by reference instead of by value. SetNumbers(num1. that change will be reflected throughout in the calling routine. The value of theses variables are passed to a function. a reference to the object is passed. objects and arrays. that is. numbers. var num3. } This script displays 12. function main() { var a = DoubleAndDivideBy5(10). When n2. when the function is called. &. and the variables in a function definition that receive the data are called parameters. } After executing this code. the values of each property. which is the default. &n4) { n1 = n2 = n3 = n4 = 5. var num1 = 4. depending on the type of variable being passed. var num2 = 4. which received an actual value of 4. strings. &n3. namely. Such distinctions ensure that information gets to functions in the most complete and logical ways.9. When n1 was set to 5. num2. writeLog(a + b). If you make a change in a property of an object passed by reference. To be technically correct. 6) function SetNumbers(&n1. num1 was actually set to 5 since n1 merely pointed to num1. and booleans. an ampersand. corresponding to a parameter with an ampersand. Primitive types.FieldCommander Here is an example of a script using the above function. The following fragment illustrates.2 Passing information to functions Javascript uses different methods to pass variables to functions. for example. with a width of 5 and a height of 3. for example. This line creates a new array with three elements set to 1.a == 1. and c set to the values shown. 2. The following code fragment shows the differences. c:3}. 
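The by-value versus by-reference behavior described above can be sketched in standard JavaScript (which lacks the & prefix; only the default rules are shown here): a number parameter is a local copy, while an object parameter refers to the caller's object.

```javascript
// Primitives are passed by value; objects are passed by reference.
function change(num, obj) {
   num = 99;         // changes only the local copy of the number
   obj.width = 99;   // changes the caller's object through the reference
}

var n = 4;
var rect = { width: 4 };
change(n, rect);
// n is still 4; rect.width is now 99
```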
The examples above create a Rectangle class and two instances of it. Constructor functions create objects belonging to the same class. Every object created by a constructor function is called an instance of that class. All of the instances of a class share the same properties, although a particular instance of the class may have additional properties unique to it.

   var joe = Rectangle(3, 4);
   var sally = Rectangle(5, 3);

This code fragment creates two rectangle objects: one named joe, with a width of 3 and a height of 4, and another named sally, with a width of 5 and a height of 3. For example, if we add the following line:

   joe.motto = "ad astra per aspera";

we add a motto property to the Rectangle joe. But the rectangle sally has no motto property.

1.10.2 Initializers for objects and arrays

Variables may be initialized as objects and arrays using lists inside of "{}" and "[]". By using these initializers, instances of Objects and Arrays may be created without using the new constructor.

Object initializer. Objects may be initialized using a syntax similar to the following:

   var o = {a:1, b:2, c:3};

This line creates a new object with the properties a, b, and c set to the values shown. The properties may be used with normal object syntax, for example, o.a == 1.

Array initializer. Arrays may be initialized using a syntax similar to the following:

   var a = [1, 2, 3];

This line creates a new array with three elements set to 1, 2, and 3. The elements may be used with normal array syntax, for example, a[0] == 1.

The distinction between Object and Array initializer might be a bit confusing when using a line with syntax similar to the following:

   var a = {1, 2, 3};

This line also creates a new array with three elements set to 1, 2, and 3. It differs from the first line, in that there are no property identifiers, and from the second line, in that it uses "{}" instead of "[]". The elements may be used with normal array syntax, for example, a[0] == 1.

The following code fragment shows the differences.

   var o = {a:1, b:2, c:3};
   writeLog(typeof o +" | "+ o._class +" | "+ o);
   var a = [1, 2, 3];
   writeLog(typeof a +" | "+ a._class +" | "+ a);
   var a = {1, 2, 3};
   writeLog(typeof a +" | "+ a._class +" | "+ a);

The display from this code is:

   object | Object | [object Object]
   object | Array | 1,2,3
   object | Array | 1,2,3

As shown in the first display line, the variable o is created and initialized as an Object. The second and third lines both initialize the variable a as an Array. Notice that in all cases the typeof the variable is object, but the class, which corresponds to the particular object created and which is reflected in the _class property, shows which specific object is created and initialized.

1.10.3 Methods - assigning functions to objects

Objects may contain functions as well as variables. A function assigned to an object is called a method of that object. Like a constructor function, a method refers to its variables with the this operator. The following fragment is an example of a method that computes the area of a rectangle.

   function rectangle_area() {
      return this.width * this.height;
   }

Because there are no parameters passed to it, this function is meaningless unless it is called from an object. It needs to have an object to provide values for this.width and this.height. A method is assigned to an object as the following line illustrates.

   joe.area = rectangle_area;

The function will now use the values for height and width that were defined when we created the rectangle object joe.

Methods may also be assigned in a constructor function, again using the this keyword. For example, the following code:

   function rectangle_area() {
      return this.width * this.height;
   }
   function Rectangle(width, height) {
      this.width = width;
      this.height = height;
      this.area = rectangle_area;
   }

creates an object class Rectangle with the rectangle_area method included as one of its properties. The method is available to any instance of the class:

   var joe = Rectangle(3, 4);
   var sally = Rectangle(5, 3);
   var area1 = joe.area();
   var area2 = sally.area();

This code sets the value of area1 to 12, and the value of area2 to 15.

1.10.4 Object prototypes

An object prototype lets you specify a set of default values for an object. When an object property that has not been assigned a value is accessed, the prototype is consulted. If such a property exists in the prototype, its value is used for the object property.

Object prototypes are useful for two reasons: they ensure that all instances of an object use the same default values, and they conserve the amount of memory needed to run a script. When the two Rectangles, joe and sally, were created in the previous section, they were each assigned an area method. Memory was allocated for this function twice, even though the method is exactly the same in each instance. This redundant memory waste can be avoided by putting the shared function or property in an object's prototype. Then all instances of the object will use the same function instead of each using its own copy of it. The following fragment shows how to create a Rectangle object with an area method in a prototype.

   function rectangle_area() {
      return this.width * this.height;
   }
   function Rectangle(width, height) {
      this.width = width;
      this.height = height;
   }
   Rectangle.prototype.area = rectangle_area;

The rectangle_area method can now be accessed as a method of any Rectangle object, as shown in the following.

   var joe = Rectangle(3, 4);
   var sally = Rectangle(5, 3);
   var area1 = joe.area();
   var area2 = sally.area();

You can add methods and data to an object prototype at any time. If you assign a method or data to an object prototype, all instances of that object are updated to include the prototype. The object class must be defined, but you do not have to create an instance of the object before assigning it prototype values.

If you try to write to a property that was assigned through a prototype, a new variable will be created for the newly assigned value. This value will be used for the value of this instance of the object's property. All other instances of the object will still refer to the prototype for their values. If, for the sake of this example, we assume that joe is a special Rectangle, whose area is equal to three times its width plus half its height, we can modify joe as follows.

   joe.area = function joe_area() {
      return (this.width * 3) + (this.height/2);
   }

This fragment creates a value, which in this case is a function, for joe.area that supersedes the prototype value. The instance joe uses the new definition for its area method. The property sally.area is still the default value defined by the prototype.

1.10.5 for/in

The for . . . in statement is a way to loop through all of the properties of an object, even if the names of the properties are unknown. The statement has the following form.

   for (var property in object) {
      DoSomething(object[property]);
   }

where object is the name of an object previously defined in a script. For each iteration of the loop, the variable property contains the name of one of the properties of object and may be accessed with "object[property]". When using the for . . . in statement in this way, the statement block will execute once for every property of the object. Note that properties that have been marked with the DontEnum attribute are not accessible to a for . . . in statement.

1.10.6 with

The with statement is used to save time when working with objects. It lets you assign a default object to a statement block, so you need not put the object name in front of its properties and methods. The following fragment illustrates using the Math object.

   with (Math) {
      xxx = random() * 100;
      yyy = floor(xxx);
      writeLog(yyy);
   }

The Math methods, Math.random() and Math.floor() in the sample above, are called as if they had been written with Math prefixed. Global functions are still treated normally; you do not need to prefix "global." to them unless you are distinguishing between two like-named functions common to both objects.

All code in the block following a with statement is treated as if the methods associated with the object named by the with statement were global functions. The with statement only applies to the code within its own block, regardless of how the interpreter accesses or leaves the block. You may not use goto and labels to jump into or out of the middle of a with statement block. If you were to jump, from within a with statement, to another part of a script, the with statement would no longer apply.

1.11 Predefined constants and values
for example.12 Extending the FCscript One FCscript can be extended by including other FCscripts: #include "/home/public/script/morefunctions. Javascript versus C language This section is primarily for those who already know how to program in C.max(a. } could be converted to the following Javascript code: Clib.1 Automatic type declaration There are no type declarations nor type castings as found in C. the following C code: int max(int a. C programmers will appreciate this ability.2 Array representation This section on the representation of arrays in memory only deals with automatic arrays which are part of the C portion of Javascript. the variable i is a number type. The most basic idea underlying this section is that the C portion of Javascript is C without type declarations and pointers. return result. } } A with statement can be used with large blocks of code which would allow Clib methods to be called like C functions. 2. result = (a < b) ? b : a. with (Clib) { max(a. b) { var result = (a < b) ? b : a. return result.FieldCommander JavaScript Refererence Guide 2. Javascript uses constructor functions that create instances of Javascript arrays which are actually objects more than arrays. Other users who decide to use the extra power of C functions will come to appreciate this ability. var i = 6. } The code could be made even more like C by using a with statement as in the following fragment. return result. int b) { int result. 2. Most of the pertinent differences involve the Clib object. b) { var result = (a < b) ? b : a. In the statement. though novice programmers can learn more about the Clib objects and C concepts by reading it. The emphasis is on those elements of Javascript that differ from standard C. Everything said in CER International bv 41 . For example. Users who are not familiar with C should first read the section on Javascript. Types are determined from context. var ac[3][3]. 
is stored in consecutive bytes in memory.FieldCommander JavaScript Refererence Guide this section is about automatic arrays compared to C arrays. positive or negative. The following line creates an automatic array in Javascript.3 Automatic array allocation Arrays are dynamic. Arrays are used in Javascript much like they are in C. for example. then Javascript ensures that such an element exists. Javascript arrays are covered in the section on Javascript. just as in C.4 Literal strings A literal string in Javascript is any array of characters. to ensure that the element foo[6] exists. Back quotes are sometimes referred to as back-ticks. 2. Arrays can be of any order of dimensions. an array of numbers. If a later statement refers to foo[6] then Javascript expands foo. the reference is to an automatic array. single. For example. The following fragment creates a Javascript array. A single dimension array. then Javascript makes an array of 5 integers referenced by the variable foo. 2. the two arrays c[0] and c[1] are not necessarily adjacent in memory. The same is true for negative indices. appearing in source code within double. that is. When foo[-10] is referenced. For example. The methods and functions used to work with Javascript constructed arrays and Javascript automatic arrays are different. the property aj. but arrays of arrays are not in consecutive memory locations. except that they are stored differently. Though the characters in c[0] and the characters in c[1] are in consecutive bytes. and any index.length provides the length of the aj array. thus foo[6][7][34][-1][4] is a valid variable or array. but foo[4] still refers to the initial 7. if necessary. or back quotes. into an array is always valid. if a statement in a script is: var foo[4] = 7. In Javascript a similar statement such as the following: var c[2][2] = 'a'. // this is the Javascript version indicates that there are at least three arrays of characters. 
foo is grown in the negative direction if necessary. When the term array is used in the rest of this section. The following lines show examples of literal strings in Javascript: "dog" // literal string (double quote) CER International bv 42 . The following C declaration: char c[3][3]. // this is the C version indicates that there are nine consecutive bytes in memory. a string. If an element of an array is referenced. and the third array of arrays has at least three characters in it. but the function getArrayLength(ac)provides the length of the ac automatic array. The two arrays are different entities that require different methods and functions. var aj = new Array(). a copy is made of the string.'\0'} JavaScript Refererence Guide // literal string (single quotes) // literal string (back-ticks) // not a literal string. For example. if (animal == "dog") if (animal < "dog") if ("dog" <= animal) In Javascript. } results in the following output: doghouse doghouse doghouse A strict C interpretation of this code would not only overwrite memory.4.FieldCommander 'dog' `dog` {'d'.'g'. if (animal == "dog") writeLog("hush puppy"). writeLog(str). the following code: for (var i = 0.'o'. i++) { var str = "dog".1 Literal strings and assignments When a literal string is assigned to a variable. i < 3.4. 2. • • • To protect literal string data from being overwritten accidentally To reduce confusion for novice programmers who do not think of strings as arrays of bytes To simplify writing code for common operations.2 Literal strings and comparisons The following examples demonstrate how literal strings compare. the following fragment: var animal = "dog". but would also generate the following output: doghouse doghousehouse doghousehousehouse 2. but array initialization Literal strings have special treatment for certain Javascript operations for the following reasons. str = str + "house". and the variable is assigned the copy of the literal string. 
displays: "hush puppy" CER International bv 43 . foo is a Javascript object and animal is a property. i++) { var str = dog() + "house". i < 3. } CER International bv 44 . The following code: for (var i = 0. When Javascript encounters a statement such as: foo. that is.4. by value.5 Structures Structures are created dynamically.4 Literal strings and returns When a literal string is returned from a function by a return statement.animal = "dog" it creates a structure element of foo that is referenced by "animal" and that is an array of characters. The "animal" variable becomes an element of the "foo" variable. writeLog(str) } function dog() { return "dog". and their elements are not necessarily contiguous in memory. Though foo. writeLog(str) } results in the following output: doghouse doghouse doghouse 2. may be thought of and used as a structure and animal as an element. i < 3. i++) { var str = "dog" + "house". } results in the following output: doghouse doghouse doghouse 2. it is passed as a copy. The resulting code looks like regular C code. except that there is no separate structure definition anywhere. For example.4. int Column. The following C code: struct Point { int Row. the following code: for (var i = 0. in this example.3 Literal strings and parameters When a literal string is a parameter to a function. it is returned as a copy of the string.FieldCommander JavaScript Refererence Guide 2. in actuality. return( width * height ).animal. which allows a statement like the following.BottomLeft. sq.Row .BottomLeft. } can be easily converted into Javascript code as shown in the following. sq.Row + 1.bo Some operations.TopRight. return( width * height ).6 Pointer operator * and address operator & No pointers. } function AreaOfASquare(s) { var width = s.TopRight.TopRight. height.FieldCommander JavaScript Refererence Guide struct Square { struct Point BottomLeft.7 Case statements CER International bv 45 . } int AreaOfASquare(struct Square s) { int width. sq. 
width = s.Row = 1. int Area.TopRight. foo[8]. The * symbol never means pointer in Javascript.Row = 1. and modified just as any other variable. such as addition.TopRight.BottomLeft. 2. } Structures can be passed.Column .Row . But the situation turns out not to be such a big deal. } void main() { struct Square sq.Row + 1. function main() { var sq.Row = 82.Column = 15. *var can be replaced by var[0].Column + 1.TopRight.BottomLeft. struct Point TopRight. which might cause seasoned C programmers to gasp in disbelief. For example.BottomLeft.TopRight. structures and arrays are different and independent. height = s.Column = 120. None.Column = 15. returned.Column = 120.TopRight. var height = s. The pointer operator is easily replaced.s. sq.s.BottomLeft.s.BottomLeft.s. var Area = AreaOfASquare(sq).BottomLeft.forge[3] = bil. sq. Area = AreaOfASquare(sq).Row = 82. Of course. sq. 2.Column + 1.Column . sq. are not defined for structures. especially when used with return statements.sqrt(foe()): case (PILLBOX * 3 . results in the following output: first second third.8 Initialization code which is external to functions All code not inside a function block is interpreted before main() is called and can be thought of as initialization code. "(" and ")". being equal to 7. n and x. "foo()" and "foo(). then he should not. The use of semicolons is personal. then he should. C programmers are trained to use semicolons to end statements. Similarly. var n = 1 + 2 * 3 var x = 2 * 3 + 1 CER International bv 46 . they are usually unnecessary in Javascript which allows more flexibility in writing scripts and is less onerous for users not trained in C. are often unnecessary. Semicolons that end statements are usually redundant and do not do anything extra when a script is interpreted. Indeed.9 Unnecessary tokens If symbols are redundant. It does not hurt to use semicolons. But widespread or regular use of semicolons simply is not necessary.2): default: } 2. 
or other statements that can be evaluated to a value. but if he does not want to. function main() { writeLog("third. switch(i) { case 4: case foe(): case "thorax": case Math. 2." are identical.". the following fragment is valid and results in both of the variables. a practice that can be followed in Javascript. The following switch statement has case statements which are valid in Javascript. it shares characteristics of both batch and program scripts."). For example. When a script has initialization code outside of functions and code inside of functions. the following Javascript code: writeLog("first "). } writeLog("second "). such as "return. parentheses. some programmers think that the use of semicolons in Javascript is a good to be pursued. Thus. If a programmer wants to use them. In Javascript the two statements. variables.FieldCommander JavaScript Refererence Guide Case statements in a switch statement may be constants. Many people who are not trained in C wonder at the use of redundant semicolons and are sometimes confused by their use. The back quote character. strings that are delimited by back quotes do not translate escape sequences. For example. not unlike efforts to control the Internet. also known as a back-tick or grave accent.bat` // traditional C method. a macro gains little over a function call. var x = (2 * 3) + 1.12 Back quote strings Back quotes are not used at all for strings in the C language. As an example.11 Token replacement macros The #define preprocessor directive. The fragments could be rewritten to be: var n = 1 + 2 * 3 var x = 2 * 3 + 1 and: var n = 1 + (2 * 3).FieldCommander JavaScript Refererence Guide The following fragment is identical and is clearer. 2. Macros simply become functions. However. Since speed is not of primary importance in a scripting language. the following token replacement is recognized and implemented during the preprocessing phase of script interpretation. var n = 1 + (2 * 3). 
which can be thought of and used as a macro. Efforts to standardize programming styles over the last three decades have been abysmal failures. #define NULL 0 2. which is also // valid in Javascript // alternative Javascript method CER International bv 47 . the following two lines describe the same file name: "c:\\autoexec.bat" `c:\autoexec. but it requires more typing because of the addition of redundant tokens. Which fragment is better? The answer depends on personal taste. var x = (2 * 3) + 1. 2. is supported by Javascript.10 Macros Function macros are not supported. may be used in Javascript in place of double or single quotes to specify strings. `. Type declarations. int col. * and &. 3}. they usually refer to the base value of a pointer address. char. Since the * operator and & operator work together when the address of a variable is passed to a function.col // when needed. C code is on the left and can be replaced by the Javascript code on the right. var goo(a. Javascript var i. int zoo[] = {1. buf. // no struct type // Simply use st.13 Converting existing C code to Javascript Converting existing C code to Javascript is mostly a process of deleting unnecessary text. } char name[] = "George". Finally. struct { int row. CER International bv 48 . If code has * operators in it. should be deleted. var name = "George". // or nothing var foo = 3. char *s. and []. Another step in converting C to Javascript is to search for pointer and address operators. such as int. float. A statement like "*foo = 4" can be replaced by "foo[0] = 4". int goo(int a. these operators are unnecessary in the C portion of Javascript. int foo = 3. var st. 3}. var zoo = {1. 2. c). C int i.FieldCommander JavaScript Refererence Guide 2. 2. the -> operator in C which is used with structures may be replaced by a period for values passed by address and then by reference. The following two columns give examples of how to make such changes. struct.row // and st. int c). x < 32. April[1] = 233. 
suppose you wanted to keep track of how many jellybeans you ate each day. April[2] = 344."). array[1] = "fowl". April[3] = 155. The purpose is to ease the programming task by providing another easy to use tool for scripters. you can have an array with elements at indices 0 and 2 but none at 1. April[4] = 32. The simplest is to call the function with no parameters: CER International bv 49 .FieldCommander JavaScript Refererence Guide 3. Arrays usually start at index [0]. var April = new Array(). You can find out how many jellybeans you ate on day x by checking the value of April[x]: for(var x = 1.1 Creating arrays Like other objects.1. Note that arrays do not have to be continuous." The variables foo and goo must be either numbers or strings. Be careful not to confuse an array variable that has been constructed as an instance of the Array object with the automatic or dynamic arrays of Javascript. The elements in an array do not all need to be of the same type. For example. array[foo] = "creeping things" array[goo + 1] = "etc. they provide an easy way to work with sequential data. Since arrays use a number to identify the data they contain. There are three possible ways to use this function to create an array. arrays are created using the new operator and the Array constructor function. Array indices must be either numbers or strings. Now you have all your data stored conveniently in one variable. array["joe"] = new Rectangle(3. Array elements can be of any data type. not index [1]. called an index. 3. An Array is a special class of object that refers to its properties with numbers rather than with variable names. is written in brackets following an array name. The following statements demonstrate assigning values to arrays. Javascript API reference 3. Arrays provide an ideal solution for storing such data. that is. and there is no limit to the number of elements an array may have. 
The number used to identify an element.1 Array Object An Array object is an object in Javascript and is in the underlying ECMAScript standard. var array = new Array(). array[0] = "fish". Properties of an Array object are called elements of the array.4). The current section is about Array objects. Javascript offers automatic arrays in addition to the Array object of ECMAScript. x++) writeLog("On April " + x + " I ate " + April[x] + " jellybeans. so you can graph your jellybean consumption at the end of the month. b:2. CER International bv 50 . The parentheses are optional when creating a new array. c:3}. 2. "blast off"). The elements may be used with normal array syntax. for example. The elements may be used with normal array syntax. var b = new Array(31). The properties may be used with normal object syntax. In fact. instances of Objects and Arrays may be created without using the new constructor. This line initializes variable a as an array with no elements. 2. which is set to the string "blast off". in most circumstances. created as described in this paragraph. The following code fragment shows the differences. This line creates a new object with the properties a. If you wish to create an array of a predefined size. This line also creates a new array with three elements set to 1. so it is recommended that you use. Object initializer. 3]. Arrays may be initialized using syntax similar to the following: var a = [1. Finally. not array[1]. 2. c[1] is set to 4. Automatic arrays. an array with length 31 is created. For example: var c = new Array(5. The array that is created is an automatic or dynamic array which is different than an instance of an Array object created as described in this section. By using these initializers. Note that the first element of the array is array[0]. Array initializer. and c set to the values shown. c:3}. By referring to a variable with an index in brackets. c[0] is set to 5. 
in that there are no property identifiers and differs from the second line. 3}. The line differs from the first line. 2. for example. the new Array() constructor function to create arrays. for example. In this case. and so on up to c[5]. pass variable a the size as a parameter of the Array() function. creates an array with a length of 6. and 3. if there are no arguments. a variable is created as or converted to an array. The distinction between Object and Array initializer might be a bit confusing when using a line with syntax similar to the following: var a = {1. Arrays may also be created dynamically. o. b:2. 4. a[0] == 1. the second and third lines produce the same results. Initializers for arrays and objects Variables may be initialized as objects and arrays using lists inside of "{}" and "[]".FieldCommander JavaScript Refererence Guide var a = new Array(). and 3. 1. The following line creates an array with a length of the size passed. which creates an array containing all of the parameters passed. var o = {a:1. 2. a[0] == 1.a == 1. Objects may be initialized using syntax similar to the following: var o = {a:1. This line creates a new array with three elements set to 1. in that it uses "{}" instead of "[]". you can pass a list of elements to the Array()function. b. 3. are unable to use the methods and properties described below. 3]. with a value of 88.2. For example. global. By changing the value of the length property. ant[0] = 3. writeLog(typeof a +" | "+ a.length to 2.2. Notice that in all cases the typeof the variable is object. The display from this code is: object | Object | [object Object] object | Array | 1.getArrayLength()._class +" | "+ a). even though ant has twice // as many actual elements as bee does. var bee = new Array(). you can remove array elements. writeLog(typeof a +" | "+ a.length The length property returns one more than the largest index of the array.3 JavaScript Refererence Guide As shown in the first display line. 
the variable o is created and initialized as an Object. var a = [1.FieldCommander writeLog(typeof o +" | "+ o. ant[1] = 4.3 object | Array | 1. then bee will consist of two members: bee[0]. 2.2 Array object instance properties Array length SYNTAX: DESCRIPTION: SEE: EXAMPLE: array. which corresponds to the particular object and which is reflected in the _class property. and bee[1]. but the class. // The length property of both ant and bee // is equal to 4. bee[0] = 88._class +" | "+ a). CER International bv 51 . global. Note that this value does not necessarily represent the actual number of elements in an array. with an undefined value.length to 2. 2. Array().setArrayLength() // Suppose we had two arrays "ant" and "bee". if you change ant. The second and third lines both initialize the variable a as an Array.1. since elements do not have to be contiguous. // with the following elements: var ant = new Array(). bee[3] = 99. ant[3] = 6. var a= {1. 3._class +" | "+ o). ant will only have the first two members. and the values stored at the other indices will be lost. ant[2] = 5. 3}. If we set bee. shows which specific object is created and initialized. then an empty array of length 0 is created.an Array object of the length specified or an Array object with the elements specified. then it is the length of the array to be created. Note that this can also be called as a function. CER International bv 52 .concat([element1. The array returned from this function is an empty array whose length is equal to the length parameter. The string conversion is the standard conversion. then the length of the new array is set to 1. the separator is added. The return array is first constructed to consist of the elements of the current object. of an array. If separator is not supplied. and if they are arrays then the elements of the array are appended to the end of the return array.string consisting of the elements. from 0 to the length of the object.. var b = a. 
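The truncation behavior of the length property described above is also standard ECMAScript behavior, so it can be demonstrated directly. A minimal sketch, with plain variables standing in for writeLog() output:

```javascript
// Shrinking length discards the elements stored at higher indices,
// exactly as the ant/bee discussion above describes.
var ant = new Array();
ant[0] = 3;
ant[1] = 4;
ant[2] = 5;
ant[3] = 6;
var lengthBefore = ant.length;   // 4: one more than the largest index
ant.length = 2;                  // the values at indices 2 and 3 are lost
var lengthAfter = ant.length;    // 2
var lost = ant[3];               // undefined: the element no longer exists
```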
3.1.3 Array object instance methods

Array()
SYNTAX:      new Array(length)
             new Array([element1, ...])
WHERE:       length - If this is a number, then it is the length of the array to be created. Otherwise, it is the element of a single-element array to be created.
             elementN - list of elements to be in the new Array object being created.
RETURN:      object - an Array object of the length specified or an Array object with the elements specified.
DESCRIPTION: If no arguments are supplied, then an empty array of length 0 is created. If length is not a number, then the length of the new array is set to 1, and the first element is set to the length parameter. Otherwise, the array returned from this function is an empty array whose length is equal to the length parameter. The alternate form of the Array constructor initializes the elements of the new array with the arguments passed to the function. The arguments are inserted in order into the array, starting with element 0, and the length of the new array is set to the total number of arguments. Note that this can also be called as a function, without the new operator.
SEE:         Automatic array allocation
EXAMPLE:
   var a = new Array(5);
   var a = new Array(1, 2, 3);

Array concat()
SYNTAX:      array.concat([element1, ...])
WHERE:       elementN - list of elements to be concatenated to this Array object.
RETURN:      object - a new array consisting of the elements of the current object, with any additional arguments appended.
DESCRIPTION: The return array is first constructed to consist of the elements of the current object. If the current object is not an Array object, then the object is converted to a string and inserted as the first element of the newly created array. This method then cycles through all of the arguments, and if they are arrays, then the elements of the array are appended to the end of the return array. If an argument is not an array, then it is converted to a string and appended as the last element of the array. The length of the newly created array is adjusted to reflect the new length. Note that the original object remains unaltered.
SEE:         String concat()
EXAMPLE:
   var a = new Array(1, 2);
   var b = a.concat(3);

Array join()
SYNTAX:      array.join([separator])
WHERE:       separator - a value to be converted to a string and used to separate the list of array elements.
RETURN:      string - string consisting of the elements, delimited by separator.
DESCRIPTION: The Array join() method creates a string of all of the array elements, including empty elements. The elements of the current object, from 0 to the length of the object, are sequentially converted to strings and appended to the return string. The string conversion is the standard conversion, except that undefined and null elements are converted to the empty string "". In between each element, the separator is added. If separator is not supplied, then the single-character string "," is used, that is, by default the array elements will be separated by a comma.
EXAMPLE:
   var a = new Array(3, 5, 6, 3);
   var string = a.join();

will set the value of "string" to "3,5,6,3". You can use another string to separate the array elements by passing it as an optional parameter to the join() method. For example:

   var string = a.join("*/*");

creates the string "3*/*5*/*6*/*3".

Array pop()
SYNTAX:      array.pop()
RETURN:      value - the last element of the current Array object.
DESCRIPTION: This method first gets the length of the current object. If the length is undefined or 0, then undefined is returned. Otherwise, the element at the last index is returned. This element is then deleted, and the length of the current object is decreased by one. The pop() method works on the end of an array, whereas the Array shift() method works on the beginning.
SEE:         Array push()
EXAMPLE:
   // The following code:
   var array = new Array( "four" );
   writeLog( array.pop() );
   writeLog( array.pop() );
   // Will first print out the string "four", and then print out
   // "undefined", which is the result of converting the undefined
   // value to a string. The array will be empty after these calls.

Array push()
SYNTAX:      array.push([element1, ...])
WHERE:       elementN - a list of elements to append to the end of an array.
RETURN:      number - the length of the new array.
DESCRIPTION: This method appends the arguments to the end of this array, in the order that they appear. The length of the current Array object is adjusted to reflect the change.
SEE:         Array pop()
EXAMPLE:
   // The following code:
   var array = new Array( 1, 2 );
   writeLog( array.push( 3, 4 ) );
   writeLog( array );
   // Will print out the new length, and then print the array
   // converted to the string "1,2,3,4".
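push() and pop() as described above are standard ECMAScript methods, so their stack behavior can be exercised directly. A minimal sketch with plain variables in place of writeLog():

```javascript
// push() appends to the end and returns the new length;
// pop() removes and returns the last element.
var stack = new Array();
var lenAfterPush = stack.push(1, 2, 3);   // 3: the new length
var top = stack.pop();                    // 3: last element removed
var next = stack.pop();                   // 2
var remaining = stack.length;             // 1
```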
Array reverse()
SYNTAX:      array.reverse()
RETURN:      object - a new array consisting of the elements in the current Array object in reverse order.
DESCRIPTION: If the length of the current Array object is 0, then the current Array object is simply returned. Otherwise, a new Array object is created, and the elements of the current Array object are put into this new array in reverse order, preserving any empty or undefined elements.
EXAMPLE:
   var a = new Array(1, 2, 3);
   var b = a.reverse();

   // The following code:
   var array = new Array;
   array[0] = "ant";
   array[1] = "bee";
   array[2] = "wasp";
   array.reverse();
   // Produces the following array:
   //    array[0] == "wasp"
   //    array[1] == "bee"
   //    array[2] == "ant"

Array shift()
SYNTAX:      array.shift()
RETURN:      value - the first element of the current Array object.
DESCRIPTION: If the length of the current Array object is 0, then undefined is returned. Otherwise, the first element is returned. This element is deleted from the array, and any remaining elements are shifted down to fill the gap that was created. The length of the current Array object is adjusted to reflect the change. The shift() method works on the beginning of an array, whereas the Array pop() method works on the end.
SEE:         Array unshift(), Array pop()
EXAMPLE:
   // The following code:
   var array = new Array( 1, 2, 3 );
   writeLog( array.shift() );
   writeLog( array );
   // First prints out "1", and then the contents of the array,
   // which converts to the string "2,3".

Array slice()
SYNTAX:      array.slice(start[, end])
WHERE:       start - the element offset to start from.
             end - the element offset to end at.
RETURN:      object - a new array containing the elements of the current object from start up to, but not including, element end.
DESCRIPTION: This method creates a subset of the current array, starting at start and proceeding to (but not including) end. If end is not supplied, then the length of the current object is used instead. If either start or end is negative, then it is treated as an offset from the end of the array, and the value length+start or length+end is used instead. If either is less than 0 after adjusting for negative values, then the value 0 is used instead. If either is beyond the length of the array, then the length is used instead. The elements are then copied into the newly created array.
SEE:         String substring()
EXAMPLE:
   // The following code:
   var array = new Array( 1, 2, 3, 4, 5 );
   // Print out the elements from 1 up to 4.
   writeLog( array.slice( 1, -1 ) );

Array sort()
SYNTAX:      array.sort([compareFunction])
WHERE:       compareFunction - identifier for a function which expects two parameters x and y, and returns a negative value if x < y, zero if x = y, or a positive value if x > y.
RETURN:      object - this Array object after being sorted.
DESCRIPTION: This method sorts the elements of the array. The comparison of elements is done based on the supplied compareFunction. If compareFunction is not supplied, then the elements are converted to strings and compared. If a compare function is supplied, the array elements are sorted according to the return value of the compare function. If a and b are two elements being compared, then:

   If compareFunction(a, b) is less than zero, sort b to a higher index than a.
   If compareFunction(a, b) is greater than zero, sort b to a lower index than a.
   If compareFunction(a, b) returns zero, leave a and b unchanged relative to each other.

Non-existent elements are always greater than any other element. Undefined values are also always greater than any defined element, and consequently are sorted to the end of the array, where they appear before any empty (non-existent) values. Once these two tests are performed, then the appropriate comparison is done. The sort is not necessarily stable (that is, elements which compare equal do not necessarily remain in their original order).
EXAMPLE:
   // Consider the following code,
   // which sorts based on numerical values,
   // rather than the default string comparison.
   function compare( x, y )
   {
      x = ToNumber(x);
      y = ToNumber(y);
      if( x < y )
         return -1;
      else if ( x == y )
         return 0;
      else
         return 1;
   }
   var array = new Array( 3, undefined, 1, 5 );
   array.sort(compare);
   writeLog(array);
   // Prints out the sorted array. Notice the undefined value
   // at the end of the array.

Array splice()
SYNTAX:      array.splice(start, deleteCount[, element1, ...])
WHERE:       start - the index at which to splice in the items. If this is negative, then (length+start) is used instead, and if it is beyond the end of the array, then the length of the array is used.
             deleteCount - the number of items to remove from the array.
             elementN - a list of elements to insert into the array in place of the ones which were deleted.
RETURN:      object - an array consisting of the elements which were removed from the current Array object.
DESCRIPTION: Beginning at index start, deleteCount elements are first deleted from the array and inserted into the newly created return array in the same order. The elements of the current object are then adjusted to make room for all of the items passed to this method, that is, this method splices in any supplied elements in place of any elements deleted. The remaining arguments are inserted sequentially in the space created in the current object, such that their order within the array is the same as the order in which they appear in the argument list.
EXAMPLE:
   // The following code:
   var array = new Array( 1, 2, 3, 4, 5 );
   writeLog( array.splice( 1, 2, 6, 7, 8 ) );
   writeLog( array );
   // Will print out the removed elements, "2,3", and then
   // "1,6,7,8,4,5". The array has been modified to include the
   // extra items in place of those that were deleted.

Array toString()
SYNTAX:      array.toString()
RETURN:      string - string representation of an Array object.
DESCRIPTION: This method behaves exactly the same as if Array join() was called on the current object with no arguments. The result is a string consisting of the string representation of the array elements (except for null and undefined, which are empty strings) separated by commas. Note that this method is rarely called directly; rather, the function ToString() is used, which implicitly calls this method.
SEE:         Array join()
EXAMPLE:
   // The following code:
   var array = new Array( 1, null, "two", false );
   writeLog( array.toString() );
   // Will print "1,,two,false".

Array unshift()
SYNTAX:      array.unshift([element1, ...])
WHERE:       elementN - a list of items to insert at the beginning of the array.
RETURN:      number - the length of the new array after inserting the items.
DESCRIPTION: The elements of the current object are adjusted to make room, and any arguments are inserted at the beginning of the array, such that their order within the array is the same as the order in which they appear in the argument list. Note that this method is the opposite of Array push(), which adds the items to the end of the array.
SEE:         Array push(), Array shift()
EXAMPLE:
   var a = new Array(2, 3);
   a.unshift(1);
   // a is now the array 1,2,3
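The splice() and unshift() entries above describe standard ECMAScript methods, so their combined effect can be sketched in ordinary JavaScript (plain variables replace writeLog() here):

```javascript
// splice() removes deleteCount elements at start and inserts the
// remaining arguments in their place; unshift() inserts at the front.
var array = new Array(1, 2, 3, 4, 5);
var removed = array.splice(1, 2, 6, 7, 8);  // removes 2 and 3
var spliced = array.join(",");              // "1,6,7,8,4,5"
var removedStr = removed.join(",");         // "2,3"
array.unshift(0);                           // insert at the beginning
var first = array[0];                       // 0
```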
3.2 Boolean Object

3.2.1 Boolean object instance methods

Boolean()
SYNTAX:      new Boolean(value)
WHERE:       value - a value to be converted to a boolean.
RETURN:      object - a Boolean object with the parameter value converted to a boolean value.
DESCRIPTION: This function creates a Boolean object that has the parameter value converted to a boolean value. If the function is called without the new constructor, then the return is simply the parameter value converted to a boolean.
EXAMPLE:
   var name = "Joe";
   var b = new Boolean( name == "Joe" );
   // The Boolean object "b" is now true.

Boolean toString()
SYNTAX:      boolean.toString()
RETURN:      string - "true" or "false" according to the value of the Boolean object.
DESCRIPTION: This toString() method returns a string corresponding to the value of a Boolean object or primitive data type.
EXAMPLE:
   var name = "Joe";
   var b = new Boolean( name === "Joe" );
   var bb = false;
   writeLog( b.toString() );   // "true"
   writeLog( bb.toString() );  // "false"

3.3 Buffer Object

The Buffer object provides a way to manipulate data at a very basic level. It is needed whenever the relative location of data in memory is important. Any type of data may be stored in a Buffer object. A new Buffer object may be created from scratch, or from a string or Buffer object, in which case the contents of the string or buffer will be copied into the newly created Buffer object.

NOTE: the Javascript Buffer Object is not the same as the FieldCommander data buffer that is created with the addBuffer() command.

3.3.1 Buffer object instance properties

Buffer bigEndian
SYNTAX:      buffer.bigEndian
DESCRIPTION: This property is a boolean flag specifying whether to use bigEndian byte ordering when calling Buffer getValue() and Buffer putValue(). This property defaults to the state of the underlying OS and processor, but may be changed at any time.
SEE:         Buffer getValue(), Buffer putValue()
EXAMPLE:
   foo.bigEndian = false;

Buffer cursor
SYNTAX:      buffer.cursor
DESCRIPTION: The current position within a buffer. This value is always between 0 and buffer.size. This value is set when a buffer is created, but it can be assigned to as well and may be changed at any time. If a user attempts to move the cursor beyond the end of a buffer, then the buffer is extended to accommodate the new position, and filled with null bytes. If a user attempts to set the cursor to less than 0, then it is set to the beginning of the buffer, that is, to position 0.
SEE:         Buffer size
EXAMPLE:
   var p = buffer.cursor;

Buffer data
SYNTAX:      buffer.data
DESCRIPTION: This property is a reference to the internal data of a buffer. It is only a temporary value to assist in passing parameters to OS and system library type calls. In the future, all Javascript library functions should be able to recognize Buffer objects and to get this member on their own.

Buffer[] Array
SYNTAX:      buffer[offset]
DESCRIPTION: This is an array-like version of the Buffer getValue() and Buffer putValue() methods, which works only with bytes; every get/put operation uses byte types. The elements may be used with normal array syntax, such as goo = foo[5] or foo[5] = goo. If offset is beyond the end of a buffer, the size of the buffer is extended with null bytes to accommodate it. If offset is less than 0, then 0 is used.
SEE:         Buffer getValue(), Buffer putValue()
EXAMPLE:
   var c = 'a';
   buffer[5] = c;
   c = buffer[4];
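The bigEndian property above decides whether the most significant byte of a multi-byte value is stored first. Standard ECMAScript exposes the same choice through DataView, whose accessors take a littleEndian flag (false means big-endian). This is only an illustration of the byte-ordering concept, not the FieldCommander Buffer API:

```javascript
// Write the same 32-bit value with both byte orders and inspect byte 0.
var view = new DataView(new ArrayBuffer(4));

view.setUint32(0, 0x01020304, false);   // big-endian: MSB stored first
var firstByteBE = view.getUint8(0);     // 0x01

view.setUint32(0, 0x01020304, true);    // little-endian: LSB stored first
var firstByteLE = view.getUint8(0);     // 0x04
```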
Buffer size SEE: Buffer size SYNTAX: DESCRIPTION: buffer. that is.3.2 Buffer object instance methods Buffer() SYNTAX: new new new new Buffer([size[.FieldCommander JavaScript Refererence Guide SEE: EXAMPLE: a buffer. to position 0.bigEndian = false. unicode[. bigEndian]]]) Buffer(buffer[. buffer[5] = c.size The size of the Buffer object.size. Buffer bigEndian buffer. getString(2). foo. though it can be extended dynamically later. then the buffer is created with a size of 0. Buffer putString() foo = new Buffer("abcd"). If string is a unicode string (unicode is enabled within the application). A line of code following this syntax creates a new Buffer object. The size of the buffer is the length of the string (twice the length if it is unicode). A line of code following this syntax creates a new Buffer object from the string provided. even if a length parameter is not provided. Similarly.buffer to be duplicated. except that this simply returns the buffer part (equivalent to the data member). A terminating null byte is not added. unicode[. new Buffer(bufferObject). string . A line of code following this syntax creates a new Buffer object from another Buffer object. unicode[.boolean flag for the initial state of the unicode property of the buffer bigEndian . The unicode parameter is an optional boolean flag describing the initial state of the .number of characters to get from the buffer. Similarly.unicode flag of the buffer. The unicode and bigEndian parameters do not affect this conversion. The bigEndian flag behaves the same way as in the first constructor. The contents of the buffer are copied as is into the new Buffer object. This behavior can be overridden by specifying true or false with the optional boolean unicode parameter. To create a Buffer object. bigEndian]]]). bigEndian]]) A line of code following this syntax creates a new Buffer object from the buffer provided. rather than the entire Buffer object. unicode[. 
All of the above calls have an equivalent call form (such as Buffer(15)). including the cursor location. The string is read according to the value of the . follow of the syntax below. then the buffer is created as an ASCII string. these parameters default to the values described below. //goo is now "bc" CER International bv 59 .unicode flag of the object. unicode .FieldCommander JavaScript Refererence Guide RETURN: DESCRIPTION: buffer .getString([length]) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: length . and data. regardless of whether or not the original string was in unicode or not. If no size is specified. If this parameter is set to false. though they do set the relevant flags for future use. This constructor does not add a terminating null byte at the end of the string.the new buffer created. Everything is duplicated exactly from the other bufferObject. If no length is specified.cursor = 1. goo = foo.buffer of characters from which to create another buffer. bigEndian describes the initial state of the bigEndian parameter of the buffer.starting from the current cursor location and continuing for length bytes. new Buffer(buffer[. filled with null bytes. then the buffer is created as a unicode string.numeric description of the initial state of the bigEndian property of the buffer. object . specifying true will ensure that the buffer is created as a unicode string. new Buffer([size[. bufferObject . If unspecified. If size is specified. Buffer getString() buffer. then the method reads until a null byte is encountered or the end of the buffer is reached. size. then the new buffer is created with the specified size. bigEndian]]]). new Buffer(string[. or "float". This call is similar to the Buffer putValue() function. The default type is: "signed. "unsigned". then the string is put as a unicode string.value to be put into the buffer.cursor. The parameter valueSize or both valueSize and valueType may be passed as additional parameters. If the . 
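As a short sketch of how the constructor and the cursor interact (the variable names here are illustrative, not from the guide):

```javascript
// Create an ASCII buffer from a string; no terminating null byte is added.
var buf = new Buffer("abcd", false);  // size is 4, cursor starts at 0
buf.cursor = 1;                       // skip past "a"
var part = buf.getString(2);          // reads "bc"; cursor is now 3
```

Note that getString() advances the cursor past what it reads, so consecutive calls walk forward through the buffer.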
Buffer putString()
SYNTAX: buffer.putString(string)
WHERE: string - Any string.
RETURN: void.
DESCRIPTION: This method puts a string into the Buffer object at the current cursor position. If the unicode flag is set within the Buffer object, then the string is put as a unicode string; otherwise it is put as an ASCII string. The cursor is incremented by the length of the string (or twice the length if it is put as a unicode string). Note that a terminating null byte is not added at the end of the string.
SEE: Buffer getString()
EXAMPLE: // To put a null terminated string, the following can be done.
foo.putString( "Hello" );  // Put the string into the buffer
foo.putValue( 0 );         // Add terminating null byte

Buffer getValue()
SYNTAX: buffer.getValue([valueSize[, valueType]])
WHERE: valueSize - a positive number describing the number of bytes to be used; defaults to 1. The following are acceptable values: 1, 2, 3, 4, 8, and 10.
valueType - one of the following types: "signed", "unsigned", or "float". The default type is "signed".
RETURN: value - the value read from the buffer.
DESCRIPTION: This call is similar to the Buffer putValue() function, except that it gets a value from the specified position in a Buffer object instead of putting one. The value is read from the current cursor position, and the cursor is automatically incremented by the size of the value.
SEE: Buffer putValue(), Buffer[] Array
EXAMPLE: /* To explicitly get a value at a specific location while
preserving the cursor location, do something similar to the
following. */
var oldCursor = foo.cursor; // Save the old cursor location
foo.cursor = 20;            // Set to new location
bar = foo.getValue();       // Get the value at offset 20
foo.cursor = oldCursor;     // Restore cursor location
//Please see Buffer.putValue for a more complete description.

Buffer putValue()
SYNTAX: buffer.putValue(value[, valueSize[, valueType]])
WHERE: value - value to be put into the buffer. The value must be a number.
valueSize - a positive number describing the number of bytes to be used; defaults to 1. The following are acceptable values: 1, 2, 3, 4, 8, and 10.
valueType - one of the following types: "signed", "unsigned", or "float". The default type is "signed".
RETURN: void.
DESCRIPTION: This method puts the specified value into a buffer. The value is put into the buffer at the current cursor position, and the cursor value is automatically incremented by the size of the value to reflect this addition. The value is put into the buffer with byte-ordering according to the current setting of the bigEndian flag.
The valueType parameter describes the type of data to be written. Combined with valueSize, any type of data can be put, providing that the combination does not conflict. The following list describes the acceptable combinations of valueSize and valueType:

valueSize    valueType
1            signed, unsigned
2            signed, unsigned
3            signed, unsigned
4            signed, unsigned, float
8            float
10           float (Not supported on every system)

Any other combination will cause an error.
Note that when putting float values as a smaller size, such as 4, some significant figures are lost. A value such as "1.4" will actually be converted to something to the effect of "1.39999974". This is usually sufficiently insignificant to ignore, but be aware that the following does not necessarily hold true:

foo.putValue(1.4, 4, "float");
foo.cursor -= 4;
if( foo.getValue(4, "float") != 1.4 )
   // This is not necessarily true due
   // to significant figure loss.

This situation can be prevented by using 8 or 10 as a valueSize instead of 4. A valueSize of 4 may still be used for floating point values, but be aware that some loss of significant figures may occur (though it may not be enough to affect most calculations).
SEE: Buffer getValue(), Buffer[] Array
EXAMPLE: /* To explicitly put a value at a specific location while
preserving the cursor location, do something similar to the
following. */
var oldCursor = foo.cursor; // Save the old cursor location
foo.cursor = 20;            // Set to new location
foo.putValue(goo);          // Put goo at offset 20
foo.cursor = oldCursor;     // Restore cursor location
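A minimal round-trip sketch of putValue() and getValue(), assuming the byte-ordering and cursor behavior described above:

```javascript
// Write a 32-bit unsigned value, then read it back.
var buf = new Buffer(4);              // four null bytes
buf.bigEndian = false;                // little-endian byte order
buf.putValue(258, 4, "unsigned");     // cursor advances from 0 to 4
buf.cursor -= 4;                      // rewind before re-reading
var n = buf.getValue(4, "unsigned");  // n is 258 again
```

Because both calls move the cursor, rewinding (or saving and restoring the cursor) is needed whenever the same bytes are written and then read.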
Buffer subBuffer()
SYNTAX: buffer.subBuffer(begin, end)
WHERE: begin - start of offset.
end - end of offset (up to but not including this point).
RETURN: object - another Buffer object consisting of the data between the positions specified by the parameters begin and end.
DESCRIPTION: This method returns a new Buffer object consisting of the data between the positions specified by the parameters begin and end. If the parameter begin is less than 0, then it is treated as 0, the start of the buffer. If the parameter end is beyond the end of the buffer, then the new sub-buffer is extended with null bytes. The original buffer is not altered.
SEE: String subString()
EXAMPLE: foo = new Buffer("abcd");
bar = foo.subBuffer(1, 3);
// bar is now the string "bc"
// "a" was at position 0, "b" at position 1, etc.
// The parameter "3" is the position to go up to,
// but NOT to be included in the string.

Buffer toString()
SYNTAX: buffer.toString()
RETURN: string - a string equivalent of the current state of the buffer.
DESCRIPTION: This method returns the buffer as a string, with all characters, including "\0". Any conversion to or from unicode is done according to the unicode flag of the object.
SEE: Buffer getString()
EXAMPLE: foo = new Buffer("hello");
bar = foo.toString();
//bar is now the string "hello"

3.4 Clib Object
The Clib object contains functions that are a part of the standard C library. Methods to access files and formatted strings are part of the Clib object.

3.4.1 File I/O

Clib.fopen()
SYNTAX: Clib.fopen(filename, mode)
WHERE: filename - a string with a filename to open. It may be any valid file name, excluding wildcard characters.
mode - how or for what operations the file will be opened. The parameter mode is a string composed of one or more of the following characters:
• r - open file for reading; the file must already exist
• w - open file for writing; create if it doesn't exist; if the file exists then truncate it to zero length
• a - open file for append; create if it doesn't exist; set for writing at end-of-file
• b - binary mode; if b is not specified then open the file in text mode (end-of-line translation)
• t - text mode
• + - open for update (reading and writing)
RETURN: a file pointer to the file opened; null in case of failure.
DESCRIPTION: This method opens the file specified by filename for the file operations specified by mode, returning a file pointer to the file opened. null is returned in case of failure. When a file is successfully opened, its error status is cleared and a buffer is initialized for automatic buffering of reads and writes to the file.
SEE: Clib.fclose()
EXAMPLE: // Open the text file "ReadMe.txt" for text mode reading,
// and display each line in the file.
var fp = Clib.fopen("ReadMe.txt", "r");
if ( fp == null )
   writeLog("Error opening file for reading.");
else
   while ( null != (line=Clib.fgets(fp)) ) {
      Clib.fputs(line, stdout);
   }
Clib.fclose(fp);

Clib.fclose()
SYNTAX: Clib.fclose(filePointer)
WHERE: filePointer - pointer to the file to close.
RETURN: number - 0 on success, else EOF.
DESCRIPTION: This method flushes the file buffers of a stream and closes the file. The parameter filePointer is a file pointer as returned by Clib.fopen(). The file pointer ceases to be valid after this call. Returns zero if successful, otherwise returns EOF.
SEE: Clib.fopen()

Clib.feof()
SYNTAX: Clib.feof(filePointer)
WHERE: filePointer - pointer to the file to use.
RETURN: number - a non-zero number if at end of file, else 0.
DESCRIPTION: This method returns an integer which is non-zero if the file cursor is at the end of the file, and 0 if it is NOT at the end of the file.

Clib.fflush()
SYNTAX: Clib.fflush(filePointer)
WHERE: filePointer - pointer to the file to use.
RETURN: number - 0 on success, else EOF.
DESCRIPTION: Causes any unwritten buffered data to be written to filePointer. If filePointer is null then this method flushes the buffers in all open files.
SEE: Clib.fclose()

Clib.fgetc()
SYNTAX: Clib.fgetc(filePointer)
WHERE: filePointer - pointer to the file to use.
RETURN: number - the next character in the file; EOF if there is a read error or the file cursor is at the end of the file.
DESCRIPTION: This method returns the next character in the file stream indicated by filePointer, as a byte converted to an integer. If there is a read error then Clib.ferror() will indicate the error condition.
SEE: Clib.fgets()
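A sketch combining Clib.fopen(), Clib.fgetc(), and Clib.fclose(); it assumes the EOF constant is defined by the host environment, as the return values above imply:

```javascript
// Count the characters in a file one byte at a time.
var fp = Clib.fopen("ReadMe.txt", "rb");  // binary mode: no end-of-line translation
if ( fp == null ) {
   writeLog("Error opening file for reading.");
} else {
   var count = 0;
   while ( Clib.fgetc(fp) != EOF )       // EOF also ends the loop on a read error
      count++;
   Clib.fclose(fp);                      // fp is no longer valid after this call
}
```

Opening in binary mode ("rb") keeps the count exact, since text mode may translate end-of-line sequences.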
Clib.fgets()
SYNTAX: Clib.fgets([length,] filePointer)
WHERE: length - maximum length of string.
filePointer - pointer to the file to use.
RETURN: string - the characters in a file from the current file cursor to the next newline character on success, else null.
DESCRIPTION: This method returns a string consisting of the characters in a file from the current file cursor to the next newline character. The newline will be returned as part of the string. If there is an error or the end of the file is reached, null will be returned. A second syntax of this function takes a number as its first parameter. This number is the maximum length of the string to be returned if no newline character was encountered.
SEE: Clib.fputs()

Clib.fgetpos()
SYNTAX: Clib.fgetpos(filePointer, pos)
WHERE: filePointer - pointer to the file to use.
pos - variable to hold the current file position.
RETURN: number - 0 on success, else non-zero, in which case an error value is stored in Clib.errno.
DESCRIPTION: This method stores the current position of the file stream filePointer for future restoration using Clib.fsetpos(). The file position will be stored in the variable pos; use it with Clib.fsetpos() to restore the cursor to its position.
SEE: Clib.fsetpos()

Clib.fputs()
SYNTAX: Clib.fputs(str, filePointer)
WHERE: str - string to write to the file.
filePointer - pointer to the file to use.
RETURN: number - a non-negative value on success, else EOF on a write error.
DESCRIPTION: This method writes the value of str to the file indicated by filePointer.
SEE: Clib.fgets()

Clib.fputc()
SYNTAX: Clib.fputc(chr, filePointer)
WHERE: chr - character to write to the file.
filePointer - pointer to the file to use.
RETURN: number - the character written on success, else EOF.
DESCRIPTION: If chr is a string, the first character of the string will be written to the file indicated by filePointer. If chr is a number, the character corresponding to its unicode value will be written.
SEE: Clib.putc()

Clib.fprintf()
SYNTAX: Clib.fprintf(filePointer, formatString[, variables...])
WHERE: filePointer - pointer to the file to use.
formatString - string that specifies the final format.
variables - values to be converted to and formatted as a string.
RETURN: number - the number of characters written on success, else a negative number on a write error.
DESCRIPTION: This flexible function writes a formatted string to the file associated with filePointer. The second parameter, formatString, is a string of the same pattern as Clib.sprintf().
SEE: Clib.sprintf()

Clib.putc()
SYNTAX: Clib.putc(chr, filePointer)
WHERE: chr - character to write to the file.
filePointer - pointer to the file to use.
RETURN: number - the character written on success, else EOF on a write error.
DESCRIPTION: This method is identical to Clib.fputc(). It writes the character chr, converted to a byte, to an output file stream. It returns chr on success and EOF on a write error.
SEE: Clib.fputc()

Clib.getc()
SYNTAX: Clib.getc(filePointer)
WHERE: filePointer - pointer to the file to use.
RETURN: number - the next character in the file as an unsigned byte converted to an integer; EOF if there is a read error or at the end of the file.
DESCRIPTION: This call is the same as Clib.fgetc(). It returns the next character in a file as an unsigned byte converted to an integer. Returns EOF if there is a read error or if at the end of the file. If there is a read error then Clib.ferror() will indicate the error condition.
SEE: Clib.fgetc()

Clib.remove()
SYNTAX: Clib.remove(filename)
WHERE: filename - the name of the file to delete from a disk.
RETURN: number - 0 on success, else non-zero.
DESCRIPTION: Deletes the file with the filename provided.
SEE: Clib.rename()

Clib.rename()
SYNTAX: Clib.rename(oldFilename, newFilename)
WHERE: oldFilename - current name of the file on disk to be renamed.
newFilename - new name for the file on disk.
RETURN: number - 0 on success, else non-zero.
DESCRIPTION: This method renames oldFilename to newFilename. Both oldFilename and newFilename are strings. Returns zero if successful and non-zero for failure.
SEE: Clib.remove()

Clib.rewind()
SYNTAX: Clib.rewind(filePointer)
WHERE: filePointer - pointer to the file to use.
RETURN: void.
DESCRIPTION: This method sets the file cursor to the beginning of the file. This call is the same as Clib.fseek(filePointer, 0, SEEK_SET) except that it also clears the error indicator for this stream.
SEE: Clib.fseek()
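A sketch tying several of these calls together; the file names are illustrative, and it assumes the return conventions described above:

```javascript
// Rewrite a file line by line, keeping the original as a backup.
if ( Clib.rename("data.txt", "data.bak") == 0 ) {  // 0 means success
   var src = Clib.fopen("data.bak", "r");
   var dst = Clib.fopen("data.txt", "w");
   var line;
   while ( null != (line = Clib.fgets(src)) )      // newline stays in the string
      Clib.fputs(line, dst);
   Clib.fclose(src);
   Clib.fclose(dst);
}
```

Renaming first means the original data survives even if writing the new copy fails partway through.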
Clib.sscanf()
SYNTAX: Clib.sscanf(str, formatString[, variables...])
WHERE: str - string holding the data to read into variables according to formatString.
formatString - specifies how to read and store data in variables.
variables - list of variables to hold data input according to formatString.
RETURN: number - the number of input items assigned. May be lower than the number of items requested if there is a matching failure.
DESCRIPTION: This flexible method reads data from a string and stores it in the variables passed as parameters following formatString, setting the parameters to data according to the specifications of the format string. The format string specifies the admissible input sequences and how the input is to be converted to be assigned to the variable number of arguments passed to this function. Characters are matched against the input as read until a percent character, %, is reached. % indicates that a value is to be read and stored to subsequent parameters following the format string. Each subsequent parameter specification takes from the next parameter in the list following format. This method modifies any number of parameters following the format string.
A parameter specification takes this form (square brackets indicate optional fields, angled brackets indicate required fields):

%[*][width]<type>

*, width, and type may be:
• * : suppress assigning this value to any parameter
• width : maximum number of characters to read; fewer will be read if white space or a nonconvertible character is encountered
• type : may be one of the following:
  • d, D, i, I : signed integer
  • u, U : unsigned integer
  • o, O : octal integer
  • x, X : hexadecimal integer
  • f, e, E, g, G : floating point number
  • c : character; if width was specified then this will be an array of characters of the specified length
  • s : string
  • [abc] : string consisting of all characters within brackets, where A-Z represents the range "A" to "Z"
  • [^abc] : string consisting of all characters NOT within brackets
SEE: Clib.sprintf()

Clib.sprintf()
SYNTAX: Clib.sprintf(str, formatString[, variables...])
WHERE: str - variable to hold the formatted output. The parameter str need not be previously defined; it will be created large enough to hold the result.
formatString - string that specifies the final format.
variables - values to be converted to and formatted as a string.
RETURN: number - the number of characters written to the string on success, else EOF on failure.
DESCRIPTION: This method writes output to the string variable specified by str according to formatString, and returns the number of characters written or EOF if there was an error. The format string can contain character combinations indicating how following parameters are to be treated. Characters are printed as read until a percent character, %, is reached. % indicates that a value is to be printed from the parameters following the format string. Each subsequent parameter specification takes from the next parameter in the list following format. To include the % character as a character in the format string, you must use two % characters together, %%, to prevent the computer from trying to interpret it as one of the forms below.
A parameter specification takes this form (square brackets indicate optional fields, angled brackets indicate required fields):

%[flags][width][.precision]<type>

flags may be:
• - : Left justification in the field with blank padding; else right justifies with zero or blank padding
• + : Force numbers to begin with a plus (+) or minus (-)
• blank : Negative values begin with a minus (-), positive values begin with a blank
• # : Convert using the following alternate form, depending on the output data type:
  • c, d, i, s, u : No effect
  • o : 0 (zero) is prepended to non-zero output
  • x, X : 0x, or 0X, are prepended to output
  • e, E, f : Output includes a decimal point even if no digits follow the decimal
  • g, G : Same as e or E but trailing zeros are not removed
width may be:
• n : (n is a number, e.g. 14) At least n characters are output, padded with blanks
• 0n : At least n characters are output, padded on the left with zeros
• * : The next value in the argument list is an integer specifying the output width
.precision : If precision is specified, then it must begin with a period (.), and may be as follows:
• .0 : For floating point type, no decimal point is output
• .n : n characters or n decimal places (floating point) are output
• .* : The next value in the argument list is an integer specifying the precision width
type may be:
• d, D, i, I : signed integer
• u, U : unsigned integer
• o, O : octal integer
• x : hexadecimal integer with 0-9 and a, b, c, d, e, f
• X : hexadecimal integer with 0-9 and A, B, C, D, E, F
• f : floating point of the form [-]dddd.dddd
• e : floating point of the form [-]d.ddde+dd or [-]d.ddde-dd
• E : floating point of the form [-]d.dddE+dd or [-]d.dddE-dd
• g : floating point of f or e type, depending on precision
• G : floating point of F or E type, depending on precision
• c : character (e.g. 'a', '8')
• s : string
SEE: Clib.sscanf()

Clib.ungetc()
SYNTAX: Clib.ungetc(chr, filePointer)
WHERE: chr - character to push back into the input stream.
filePointer - pointer to the file to use.
RETURN: number - on success, the character put back into the file stream, else EOF on failure.
DESCRIPTION: This method pushes the character chr back into an input stream. When chr is put back, it is converted to a byte and is again in the input stream for subsequent retrieval. Only one character is guaranteed to be pushed back. The method returns chr on success, else EOF.
SEE: Clib.getc()
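A sketch of the format specifiers in use; it assumes the engine follows the C-style conversion rules above and, as described, passes str (and the sscanf() targets) by reference:

```javascript
var s;  // Clib.sprintf creates the output string itself
Clib.sprintf(s, "%-8s|%05d|%8.2f", "total", 42, 3.14159);
// s is now "total   |00042|    3.14"

// The reverse direction with Clib.sscanf:
var name, age;
Clib.sscanf("Pat 33", "%s %d", name, age);  // name is "Pat", age is 33
```

Here "-8" left-justifies in an 8-character field, "05" zero-pads to five digits, and "8.2" right-justifies to eight characters with two decimal places.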
3.5 Date Object
To create a Date object which is set to the current date and time, use the new operator, as you would with any object.

var currentDate = new Date();

There are several ways to create a Date object which is set to a date and time. The following lines all demonstrate ways to get and set dates and times.

var aDate = new Date(milliseconds);
var bDate = new Date(datestring);
var cDate = new Date(year, month, day);
var dDate = new Date(year, month, day, hours, minutes, seconds);

The first syntax returns a date and time represented by the number of milliseconds since midnight, January 1, 1970. This representation in milliseconds is a standard way of representing dates and times that makes it easy to calculate the amount of time between one date and another. Generally, you do not create dates in this way. Instead, you convert them to milliseconds format before doing calculations.

The second syntax accepts a string representing a date and optional time. The format of such a datestring is:

month day, year hours:minutes:seconds

For example, the following string: "Friday 13, 1995 13:13:15" specifies the date, Friday 13, 1995, and the time, 13:13 hours and 15 seconds, which, expressed in 24 hour time, is one thirteen and 15 seconds p.m. The time specification is optional, and if included, the seconds specification is optional.

The third and fourth syntaxes are self-explanatory. All parameters passed to them are integers.

year - If a year is in the twentieth century, the 1900s, you need only supply the final two digits. Otherwise four digits must be supplied.

month - A month is specified as a number from 0 to 11. January is 0, and December is 11.

day - A day of the month is specified as a number from 1 to 31. The first day of a month is 1, and the last is 28, 29, 30, or 31.

hours - An hour is specified as a number from 0 to 23. Midnight is 0, and 11 p.m. is 23.

minutes - A minute is specified as a number from 0 to 59. The first minute of an hour is 0, and the last is 59.

seconds - A second is specified as a number from 0 to 59. The first second of a minute is 0, and the last is 59.

For example, the following line of code:

var aDate = new Date(1492, 9, 12);

creates a Date object containing the date, October 12, 1492.

The following list of methods has brief descriptions of the methods of the Date object. Instance methods are shown with a period, ".", at their beginnings. A specific instance of a variable should be put in front of the period to call a method. For example, to call the Date getDate() method on the Date object aDate created above, the call would be: aDate.getDate(). Static methods have "Date." at their beginnings, such as Date.parse(), since these methods are called with literal calls. These methods are part of the Date object itself instead of instances of the Date object.

3.5.1 Date object instance methods

Date getDate()
SYNTAX: date.getDate()
RETURN: number - a day of a month, as a number from 1 to 31.
DESCRIPTION: This method returns the day of the month, of a Date object, as a number from 1 to 31. The first day of a month is 1, and the last is 28, 29, 30, or 31.

Date getDay()
SYNTAX: date.getDay()
RETURN: number - a day in a week, as a number from 0 to 6.
DESCRIPTION: This method returns the day of the week, of a Date object, as a number from 0 to 6. Sunday is 0, and Saturday is 6.

Date getFullYear()
SYNTAX: date.getFullYear()
RETURN: number - a four digit year.
DESCRIPTION: This method returns the year, of a Date object, as a number with four digits.

Date getHours()
SYNTAX: date.getHours()
RETURN: number - an hour in a day, as a number from 0 to 23.
DESCRIPTION: This method returns the hour, of a Date object, as a number from 0 to 23. Midnight is 0, and 11 p.m. is 23.

Date getMilliseconds()
SYNTAX: date.getMilliseconds()
RETURN: number - a millisecond in a second, as a number from 0 to 999.
DESCRIPTION: This method returns the millisecond, of a Date object, as a number from 0 to 999. The first millisecond in a second is 0, and the last is 999.

Date getMinutes()
SYNTAX: date.getMinutes()
RETURN: number - a minute in an hour, as a number from 0 to 59.
DESCRIPTION: This method returns the minute, of a Date object, as a number from 0 to 59. The first minute of an hour is 0, and the last is 59.

Date getMonth()
SYNTAX: date.getMonth()
RETURN: number - a month in a year, as a number from 0 to 11.
DESCRIPTION: This method returns the month, of a Date object, as a number from 0 to 11. January is 0, and December is 11.
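The constructor and the getter methods above fit together as in this short example:

```javascript
// October 12, 1492: month 9 is October because months count from 0.
var discovery = new Date(1492, 9, 12);
var day   = discovery.getDate();      // 12
var month = discovery.getMonth();     // 9
var year  = discovery.getFullYear();  // 1492
```

The off-by-one between day numbering (from 1) and month numbering (from 0) is a common source of mistakes, so it is worth double-checking in any date arithmetic.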
Date getSeconds()
SYNTAX: date.getSeconds()
RETURN: number - a second in a minute, as a number from 0 to 59.
DESCRIPTION: This method returns the second, of a Date object, as a number from 0 to 59. The first second of a minute is 0, and the last is 59.

Date getTime()
SYNTAX: date.getTime()
RETURN: number - the milliseconds representation of a Date object.
DESCRIPTION: Gets time information in the form of an integer representing the number of milliseconds from midnight on January 1, 1970, GMT, to the date and time specified by a Date object.

Date getTimezoneOffset()
SYNTAX: date.getTimezoneOffset()
RETURN: number - minutes of difference between Greenwich Mean Time (GMT) and local time.
DESCRIPTION: This method returns the difference, in minutes, between Greenwich Mean Time (GMT) and local time.

Date getUTCDate()
SYNTAX: date.getUTCDate()
RETURN: number - a day of a month, as a number from 1 to 31.
DESCRIPTION: This method returns the UTC day of the month, of a Date object, as a number from 1 to 31. The first day of a month is 1, and the last is 28, 29, 30, or 31.

Date getUTCDay()
SYNTAX: date.getUTCDay()
RETURN: number - a day in a week, as a number from 0 to 6.
DESCRIPTION: This method returns the UTC day of the week, of a Date object, as a number from 0 to 6. Sunday is 0, and Saturday is 6.

Date getUTCFullYear()
SYNTAX: date.getUTCFullYear()
RETURN: number - a four digit year.
DESCRIPTION: This method returns the UTC year, of a Date object, as a number with four digits.

Date getUTCHours()
SYNTAX: date.getUTCHours()
RETURN: number - an hour in a day, as a number from 0 to 23.
DESCRIPTION: This method returns the UTC hour, of a Date object, as a number from 0 to 23. Midnight is 0, and 11 p.m. is 23.

Date getUTCMilliseconds()
SYNTAX: date.getUTCMilliseconds()
RETURN: number - a millisecond in a second, as a number from 0 to 999.
DESCRIPTION: This method returns the UTC millisecond, of a Date object, as a number from 0 to 999. The first millisecond in a second is 0, and the last is 999.

Date getUTCMinutes()
SYNTAX: date.getUTCMinutes()
RETURN: number - a minute in an hour, as a number from 0 to 59.
DESCRIPTION: This method returns the UTC minute, of a Date object, as a number from 0 to 59. The first minute of an hour is 0, and the last is 59.

Date getUTCMonth()
SYNTAX: date.getUTCMonth()
RETURN: number - a month in a year, as a number from 0 to 11.
DESCRIPTION: This method returns the UTC month, of a Date object, as a number from 0 to 11. January is 0, and December is 11.

Date getUTCSeconds()
SYNTAX: date.getUTCSeconds()
RETURN: number - a second in a minute, as a number from 0 to 59.
DESCRIPTION: This method returns the UTC second, of a Date object, as a number from 0 to 59. The first second of a minute is 0, and the last is 59.

Date getYear()
SYNTAX: date.getYear()
RETURN: number - a two digit year.
DESCRIPTION: This method returns the year, of a Date object, as a number with two digits.

Date setDate()
SYNTAX: date.setDate(day)
WHERE: day - a day in a month, as a number from 1 to 31.
RETURN: number - the time in milliseconds as set.
DESCRIPTION: This method sets the day, of a Date object, to the parameter day. The first day of a month is 1, and the last is 28, 29, 30, or 31.

Date setFullYear()
SYNTAX: date.setFullYear(year[, month[, day]])
WHERE: year - a four digit year.
month - a month in a year.
day - a day in a month.
RETURN: number - the time in milliseconds as set.
DESCRIPTION: This method sets the year of a Date object to the parameter year. The parameter year is expressed with four digits. The parameter month is the same as for Date setMonth(). The parameter day is the same as for Date setDate().

Date setHours()
SYNTAX: date.setHours(hour[, minute[, second[, millisecond]]])
WHERE: hour - an hour in a day, as a number from 0 to 23.
minute - a minute in an hour.
second - a second in a minute.
millisecond - a millisecond in a second.
RETURN: number - the time in milliseconds as set.
DESCRIPTION: This method sets the hour, of a Date object, to the parameter hour. Midnight is 0, and 11 p.m. is 23. The parameter minute is the same as for Date setMinutes(). The parameter second is the same as for Date setSeconds(). The parameter millisecond is the same as for Date setMilliseconds().

Date setMilliseconds()
SYNTAX: date.setMilliseconds(millisecond)
WHERE: millisecond - a millisecond in a second, as a number from 0 to 999.
RETURN: number - the time in milliseconds as set.
DESCRIPTION: This method sets the millisecond, of a Date object, to the parameter millisecond. The first millisecond in a second is 0, and the last is 999.
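The setters return and operate on the same milliseconds representation that getTime() exposes, so a date can be adjusted in place and cloned exactly:

```javascript
// setFullYear keeps the other fields; getTime/new Date(ms) clone a date exactly.
var d = new Date(1999, 11, 31, 23, 0, 0);  // Dec 31, 1999, 11 p.m. local time
d.setFullYear(2000);                       // now Dec 31, 2000, 11 p.m.
var stamp = d.getTime();                   // milliseconds since Jan 1, 1970
var copy  = new Date(stamp);               // identical date and time
```

Cloning through the milliseconds value avoids copying field by field, which would otherwise need seven separate get/set pairs.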
This method sets the UTC year of a Date object to the parameter year. and the last is 28. The first second of a minute is 0. month[.time in milliseconds as set. number . millisecond .setUTCFullYear(year[. of a Date object to the parameter month. Date setMinutes() date. GMT. This method sets the month. of a Date object to the parameter second.time in milliseconds as set. of a Date object to the parameter day. as a number from 0 to 59. or 31. This method sets a Date object to the date and time specified by the parameter milliseconds which is the number of milliseconds from midnight on January 1.a millisecond in a second.a day in a month. The first minute of an hour is 0. The first millisecond in a second is 0. number .time in milliseconds as set. of a Date object to the parameter millisecond. This method sets the second.setMinutes(minute[.time in milliseconds. The first day of a month is 1. of a Date object to the parameter minute.a second in a minute. as a number from 1 to 31. number . as a number from 0 to 59.time in milliseconds.a day in a month. number . millisecond .a month in a year. and the last is 59. and 11 p. minute[. as a number from 0 to 59. as a number from 0 to 59.FieldCommander JavaScript Refererence Guide parameter year is expressed with four digits.a millisecond in a minute. CER International bv 76 . The parameter milliseconds is the same as for Date setUTCMilliseconds(). number . number . This method sets the UTC month.time in milliseconds as set. The parameter day is the same as for Date setUTCDate(). day .a second in a minute. millisecond]) SYNTAX: WHERE: RETURN: DESCRIPTION: second . The parameter minute is the same as for Date setUTCMinutes(). and December is 11. SYNTAX: millisecond]]) WHERE: RETURN: DESCRIPTION: minute . is 23.time in milliseconds as set.time in milliseconds. The first millisecond in a second is 0. Date setUTCHours() Date. The parameter milliseconds is the same as for Date setUTCMilliseconds(). 
millisecond .a millisecond in a second. and the last is 59. This method sets the UTC second. as a number from 0 to 59. SYNTAX: millisecond]]]) WHERE: RETURN: DESCRIPTION: hour .setUTCSeconds(second[. Date setUTCMilliseconds() date. second[. The parameter day is the same as for Date setUTCDate(). January is 0. The parameter second is the same as for Date setUTCSeconds(). of a Date object to the parameter second. number .a millisecond in a second. Date setUTCSeconds() date. Midnight is 0.setUTCHours(hour[. second . This method sets the UTC minute. This method sets the UTC millisecond.a millisecond in a second. The parameter second is the same as for Date setUTCSeconds().time in milliseconds. day]) SYNTAX: WHERE: RETURN: DESCRIPTION: month . The first second of a minute is 0.a second in a minute. This method sets the UTC hour. as a number from 0 to 11. The parameter milliseconds is the same as for Date setUTCMilliseconds().setUTCMilliseconds(millisecond) SYNTAX: WHERE: RETURN: DESCRIPTION: millisecond . Date setUTCMinutes() date. second . The parameter month is the same as for Date setUTCMonth(). of a Date object to the parameter hours. of a Date object to the parameter minute.setUTCMinutes(minute[.setUTCMonth(month[. as a number from 0 to 23. of a Date object to the parameter month.a minute in an hour. The first minute of an hour is 0. number . of a Date object to the parameter millisecond.a second in a minute.a day in a month. Date setUTCMonth() Date. second[.time in milliseconds. millisecond .m.an hour in a day. minute .a minute in an hour. and the last is 999. This function is designed to take in the current locale when formatting the string.toGMTString() SYNTAX: RETURN: DESCRIPTION: EXAMPLE: string . Date toString(). based on Greenwich Mean Time. number . Date toString(). This function is designed to take in the current locale when formatting the string. the 1900s. Date toLocaleString() date.toLocaleDateString() SYNTAX: RETURN: DESCRIPTION: SEE: EXAMPLE: string . 
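Taken together, the setter methods above can be exercised end to end. The fragment below is a minimal sketch assuming a standard JavaScript engine; console.log stands in for the guide's writeLog() output function.

```javascript
// Build a known moment, then adjust it field by field with the setters.
var d = new Date(0);                 // midnight, January 1, 1970 (epoch)
d.setUTCFullYear(2000, 4, 1);        // May 1, 2000 (months count from 0)
d.setUTCHours(15, 48, 38, 0);        // 15:48:38.000 UTC

// Every setter returns the new time in milliseconds as set,
// the same value that getTime() reports afterwards.
var ms = d.setUTCMilliseconds(250);
console.log(ms === d.getTime());     // true
console.log(d.getUTCFullYear());     // 2000
console.log(d.getUTCMonth());        // 4 (May)
```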
Date setYear()

SYNTAX:       date.setYear(year)
WHERE:        year - a four digit year, unless the year is in the 1900s, in which case it may be a two digit year.
RETURN:       number - time in milliseconds as set.
DESCRIPTION:  This method sets the year of a Date object to the parameter year. The parameter year may be expressed with two digits for a year in the twentieth century, that is, the 1900s. Four digits are necessary for any other century.

Date toDateString()

SYNTAX:       date.toDateString()
RETURN:       string - representation of the date portion of the current object.
DESCRIPTION:  Returns the date portion of the current date as a string. This string is formatted to read "Month Day, Year", for example, "May 1, 2000". This method uses the local time, not UTC time.
SEE:          Date toString(), Date toTimeString(), Date toLocaleDateString()
EXAMPLE:
   var d = new Date();
   var s = d.toDateString();

Date toGMTString()

SYNTAX:       date.toGMTString()
RETURN:       string - string representation of the GMT date and time.
DESCRIPTION:  This method converts a Date object to a string, based on Greenwich Mean Time.
EXAMPLE:
   var d = new Date();
   writeLog(d.toGMTString());
   // The fragment above would produce something like:
   // Mon May 1 15:48:38 2000 GMT

Date toLocaleDateString()

SYNTAX:       date.toLocaleDateString()
RETURN:       string - locale-sensitive string representation of the date portion of the current date.
DESCRIPTION:  This function behaves in exactly the same manner as Date toDateString(). This function is designed to take in the current locale when formatting the string, though this functionality is currently unimplemented. Locale reflects the time zone of a user.
SEE:          Date toDateString(), Date toLocaleString(), Date toLocaleTimeString()
EXAMPLE:
   var d = new Date();
   var s = d.toLocaleDateString();

Date toLocaleString()

SYNTAX:       date.toLocaleString()
RETURN:       string - locale-sensitive string representation of the current date.
DESCRIPTION:  This function behaves in exactly the same manner as Date toString(). This function is designed to take in the current locale when formatting the string. Locale reflects the time zone of a user.
SEE:          Date toString(), Date toLocaleDateString(), Date toLocaleTimeString()
EXAMPLE:
   var d = new Date();
   var s = d.toLocaleString();

Date toLocaleTimeString()

SYNTAX:       date.toLocaleTimeString()
RETURN:       string - locale-sensitive string representation of the time portion of the current date.
DESCRIPTION:  This function behaves in exactly the same manner as Date toTimeString(). This function is designed to take in the current locale when formatting the string. Locale reflects the time zone of a user.
SEE:          Date toTimeString(), Date toLocaleString(), Date toLocaleDateString()

Date toString()

SYNTAX:       date.toString()
RETURN:       string - representation of the date and time data in a Date object.
DESCRIPTION:  Converts the date and time information in a Date object to a string in a form such as: "Mon May 1 09:24:38 2000". This method uses the local time, rather than the UTC time.
SEE:          Date toDateString(), Date toTimeString(), Date toLocaleString()
EXAMPLE:
   var d = new Date();
   var s = d.toString();

Date toSystem()

SYNTAX:       date.toSystem()
RETURN:       number - time in system data format, in the same format as returned in the sec field of a timestamp object.
DESCRIPTION:  This method converts a Date object to a system time format, which is the same as that returned in the timestamp structure. To create a Date object from a variable in system time format, see the Date.fromSystem() method.
SEE:          Date.fromSystem()

Date toTimeString()

SYNTAX:       date.toTimeString()
RETURN:       string - representation of the time portion of the current object.
DESCRIPTION:  This function returns the time portion of the current date as a string. This string is formatted to read "Hours:Minutes:Seconds", as in "16:43:23". This function uses the local time.
SEE:          Date toString(), Date toLocaleTimeString()
EXAMPLE:
   var d = new Date();
   var s = d.toTimeString();

Date toUTCString()

SYNTAX:       date.toUTCString()
RETURN:       string - representation of the UTC date and time data in a Date object.
DESCRIPTION:  Converts the UTC date and time information in a Date object to a string in a form such as: "Mon May 1 09:24:38 2000".
SEE:          Date toDateString(), Date toString()

Date valueOf()

SYNTAX:       date.valueOf()
RETURN:       number - the value of the date and time information in a Date object.
DESCRIPTION:  The numeric representation of a Date object.
SEE:          Date toString()

3.5.2 Date object static methods

The Date object has three special methods that are called from the object itself, rather than from an instance of it: Date.fromSystem(), Date.parse(), and Date.UTC().

Date.fromSystem()

SYNTAX:       Date.fromSystem(time)
WHERE:        time - time in system data format, in the format as returned in the sec field of a timestamp object.
RETURN:       Date object - with the time passed.
DESCRIPTION:  This method converts the parameter time, which is in the same format as returned in timestamp, to a standard Javascript Date object.
SEE:          Date toSystem()
EXAMPLE:
   // To create a Date object
   // from date information obtained using getBufferDataElement()
   // use code similar to:
   var SysDate = getBufferDataElement(bufferID, DT_RS, "timestamp");
   var ObjDate = Date.fromSystem(SysDate.sec);

   // To convert a Date object to system format
   // use code similar to:
   SysDate.sec = ObjDate.toSystem();

Date.parse()

SYNTAX:       Date.parse(dateString)
WHERE:        dateString - a string representing the date and time to be parsed.
RETURN:       number - milliseconds between the dateString and midnight, January 1, 1970.
DESCRIPTION:  This method converts the string dateString to a Date object. The string must be in the following format: Friday, October 31, 1998 15:30:00 -0500. This format is used by the Date toGMTString() method and by email and Internet applications. The day of the week, time zone, time specification, or seconds field may be omitted.
SEE:          Date setTime(), Date toGMTString(), Date.UTC(), Date object
EXAMPLE:
   // The following code sets the date to March 2, 1992
   var theDate = Date.parse("March 2, 1992");
   // Note:
   //    var theDate = Date.parse(datestring);
   // is equivalent to:
   //    var theDate = new Date(datestring);

Date.UTC()

SYNTAX:       Date.UTC(year, month, day[, hours[, minutes[, seconds[, milliseconds]]]])
WHERE:        year - A year, represented in four or two-digit format after 1900. NOTE: For year 2000 compliance, this year MUST be represented in four-digit format.
              month - A number between 0 (January) and 11 (December) representing the month.
              day - A number between 1 and 31 representing the day of the month. Note that this argument uses 1 as its lowest value, whereas many other arguments use 0.
              hours - A number between 0 (midnight) and 23 (11 PM) representing the hours.
              minutes - A number between 0 and 59 representing the minutes. This is an optional argument, which may be omitted if seconds and milliseconds are omitted as well.
              seconds - A number between 0 and 59 representing the seconds. This is an optional argument, which may be omitted if milliseconds is omitted as well.
              milliseconds - A number between 0 and 999 which represents the milliseconds. This is an optional parameter.
RETURN:       number - milliseconds from midnight, January 1, 1970 GMT, to the date and time specified.
DESCRIPTION:  The method interprets its parameters as a date. The parameters are interpreted as referring to Greenwich Mean Time (GMT).
SEE:          Date.parse(), Date object
EXAMPLE:
   // The following code creates a Date object
   // using UTC time:
   foo = new Date(Date.UTC(1998, 9, 8, 0, 0));
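The two static conversions can be checked against each other, since both express a moment as milliseconds from the 1970 epoch. A minimal sketch assuming a standard JavaScript engine (console.log stands in for the guide's writeLog()):

```javascript
// Date.parse() returns milliseconds since midnight January 1, 1970,
// just as Date.UTC() does, so the two can be compared directly.
var parsed = Date.parse("Fri, 31 Oct 1998 15:30:00 GMT");
var built  = Date.UTC(1998, 9, 31, 15, 30, 0);   // October is month 9
console.log(parsed === built);                   // true

// Passing the milliseconds to the Date constructor, or passing the
// string directly, yields equivalent Date objects.
var a = new Date(parsed);
var b = new Date("Fri, 31 Oct 1998 15:30:00 GMT");
console.log(a.getTime() === b.getTime());        // true
```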
3.6 Function Object

The Function object is one of three ways to define and use functions in Javascript. The three ways to work with functions are:

Use the function keyword and define a function in a normal way:

   function myFunc(x) {return x + 4;}

Construct a new Function object:

   var myFunc = new Function("x", "return x + 4;");

Define and assign a function literal:

   var myFunc = function(x) {return x + 4;}

All three of these ways of defining and using functions produce the same result. The differences are in the definition and use of functions. Each way has a strength that is very powerful in some circumstances, power that allows elegance in programming. The methods and discussion in this segment on the Function object deal with the second way shown above, the construction of a new Function object.

3.6.1 Function object instance methods

Function()

SYNTAX:       new Function(params[, ...], body)
WHERE:        params - one or a list of parameters for the function. All parameters are strings representing parameter names.
              body - the body of the function as a string.
RETURN:       object - a new function object with the specified parameters and body that can later be executed just like any other function.
DESCRIPTION:  The parameters passed to the function can be in one of two formats: each parameter name may be passed as a separate string, or multiple parameter names can be grouped together with commas. For example, new Function("a", "b", "c", "return") is the same as new Function("a, b, c", "return"). These two options can be combined as well. The body of the function is parsed just as any other function would be. If there is an error parsing either the parameter list or the function body, a runtime error is generated. If this function is later called as a constructor, then a new object is created whose internal _prototype property is equal to the prototype property of the new function object. Note that this function can also be called directly, without the new operator.
EXAMPLE:
   // The following will create a new Function object
   // and provide some properties
   // through the prototype property.
   var printFunction = new Function("writeLog(this.value)");
   var myFunction = new Function("a", "b", "this.value = a + b");
   myFunction.prototype.print = printFunction;
   var foo = new myFunction( 4, 5 );
   foo.print();
   // This code will print out the value "9", which was the value stored
   // in foo when it was created with the myFunction constructor.

Function apply()

SYNTAX:       function.apply([thisObj[, arguments]])
WHERE:        thisObj - object that will be used as the "this" variable while calling this function. If this is not supplied, then the global object is used instead.
              arguments - array of arguments to pass to the function, as an Array object or a list in the form of [arg1, arg2, ...]. The brackets "[]" around a list of arguments are required.
RETURN:       variable - the result of calling the function object with the specified "this" variable and arguments.
DESCRIPTION:  This method is similar to calling the function directly, only the user is able to supply the "this" variable, and the arguments to the function are passed as an array. If arguments is not supplied, then no arguments are passed to the function. If the arguments parameter is not a valid Array object or list of arguments inside of brackets "[]", then a runtime error is generated. Note that the similar method Function call() can receive the same arguments as a list. Compare the following ways of passing arguments:

   // Uses an Array object
   function.apply(this, argArray)
   // Uses brackets
   function.apply(this, [arg1, arg2])
   // Uses argument list
   function.call(this, arg1, arg2)

SEE:          Function(), Function call()
EXAMPLE:
   // This code fragment returns the value 9, which is
   // the result of fetching this.a
   // from the current object (which is obj) and adding the first
   // parameter passed, which is 5, from the args array.
   var myFunction = new Function("arg", "return this.a + arg");
   var obj = { a:4 };
   myFunction.apply(obj, [5]);

Function call()

SYNTAX:       function.call([thisObj[, arg1[, arg2[, ...]]]])
WHERE:        thisObj - an object that will be used as the "this" variable while calling this function. If this is not supplied, then the global object is used instead.
              arg1, arg2, ... - list of arguments to pass to the function.
RETURN:       variable - the result of calling the function object with the specified "this" variable and arguments.
DESCRIPTION:  This method is almost identical to calling the function directly, only the user is able to pass a variable to use as the "this" variable. Note that the similar method Function apply() can receive the same arguments as an array. Compare the following ways of passing arguments:

   // Uses an Array object
   function.apply(this, argArray)
   // Uses brackets
   function.apply(this, [arg1, arg2])
   // Uses argument list
   function.call(this, arg1, arg2)

SEE:          Function(), Function apply()
EXAMPLE:
   // This code fragment returns the value 9, which is
   // the result of calling myFunction with
   // the arguments 4 and 5.
   var myFunction = new Function("a, b", "return a + b");
   myFunction.call(this, 4, 5);
   // The same call can be made through apply():
   var args = new Array(4, 5);
   myFunction.apply(global, args);
   // or
   myFunction.apply(global, [4, 5]);
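The only difference between apply() and call() is how the arguments are handed over. The sketch below assumes a standard JavaScript engine and uses an ordinary function declaration instead of the Function constructor; the helper name addTo is illustrative, not part of the guide.

```javascript
// A function that combines its "this" object with its argument.
function addTo(arg) { return this.a + arg; }

var obj = { a: 4 };

// apply() takes the arguments as an array...
var viaApply = addTo.apply(obj, [5]);
// ...while call() takes them as a plain list.
var viaCall  = addTo.call(obj, 5);

console.log(viaApply);              // 9
console.log(viaCall);               // 9
```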
The exception to this rule occurs when you are in a function that has a local variable with the same name as a global variable. var aString = ToString(123) var aString = global. even though the function object has a name. The following two lines of code are also equivalent. the object name generally is not used. var myFunction = new Function("a".ToString(123) Remember. // This fragment will print the followingto the screen: function anonymous(a. the functions below that begin with "To".defined(name)) writeLog("name is defined"). Any spacing.. The first variable aString is created as a string from the number 123 converted to or cast as a CER International bv 82 . } 3.ToString(). you do not need to use an object name. Note that the function name is always "anonymous".1 Conversion or casting Though Javascript does well in automatic data conversion. newlines. etc. In such a case. rather it is called implicitly through conversions such as global. Indeed. Also. This method tries to make the output as human-readable as possible. writeLog( myFunction ). are implementation-dependent.7. 3. the following fragment creates two variables. to be converted to or cast as the data type specified in the name of the function. because the function itself is unnamed. The object identifier global is not required when invoking a global method or function. the second line could also have been: var aNumber = ToNumber(aString). All other characters are replaced by their respective unicode sequence. value .a valid expression to be parsed and treated as if it were code or script. // Returns "Hello%20there%21" global. numbers. This escaping conversion may be called encoding. Since aString had already been created with the value "123".undefine() var t = 1. 3. The third use checks an object "t.t" global.escape() escape(str) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: str . if (defined(t)) writeLog("t is defined").a value or variable to check to see if it is defined. if (!defined(t. 
The following fragment illustrates three uses of defined().unescape() escape("Hello there!"). remain in the string. @ * + . This method is the reverse of global.the result of the evaluation of expression as code. All uppercase and lowercase letters. string .with special characters escaped or fixed so that the string may be used in special ways.eval() SYNTAX: WHERE: RETURN: DESCRIPTION: eval(expression) expression .with special characters that need to be handled specially. The function defined() may be used during script execution and during preprocessing.t)) writeLog("t.. such as being a URL.2 global object methods/functions global. When used in preprocessing with the directive #if.t is not defined"). but is more powerful. and the special symbols.defined() defined(value) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: value . Evaluates whatever is represented by the parameter expression. If expression is not CER International bv 83 .FieldCommander JavaScript Refererence Guide string. The function returns true if a value has been defined. or value has been defined. #endif // // // // // The first use of defined() checks whether a value is available to the preprocessor to determine which platform is running the script.true if the value has been defined. global. The second use checks a variable "t". or else returns false. The second variable aNumber is created as a number from the string "123" converted to or cast as a number. /. that is. the function defined() is similar to the directive #ifdef. else false This function tests whether a variable.unescape().7. The type of the variable or piece of data passed as a parameter affects the returns of some of these functions. The escape() method receives a string and escapes the special characters so that the string may be used with a URL. object property. boolean . escaped. a hexadecimal escape sequence. global. #if defined(_WIN32_) writeLog("in Win32"). var a = "who". CER International bv 84 . Array length var arr = {4. 
global.getArrayLength()

SYNTAX:       getArrayLength(array[, minIndex])
WHERE:        array - an automatic array, that is, an array not created with the Array() constructor.
              minIndex - if this parameter is passed, it is set to the minimum index of the array, which will be zero or less.
RETURN:       number - the length of an array, which is one more than the highest index of the array if the first element of the array is at index 0.
DESCRIPTION:  The getArrayLength() function returns the length of a dynamic array, that is, an array not created with the Array() constructor function. This function should be used with dynamically created arrays. The length property is not available for dynamically created arrays, which must use the functions getArrayLength() and setArrayLength() when working with array lengths. When working with arrays created using the new Array() operator and constructor, use the length property of the Array object, not getArrayLength().
SEE:          global.setArrayLength(), Array length
EXAMPLE:
   var arr = {4,5,6,7};
   writeLog(getArrayLength(arr));

global.getAttributes()

SYNTAX:       getAttributes(variable)
WHERE:        variable - a variable identifier.
RETURN:       number - representing the attributes set for a variable.
DESCRIPTION:  Gets and returns the variable attributes for the parameter variable. If no attributes are set, the return is 0. Variable attributes may be set using the function setAttributes(). See global.setAttributes() for more information and a list of predefined constants for the attributes that a variable may have.
SEE:          global.setAttributes()

global.isFinite()

SYNTAX:       isFinite(number)
WHERE:        number - a value to check to see if it is a finite number.
RETURN:       boolean - true if the parameter is or can be converted to a finite number, else false.
DESCRIPTION:  This method returns true if the parameter, number, is or can be converted to a number. If the parameter evaluates to NaN, Number.POSITIVE_INFINITY, or Number.NEGATIVE_INFINITY, the method returns false.
SEE:          global.isNaN()
EXAMPLE:
   if (isFinite(99)) writeLog("A number");

global.isNaN()

SYNTAX:       isNaN(number)
WHERE:        number - a value to check to see if it is not a number.
RETURN:       boolean - true if number is not a number, else false.
DESCRIPTION:  This method returns true if the parameter, number, evaluates to NaN, "Not a Number". Otherwise it returns false.
SEE:          global.isFinite()
EXAMPLE:
   if (isNaN(99)) writeLog("Not a number");

global.parseFloat()

SYNTAX:       parseFloat(str)
WHERE:        str - a string to be converted to a decimal float.
RETURN:       number - the float to which the string converts, else NaN.
DESCRIPTION:  This method is similar to global.parseInt() except that it reads decimal numbers with fractional parts. In other words, the first period, ".", in the parameter string is considered to be a decimal point, and any following digits are the fractional part of the number. All characters including and following the first non-numeric character are ignored. The method parseFloat() does not take a second parameter. If the string is unable to be converted to a number, the special value NaN is returned.
SEE:          global.parseInt()

global.parseInt()

SYNTAX:       parseInt(str[, radix])
WHERE:        str - a string to be converted to an integer.
              radix - the number base to use; the default is 10.
RETURN:       number - the integer to which the string converts, expressed in the base specified by the radix variable, else NaN.
DESCRIPTION:  This method converts an alphanumeric string to an integer number. White space characters at the beginning of the string are ignored. The first non-white space character must be either a digit or a minus sign (-). All numeric characters following will be read, up to the first non-numeric character, and the result will be converted into a number. All characters including and following the first non-numeric character are ignored. The second parameter, radix, is an optional number indicating which base to use for the number. If the radix parameter is not supplied, the method defaults to base 10, which is decimal. If the first digit of string is a zero, radix defaults to base 8, which is octal. If the first digit is zero followed by an "x", that is, "0x", radix defaults to base 16, which is hexadecimal.
SEE:          global.parseFloat()
EXAMPLE:
   var i = parseInt("9");
   var i = parseInt("9.3");
   // In both cases, i == 9

global.setArrayLength()

SYNTAX:       setArrayLength(array[, minIndex[, length]])
WHERE:        array - an automatic array, that is, an array that was not created using the new Array() operator and constructor.
              minIndex - the minimum index to use, which must be 0 or less. Default is 0.
              length - the length of the array to set.
RETURN:       void.
DESCRIPTION:  This function sets the first index and length of an array. If only two arguments are passed to setArrayLength(), the second argument is length, and the minimum index of the newly sized array is 0. If three arguments are passed to setArrayLength(), the second argument, minIndex, is the minimum index of the newly sized array, which must be 0 or less, and the third argument is the length. Any elements outside the bounds set by minIndex and length are lost, that is, they become undefined.
SEE:          global.getArrayLength(), Array length
EXAMPLE:
   var arr = {4,5,6,7};
   setArrayLength(arr, 9);
   writeLog(getArrayLength(arr));

global.setAttributes()

SYNTAX:       setAttributes(variable, attributes)
WHERE:        variable - a variable identifier.
              attributes - the attribute or attributes to be set for a variable.
RETURN:       void. This function has no return.
DESCRIPTION:  This function sets the variable attributes for the parameter variable using the parameter attributes. Variables in Javascript may have various attributes set that affect the behavior of variables. Multiple attributes may be set for a variable by combining them with the or operator, "|". For example, the flag setting READ_ONLY | DONT_ENUM sets both of these attributes for one variable. The following list describes the attributes that may be set for variables.

   READ_ONLY          This variable is read-only. Any attempt to write to or change this variable fails.
   DONT_ENUM          This variable is not enumerated when using a for/in loop.
   DONT_DELETE        This variable may not be deleted. If the delete operator is used with the variable, nothing is done.
   IMPLICIT_THIS      This attribute applies only to local functions. If this flag is set, then the "this" variable is inserted into a scope chain before the activation object. For example, if variable TestVar is not found in a local variable context, the interpreter searches the current "this" variable of a function, the activation object, and then the global object.
   IMPLICIT_PARENTS   This attribute applies only to local functions and allows a scope chain to be altered based on the __parent__ property of the "this" variable. If this flag is set, and if a variable is not found in the local variable context, then the parents of the "this" variable, if the __parent__ property is present, are searched backwards before searching the global object. The example below illustrates the effect of this flag.

SEE:          global.getAttributes()
EXAMPLE:
   // The following fragment illustrates the use
   // of setAttributes() and the behavior affected
   // by the IMPLICIT_PARENTS flag.
   function foo()
   {
      value = 5;
   }
   setAttributes(foo, IMPLICIT_PARENTS);
   var a, b;
   a.value = 4;
   b.__parent__ = a;
   b.foo = foo;
   b.foo();
   // After this code is run, a.value is set to 5.
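The radix rules for parseInt() and the decimal-point handling of parseFloat() can be checked directly. The sketch below assumes a standard engine; note that the automatic octal default for a leading zero described above is legacy behavior that modern engines may not apply, so the radix is always passed explicitly here.

```javascript
// Explicit radix: the same digits read in different bases.
console.log(parseInt("10", 10));       // 10
console.log(parseInt("10", 8));        // 8
console.log(parseInt("0x1f", 16));     // 31

// parseInt() stops at the decimal point; parseFloat() keeps reading.
console.log(parseInt("9.3", 10));      // 9
console.log(parseFloat("9.3"));        // 9.3

// A string that cannot be converted yields NaN.
console.log(isNaN(parseFloat("abc"))); // true
```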
FieldCommander JavaScript Reference Guide

global.ToBoolean()
SYNTAX:      ToBoolean(value)
WHERE:       value - value to be cast as a boolean.
RETURN:      boolean - conversion of value.
DESCRIPTION: The following table lists how different data types are converted by this function.
             Boolean    same as value
             Buffer     same as for String
             null       false
             Number     false, if value is +0, -0 or NaN; else true
             String     false, if value is the empty string; else true
             undefined  false

global.ToBuffer()
SYNTAX:      ToBuffer(value)
WHERE:       value - value to be cast as a buffer.
RETURN:      Buffer - conversion of value.
DESCRIPTION: This function converts value to a buffer in a manner similar to global.ToBytes().
SEE:         global.ToBytes()

global.ToInt32()
SYNTAX:      ToInt32(value)
WHERE:       value - value to be cast as a signed 32-bit integer.
RETURN:      number - conversion of value.
DESCRIPTION: This function is the same as global.ToInteger() except that the result is cast as a signed 32-bit integer.
SEE:         global.ToInteger()

global.ToInteger()
SYNTAX:      ToInteger(value)
WHERE:       value - value to be cast as an integer.
RETURN:      number - conversion of value.
DESCRIPTION: First, value is converted with ToNumber(); the result is then truncated toward zero, that is, return floor(abs(result)) with the appropriate sign. For example, the value -4.8 is converted to -4.
SEE:         global.ToNumber()

global.ToNumber()
SYNTAX:      ToNumber(value)
WHERE:       value - value to be cast as a number.
RETURN:      number - conversion of value.
DESCRIPTION: The following table lists how different data types are converted by this function.
             Boolean    +0, if value is false; else 1
             Buffer     same as for String
             null       +0
             Number     same as value
             Object     first, call ToPrimitive(), then call ToNumber() and return result
             String     the numeric value of the string, if it represents a number; else NaN
             undefined  NaN
SEE:         global.ToPrimitive()

global.ToObject()
SYNTAX:      ToObject(value)
WHERE:       value - value to be cast as an object.
RETURN:      object - conversion of value.
DESCRIPTION: The following table lists how different data types are converted by this function.
             Boolean    new Boolean object with value
             null       generate runtime error
             Number     new Number object with value
             Object     same as parameter
             String     new String object with value
             undefined  generate runtime error
SEE:         global.ToPrimitive()

global.ToPrimitive()
SYNTAX:      ToPrimitive(value)
WHERE:       value - value to be cast as a primitive.
RETURN:      conversion of value to one of the primitive data types.
DESCRIPTION: This function does conversions only for parameters of type Object. An internal default value of the Object is returned.
SEE:         global.ToObject()

global.ToString()
SYNTAX:      ToString(value)
WHERE:       value - value to be cast as a string.
RETURN:      string - conversion of value.
DESCRIPTION: The following table lists how different data types are converted by this function.
             Boolean    "false", if value is false; else "true"
             null       "null"
             Number     if value is NaN, return "NaN"; if +0 or -0, return "0"; if Infinity, return "Infinity"; if the number is negative, return "-" concatenated with the string representation of the number; else return a string representing the number
             Object     first, call ToPrimitive(), then call ToString() and return result
             String     same as value
             undefined  "undefined"
SEE:         global.ToNumber()

global.ToUint16()
SYNTAX:      ToUint16(value)
WHERE:       value - value to be cast as a 16-bit unsigned integer.
RETURN:      number - conversion of value.
DESCRIPTION: This function is the same as global.ToInteger() except that if the return is an integer, it is in the range of 0 through 2^16 - 1.
SEE:         global.ToInteger()

global.ToUint32()
SYNTAX:      ToUint32(value)
WHERE:       value - value to be cast as a 32-bit unsigned integer.
RETURN:      number - conversion of value.
DESCRIPTION: This function is the same as global.ToInteger() except that if the return is an integer, it is in the range of 0 through 2^32 - 1.
SEE:         global.ToInteger()

global.unescape()
SYNTAX:      unescape(str)
WHERE:       str - string holding escape characters.
RETURN:      string - with escape characters replaced by appropriate characters.
DESCRIPTION: This method is the reverse of the global.escape() method: an encoded string is decoded, with escape sequences removed from the string and replaced by the relevant characters.
SEE:         global.escape()
EXAMPLE:     unescape("Hello%20there%21"); // Returns "Hello there!"

global.undefine()
SYNTAX:      undefine(value)
WHERE:       value - variable, Object property, or value to be undefined.
DESCRIPTION: This function undefines a variable, Object property, or value. If a value was previously defined, so that its use with the function global.defined() returns true, then after using undefine() with the value, defined() returns false. Undefining a value is different than setting a value to null.
SEE:         global.defined()
EXAMPLE:     // In the following fragment, the variable n
             // is defined with the number value of 2 and
             // then undefined.
             var n = 2;
             undefine(n);

             // In the following fragment an object o is created and a
             // property o.one is defined. The property is then undefined
             // but the object o remains defined.
             var o = new Object;
             o.one = 1;
             undefine(o.one);

3.8 Math Object
The methods in this section are preceded with the Object name Math, since individual instances of the Math Object are not created.

3.8.1 Math object static properties

Math.E
SYNTAX:      Math.E
DESCRIPTION: The number value for e, the base of the natural logarithms. This value is represented internally as approximately 2.7182818284590452354.
EXAMPLE:     var n = Math.E;

Math.LN2
SYNTAX:      Math.LN2
DESCRIPTION: The number value for the natural logarithm of 2. This value is represented internally as approximately 0.6931471805599453.
EXAMPLE:     var n = Math.LN2;

Math.LN10
SYNTAX:      Math.LN10
DESCRIPTION: The number value for the natural logarithm of 10. This value is represented internally as approximately 2.302585092994046.
EXAMPLE:     var n = Math.LN10;

Math.LOG2E
SYNTAX:      Math.LOG2E
DESCRIPTION: The number value for the base 2 logarithm of e, the base of the natural logarithms. This value is represented internally as approximately 1.4426950408889634. The value of Math.LOG2E is approximately the reciprocal of the value of Math.LN2.
EXAMPLE:     var n = Math.LOG2E;

Math.LOG10E
SYNTAX:      Math.LOG10E
DESCRIPTION: The number value for the base 10 logarithm of e, the base of the natural logarithms. This value is represented internally as approximately 0.4342944819032518. The value of Math.LOG10E is approximately the reciprocal of the value of Math.LN10.
EXAMPLE:     var n = Math.LOG10E;

CER International bv 88
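The reciprocal relationships among these constants (Math.LOG2E is approximately 1/Math.LN2, and Math.LOG10E is approximately 1/Math.LN10) also hold for the same-named constants in any standard JavaScript engine, so they can be checked directly. This sketch is illustrative; the helper approxEqual is ours, not part of the guide.

```javascript
// Verify the reciprocal relationships among the Math constants
// described above. approxEqual is a local helper, not part of
// the FieldCommander API.
function approxEqual(a, b) {
  return Math.abs(a - b) < 1e-12;
}

console.log(approxEqual(Math.LOG2E, 1 / Math.LN2));   // true
console.log(approxEqual(Math.LOG10E, 1 / Math.LN10)); // true
console.log(approxEqual(Math.E, Math.exp(1)));        // true
```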
Math.SQRT1_2 is approximately the reciprocal of the value of Math.0 number .abs() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math. The return value is expressed in radians and ranges from -pi/2 to +pi/2. function compute_acos(x) { return Math. the return value is 2): var n = Math. or less than -1. which is represented internally as approximately 0.). the return is the CER International bv 91 . var n = Math.14159265358979323846.abs(-2). //The function returns the absolute value // of the number -2 (i.1415. Math.ceil() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math.any number or numeric expression.atan2() SYNTAX: WHERE: RETURN: Math. Returns NaN if x cannot be converted to a number.the smallest number that is not less than the argument and is equal to a mathematical integer.x coordinate of the point. if you pass 3 the return is // NaN since 3 is out of Math. //The smallest number that is //not less than the argument and is //equal to a mathematical integer is returned //in the following function: function compute_small_arg_eq_to_int(x) { return Math. Returns NaN if x cannot be converted to a number. y) { return Math.atan2(x. If the argument is already an integer. measured in radians.arctangent2(x.atan(x) x . number .any number.arctangent(x) } Math.acos's range. number .atan() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math. The return value is expressed in radians and ranges from -pi to +pi. y) x . the result is the argument itself. y) } DESCRIPTION: EXAMPLE: Math. The return value is expressed in radians and ranges from -pi/2 to +pi/2. It is intentional and traditional for the two-argument arc tangent function that the argument named y be first and the argument named x be second. where the signs of the arguments are used to determine the quadrant of the result.ceil(x) x . of the arguments y and x. y/x. //The arctangent of the quotient y/x //is returned in the following function: function compute_arctangent_of_quotient(x. 
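The domain rules in the entries above can be seen with a short, runnable fragment; standard JavaScript engines follow the same behavior these entries describe.

```javascript
// abs() strips the sign; acos()/asin() return radians and
// yield NaN for arguments outside the range [-1, 1].
console.log(Math.abs(-2));                              // 2
console.log(Math.abs(Math.acos(-1) - Math.PI) < 1e-12); // true (acos(-1) is pi)
console.log(Math.asin(2));                              // NaN (out of range)
```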
number .an implementation-dependent approximation to the arc tangent of the quotient. //The cosine of x is returned //in the following function: function compute_cos(x) CER International bv 92 .y coordinate of the point.cos() x .ceil(x) } Math.an angle. number .an implementation-dependent approximation of the arctangent of the argument.cos() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math. x . //The arctangent of x is returned //in the following function: function compute_arctangent(x) { return Math. In order to convert degrees to radians you must multiply by 2pi/360.FieldCommander JavaScript Refererence Guide // value of -pi/2 .an implementation-dependent approximation of the cosine of the argument The argument is expressed in radians. max() SYNTAX: WHERE: RETURN: Math.log() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math. CER International bv 93 . y) x . //the result is +infinity Math.FieldCommander { return Math.exp(x) } Math.greater than zero.78 is passed to compute_floor. y .floor(x) x .the larger of x and y. //the result is NaN //If the argument is +0 or -0.exp() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math.either a number or a numeric expression to be used as an exponent number . //the result is -infinity //If the argument is 1. For example returns e raised to the power of the x. where e is the base of the natural logarithms.exp(x) x . the return is NaN //The natural log of x is returned //in the following function: function compute_log(x) { return Math.a number. If the argument is already an integer.log(x) x .a number. Math. number . number .floor(x) } //If 6. //7 will be returned.log(x) } //If the argument is less than 0 or NaN. 90 will be returned.a number. If 89. Returns NaN if x cannot be converted to a number.max(x.cos(x) } JavaScript Refererence Guide Math. 
//The floor of x is returned //in the following function: function compute_floor(x) { return Math.the greatest number value that is not greater than the argument and is equal to a mathematical integer.an implementation-dependent approximation of the natural logarithm of x. the result is +0 //If the argument is +infinity. the return value is the argument itself. If a negative number is passed to Math. //The exponent of x is returned //in the following function: function compute_exp(x) { return Math.a number.1 //is passed. number .floor() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math.an implementation-dependent approximation of the exponential function of the argument.log(). the result is 1.pow() SYNTAX: WHERE: RETURN: DESCRIPTION: Math.pow is //an imaginary or complex number.pow(x.random() SYNTAX: RETURN: DESCRIPTION: EXAMPLE: number . number .pow() is an imaginary or complex number. y) } //If x = a and y = 4 the return is NaN //If x > y the return is y //If y > x the return is x Math. //Return a random number: function compute_rand_numb() CER International bv 94 .max(x.min(x.min() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math.the smaller of x and y.min(x. it may be because the floating-point value has experienced overflow.The number which will be raised to the power of Y y . Returns NaN if either argument cannot be converted to a number. y) } //If x = a and y = 4 the return is NaN //If x > y the return is x //If y > x the return is y Math. //even if x is NaN //If x = 2 and y = 3 the return value is 8 EXAMPLE: Math. //The larger of x and y is returned //in the following function: function compute_max(x. y) x .a number which is positive and pseudo-random and which is greater than or equal to 0 but less than 1. y) x . y) } //If the result of Math. This method takes no arguments.pow(x.random() Math. y) { return Math. 
If the result of Math.FieldCommander DESCRIPTION: EXAMPLE: JavaScript Refererence Guide Returns NaN if either argument cannot be converted to a number. //x to the power of y is returned //in the following function: function compute_x_to_power_of_y(x. Returns NaN if either argument cannot be converted to a number. y) { return Math.a number. NaN will be returned.the value of x to the power of y.pow() unexpectedly returns infinity. //the return is NaN //If y is NaN. //The smaller of x and y is returned //in the following function: function compute_min(x.a number.The number which X will be raised to number . Seeding is not yet possible. y . y) { return Math. Calling this method numerous times will result in an established pattern (the sequence of numbers will be the same each time). Please note that if Math. the result is NaN //If y is +0 or -0. floor(x+0.exp() //Return the square root of x: function compute_square_root(x) { return Math.2. Returns NaN if x is a negative number or cannot be converted to a number. Returns NaN if x cannot be converted to a number. The value of Math.round(x) is the same as the value of Math.5. X is rounded up if its fractional part is equal to or greater than 0. for these cases Math. Math.5) returns -3. 4.FieldCommander { return Math.round(-3.floor() //Return a mathematical integer: function compute_int(x) { return Math.5. In order to convert degrees to radians you must multiply by 2pi/360. the result is NaN CER International bv 95 .a number or numeric expression greater than or equal to zero. expressed in radians. //Return the sine of x: function compute_sin(x) { return Math.random() } JavaScript Refererence Guide Math. number . //If the argument is 3.a number. the result is NaN //If the argument is already an integer //such as any of the //following values: -0. 9. 8.sqrt(x) } //If the argument is NaN. +0. then the result is 0.sin(x) x . the result is +0 //If the argument is -0.5 and is rounded down if less than 0. //but Math. 
//then the result is the //argument itself.round() SYNTAX: WHERE: RETURN: Math. Math.value that is closest to the argument and is equal to a mathematical integer.the sine of x.round(3.floor(x+0.the square root of x.5.5).round(x) returns *0. the result is -0 //If the argument is +infinity or -infinity. //If the argument is .round(x) x . the result is NaN //If the argument is +0. except when x is *0 or is less than 0 but greater than or equal to -0. //the result is NaN Math.round(x) } //If the argument is NaN.5) returns 4.sin() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math.sqrt() SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: Math. but Math. then the result is 4 //Note: Math. number . DESCRIPTION: SEE: EXAMPLE: Math.sin(x) } //If the argument is NaN.sqrt(x) x . number .5) returns +0.an angle in radians. a string representation of this number. This method returns a string containing the number represented in fixed-point notation with fractionDigits digits after the decimal point.tan(x) } //If the argument is NaN. string . //the result is NaN //If the argument is +0. Such things as placement of decimals and comma separators are affected. In order to convert degrees to radians you must multiply by 2pi/360.the digits after the significand's decimal point. var s = n. //the result is NaN 3. the result is -0 //If the argument is +infinity or -infinity.1 Number object instance methods Number toExponential() number.9.the digits after the decimal point.tan() SYNTAX: WHERE: RETURN: DESCRIPTION: EXAMPLE: Math. This method returns a string containing the number represented in exponential notation with one digit before the significand's decimal point and fractionDigits digits after the significand's decimal point.toLocaleString().the tangent of x. Number toString() var n = 8.A string representation of this number in exponential notation. the result is NaN //If the argument is +0. Number toLocaleString() number. number .9. 
the result is +0 //If the argument is -0.9 Number Object 3.an angle measured in radians. string .toLocaleString() SYNTAX: RETURN: DESCRIPTION: SEE: EXAMPLE: string .A string representation of this number in fixed-point notation.toFixed(fractionDigits) SYNTAX: WHERE: RETURN: DESCRIPTION: fractionDigits .toExponential(fractionDigits) SYNTAX: WHERE: RETURN: DESCRIPTION: fractionDigits . This method behaves like Number toString() and converts a number to a string in a manner specific to the current locale. the result is -0 //If the argument is +infinity. the result is +0 //If the argument is -0. //Return the tangent of x: function compute_tan(x) { return Math. Returns NaN if x cannot be converted to a number.tan(x) x . //the result is +infinity JavaScript Refererence Guide Math. Number toFixed() number. expressed in radians.FieldCommander //If the argument is less than 0. CER International bv 96 . 10.toString(). CER International bv 97 . then the method returns true.true if variable is an object and the current object is present in the prototype chain of the object.1 Object object instance methods Object hasOwnProperty() object. the method recursively searches the internal _prototype property of the object and if at any point the current object is equal to one of these prototype properties. This method simply determines if the object has a property with the name propertyName.toPrecision(precision) SYNTAX: WHERE: RETURN: DESCRIPTION: JavaScript Refererence Guide precision . and the internal _hasProperty() method of the object may be called.significant digits in fixed notation. If variable is not an object. If the property has the DontEnum attribute set.name of the property about which to query. then this method immediately returns false. otherwise false. true is returned. 
This method returns a string containing the number represented in either in exponential notation with one digit before the significand's decimal point and precision-1 digits after the significand's decimal point or in fixed notation with precision significant digits. If the current object has no property of the specified name. except that undefined values are different from non-existent values. This method behaves similarly to Number toLocaleString() and converts a number to a string using a standard format for numbers. Object propertyIsEnumerable() object.FieldCommander Number toPrecision() number. boolean .propertyIsEnumerable(propertyName) SYNTAX: WHERE: RETURN: DESCRIPTION: property .A string representation of this number in either exponential notation or in fixed notation.10 Object Object 3. or digits after the significand's decimal point in exponential notation. 3. then false is returned. Otherwise. Number toLocaleString() var n = 8.9. This is almost the same as testing defined(object[propertyName]). otherwise it returns false. boolean .isPrototypeOf(variable) SYNTAX: WHERE: RETURN: DESCRIPTION: variable . then false is immediately returned.the object to test.a string representation of this number.name of the property about which to query. string .indicating whether or not the current object has a property of the specified name. Otherwise. boolean . Number toString() number.toString() SYNTAX: RETURN: DESCRIPTION: SEE: EXAMPLE: string . var s = n.true if the current object has an enumerable property of the specified name. Object isPrototypeOf() object.hasOwnProperty(propertyName) SYNTAX: WHERE: RETURN: DESCRIPTION: property . *o/. str. in addition to Javascript books. has implemented regular expression searches that do everything that wildcards do and much.indexOf("two"). var pat = /t. When this method is called. Instead. Note that this function is rarely called directly. the internal class property. 
Javascript does not use wildcards to extend search capability. rather it is called implicitly through such functions as global.bat will list all files that begin with "t" and end "o" in the filename and that have an extension of "bat". there are many books in the PERL community. searches that are very powerful if a person makes the effort to learn how to use them. Regular expressions are used to search text and strings.ToString(). _class. Object toLocaleString() 3. A string is then constructed whose contents are "[object classname]".toString() SYNTAX: RETURN: DESCRIPTION: SEE: string . the DOS command: dir t*o. Most computer users are familiar with wildcards in searching. // == 4 This fragment illustrates one way to use regular expressions to find "t" followed by "o" with any number of characters between them. especially since they may be used in finding files. // == 4 The String indexOf() method searches str for "two" and returns the beginning position of "two". that explain regular expressions.*o/. which is 4. much more.toLocaleString() SYNTAX: RETURN: DESCRIPTION: SEE: JavaScript Refererence Guide string .11 RegExp Object Regular expressions do not seem very regular to average people. Object toString() Object toString() object.a string representation of this object. For advanced information on regular expressions. var str = "one two three". ECMAScript. str. What if you wanted to find "t" and "o" with or without any characters in between. Anyone who can use regular expressions in PERL already knows how to use Javascript regular expressions. is retrieved from the current object. an "o" only at the beginning of a string. or an "e" only at the end of a string? Before answering. Two things are different. One the variable pat which is assigned /t. the standard for Javascript. lets consider wildcards. Simple searches may be done like the following: var str = "one two three". where classname is the value of the property from the current object. 
For example.a string representation of this object. This method is intended to provide a default toLocaleString() method for all objects. The slashes indicate the beginning and end of a regular expression pattern. Regular expressions follow the PERL standard.search(pat). similar to how quotation marks indicate a string. It behaves exactly if toString() had been called on the original object. though the syntax has been made easier to read. The String search() method is a method of CER International bv 98 .FieldCommander Object toLocaleString() object. Now lets answer the question about how to find the three cases mentioned above. discussed later. are set for a regular expression.search(pat). It anchors the characters that follow to the end of a string or line and is one of the special anchor characters. To find an "e" only at the end of a string. str.match(pat). /" do? First. var pat = /e$/. results in an Array. pat may be used with RegExp methods and with the three String methods that use regular expression patterns. // == an Array with pertinent info pat. the string to be searched becomes the argument rather than the pattern to search for. often the same.*o/. if you need to do intensive searching in which a single regular expression pattern is used many times in a loop. namely. pat.*o/. they create a RegExp object. the start of a string or line. similar to the String indexOf() method. The three methods are: String match() String replace() String search() The methods in the RegExp object. It anchors the characters that follow to the start of a string or line and is one of the special anchor characters. . In our example. Thus. the end of a string or line. var pat = /^o/. The return may vary depending on exactly which attributes. Before we move on to the cases of an "o" at the start or an "e" at the end of a string. of the RegExp object. and the slashes cause pat to be a RegExp object. namely. 
Note that there is a very important distinction between searching for pattern matches using the String methods and using the RegExp methods. What do the slashes "/ . To find an "o" only at the start of a string. are explained below in this section. Second. var pat = /t. they define a regular expression pattern. So. // == an Array with pertinent info The String match() and RegExp exec() methods return very similar. for using regular expressions.FieldCommander JavaScript Refererence Guide the String object that uses a regular expression pattern to search a string.search(pat). as with the string methods. . str. // == 12 The dollar sign "$" has a special meaning. // == true By using a method.exec(str). In fact. the start position of "two". The String object has three methods for searching using regular expression patterns. The RegExp methods execute much faster. // == 0 The caret "^" has a special meaning. in these examples. var str = "one two three". the quotes cause str to be a String object. var pat = /t. consider the current example a little further. var str = "one two three". use something like: var str = "one two three". use the RegExp CER International bv 99 . such as test(). they both return 4. but the String methods are often quicker to program. str.test(str). use something like: var str = "one two three". The RegExp test() method simply returns true or false indicating whether the pattern is found in the string. the range of lowercase characters represented by [a-z]. Thus. These regular expression literals operate in the same way as quotation marks do for string literals. namely. Every time a RegExp object is constructed using new. and "three" The slashes delimit the characters that define a regular expression. only one at a time is matched. just as everything between quotation marks is part of a string. Everything between the slashes is part of a regular expression. such as. 
they define attributes of the regular expression.1 Regular expression syntax The general form for defining a regular expression pattern is: /characters/attributes Assume that we are searching the string "THEY.FieldCommander JavaScript Refererence Guide methods. the three men. var re = new RegExp("^THEY"). the pattern is compiled into a form that can be executed very quickly. Just as strings have special characters. a regular expression has three elements: literals. If you just need to do a few searches. Instead. In general. the choice of which methods to use depends on personal preferences and the particular tasks at hand. Though some special characters. they define and create an instance of a RegExp object: var re = /^THEY/. and so do the following two lines: var re = /^THEY/i. "i"). The following are valid regular expression patterns followed by a description of what they find: /the three/ /THE THREE/ig /th/ /th/igm // // // // "the "the "th" "th" three" three" in "the" in "THEY". Three letters may occur after the second slash that are not part of the regular expression. may have multiple matches. or three of the letters may be used. "the". the RegExp object allows the use of regular expression patterns in searches of strings or text. use the String methods. that is.11. 3. a pattern executes much faster. which may be thought of as a large and powerful subset of PERL regular expressions. and attributes. Any one. The following two lines of code accomplish the same thing. Every time a new pattern is compiled using the RegExp compile() method. Regular expression characters Each character or special character in a regular expression represents one character. [a-z] will only find one of these 26 characters at one position in a string being searched. The literals are a slash "/" at the beginning of some characters and a slash "/" at the end of the characters. Other than the difference in speed and script writing time. any one or more of the attributes may be defined. 
characters. Thus. Regular expression literals Regular expression literals delimit a regular expression pattern. The syntax follows the ECMAScript standard. CER International bv 100 . var re = new RegExp("^THEY". two. left". var pat = new RegExp("test$". For anyone who works with strings and text.2 Regular expression special characters Regular expressions have many special characters. The instance property multiline is set to true. When working with multiple lines the "^" and "$" anchor characters will match the start and end of a string and the start and end of lines within the string. "igm"). /test$/m. with the same meaning as the same escape sequence in strings. No other characters are allowed as attributes. The following regular expressions illustrate the use of attributes. var var var var pat pat pat pat = = = = /^The/i.FieldCommander JavaScript Refererence Guide namely. which are also known as metacharacters. /the/g. the effort to become proficient with regular expression parsing is more than worthwhile. Some are simple escape sequences. Example: /pattern/m i m Attributes are the characters allowed after the end slash "/" in a regular expression pattern. /test$/igm. Regular expression attributes The following table lists allowable attribute characters and their effects on a regular expression. The newline character "\n" in a string indicates the end of a line and hence lines in a string. Character g Attribute meaning Do a global match. with special meanings in a regular expression pattern.11. a newline "\n". The instance property global is set to true. But. escape sequences. CER International bv 101 . // // // //". Allow the finding of all matches in a string using the RegExp and String methods and properties that allow global operations. Example: /pattern/g Do case insensitive matches. "g"). Example: /pattern/i Work with multiple lines in a string. "i"). var pat = new RegExp("the". 
regular expression patterns have various kinds of special characters and metacharacters that are explained below. The instance property ignoreCase is set to true. regular expressions have many more special characters that add much power to working with strings and text. 3. such as. var pat = new RegExp("test$". "m"). much more power than is initially recognized by people being introduced to regular expressions. search continues after the last text matched.e). For example.) (?:.+?e)/ are used. Example: /\bthe\b/ Not a word boundary. Also. When a search continues. The position of the match is at the beginning of the text not matching the sub pattern. since it is regular expression character. but not "Javascript " in "Javascript Desktop" or "Javascript ISDK".+?e)/ are used. but the text matched is not captured or saved and is not available for later use using \n or $n. Group without capture.) (?!. then the back references $1 or \1 use the text "one". then the back references $1 or \1 use the text "wo thre". Groups are numbered according to the order in which the left parenthesis of a group appears in a regular expression.. but not "Javascript " in "Javascript Web Server". Group with capture.. Negative look ahead group without capture.. not after the text that matches the look ahead sub pattern. Characters inside of parentheses are handled as a single unit or sub pattern in specified ways. not after the text that matches the look ahead sub pattern. Match the character or sub pattern on the left or the character or sub pattern on the right.+(w. not after "Desktop" or "ISDK". the search continues after the last text matched. "\W" is included in a match. That is. and "\B" is not. The characters that are actually matched are captured and may be used later in an expression (as with \n) or in a replacement expression (as with $n). When a search continues. if the string "one two three two one" and the pattern /(?:o. in this table. not after "Desktop" or "ISDK". 
not the regular expression itself.. such as with the first two explanations. /Javascript (?!Desktop|ISDK)/ matches "Javascript " in "Javascript Web Server".). Matches the same text as (.. Match the same characters. but "\b" is not included in a match. it begins after "Javascript ". For example. The most notable difference is that "\w" is included in a match. For example. Groups may be nested. Two. (. For example. The character class "\w" is similar.. That is. Example: /l\B/ Regular expression reference characters Character | \n Meaning Or. matched by group n.+(w. some expressions and replacements can be easier to read and use with fewer numbered back references with which to keep up. | and \n.) (?=. /Javascript (?=Desktop|ISDK)/ matches "Javascript " in "Javascript Desktop" or "Javascript ISDK".FieldCommander JavaScript Refererence Guide \B backspace. The position of the match is at the beginning of the text that matches the sub pattern. Reference to group.. Groups are sub patterns that are contained in parentheses. it begins after "Javascript "... The overhead of not capturing matched text becomes important in faster execution time for searches involving loops and many iterations. if the string "one two three two one" and the pattern /(o.e). Positive look ahead group without capture.) CER International bv 105 .. Expression $1. The text matched by the last group. \cM. \cJ. but they also want to make powerful replacements of found text. \x0A. same as \n.). and (?!.. \x09. that is. (. under regular expression reference characters. \013 The character: / The character: \ The character: . A character represented by its code in hexadecimal. \cI. $9 Meaning The text that is matched by sub patterns inside of parentheses. See the groups..). parenthesized sub pattern. to the right of. The character: * The character: + The character: ? The character: | The character: ( The character: ) The character: [ The character: ] The character: { The character: } A character itself. 
The text before. A control character. newline. For example. same as \f. (?=. This section describes special characters that are used in replacement strings and that are related to special characters used in search patterns. and \101 is "A". \x0D.. (?:. Regular expression replacement characters All of the special characters that have been discussed so far pertain to regular expression patterns. most people not only want to do powerful searches.. \x0C.. $+ $` $' $& \$ CER International bv 106 .). However.. Seldom. same as \n.. $1 substitutes the text matched in the first parenthesized group in a regular expression pattern. that is. to the left of.. For example. For example. $2 . the text matched by a pattern. \011 Vertical tab. \cL is a form feed (^L or Ctrl-L). to finding and matching strings and patterns in a target string. \015 Horizontal tab. \* \+ \? \| \( \) \[ \] \{ \} \C \cC \x## \### JavaScript Refererence Guide Character represented Form feed. \cK. The text after.. \x0A is a newline. \014 Line feed. $. A character represented by its code in octal. if ever. used.FieldCommander Regular expression escape sequences Sequence \f \n \r \t \v \/ \\ \.). and \x41 is "A". \012 is a newline. For example. the text matched by a pattern The text matched by a pattern A literal dollar sign.. \x0B. \cL. If all you want to do is find text. \012 Carriage return. if not one of the above. then you do not need to know about regular expression replacement characters. RegExp ignoreCase regexp. [] *.11. +. Operator \ (). It is true if "i" is an attribute in the regular expression pattern being used. Since the property is read/write. in the next search. ?.exec(str). It is true if "g" is an attribute in the regular expression pattern being used. Some of the metacharacters can be understood as operators. Read/write property. 
Regular expression attributes var pat = /^Begin/i.multiline SYNTAX: DESCRIPTION: A read-only property of an instance of a RegExp object.lastIndex == 7 RegExp multiline regexp. Regular expression attributes var pat = /^Begin/g. RegExp exec(). Read-only property. {n. "g").global SYNTAX: DESCRIPTION: SEE: EXAMPLE: A read-only property of an instance of a RegExp object.11.ignoreCase SYNTAX: DESCRIPTION: SEE: EXAMPLE: A read-only property of an instance of a RegExp object. // pat. (?=).o/g. Read-only property. //or var pat = new RegExp("^Begin".lastIndex SYNTAX: DESCRIPTION: SEE: EXAMPLE: The character position after the last pattern match and which is the basis for subsequent matches when finding multiple matches in a string. lastIndex is the starting position. Use RegExp compile() to change.FieldCommander JavaScript Refererence Guide 3.4 RegExp object instance properties RegExp global regexp. Use RegExp compile() to change. That is. {n. var pat = /t. (?!). String match() var str = "one tao three tio one". characters. If a match is not found by one of them. then lastIndex is set to 0. there is an order of precedence. and metacharacters of regular expressions comprise a sub language for working with strings. $. \metacharacter | Descriptions Escape Groups and sets Repetition Anchors and metacharacters Alternation 3. "i"). and. It is true if "m" is an CER International bv 107 .}.m} ^. //or var pat = new RegExp("^Begin". like operators in all programming languages. pat. The following tables list regular expression operators in the order of their precedence.3 Regular expression precedence The patterns. (?:). RegExp exec() and RegExp test() use and set the lastIndex property. you may set the property at any time to any position. {n}. This property is used only in global mode after being set by using the "g" attribute when defining or compiling a search pattern. RegExp lastIndex regexp. index and input. A pattern definition such as this one. 
//or var pat = new RegExp("^Begin". CER International bv 108 .input == "one tao three tio one" input (RegExp) returnedArray. Use RegExp compile() to change. If there were no "m" in the attributes. sets the instance property regexp.source == "t.exec(str). The property index has the start position of the match in the target string. This property determines whether a pattern search is done in a multiline mode. String match() returns an array with two extra properties.input SYNTAX: DESCRIPTION: When String match()is called and the "g" is not used in the regular expression. /^t/m. var pat = /t. //or var pat = new RegExp("^Begin".index SYNTAX: DESCRIPTION: SEE: EXAMPLE: When String match() is called and the "g" is not used in the regular expression. var pat = /(t. the multiline attribute may be set. The properties that might be set are described in this section.o)\s(t.multiline is set // to true. input (RegExp). "igm").11.o/g. var pat = /^Begin/m. not the contents of the array elements. Read-only property. There is no static (or global) RegExp multiline property in Javascript Javascript since the presence of one is based on old technology and is confusing now that an instance property exists. index (RegExp) returnedArray.source SYNTAX: DESCRIPTION: SEE: EXAMPLE: The regular expression pattern being used to find matches in a string. String match() var str = "one tao three tio one". Use RegExp compile() to change. // rtn[0] == "tao thr" // rtn[1] == "tao" // rtn[2] == "thr" // rtn. String match() and RegExp exec() return arrays in which various elements and properties are set that provide more information about the last regular expression search.multiline would be set to false.multiline to true. for example. not including the attributes. "igm"). // pat. //or var pat = /^Begin/m.5 RegExp returned array properties Some methods. RegExp source regexp. pat. var rtn = pat.index == 4 // rtn. pat.exec(str). Read-only property. When a pattern is defined. // then pat.o" 3.r)/g. 
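A small sketch of the attribute properties described above; modern engines expose the same read-only flags on the pattern object:

```javascript
var pat = /^begin/im;  // "i" = ignoreCase, "m" = multiline
// The attributes used to build the pattern are visible as read-only properties.
var flags = [pat.ignoreCase, pat.multiline, pat.global];  // [true, true, false]
// With "m", ^ matches at the start of each line; with "i", case is ignored.
var found = pat.test("first line\nBegin here");  // true
```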
Regular expression syntax var str = "one tao three tio one". RegExp exec(). Regular expression attributes // In all these examples.FieldCommander JavaScript Refererence Guide SEE: EXAMPLE: attribute in the regular expression pattern being used. exec(str).+o". Creates a new regular expression object using the search pattern and options if they are specified.sets the global property to true m .o)\s(t. RegExp exec().a string with the new attributes for this RegExp object.compile("t. This method changes the pattern and attributes to use with the current instance of a RegExp object.set the multiline property to true Regular expression syntax.index == 4 // rtn.11. // use it some more CER International bv 109 . attributes . attributes . it must contain one or more of the following characters or be an empty string "": i . // set both to be true var regobj = new RegExp( "r*t".input == "one two three two one" 3.a string containing a regular expression pattern to use with this RegExp object.6 RegExp() SYNTAX: WHERE: RegExp object instance methods new RegExp([pattern[. Regular expression syntax var regobj = new RegExp("now"). String replace().set the multiline property to true RegExp(). var rtn = pat. it must contain one or more of the following characters or be an empty string "": i . // global search var regobj = new RegExp( "r*t".compile("r*t"). // rtn[0] == "two thr" // rtn[1] == "two" // rtn[2] == "thr" // rtn. attributes]]) pattern . An instance of a RegExp object may be used repeatedly by changing it with this method.sets the ignoreCase property to true g .FieldCommander JavaScript Refererence Guide SEE: EXAMPLE: String match() returns an array with two extra properties. "ig" ). "" ). // ignore case var regobj = new RegExp( "r*t".a string with the attributes for this RegExp object. or null on error. "i" ). "g" ). object . If the attributes string is passed. void. "ig"). RETURN: DESCRIPTION: SEE: EXAMPLE: RegExp compile() regexp.sets the ignoreCase property to true g . 
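The lastIndex behavior can be sketched like this (the global attribute is set; the positions assume the exact string shown):

```javascript
var pat = /t.o/g;
var str = "one tao three tio one";
var first = pat.exec(str);     // finds "tao" at position 4
var resumeAt = pat.lastIndex;  // 7, one past the matched text
var second = pat.exec(str);    // resumes at lastIndex and finds "tio"
```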
// use it some more regobj. String search() // no options var regobj = new RegExp( "r*t". var pat = /(t.r)/g.a new regular expression object. String match() var str = "one two three two one". If the attributes string is supplied.sets the global property to true m . The property input has a copy of the target string. index and input. // use this RegExp object regobj. index (RegExp). attributes]) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: pattern .a string with a new regular expression pattern to use with this RegExp object.compile(pattern[. String match(). exec(string)!=null. Default is RegExp. Tests a string to see if there is a match for a regular expression pattern. These elements and their numbers correspond to groups in regular expression patterns and replacement expressions. else false.exec(str)) != null) writeLog("Text = " + rtn[0] + " Pos = " + rtn. other methods are quicker and easier to use. Since RegExp exec() always includes all information about a match in its returned array. String match().true if there is a match. Default is RegExp. is both the most powerful and most complex. RegExp. probably most. A string.o".rightContext. When executed with the global attribute being set. The index property has the start position of the first text matched. it is the best. if any matches are made. while ((rtn = pat. of all the RegExp and String methods. but the behavior is more complex which allows further operations. if a match is found.index + " End = " + pat. Returns null if no match is found. is used as the target string.lastIndex.leftContext. "g". For many. appropriate RegExp object static properties. are set. you can easily loop through a string and find all matches of a pattern in it. appropriate RegExp object static properties. then RegExp. being set.FieldCommander RegExp exec() regexp. RegExp. and so forth. CER International bv 110 .input. boolean .input. specified by this. the returned array has the index and input properties. the target. 
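A sketch of building patterns dynamically with the constructor; compile() is shown only in a comment because it is a legacy method that not every engine keeps:

```javascript
// The constructor takes the pattern and attributes as strings, which makes it
// easy to build a pattern from a variable at run time.
var word = "three";
var pat = new RegExp(word, "i");      // equivalent to /three/i
var hit = pat.test("One Two THREE");  // true

// Legacy reuse of the same object:
// pat.compile("r*t", "ig");  // replaces the pattern and attributes in place
```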
As with String match(). When executed without the global attribute. element 0 of the returned array is the text matched. element 1 is the text matched by the first sub pattern in parentheses. When no more matches are found.lastIndex is set to the position after the last character in the text matched.a string on which to perform a regular expression match. If no string is passed. way to get all information about all matches in a string. If there is a match. This method is equivalent to regexp. and the input property has the target string that was searched.input. in the target string.exec([str]) SYNTAX: WHERE: RETURN: DESCRIPTION: JavaScript Refererence Guide SEE: EXAMPLE: str . such as RegExp. element 2 the text matched by the second sub pattern in parentheses. The property this.leftContext.rightContext. and so forth. The length property indicates how many text matches are in the returned array. Thus. RegExp. In addition. the same results as above are returned. and so forth are set. This method exec() begins searching at the position.an array with various elements and properties set depending on the attributes of a regular expression.lastIndex). providing more information about the matches. searches. RegExp. var pat = new RegExp("t. which is a read/write property. array . to be searched is passed to exec() as a parameter. providing more information about the matches. After a match.lastIndex is read/write and may be set at anytime. this.test([str]) SYNTAX: WHERE: RETURN: DESCRIPTION: str .$n. "g").a string on which to perform a regular expression match. These two properties are the same as those that are part of the returned array from String match() when used without its global attribute being set. perhaps only. // Display is: // Text = two Pos = 4 End = 7 // Text = tio Pos = 14 End = 17 RegExp test() regexp. This method. RegExp object static properties var str = "one two three tio one". this. such as RegExp.lastIndex is reset to 0.$n. 
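The loop described above can be sketched as follows (writeLog is replaced here by collecting the results into an array):

```javascript
var pat = /t.o/g;
var str = "one two three tio one";
var hits = [];
var m;
// With "g" set, each exec() call resumes at pat.lastIndex, so the loop
// visits every match; exec() returns null when no match remains.
while ((m = pat.exec(str)) !== null) {
  hits.push(m[0] + "@" + m.index);
}
// hits: ["two@4", "tio@14"]
```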
$n SYNTAX: DESCRIPTION: RegExp.input If no string is passed to RegExp exec() or to RegExp test().lastMatch SYNTAX: DESCRIPTION: SEE: EXAMPLE: This property has the text matched by the last pattern search.lastParen is equivalent to RegExp["$+"].input SYNTAX: DESCRIPTION: RegExp. var pat = /(t.o/. for compatibility with PERL. in the last pattern search. // "two" is matched SEE: EXAMPLE: RegExp. 3. back references in patterns. for compatibility with PERL. var pat = /t. It is the same text as in element 0 of the array returned by some methods.lastParen SYNTAX: DESCRIPTION: This property has the text matched by the last group. RegExp exec(). RegExp.exec(). String match(). RegExp exec(). String match(). String search() var rtn.test(str). Read-only property.lastMatch RegExp. "g". the lastIndex property is set to the character position after the text match. Read-only property. RegExp. regular expression replacement characters var str = "one two three two one". var str = "one two three tio one". var pat = /(t. RegExp returned array properties var str = "one two three two one". though there are few reasons to do so. Read-only property. The numbering corresponds to \n. pat.input = "one two three two one". that is.input is used as the target string. and $n.lastParen RegExp. Thus. when a match is found.match(pat) // RegExp. // RegExp.lastMatch is equivalent to RegExp["$&"]. substitutions in replacement patterns. then RegExp. RegExp test() var pat = /(t. To be used as the target string.o)/ pat.o)\s/ str. CER International bv 111 . One reason would be if you only wanted to know if a string had more than one match.input is equivalent to RegExp.lastMatch == "two" RegExp. RegExp.FieldCommander JavaScript Refererence Guide SEE: EXAMPLE: Though it is unusual.o/.7 RegExp object static properties RegExp. // rtn == true rtn = pat.$n The text matched by the nth group. test() may be used repeatedly on a string.exec(str). parenthesized sub pattern. RegExp exec(). 
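These static properties are legacy (kept for PERL compatibility) but still present in common engines; a sketch:

```javascript
var pat = /(t.o)\s(t.r)/;
var ok = pat.test("one two three two one");  // matches "two thr"
// After the search, the static properties describe the last match:
var group1 = RegExp.$1;         // "two"
var whole  = RegExp.lastMatch;  // "two thr"
```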
Regular expression reference characters.11. Read/write property. for compatibility with PERL. it must be assigned a value.$1 == "two" SEE: EXAMPLE: RegExp. the nth sub pattern in parenthesis.$_. RegExp. test() may be used in a special way when the global attribute. is set for a regular expression pattern. Like with RegExp exec(). // RegExp. // RegExp.leftContext is equivalent to RegExp["$`"].12.12 String Object The String object is a data type and is a hybrid that shares characteristics of primitive data types and of composite data types. This allows the interpreter to distinguish between a quotation mark that is part of a string and a quotation mark that indicates the end of the string.leftContext var str = "one two three two one". to the right of.rightContext RegExp. the text matched by the last pattern search.1 String as data type A string is an ordered series of characters.o)/ pat. The String is presented in this section under two main headings in which the first describes its characteristics as a primitive data type and the second describes its characteristics as an object. // RegExp.leftContext is equivalent to RegExp["$'"]. Read-only property. have special meaning to the interpreter and must be indicated with special character combinations when used in strings. that is. var pat = /(t. for compatibility with PERL.leftContext SYNTAX: DESCRIPTION: SEE: EXAMPLE: This property has the text before. Read-only property. that is. RegExp. Escape sequences for characters Some characters. For example. the text matched by the last pattern search.exec(str). for compatibility with PERL.exec(str). the first statement below puts the string "hello" into the variable hello.o)/ pat. RegExp.lastMatch. RegExp. var pat = /(t.rightContext SYNTAX: DESCRIPTION: SEE: EXAMPLE: This property has the text after.$n var str = "one two three two one". RegExp. it is enclosed in quotation marks.lastParen == "thr" RegExp. RegExp. such as a quotation mark. var word = hello. 
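leftContext and rightContext can be sketched the same way (again legacy properties, but widely supported):

```javascript
var pat = /t.o/;
pat.exec("one two three");
// The engine records the text on either side of the last match:
var before = RegExp.leftContext;   // "one "
var after  = RegExp.rightContext;  // " three"
```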
The second sets the variable word to have the same value as a previous variable hello: var hello = "hello".exec(str).FieldCommander SEE: EXAMPLE: JavaScript Refererence Guide RegExp.leftContext RegExp.o)+\s(t. The most common use for strings is to represent text. 3.lastMatch.leftContext == "one " RegExp. The table below lists the characters indicated by escape sequences: CER International bv 112 . RegExp.r)/ pat. to the left of. var pat = /(t.leftContext == " three two one" 3. To indicate that text is a string.rightContext var str = "one two three two one". . you should not use them. except that double quote strings are used less commonly by many scripters. the number is converted to a string. such as "\n". So if you are planning to port your script to some other Javascript interpreter.g. the following lines show different ways to describe a single file name: "c:\\autoexec. (e. cannot be used in back tick strings. which are explained below. Back quote Javascript provides the back quote "`". (e.g.bat" // traditional C method 'c:\\autoexec.g. Long Strings You can use the + operator to concatenate strings. (e. Any special characters represented with a backslash followed by a letter. as an alternative quote character to indicate that escape sequences are not to be translated..g." creates the variable proverb and assigns it the string "A rolling stone gathers no moss. There is no difference between the two in Javascript. var newstring = 4 + "get it". Single quote You can declare a string with single quotes instead of double quotes.bat` // alternative Javascript method Back quote strings are not supported in most versions of Javascript. For example.bat' // traditional C method `c:\autoexec.. "\0"is the null character) "033"is the escape character) "x1B"is the escape character) "u001B"is escape character) Note that these escape sequences cannot be used within strings enclosed by back quotes. 
This bit of code creates newstring as a string variable and assigns it the string "4get it"." If you try to concatenate a string with a number. CER International bv 113 . The following line: var proverb = "A rolling stone " + "gathers no moss. also known as the back-tick or grave accent.. value to be converted to a string as this string object. Strings have both properties and methods which are listed in this section. when strings are assigned using the assignment operator. then the empty string "" is used instead. a copy of a string is actually transferred to a variable. \"Why did the " "Italians lose the war?\" I told him I had " "no idea. This method returns a new string object whose value is the supplied value. Javascript strings may contain the "\0" character. String lastIndexOf() var s = "a string". on the other hand.3 String object instance properties String length SYNTAX: DESCRIPTION: SEE: EXAMPLE: string.12.12. var TestLen = TestStr." creates a long string containing the entire bad joke. var n = s. ". the following: var badJoke = "I was standing in front of an Italian " "restaurant waiting to get in when this guy " "came up and asked me.length property. Strings have instance properties and methods and are shown with a period. are assigned to variables and passed to parameters by reference. but the returned variable is DESCRIPTION: CER International bv 114 .length. The exception to this usage is a static method which actually uses the identifier String.length The length of a string. that is. the equal sign. a variable or parameter points to or references the original object.FieldCommander JavaScript Refererence Guide The use of the + operator is the standard way of creating long strings in Javascript.2 String as object Strictly speaking. For example. without the new operator. 3. when strings are passed as arguments to the parameters of functions. that is. Otherwise. 
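Concatenation with + and number-to-string conversion, runnable as-is (note that juxtaposing string literals without + is a ScriptEase extension; standard JavaScript requires the + operator):

```javascript
var newstring = 4 + "get it";  // the number is converted: "4get it"
var proverb = "A rolling stone " +
              "gathers no moss.";  // long strings are built with +
```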
as an example for calling a String property or method: var TestStr = "123".4 String() SYNTAX: WHERE: RETURN: String object instance methods new String([value]) value . The following code fragment shows how to access the . As an example of its hybrid nature. 3.\" he replied. \"Because they ordered ziti" "instead of shells. they are passed by value. These properties and methods are discussed as if strings were pure objects. A specific instance of a variable should be put in front of a period to use a property or call a method. the value ToString(value) is used.length. Objects. the assignment is by value. that is. It is a hybrid of a primitive data type and of an object. the number of characters in a string. at their beginnings. Further.". the + operator is optional.12. If value is not supplied. Note that if this function is called directly. then the same construction is done. instead of a variable created as an instance of String. 3. the String object is not truly an object. In Javascript. string .offset within a string.fromCharCode() This method gets the nth character code from a string. // // // // The use of the + operator is the standard way of creating long strings in Javascript.index of the character the encoding of which is to be returned.1).charAt(string. the number is converted to a string.. String charAt() string. the + operator is optional. This method returns a string value (not a string object) consisting of the current object and any subsequent arguments appended to it. String. or if position is less than 0. String charCodeAt() // To get the first character in a string. string.ToString() and appended to the newly created string. RegExp() var s = new String(123). use: string. Returns NaN if there is no character at the position. // To get the last character in a string. Note that the original object remains unaltered. number . If no character exists at location position.charAt(0)." If you try to concatenate a string with a number. In Javascript.length . 
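A sketch of the constructor and the length property:

```javascript
var obj = new String(123);  // the value is converted; obj holds "123"
var prim = String(123);     // called without new: returns a primitive string
var TestStr = "123";
var TestLen = TestStr.length;  // 3
```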
// This bit of code creates newstring as a string // variable and assigns it the string // "4get it". This method creates a new string whose contents are equal to the current object.FieldCommander SEE: EXAMPLE: JavaScript Refererence Guide converted to a string. For example. String concat() string. Array concat() // The following line: var proverb = "A rolling stone " + "gathers no moss.representing the unicode value of the character at position index of a string. the following: CER International bv 115 .character at position This method gets the character at the specified position.charAt(position) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: position . var newstring = 4 + "get it".]) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: stringN . Each argument is then converted to a string using global. String charCodeAt() string. The '+' operator performs the same function. // use as follows: var string = "a string". then NaN is returned. This value is then returned.A list of strings to append to the end of the current object. ." // // // // // creates the variable proverb and assigns it the string "A rolling stone gathers no moss.. rather than being returned as an object.charCodeAt(index) SYNTAX: WHERE: RETURN: SEE: DESCRIPTION: position .concat([string1. String charAt(). if substring not found. String charAt(). Since the index of the first character is 0. String indexOf() string. String substring() var string = "what a string". offset .The substring that is to be searched for within string offset . number .index of the last appearance of a substring in a string. String lastIndexOf() string. index of the first "a" to be found in the string when starting from the second letter of // the string. \"Why did the " "Italians lose the war?\" I told him I had " "no idea.FieldCommander JavaScript Refererence Guide var badJoke = "I was in front of an Italian " "restaurant waiting to get in when this guy " "came up and asked me. String lastIndexOf()." 
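charAt(), charCodeAt() and concat() in one sketch:

```javascript
var str = "a string";
var firstChar = str.charAt(0);               // "a"
var lastChar  = str.charAt(str.length - 1);  // "g"
var code      = str.charCodeAt(0);           // 97, the code for "a"
var joined    = "one".concat(" ", "two");    // "one two"
```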
// creates a long string containing the entire bad joke.a string with which to compare an instance string.substring to search for within string. of the first "a" appearing in the string. which is 2 in this example. String indexOf() This method is similar to String indexOf().An optional integer argument which specifies the position within string at which the search is to start. The method indexOf()may take an optional second parameter which is an integer indicating the index into a string where the method starts searching the string. Character positions within the string are numbered in increments of one beginning with zero. Default is 0. 1). String indexOf() returns the position of its first occurrence.index of the first appearance of a substring in a string.indexOf("a") // // // // // // returns the position. The search begins at offset if offset is specified. number . string. String localeCompare() string. the index of second character is 1. // // // // returns 3. number . String indexOf() searches the string for the string specified in substring. else -1. else -1. except that it finds the last occurrence of a character in a string instead of the first.\" he replied. otherwise the search begins at the beginning of the string.lastIndexOf(substring[. If substring is found.indexOf("a".indicating the relationship of two strings. var secondA = magicWord. offset]) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: substring . Default is 0.localeCompare(compareStr) SYNTAX: WHERE: RETURN: compareStr . if substring not found.optional integer argument which specifies the position within string at which the search is to start. For example: var magicWord = "abracadabra". offset]) SYNTAX: WHERE: RETURN: SEE: DESCRIPTION: substring . < 0 if string is less than compareStr = 0 if string is the same as compareStr > 0 if string is greater than compareStr CER International bv 116 . \"Because they ordered ziti" "instead of shells.indexOf(substring[. and input has the target string. 
appropriate RegExp object static properties. the return is an array with information about the match. The array has two extra properties: index and input. If one or more matches are found.an array with various elements and properties set depending on the attributes of a regular expression.$n. or string comes after compareStr.replace(pattern. The result is intended to order strings in the sort order specified by the system default locale. such as RegExp. the return is an array in which each element has the text matched for each find. If a match is found. var str = "one two three tio one". A null is returned if no match is found. RegExp exec(). CER International bv 117 . String replace() string. // rtn[0] == "two" // rtn[1] == "tio" // rtn. Regular expression replacement characters. String match() string. that is. string is searched for the first match to pattern. This method behaves differently depending on whether pattern has the "g" attribute.match(pat).a regular expression pattern to find or match in string. or a function. A null is returned if no match is found. zero. // rtn == "two" // rtn[0] == "two" // rtn[1] == "two" // rtn[2] == "w" // rtn.)o)/. The element numbers correspond to group numbers in regular expression reference characters and regular expression replacement characters. var str = "one two three tio one". string .rightContext. RegExp.a regular expression pattern to find or match in string.match(pattern) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: pattern . The length property of the array indicates how many matches there were in the target string.a replacement expression which may be a string. replexp . If the match is not global. a string with regular expression elements. If the match is global.index == 4 // rtn. 
RegExp object static properties // not global var pat = /(t(.FieldCommander DESCRIPTION: JavaScript Refererence Guide This method returns a number that represents the result of a locale-sensitive string comparison of this object with that object. and will be negative. replexp) SYNTAX: WHERE: RETURN: pattern . Elements 1 and following have the text matched by sub patterns in parentheses.leftContext. Returns null if no match is found. on whether the match is global. String replace(). The property index has the position of the first character of the text matched. string is searched for all matches to pattern. If any matches are made. RegExp. depending on whether string comes before compareStr in the sort order. String search(). and so forth are set. // global var pat = /(t(. the strings are equal. or positive.length == 2 rtn = str. Element 0 has the text matched. providing more information about the matches.match(pat). array .)o)/g. There are no index and input properties.the original string with replacements in it made according to pattern and replexp.input == "one two three two one" rtn = str. \$ The dollar sign character.. If a match is found. RegExp. rtn == "one twozzz three twozzz one".leftContext.replace(pat. Regular expression replacement characters. // rtn = // rtn = // rtn = // rtn = rtn == "one zzz three zzz one" str. five()). $` The text to the left of the text matched by a regular expression pattern. For example. that is.rightContext. "zzz"). "$&$&). String search().$n. rtn == "one twotwo three twotwo one". "$1zzz"). $9 The text that is matched by regular expression patterns inside of parentheses. String match(). rtn == "one 5 three 5 one" str..replace(pat. See (. $' The text to the right of the text matched by a regular expression pattern. str. $+ The text that is matched by the last regular expression pattern inside of the last parentheses. providing more information about the replacements. str.replace(pat. the last group. 
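The two modes of String match() side by side:

```javascript
var str = "one two three tio one";
// Not global: one match, with the captured groups and the index/input extras.
var local = str.match(/(t(.)o)/);  // ["two", "two", "w"], index 4
// Global: every matched text, with no group elements.
var all = str.match(/t.o/g);       // ["two", "tio"]
```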
$1 will put the text matched in the first parenthesized group in a regular expression pattern. $2 . The parameter replexp may be a: a simple string a string with special regular expression replacement elements in it a function that returns a value that may be converted into a string If any replacements are done. RegExp. $& The text that is matched by a regular expression pattern. it is replaced by the substring defined by replexp. The special characters that may be in a replacement expression are (see regular expression replacement characters): $1. such as RegExp.) under regular expression reference characters. var pat = /(two)/g.FieldCommander DESCRIPTION: JavaScript Refererence Guide This string is searched using the regular expression pattern defined by pattern. var str = "one two three two one".. } CER International bv 118 . and so forth are set.replace(pat. appropriate RegExp object static properties. RegExp object static properties var rtn.. SEE: EXAMPLE: function five() { return 5. FieldCommander String search() string. string or regular expression where the string is split. After a search is done. start of t in two str. in that it returns a substring from one index to another. RegExp exec(). var pat = /th/.substr(start. // == 14. Essentially this will mean that the string is split character by character.search(/Four/i). to create an array of all of the words in a sentence.integer specifying the position within the string to begin the desired substring.if no delimiters are specified.search(pattern) SYNTAX: WHERE: RETURN: DESCRIPTION: JavaScript Refererence Guide SEE: EXAMPLE: pattern . Returns -1 if there is no match.split(' '). Array join() /* For example. the position is relative to the beginning of the string.index from which to start.character. This method returns a number indicating the offset within the string where the pattern matched or -1 if there was no match.the starting position of the first matched portion or substring of the target string. 
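String replace() with a group reference and with a function, as in the five() example; in modern engines the function itself is passed and is called once per match:

```javascript
var str = "one two three two one";
var a = str.replace(/(two)/g, "$1zzz");  // "one twozzz three twozzz one"
function five() { return 5; }
// The function's return value is converted to a string for each match.
var b = str.replace(/two/g, five);       // "one 5 three 5 one"
```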
RETURN: DESCRIPTION: SEE: EXAMPLE: String substr() string.index at which to end. var wordArray = sentence.split([delimiterString]) delimiterString . That is. object . then it is treated as length + start or length + end. an array will be returned with the name of the string specified. The only difference is that if either start or end is negative.a substring (not a String object) consisting of the characters. string . search() cannot be used for global searches in a string. end]) start . determines where the string is split. The difference is that indexOf() is simple and search() is powerful. RegExp Object. This method splits a string into an array of strings based on the delimiters in the parameter delimiterString. String substring() This method is very similar to String substring(). The search() method ignores a "g" attribute if it is part of the regular expression pattern to be matched or found. end . If start is positive. start of th in three str. RegExp object static properties var str = "one two three four five". number . Both search() and indexOf() return the same character position of a match or find. If start is CER International bv 119 . // == 4. the appropriate RegExp object static properties are set.a regular expression pattern to find or match in string.search(/t/). If either exceeds the bounds of the string. If substring is not specified. length) SYNTAX: WHERE: start .search(pat). String match(). Regular expression syntax. String split() SYNTAX: WHERE: string. then either 0 or the length of the string is used instead. start of four String slice() SYNTAX: WHERE: RETURN: SEE: DESCRIPTION: string. // == 8. String replace(). str.slice(start[. use code similar to the following fragment: */ var sentence = "I am not a crook". The parameter delimiterString is optional and if supplied. The return is the same character position as returned by the simple search using String indexOf(). returns an array with one element which is the original string. 
The length parameter determines how many characters to include in the new substring.a substring starting at position start and including the next number of characters specified by length. This integer must be one greater than the desired end position to allow for the terminating null byte. The length of the substring retrieved is defined by end minus start. The start parameter is the first character in the new string. 5) // == "01234" str.the length. end . string . One. The end position is the index or position after the last character to be included. It is designed to CER International bv 120 .substring(2. though this functionality is currently unavailable.a substring starting at position start and going to but not including position end. in substring() the start position cannot be negative. This method gets a section of a string.toLocaleUpperCase() SYNTAX: RETURN: DESCRIPTION: string . in characters. This method retrieves a section of a string. str.a copy of a string with each character converted to upper case.substr(0. String indexOf(). String charAt(). use a Start position // of 0 and add 9 to it. that is. This method behaves exactly the same as String toUpperCase(). String toLowerCase().substr(2.a copy of a string with each character converted to lower case. 10) // == "0123456789" String toLocaleLowerCase() string. 5) // == "01234" str. it must be 0 or greater.toLocaleLowerCase() SYNTAX: RETURN: DESCRIPTION: SEE: string . String substring() var str = ("0123456789"). that is. String toLocaleUpperCase() String toLocaleUpperCase() string. Once it is implemented. // "0 + 9". end) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: start . It is designed to convert the string to lower case in a locale sensitive manner. to get the End position // which is 9. length .integer specifying the position within the string to begin the desired substring.substring(start. 
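search(), slice() and split() from the surrounding entries, in one runnable sketch:

```javascript
var str = "one two three four five";
var pos  = str.search(/t/);      // 4, the same position indexOf("t") reports
var posI = str.search(/Four/i);  // 14 -- a pattern can carry attributes like "i"
var tail = str.slice(-4);        // "five": negative indexes count from the end
var words = "I am not a crook".split(" ");  // ["I", "am", "not", "a", "crook"]
```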
String substr() // For example.integer specifying the position within the string to end the desired substring. str.substring(0. the second parameter in substring() indicates a position to go to. var str = "0123456789". though for the majority it will be identical to toLowerCase(). The start parameter is the index or position of the first character to include. of the substring to extract. this function may behave differently for some locales (such as Turkish). Two. to get the first nine characters // in string. substr() differs from String substring() in two basic ways. The end parameter marks the end of the string. 2) // == "56" String substring() string. String slice().substring(0. Another way to think about the start and end positions is that end equals start plus the length of the substring desired. String lastIndexOf(). string . This method. not the length of the new substring. 5) // == "23456" str. the position is relative to the end of the string.FieldCommander JavaScript Refererence Guide RETURN: DESCRIPTION: SEE: EXAMPLE: negative. This method behaves exactly the same as String toLowerCase().substr(-4. 5) // == "234" str. The following fragment illustrates. String toLocaleUpperCase() var string = new String("Hello.FieldCommander JavaScript Refererence Guide SEE: convert the string to upper case in a locale sensitive manner.toLowerCase() // This will return the string "hello. string .toLowerCase() SYNTAX: RETURN: DESCRIPTION: SEE: EXAMPLE: string . Once it is implemented. String toLocaleLowerCase() var string = new String("Hello. This method changes the case of a string. though this functionality is currently unavailable. CER International bv 121 .5 String object static methods String. World!"). world!".0x0042) // will set the variable string to be "AB".toUpperCase() SYNTAX: RETURN: DESCRIPTION: SEE: EXAMPLE: string . WORLD!".fromCharCode(chrCode[. The identifier String is used with this static method. instead of a variable name as with instance methods. 
String toUpperCase()..]) SYNTAX: WHERE: RETURN: DESCRIPTION: SEE: EXAMPLE: chrCode . .. string. This method changes the case of a string.a copy of a string with all of the letters changed to upper case. string.copy of a string with all of the letters changed to lower case. String toUpperCase() string.character code. 3. String().fromCharCode(0x0041.toUpperCase() // This will return the string // "HELLO.fromCharCode() string.string created from the character codes that are passed to it as parameters. String toUpperCase(). this function may behave differently for some locales (such as Turkish). or list of codes. World!"). The arguments passed to this method are assumed to be unicode characters.12. String toLowerCase(). String charCodeAt() // The following code: var string = String. to be converted. though for the majority it will be identical to toUpperCase(). String toLocaleLowerCase() String toLowerCase() string. Date getUTCMilliseconds() . . . . . . . . . . . . 65. . 68. . . . . 67 . 68 . . . . . . . Date getMilliseconds() . 63 . . .fprintf() . 63. . . . 119 . . . 56. . . . 60. . . . . . . Array unshift() . . . . . 67 . 66 . .fromSystem() . . . . . . . 68. . . . 67 . . . 7. . . . . . . . . . . . . 62 . . . Array join() . . 77. . . . . . . Clib. . . . . .getc() . . . . . . . . . 70 . . . 74. . 78. . . . . . . . . . . . . . . . . . . . . . . . 74. 54 . . . 54 .fopen() . . . . . 62 57. . . . . Date getTime() . . . .FieldCommander JavaScript Refererence Guide Function index Array Object . . . . . . . Buffer getValue() . . . . . . . 53. . Date setUTCMilliseconds() . Array splice() . . . . . . . . . . . . . . . 68 . . . 58. . . . Clib. 63. . . . . . Clib. . 57. . . . . . . . . . . . . . . . . . . . . . . . . . . Date setUTCDate() . . . . . . . . . . . . . . . . 72. . 74. . . . . . . . . . . . . . . . Date getUTCFullYear() . . 63. . . . 80-83. 64 . . . . . . . . . . . . . . . . 58 . . . . . . . . . . . . Clib. . . . Date getUTCDate() . . . . .fwrite() . 58 . . . . . . . . 
Function index

Array Object: Array(), concat(), join(), length, pop(), push(), reverse(), shift(), slice(), sort(), splice(), toString(), unshift()

Boolean Object: Boolean(), Boolean.toString()

Buffer Object: Buffer(), Buffer[] Array, bigEndian, cursor, data, getString(), getValue(), putString(), putValue(), size, subBuffer(), toString(), unicode

Clib Object: Clib.fclose(), Clib.feof(), Clib.fflush(), Clib.fgetc(), Clib.fgetpos(), Clib.fgets(), Clib.fopen(), Clib.fprintf(), Clib.fputc(), Clib.fputs(), Clib.fread(), Clib.freopen(), Clib.fscanf(), Clib.fseek(), Clib.fsetpos(), Clib.ftell(), Clib.fwrite(), Clib.getc(), Clib.putc(), Clib.remove(), Clib.rename(), Clib.rewind(), Clib.sprintf(), Clib.sscanf(), Clib.ungetc()

Date Object: getDate(), getDay(), getFullYear(), getHours(), getMilliseconds(), getMinutes(), getMonth(), getSeconds(), getTime(), getTimezoneOffset(), getUTCDate(), getUTCDay(), getUTCFullYear(), getUTCHours(), getUTCMilliseconds(), getUTCMinutes(), getUTCMonth(), getUTCSeconds(), getYear(), setDate(), setFullYear(), setHours(), setMilliseconds(), setMinutes(), setMonth(), setSeconds(), setTime(), setUTCDate(), setUTCFullYear(), setUTCHours(), setUTCMilliseconds(), setUTCMinutes(), setUTCMonth(), setUTCSeconds(), setYear(), toDateString(), toGMTString(), toLocaleDateString(), toLocaleString(), toLocaleTimeString(), toString(), toSystem(), toTimeString(), toUTCString(), valueOf(), Date.fromSystem(), Date.parse(), Date.UTC()

Function Object: Function(), apply(), call(), toString()

Global Object: defined(), escape(), eval(), getArrayLength(), getAttributes(), instanceof(), isFinite(), isNaN(), main(), parseFloat(), parseInt(), setArrayLength(), setAttributes(), sscanf(), ToBoolean(), ToBuffer(), ToBytes(), ToInt32(), ToInteger(), ToNumber(), ToObject(), ToPrimitive(), ToString(), ToUint16(), ToUint32(), typeof(), undefine(), unescape(), valueOf()

Math Object: Math.E, Math.LN2, Math.LN10, Math.LOG2E, Math.LOG10E, Math.PI, Math.SQRT1_2, Math.SQRT2, Math.abs(), Math.acos(), Math.asin(), Math.atan(), Math.atan2(), Math.ceil(), Math.cos(), Math.exp(), Math.floor(), Math.log(), Math.max(), Math.min(), Math.pow(), Math.random(), Math.round(), Math.sin(), Math.sqrt(), Math.tan()

Number Object: toExponential(), toFixed(), toLocaleString(), toPrecision(), toString()

Object Object: hasOwnProperty(), isPrototypeOf(), propertyIsEnumerable(), toLocaleString(), toString()

RegExp Object: index (RegExp), input (RegExp), RegExp(), compile(), exec(), global, ignoreCase, lastIndex, multiline, source, test(), RegExp.$n, RegExp.input, RegExp.lastMatch, RegExp.lastParen, RegExp.leftContext, RegExp.rightContext

String Object: String(), charAt(), charCodeAt(), concat(), indexOf(), lastIndexOf(), length, localeCompare(), match(), replace(), search(), slice(), split(), substr(), substring(), toLocaleLowerCase(), toLocaleUpperCase(), toLowerCase(), toUpperCase(), String.fromCharCode()
FieldCommander JavaScript Reference Guide

Trademarks

CER and FieldCommander are registered trademarks of CER International bv. All other product names and services identified throughout this book are trademarks or registered trademarks of their respective companies. They are used throughout this manual in editorial fashion only and for the benefit of such companies. No such uses, or the use of any trade name, is intended to convey endorsement or other affiliation with this manual.

Copyrights

Copyright © 2008 CER International bv. All rights reserved. No part of this publication may be reproduced in any form, or stored in a database or retrieval system, or transmitted or distributed in any form by any means, electronic, mechanical photocopying, recording, or otherwise, without the prior written permission of CER International bv, except as permitted by the Copyright Act of 1976 and except that program listings may be entered, stored and executed in a computer system.

THE INFORMATION AND MATERIAL CONTAINED IN THIS MANUAL ARE PROVIDED "AS IS," WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY WARRANTY CONCERNING THE ACCURACY, ADEQUACY, OR COMPLETENESS OF SUCH INFORMATION OR MATERIAL OR THE RESULT TO BE OBTAINED FROM USING SUCH INFORMATION OR MATERIAL. NEITHER CER INTERNATIONAL BV NOR THE AUTHORS SHALL BE RESPONSIBLE FOR ANY CLAIMS ATTRIBUTABLE TO ERRORS, OMISSIONS, OR OTHER INACCURACIES IN THE INFORMATION OR MATERIAL CONTAINED IN THIS MANUAL, AND IN NO EVENT SHALL CER INTERNATIONAL BV OR THE AUTHORS BE LIABLE FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF SUCH INFORMATION OR MATERIAL.

FieldCommander is a product of:
CER International bv
Postbus 258
NL 4700 AG Roosendaal
The Netherlands
TEL: +31 (0)165 557417
FAX: +31 (0)165 562151

Part no. FCJSREF, revision 3.31
The latest documentation is available from www.cer.com.
https://www.scribd.com/document/62621848/Ajax
#include <iostream>
using namespace std;

int main() {
    for (int i = 1; i <= 5; i++) {
        cout << i << "\n";
    }
    return 0;
}

1
2
3
4
5

Multiple initializations, condition checks and loop counter updates can be performed in a single for loop. Please see the below example.

#include <iostream>
using namespace std;

int main() {
    for (int i = 1, j = 100; i <= 5 || j <= 800; i++, j = j + 100) {
        cout << "i=" << i << ", j=" << j << "\n";
    }
    return 0;
}

i=1, j=100
i=2, j=200
i=3, j=300
i=4, j=400
i=5, j=500
i=6, j=600
i=7, j=700
i=8, j=800
https://www.alphacodingskills.com/cpp/cpp-for-loop.php
Trying to run a test project on FastAPI, on Ubuntu 20.04, using Docker.

Dockerfile:

FROM python:3.7
RUN pip install fastapi uvicorn
EXPOSE 8080
COPY ./app /app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8080"]

main.py:

from typing import Optional
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}

I create the Docker image with

docker build -t myimage .

and then run it in a container:

docker run -d --name mycontainer -p 80:80 myimage

I get the error:

docker: Error response from daemon: driver failed programming external connectivity on endpoint mycontainer (8c74a07c055929891d80994584452fde936a76d690ed60150d134838c0a50b59): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use.

If I change the port when starting the container:

docker run -d --name mycontainer -p 8000:8000 myimage

the container is created, but the page at 127.0.0.1:8080/docs does not open. What am I doing wrong? The container is running, but for some reason the port is not exposed. And why does the container exit immediately?

CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS                      PORTS   NAMES
62740ab0782b   myimage   "uvicorn app.main:ap…"   11 minutes ago   Exited (1) 11 minutes ago           mycontainer

Comments:

@MichaelTetelev Well, the docs say that it is not necessary. And even if I add it, nothing changes. (Вадим, 2021-09-04)

Port 80 is already in use, so it cannot be bound. When you use 8080, check the container logs with docker logs <container-id>; there is probably an error message there. (Roman Konoval, 2021-09-04)

@RomanKonoval Yes, it writes "Cannot assign requested address". (Вадим, 2021-09-04)

Try listening on localhost, i.e. specify "--host", "127.0.0.1" in the start parameters.

Well, the error message should be included in the text of the question, not in a comment, since it is part of the question. (Roman Konoval, 2021-09-04)

I don't know much about deploying servers, but didn't the Dockerfile need the command uvicorn main:app? (Michael Tetelev, 2021-09-04)
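For what it's worth (this suggestion is not part of the original thread): the Dockerfile above exposes port 8080 and starts uvicorn on port 8080, so the -p mapping in the run command has to target container port 8080 as well, for example:

```
docker run -d --name mycontainer -p 8080:8080 myimage
```

After that, http://127.0.0.1:8080/docs should be reachable, assuming the image itself builds and starts correctly.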
https://www.tutorialfor.com/questions-370100.htm
# Ways to Get a Free PVS-Studio License

![PVS-Studio Free](https://habrastorage.org/r/w1560/getpro/habr/post_images/22b/881/1ec/22b8811ec8c6c8324bae5652fe6aa70b.png)

There are several ways to get a free license of the PVS-Studio static code analyzer, which is designed for finding errors and potential vulnerabilities. Open source projects, small closed projects, public security specialists and holders of the Microsoft MVP status can use the license for free. This article briefly describes each of these options.

[PVS-Studio](https://www.viva64.com/en/pvs-studio/) is a tool designed to detect errors and potential vulnerabilities in the source code of programs written in C, C++, C# and Java. It works in Windows, Linux and macOS environments. PVS-Studio is a paid B2B solution used by many teams in various companies. List of [clients](https://www.viva64.com/en/customers/).

So let's consider the cases when the PVS-Studio analyzer can be used free of charge.

Open Source Projects
--------------------

PVS-Studio can be used for free by programmers participating in the development of open source projects hosted on GitHub, GitLab or Bitbucket. Anyone who wishes can get a free license for 1 year. To get a license, you need to:

* Go to the page: [Free PVS-Studio License for Open Source](https://www.viva64.com/en/open-source-license/);
* Enter your name and the e-mail address to which you'd like to receive a license key;
* Enter the link to your GitHub/GitLab/Bitbucket profile;
* Send a request for a free license.

Upon expiration of the license, you can get a new license key in the same way. The key is individual and can only be used to check open source projects hosted on GitHub/GitLab/Bitbucket. The free license doesn't extend to project mirrors.

More details about this type of free licensing are given in the article "[Free PVS-Studio for those who develops open source projects](https://www.viva64.com/en/b/0600/)".
Closed Projects
---------------

There are many small private projects developed by enthusiasts, for example, games created by indie developers or projects of academic focus. The free licensing option based on adding comments of a special type to the code suits these projects well. The point is that such comments are unacceptable in large corporate projects, but developers may well add them to their individual projects. Here are these comments:

**Comments for students (academic license):**

```
// This is a personal academic project. Dear PVS-Studio, please check it.
// PVS-Studio Static Code Analyzer for C, C++, C#, and Java: http://www.viva64.com
```

**Comments for individual developers:**

```
// This is an independent project of an individual developer. Dear PVS-Studio, please check it.
// PVS-Studio Static Code Analyzer for C, C++, C#, and Java: http://www.viva64.com
```

By the way, this type of free license can actually be used not only in closed projects, but in open ones as well.

**Comments for free open source projects:**

```
// This is an open source non-commercial project. Dear PVS-Studio, please check it.
// PVS-Studio Static Code Analyzer for C, C++, C#, and Java: http://www.viva64.com
```

You need to go through two steps to start using the PVS-Studio code analyzer for free.

**Step 1.** If you are using PVS-Studio as a Visual Studio plugin or you are using the «C and C++ Compiler Monitoring UI» (Standalone.exe) utility, enter the following license key:

Name: PVS-Studio Free
Key: FREE-FREE-FREE-FREE

If you are using PVS-Studio for Linux/macOS, use the following command:

pvs-studio-analyzer credentials PVS-Studio Free FREE-FREE-FREE-FREE

Note. Previously, a comment was enough to activate the free license for the Linux version. Now you also need to enter this special key, because without it, some scenarios for using the analyzer turned out to be inconvenient. [Read more](https://stackoverflow.com/a/65475501/7772356).
**Step 2.** Make edits in all the compilable files of your project. We mean files with the extensions c, cc, cpp, cs, java and others. You don't have to change header files. You have to write the two comment lines at the beginning of each file. If your project has a large number of files, you can use the 'how-to-use-pvs-studio-free' utility. You will need to specify the comment to insert and the directory with the code. After that, the utility will recursively traverse all the files in the folder and its subfolders, adding the necessary comments to the code. You can download the utility (together with the source code) here: [how-to-use-pvs-studio-free](https://github.com/viva64/how-to-use-pvs-studio-free).

You can read about some additional details of this type of free licensing in the article "[How to use PVS-Studio for Free](https://www.viva64.com/en/b/0457/)". Please be sure to read this article if you decide to choose this option of free usage.

Security experts
----------------

Public security experts who specialize in searching for vulnerabilities can [write to us](https://www.viva64.com/en/about-feedback/) and get a free license for the PVS-Studio analyzer. Everyone who wishes to get the license and support will have to confirm that they specialize in security issues and conduct public activities, for example, write articles. We'll sort out the details by mail. Read more: "[Handing out PVS-Studio Analyzer Licenses to Security Experts](https://www.viva64.com/en/b/0510/)".

Microsoft MVP
-------------

The first people to whom we offered free licenses were Microsoft MVPs. The post "[Free PVS-Studio licenses for MVPs](https://www.viva64.com/en/n/0089/)" appeared back in 2011. Hardly anyone remembers this now, but the offer still stands. If you are a Microsoft MVP, [email us](https://www.viva64.com/en/about-feedback/) and specify your MVP profile on Microsoft's website.
We will send you a license that allows using PVS-Studio for 12 months without any restrictions, including for commercial purposes. After it expires, the license can be extended.

Conclusion
----------

Finally, a reminder of the main scenario:

* A trial analyzer version is available on the site. Here is the [page](https://www.viva64.com/en/pvs-studio-download/) where you can download PVS-Studio and get a trial key.
* A company can [purchase](https://www.viva64.com/en/order/) the license and get quick and proficient support. Your developers will communicate directly with our developers from the PVS-Studio team. No middlemen, support staff and so on. Only direct communication with programmers and me (the technical director). [Example](https://www.viva64.com/en/b/0612/).

Thanks for your attention, and we wish you to prevent as many errors as possible by using PVS-Studio. Just don't forget that the point of the static analysis methodology is its regular use, not occasional checks. Good luck!
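As a footnote to Step 2 above: the 'how-to-use-pvs-studio-free' utility inserts the comments for you. For illustration only, here is a hypothetical Python sketch of the same idea; this is not the real utility's code, and the function name and skip logic are my own assumptions:

```python
import os

# The two comment lines for individual developers, taken verbatim from the article.
COMMENT = (
    "// This is an independent project of an individual developer. "
    "Dear PVS-Studio, please check it.\n"
    "// PVS-Studio Static Code Analyzer for C, C++, C#, and Java: "
    "http://www.viva64.com\n"
)

# Compilable-file extensions mentioned in the article (header files are skipped).
EXTENSIONS = (".c", ".cc", ".cpp", ".cs", ".java")

def add_comments(root):
    """Recursively prepend the license comment to every compilable file under root."""
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            with open(path, "r", encoding="utf-8") as f:
                text = f.read()
            if text.startswith("// This is"):
                continue  # comment already present, don't add it twice
            with open(path, "w", encoding="utf-8") as f:
                f.write(COMMENT + text)
            changed.append(path)
    return changed
```

Calling `add_comments("src")` returns the list of files it modified, which makes it easy to review what was touched.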
https://habr.com/ru/post/443340/
Hi Everyone,

I have to read data from a file into an array. I have that part figured out. I can't get my function to work. I have to write a sumArray, avgSales, highSales, and lowSales. Here is what I got so far. Could someone please help me with one of the functions? I think I can get the rest if I have a good example to follow. Thanks in advance.

#include <iostream>
#include <string>
#include <fstream>
using namespace std;

//Constant array size declaration.
const int LOCATIONS = 20;

//Function Prototypes
void readData(int a[], int size, double& data);
double sumArray(double& totalSales);

int main()
{
    cout.setf(ios::fixed);
    cout.setf(ios::showpoint);
    cout.precision(2);

    ifstream in;
    char inFile[256];
    double a[LOCATIONS] = {0}, sales, totalSales = 0;
    int data = 0;

    //Prompts the user for the input file name.
    cout << "Enter the input file name: ";
    cin >> inFile;

    // Opens the stream and connects to the file.
    in.open(inFile);

    //Checks to see if the input file opened properly.
    //Displays an error message if file not opened.
    if(in.fail( ))
    {
        cout << "Input file opening failed.\n";
        exit(1);

        //Closes file explicitly.
        in.close( );
    }

    //Finds and prints the total sales amount.
    sumArray(totalSales);
    cout << "The total sales are $" << totalSales << endl;

    //Reads in and displays data.
    while(in >> a[data])
    {
        sales = a[data];
        data++;
    }

    cout << "Number Of Locations In File: " << a[0] << endl << endl;

    for(int index = 1; index < data; index++)
    {
        cout << "Location Number " << index << ": $" << a[index] << endl;
    }
}

double sumArray(double& totalSales)
{
    ifstream in;
    double a[LOCATIONS] = {0}, sales;
    int data = 0;

    //Reads in and displays data.
    while(in >> a[data])
    {
        for(int index = 1; index < data; index++)
        {
            totalSales = totalSales + a[index];
            index++;
            return(totalSales);
        }
    }
}

My total sales output is $0.00, so I know that it's not reading in correctly. Please guide me.
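This is not an answer from the original thread, but for reference: two things stop the posted sumArray from working. It opens a brand-new ifstream that is never attached to a file, so its while loop reads nothing, and it returns from inside the inner loop on the first iteration. On top of that, main calls sumArray before any data has been read. One way to restructure it (the signature below is my own choice) is to read the file into the array once in main and pass the array in:

```cpp
// Sums the sales values stored in a[1] .. a[count - 1].
// a[0] holds the number of locations, matching the poster's file format.
double sumArray(const double a[], int count)
{
    double totalSales = 0;
    for (int index = 1; index < count; index++)
    {
        totalSales = totalSales + a[index];
    }
    return totalSales;
}
```

In main, call it after the read loop finishes: `totalSales = sumArray(a, data);`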
https://www.daniweb.com/programming/software-development/threads/85648/using-functions-with-arrays
#include <djv_security.h>

This class provides access to the internal security information.

- Remove all the user property-sets.
- Create a new SecurityProvider instance of PWD1 for encrypting the resource.
- Create a new SecurityProvider instance of PWD2 for encrypting the resource.
- Create a new SecurityProvider instance of PXL4 for encrypting the resource.
- Duplicate the SecurityProvider instance. This method is useful when you inherit a SecurityProvider instance to serialize the Secure DjVu to a new file.
- Create a SINF chunk from the security information gathered in this instance. Returns the SINF chunk.
- Get the list of users who have a permission entry on this document. This method fails if you don't have sufficient privileges.
- Get the list of available users. This method gathers the full list of users that can be obtained with your privileges. Please note that there might be users whom you cannot obtain with this method. This method fails if you don't have sufficient privileges.
- Get the property-set which can be edited.
- Get the user's property-set which can be edited.
- Get the number of user property-sets.
- Get the property from both the global and the user property-sets. This method first tries to get the specified property from the global property-set, which is obtained by the getPropSet method, and then tries to get the same one from the user property-sets.
- Get the property-set.
- Get the current user ID.
- Get the user's property-set.
- Determine whether the document has an expiry. Returns true if this document has an expiry; otherwise false.
- Determine whether the specified property is enabled or not. The actual security enforcement status is determined by the isEnforced method, and you should use the isPermitted method to determine whether a user can (or cannot) do the things.
- Determine whether the security is enforced or not. Returns true if the security is enforced; otherwise false.
- Determine whether the document is expired or not. This method checks both document-level and user-level expiries.
- Determine whether a user has the specified privilege or not. This method is used to determine the actual security enforcement; it is not a method to determine a property status.
- Determine whether a specified user has a permission entry on this document or not. This method fails if you don't have sufficient privileges. Returns true if associated; otherwise false.
- Remove a user property-set. Throws an errInvalidParam exception if there is no permission for the specified privilege.
https://www.cuminas.jp/sdk/classCelartem_1_1DjVu_1_1SecurityProvider.html
Apple began accepting Swift-coded applications in the App Store after the launch of both the iPhone 6 and iPhone 6 Plus. In this article, I provide a brief introduction to the most interesting features included in this new programming language, which provides an alternative to Objective-C when you want to create native iOS apps with XCode 6.0.x.

A New Language, A New XCode Version

Swift is an object-oriented language that includes some functional programming concepts. I believe the most important benefit that Swift provides is that you can avoid working with Objective-C to develop a native iOS app. However, you still have to work with XCode, as it is the only IDE that allows you to work with Swift to develop iOS apps. Luckily, XCode 6.0.1 solved many of the bugs related to Swift that made it almost impossible to complete a working session without unexpected errors and crashes in the beta versions of the IDE that supported Swift. There are still many bugs that will require an update to be solved, but I was able to use XCode and the iOS emulator while working on complex apps without major problems for many days. You can install XCode 6.0.1 from the Mac App Store. I don't recommend working with earlier XCode versions that included support for Swift because they are very unstable.

If you have some experience with C#, Java, Python, Ruby, or JavaScript, you will find Swift's syntax easy to learn. You may still miss many advanced features included in those programming languages that aren't available in Swift, but you will benefit from the features that Swift did borrow from these and other modern programming languages. Swift doesn't require the use of header files, and you can import any Objective-C module and C libraries into Swift with simple import statements. XCode 6.0.1 offers a Swift interactive Playground that allows you to write Swift lines of code and check the results immediately.
This Playground is really helpful for learning Swift and its interaction with the APIs because it provides nice code completion features. You simply need to start XCode, select File | New | Playground…, enter a name for the Playground, select iOS as the desired platform, click Next, select the desired location for the Playground file, and click Create. XCode will display a Playground window with the following lines of code:

// Playground - noun: a place where people can play
import UIKit
var str = "Hello, playground"

You can add your Swift lines of code and check the results as you enter them in the right-hand side of the window. In fact, you can use the Playground to test the sample lines of code I will provide (see Figure 1). Figure 1: Checking the results of Swift code in the Playground.

Type Inference, Variables, and Constants
Swift doesn't require you to write semicolons at the end of every statement. The type inference mechanism determines the best type, and you can avoid specifying the type of a value. You can declare constants with the let keyword and variables with var. For example, the following line creates an immutable String named welcomeText:

let welcomeText = "Welcome to my Swift App"

The following line uses the var keyword to create a highScore Int mutable variable:

var highScore = 5000

One nice feature is that you can use an underscore (_) as a number separator to make numbers easier to read within the code. For example, the following lines assign 3000000 and 5000 to the previously declared highScore variable:

highScore = 3_000_000
highScore = 5_000

You can explicitly specify the desired type for a variable. The following line creates a scoreAverage Double mutable variable. In this case, the type inference mechanism would choose Int as the best type because the initial value is 100. However, I want a Double to support future values, so I specify the type name in the variable declaration.
var scoreAverage: Double = 100

Swift makes it simple to include values in strings. You just need to use a backslash (\) and include an expression within parentheses. For example, the following line:

var highScoreText = "The highest score so far is: \(highScore)"

stores the following string in highScoreText: "The highest score so far is: 5000". The following line includes a more complex expression that doubles the value of highScore:

var doubleHighScoreText = "Are you ready to reach this score: \(highScore * 2)"

The line stores the following string in doubleHighScoreText: "Are you ready to reach this score: 10000". You can declare an optional value by adding a question mark after the type. The following lines declare the optionalText variable as an optional String. The initial value is nil and indicates that the value is missing. The next lines use if combined with let to retrieve the value in an operation known as optional binding. In the first case, the value is nil and the println line doesn't execute. Notice that nil means the absence of a value (and don't confuse it with the usage of nil in Objective-C). After the line that assigns a value to optionalText executes, the next if combined with let retrieves the value in text and prints an output with the retrieved String. Notice that the Boolean expression for the if statement doesn't use parentheses because they are optional. However, braces around the body are always required even when the statement is just one line.

var optionalText: String?
if let text = optionalText {
    println("Optional text \(text)")
}
optionalText = "You must work harder to increase the score!"
if let text = optionalText {
    println("Optional text \(text)")
}

Generic Classes and Functions
Swift allows you to make generic forms of classes. The following lines show a simple example of a Point3D class that specifies T inside angle brackets to make a generic class with three variables: x, y, and z, and a method that returns a String description.
You can also use generics with functions, methods, enumerations, and structures.

class Point3D<T> {
    var x: T
    var y: T
    var z: T
    init(x: T, y: T, z: T) {
        self.x = x
        self.y = y
        self.z = z
    }
    func description() -> String {
        return "Point.X: \(self.x); Point.Y: \(self.y); Point.Z: \(self.z)."
    }
}

The initializer (init) sets up the class when you create an instance. In this case, the initializer assigns the values received as three arguments for x, y, and z. The arguments use the same name as the class variables, so it is necessary to use self to distinguish the x, y, and z properties from the arguments. If you need to add some cleanup code before the instance is deallocated, you can put some code in deinit. The following lines create an instance of a Point3D with Int values and another Point3D with Double values. Then, two lines print the result of calling the description method for each Point3D instance.

var pointInt = Point3D<Int>(x: 10, y: 5, z: 5)
println(pointInt.description())
var pointDouble = Point3D<Double>(x: 15.5, y: 5.5, z: 32.5)
println(pointDouble.description())

I'll use a Point3D<Int> instance to clarify the use of the let keyword with instances. The following line declares pointIntConst as a constant.

let pointIntConst = Point3D<Int>(x: 5, y: 5, z: 5)

Thus, you cannot change the value for pointIntConst; that is, you cannot assign a different instance of Point3D<Int> to it. However, you can change the values for the instance properties. For example, the following line is valid and changes the value of x.

pointIntConst.x = 3

As happens in many modern programming languages, functions are first class citizens in Swift. You can use functions as arguments for other functions or methods. The following lines declare the applyFunction function that receives an array of Int (list) and a function (condition) that receives an Int and returns a Bool value.
The function executes the received function (condition) for each element in the input array and adds the element to an output array whenever the result of the called function is true. This way, only the elements that meet the specified condition are going to appear in the resulting array of Int.

func applyFunction(list: [Int], condition: Int -> Bool) -> [Int] {
    var returnList = [Int]()
    for item in list {
        if condition(item) {
            returnList.append(item)
        }
    }
    return returnList
}
https://www.drdobbs.com/mobile/swift-introduction-to-apples-new-program/240169130
CC-MAIN-2020-16
refinedweb
1,387
61.67
Controlled Integrator example 2
Nengo Example: Controlled Integrator 2: $\dot{x} = \mathrm{Ax}(t) + \mathrm{Bu}(t)$
The control in this circuit is A in that equation. This is also the controlled integrator described in the book “How to build a brain.”

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import nengo
%load_ext nengo.ipynb

Step 1: Create the network
As before, we use standard network-creation commands to begin creating our controlled integrator. An ensemble of neurons will represent the state of our integrator, and the connections between the neurons in the ensemble will define the dynamics of our integrator. The control signal will be a piecewise function that starts at 0 and changes halfway through the run:

control_func = piecewise({0: 0, 0.6: -0.5})

When the control is 0 (t < 0.6), the neural integrator performs near-perfect integration. However, when the control value drops to -0.5 (t > 0.6), the integrator becomes a leaky integrator. This means that with negative input, its stored value drifts towards zero. Download controlled_integrator2 as an IPython notebook or Python script.
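The qualitative effect of that control signal can be checked with a plain (non-neural) forward-Euler simulation of the same dynamics. This is an illustrative sketch, not part of the Nengo model; the input pulse u is an assumption made just for the demonstration:

```python
# dx/dt = a*x(t) + u(t): the control sets a. a = 0 before t = 0.6 s
# (near-perfect integrator), a = -0.5 afterwards (leaky integrator).
dt = 0.001
x = 0.0
trace = []
for step in range(1200):              # 1.2 s of simulated time
    t = step * dt
    a = 0.0 if t < 0.6 else -0.5      # the piecewise control signal
    u = 1.0 if t < 0.3 else 0.0       # brief input pulse to charge x (assumed)
    x += dt * (a * x + u)             # forward-Euler step
    trace.append(x)

print(round(trace[599], 3), round(trace[-1], 3))  # value holds, then leaks toward zero
```

With the control at 0 the stored value holds steady; once the control drops to -0.5 the value decays, matching the leaky behavior described above.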
https://pythonhosted.org/nengo/examples/controlled_integrator2.html
CC-MAIN-2017-13
refinedweb
182
51.14
The Canvas widget is one of the more unique and advanced widgets Tkinter has to offer in Python. Its use is similar to that of a drawing board, which you can draw and paint on. You can draw all kinds of things on the Canvas widget, such as graphs, plots, pie charts, lines, rectangles etc. One of its more interesting uses is with the Matplotlib library used for statistics. Matplotlib can create all kinds of graphs and plots, and you can directly attach these plots onto the Tkinter Canvas.

Canvas Syntax

mycanvas = Canvas(master, options...)

Canvas Options
List of all options available for Canvas.

Using the Tkinter Canvas Widget
We’ll now proceed to cover many different uses of the Canvas widget. The Canvas widget is pretty large and can be paired with many other widgets and libraries. As such, what we have described below is only a portion of the Canvas widget’s total functionality. If you’re further interested in Canvases, make sure to read up on its documentation for more.

The Canvas widget is very dependent on coordinates and positions. You should know that unlike a regular axis, the “origin point” or the coordinates (0,0) are located on the top left corner of the screen. The X values increase left to right and Y values increase from top to bottom.

Canvas with arcs
One of the most popular functions, create_arc() is used to draw arcs on the Tkinter Canvas. It takes a set of coordinates in the following format X0, Y0, X1, Y1. What you’re actually doing is defining two points (like a line) and then drawing a circle using that line. The extent option takes values from 1 to 360, representing the 360 degrees of a circle.
The start option determines where to start the arc from (in terms of degrees).

from tkinter import *

root = Tk()
frame=Frame(root,width=300,height=300)
frame.pack(expand = True, fill=BOTH)
canvas = Canvas(frame,bg='white', width = 300,height = 300)

coordinates = 20, 50, 210, 230
arc = canvas.create_arc(coordinates, start=0, extent=250, fill="blue")
arc = canvas.create_arc(coordinates, start=250, extent=50, fill="red")
arc = canvas.create_arc(coordinates, start=300, extent=60, fill="yellow")

canvas.pack(expand = True, fill = BOTH)
root.mainloop()

Since all the arcs have the same origin, we give them the same co-ordinates. Another thing to note is that the arc extends counter clockwise.

Canvas with Lines
The create_line() function is pretty simple. It takes a set of coordinates for two points in the format X0, Y0, X1, Y1 and draws a line between them.

from tkinter import *

root = Tk()
frame=Frame(root,width=300,height=300)
frame.pack(expand = True, fill=BOTH)
canvas = Canvas(frame,bg='white', width = 300,height = 300)

coordinates = 50, 50, 250, 250
arc = canvas.create_line(coordinates, fill="blue")
coordinates = 250, 50, 50, 250
arc = canvas.create_line(coordinates, fill="red")

canvas.pack(expand = True, fill = BOTH)
root.mainloop()

Canvas with Image
Using the PhotoImage class you can import photos and turn them into a format compatible with other libraries such as Tkinter. The syntax is pretty simple: you simply pass the filepath to the option “file”. You can then pass this file object into the canvas.create_image() function’s image option. The two numbers you see, 150 and 150, represent the X and Y location of the origin of the image. Since we set the origin to the center of the canvas, the image shows up in the center.
from tkinter import *

root = Tk()
frame=Frame(root,width=300,height=300)
frame.pack(expand = True, fill=BOTH)
canvas = Canvas(frame,bg='white', width = 300,height = 300)

file = PhotoImage(file = "download.png")
image = canvas.create_image(150, 150, image=file)

canvas.pack(expand = True, fill = BOTH)
root.mainloop()

Canvas with Scrollbar
You can also use Canvas with another Tkinter widget called Scrollbar. To learn more about scrolling in Canvases, follow the link to the Scrollbar widget.

Video Code
The Code from our Video on the Tkinter Canvas Widget on our YouTube Channel for CodersLegacy. (You will need an image called “castle.png” for this to work.)

import tkinter as tk

class Window:
    def __init__(self, master):
        self.master = master
        self.frame = tk.Frame(self.master)
        self.frame.pack()
        self.scrollbary = tk.Scrollbar(self.frame, orient = tk.VERTICAL)
        self.scrollbary.pack(side = tk.RIGHT, fill = tk.Y)
        self.scrollbarx = tk.Scrollbar(self.frame, orient = tk.HORIZONTAL)
        self.scrollbarx.pack(side = tk.BOTTOM, fill = tk.X)
        self.canvas = tk.Canvas(self.frame, width = 300, height = 300, bg = "white",
                                scrollregion = (0, 0, 500, 500),
                                yscrollcommand = self.scrollbary.set,
                                xscrollcommand = self.scrollbarx.set)
        self.canvas.pack()
        self.scrollbary.config(command = self.canvas.yview)
        self.scrollbarx.config(command = self.canvas.xview)
        img = tk.PhotoImage(file = "castle.png")
        self.master.img = img
        self.canvas.create_image(200, 200, image = img)

root = tk.Tk()
window = Window(root)
root.mainloop()

This marks the end of the Python Tkinter Canvas Widget article. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the material in the article can be asked in the comments section below. Head back to the main Tkinter Article to learn about the other great widgets!
https://coderslegacy.com/python/tkinter-canvas/
CC-MAIN-2022-40
refinedweb
882
51.55
Figure 1 - Complex Schedule

Introduction
Let me present you with the following problem. Let's say I have a company of 32 consultants. Suppose I have 5 consulting projects over a 3 month period and I want to rotate each of my consultants through all these projects in such a way so that they all end up with equal pay at the end of the project. One project pays $500/day, one project pays $400/day, one project pays $300/day, one pays $200/day and finally one pays $100/day. To simplify the problem, one consultant can work two projects the same day and earn the sum of the two projects, but there are only 5 slots you can occupy in a day, one for each project. In other words, on any given day, you can have at the most 5 consultants working the 5 projects. How would I go about solving this problem? Well, I had several thoughts on it at first. You could somehow round robin your consultants: choose the first 5, then the next 5, then the next 5, and then have them rotate their order. This doesn't work out very easily though, because the number of consultants does not divide easily into the 3 month period. You could just randomly assign them to all the slots and see if it works out, then just keep switching them around until it evens out. This particular solution is not as easy to resolve as it seems. So what is the easiest solution? Genetic Algorithms. Let the computer do the work through trial and error. Each time the GA will try to fill a schedule, give a fitness to the result, and the highest fitness survives. (Ain't GA's grand!) The beauty about genetic algorithms is that you never really care how the GA gets to the answer, you only care about how accurately you represent the fitness of the solution contained in each Genome. Although this solution can take a long arduous time to complete, it works. (Imagine when quantum computing comes into play, then the long arduous part goes away!)
In this article we will use a genetic algorithm called PBIL (Population based Learning) to converge on the solution. Population based learning maintains a learning vector and converges on a solution using this vector in conjunction with the fitness of the genomes.

Genome Representation
As is true with every genetic algorithm, the trick is to figure out how the genome can be a representation of a possible solution. In the case of our problem, we want the genome to represent the schedule for the entire 3 months of our 5 projects. Each gene in the genome is a consultant in a particular slot in the schedule. Since we conveniently have 32 consultants, we can represent each consultant as a 5 digit binary number from 00000b - 11111b. The schedule has 5 slots over a 3 month period. There are 5 working days in a week and 4 weeks in a month, so the calculation of the number of slots is:

(3 months) * (20 days/month) * (5 slots) = 300 slots

The calculation of the number of bits in our genome is:

(5 bits/consultant) * 300 slots = 1500 bits

Now we have a representation in the genome of the entire schedule containing a consultant for each slot of the schedule over a 3 month period. For example, here is the start of a possible genome:

11000 | 00100 | 00000 | 10001 | 00111 | ....

The string above represents the first 5 consultants in the first 5 slots of the 300 slot schedule. In other words, these are all the consultants for a possible genome on day 1. If we translate to base 10, we see that this comes out to:

24, 4, 0, 17, 7

If we assigned each consultant an ID from 0-31, then the consultant with ID 24 would be filling slot #1 on day 1, consultant 4 fills slot #2, consultant 0 fills slot #3, consultant 17 fills slot #4 and consultant 7 fills slot #5. The part of the genetic algorithm that constructs the genome will continue to fill the slots all the way up until the end of the month to represent one genome, or an entire schedule of consultants for the 3 month period.
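The 5-bit decoding just described is easy to sketch in Python (the helper name is mine, not from the article):

```python
# Read a genome bit-string in 5-bit chunks; each chunk is the ID (0-31)
# of the consultant filling the next slot in the 300-slot schedule.
def decode_genome(bits, chunk=5):
    return [int(bits[i:i + chunk], 2) for i in range(0, len(bits), chunk)]

# The sample genome prefix above: the five slots of day 1.
day1 = decode_genome("11000" "00100" "00000" "10001" "00111")
print(day1)  # [24, 4, 0, 17, 7]
```

A full 1500-bit genome decodes the same way into a flat list of 300 consultant IDs, one per slot.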
Since we want them all to get paid equally, we need to come up with a fitness function that will find a genome that balances the pay of all consultants for the 3 month period.

Fitness Function
The easiest way to figure out the fitness for the problem is to ask ourselves "What is it exactly we are trying to accomplish?". What we want in this problem is for all consultants to get paid equally over the 3 month period. That means that if we add up the total billing for the period of 3 months for a consultant and compare it to another consultant, the billing should be close to the same (give or take a few hundred bucks). How can we do this comparison? One way to do the comparison is to first add up the amount billed by each individual consultant in a particular genome. Then take the average billing for all consultants represented in the genome. Then sum the standard deviations (the amount each consultant's total billing varies from the average). The larger the sum of the standard deviations, the worse the fitness. Therefore we should take the reciprocal of the calculated sum in order to make a higher deviation a worse fitness. Using this method, as the fitness of the genes gets higher and higher as we run our algorithm, the consultants' total billing over the 3 month period should begin to equalize.

The Code for Fitness
Following the code for the fitness function, it performs just as we explained in the section above. First the function loops through the 1500 bit genome and calculates the total amount each consultant makes in the 3 month project period. Next it computes the average billing among all of the consultants. Then it computes the standard deviation for each consultant and sums all of the standard deviations together. Finally it computes the fitness by taking the reciprocal value of the sum of the standard deviations. For very large standard deviation sums, our fitness will be close to zero.
If we have a perfect fitness (all consultants getting paid the same exact amount) then our fitness should be close to infinity.

Listing 1 - Fitness function for Computing the Optimal Billing Schedule

int slotCount = 0; // used to keep track of the current slot # in the genome
// go through all slots in the month and calculate billing for each consultant
for (int i = 0; i < LENGTH; i += NUMBER_SIZE)
{
    // convert the binary part of the genome to a 5 bit integer
    int nextConsultant = FormNumber(i, NUMBER_SIZE);
    billing[nextConsultant] += _slotValues[slotCount];
    // keep a running total of the slots
    slotCount = (slotCount + 1) % NUMBER_OF_SLOTS;
}

// Now the fitness is based on how close the billing is for all consultants
// so we need to go through each consultant's billing,
// and compute a standard deviation
// the smaller the sum of the deviations, the higher the fitness

// compute the average billing
float totalBilling = 0.0f;
float averageBilling = 0.0f;
foreach (float nextBilling in billing)
{
    totalBilling += nextBilling;
}
averageBilling = totalBilling / billing.Length;

// sum the standard deviations for each consultant
foreach (float nextBilling in billing)
{
    float sqrtOfDeviation = (nextBilling - averageBilling);
    standardDevSum += (sqrtOfDeviation * sqrtOfDeviation);
}

// the higher the sum of the deviations, the worse the fitness
// take the reciprocal (add a small value to prevent a divide by 0
// exception)
fitness += 1.0f / (standardDevSum + .0001f);
return (double)fitness;
}

Results
After running the PBIL algorithm for 1000 generations with our fitness function we get the following results as shown in Figure 2. As we can see, the total billing is beginning to equalize. All total billing for the 3 months are at least $1600 and at most $4400 for each of our 32 consultants.
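The scoring idea in Listing 1 — total each consultant's billing, then take the reciprocal of the summed squared deviations from the average — can be paraphrased in a few lines of Python (a sketch of the same logic, not the article's C#):

```python
# The more even the per-consultant billing totals, the smaller the squared
# deviations from the mean, and the higher the resulting fitness.
def fitness(billing):
    avg = sum(billing) / len(billing)
    dev_sum = sum((b - avg) ** 2 for b in billing)
    return 1.0 / (dev_sum + 0.0001)   # small epsilon avoids divide-by-zero

print(fitness([2800, 2800, 2800]) > fitness([1600, 2800, 4400]))  # True
```

An evenly paid schedule scores enormously higher than an uneven one, which is exactly the gradient the PBIL search climbs.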
Figure 2 - Generation 1000 of the PBIL algorithm

By the 14000th generation (Figure 3) the total billing is even more equalized, with the lowest paid consultant being paid a total of $2200 and the highest paid consultant being paid a total of $3400.

Figure 3 - Generation 14000 of the PBIL Algorithm

Finally at around generation 18000 (Figure 4), the total billing has pretty much equalized with everyone being paid around $2800. (There are a few lucky consultants who managed to squeak out an extra $100). Note that Figure 4 also shows the scheduling for all of the consultants. Consultants 6, 4, 31, 25, and 27 fill the first 5 slots on day 1. Consultants 14, 22, 4, 11, and 25 fill the next 5 slots on day 2, and so on. Note, the fitness function doesn't take into account that a consultant can work 2 slots on the same day or any distribution of time for each consultant. For example, on day 3, consultant 3 is working two slots. To solve this problem, you probably can easily add something to the program that swaps two consultants in the same slot, if one of them is working two different slots on the same day after the PBIL algorithm has finished converging.

Figure 4 - Generation 18000 of the PBIL Algorithm (Convergence)

Conclusion
In the past we have examined how to use genetic algorithms to search for solutions to problems such as creating sudoku puzzles, electronic logic design, and playing mastermind. In this article we discuss how to solve a scheduling problem with genetic algorithms that might normally be very difficult to solve otherwise. Perhaps you can use this solution to administer some of your own difficult day-to-day activities. Anyway, don't leave your consultants hanging in an unequal billing cycle, get them organized using C# and .NET.
http://www.c-sharpcorner.com/article/using-a-genetic-algorithm-to-do-consultant-scheduling-in-C-Sharp/
CC-MAIN-2017-22
refinedweb
1,636
57.5
i tried to make a executable jar file.a program as follows import java.io.*; class studentmarks { String name; double engmarks,phymarks,chemmarks; double tot,avg; public studentmarks(String s,double p,double e,double c) { name=s; engmarks=e; phymarks=p; chemmarks=c; } void compute() { tot=engmarks+phymarks+chemmarks; avg=tot/3; } void display() { System.out.println("nameis:"+name); System.out.println("totalis:"+tot); } } i saved it as marks.java file then i compiled it into a class using command prompt then i made a manifest file using command prompt main:marks then i made a executable jar file using command prompt as follows C:\mywork> jar cvfm marks.jar manifest.txt *.class when i click on the jar file it does not open.i know something has terribly gone wrong please can someone help me.i am just a begginer. please.please thank you
https://www.daniweb.com/programming/software-development/threads/383763/jar-file-not-opening
CC-MAIN-2019-04
refinedweb
146
50.43
In the preceding chapter, I discussed some built-in object types. But I have not yet explained object types themselves. As I mentioned in Chapter 1, Swift object types come in three flavors: enum, struct, and class. What are the differences between them? And how would you create your own object type? That’s what this chapter is about. I’ll describe object types in general, and then each of the three flavors. Then I’ll explain three Swift ways of giving an object type greater flexibility: protocols, generics, and extensions. Finally, the survey of Swift’s built-in types will conclude with three umbrella types and three collection types. Object types are declared with the flavor of the object type (enum, struct, or class), the name of the object type (which should start with a capital letter), and curly braces:

class Manny {
}
struct Moe {
}
enum Jack {
}

An object type declaration can appear anywhere: at the top level of a file, at the top level of another object type declaration, or in the body of a function. The visibility (scope), and hence the usability, of this object type by other code depends upon where it appears (see Chapter 1). Declarations for any object type may contain within their curly braces the following things: A variable declared at the top level of an object type declaration is a property. By default, it is an instance property. An instance property is scoped to an instance: it is accessed through a particular instance of this type, and its value can be different for every instance of this type. Alternatively, a property can be a static/class property. For an enum or struct, it is declared with the keyword static; for a class, it may instead be declared with the keyword class. Such a property belongs to the object type itself: it is accessed through the type, and it has just one value, associated with the type. A function declared at the top level of an object type declaration is a method.
By default, it is an instance method: it is called by sending a message to a particular instance of this type. Inside an instance method, self is the instance. Alternatively, a function can be a static/class method. For an enum or struct, it is declared with the keyword static; for a class, it may be declared instead with the keyword class. It is called by sending a message to the type. Inside a static/class method, self is the type. An initializer is a function called in order to bring an instance of an object type into existence. Strictly speaking, it is a static/class method, because it is called by talking to the object type. It is usually called using special syntax: the name of the type is followed directly by parentheses, as if the type itself were a function. When an initializer is called, a new instance is created and returned as a result. You will usually do something with the returned instance, such as assigning it to a variable, in order to preserve it and work with it in subsequent code. For example, suppose we have a Dog class:

class Dog {
}

Then we can make a Dog instance like this:

Dog()

That code, however, though legal, is silly — so silly that it warrants a warning from the compiler. We have created a Dog instance, but there is no reference to that instance. Without such a reference, the Dog instance comes into existence and then immediately vanishes in a puff of smoke. The usual sort of thing is more like this:

let fido = Dog()

Now our Dog instance will persist as long as the variable fido persists (see Chapter 3) — and the variable fido gives us a reference to our Dog instance, so that we can use it. Observe that Dog() calls an initializer even though our Dog class doesn’t declare any initializers! The reason is that object types may have implicit initializers. These are a convenience that save you from the trouble of writing your own initializers. But you can write your own initializers, and you will often do so.
An initializer is a kind of function, and its declaration syntax is rather like that of a function. To declare an initializer, you use the keyword init followed by a parameter list, followed by curly braces containing the code. An object type can have multiple initializers, distinguished by their parameters. The parameter names, including the first parameter, are externalized by default (though of course you can prevent this by putting an underscore before a parameter name). A frequent use of the parameters is to set the values of instance properties. For example, here’s a Dog class with two instance properties, name (a String) and license (an Int). We give these instance properties default values that are effectively placeholders — an empty string and the number zero. Then we declare three initializers, so that the caller can create a Dog instance in three different ways: by supplying a name, by supplying a license number, or by supplying both. In each initializer, the parameters supplied are used to set the values of the corresponding properties:

class Dog {
    var name = ""
    var license = 0
    init(name:String) {
        self.name = name
    }
    init(license:Int) {
        self.license = license
    }
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
}

Observe that in that code, in each initializer, I’ve given each parameter the same name as the property to which it corresponds. There’s no reason to do that apart from stylistic clarity. In the code for each initializer, I can distinguish the parameter from the property by using self to access the property. The result of that declaration is that I can create a Dog in three different ways:

let fido = Dog(name:"Fido")
let rover = Dog(license:1234)
let spot = Dog(name:"Spot", license:1357)

What I can’t do is to create a Dog with no initializer parameters. I wrote initializers, so my implicit initializer went away.
This code is no longer legal:

let puff = Dog() // compile error

Of course, I could make that code legal by explicitly declaring an initializer with no parameters:

class Dog {
    var name = ""
    var license = 0
    init() {
    }
    init(name:String) {
        self.name = name
    }
    init(license:Int) {
        self.license = license
    }
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
}

Now, the truth is that we don’t need those four initializers, because an initializer is a function, and a function’s parameters can have default values. Thus, I can condense all that code into a single initializer, like this:

class Dog {
    var name = ""
    var license = 0
    init(name:String = "", license:Int = 0) {
        self.name = name
        self.license = license
    }
}

I can still make an actual Dog instance in four different ways:

let fido = Dog(name:"Fido")
let rover = Dog(license:1234)
let spot = Dog(name:"Spot", license:1357)
let puff = Dog()

Now comes the really interesting part. In my property declarations, I can eliminate the assignment of default initial values (as long as I declare explicitly the type of each property):

class Dog {
    var name : String // no default value!
    var license : Int // no default value!
    init(name:String = "", license:Int = 0) {
        self.name = name
        self.license = license
    }
}

That code is legal (and common) — because an initializer initializes! In other words, I don’t have to give my properties initial values in their declarations, provided I give them initial values in all initializers. That way, I am guaranteed that all my instance properties have values when the instance comes into existence, which is what matters. Conversely, an instance property without an initial value when the instance comes into existence is illegal. A property must be initialized either as part of its declaration or by every initializer, and the compiler will stop you otherwise. The Swift compiler’s insistence that all instance properties be properly initialized is a valuable feature of Swift.
(Contrast Objective-C, where instance properties can go uninitialized — and often do, leading to mysterious errors later.) Don’t fight the compiler; work with it. The compiler will help you by giving you an error message (“Return from initializer without initializing all stored properties”) until all your initializers initialize all your instance properties:

class Dog {
    var name : String
    var license : Int
    init(name:String = "") {
        self.name = name // compile error
    }
}

Because setting an instance property in an initializer counts as initialization, it is legal even if the instance property is a constant declared with let:

class Dog {
    let name : String
    let license : Int
    init(name:String = "", license:Int = 0) {
        self.name = name
        self.license = license
    }
}

In our artificial examples, we have been very generous with our initializer: we are letting the caller instantiate a Dog without supplying a name argument or a license argument. Usually, however, the purpose of an initializer is just the opposite: we want to force the caller to supply all needed information at instantiation time. Thus, in real life, it is much more likely that our Dog class would look like this:

class Dog {
    let name : String
    let license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
}

In that code, our Dog has a name and a license, and values for these must be supplied at instantiation time (there are no default values), and those values can never be changed thereafter (these properties are constants). In this way, we enforce a rule that every Dog must have a meaningful name and license. There is now only one way to make a Dog:

let spot = Dog(name:"Spot", license:1357)

Sometimes, there is no meaningful default value that can be assigned to an instance property during initialization. For example, perhaps the initial value of this property will not be obtained until some time has elapsed after this instance has come into existence.
This situation conflicts with the requirement that all instance properties be initialized either in their declaration or through an initializer. You could, of course, just circumvent the problem by assigning a default initial value anyway; but this fails to communicate to your own code the fact that this isn’t a “real” value. A sensible and common solution, as I explained in Chapter 3, is to declare your instance property as a var having an Optional type. An Optional has a value, namely nil, signifying that no “real” value has been supplied; and an Optional var is initialized to nil automatically. Thus, your code can test this instance property against nil and, if it is nil, it won’t use the property. Later, the property will be given its “real” value. Of course, that value is now wrapped in an Optional; but if you declare this property as an implicitly unwrapped Optional, you have the additional advantage of being able to use the wrapped value directly, without explicitly unwrapping it — as if this weren’t an Optional at all — once you’re sure it is safe to do so: // this property will be set automatically when the nib loads @IBOutlet var myButton: UIButton! // this property will be set after time-consuming gathering of data var albums : [MPMediaItemCollection]! Except in order to set an instance property, an initializer may not refer to self, explicitly or implicitly, until all instance properties have been initialized. This rule guarantees that the instance is fully formed before it is used. This code, for example, is illegal: struct Cat { var name : String var license : Int init(name:String, license:Int) { self.name = name meow() // too soon - compile error self.license = license } func meow() { print("meow") } } The call to the instance method meow is implicitly a reference to self — it means self.meow(). The initializer can say that, but not until it has fulfilled its primary contract of initializing all uninitialized properties. 
The call to the instance method meow simply needs to be moved down one line, so that it comes after both name and license have been initialized. Initializers within an object type can call one another by using the syntax self.init(...). An initializer that calls another initializer is called a delegating initializer. When an initializer delegates, the other initializer — the one that it delegates to — must completely initialize the instance first, and then the delegating initializer can work with the fully initialized instance, possibly setting again a var property that was already set by the initializer that it delegated to. A delegating initializer appears to be an exception to the rule against saying self too early. But it isn’t, because it is saying self in order to delegate — and delegating will cause all instance properties to be initialized. In fact, the rules about a delegating initializer saying self are even more stringent: a delegating initializer cannot refer to self, not even to set a property, until after the call to the other initializer. For example: struct Digit { var number : Int var meaningOfLife : Bool init(number:Int) { self.number = number self.meaningOfLife = false } init() { // this is a delegating initializer self.init(number:42) self.meaningOfLife = true } } Moreover, a delegating initializer cannot set an immutable property (a let variable) at all. That is because it cannot refer to the property until after it has called the other initializer, and at that point the instance is fully formed — initialization proper is over, and the door for initialization of immutable properties has closed. Thus, the preceding code would be illegal if meaningOfLife were declared with let, because the second initializer is a delegating initializer and cannot set an immutable property. Be careful not to delegate recursively! 
If you tell an initializer to delegate to itself, or if you create a vicious circle of delegating initializers, the compiler won’t stop you (I regard that as a bug), but your running app will hang. For example, don’t say this: struct Digit { // do not do this! var number : Int = 100 init(value:Int) { self.init(number:value) } init(number:Int) { self.init(value:number) } } An initializer can return an Optional wrapping the new instance. In this way, nil can be returned to signal failure. An initializer that behaves this way is a failable initializer. To mark an initializer as failable when declaring it, put a question mark (or, for an implicitly unwrapped Optional, an exclamation mark) after the keyword init. If your failable initializer needs to return nil, explicitly write return nil. It is up to the caller to test the resulting Optional for equivalence with nil, unwrap it, and so forth, as with any Optional. Here’s a version of Dog with an initializer that returns an implicitly unwrapped Optional, returning nil if the name: or license: arguments are invalid: class Dog { let name : String let license : Int init!(name:String, license:Int) { self.name = name self.license = license if name.isEmpty { return nil } if license <= 0 { return nil } } } The resulting value is typed as Dog! — the Optional is implicitly unwrapped — so the caller who instantiates a Dog in this way can use the result directly as if it were simply a Dog instance. But if nil was returned, any attempt on the caller’s part to access members of the Dog instance will result in a crash at runtime: let fido = Dog(name:"", license:0) let name = fido.name // crash Cocoa and Objective-C conventionally return nil from initializers to signal failure; the API for such initializers has been hand-tweaked as a Swift failable initializer if initialization really might fail. For example, the UIImage initializer init?(named:) is a failable initializer, because there might be no image with the given name. 
It is not implicitly unwrapped, so the resulting value is a UIImage? and must be unwrapped before you can use it. (Most Objective-C initializers, however, are not bridged as failable initializers, even though in theory any Objective-C initializer might return nil.) A property is a variable — one that happens to be declared at the top level of an object type declaration. This means that everything said about variables in Chapter 3 applies. A property has a fixed type; it can be declared with var or let; it can be stored or computed; it can have setter observers. An instance property can also be declared lazy. A stored instance property must be given an initial value. But, as I explained a moment ago, this doesn’t have to be through assignment in the declaration; it can be through an initializer instead. Setter observers are not called during initialization of properties. Code that initializes a property cannot fetch an instance property or call an instance method. Such behavior would require a reference, explicit or implicit, to self; and during initialization, there is no self yet — self is exactly what we are in the process of initializing. Making this mistake can result in some of Swift’s most perplexing compile error messages. For example, this is illegal (and removing the explicit references to self doesn’t make it legal): class Moi { let first = "Matt" let last = "Neuburg" let whole = self.first + " " + self.last // compile error } One solution in that situation would be to make whole a computed property: class Moi { let first = "Matt" let last = "Neuburg" var whole : String { return self.first + " " + self.last } } That’s legal because the computation won’t actually be performed until after self exists. Another solution is to declare whole as lazy: class Moi { let first = "Matt" let last = "Neuburg" lazy var whole : String = self.first + " " + self.last } Again, that’s legal because the reference to self won’t be performed until after self exists. 
Similarly, a property initializer can’t call an instance method, but a computed property can, and so can a lazy property. As I demonstrated in Chapter 3, a variable’s initializer can consist of multiple lines of code if you write it as a define-and-call anonymous function. If this variable is an instance property, and if that code is to refer to other instance properties or instance methods, the variable must be declared lazy: class Moi { let first = "Matt" let last = "Neuburg" lazy var whole : String = { var s = self.first s.appendContentsOf(" ") s.appendContentsOf(self.last) return s }() } If a property is an instance property (the default), it can be accessed only through an instance, and its value is separate for each instance. For example, let’s start once again with a Dog class: class Dog { let name : String let license : Int init(name:String, license:Int) { self.name = name self.license = license } } Our Dog class has a name instance property. Then we can make two different Dog instances with two different name values, and we can access each Dog instance’s name through the instance: let fido = Dog(name:"Fido", license:1234) let spot = Dog(name:"Spot", license:1357) let aName = fido.name // "Fido" let anotherName = spot.name // "Spot" A static/class property, on the other hand, is accessed through the type, and is scoped to the type, which usually means that it is global and unique. I’ll use a struct as an example: struct Greeting { static let friendly = "hello there" static let hostile = "go away" } Now code elsewhere can fetch the values of Greeting.friendly and Greeting.hostile. That example is neither artificial nor trivial; immutable static/class properties are a convenient and effective way to supply your code with nicely namespaced constants. 
Unlike instance properties, static properties can be initialized with reference to one another; the reason is that static property initializers are lazy (see Chapter 3): struct Greeting { static let friendly = "hello there" static let hostile = "go away" static let ambivalent = friendly + " but " + hostile } Notice the lack of self in that code. In static/class code, self means the type itself. I like to use self explicitly wherever it would be implicit, but here I can’t use it without arousing the ire of the compiler (I regard this as a bug). To clarify the status of the terms friendly and hostile, I can use the name of the type, just as any other code would do: struct Greeting { static let friendly = "hello there" static let hostile = "go away" static let ambivalent = Greeting.friendly + " but " + Greeting.hostile } On the other hand, if I write ambivalent as a computed property, I can use self: struct Greeting { static let friendly = "hello there" static let hostile = "go away" static var ambivalent : String { return self.friendly + " but " + self.hostile } } On the other other hand, I’m not allowed to use self when the initial value is set by a define-and-call anonymous function (again, I regard this as a bug): struct Greeting { static let friendly = "hello there" static let hostile = "go away" static var ambivalent : String = { return self.friendly + " but " + self.hostile // compile error }() } A method is a function — one that happens to be declared at the top level of an object type declaration. This means that everything said about functions in Chapter 2 applies. By default, a method is an instance method. This means that it can be accessed only through an instance. Within the body of an instance method, self is the instance. 
To illustrate, let’s continue to develop our Dog class: class Dog { let name : String let license : Int let whatDogsSay = "Woof" init(name:String, license:Int) { self.name = name self.license = license } func bark() { print(self.whatDogsSay) } func speak() { self.bark() print("I'm \(self.name)") } } Now I can make a Dog instance and tell it to speak: let fido = Dog(name:"Fido", license:1234) fido.speak() // Woof I'm Fido In my Dog class, the speak method calls the instance method bark by way of self, and obtains the value of the instance property name by way of self; and the bark instance method obtains the value of the instance property whatDogsSay by way of self. This is because instance code can use self to refer to this instance. Such code can omit self if the reference is unambiguous; thus, for example, I could have written this: func speak() { bark() print("I'm \(name)") } But I never write code like that (except by accident). Omitting self, in my view, makes the code harder to read and maintain; the loose terms bark and name seem mysterious and confusing. Moreover, sometimes self cannot be omitted. For example, in my implementation of init(name:license:), I must use self to disambiguate between the parameter name and the property self.name. A static/class method is accessed through the type, and self means the type. I’ll use our Greeting struct as an example: struct Greeting { static let friendly = "hello there" static let hostile = "go away" static var ambivalent : String { return self.friendly + " but " + self.hostile } static func beFriendly() { print(self.friendly) } } And here’s how to call the static beFriendly method: Greeting.beFriendly() // hello there There is a kind of conceptual wall between static/class members, on the one hand, and instance members on the other; even though they may be declared within the same object type declaration, they inhabit different worlds. 
A static/class method can’t refer to “the instance” because there is no instance; thus, a static/class method cannot directly refer to any instance properties or call any instance methods. An instance method, on the other hand, can refer to the type by name, and can thus access static/class properties and can call static/class methods. (I’ll talk later in this chapter about another way in which an instance method can refer to the type.) For example, let’s return to our Dog class and grapple with the question of what dogs say. Presume that all dogs say the same thing. We’d prefer, therefore, to express whatDogsSay not at instance level but at class level. This would be a good use of a static property. Here’s a simplified Dog class that illustrates: class Dog { static var whatDogsSay = "Woof" func bark() { print(Dog.whatDogsSay) } } Now we can make a Dog instance and tell it to bark: let fido = Dog() fido.bark() // Woof A subscript is an instance method that is called in a special way — by appending square brackets to an instance reference. The square brackets can contain arguments to be passed to the subscript method. You can use this feature for whatever you like, but it is suitable particularly for situations where this is an object type with elements that can be appropriately accessed by key or by index number. I have already described (in Chapter 3) the use of this syntax with strings, and it is familiar also from dictionaries and arrays; you can use square brackets with strings and dictionaries and arrays exactly because Swift’s String and Dictionary and Array types declare subscript methods. The syntax for declaring a subscript method is somewhat like a function declaration and somewhat like a computed property declaration. That’s no coincidence! A subscript is like a function in that it can take parameters: arguments can appear in the square brackets when a subscript method is called. 
A subscript is like a computed property in that the call is used like a reference to a property: you can fetch its value or you can assign into it. To illustrate, I’ll write a struct that treats an integer as if it were a string, returning a digit that can be specified by an index number in square brackets; for simplicity, I’m deliberately omitting any sort of error-checking: struct Digit { var number : Int init(_ n:Int) { self.number = n } subscript(ix:Int) -> Int { get {get { let s = String(self.number) return Int(String(s[s.startIndex.advancedBy(ix)]))! } } }let s = String(self.number) return Int(String(s[s.startIndex.advancedBy(ix)]))! } } } After the keyword subscript we have a parameter list stating what parameters are to appear inside the square brackets; by default, their names are not externalized. Then, after the arrow operator, we have the type of value that is passed out (when the getter is called) or in (when the setter is called); this is parallel to the type declared for a computed property, even though the syntax with the arrow operator is like the syntax for the returned value in a function declaration. Finally, we have curly braces whose contents are exactly like those of a computed property. You can have get and curly braces for the getter, and set and curly braces for the setter. If there’s a getter and no setter, the word get and its curly braces can be omitted. The setter receives the new value as newValue, but you can change that name by supplying a different name in parentheses after the word set. 
Here’s an example of calling the getter; the instance with appended square brackets containing the arguments is used just as if you were getting a property value: var d = Digit(1234) let aDigit = d[1] // 2 Now I’ll expand my Digit struct so that its subscript method includes a setter (and again I’ll omit error-checking): struct Digit { var number : Int init(_ n:Int) { self.number = n } subscript(ix:Int) -> Int { get { let s = String(self.number) return Int(String(s[s.startIndex.advancedBy(ix)]))! } set { var s = String(self.number) let i = s.startIndex.advancedBy(ix) s.replaceRange(i...i, with: String(newValue)) self.number = Int(s)! } } } And here’s an example of calling the setter; the instance with appended square brackets containing the arguments is used just as if you were setting a property value: var d = Digit(1234) d[0] = 2 // now d.number is 2234 An object type can declare multiple subscript methods, provided their signatures distinguish them as different functions. An object type may be declared inside an object type declaration, forming a nested type: class Dog { struct Noise { static var noise = "Woof" } func bark() { print(Dog.Noise.noise) } } A nested object type is no different from any other object type, but the rules for referring to it from the outside are changed; the surrounding object type acts as a namespace, and must be referred to explicitly in order to access the nested object type: Dog.Noise.noise = "Arf" The Noise struct is thus namespaced inside the Dog class. This namespacing provides clarity: the name Noise does not float free, but is explicitly associated with the Dog class to which it belongs. Namespacing also allows more than one Noise struct to exist, without any clash of names. Swift built-in object types often take advantage of namespacing; for example, the String struct is one of several structs that contain an Index struct, with no clash of names. 
(It is also possible, through Swift’s privacy rules, to hide a nested object type, in such a way that it cannot be referenced from the outside at all. This is useful for organization and encapsulation when one object type needs a second object type as a helper, but no other object type needs to know about the second object type. Privacy is discussed in Chapter 5.) On the whole, the names of object types will be global, and you will be able to refer to them simply by using their names. Instances, however, are another story. Instances must be deliberately created, one by one. That is what instantiation is for. Once you have created an instance, you can cause that instance to persist, by storing the instance in a variable with sufficient lifetime; using that variable as a reference, you can send instance messages to that instance, accessing instance properties and calling instance methods. Direct instantiation of an object type is the act of creating a brand new instance of that type, directly, yourself. It involves you calling an initializer. In many cases, though, some other object will create or provide the instance for you. A simple example is what happens when you manipulate a String, like this: let s = "Hello, world" let s2 = s.uppercaseString In that code, we end up with two String instances. The first one, s, we created using a string literal. The second one, s2, was created for us when we accessed the first string’s uppercaseString property. Thus we have two instances, and they will persist independently as long as our references to them persist; but we didn’t get either of them by calling an initializer. In other cases, the instance you are interested in will already exist in some persistent fashion; the problem will then be to find a way of getting a reference to that instance. Let’s say, for example, that this is a real-life iOS app. You will certainly have a root view controller, which will be an instance of some type of UIViewController. 
Let’s say it’s an instance of the ViewController class. Once your app is up and running, this instance already exists. It would then be utterly counterproductive to attempt to speak to the root view controller by instantiating the ViewController class: let theVC = ViewController() All that code does is to make a second, different instance of the ViewController class, and your messages to that instance will be wasted, as it is not the particular already existing instance that you wanted to talk to. That is a very common beginner mistake; don’t make it. Getting a reference to an already existing instance can be, of itself, an interesting problem. Instantiation is definitely not how to do it. But how do you do it? Well, it depends. In this particular situation, the goal is to obtain, from any code, a reference to your app’s root view controller instance. I’ll describe, just for the sake of the example, how you would do it. Getting a reference always starts with something you already have a reference to. Often, this will be a class. In iOS programming, the app itself is an instance, and there is a class that holds a reference to that instance and will hand it to you whenever you ask for it. That class is the UIApplication class, and the way to get a reference to the app instance is to call its sharedApplication class method: let app = UIApplication.sharedApplication() Now we have a reference to the application instance. The application instance has a keyWindow property: let window = app.keyWindow Now we have a reference to our app’s key window. That window owns the root view controller, and will hand us a reference to it, as its own rootViewController property; the app’s keyWindow is an Optional, so to get at its rootViewController we must unwrap the Optional: let vc = window?.rootViewController And voilà, we have a reference to our app’s root view controller. 
To obtain the reference to this persistent instance, we created, in effect, a chain of method calls and properties leading from the known to the unknown, from a globally available class to the particular desired instance: let app = UIApplication.sharedApplication() let window = app.keyWindow let vc = window?.rootViewController Clearly, we can write that chain as an actual chain, using repeated dot-notation: let vc = UIApplication.sharedApplication().keyWindow?.rootViewController You don’t have to chain your instance messages into a single line — chaining through multiple let assignments is completely efficient, possibly more legible, and certainly easier to debug — but it’s a handy formulaic convenience and is particularly characteristic of dot-notated object-oriented languages like Swift. The general problem of getting a reference to a particular already existing instance is so interesting and pervasive that I will devote much of Chapter 13 to it. An enum is an object type whose instances represent distinct predefined alternative values. Think of it as a list of known possibilities. An enum is the Swift way to express a set of constants that are alternatives to one another. An enum declaration includes case statements. Each case is the name of one of the alternatives. An instance of an enum will represent exactly one alternative — one case. For example, in my Albumen app, different instances of the same view controller can list any of four different sorts of music library contents: albums, playlists, podcasts, or audiobooks. The view controller’s behavior is slightly different in each case. So I need a sort of four-way switch that I can set once when the view controller is instantiated, saying which sort of contents this view controller is to display. That sounds like an enum! 
Here’s the basic declaration for that enum; I call it Filter, because each case represents a different way of filtering the contents of the music library: enum Filter { case Albums case Playlists case Podcasts case Books } That enum doesn’t have an initializer. You can write an initializer for an enum, as I’ll demonstrate in a moment; but there is a default mode of initialization that you’ll probably use most of the time — the name of the enum followed by dot-notation and one of the cases. For example, here’s how to make an instance of Filter representing the Albums case: let type = Filter.Albums As a shortcut, if the type is known in advance, you can omit the name of the enum; the bare case must still be preceded by a dot. For example: let type : Filter = .Albums You can’t say .Albums just anywhere out of the blue, because Swift doesn’t know what enum it belongs to. But in that code, the variable is explicitly declared as a Filter, so Swift knows what .Albums means. A similar thing happens when passing an enum instance as an argument in a function call: func filterExpecter(type:Filter) {} filterExpecter(.Albums) In the second line, I create an instance of Filter and pass it, all in one move, without having to include the name of the enum. That’s because Swift knows from the function declaration that a Filter is expected here. In real life, the space savings when omitting the enum name can be considerable — especially because, when talking to Cocoa, the enum type names are often long. For example: let v = UIView() v.contentMode = .Center A UIView’s contentMode property is typed as a UIViewContentMode enum. Our code is neater and simpler because we don’t have to include the name UIViewContentMode explicitly here; .Center is nicer than UIViewContentMode.Center. But either is legal. Code inside an enum declaration can use a case name without dot-notation. 
The enum is a namespace; code inside the declaration is inside the namespace, so it can see the case names directly. Instances of an enum with the same case are regarded as equal. Thus, you can compare an enum instance for equality against a case. Again, the type of enum is known from the first term in the comparison, so the second term can omit the enum name: func filterExpecter(type:Filter) { if type == .Albums { print("it's albums") } } filterExpecter(.Albums) // "it's albums" Optionally, when you declare an enum, you can add a type declaration. The cases then all carry with them a fixed (constant) value of that type. If the type is an integer numeric type, the values can be implicitly assigned, and will start at zero by default. In this code, .Mannie carries a value of 0, .Moe carries of a value of 1, and so on: enum PepBoy : Int { case Mannie case Moe case Jack } If the type is String, the implicitly assigned values are the string equivalents of the case names. In this code, .Albums carries a value of "Albums", and so on: enum Filter : String { case Albums case Playlists case Podcasts case Books } Regardless of the type, you can assign values explicitly as part of the case declarations: enum Filter : String { case Albums = "Albums" case Playlists = "Playlists" case Podcasts = "Podcasts" case Books = "Audiobooks" } The types attached to an enum in this way are limited to numbers and strings, and the values assigned must be literals. The values carried by the cases are called their raw values. An instance of this enum has just one case, so it has just one fixed raw value, which can be retrieved with its rawValue property: let type = Filter.Albums print(type.rawValue) // Albums Having each case carry a fixed raw value can be quite useful. In my Albumen app, the Filter cases really do have those String values, and so when the view controller wants to know what title string to put at the top of the screen, it simply retrieves the current type’s rawValue. 
The raw value associated with each case must be unique within this enum; the compiler will enforce this rule. Therefore, the mapping works the other way: given a raw value, you can derive the case. For example, you can instantiate an enum that has raw values by using its rawValue: initializer: let type = Filter(rawValue:"Albums") However, the attempt to instantiate the enum in this way might fail, because you might supply a raw value corresponding to no case; therefore, this is a failable initializer, and the value returned is an Optional. In that code, type is not a Filter; it’s an Optional wrapping a Filter. This might not be terribly important, however, because the thing you are most likely to want to do with an enum is to compare it for equality with a case of the enum; you can do that with an Optional without unwrapping it. This code is legal and works correctly: let type = Filter(rawValue:"Albums") if type == .Albums { // ... The raw values discussed in the preceding section are fixed in advance: a given case carries with it a certain raw value, and that’s that. Alternatively, you can construct a case whose constant value can be set when the instance is created. To do so, do not declare any type for the enum as a whole; instead, append a tuple type to the name of the case. There will usually be just one type in this tuple, so what you’ll write will look like a type name in parentheses. Any type may be declared. Here’s an example: enum Error { case Number(Int) case Message(String) case Fatal } That code means that, at instantiation time, an Error instance with the .Number case must be assigned an Int value, an Error instance with the .Message case must be assigned a String value, and an Error instance with the .Fatal case can’t be assigned any value. 
Instantiation with assignment of a value is really a way of calling an initialization function, so to supply the value, you pass it as an argument in parentheses: let err : Error = .Number(4) The attached value here is called an associated value. What you are supplying here is actually a tuple, so it can contain literal values or value references; this is legal: let num = 4 let err : Error = .Number(num) The tuple can contain more than one value, with or without names; if the values have names, they must be used at initialization time: enum Error { case Number(Int) case Message(String) case Fatal(n:Int, s:String) } let err : Error = .Fatal(n:-12, s:"Oh the horror") An enum case that declares an associated value is actually an initialization function, so you can capture a reference to that function and call the function later: let fatalMaker = Error.Fatal let err = fatalMaker(n:-1000, s:"Unbelievably bad error") I’ll explain how to extract the associated value from an actual instance of such an enum in Chapter 5. At the risk of sounding like a magician explaining his best trick, I will now reveal how an Optional works. An Optional is simply an enum with two cases: .None and .Some. If it is .None, it carries no associated value, and it equates to nil. If it is .Some, it carries the wrapped value as its associated value. An explicit enum initializer must do what default initialization does: it must return a particular case of this enum. To do so, set self to the case. In this example, I’ll expand my Filter enum so that it can be initialized with a numeric argument: enum Filter : String { case Albums = "Albums" case Playlists = "Playlists" case Podcasts = "Podcasts" case Books = "Audiobooks" static var cases : [Filter] = [Albums, Playlists, Podcasts, Books] init(_ ix:Int) { self = Filter.cases[ix] } } Now there are three ways to make a Filter instance: let type1 = Filter.Albums let type2 = Filter(rawValue:"Playlists")! 
let type3 = Filter(2) // .Podcasts In that example, we’ll crash in the third line if the caller passes a number that’s out of range (less than 0 or greater than 3). If we want to avoid that, we can make this a failable initializer and return nil if the number is out of range: enum Filter : String { case Albums = "Albums" case Playlists = "Playlists" case Podcasts = "Podcasts" case Books = "Audiobooks" static var cases : [Filter] = [Albums, Playlists, Podcasts, Books] init!(_ ix:Int) { if !(0...3).contains(ix) { return nil } self = Filter.cases[ix] } } An enum can have multiple initializers. Enum initializers can delegate to one another by saying self.init(...). The only requirement is that, at some point in the calling chain, self must be set to a case; if that doesn’t happen, your enum won’t compile. In this example, I improve my Filter enum so that it can be initialized with a String raw value without having to say rawValue: in the call. To do so, I declare a failable initializer with a string parameter that delegates to the built-in failable rawValue: initializer: enum Filter : String { case Albums = "Albums" case Playlists = "Playlists" case Podcasts = "Podcasts" case Books = "Audiobooks" static var cases : [Filter] = [Albums, Playlists, Podcasts, Books] init!(_ ix:Int) { if !(0...3).contains(ix) { return nil } self = Filter.cases[ix] } init!(_ rawValue:String) { self.init(rawValue:rawValue) } } Now there are four ways to make a Filter instance: let type1 = Filter.Albums let type2 = Filter(rawValue:"Playlists") let type3 = Filter(2) // .Podcasts let type4 = Filter("Playlists") An enum can have instance properties and static properties, but there’s a limitation: an enum instance property can’t be a stored property. This makes sense, because if two instances of the same case could have different stored instance property values, they would no longer be equal to one another — which would undermine the nature and purpose of enums. 
Computed instance properties are fine, however, and the value of the property can vary by rule in accordance with the case of self. In this example from my real code, I’ve associated a search function with each case of my Filter enum, suitable for fetching the songs of that type from the music library: enum Filter : String { case Albums = "Albums" case Playlists = "Playlists" case Podcasts = "Podcasts" case Books = "Audiobooks" var query : MPMediaQuery { switch self { case .Albums: return MPMediaQuery.albumsQuery() case .Playlists: return MPMediaQuery.playlistsQuery() case .Podcasts: return MPMediaQuery.podcastsQuery() case .Books: return MPMediaQuery.audiobooksQuery() } } } If an enum instance property is a computed variable with a setter, other code can assign to this property. However, that code’s reference to the enum instance must be a variable ( var), not a constant ( let). If you try to assign to an enum instance property through a let reference, you’ll get a compile error. An enum can have instance methods (including subscripts) and static methods. Writing an enum method is straightforward. Here’s an example from my own code. In a card game, the cards draw themselves as rectangles, ellipses, or diamonds. I’ve abstracted the drawing code into an enum that draws itself as a rectangle, an ellipse, or a diamond, depending on its case: enum ShapeMaker { case Rectangle case Ellipse case Diamond func drawShape (p: CGMutablePath, inRect r : CGRect) -> () { switch self { case Rectangle: CGPathAddRect(p, nil, r) case Ellipse: CGPathAddEllipseInRect(p, nil, r) case Diamond: CGPathMoveToPoint(p, nil, r.minX, r.midY) CGPathAddLineToPoint(p, nil, r.midX, r.minY) CGPathAddLineToPoint(p, nil, r.maxX, r.midY) CGPathAddLineToPoint(p, nil, r.midX, r.maxY) CGPathCloseSubpath(p) } } } An enum instance method that modifies the enum itself must be marked as mutating.
For example, an enum instance method might assign to an instance property of self; even though this is a computed property, such assignment is illegal unless the method is marked as mutating. An enum instance method can even change the case of self, by assigning to self; but again, the method must be marked as mutating. The caller of a mutating instance method must have a variable reference to the instance ( var), not a constant reference ( let). In this example, I add an advance method to my Filter enum. The idea is that the cases constitute a sequence, and the sequence can cycle. By calling advance, I transform a Filter instance into an instance of the next case in the sequence: enum Filter : String { case Albums = "Albums" case Playlists = "Playlists" case Podcasts = "Podcasts" case Books = "Audiobooks" static var cases : [Filter] = [Albums, Playlists, Podcasts, Books] mutating func advance() { var ix = Filter.cases.indexOf(self)! ix = (ix + 1) % 4 self = Filter.cases[ix] } } And here’s how to call it: var type = Filter.Books type.advance() // type is now Filter.Albums (A subscript setter is always considered mutating and does not have to be specially marked.) An enum is a switch whose states have names. There are many situations where that’s a desirable thing. You could implement a multistate value yourself; for example, if there are five possible states, you could use an Int whose values can be 0 through 4. But then you would have to provide a lot of additional overhead — making sure that no other values are used, and interpreting those numeric values correctly. A list of five named cases is much better! Even when there are only two states, an enum is often better than, say, a mere Bool, because the enum’s states have names. With a Bool, you have to know what true and false signify in a particular usage; with an enum, the name of the enum and the names of its cases tell you its significance. 
Moreover, you can store extra information in an enum’s associated value or raw value; you can’t do that with a mere Bool. For example, in my LinkSame app, the user can play a real game with a timer or a practice game without a timer. At various places in the code, I need to know which type of game this is. The game types are the cases of an enum: enum InterfaceMode : Int { case Timed = 0 case Practice = 1 } The current game type is stored in an instance property interfaceMode, whose value is an InterfaceMode. Thus, it’s easy to set the game type by case name: // ... initialize new game ... self.interfaceMode = .Timed And it’s easy to examine the game type by case name: // notify of high score only if user is not just practicing if self.interfaceMode == .Timed { // ... So what are the raw value integers for? That’s the really clever part. They correspond to the segment indexes of a UISegmentedControl in the interface! Whenever I change the interfaceMode property, a setter observer also selects the corresponding segment of the UISegmentedControl ( self.timedPractice), simply by fetching the rawValue of the current enum case: var interfaceMode : InterfaceMode = .Timed { willSet (mode) { self.timedPractice?.selectedSegmentIndex = mode.rawValue } } A struct is the Swift object type par excellence. An enum, with its fixed set of cases, is a reduced, specialized kind of object. A class, at the other extreme, will often turn out to be overkill; it has some features that a struct lacks, but if you don’t need those features, a struct may be preferable. Of the numerous object types declared in the Swift header, only four are classes (and you are unlikely to encounter any of them consciously). On the contrary, nearly all the built-in object types provided by Swift itself are structs. A String is a struct. An Int is a struct. A Range is a struct. An Array is a struct. And so on. That shows how powerful a struct can be. 
A struct that doesn’t have an explicit initializer and that doesn’t need an explicit initializer — because it has no stored properties, or because all its stored properties are assigned default values as part of their declaration — automatically gets an implicit initializer with no parameters, init(). For example: struct Digit { var number = 42 } That struct can be initialized by saying Digit(). But if you add any explicit initializers of your own, you lose that implicit initializer: struct Digit { var number = 42 init(number:Int) { self.number = number } } Now you can say Digit(number:42), but you can’t say Digit() any longer. Of course, you can add an explicit initializer that does the same thing: struct Digit { var number = 42 init() {} init(number:Int) { self.number = number } } Now you can say Digit() once again, as well as Digit(number:42). A struct that has stored properties and that doesn’t have an explicit initializer automatically gets an implicit initializer derived from its instance properties. This is called the memberwise initializer. For example: struct Digit { var number : Int // can use "let" here } That struct is legal — indeed, it is legal even if the number property is declared with let instead of var — even though it seems we have not fulfilled the contract requiring us to initialize all stored properties in their declaration or in an initializer. The reason is that this struct automatically has a memberwise initializer which does initialize all its properties. In this case, the memberwise initializer is called init(number:). 
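A quick sketch of that memberwise initializer in use (restating the Digit example just given):

```swift
struct Digit {
    var number : Int
}
// The implicit memberwise initializer takes one argument per stored property:
let d = Digit(number:42)
print(d.number) // 42
```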
The memberwise initializer exists even for var stored properties that are assigned a default value in their declaration; thus, this struct has a memberwise initializer init(number:), in addition to its implicit init() initializer: struct Digit { var number = 42 } But if you add any explicit initializers of your own, you lose the memberwise initializer (though of course you can write an explicit initializer that does the same thing). If a struct has any explicit initializers, then they must fulfill the contract that all stored properties must be initialized either by direct initialization in the declaration or by all initializers. If a struct has multiple explicit initializers, they can delegate to one another by saying self.init(...). A struct can have instance properties and static properties, and they can be stored or computed variables. If other code wants to set a property of a struct instance, its reference to that instance must be a variable ( var), not a constant ( let). A struct can have instance methods (including subscripts) and static methods. If an instance method sets a property, it must be marked as mutating, and the caller’s reference to the struct instance must be a variable ( var), not a constant ( let). A mutating instance method can even replace this instance with another instance, by setting self to a different instance of the same struct. (A subscript setter is always considered mutating and does not have to be specially marked.) I very often use a degenerate struct as a handy namespace for constants. I call such a struct “degenerate” because it consists entirely of static members; I don’t intend to use this object type to make any instances. Nevertheless, there is absolutely nothing wrong with this use of a struct. For example, let’s say I’m going to be storing user preference information in Cocoa’s NSUserDefaults. NSUserDefaults is a kind of dictionary: each item is accessed through a key. The keys are typically strings. 
A common programmer mistake is to write out these string keys literally every time a key is used; if you then misspell a key name, there’s no penalty at compile time, but your code will mysteriously fail to work correctly. The proper approach is to embody these keys as constant strings and use the names of the strings; that way, if you make a mistake typing the name of a string, the compiler can catch you. A struct with static members is a great way to define those constant strings and clump their names into a namespace: struct Default { static let Rows = "CardMatrixRows" static let Columns = "CardMatrixColumns" static let HazyStripy = "HazyStripy" } That code means that I can now refer to an NSUserDefaults key with a name, such as Default.HazyStripy. If a struct declares static members whose values are instances of the same struct type, you can omit the struct name when supplying a static member where an instance of this struct type is expected — as if the struct were an enum: struct Thing { var rawValue : Int = 0 static var One : Thing = Thing(rawValue:1) static var Two : Thing = Thing(rawValue:2) } let thing : Thing = .One // no need to say Thing.One here The example is artificial, but the situation is not; many Objective-C enums are bridged to Swift as this kind of struct (and I’ll talk about them later in this chapter). A class is similar to a struct, with the following key differences: Classes are reference types. This means, among other things, that a class instance has two remarkable features that are not true of struct instances or enum instances. First, even if your reference to a class instance is a constant ( let), you can change the value of an instance property through that reference; accordingly, an instance method of a class never has to be marked mutating (and cannot be). Second, when a class instance is assigned to a variable or passed as a function argument, you can wind up with multiple references to the same instance. In Objective-C, classes are the only object type. Some built-in Swift struct types are magically bridged to Objective-C class types, but your custom struct types don’t have that magic.
Thus, when programming iOS with Swift, a primary reason for declaring a class, rather than a struct, is as a form of interchange with Objective-C and Cocoa. A major difference between enums and structs, on the one hand, and classes, on the other hand, is that enums and structs are value types, whereas classes are reference types. A value type is not mutable in place. In practice, this means that you can’t change the value of an instance property of a value type. It looks like you can do it, but in reality, you can’t. For example, consider a struct. A struct is a value type: struct Digit { var number : Int init(_ n:Int) { self.number = n } } Now, it looks as if you can change a Digit’s number property. That, after all, is the whole purpose of declaring that property as a var; and Swift’s syntax of assignment would certainly lead us to believe that changing a Digit’s number is possible: var d = Digit(123) d.number = 42 But in reality, in that code, we are not changing the number property of this Digit instance; we are, in fact, making a different Digit instance and replacing the first one. To see that this is true, add a setter observer: var d : Digit = Digit(123) { didSet { print("d was set") } } d.number = 42 // "d was set" In general, then, when you change a value type instance, you are actually replacing that instance with another instance. That explains why it is impossible to mutate a value type instance if the reference to that instance is declared with let. As you know, an initialized variable declared with let cannot be assigned to. If that variable refers to a value type instance, and that value type instance has a property, and we try to assign to that property, even if the property is declared with var, the compiler will stop us: struct Digit { var number : Int init(_ n:Int) { self.number = n } } let d = Digit(123) d.number = 42 // compile error The reason is that this change would require us to replace the Digit instance inside the d shoebox.
But we can’t replace the Digit instance pointed to by d with another Digit instance, because that would mean assigning into d — which the let declaration forbids. That, in turn, is why an instance method of a struct or enum that sets a property of the instance must be marked explicitly with the mutating keyword. For example: struct Digit { var number : Int init(_ n:Int) { self.number = n } mutating func changeNumberTo(n:Int) { self.number = n } } Without the mutating keyword, that code won’t compile. The mutating keyword assures the compiler that you understand what’s really happening here: if that method is called, it replaces the instance. The result is that this method can be called only on a reference declared with var, not let: let d = Digit(123) d.changeNumberTo(42) // compile error None of what I’ve just said, however, applies to class instances! Class instances are reference types, not value types. An instance property of a class, to be settable, must be declared with var, obviously; but the reference to a class instance does not have to be declared with var in order to set that property through that reference: class Dog { var name : String = "Fido" } let rover = Dog() rover.name = "Rover" // fine In the last line of that code, the class instance pointed to by rover is being mutated in place. No implicit assignment to rover is involved, and so the let declaration is powerless to prevent the mutation. A setter observer on a Dog variable is not called when a property is set: var rover : Dog = Dog() { didSet { print("did set rover") } } rover.name = "Rover" // nothing in console The setter observer would be called if we were to set rover explicitly (to another Dog instance), but it is not called merely because we change a property of the Dog instance already pointed to by rover. Those examples involve a declared variable reference. Exactly the same difference between a value type and a reference type may be seen with a parameter of a function call. 
The compiler will stop us in our tracks if we try to assign into an enum parameter’s instance property or a struct parameter’s instance property. This doesn’t compile: func digitChanger(d:Digit) { d.number = 42 // compile error } To make that code compile, we must declare the parameter with var: func digitChanger(var d:Digit) { d.number = 42 } But this compiles even without the var declaration: func dogChanger(d:Dog) { d.name = "Rover" } The underlying reason for these differences between value types and reference types is that, with a reference type, there is in effect a concealed level of indirection between your reference to the instance and the instance itself; the reference actually refers to a pointer to the instance. This, in turn, has another important implication: it means that when a class instance is assigned to a variable or passed as an argument to a function, you can wind up with multiple references to the same object. That is not true of structs and enums. When an enum instance or a struct instance is assigned to a variable, or passed to or from a function, what is assigned or passed is essentially a new copy of that instance. But when a class instance is assigned to a variable, or passed to or from a function, what is assigned or passed is a reference to the same instance. To prove it, I’ll assign one reference to another and then mutate the second reference — and then I’ll examine what happened to the first reference. Let’s start with the struct: var d = Digit(123) print(d.number) // 123 var d2 = d // assignment! d2.number = 42 print(d.number) // 123 In that code, we changed the number property of d2, a struct instance; but nothing happened to the number property of d. Now let’s try the class: var fido = Dog() print(fido.name) // Fido var rover = fido // assignment! rover.name = "Rover" print(fido.name) // Rover In that code, we changed the name property of rover, a class instance — and the name property of fido was changed as well! 
That’s because, after the assignment in the third line, fido and rover refer to one and the same instance. When an enum or struct instance is assigned, it is effectively copied; a fresh, separate instance is created. But when a class instance is assigned, you get a new reference to the same instance. The same thing is true of parameter passing. Let’s start with the struct: func digitChanger(var d:Digit) { d.number = 42 } var d = Digit(123) print(d.number) // 123 digitChanger(d) print(d.number) // 123 We passed our Digit struct instance d to the function digitChanger, which set the number property of its local parameter d to 42. Nevertheless, the number property of our Digit d remains 123. That’s because the Digit that arrives inside digitChanger is quite literally a different Digit. The act of passing a Digit as a function argument creates a separate copy. But with a class instance, what is passed is a reference to the same instance: func dogChanger(d:Dog) { // no "var" needed d.name = "Rover" } var fido = Dog() print(fido.name) // "Fido" dogChanger(fido) print(fido.name) // "Rover" The change made to d inside the function dogChanger affected our Dog instance fido! Handing a class instance to a function does not copy that instance; it is more like lending that instance to the function. The ability to generate multiple references to the same instance is significant particularly in a world of object-based programming, where objects persist and can have properties that persist along with them. If object A and object B are both long-lived objects, and if they both have a Dog property (where Dog is a class), and if they have each been handed a reference to one and the same Dog instance, then either object A or object B can mutate its Dog, and this mutation will affect the other’s Dog. You can thus be holding on to an object, only to discover that it has been mutated by someone else behind your back. 
The problem is even more acute in a multithreaded app, where one and the same object can be mutated differently, in place, by two different threads. None of these issues arise with a value type; this difference can, indeed, be an important reason for preferring a struct to a class when you’re designing an object type. The fact that class instances are reference types can thus be bad. But it is also good! It’s good because it means that passing a class instance is simple: all you’re doing is passing a pointer. No matter how big and complicated a class instance may be, no matter how many properties it may have containing vast amounts of data, passing the instance is incredibly fast and efficient, because no new data is generated. Moreover, the extended lifetime of a class instance, as it is passed around, can be crucial to its functionality and its integrity; a UIViewController needs to be a class, not a struct, because an individual UIViewController instance, no matter how it gets passed around, must continue to represent the same single real and persistent view controller in your running app’s view controller hierarchy. Two classes can be related as subclass and superclass: one class (the subclass) can be declared as based on another class (its superclass), whose members it then inherits. As far as the Swift language itself is concerned, there is no requirement that a class should have any superclass, or, if it does have a superclass, that it should ultimately be descended from any particular base class. Thus, a Swift program can have many classes that have no superclass, and it can have many independent hierarchical subclass trees, each descended from a different base class. Cocoa, however, doesn’t work that way. In Cocoa, there is effectively just one base class — NSObject, which embodies all the functionality necessary for a class to be a class in the first place — and all other classes are subclasses, at some level, of that one base class. Cocoa thus consists of one huge tree of hierarchically arranged classes, even before you write a single line of code or create any classes of your own.
We can imagine diagramming this tree as an outline. And in fact Xcode will show you this outline (Figure 4-1): in an iOS project window, choose View → Navigators → Show Symbol Navigator and click Hierarchical, with the first and third icons in the filter bar selected (blue). The Cocoa classes are the part of the tree descending from NSObject. The reason for having a superclass–subclass relationship in the first place is to allow related classes to share functionality. Suppose, for example, we have a Dog class and a Cat class, and we are considering declaring a walk method for both of them. We might reason that both a dog and a cat walk in pretty much the same way, by virtue of both being quadrupeds. So it might make sense to declare walk in a Quadruped class and to make Dog and Cat subclasses of Quadruped; each of them then inherits walk from their common superclass. To declare that a certain class is a subclass of a certain superclass, add a colon and the superclass name after the class’s name in its declaration. So, for example: class Quadruped { func walk () { print("walk walk walk") } } class Dog : Quadruped {} class Cat : Quadruped {} Now let’s prove that Dog has indeed inherited walk from Quadruped: let fido = Dog() fido.walk() // walk walk walk Observe that, in that code, the walk message can be sent to a Dog instance just as if the walk instance method were declared in the Dog class, even though the walk instance method is in fact declared in a superclass of Dog. That’s inheritance at work. The purpose of subclassing is not merely so that a class can inherit another class’s methods; it’s so that it can also declare methods of its own. Typically, a subclass consists of the methods inherited from its superclass and then some. If Dog has no methods of its own, after all, it’s hard to see why it should exist separately from Quadruped. But if a Dog knows how to do something that not every Quadruped knows how to do — let’s say, bark — then it makes sense as a separate class.
If we declare bark in the Dog class, and walk in the Quadruped class, and make Dog a subclass of Quadruped, then Dog inherits the ability to walk from the Quadruped class and also knows how to bark: class Quadruped { func walk () { print("walk walk walk") } } class Dog : Quadruped { func bark () { print("woof") } } Again, let’s prove that it works: let fido = Dog() fido.walk() // walk walk walk fido.bark() // woof Within a class, it is a matter of indifference whether that class has an instance method because that method is declared in that class or because the method is declared in a superclass and inherited. A message to self works equally well either way. In this code, we have declared a barkAndWalk instance method that sends two messages to self, without regard to where the corresponding methods are declared (one is native to the subclass, one is inherited from the superclass): class Quadruped { func walk () { print("walk walk walk") } } class Dog : Quadruped { func bark () { print("woof") } func barkAndWalk() { self.bark() self.walk() } } And here’s proof that it works: let fido = Dog() fido.barkAndWalk() // woof walk walk walk It is also permitted for a subclass to redefine a method inherited from its superclass. For example, perhaps some dogs bark differently from other dogs. We might have a class NoisyDog, for instance, that is a subclass of Dog. Dog declares bark, but NoisyDog also declares bark, substituting its own implementation for the inherited one. In Swift, when you override something inherited from a superclass, you must explicitly acknowledge this fact by preceding its declaration with the keyword override.
So, for example: class Quadruped { func walk () { print("walk walk walk") } } class Dog : Quadruped { func bark () { print("woof") } } class NoisyDog : Dog { override func bark () { print("woof woof woof") } } And let’s try it: let fido = Dog() fido.bark() // woof let rover = NoisyDog() rover.bark() // woof woof woof Observe that a subclass function by the same name as a superclass’s function is not necessarily, of itself, an override. Recall that Swift can distinguish two functions with the same name, provided they have different signatures. Those are different functions, and so an implementation of one in a subclass is not an override of the other in a superclass. An override situation exists only when the subclass redefines the same function that it inherits from a superclass — using the same name, including the external parameter names, and the same signature. It often happens that we want to override something in a subclass and yet access the thing overridden in the superclass. This is done by sending a message to the keyword super. Our bark implementation in NoisyDog is a case in point. What NoisyDog really does when it barks is the same thing Dog does when it barks, but more times. We’d like to express that relationship in our implementation of NoisyDog’s bark. To do so, we have NoisyDog’s bark implementation send the bark message, not to self (which would be circular), but to super; this causes the search for a bark instance method implementation to start in the superclass rather than in our own class: class Dog : Quadruped { func bark () { print("woof") } } class NoisyDog : Dog { override func bark () { for _ in 1...3 { super.bark() } } } And it works: let fido = Dog() fido.bark() // woof let rover = NoisyDog() rover.bark() // woof woof woof A subscript function is a method. If a superclass declares a subscript, the subclass can declare a subscript with the same signature, provided it designates it with the override keyword. 
To call the superclass subscript implementation, the subclass can use square brackets after the keyword super (e.g. super[3]). Along with methods, a subclass also inherits its superclass’s properties. Naturally, the subclass may also declare additional properties of its own. It is possible to override an inherited property (with some restrictions that I’ll talk about later). A class can be prevented from being subclassed by preceding its declaration with the final keyword; likewise, a class member can be prevented from being overridden by a subclass by preceding the member’s declaration with the final keyword. Initialization of a class instance is considerably more complicated than initialization of a struct or enum instance, because of the existence of class inheritance. The chief task of an initializer is to ensure that all properties have an initial value, thus making the instance well-formed as it comes into existence; and an initializer may have other tasks to perform that are essential to the initial state and integrity of this instance. A class, however, may have a superclass, which may have properties and initializers of its own. Thus we must somehow ensure that when a subclass is initialized, its superclass’s properties are initialized and the tasks of its initializers are performed in good order, in addition to initializing the properties and performing the initializer tasks of the subclass itself. Swift solves this problem coherently and reliably — and ingeniously — by enforcing some clear and well-defined rules about what a class initializer must do. The rules begin with a distinction between the kinds of initializer that a class can have:

Implicit initializer. A class with no stored properties, or with all of its stored properties initialized as part of their declaration, and with no explicit initializers, has an implicit initializer init().

Designated initializer. A class initializer, by default, is a designated initializer. A designated initializer is not a delegating initializer; it must not contain the phrase self.init(...). A class whose stored properties are not all initialized as part of their declaration must have at least one designated initializer, and every designated initializer must see to it that all stored properties are initialized.

Convenience initializer. A convenience initializer is marked with the keyword convenience. It is a delegating initializer; it must contain the phrase self.init(...).
Moreover, a convenience initializer must delegate to a designated initializer: when it says self.init(...), it must call a designated initializer in the same class — or else it must call another convenience initializer in the same class, thus forming a chain of convenience initializers which ends by calling a designated initializer in the same class. Here are some examples. This class has no stored properties, so it has an implicit init() initializer: class Dog { } let d = Dog() This class’s stored properties have default values, so it has an implicit init() initializer too: class Dog { var name = "Fido" } let d = Dog() This class has stored properties without default values; it has a designated initializer, and all of those properties are initialized in that designated initializer: class Dog { var name : String var license : Int init(name:String, license:Int) { self.name = name self.license = license } } let d = Dog(name:"Rover", license:42) This class is similar to the previous example, but it also has two convenience initializers. The caller doesn’t have to supply any parameters, because a convenience initializer with no parameters calls through a chain of convenience initializers ending with a designated initializer: class Dog { var name : String var license : Int init(name:String, license:Int) { self.name = name self.license = license } convenience init(license:Int) { self.init(name:"Fido", license:license) } convenience init() { self.init(license:1) } } let d = Dog() Note that the rules about what else an initializer can say and when it can say it, as I described them earlier in this chapter, are still in force. A designated initializer cannot, except in order to initialize a property, say self until all of this class’s properties have been initialized. A convenience initializer is a delegating initializer, so it cannot say self until after it has called, directly or indirectly, a designated initializer (and cannot set an immutable property at all). 
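That last point, that a convenience initializer cannot set an immutable property, can be sketched like this (the Star class is hypothetical, not one of the book's examples):

```swift
class Star {
    let name : String // an immutable (let) property
    init(name:String) { // designated initializer: may initialize the constant
        self.name = name
    }
    convenience init() {
        self.init(name:"Sirius") // must delegate to a designated initializer
        // self.name = "Vega" // compile error: a convenience initializer
        //                    // cannot set an immutable property
    }
}
let star = Star()
print(star.name) // Sirius
```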
Having defined and distinguished between designated initializers and convenience initializers, we are ready for the rules about what happens with regard to initializers when a class is itself a subclass of some other class:

If a subclass doesn’t declare any initializers of its own, it inherits its superclass’s initializers. The same is true if the subclass declares only convenience initializers; it must inherit its superclass’s initializers, because those inherited initializers are what supply self with the designated initializers that the convenience initializers must call.

If a subclass declares any designated initializers of its own, the entire game changes drastically. Now, no initializers are inherited! The existence of an explicit designated initializer blocks initializer inheritance. The only initializers the subclass now has are the initializers that you explicitly write. (However, there’s an exception, which I’ll come to in a moment.)

Every designated initializer in the subclass now has an extra requirement: it must call one of the superclass’s designated initializers, by saying super.init(...). Moreover, the rules about saying self continue to apply. A subclass designated initializer must do things in this order: first, it must see to it that all of the subclass’s own properties are initialized; next, it must call super.init(...), and the initializer that it calls must be a designated initializer; only then may this initializer say self for any other reason — to call an instance method, say, or to access an inherited property.

Convenience initializers in the subclass are still subject to the rules I’ve already outlined. They must call self.init(...), calling a designated initializer directly or (through a chain of convenience initializers) indirectly. In the absence of inherited initializers, the initializer that a convenience initializer calls must be explicitly present in the subclass.

If a designated initializer doesn’t call super.init(...), then super.init() is called implicitly if possible.
This code is legal: class Cat { } class NamedCat : Cat { let name : String init(name:String) { self.name = name } } In my view, this feature of Swift is a mistake: Swift should not indulge in secret behavior, even if that behavior might be considered “helpful.” I believe that that code should not compile; a designated initializer should always have to call super.init(...) explicitly. Superclass initializers can be overridden in the subclass, in accordance with these restrictions: a subclass initializer whose signature matches a convenience initializer of the superclass is not an override, and must not be marked override; a subclass initializer whose signature matches a designated initializer of the superclass is an override, and must be marked override. The superclass designated initializer that an override designated initializer calls with super.init(...) can be the one that it overrides. Generally, if a subclass has any designated initializers, no initializers are inherited. But if a subclass overrides all of its superclass’s designated initializers, then the subclass does inherit the superclass’s convenience initializers. A failable designated initializer cannot say return nil until after it has completed all of its own initialization duties. Thus, for example, a failable subclass designated initializer must see to it that all the subclass’s properties are initialized and must call super.init(...) before it can say return nil. (There is a certain delicious irony here: before it can tear the instance down, the initializer must finish building the instance up. But this is necessary in order to ensure that the superclass is given a coherent opportunity to do its own initialization.) If an initializer called by a failable initializer is failable, the calling syntax does not change, and no additional test is needed — if a called failable initializer fails, the whole initialization process will fail (and will be aborted) immediately. A failable initializer that returns an implicitly unwrapped Optional (init!) is treated just like a normal initializer (init) for purposes of overriding and delegation.
For a failable initializer that returns an ordinary Optional (init?), there are some additional restrictions:

- init can override init?, but not vice versa.
- init? can call init.
- init can call init?, unwrapping the result with an exclamation mark (because if the init? fails, you'll crash).

Here's a meaningless example, just to show the legal syntax:

class A : NSObject {
    init?(ok:Bool) {
        super.init() // init? can call init
    }
}
class B : A {
    override init(ok:Bool) { // init can override init?
        super.init(ok:ok)! // init can call init? using "!"
    }
}

At no time can a subclass initializer set a constant (let) property of a superclass. This is because, by the time the subclass is allowed to do anything other than initialize its own properties and call another initializer, the superclass has finished its own initialization and the door for initializing its constants has closed.

Here are some basic examples. We start with a class whose subclass has no explicit initializers of its own:

class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    convenience init(license:Int) {
        self.init(name:"Fido", license:license)
    }
}
class NoisyDog : Dog {
}

Given that code, you can make a NoisyDog like this:

let nd1 = NoisyDog(name:"Fido", license:1)
let nd2 = NoisyDog(license:2)

That code is legal, because NoisyDog inherits its superclass's initializers. However, you can't make a NoisyDog like this:

let nd3 = NoisyDog() // compile error

That code is illegal. Even though a NoisyDog has no properties of its own, it has no implicit init() initializer; its initializers are its inherited initializers, and its superclass, Dog, has no implicit init() initializer to inherit.
Now here is a class whose subclass's only explicit initializer is a convenience initializer:

class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    convenience init(license:Int) {
        self.init(name:"Fido", license:license)
    }
}
class NoisyDog : Dog {
    convenience init(name:String) {
        self.init(name:name, license:1)
    }
}

Observe how NoisyDog's convenience initializer fulfills its contract by calling self.init(...) to call a designated initializer — which it happens to have inherited. Given that code, there are three ways to make a NoisyDog, just as you would expect:

let nd1 = NoisyDog(name:"Fido", license:1)
let nd2 = NoisyDog(license:2)
let nd3 = NoisyDog(name:"Rover")

Next, here is a class whose subclass declares a designated initializer:

class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    convenience init(license:Int) {
        self.init(name:"Fido", license:license)
    }
}
class NoisyDog : Dog {
    init(name:String) {
        super.init(name:name, license:1)
    }
}

NoisyDog's explicit initializer is now a designated initializer. It fulfills its contract by calling a designated initializer in super. NoisyDog has now cut off inheritance of all initializers; the only way to make a NoisyDog is like this:

let nd1 = NoisyDog(name:"Rover")

Finally, here is a class whose subclass overrides its designated initializers:

class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    convenience init(license:Int) {
        self.init(name:"Fido", license:license)
    }
}
class NoisyDog : Dog {
    override init(name:String, license:Int) {
        super.init(name:name, license:license)
    }
}

NoisyDog has overridden all of its superclass's designated initializers, so it inherits its superclass's convenience initializers.
There are thus two ways to make a NoisyDog:

let nd1 = NoisyDog(name:"Rover", license:1)
let nd2 = NoisyDog(license:2)

Those examples illustrate the main rules that you should keep in your head. You probably don't need to memorize the remaining rules, because the compiler will enforce them, and will keep slapping you down until you get them right.

There's one more thing to know about class initializers: a class initializer may be preceded by the keyword required. This means that a subclass may not lack it. This, in turn, means that if a subclass implements designated initializers, thus blocking inheritance, it must override this initializer. Here's a (rather pointless) example:

class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
}
class NoisyDog : Dog {
    var obedient = false
    init(obedient:Bool) {
        self.obedient = obedient
        super.init(name:"Fido")
    }
} // compile error

That code won't compile. init(name:) is marked required; thus, our code won't compile unless we inherit or override init(name:) in NoisyDog. But we cannot inherit it, because, by implementing the NoisyDog designated initializer init(obedient:), we have blocked inheritance. Therefore we must override it:

class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
}
class NoisyDog : Dog {
    var obedient = false
    init(obedient:Bool) {
        self.obedient = obedient
        super.init(name:"Fido")
    }
    required init(name:String) {
        super.init(name:name)
    }
}

Observe that our overridden required initializer is not marked with override, but is marked with required, thus guaranteeing that the requirement continues drilling down to any further subclasses.

I have explained what declaring an initializer as required does, but I have not explained why you'd need to do it. I'll give examples later in this chapter.

The initializer inheritance rules can cause some rude surprises to pop up when you're subclassing one of Cocoa's classes.
For example, when programming iOS, you will surely declare a UIViewController subclass. Let's say you give your subclass a designated initializer. A designated initializer in the superclass, UIViewController, is init(nibName:bundle:), so, in obedience to the rules, you call that from your designated initializer:

class ViewController: UIViewController {
    init() {
        super.init(nibName:"MyNib", bundle:nil)
    }
}

So far, so good; but you are then surprised to find that code elsewhere that makes a ViewController instance no longer compiles:

let vc = ViewController(nibName:"MyNib", bundle:nil) // compile error

That code was legal until you wrote your designated initializer; now it isn't. The reason is that by implementing a designated initializer in your subclass, you have blocked initializer inheritance! Your ViewController class used to inherit the init(nibName:bundle:) initializer from UIViewController; now it doesn't. You need to override that initializer as well, even if all your implementation does is to call the overridden initializer:

class ViewController: UIViewController {
    init() {
        super.init(nibName:"MyNib", bundle:nil)
    }
    override init(nibName: String?, bundle: NSBundle?) {
        super.init(nibName:nibName, bundle:bundle)
    }
}

The code that instantiates ViewController now does indeed compile:

let vc = ViewController(nibName:"MyNib", bundle:nil) // fine

But now there's a further surprise: ViewController itself doesn't compile! The reason is that there is also a required initializer being imposed upon ViewController, and you must implement that as well. You didn't know about this before, because, when ViewController had no explicit initializers, you were inheriting the required initializer; now you've blocked inheritance.
Fortunately, Xcode's Fix-It feature offers to supply a stub implementation; it doesn't do anything (in fact, it crashes if called), but it satisfies the compiler:

required init?(coder aDecoder: NSCoder) {
    fatalError("init(coder:) has not been implemented")
}

I'll explain later in this chapter how this required initializer is imposed.

A class, and only a class (not the other flavors of object type), can have a deinitializer. This is a function declared with the keyword deinit followed by curly braces containing the function body. You never call this function yourself; it is called by the runtime when an instance of this class goes out of existence. If a class has a superclass, the subclass's deinitializer (if any) is called before the superclass's deinitializer (if any).

The idea of a deinitializer is that you might want to perform some cleanup, or just log to the console to prove to yourself that your instance is going out of existence in good order. I'll take advantage of deinitializers when I discuss memory management issues in Chapter 5.

A subclass can override its inherited properties. The override must have the same name and type as the inherited property, and must be marked with override. (A property cannot have the same name as an inherited property but a different type, as there is no way to distinguish them.) The following additional rules apply:

- If the superclass property is writable, the subclass's override may consist simply of adding setter observers to this property.
- Alternatively, the subclass's override may be a computed variable. In that case:
  - If the superclass property is stored, the subclass's computed variable override must have both a getter and a setter.
  - If the superclass property is computed, the subclass's computed variable override must reimplement all the accessors that the superclass property implements. (If the superclass property is read-only, with just a getter, the override can add a setter.)

The overriding property's functions may refer to — and may read from and write to — the inherited property, through the super keyword.

A class can have static members, marked static, just like a struct or an enum. It can also have class members, marked class. Both static and class members are inherited by subclasses (as static and class members, respectively).

The chief difference between static and class methods from the programmer's point of view is that a static method cannot be overridden; it is as if static were a synonym for class final.
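Returning to deinitializers for a moment, here's a minimal sketch (the class and its log messages are invented for illustration) proving that deinit runs when the instance goes out of existence:

```swift
class Dog {
    var name: String
    init(name: String) {
        self.name = name
    }
    deinit {
        // called automatically by the runtime; never call it yourself
        print("farewell from \(name)")
    }
}

do {
    let d = Dog(name: "Fido")
    print("\(d.name) exists")
} // d goes out of existence here, so "farewell from Fido" is printed
```

The do block gives the instance a scope; when the block ends, the last reference to the Dog disappears and its deinitializer fires.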
Here, for example, I'll use a static method to express what dogs say:

class Dog {
    static func whatDogsSay() -> String {
        return "woof"
    }
    func bark() {
        print(Dog.whatDogsSay())
    }
}

A subclass now inherits whatDogsSay, but can't override it. No subclass of Dog may contain any implementation of a class method or a static method whatDogsSay with this same signature.

Now I'll use a class method to express what dogs say:

class Dog {
    class func whatDogsSay() -> String {
        return "woof"
    }
    func bark() {
        print(Dog.whatDogsSay())
    }
}

A subclass inherits whatDogsSay, and can override it, either as a class function or as a static function:

class NoisyDog : Dog {
    override class func whatDogsSay() -> String {
        return "WOOF"
    }
}

The difference between static properties and class properties is similar, but with an additional, rather dramatic qualification: static properties can be stored, but class properties can only be computed.

Here, I'll use a static property to express what dogs say:

class Dog {
    static var whatDogsSay = "woof"
    func bark() {
        print(Dog.whatDogsSay)
    }
}

A subclass inherits whatDogsSay, but can't override it; no subclass of Dog can declare a class or static property whatDogsSay.

Now I'll use a class property to express what dogs say. It cannot be a stored property, so I'll have to use a computed property instead:

class Dog {
    class var whatDogsSay : String {
        return "woof"
    }
    func bark() {
        print(Dog.whatDogsSay)
    }
}

A subclass inherits whatDogsSay and can override it either as a class property or as a static property.
But even as a static property, the subclass's override cannot be a stored property, in keeping with the rules of property overriding that I outlined earlier:

class NoisyDog : Dog {
    override static var whatDogsSay : String {
        return "WOOF"
    }
}

When a computer language has a hierarchy of types and subtypes, it must resolve the question of what such a hierarchy means for the relationship between the type of an object and the declared type of a reference to that object. Swift obeys the principles of polymorphism. In my view, it is polymorphism that turns an object-based language into a full-fledged object-oriented language. We may summarize Swift's polymorphism principles as follows:

- Substitution: Wherever a certain type is expected, a subtype of that type may be used instead.
- Internal identity: An object's type is a matter of its internal nature, regardless of how a reference to that object happens to be typed.

To see what these principles mean in practice, imagine we have a Dog class, along with its subclass, NoisyDog:

class Dog {
}
class NoisyDog : Dog {
}
let d : Dog = NoisyDog()

The substitution rule says that the last line is legal: we can assign a NoisyDog instance to a reference, d, that is typed as a Dog. The internal identity rule says that, under the hood, d now is a NoisyDog.

You may be asking: How is the internal identity rule manifested? If a reference to a NoisyDog is typed as a Dog, in what sense is this "really" a NoisyDog? To illustrate, let's examine what happens when a subclass overrides an inherited method. Let me redefine Dog and NoisyDog to demonstrate:

class Dog {
    func bark() {
        print("woof")
    }
}
class NoisyDog : Dog {
    override func bark() {
        super.bark(); super.bark()
    }
}

Now look at this code and tell me whether it compiles and, if so, what happens when it runs:

func tellToBark(d:Dog) {
    d.bark()
}
var d = NoisyDog()
tellToBark(d)

That code does compile. We create a NoisyDog instance and pass it to a function that expects a Dog parameter. This is permitted, because NoisyDog is a Dog subclass (substitution). A NoisyDog can be used wherever a Dog is expected. Typologically, a NoisyDog is a kind of Dog.
But when the code actually runs, how does the object referred to by the local variable d inside the tellToBark function react to being told to bark? On the one hand, d is typed as Dog, and a Dog barks by saying "woof" once. On the other hand, in our code, when tellToBark is called, what is really passed is a NoisyDog instance, and a NoisyDog barks by saying "woof" twice. What will happen? Let's find out:

func tellToBark(d:Dog) {
    d.bark()
}
var d = NoisyDog()
tellToBark(d) // woof woof

The result is "woof woof". The internal identity rule says that what matters when a message is sent is not how the recipient of that message is typed through this or that reference, but what that recipient actually is. What arrives inside tellToBark is a NoisyDog, regardless of the type of variable that holds it; thus, the bark message causes this object to say "woof" twice. It is a NoisyDog!

Here's another important consequence of polymorphism — the meaning of the keyword self. It means the actual instance, and thus its meaning depends upon the type of the actual instance — even if the word self appears in a superclass's code. For example:

class Dog {
    func bark() {
        print("woof")
    }
    func speak() {
        self.bark()
    }
}
class NoisyDog : Dog {
    override func bark() {
        super.bark(); super.bark()
    }
}

What happens when we tell a NoisyDog to speak? Let's try it:

let d = NoisyDog()
d.speak() // woof woof

The speak method is declared in Dog, the superclass — not in NoisyDog. The speak method calls the bark method. It does this by way of the keyword self. (I could have omitted the explicit reference to self here, but self would still be involved implicitly, so I'm not cheating by making self explicit.) There's a bark method in Dog, and an override of the bark method in NoisyDog. Which bark method will be called? The word self is encountered within the Dog class's implementation of speak. But what matters is not where the word self appears but what it means. It means this instance.
And the internal identity principle tells us that this instance is a NoisyDog! Thus, it is NoisyDog's override of bark that is called.

Thanks to polymorphism, you can take advantage of subclasses to add power and customization to existing classes. This is important particularly in the world of iOS programming, where most of the classes are defined by Cocoa and don't belong to you. The UIViewController class, for example, is defined by Cocoa; it has lots of built-in methods that Cocoa will call, and these methods perform various important tasks — but in a generic way. In real life, you'll make a UIViewController subclass and override those methods to do the tasks appropriate to your particular app. This won't bother Cocoa in the slightest, because (substitution principle) wherever Cocoa expects to receive or to be talking to a UIViewController, it will accept without question an instance of your UIViewController subclass. And this substitution will also work as expected, because (internal identity principle) whenever Cocoa calls one of those UIViewController methods on your subclass, it is your subclass's override that will be called.

Polymorphism is cool, but it is also slow. It requires dynamic dispatch, meaning that the runtime has to think about what a message to a class instance means. This is another reason for preferring a struct over a class where possible: structs don't need dynamic dispatch. Alternatively, you can reduce the need for dynamic dispatch by declaring a class or a class member final or private, or by turning on Whole Module Optimization (see Chapter 6).

The Swift compiler, with its strict typing, imposes severe restrictions on what messages can be sent to an object reference. The messages that the compiler will permit to be sent to an object reference are those permitted by the reference's declared type, including its inheritance.
This means that, thanks to the internal identity principle of polymorphism, an object may be capable of receiving messages that the compiler won't permit us to send. This puts us in a serious bind. For example, let's give NoisyDog a method that Dog doesn't have:

class Dog {
    func bark() {
        print("woof")
    }
}
class NoisyDog : Dog {
    override func bark() {
        super.bark(); super.bark()
    }
    func beQuiet() {
        self.bark()
    }
}

In that code, we configure a NoisyDog so that we can tell it to beQuiet. Now look at what happens when we try to tell an object typed as a Dog to be quiet:

func tellToHush(d:Dog) {
    d.beQuiet() // compile error
}
let d = NoisyDog()
tellToHush(d)

Our code doesn't compile. We can't send the beQuiet message to this object, even though it is in fact a NoisyDog and has a beQuiet method. That's because the reference d inside the function body is typed as a Dog — and a Dog has no beQuiet method. There is a certain irony here: for once, we know more than the compiler does! We know that our code would run correctly — because d really is a NoisyDog — if only we could get our code to compile in the first place. We need a way to say to the compiler, "Look, compiler, just trust me: this thing is going to turn out to be a NoisyDog when the program actually runs, so let me send it this message."

There is in fact a way to do this — casting. To cast, you use a form of the keyword as followed by the name of the type you claim something really is. Swift will not let you cast just any old type to any old other type — for example, you can't cast a String to an Int — but it will let you cast a superclass to a subclass. This is called casting down. When you cast down, the form of the keyword as that you must use is as! with an exclamation mark. The exclamation mark reminds you that you are forcing the compiler to do something it would rather not do:

func tellToHush(d:Dog) {
    (d as! NoisyDog).beQuiet()
}
let d = NoisyDog()
tellToHush(d)

That code compiles, and works.
A useful way to rewrite the example is like this:

func tellToHush(d:Dog) {
    let d2 = d as! NoisyDog
    d2.beQuiet()
    d2.beQuiet()
}
let d = NoisyDog()
tellToHush(d)

The reason that way of rewriting the code is useful is in case we have other NoisyDog messages to send to this object. Instead of casting every time we want to send a message to it, we cast the object once to its internal identity type, and assign it to a variable. Now that variable's type — inferred, in this case, from the cast — is that internal identity type, and we can send multiple messages to the variable.

A moment ago, I said that the as! operator's exclamation mark reminds you that you are forcing the compiler's hand. It also serves as a warning: your code can now crash! The reason is that you might be lying to the compiler. Casting down is a way of telling the compiler to relax its strict type checking and to let you call the shots. If you use casting to make a false claim, the compiler may permit it, but you will crash when the app runs:

func tellToHush(d:Dog) {
    (d as! NoisyDog).beQuiet() // compiles, but prepare to crash...!
}
let d = Dog()
tellToHush(d)

In that code, we told the compiler that this object would turn out to be a NoisyDog, and the compiler obediently took its hands off and allowed us to send the beQuiet message to it. But in fact, this object was a Dog when our code ran, and so we ultimately crashed when the cast failed because this object was not a NoisyDog.

To prevent yourself from lying accidentally, you can test the type of an instance at runtime. One way to do this is with the keyword is. You can use is in a condition; if the condition passes, then cast, in the knowledge that your cast is safe:

func tellToHush(d:Dog) {
    if d is NoisyDog {
        let d2 = d as! NoisyDog
        d2.beQuiet()
    }
}

The result is that we won't cast d to a NoisyDog unless it really is a NoisyDog.

An alternative way to solve the same problem is to use Swift's as? operator.
This casts down, but with the option of failure; therefore what it casts to is (you guessed it) an Optional — and now we are on familiar ground, because we know how to deal safely with an Optional:

func tellToHush(d:Dog) {
    let noisyMaybe = d as? NoisyDog // an Optional wrapping a NoisyDog
    if noisyMaybe != nil {
        noisyMaybe!.beQuiet()
    }
}

That doesn't look much cleaner or shorter than our previous approach. But remember that we can safely send a message to an Optional by optionally unwrapping the Optional! Thus we can skip the assignment and condense to a single line:

func tellToHush(d:Dog) {
    (d as? NoisyDog)?.beQuiet()
}

First we use the as? operator to obtain an Optional wrapping a NoisyDog (or nil). Then we optionally unwrap that Optional and send a message to it. If d isn't a NoisyDog, the Optional will be nil and the message won't be sent. If d is a NoisyDog, the Optional will be unwrapped and the message will be sent. Thus, that code is safe.

Recall from Chapter 3 that comparison operators applied to an Optional are automatically applied to the object wrapped by the Optional. The as!, as?, and is operators work the same way. Consider an Optional d wrapping a Dog (that is, d is a Dog? object). This might, in actual fact, be wrapping either a Dog or a NoisyDog; the substitution principle applies to Optional types, because it applies to the type of thing wrapped by the Optional. To find out which it is, you might be tempted to use is. But can you? After all, an Optional is neither a Dog nor a NoisyDog — it's an Optional! Well, the good news is that Swift knows what you mean; when the thing on the left side of is is an Optional, Swift pretends that it's the value wrapped in the Optional. Thus, this works just as you would hope:

let d : Dog? = NoisyDog()
if d is NoisyDog { // it is!
}

When using is with an Optional, the test fails in good order if the Optional is nil.
Thus our is test really does two things: it checks whether the Optional is nil, and if it is not, it then checks whether the wrapped value is the type we specify.

What about casting? You can't really cast an Optional to anything. But you can use the as! operator with an Optional, because Swift knows what you mean; when the thing on the left side of as! is an Optional, Swift treats it as the wrapped type. Moreover, the consequence of applying the as! operator is that two things happen: Swift unwraps first, and then casts. This code works, because d is unwrapped to give us d2, which is a NoisyDog:

let d : Dog? = NoisyDog()
let d2 = d as! NoisyDog
d2.beQuiet()

That code, however, is not safe. You shouldn't cast like that, without testing first, unless you are very sure of your ground. If d were nil, you'd crash in the second line because you're trying to unwrap a nil Optional. And if d were a Dog, not a NoisyDog, you'd still crash in the second line when the cast fails. That's why there's also an as? operator, which is safe — but yields an Optional:

let d : Dog? = NoisyDog()
let d2 = d as? NoisyDog
d2?.beQuiet()

Another way you'll use casting is during a value interchange between Swift and Objective-C when two types are equivalent. For example, you can cast a Swift String to a Cocoa NSString, and vice versa. That's not because one is a subclass of the other, but because they are bridged to one another; in a very real sense, they are the same type. When you cast from String to NSString, you're not casting down, and what you're doing is not dangerous, so you use the as operator, with no exclamation mark. I gave an example, in Chapter 3, of a situation where you might need to do that:

let s = "hello"
let range = (s as NSString).rangeOfString("ell") // (1,3), an NSRange

The cast from String to NSString tells Swift to stay in the Cocoa world as it calls rangeOfString, and thus causes the result to be the Cocoa result, an NSRange, rather than a Swift Range.
A number of common classes are bridged in this way between Swift and Objective-C. Often, you won't need to cast as you cross the bridge from Swift to Objective-C, because Swift will automatically cast for you. For example, a Swift Int and a Cocoa NSNumber are two very different things; nevertheless, you can often use an Int where an NSNumber is expected, without casting, like this:

let ud = NSUserDefaults.standardUserDefaults()
ud.setObject(1, forKey: "Test")

In that code, we used an Int, namely 1, where Objective-C expects an NSObject instance. An Int is not an NSObject instance; it isn't even a class instance (it's a struct instance). But Swift sees that an NSObject is expected, decides that an NSNumber would best represent an Int, and crosses the bridge for you. Thus, what winds up being stored in NSUserDefaults is an NSNumber.

Coming back the other way, however, when you call objectForKey:, Swift has no information about what this value really is, so you have to cast explicitly if you want an Int — and now you are casting down (as I'll explain in more detail later):

let i = ud.objectForKey("Test") as! Int

That cast works because ud.objectForKey("Test") yields an NSNumber wrapping an integer, and casting that to a Swift Int is permitted — the types are bridged. But if ud.objectForKey("Test") were not an NSNumber (or if it were nil), you'd crash. If you're not sure of your ground, use is or as? to be safe.

It can be useful for an instance to refer to its own type — for example, to send a message to that type. In an earlier example, a Dog instance method fetched a Dog class property by sending a message to the Dog type explicitly — by using the word Dog:

class Dog {
    class var whatDogsSay : String {
        return "Woof"
    }
    func bark() {
        print(Dog.whatDogsSay)
    }
}

The expression Dog.whatDogsSay seems clumsy and inflexible. Why should we have to hard-code into Dog a knowledge of what class it is? It has a class; it should just know what it is.
In Objective-C, you may be accustomed to using the class instance method to deal with this situation. In Swift, an instance might not have a class (it might be a struct instance or an enum instance); what a Swift instance has is a type. The instance method that Swift provides for this purpose is the dynamicType method. An instance can access its type through this method. Thus, if you don't like the notion of a Dog instance calling a Dog class method by saying Dog explicitly, there's another way:

class Dog {
    class var whatDogsSay : String {
        return "Woof"
    }
    func bark() {
        print(self.dynamicType.whatDogsSay)
    }
}

An important thing about using dynamicType instead of hard-coding a class name is that it obeys polymorphism:

class Dog {
    class var whatDogsSay : String {
        return "Woof"
    }
    func bark() {
        print(self.dynamicType.whatDogsSay)
    }
}
class NoisyDog : Dog {
    override class var whatDogsSay : String {
        return "Woof woof woof"
    }
}

Now watch what happens:

let nd = NoisyDog()
nd.bark() // Woof woof woof

If we tell a NoisyDog instance to bark, it says "Woof woof woof". The reason is that dynamicType means, "The type that this instance actually is, right now." That's what makes this type dynamic. We send the bark message to a NoisyDog instance. The bark implementation refers to self.dynamicType; self means this instance, which is a NoisyDog, and so self.dynamicType is the NoisyDog class, and it is NoisyDog's version of whatDogsSay that is fetched.

You can also use dynamicType for learning the name of an object's type, as a string — typically for debugging purposes. When you say print(myObject.dynamicType), you'll see the type name in the console.

In some situations, you may want to pass an object type as a value. That is legal; an object type is itself an object. Here's what you need to know:

- The name of an object type's type is the name of the type itself, followed by .Type. For example, a parameter that accepts the Dog type (or the type of any Dog subclass) as its value is declared as being of type Dog.Type.
- The value of an object type, the thing you actually pass around, is the name of the type (or a reference to the type, such as dynamicType), possibly followed by the keyword self using dot-notation.
For example, here's a function that accepts a Dog type as its parameter:

func typeExpecter(whattype:Dog.Type) {
}

And here's an example of calling that function:

typeExpecter(Dog) // or: typeExpecter(Dog.self)

Or you could call it like this:

let d = Dog() // or: let d = NoisyDog()
typeExpecter(d.dynamicType) // or: typeExpecter(d.dynamicType.self)

Why might you want to do something like that? A typical situation is that your function is a factory for instances: given a type, it creates an instance of that type, possibly prepares it in some way, and returns it.

You can use a variable reference to a type to make an instance of that type, by explicitly sending it an init(...) message. For example, here's a Dog class with an init(name:) initializer, and its NoisyDog subclass:

class Dog {
    var name : String
    init(name:String) {
        self.name = name
    }
}
class NoisyDog : Dog {
}

And here's a factory method that creates a Dog or a NoisyDog, as specified by its parameter, gives it a name, and returns it:

func dogMakerAndNamer(whattype:Dog.Type) -> Dog {
    let d = whattype.init(name:"Fido") // compile error
    return d
}

As you can see, since whattype refers to a type, we can call its initializer to make an instance of that type. However, there's a problem. The code doesn't compile. The reason is that the compiler is in doubt as to whether the init(name:) initializer is implemented by every possible subtype of Dog. To reassure it, we must declare that initializer with the required keyword:

class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
}
class NoisyDog : Dog {
}

I promised I'd tell you why you might need to declare an initializer as required; now I'm fulfilling that promise! The required designation reassures the compiler; every subclass of Dog must inherit or reimplement init(name:), so it's legal to send the init(name:) message to a type reference that might refer to Dog or some subclass of Dog.
Now our code compiles, and we can call our function:

let d = dogMakerAndNamer(Dog) // d is a Dog named Fido
let d2 = dogMakerAndNamer(NoisyDog) // d2 is a NoisyDog named Fido

In a class method, self stands for the class — polymorphically. This means that, in a class method, you can send a message to self to call an initializer polymorphically. Here's an example. Let's say we want to move our instance factory method into Dog itself, as a class method. Let's call this class method makeAndName. We want this class method to create and return a named Dog of whatever class we send the makeAndName message to. If we say Dog.makeAndName(), we should get a Dog. If we say NoisyDog.makeAndName(), we should get a NoisyDog. That type is the polymorphic self class, so our makeAndName class method initializes self:

class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
    class func makeAndName() -> Dog {
        let d = self.init(name:"Fido")
        return d
    }
}
class NoisyDog : Dog {
}

It works as expected:

let d = Dog.makeAndName() // d is a Dog named Fido
let d2 = NoisyDog.makeAndName() // d2 is a NoisyDog named Fido

But there's a problem. Although d2 is in fact a NoisyDog, it is typed as a Dog. This is because our makeAndName class method is declared as returning a Dog. That isn't what we meant to say. What we want to say is that this method returns an instance of the same type as the class to which the makeAndName message was originally sent. In other words, we need a polymorphic type declaration! That type is Self (notice the capitalization). It is used as a return type in a method declaration to mean "an instance of whatever type this is at runtime." Thus:

class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
    class func makeAndName() -> Self {
        let d = self.init(name:"Fido")
        return d
    }
}
class NoisyDog : Dog {
}

Now when we call NoisyDog.makeAndName() we get a NoisyDog typed as a NoisyDog.

Self also works for instance method declarations.
Therefore, we can write an instance method version of our factory method. Here, we start with a Dog or a NoisyDog and tell it to have a puppy of the same type as itself: class Dog { var name : String required init(name:String) { self.name = name } func havePuppy(name name:String) -> Self { return self.dynamicType.init(name:name) } } class NoisyDog : Dog { } And here's some code to test it: let d = Dog(name:"Fido") let d2 = d.havePuppy(name:"Fido Junior") let nd = NoisyDog(name:"Rover") let nd2 = nd.havePuppy(name:"Rover Junior") As expected, d2 is a Dog, but nd2 is a NoisyDog typed as a NoisyDog. All this terminology can get a bit confusing, so here's a quick summary:

.dynamicType: Sent to an instance, yields the actual type of that instance, polymorphically.

.Type: Used in a declaration, the type of a type: Dog means a Dog instance is expected (or an instance of one of its subclasses), but Dog.Type means that the Dog type itself is expected (or the type of one of its subclasses).

.self: Sent to a type, yields the type itself: if Dog.Type is expected, you can pass Dog.self. (It is not illegal to send .self to an instance, but it is pointless.)

self: In instance code, this instance, polymorphically. In static/class code, this type, polymorphically; self.init(...) instantiates the type.

Self: Used as a return type in a method declaration, the polymorphic type of this instance or class at runtime.

A protocol is a way of expressing commonalities between otherwise unrelated types. For example, a Bee object and a Bird object might need to have certain features in common by virtue of the fact that both a bee and a bird can fly. Thus, it might be useful to define a Flier type. The question is: In what sense can both Bee and Bird be Fliers? One possibility, of course, is class inheritance. If Bee and Bird are both classes, there's a class hierarchy of superclasses and subclasses. So Flier could be the superclass of both Bee and Bird. The problem is that there may be other reasons why Flier can't be the superclass of both Bee and Bird. A Bee is an Insect; a Bird isn't. Yet they both have the power of flight — independently. We need a type that cuts across the class hierarchy somehow, tying remote classes together.
Moreover, what if Bee and Bird are not both classes? In Swift, that’s a very real possibility. Important and powerful objects can be structs instead of classes. But there is no struct hierarchy of superstructs and substructs! That, after all, is one of the major differences between structs and classes. Yet structs need the ability to possess and express formal commonalities every bit as much as classes do. How can a Bee struct and a Bird struct both be Fliers? Swift solves this problem through the use of protocols. Protocols are tremendously important in Swift; the Swift header defines over 70 of them! Moreover, Objective-C has protocols as well; Swift protocols correspond roughly to these, and can interchange with them. Cocoa makes heavy use of protocols. A protocol is an object type, but there are no protocol objects — you can’t instantiate a protocol. A protocol is much more lightweight than that. A protocol declaration is just a list of properties and methods. The properties have no values, and the methods have no code! The idea is that a “real” object type can formally declare that it belongs to a protocol type; this is called adopting or conforming to the protocol. An object type that adopts a protocol is signing a contract stating that it actually implements the properties and methods listed by the protocol. For example, let’s say that being a Flier consists of no more than implementing a fly method. Then a Flier protocol could specify that there must be a fly method; to do so, it lists the fly method with no function body, like this: protocol Flier { func fly() } Any type — an enum, a struct, a class, or even another protocol — can then adopt this protocol. To do so, it lists the protocol after a colon after its name in its declaration. (If the adopter is a class with a superclass, the protocol comes after a comma after the superclass specification.) Let’s say Bird is a struct. 
Then it can adopt Flier like this: struct Bird : Flier { } // compile error So far, so good. But that code won’t compile. The Bird struct has made a promise to implement the features listed in the Flier protocol. Now it must keep that promise! The fly method is the only requirement of the Flier protocol. To satisfy that requirement, I’ll just give Bird an empty fly method: protocol Flier { func fly() } struct Bird : Flier { func fly() { } } That’s all there is to it! We’ve defined a protocol, and we’ve made a struct adopt that protocol. Of course, in real life you’ll probably want to make the adopter’s implementation of the protocol’s methods do something; but the protocol says nothing about that. New in Swift 2.0, a protocol can declare a method and provide its implementation, thanks to protocol extensions, which I’ll discuss later in this chapter. Perhaps at this point you’re scratching your head over why this is a useful thing to do. We made a Bird a Flier, but so what? If we wanted a Bird to know how to fly, why didn’t we just give Bird a fly method without adopting any protocol? The answer has to do with types. Don’t forget, a protocol is a type. Our protocol, Flier, is a type. Therefore, I can use Flier wherever I would use a type — to declare the type of a variable, for example, or the type of a function parameter: func tellToFly(f:Flier) { f.fly() } Think about that code for a moment, because it embodies the entire point of protocols. A protocol is a type — so polymorphism applies. Protocols give us another way of expressing the notion of type and subtype. This means that, by the substitution principle, a Flier here could be an instance of any object type — an enum, a struct, or a class. It doesn’t matter what object type it is, as long as it adopts the Flier protocol. If it adopts the Flier protocol, then it must have a fly method, because that’s exactly what it means to adopt the Flier protocol! 
Therefore the compiler is willing to let us send the fly message to this object. A Flier is, by definition, an object that can be told to fly. The converse, however, is not true: an object with a fly method is not automatically a Flier. It isn't enough to obey the requirements of a protocol; the object type must adopt the protocol. This code won't compile: struct Bee { func fly() { } } let b = Bee() tellToFly(b) // compile error A Bee can be sent the fly message, qua Bee. But tellToFly doesn't take a Bee parameter; it takes a Flier parameter. Formally, a Bee is not a Flier. To make a Bee a Flier, simply declare formally that Bee adopts the Flier protocol. This code does compile: struct Bee : Flier { func fly() { } } let b = Bee() tellToFly(b) Enough of birds and bees; we're ready for a real-life example! As I've already said, Swift is chock full of protocols already. Let's make one of our own object types adopt one. One of the most useful Swift protocols is CustomStringConvertible. The CustomStringConvertible protocol requires that we implement a description String property. If we do that, a wonderful thing happens: when an instance of this type is used in string interpolation or printed (or examined with the po command in the console), the description property value is used automatically to represent it. Recall, for example, the Filter enum, from earlier in this chapter. I'll add a description property to it: enum Filter : String { case Albums = "Albums" case Playlists = "Playlists" case Podcasts = "Podcasts" case Books = "Audiobooks" var description : String { return self.rawValue } } But that isn't enough, in and of itself, to give Filter the power of the CustomStringConvertible protocol; to do that, we also need to adopt the CustomStringConvertible protocol formally.
There is already a colon and a type in the Filter declaration, so an adopted protocol comes after a comma: enum Filter : String, CustomStringConvertible { case Albums = "Albums" case Playlists = "Playlists" case Podcasts = "Podcasts" case Books = "Audiobooks" var description : String { return self.rawValue } } We have now made Filter formally adopt the CustomStringConvertible protocol. The CustomStringConvertible protocol requires that we implement a description String property; we do implement a description String property, so our code compiles. Now we can hand a Filter to print, or use it in string interpolation, and its description will appear automatically: let type = Filter.Albums print(type) // Albums print("It is \(type)") // It is Albums Behold the power of protocols. You can give any object type the power of string conversion in exactly the same way. Note that a type can adopt more than one protocol! For example, the built-in Double type adopts CustomStringConvertible, Hashable, Comparable, and other built-in protocols. To declare adoption of multiple protocols, list each one after the first protocol in the declaration, separated by commas. For example: struct MyType : CustomStringConvertible, Hashable, Comparable { // ... } (Of course, that code won't compile unless I also declare the required methods in MyType, so that MyType really does adopt those protocols.) A protocol is a type, and an adopter of a protocol is its subtype. Polymorphism applies. Therefore, the operators for mediating between an object's declared type and its real type work when the object is declared as a protocol type. For example, given a protocol Flier that is adopted by both Bird and Bee, we can use the is operator to test whether a particular Flier is in fact a Bird: func isBird(f:Flier) -> Bool { return f is Bird } Similarly, as! and as? can be used to cast an object declared as a protocol type down to its actual type.
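Here are those operators at work together in a self-contained sketch; it reuses the Flier and Bird types from this discussion, and assumes a Bee struct that also adopts Flier:

```swift
protocol Flier {
    func fly()
}
struct Bird : Flier {
    func fly() { }
    func getWorm() { }
}
struct Bee : Flier {
    func fly() { }
}

// The parameter is typed as the protocol; the real type is tested at runtime.
func describeFlier(f:Flier) -> String {
    if f is Bird {
        (f as! Bird).getWorm() // forced cast is safe here: we just tested with is
        return "a bird"
    }
    return "some other flier"
}

let fliers : [Flier] = [Bird(), Bee()]
let descriptions = fliers.map(describeFlier)
// descriptions is ["a bird", "some other flier"]
```

The conditional form, f as? Bird, yields an Optional wrapping a Bird, which is the safer choice when no is test has been made.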
This is important to be able to do, because the adopting object will typically be able to receive messages that the protocol can't receive. For example, let's say that a Bird can get a worm: struct Bird : Flier { func fly() { } func getWorm() { } } A Bird can fly qua Flier, but it can getWorm only qua Bird. Thus, you can't tell just any old Flier to get a worm: func tellGetWorm(f:Flier) { f.getWorm() // compile error } But if this Flier is a Bird, clearly it can get a worm. That is exactly what casting is all about: func tellGetWorm(f:Flier) { (f as? Bird)?.getWorm() } Protocol declaration can take place only at the top level of a file. To declare a protocol, use the keyword protocol followed by the name of the protocol, which, being an object type, should start with a capital letter. Then come curly braces which may contain the following:

- A property declaration in a protocol consists of var (not let), the property name, a colon, its type, and curly braces containing the word get or the words get set. In the former case, the adopter's implementation of this property can be writable, while in the latter case, it must be: the adopter may not implement a get set property as a read-only computed property or as a constant (let) stored property. To declare a static/class property, precede it with the keyword static. A class adopter is free to implement this as a class property.

- A method declaration in a protocol is a function declaration without a function body — that is, it has no curly braces and thus it has no code. Any object function type is legal, including init and subscript. (The syntax for declaring a subscript in a protocol is the same as the syntax for declaring a subscript in an object type, except that there will be no function bodies, so the curly braces, like those of a property declaration in a protocol, will contain get or get set.) To declare a static/class method, precede it with the keyword static. A class adopter is free to implement this as a class method. If a method, as implemented by an enum or struct, might need to be declared mutating, the protocol must specify the mutating designation; the adopter cannot add mutating if the protocol lacks it. However, the adopter may omit mutating if the protocol has it.

- A protocol can introduce a local synonym for a type that it mentions in its declarations by declaring a type alias. For example, typealias Time = Double allows the Time type to be referred to inside the protocol's curly braces; elsewhere (such as in an adopting object type), the Time type doesn't exist, but the Double type is a match for it. There are other ways to use a type alias in a protocol, which I'll discuss later.

A protocol can itself adopt one or more protocols; the syntax is just as you would expect — a colon after the protocol's name in the declaration, followed by a comma-separated list of the protocols it adopts. In effect, this gives you a way to create an entire secondary hierarchy of types! The Swift headers make heavy use of this. A protocol that adopts another protocol may repeat the contents of the adopted protocol's curly braces, for clarity; but it doesn't have to, as this repetition is implicit. An object type that adopts such a protocol must satisfy the requirements of this protocol and all protocols that the protocol adopts.

If the only purpose of a protocol would be to combine other protocols by adopting all of them, without adding any new requirements, and if this combination is used in just one place in your code, you can avoid formally declaring the protocol in the first place by creating the combining protocol on the fly. To do so, use a type name protocol<...,...>, where the contents of the angle brackets is a comma-separated list of protocols.

In Objective-C, a protocol member can be declared optional, meaning that this member doesn't have to be implemented by the adopter, but it may be.
For compatibility with Objective-C, Swift allows optional protocol members, but only in a protocol explicitly bridged to Objective-C by preceding its declaration with the @objc attribute. In such a protocol, an optional member — meaning a method or property — is declared by preceding its declaration with the keyword optional: @objc protocol Flier { optional var song : String {get} optional func sing() } Only a class can adopt such a protocol, and this feature will work only if the class is an NSObject subclass, or the optional member is marked with the @objc attribute: class Bird : Flier { @objc func sing() { print("tweet") } } An optional member is not guaranteed to be implemented by the adopter, so Swift doesn’t know whether it’s safe to send a Flier either the song message or the sing message. In the case of an optional property like song, Swift solves the problem by wrapping its value in an Optional. If the Flier adopter doesn’t implement the property, the result is nil and no harm done: let f : Flier = Bird() let s = f.song // s is an Optional wrapping a String This is one of those rare situations where you can wind up with a double-wrapped Optional. For example, if the value of the optional property song were a String?, then fetching its value from a Flier would yield a String??. An optional property can be declared {get set} by its protocol, but there is no legal syntax for setting such a property in an object of that protocol type. For example, if f is a Flier and song is declared {get set}, you can’t set f.song. I regard this as a bug in the language. In the case of an optional method like sing, things are more elaborate. If the method is not implemented, we must not be permitted to call it in the first place. To handle this situation, the method itself is automatically typed as an Optional version of its declared type. To send the sing message to a Flier, therefore, you must unwrap it. 
The safe approach is to unwrap it optionally, with a question mark: let f : Flier = Bird() f.sing?() That code compiles — and it also runs safely. The effect is to send the sing message to f only if this Flier adopter implements sing. If this Flier adopter doesn’t implement sing, nothing happens. You could have force-unwrapped the call — f.sing!() — but then your app would crash if the adopter doesn’t implement sing. If an optional method returns a value, that value is wrapped in an Optional as well. For example: @objc protocol Flier { optional var song : String {get} optional func sing() -> String } If we now call sing?() on a Flier, the result is an Optional wrapping a String: let f : Flier = Bird() let s = f.sing?() // s is an Optional wrapping a String If we force-unwrap the call — sing!() — the result is either a String (if the adopter implements sing) or a crash (if it doesn’t). Many Cocoa protocols have optional members. For example, your iOS app will have an app delegate class that adopts the UIApplicationDelegate protocol; this protocol has many methods, all of them optional. That fact, however, will have no effect on how you implement those methods; you don’t need to mark them in any special way. Your app delegate class is already a subclass of NSObject, so this feature just works. Either you implement a method or you don’t. Similarly, you will often make your UIViewController subclass adopt a Cocoa delegate protocol with optional members; again, this is an NSObject subclass, so you’ll just implement the methods you want to implement, with no special marking. (I’ll talk more about Cocoa protocols in Chapter 10, and about delegate protocols in Chapter 11.) A protocol declared with the keyword class after the colon after its name is a class protocol, meaning that it can be adopted only by class object types: protocol SecondViewControllerDelegate : class { func acceptData(data:AnyObject!) 
} (There is no need to say class if this protocol is already marked @objc; the @objc attribute implies that this is also a class protocol.) A typical reason for declaring a class protocol is to take advantage of special memory management features that apply only to classes. I haven’t discussed memory management yet, but I’ll continue the example anyway (and I’ll repeat it when I do talk about memory management, in Chapter 5): class SecondViewController : UIViewController { weak var delegate : SecondViewControllerDelegate? // ... } The keyword weak marks the delegate property as having special memory management. Only a class instance can participate in this kind of special memory management. The delegate property is typed as a protocol, and a protocol might be adopted by a struct or an enum type. So to satisfy the compiler that this object will in fact be a class instance, and not a struct or enum instance, the protocol is declared as a class protocol. Suppose that a protocol declares an initializer. And suppose that a class adopts this protocol. By the terms of this protocol, this class and any subclass it may ever have must implement this initializer. Therefore, the class must not only implement the initializer, but it must also mark it as required. An initializer declared in a protocol is thus implicitly required, and the class is forced to make that requirement explicit. Consider this simple example, which won’t compile: protocol Flier { init() } class Bird : Flier { init() {} // compile error } That code generates an elaborate but perfectly informative compile error message: “Initializer requirement init() can only be satisfied by a required initializer in non-final class Bird.” To compile our code, we must designate our initializer as required: protocol Flier { init() } class Bird : Flier { required init() {} } The alternative, as the compile error message informs us, would be to mark the Bird class as final. 
This would mean that it cannot have any subclasses — thus guaranteeing that the problem will never arise in the first place. If Bird were marked final, there would be no need to mark its init as required. In the above code, Bird is not marked as final, and its init is marked as required. This, as I’ve already explained, means in turn that any subclass of Bird that implements any designated initializers — and thus loses initializer inheritance — must implement the required initializer and mark it required as well. That fact is responsible for a strange and annoying feature of real-life iOS programming with Swift that I mentioned earlier in this chapter. Let’s say you subclass the built-in Cocoa class UIViewController — something that you are extremely likely to do. And let’s say you give your subclass an initializer — something that you are also extremely likely to do: class ViewController: UIViewController { init() { super.init(nibName: "ViewController", bundle: nil) } } That code won’t compile. The compile error says: “ required initializer init(coder:) must be provided by subclass of UIViewController.” We are now in a position to understand what’s going on. It turns out that UIViewController adopts a protocol, NSCoding. And this protocol requires an initializer init(coder:). None of that is your doing; UIViewController and NSCoding are declared by Cocoa, not by you. But that doesn’t matter! This is the same situation I was just describing. Your UIViewController subclass must either inherit init(coder:) or must explicitly implement it and mark it required. Well, your subclass has implemented a designated initializer of its own — thus cutting off initializer inheritance. Therefore it must implement init(coder:) and mark it required. But that makes no sense if you are not expecting init(coder:) ever to be called on your UIViewController subclass. You are being forced to write an initializer for which you can provide no meaningful functionality! 
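The mechanics being described can be sketched with invented types; Named, Animal, and Pet here are hypothetical stand-ins for NSCoding, UIViewController, and your subclass:

```swift
protocol Named {
    init(name:String) // implicitly required of any class adopter
}
class Animal : Named {
    var name : String
    required init(name:String) { self.name = name }
}
class Pet : Animal {
    var owner : String
    // Declaring our own designated initializer cuts off
    // initializer inheritance...
    init(name:String, owner:String) {
        self.owner = owner
        super.init(name:name)
    }
    // ...so we are forced to restate the required initializer,
    // even if we have no meaningful functionality for it.
    required init(name:String) {
        self.owner = "unknown"
        super.init(name:name)
    }
}

let p = Pet(name:"Fido", owner:"Matt")
let q = Pet(name:"Rover") // the required initializer is still callable
```

Delete either initializer from Pet and the code stops compiling, which is exactly the bind the UIViewController subclass is in.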
Fortunately, Xcode's Fix-It feature will offer to write the initializer for you, like this: required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") } That code satisfies the compiler. (I'll explain in Chapter 5 why it's a legal initializer even though it doesn't fulfill an initializer's contract.) It also deliberately crashes if it is ever called. If you do have functionality for this initializer, you will delete the fatalError line and insert your own functionality in its place. A minimum meaningful implementation would be super.init(coder:aDecoder), but of course if your class has properties that need initialization, you will need to initialize them first. Not only UIViewController but lots of built-in Cocoa classes adopt NSCoding. You will encounter this problem if you subclass any of those classes and implement your own initializer. It's just something you'll have to get used to. One of the wonderful things about Swift is that so many of its features, rather than being built-in and accomplished by magic, are implemented in Swift and are exposed to view in the Swift header. Literals are a case in point. The reason you can say 5 to make an Int whose value is 5, instead of formally initializing Int by saying Int(5), is not because of magic (or at least, not entirely because of magic). It's because Int adopts a protocol, IntegerLiteralConvertible. Not only Int literals, but all literals work this way; the Swift header declares a literal convertible protocol for each kind of literal. Your own object type can adopt a literal convertible protocol as well. This means that a literal can appear where an instance of your object type is expected!
For example, here we declare a Nest type that contains some number of eggs (its eggCount): struct Nest : IntegerLiteralConvertible { var eggCount : Int = 0 init() {} init(integerLiteral val: Int) { self.eggCount = val } } Because Nest adopts IntegerLiteralConvertible, we can pass an Int where a Nest is expected, and our init(integerLiteral:) will be called automatically, causing a new Nest object with the specified eggCount to come into existence at that moment: func reportEggs(nest:Nest) { print("this nest contains \(nest.eggCount) eggs") } reportEggs(4) // this nest contains 4 eggs A generic is a sort of placeholder for a type, into which an actual type will be slotted later. This is useful because of Swift’s strict typing. Without sacrificing that strict typing, there are situations where you can’t or don’t want to specify too precisely in a certain region of your code what the exact type of something is going to be. It is important to understand that generics do not in any way relax Swift’s strict typing. In particular, they do not postpone resolution of a type until runtime. When you use a generic, your code will still specify its real type; that real type is known with complete specificity at compile time! The particular region of your code where the type is expected uses a generic so that it doesn’t have to specify the type fully, but at the point where that code is used by other code, the type is specified. The placeholder is generic, but it is resolved to an actual specific type whenever the generic is used. An Optional is a good example. Any type of value can be wrapped up in an Optional. Yet you are never in any doubt as to what type is wrapped up in a particular Optional. How can this be? It’s because Optional is a generic type. Here’s how an Optional works. I have already said that an Optional is an enum, with two cases: .None and .Some. If an Optional’s case is .Some, it has an associated value — the value that is wrapped by this Optional. 
But what is the type of that associated value? On the one hand, one wants to say that it can be any type; that, after all, is why anything can be wrapped up in an Optional. On the other hand, any given Optional that wraps a value wraps a value of some specific type. When you unwrap an Optional, that unwrapped value needs to be typed as what it is, so that it can be sent messages appropriate for that type. The solution to this sort of problem is a Swift generic. The declaration for the Optional enum in the Swift header starts like this: enum Optional<Wrapped> { // ... } That syntax means: “In the course of this declaration, I’m going to be using a made-up type — a type placeholder — that I call Wrapped. It’s a real and individual type, but I’m not going to say more about it right now. All you need to know is that whenever I say Wrapped, I mean this one particular type. When an actual Optional is created, it will be perfectly clear what type Wrapped stands for, and then, wherever I say Wrapped, you should substitute the type that it stands for.” Let’s look at more of the Optional declaration: enum Optional<Wrapped> { case None case Some(Wrapped) init(_ some: Wrapped) // ... } Having declared that Wrapped is a placeholder, we proceed to use it. There’s a case .None. There’s also a case .Some, which has an associated value — of type Wrapped. We also have an initializer, which takes a parameter — of type Wrapped. Thus, the type with which we are initialized — whatever type that may be — is type Wrapped, and thus is the type of value that is associated with the .Some case. It is this identity between the type of the initializer parameter and the type of the .Some associated value that allows the latter to be resolved. In the declaration of the Optional enum, Wrapped is a placeholder. But in real life, when an actual Optional is created, it will be initialized with an actual value of some definite type. Usually, we’ll use the question-mark syntactic sugar (type String?) 
and the initializer will be called for us behind the scenes, but let's call the initializer explicitly for the sake of clarity: let s = Optional("howdy") That code resolves the type of Wrapped for this particular Optional instance! Obviously, "howdy" is a String. As a result, the compiler knows that for this particular Optional<Wrapped>, Wrapped is String. Under the hood, wherever Wrapped appears in the declaration of the Optional enum, the compiler substitutes String. Thus, the declaration for the particular Optional referred to by the variable s looks, in the compiler's mind, like this: enum Optional<String> { case None case Some(String) init(_ some: String) // ... } That is the pseudocode declaration of an Optional whose Wrapped placeholder has been replaced everywhere with the String type. We can summarize this by saying that s is an Optional<String>. In fact, that is legal syntax! We can create the same Optional like this: let s : Optional<String> = "howdy" A great many of the built-in Swift types involve generics. In fact, this feature of the language seems to be designed with the Swift types in mind; generics exist exactly so that the Swift types can do what they need to do. Here's a list of the places where generics, in one form or another, can be declared in Swift:

- Self in a protocol: Use of the keyword Self (note the capitalization) turns the protocol into a generic. Self is a placeholder meaning the type of the adopter. For example, here's a Flier protocol that declares a method that takes a Self parameter: protocol Flier { func flockTogetherWith(f:Self) } That means that if the Bird object type were to adopt the Flier protocol, its implementation of flockTogetherWith would need to declare its f parameter as a Bird.

- Associated type in a protocol: A protocol can declare a type alias without defining what the type alias stands for; that is, the typealias statement doesn't include an equal sign.
This turns the protocol into a generic; the alias name, called an associated type, is a placeholder. For example: protocol Flier { typealias Other func flockTogetherWith(f:Other) func mateWith(f:Other) } An adopter will declare some particular type where the generic uses the type alias name, thus resolving the placeholder. If the Bird struct adopts the Flier protocol and declares the f parameter of flockTogetherWith as a Bird, that declaration resolves Other to Bird for this particular adopter — and now Bird must declare the f parameter for mateWith as a Bird as well: struct Bird : Flier { func flockTogetherWith(f:Bird) {} func mateWith(f:Bird) {} } This form of generic protocol is ultimately the same as the previous form; where I've written f:Other, Swift understands this to mean f:Self.Other, and in fact it is legal (and possibly clearer) to write that.

- Generic functions: A function declaration can use a generic placeholder type for any of its parameters, for its return type, and within its body. Declare the placeholder name in angle brackets after the function name: func takeAndReturnSameThing<T> (t:T) -> T { return t } The caller will use some particular type where the placeholder appears in the function declaration, thus resolving the placeholder: let thing = takeAndReturnSameThing("howdy") Here, the type of the argument "howdy" used in the call resolves T to String; therefore this call to takeAndReturnSameThing will also return a String, and the variable capturing the result, thing, is inferred to be a String as well.

- Generic object types: An object type declaration can use a generic placeholder type anywhere within its curly braces.
Declare the placeholder name in angle brackets after the object type name: struct HolderOfTwoSameThings<T> { var firstThing : T var secondThing : T init(thingOne:T, thingTwo:T) { self.firstThing = thingOne self.secondThing = thingTwo } } A user of this object type will use some particular type where the placeholder appears in the object type declaration, thus resolving the placeholder: let holder = HolderOfTwoSameThings(thingOne:"howdy", thingTwo:"getLost") Here, the type of the thingOne argument, "howdy", used in the initializer call, resolves T to String; therefore thingTwo must also be a String, and the properties firstThing and secondThing are Strings as well. For generic functions and object types, which use the angle bracket syntax, the angle brackets may contain multiple placeholder names, separated by comma. For example: func flockTwoTogether<T, U>(f1:T, _ f2:U) {} The two parameters of flockTwoTogether can now be resolved to two different types (though they do not have to be different). All our examples so far have permitted any type to be substituted for the placeholder. Alternatively, you can limit the types that are eligible to be used for resolving a particular placeholder. This is called a type constraint. The simplest form of type constraint is to put a colon and a type name after the placeholder’s name when it first appears. The type name after the colon can be a class name or a protocol name. For example, let’s return to our Flier and its flockTogetherWith function. Suppose we want to say that the parameter of flockTogetherWith should be declared by the adopter as a type that adopts Flier. You would not do that by declaring the type of that parameter as Flier in the protocol: protocol Flier { func flockTogetherWith(f:Flier) } That code says: You can’t adopt this protocol unless you declare a function flockTogetherWith whose f parameter is declared as Flier: struct Bird : Flier { func flockTogetherWith(f:Flier) {} } That isn’t what we want to say! 
We want to say that Bird should be able to adopt Flier while declaring f as being of some Flier adopter type, such as Bird. The way to say that is to use a placeholder constrained as a Flier. For example, we could do it like this: protocol Flier { typealias Other : Flier func flockTogetherWith(f:Other) } Unfortunately, that’s illegal: a protocol can’t use itself as a type constraint. The workaround is to declare an extra protocol that Flier itself will adopt, and constrain Other to that protocol: protocol Superflier {} protocol Flier : Superflier { typealias Other : Superflier func flockTogetherWith(f:Other) } Now Bird can be a legal adopter like this: struct Bird : Flier { func flockTogetherWith(f:Bird) {} } In a generic function or a generic object type, the type constraint appears in the angle brackets. For example: func flockTwoTogether<T:Flier>(f1:T, _ f2:T) {} Now you can’t call flockTwoTogether with two String parameters, because a String is not a Flier. Moreover, if Bird and Insect both adopt Flier, flockTwoTogether can be called with two Bird parameters or with two Insect parameters — but not with a Bird and an Insect, because T is just one placeholder, signifying one Flier adopter type. A type constraint on a placeholder is often useful as a way of assuring the compiler that some message can be sent to an instance of the placeholder type. For example, let’s say we want to implement a function myMin that returns the smallest from a list of the same type. Here’s a promising implementation as a generic function, but there’s one problem — it doesn’t compile: func myMin<T>(things:T...) -> T { var minimum = things[0] for ix in 1..<things.count { if things[ix] < minimum { // compile error minimum = things[ix] } } return minimum } The problem is the comparison things[ix] < minimum. How does the compiler know that the type T, the type of things[ix] and minimum, will be resolved to a type that can in fact be compared using the less-than operator in this way? 
It doesn’t, and that’s exactly why it rejects that code. The solution is to promise the compiler that the resolved type of T will in fact work with the less-than operator. The way to do that, it turns out, is to constrain T to Swift’s built-in Comparable protocol; adoption of the Comparable protocol exactly guarantees that the adopter does work with the less-than operator: func myMin<T:Comparable>(things:T...) -> T { Now myMin compiles, because it cannot be called except by resolving T to an object type that adopts Comparable and hence can be compared with the less-than operator. Naturally, built-in object types that you think should be comparable, such as Int, Double, String, and Character, do in fact adopt the Comparable protocol! If you look in the Swift headers, you’ll find that the built-in min global function is declared in just this way, and for just this reason. A generic protocol (a protocol whose declaration mentions Self or has an associated type) can be used as a type only in a generic, as a type constraint. This won’t compile: protocol Flier { typealias Other func fly() } func flockTwoTogether(f1:Flier, _ f2:Flier) { // compile error f1.fly() f2.fly() } To use a generic Flier protocol as a type, we must write a generic and use Flier as a type constraint. For example: protocol Flier { typealias Other func fly() } func flockTwoTogether<T1:Flier, T2:Flier>(f1:T1, _ f2:T2) { f1.fly() f2.fly() } In the examples so far, the user of a generic resolves the placeholder’s type through inference. But there’s another way to perform resolution: the user can resolve the type manually. This is called explicit specialization. In some situations, explicit specialization is mandatory — namely, if the placeholder type cannot be resolved through inference. 
There are two forms of explicit specialization: The adopter of a protocol can resolve the protocol’s associated type manually through a typealias declaration using the protocol’s alias name with an explicit type assignment. For example: protocol Flier { typealias Other } struct Bird : Flier { typealias Other = String } The user of a generic object type can resolve the object’s placeholder type(s) manually using the same angle bracket syntax used to declare the generic in the first place, with actual type names in the angle brackets. For example: class Dog<T> { var name : T? } let d = Dog<String>() (That explains the Optional<String> type used earlier in this chapter and in Chapter 3.) You cannot explicitly specialize a generic function. You can, however, declare a generic type with a nongeneric function that uses the generic type’s placeholder; explicit specialization of the generic type resolves the placeholder, and thus resolves the function: protocol Flier { init() } struct Bird : Flier { init() {} } struct FlierMaker<T:Flier> { static func makeFlier() -> T { return T() } } let f = FlierMaker<Bird>.makeFlier() // returns a Bird When a class is generic, you can subclass it, provided you resolve the generic. (This is new in Swift 2.0.) You can do this either through a matching generic subclass or by resolving the superclass generic explicitly. For example, here’s a generic Dog: class Dog<T> { var name : T? } You can subclass it as a generic whose placeholder matches that of the superclass: class NoisyDog<T> : Dog<T> {} That’s legal because the resolution of the NoisyDog placeholder T will resolve the Dog placeholder T. The alternative is to subclass an explicitly specialized Dog: class NoisyDog : Dog<String> {} When a generic placeholder is constrained to a generic protocol with an associated type, the associated type name can be chained with dot-notation to the placeholder name to specify that type. Here’s an example. 
Imagine that in a game program, soldiers and archers are enemies of one another. I’ll express this by subsuming a Soldier struct and an Archer struct under a Fighter protocol that has an Enemy associated type, which is itself constrained to be a Fighter (again, I’ll need an extra protocol that Fighter adopts): protocol Superfighter {} protocol Fighter : Superfighter { typealias Enemy : Superfighter } I’ll resolve that associated type manually for both structs: struct Soldier : Fighter { typealias Enemy = Archer } struct Archer : Fighter { typealias Enemy = Soldier } Now I’ll create a generic struct to express the opposing camps of these fighters: struct Camp<T:Fighter> { } Now suppose that a camp may contain a spy from the opposing camp. What is the type of that spy? Well, if this is a Soldier camp, it’s an Archer; and if it’s an Archer camp, it’s a Soldier. More generally, since T is a Fighter, it’s the type of the Enemy of this adopter of Fighter. I can express that neatly by chaining the associated type name to the placeholder name: struct Camp<T:Fighter> { var spy : T.Enemy? } The result is that if, for a particular Camp, T is resolved to Soldier, T.Enemy means Archer — and vice versa. We have created a correct and inviolable rule for the type that a Camp’s spy must be. This won’t compile: var c = Camp<Soldier>() c.spy = Soldier() // compile error We’ve tried to assign an object of the wrong type to this Camp’s spy property. But this does compile: var c = Camp<Soldier>() c.spy = Archer() Longer chains of associated type names are possible — in particular, when a generic protocol has an associated type which is itself constrained to a generic protocol with an associated type. For example, let’s give each type of Fighter a characteristic weapon: a soldier has a sword, while an archer has a bow.
I’ll make a Sword struct and a Bow struct, and I’ll unite them under a Wieldable protocol: protocol Wieldable { } struct Sword : Wieldable { } struct Bow : Wieldable { } I’ll add a Weapon associated type to Fighter, which is constrained to be a Wieldable, and once again I’ll resolve it manually for each type of Fighter: protocol Superfighter { typealias Weapon : Wieldable } protocol Fighter : Superfighter { typealias Enemy : Superfighter } struct Soldier : Fighter { typealias Weapon = Sword typealias Enemy = Archer } struct Archer : Fighter { typealias Weapon = Bow typealias Enemy = Soldier } Now let’s say that every Fighter has the ability to steal his enemy’s weapon. I’ll give the Fighter generic protocol a steal(weapon:from:) method. How can the Fighter generic protocol express the parameter types in a way that causes its adopter to declare this method with the proper types? The from: parameter type is this Fighter’s Enemy. We already know how to express that: it’s the placeholder plus dot-notation with the associated type name. Here, the placeholder is the adopter of this protocol — namely, Self. So the from: parameter type is Self.Enemy. And what about the weapon: parameter type? That’s the Weapon of that Enemy! So the weapon: parameter type is Self.Enemy.Weapon: protocol Fighter : Superfighter { typealias Enemy : Superfighter func steal(weapon:Self.Enemy.Weapon, from:Self.Enemy) } (That code will compile, and will mean the same thing, if we omit Self. But Self would still be the implicit start of the chain, and I think it makes the meaning of the code clearer.) 
The result is that the following declarations for Soldier and Archer correctly adopt the Fighter protocol, and the compiler approves: struct Soldier : Fighter { typealias Weapon = Sword typealias Enemy = Archer func steal(weapon:Bow, from:Archer) { } } struct Archer : Fighter { typealias Weapon = Bow typealias Enemy = Soldier func steal (weapon:Sword, from:Soldier) { } } The example is artificial (though, I hope, sufficiently vivid); but the concept is not. The Swift headers make heavy use of associated type chains; the associated type chain Generator.Element is particularly common, because it expresses the type of the element of a sequence. The SequenceType generic protocol has an associated type Generator, which is constrained to be an adopter of the generic GeneratorType protocol, which in turn has an associated type Element. A simple type constraint limits the types eligible for resolving a placeholder to a single type. Sometimes, you want to limit the eligible resolving types still further: you want additional constraints. In a generic protocol, the colon in a type alias constraint is effectively the same as the colon that appears in a type declaration. Thus, it can be followed by multiple protocols, or by a superclass and multiple protocols: class Dog { } class FlyingDog : Dog, Flier { } protocol Flier { } protocol Walker { } protocol Generic { typealias T : Flier, Walker typealias U : Dog, Flier } In the Generic protocol, the associated type T can be resolved only as a type that adopts the Flier protocol and the Walker protocol, and the associated type U can be resolved only as a type that is a Dog (or a subclass of Dog) and that adopts the Flier protocol. 
In the angle brackets of a generic function or object type, that syntax is illegal; instead, you can append a where clause, consisting of one or more comma-separated additional constraints on a declared placeholder: func flyAndWalk<T where T:Flier, T:Walker> (f:T) {} func flyAndWalk2<T where T:Flier, T:Dog> (f:T) {} A where clause can also impose additional constraints on the associated type of a generic protocol that already constrains a placeholder, using an associated type chain (described in the preceding section). This pseudocode shows what I mean; I’ve omitted the content of the where clause, to focus on what the where clause will be constraining: protocol Flier { typealias Other } func flockTogether<T:Flier where T.Other /*???*/ > (f:T) {} As you can see, the placeholder T is already constrained to be a Flier. Flier is itself a generic protocol, with an associated type Other. Thus, whatever type resolves T will resolve Other. The where clause constrains further the types eligible to resolve T, by restricting the types eligible to resolve Other. So what sort of restriction are we allowed to impose on our associated type chain? One possibility is the same sort of restriction as in the preceding example — a colon followed by a protocol that it must adopt, or by a class that it must descend from. Here’s an example with a protocol: protocol Flier { typealias Other } struct Bird : Flier { typealias Other = String } struct Insect : Flier { typealias Other = Bird } func flockTogether<T:Flier where T.Other:Equatable> (f:T) {} Both Bird and Insect adopt Flier, but they are not both eligible as the argument in a call to the flockTogether function. The flockTogether function can be called with a Bird argument, because a Bird’s Other associated type is resolved to String, which adopts the built-in Equatable protocol. 
But flockTogether can’t be called with an Insect argument, because an Insect’s Other associated type is resolved to Bird, which doesn’t adopt the Equatable protocol: flockTogether(Bird()) // okay flockTogether(Insect()) // compile error Here’s an example with a class: protocol Flier { typealias Other } class Dog { } class NoisyDog : Dog { } struct Pig : Flier { typealias Other = NoisyDog // or Dog } func flockTogether<T:Flier where T.Other:Dog> (f:T) {} The flockTogether function can be called with a Pig argument, because Pig adopts Flier and resolves Other to a Dog or a subclass of Dog: flockTogether(Pig()) // okay Instead of a colon, we can use an equality operator == followed by a type. The type at the end of the associated type chain must then be this exact type — not merely an adopter or subclass. For example: protocol Flier { typealias Other } protocol Walker { } struct Kiwi : Walker { } struct Bird : Flier { typealias Other = Kiwi } struct Insect : Flier { typealias Other = Walker } func flockTogether<T:Flier where T.Other == Walker> (f:T) {} The flockTogether function can be called with an Insect argument, because Insect adopts Flier and resolves Other to Walker. But it can’t be called with a Bird argument. Bird adopts Flier, and it resolves Other to an adopter of Walker, namely Kiwi — but that isn’t good enough to satisfy the == restriction. The same sort of thing would be true if we had said == Dog in the previous example. A Pig argument would no longer be acceptable if Pig resolves Other to NoisyDog; Pig must resolve Other to Dog itself in order to be acceptable as an argument. The type on the right side of the == operator can itself be an associated type chain. The resolved types at the ends of the two chains must then be identical. 
For example: protocol Flier { typealias Other } struct Bird : Flier { typealias Other = String } struct Insect : Flier { typealias Other = Int } func flockTwoTogether<T:Flier, U:Flier where T.Other == U.Other> (f1:T, _ f2:U) {} The flockTwoTogether function can be called with a Bird and a Bird, and it can be called with an Insect and an Insect, but it can’t be called with an Insect and a Bird, because they don’t resolve the Other associated type to the same type. The Swift header makes extensive use of where clauses with an == operator, especially as a way of restricting a sequence type. For example, the String appendContentsOf method is declared twice, like this: mutating func appendContentsOf(other: String) mutating func appendContentsOf<S : SequenceType where S.Generator.Element == Character>(newElements: S) I mentioned in Chapter 3 that appendContentsOf can concatenate a String to a String. But that’s not the only kind of thing that appendContentsOf can concatenate to a String! A character sequence is legal too: var s = "hello" s.appendContentsOf(" world".characters) // "hello world" And so is an array of Character: s.appendContentsOf(["!" as Character]) Those are both sequences of characters — and the generic in the second appendContentsOf method declaration is how you specify that. It’s a sequence, because it’s a type that adopts the SequenceType protocol. But it’s not just any old sequence; its Generator.Element associated type chain must be resolved to Character. The Generator.Element chain, as I mentioned earlier, is Swift’s way of expressing the notion of a sequence’s element type. The Array struct has an appendContentsOf method too, but it’s declared a little differently: mutating func appendContentsOf<S : SequenceType where S.Generator.Element == Element>(newElements: S) A sequence must be of just one type. 
If a sequence consists of String elements, you can add more elements to it, but only String elements; you can’t add a sequence of Int elements to a sequence of String elements. An array is a sequence; it is a generic whose Element placeholder is the type of its elements. So the Array struct uses the == operator in its appendContentsOf method declaration to enforce this rule: the element type of the argument sequence must be the same as the element type of the existing array. An extension is a way of injecting your own code into an object type that has already been declared elsewhere; you are extending an existing object type. You can extend your own object types; you can also extend one of Swift’s object types or one of Cocoa’s object types, in which case you are adding functionality to a type that doesn’t belong to you! Extension declaration can take place only at the top level of a file. To declare an extension, put the keyword extension followed by the name of an existing object type, then optionally a colon plus the names of any protocols you want to add to the list of those adopted by this type, and finally curly braces containing the usual things that go inside an object type declaration — with the following restrictions: an extension can’t override an existing member (though it can overload an existing method); an extension can’t declare a stored property (though it can declare a computed property); and an extension of a class can’t declare a designated initializer or a deinitializer (though it can declare a convenience initializer). In my real programming life, I sometimes extend a built-in Swift or Cocoa type just to encapsulate some missing functionality by expressing it as a property or method. Here are some examples from actual apps. In a card game, I need to shuffle the deck, which is stored in an array.
I extend Swift’s built-in Array type to give it a shuffle method: extension Array { mutating func shuffle () { for i in (0..<self.count).reverse() { let ix1 = i let ix2 = Int(arc4random_uniform(UInt32(i+1))) (self[ix1], self[ix2]) = (self[ix2], self[ix1]) } } } Cocoa’s Core Graphics framework has many useful functions associated with the CGRect struct, and Swift already extends CGRect to add some helpful properties and methods; but there’s no shortcut for getting the center point (a CGPoint) of a CGRect, something that in practice one very often needs. I extend CGRect to give it a center property: extension CGRect { var center : CGPoint { return CGPointMake(self.midX, self.midY) } } An extension can declare a static or class method; since an object type is usually globally available, this can often be an excellent way to slot a global function into an appropriate namespace. For example, in one of my apps, I find myself frequently using a certain color (a UIColor). Instead of creating that color repeatedly, it makes sense to encapsulate the instructions for generating it in a global function. But instead of making that function completely global, I make it — appropriately enough — a class method of UIColor: extension UIColor { class func myGoldenColor() -> UIColor { return self.init(red:1.000, green:0.894, blue:0.541, alpha:0.900) } } Now I can use that color throughout my code simply by saying UIColor.myGoldenColor(), completely parallel to built-in class methods such as UIColor.redColor(). Another good use of an extension is to make built-in Cocoa classes work with your private data types. 
For example, in my Zotz app, I’ve defined an enum whose raw values are the key strings to be used when archiving or unarchiving a property of a Card: enum Archive : String { case Color = "itsColor" case Number = "itsNumber" case Shape = "itsShape" case Fill = "itsFill" } The only problem is that in order to use this enum when archiving, I have to take its rawValue each time: coder.encodeObject(s1, forKey:Archive.Color.rawValue) coder.encodeObject(s2, forKey:Archive.Number.rawValue) coder.encodeObject(s3, forKey:Archive.Shape.rawValue) coder.encodeObject(s4, forKey:Archive.Fill.rawValue) That’s just ugly. An elegant fix (suggested in a WWDC 2015 video) is to teach NSCoder, the class of coder, what to do when the forKey: argument is an Archive instead of a String. In an extension, I overload the encodeObject:forKey: method: extension NSCoder { func encodeObject(objv: AnyObject?, forKey key: Archive) { self.encodeObject(objv, forKey:key.rawValue) } } In effect, I’ve moved the rawValue call out of my code and into NSCoder’s code. Now I can archive a Card without saying rawValue: coder.encodeObject(s1, forKey:Archive.Color) coder.encodeObject(s2, forKey:Archive.Number) coder.encodeObject(s3, forKey:Archive.Shape) coder.encodeObject(s4, forKey:Archive.Fill) Extensions on one’s own object types can help to organize one’s code. A frequently used convention is to add an extension for each protocol one’s object type needs to adopt, like this: class ViewController: UIViewController { // ... UIViewController method overrides go here ... } extension ViewController : UIPopoverPresentationControllerDelegate { // ... UIPopoverPresentationControllerDelegate methods go here ... } extension ViewController : UIToolbarDelegate { // ... UIToolbarDelegate methods go here ... } An extension on your own object type is also a way to spread your definition of that object type over multiple files, if you feel that several shorter files are better than one long file. 
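The encodeObject overload shown above has a natural counterpart for unarchiving. Here’s a sketch (my own addition, assuming the same Archive enum) of a decode overload built the same way:

```swift
extension NSCoder {
    func decodeObjectForKey(key: Archive) -> AnyObject? {
        // Move the rawValue call into NSCoder's code, exactly as for encoding
        return self.decodeObjectForKey(key.rawValue)
    }
}
```

With that in place, unarchiving a Card can say decoder.decodeObjectForKey(Archive.Color) with no rawValue either.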
When you extend a Swift struct, a curious thing happens with initializers: it becomes possible to declare an initializer and keep the implicit initializers: struct Digit { var number : Int } extension Digit { init() { self.init(number:42) } } That code means that you can instantiate a Digit by calling the explicitly declared initializer — Digit() — or by calling the implicit memberwise initializer — Digit(number:7). Thus, the explicit declaration of an initializer through an extension did not cause us to lose the implicit memberwise initializer, as would have happened if we had declared the same initializer inside the original struct declaration. New in Swift 2.0, you can extend a protocol. When you do, you can add methods and properties to the protocol, just as for any object type. Unlike a protocol declaration, these methods and properties are not mere requirements, to be fulfilled by the adopter of the protocol; they are actual methods and properties, to be inherited by the adopter of the protocol! For example: protocol Flier { } extension Flier { func fly() { print("flap flap flap") } } struct Bird : Flier { } Observe that Bird can now adopt Flier without implementing the fly method. Even if we were to add func fly() as a requirement in the Flier protocol declaration, Bird could still adopt Flier without implementing the fly method. That’s because the Flier protocol extension supplies the fly method! Bird thus inherits an implementation of fly: let b = Bird() b.fly() // flap flap flap An adopter can implement a method inherited from a protocol extension, thus overriding that method: struct Insect : Flier { func fly() { print("whirr") } } let i = Insect() i.fly() // whirr But be warned: this kind of inheritance is not polymorphic. The adopter’s implementation is not an override; it is merely another implementation. 
The internal identity rule does not apply; it matters how a reference is typed: let f : Flier = Insect() f.fly() // flap flap flap Even though f is internally an Insect (as we can discover with the is operator), the fly message is being sent to an object reference typed as a Flier, so it is Flier’s implementation of the fly method that is called, not Insect’s implementation. To get something that looks like polymorphic inheritance, we must declare fly as a requirement in the original protocol: protocol Flier { func fly() // * } extension Flier { func fly() { print("flap flap flap") } } struct Insect : Flier { func fly() { print("whirr") } } Now an Insect maintains its internal integrity: let f : Flier = Insect() f.fly() // whirr This difference makes sense, because adoption of a protocol does not (and must not) introduce the overhead of dynamic dispatch. Therefore the compiler must make a static decision. If the method is declared as a requirement in the original protocol, we are guaranteed that the adopter implements it, and so we can (and do) call the adopter’s implementation. But if the method exists only in the protocol extension, then deciding whether the adopter reimplements it would require dynamic dispatch at runtime, and that would defeat the nature of protocols — so the compiler sends the message to the protocol extension’s implementation. The chief benefit of protocol extensions is that they allow code to be moved to an appropriate scope. Here’s an example from my Zotz app. I have four enums, each representing an attribute of a Card: Fill, Color, Shape, and Number.
I was tired of having to say rawValue: every time I initialized one of these enums from its raw value, so I gave each enum a delegating initializer with no externalized parameter name, that calls the built-in init(rawValue:) initializer: enum Fill : Int { case Empty = 1 case Solid case Hazy init?(_ what:Int) { self.init(rawValue:what) } } enum Color : Int { case Color1 = 1 case Color2 case Color3 init?(_ what:Int) { self.init(rawValue:what) } } // ... and so on ... I didn’t like the repetition of my initializer declaration, but in Swift 1.2 and before, there was nothing I could do about that. In Swift 2.0, I can move that declaration into a protocol extension. It turns out that an enum with a raw value automatically adopts the built-in generic RawRepresentable protocol, where the raw value type is a type alias called RawValue. So I can shoehorn my initializer into the RawRepresentable protocol: extension RawRepresentable { init?(_ what:RawValue) { self.init(rawValue:what) } } enum Fill : Int { case Empty = 1 case Solid case Hazy } enum Color : Int { case Color1 = 1 case Color2 case Color3 } // ... and so on ... In the Swift standard library, protocol extensions have meant that many global functions can be recast as methods. For example, in Swift 1.2 and earlier, enumerate (see Chapter 3) was a global function: func enumerate<Seq:SequenceType>(base:Seq) -> EnumerateSequence<Seq> It was a global function because it had to be. This is a function that is to apply only to sequences — adopters of the SequenceType protocol. Prior to Swift 2.0, how could that be expressed? An enumerate method might have been declared as a requirement of the SequenceType protocol, but this would mean merely that every adopter of SequenceType must implement it; it wouldn’t provide an implementation. The only way to do that was as a global function, with the sequence as parameter, using a generic constraint to guard the door, so to speak, so that only a sequence could be passed as argument. 
In Swift 2.0, however, enumerate is a method, declared in an extension to the SequenceType protocol: extension SequenceType { func enumerate() -> EnumerateSequence<Self> } Now there’s no need for a generic constraint. There’s no need for a generic. There’s no need for a parameter! This is a method of SequenceType; the sequence to be enumerated is the sequence to which the enumerate message is sent. That example could be greatly multiplied; a lot of Swift standard library global functions were turned into methods in Swift 2.0. This change effectively transforms the feel of the language. When you extend a generic type, the placeholder type names are visible to your extension declaration. That’s good, because you might need to use them; but it can make your code a little mystifying, because you seem to be using an undefined type name out of the blue. It might be a good idea to add a comment, to remind yourself what you’re up to: class Dog<T> { var name : T? } extension Dog { func sayYourName() -> T? { // T is the type of self.name return self.name } } New in Swift 2.0, a generic type extension can include a where clause. This has the same effect as any generic constraint: it limits which resolvers of the generic can call the code injected by this extension, and assures the compiler that your code is legal for those resolvers. As with protocol extensions, this means that a global function can be turned into a method. Recall this example from earlier in this chapter: func myMin<T:Comparable>(things:T...) -> T { var minimum = things[0] for ix in 1..<things.count { if things[ix] < minimum { minimum = things[ix] } } return minimum } Why did I make that a global function? Because before Swift 2.0, I had to. Let’s say I wanted to make this a method of Array. In Swift 1.2 and before, you could extend Array, and your extension could refer to Array’s generic placeholder; but it couldn’t constrain that placeholder further. 
Thus, there was no way to inject a method into Array while guaranteeing that the placeholder would be a Comparable — and so the compiler wouldn’t permit the use of the < operator on an element of the array. In Swift 2.0, I can constrain the generic placeholder further, and so I can make this a method of Array: extension Array where Element:Comparable { // Element is the element type func min() -> Element { var minimum = self[0] for ix in 1..<self.count { if self[ix] < minimum { minimum = self[ix] } } return minimum } } That method can be called only on an array of Comparable elements; it isn’t injected into other kinds of arrays, so the compiler won’t permit it to be called: let m = [4,1,5,7,2].min() // 1 let d = [Digit(12), Digit(42)].min() // compile error The second line doesn’t compile, because I haven’t made my Digit struct adopt the Comparable protocol. Once again, this change in the Swift language has resulted in a major wholesale reorganization of the Swift standard library, allowing global functions to be moved into struct extensions and protocol extensions as methods. For example, the global find function from Swift 1.2 and before has become, in Swift 2.0, the CollectionType indexOf method; it is constrained so that the collection’s elements are Equatables, because you can’t find a needle in a haystack unless you have a way of identifying the needle when you see it: extension CollectionType where Generator.Element : Equatable { func indexOf(element: Self.Generator.Element) -> Self.Index? } That’s a protocol extension, and it is also a generic extension constrained with a where clause — neither of which was possible before Swift 2.0. Swift provides a few built-in types as general umbrella types, capable of embracing multiple real types under a single heading. The umbrella type most commonly encountered in real-life iOS programming is AnyObject. It is actually a protocol; as a protocol, it is completely empty, requiring no properties or methods. 
It has the special feature that all class types conform to it automatically. Thus, it is possible to assign or pass any class instance where an AnyObject is expected, and to cast in either direction: class Dog { } let d = Dog() let any : AnyObject = d let d2 = any as! Dog Certain Swift types, which are not class types — such as String and the basic numeric types — are bridged to Objective-C types, which are class types, defined by the Foundation framework. This means that, in the presence of the Foundation framework, a Swift bridged type can be assigned, passed, or cast to an AnyObject, even if it is not a class type — because it will be cast first to its Objective-C bridged class type automatically, behind the scenes — and an AnyObject can be cast down to a Swift bridged type. For example: let s = "howdy" let any : AnyObject = s // implicitly casts to NSString let s2 = any as! String let i = 1 let any2 : AnyObject = i // implicitly casts to NSNumber let i2 = any2 as! Int The common way to encounter an AnyObject is in the course of interchange with Objective-C. Swift’s ability to cast any class type to and from an AnyObject parallels Objective-C’s ability to cast any class type to and from an id. In effect, AnyObject is the Swift version of id. NSUserDefaults, NSCoding, and key–value coding (Chapter 10), for example, all allow you to retrieve an object of indeterminate class by a string key name; such an object will arrive into Swift as an AnyObject — in particular, as an Optional wrapping an AnyObject, because there might be no such key, in which case Cocoa needs to be able to return nil. In general, however, an AnyObject will be of little use to you; you’ll want to let Swift know what sort of object this really is. Unwrapping the Optional and casting down from AnyObject is up to you. If you’re perfectly sure of your ground, you can force-unwrap and force-cast with the as! 
operator: required init(coder decoder: NSCoder) { let s = decoder.decodeObjectForKey(Archive.Color) as! String // ... } Of course, you'd better be telling the truth when you cast down an AnyObject with as!, or you will crash when the code runs and the cast turns out to be impossible. You can use the is and as? operators, if you're in doubt, to make sure your cast is safe. A surprising feature of AnyObject is that it can be used to suspend the compiler's judgment as to whether a certain message can be sent to an object — similar to Objective-C, where typing something as an id causes the compiler to suspend judgment about what messages can be sent to it. Thus, you can send a message to an AnyObject without bothering to cast to its real type. (Nevertheless, if you know the object's real type, you probably will cast to that type.) You can't send just any old message to an AnyObject; the message must correspond to a class member that is visible to Objective-C — for example, one marked @objc (or dynamic). This feature is fundamentally parallel to optional protocol members, which I discussed earlier in this chapter — with some slight differences. Let's start with two classes: class Dog { @objc var noise : String = "woof" @objc func bark() -> String { return "woof" } } class Cat {} The Dog property noise and the Dog method bark are marked @objc, so they are visible as potential messages to be sent to an AnyObject. To prove it, I'll type a Cat as an AnyObject and send it one of these messages. Let's start with the noise property: let c : AnyObject = Cat() let s = c.noise That code, amazingly, compiles. Moreover, it doesn't crash when the code runs! The noise property has been typed as an Optional wrapping its original type. Here, that's an Optional wrapping a String. If the object typed as AnyObject doesn't implement noise, the result is nil and no harm done. Moreover, unlike an optional protocol property, the Optional in question is implicitly unwrapped.
Therefore, if the AnyObject turns out to have a noise property (for example, if it had been a Dog), the resulting implicitly unwrapped String can be treated directly as a String. Now let's try it with a method call: let c : AnyObject = Cat() let s = c.bark?() Again, that code compiles and is safe. If the object typed as AnyObject doesn't implement bark, no bark() call is performed; the method result type has been wrapped in an Optional, so s is typed as String? and has been set to nil. If the AnyObject turns out to have a bark method (for example, if it had been a Dog), the result is an Optional wrapping the returned String. If you call bark!() on the AnyObject instead, the result will be a String, but you'll crash if the AnyObject doesn't implement bark. Unlike an optional protocol member, you can even send the message with no unwrapping. This is legal: let c : AnyObject = Cat() let s = c.bark() That's just like force-unwrapping the call: the result is a String, but it's possible to crash. Sometimes, what you want to know is not what type an object is, but whether an object itself is the particular object you think it is. This problem can't arise with a value type, but it can arise with a reference type, where there can be more than one distinct reference to one and the same object. A class is a reference type, so the problem can arise with class instances. Swift's solution is the identity operator (===). This operator is available for instances of object types that adopt the AnyObject protocol — like classes! It compares one object reference with another. It is not a comparison of values for equality, like the equality operator (==); you're asking whether two object references refer to one and the same object. There is also a negative version of the identity operator (!==). A typical use case is that a class instance arrives from Cocoa, and you need to know whether it is in fact a particular object to which you already have a reference.
For example, an NSNotification has an object property that helps identify the notification (usually, it is the original sender of the notification); Cocoa is agnostic about its underlying type, so this is another of those situations where you’ll receive an AnyObject wrapped in an Optional. Like ==, the === operator works seamlessly on an Optional, so you can use it to make sure that a notification’s object property is the object you expect: func changed(n:NSNotification) { let player = MPMusicPlayerController.applicationMusicPlayer() if n.object === player { // ... } } AnyClass is the class of AnyObject. It corresponds to the Objective-C Class type. It arises typically in declarations where a Cocoa API wants to say that a class is expected. For example, the UIView layerClass class method is declared, in its Swift translation, like this: class func layerClass() -> AnyClass That means: if you override this method, implement it to return a class. This will presumably be a CALayer subclass. To return an actual class in your implementation, send the self message to the name of the class: override class func layerClass() -> AnyClass { return CATiledLayer.self } A reference to an AnyClass object behaves much like a reference to an AnyObject object. You can send it any Objective-C message that Swift knows about — any Objective-C class message. To illustrate, once again I’ll start with two classes: class Dog { @objc static var whatADogSays : String = "woof" } class Cat {} Objective-C can see whatADogSays, and it sees it as a class property. Therefore you can send whatADogSays to an AnyClass reference: let c : AnyClass = Cat.self let s = c.whatADogSays A reference to a class, such as you can obtain by sending dynamicType to an instance reference, or by sending self to the type name, is of a type that adopts AnyClass, and you can compare references to such types with the === operator. 
In effect, this is a way of finding out whether two references to classes refer to the same class. For example: func typeTester(d:Dog, _ whattype:Dog.Type) { if d.dynamicType === whattype { // ... } } The condition is true only if d and whattype are the same type (without regard to polymorphism); for example, if Dog has a subclass NoisyDog, then the condition is true if the parameters are Dog() and Dog.self or NoisyDog() and NoisyDog.self, but not if they are NoisyDog() and Dog.self. This is valuable, despite the lack of polymorphism, because you can't use the is operator when the thing on the right side is a type reference — it has to be a literal type name. The Any type is a type alias for an empty protocol that is automatically adopted by all types. Thus, where an Any object is expected, absolutely any object can be passed: func anyExpecter(a:Any) {} anyExpecter("howdy") // a struct instance anyExpecter(String) // a struct anyExpecter(Dog()) // a class instance anyExpecter(Dog) // a class anyExpecter(anyExpecter) // a function An object typed as Any can be tested against, or cast down to, any object or function type. To illustrate, here's a protocol with an associated type, and two adopters who explicitly resolve it: protocol Flier { typealias Other } struct Bird : Flier { typealias Other = Insect } struct Insect : Flier { typealias Other = Bird } Now here's a function that takes a Flier along with a second parameter typed as Any, and tests whether that second parameter's type is the same as the Flier's resolved Other type; the test is legal because Any can be tested against any type: func flockTwoTogether<T:Flier>(flier:T, _ other:Any) { if other is T.Other { print("they can flock together") } } If we call flockTwoTogether with a Bird and an Insect, the console says "they can flock together." If we call it with a Bird and an object of any other type, it doesn't.
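To see the universality of Any in an even simpler setting, here's a minimal sketch of my own (separate from the Flier example): a heterogeneous array typed as [Any], whose elements are then tested with the is operator:

```swift
// A heterogeneous array is legal as long as it is explicitly typed [Any]
let things : [Any] = [1, "two", 3.0, "four"]
var stringCount = 0
for thing in things {
    if thing is String { // each element keeps its real type under the umbrella
        stringCount += 1
    }
}
print(stringCount) // 2
```

The is test succeeds for the two String elements, because an element's real type survives being collected under the Any umbrella.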
Swift, in common with most modern computer languages, has built-in collection types Array and Dictionary, along with a third type, Set. Array and Dictionary are sufficiently important that the language accommodates them with some special syntax. At the same time, like most Swift types, they are quite thinly provided with related functions; some missing functionality is provided by Cocoa’s NSArray and NSDictionary, to which they are respectively bridged. The Set collection type is bridged to Cocoa’s NSSet. An array (Array, a struct) is an ordered collection of object instances (the elements of the array) accessible by index number, where an index number is an Int numbered from 0. Thus, if an array contains four elements, the first has index 0 and the last has index 3. A Swift array cannot be sparse: if there is an element with index 3, there is also an element with index 2 and so on. The most salient feature of Swift arrays is their strict typing. Unlike some other computer languages, a Swift array’s elements must be uniform — that is, the array must consist solely of elements of the same definite type. Even an empty array must have a definite element type, despite the fact that it happens to lack elements at this moment. An array is itself typed in accordance with its element type. Arrays whose elements are of different types are considered, themselves, to be of two different types: an array of Int elements is of a different type from an array of String elements. Array types are polymorphic in accordance with their element types: if NoisyDog is a subclass of Dog, then an array of NoisyDog can be used where an array of Dog is expected. If all this reminds you of Optionals, it should. Like an Optional, a Swift array is a generic. It is declared as Array<Element>, where the placeholder Element is the type of a particular array’s elements. The uniformity restriction is not as severe as it might seem at first glance. 
An array must have elements of just one type, but types are very flexible. By a clever choice of type, you can have an array whose elements are of different types internally. For example, an array of Optionals can mix values with nil; an array whose element type is a class can contain instances of that class's subclasses; an array whose element type is a protocol can contain instances of any adopter of that protocol; and an array of AnyObject can contain just about any class (or bridged) type. To declare or state the type of a given array's elements, you could explicitly resolve the generic placeholder; an array of Int elements would thus be an Array<Int>. However, Swift offers syntactic sugar for stating an array's element type, using square brackets around the name of the element type, like this: [Int]. That's the syntax you'll use most of the time. A literal array is represented as square brackets containing a list of its elements separated by comma (and optional spaces): for example, [1,2,3]. The literal for an empty array is empty square brackets: []. An array's default initializer init(), called by appending empty parentheses to the array's type, yields an empty array of that type. Thus, you can create an empty array of Int like this: var arr = [Int]() Alternatively, if a reference's type is known in advance, the empty array [] can be inferred to that type. Thus, you can also create an empty array of Int like this: var arr : [Int] = [] If you're starting with a literal array containing elements, you won't usually need to declare the array's type, because Swift will infer it by looking at the elements. For example, Swift will infer that [1,2,3] is an array of Int. If the array element types consist of a class and its subclasses, like Dog and NoisyDog, Swift will infer the common superclass as the array's type. Even [1, "howdy"] is a legal array literal; it is inferred to be an array of NSObject. However, in some cases you will need to declare an array reference's type explicitly even while assigning a literal to that array: let arr : [Flier] = [Insect(), Bird()] An array also has an initializer whose parameter is a sequence. This means that if a type is a sequence, you can split an instance of it into the elements of an array.
For example: Array(1...3) generates the array of Int [1,2,3]. Array("hey".characters) generates the array of Character ["h","e","y"]. Array(d), where d is a Dictionary, generates an array of tuples of the key–value pairs of d. Another array initializer, init(count:repeatedValue:), lets you populate an array with the same value. In this example, I create an array of 100 Optional strings initialized to nil: let strings : [String?] = Array(count:100, repeatedValue:nil) That's the closest you can get in Swift to a sparse array; we have 100 slots, each of which might or might not contain a string (and to start with, none of them do). When you assign, pass, or cast one array type to another array type, you are operating on the individual elements of the array. Thus, for example: let arr : [Int?] = [1,2,3] That code is actually a shorthand: to treat an array of Int as an array of Optionals wrapping Int means that each individual Int in the original array must be wrapped in an Optional. And that is exactly what happens: let arr : [Int?] = [1,2,3] print(arr) // [Optional(1), Optional(2), Optional(3)] Similarly, suppose we have a Dog class and its NoisyDog subclass; then this code is legal: let dog1 : Dog = NoisyDog() let dog2 : Dog = NoisyDog() let arr = [dog1, dog2] let arr2 = arr as! [NoisyDog] In the third line, we have an array of Dog. In the fourth line, we cast this array down to an array of NoisyDog, meaning that we cast each individual Dog in the first array to a NoisyDog (and we won't crash when we do that, because each element of the first array really is a NoisyDog). You can test all the elements of an array with the is operator by testing the array itself. For example, given the array of Dog from the previous code, you can say: if arr is [NoisyDog] { // ... That will be true if each element of the array is in fact a NoisyDog. Similarly, the as?
operator will cast an array to an Optional wrapping an array, which will be nil if the underlying cast cannot be performed: let dog1 : Dog = NoisyDog() let dog2 : Dog = NoisyDog() let dog3 : Dog = Dog() let arr = [dog1, dog2] let arr2 = arr as? [NoisyDog] // Optional wrapping an array of NoisyDog let arr3 = [dog2, dog3] let arr4 = arr3 as? [NoisyDog] // nil The reason for casting down an array is exactly the same as the reason for casting down any value — it’s so that you can send appropriate messages to the elements of that array. If NoisyDog declares a method that Dog doesn’t have, you can’t send that message to an element of an array of Dog. Somehow, you need to cast that element down to a NoisyDog so that the compiler will let you send it that message. You can cast down an individual element, or you can cast down the entire array; you’ll do whichever is safe and makes sense in a particular context. Array equality works just as you would expect: two arrays are equal if they contain the same number of elements and all the elements are pairwise equal in order: let i1 = 1 let i2 = 2 let i3 = 3 if [1,2,3] == [i1,i2,i3] { // they are equal! Two arrays don’t have to be of the same type to be compared against one another for equality, but the test won’t succeed unless they do in fact contain objects that are equal to one another. Here, I compare a Dog array against a NoisyDog array; they are in fact equal because the dogs they contain are the same dogs in the same order: let nd1 = NoisyDog() let d1 = nd1 as Dog let nd2 = NoisyDog() let d2 = nd2 as Dog if [d1,d2] == [nd1,nd2] { // they are equal! Because an array is a struct, it is a value type, not a reference type. This means that every time an array is assigned to a variable or passed as argument to a function, it is effectively copied. I do not mean to imply, however, that merely assigning or passing an array is expensive, or that a lot of actual copying takes place every time. 
If the reference to an array is a constant, clearly no copying is actually necessary; and even operations that yield a new array derived from another array, or that mutate an array, may be quite efficient. You just have to trust that the designers of Swift have thought about these problems and have implemented arrays efficiently behind the scenes. Although an array itself is a value type, its elements are treated however those elements would normally be treated. In particular, an array of class instances, assigned to multiple variables, results in multiple references to the same instances. The Array struct implements subscript methods to allow access to elements using square brackets after a reference to an array. You can use an Int inside the square brackets. For example, in an array consisting of three elements, if the array is referred to by a variable arr, then arr[1] accesses the second element. You can also use a Range of Int inside the square brackets. For example, if arr is an array with three elements, then arr[1...2] signifies the second and third elements. Technically, an expression like arr[1...2] yields something called an ArraySlice. However, an ArraySlice is very similar to an array; for example, you can subscript an ArraySlice in just the same ways you would subscript an array, and an ArraySlice can be passed where an array is expected. In general, therefore, you will probably pretend that an ArraySlice is an array. If the reference to an array is mutable ( var, not let), then a subscript expression can be assigned to. This alters what’s in that slot. Of course, what is assigned must accord with the type of the array’s elements: var arr = [1,2,3] arr[1] = 4 // arr is now [1,4,3] If the subscript is a range, what is assigned must be an array. 
This can change the length of the array being assigned to: var arr = [1,2,3] arr[1..<2] = [7,8] // arr is now [1,7,8,3] arr[1..<2] = [] // arr is now [1,8,3] arr[1..<1] = [10] // arr is now [1,10,8,3] (no element was removed!) It is a runtime error to access an element by a number larger than the largest element number or smaller than the smallest element number. If arr has three elements, speaking of arr[-1] or arr[3] is not illegal linguistically, but your program will crash. It is legal for the elements of an array to be arrays. For example: let arr = [[1,2,3], [4,5,6], [7,8,9]] That’s an array of arrays of Int. Its type declaration, therefore, is [[Int]]. (No law says that the contained arrays have to be the same length; that’s just something I did for clarity.) To access an individual Int inside those nested arrays, you can chain subscript operations: let arr = [[1,2,3], [4,5,6], [7,8,9]] let i = arr[1][1] // 5 If the outer array reference is mutable, you can also write into a nested array: var arr = [[1,2,3], [4,5,6], [7,8,9]] arr[1][1] = 100 You can modify the inner arrays in other ways as well; for example, you can insert additional elements into them. An array is a collection (CollectionType protocol), which is itself a sequence (SequenceType protocol). If those terms have a familiar ring, they should: the same is true of a String’s characters, which I called a character sequence in Chapter 3. For this reason, an array has a striking similarity to a character sequence. As a collection, an array’s count read-only property reports the number of elements it contains. If an array’s count is 0, its isEmpty property is true. An array’s first and last read-only properties return its first and last elements, but they are wrapped in an Optional because the array might be empty and so these properties would need to be nil. This is one of those rare situations in Swift where you can wind up with an Optional wrapping an Optional. 
For example, consider an array of Optionals wrapping Ints; when you get the last property of such an array, the result is an Int wrapped in an Optional wrapped in another Optional: let arr : [Int?] = [1,2,3] let i = arr.last // Optional(Optional(3)) An array's largest accessible index is one less than its count. You may find yourself calculating index values with reference to the count; for example, to refer to the last two elements of arr, you can say: let arr = [1,2,3] let arr2 = arr[arr.count-2...arr.count-1] // [2,3] Swift doesn't adopt the modern convention of letting you use negative numbers as a shorthand for that calculation. On the other hand, for the common case where you want the last n elements of an array, you can use the suffix method: let arr = [1,2,3] let arr2 = arr.suffix(2) // [2,3] Both suffix and its companion prefix have the remarkable feature that there is no penalty for going out of range: let arr = [1,2,3] let arr2 = arr.suffix(10) // [1,2,3] (and no crash) Instead of describing the size of the suffix or prefix by its count, you can express the limit of the suffix or prefix by its index: let arr = [1,2,3] let arr2 = arr.suffixFrom(1) // [2,3] let arr3 = arr.prefixUpTo(1) // [1] let arr4 = arr.prefixThrough(1) // [1,2] An array's startIndex property is 0, and its endIndex property is its count. Moreover, an array's indices property is a half-open range whose endpoints are its startIndex and endIndex — that is, a range accessing the entire array. If you start with a mutable reference to this range, you can modify its startIndex and endIndex to derive a new range. We did the same thing with a character sequence in Chapter 3; but an array's index values are Ints, so you can use ordinary arithmetic operations: let arr = [1,2,3] var r = arr.indices r.startIndex = r.endIndex-2 let arr2 = arr[r] // [2,3] The indexOf method reports the index of the first occurrence of an element in an array, but it is wrapped in an Optional so that nil can be returned if the element doesn't appear in the array.
If the array consists of Equatables, the comparison uses == to identify the element being sought: let arr = [1,2,3] let ix = arr.indexOf(2) // Optional wrapping 1 Even if the array doesn't consist of Equatables, you can supply your own function that takes an element type and returns a Bool, and you'll get back the index of the first element for which that Bool is true. In this example, my Bird struct has a name String property: let aviary = [Bird(name:"Tweety"), Bird(name:"Flappy"), Bird(name:"Lady")] let ix = aviary.indexOf {$0.name.characters.count < 5} // Optional(2) As a sequence, an array's contains method reports whether it contains an element. Again, you can rely on the == operator if the elements are Equatables, or you can supply your own function that takes an element type and returns a Bool: let arr = [1,2,3] let ok = arr.contains(2) // true let ok2 = arr.contains {$0 > 3} // false The startsWith method reports whether an array's starting elements match the elements of a given sequence of the same type. Once more, you can rely on the == operator for Equatables, or you can supply a function that takes two values of the element type and returns a Bool stating whether they match: let arr = [1,2,3] let ok = arr.startsWith([1,2]) // true let ok2 = arr.startsWith([1,-2]) {abs($0) == abs($1)} // true The elementsEqual method is the sequence generalization of array comparison: the two sequences must be of the same length, and either their elements must be Equatables or you can supply a matching function. The minElement and maxElement methods return the smallest or largest element in an array, wrapped in an Optional in case the array is empty.
If the array consists of Comparables, you can let the < operator do its work; alternatively, you can supply a function that returns a Bool stating whether the smaller of two given elements is the first: let arr = [3,1,-2] let min = arr.minElement() // Optional(-2) let min2 = arr.minElement {abs($0)<abs($1)} // Optional(1) If the reference to an array is mutable, the append and appendContentsOf instance methods add elements to the end of it. The difference between them is that append takes a single value of the element type, while appendContentsOf takes a sequence of the element type. For example: var arr = [1,2,3] arr.append(4) arr.appendContentsOf([5,6]) arr.appendContentsOf(7...8) // arr is now [1,2,3,4,5,6,7,8] The + operator is overloaded to behave like appendContentsOf (not append!) when the left-hand operand is an array, except that it generates a new array, so it works even if the reference to the array is a constant. If the reference to the array is mutable, you can extend it in place with the += operator. Thus: let arr = [1,2,3] let arr2 = arr + [4] // arr2 is now [1,2,3,4] var arr3 = [1,2,3] arr3 += [4] // arr3 is now [1,2,3,4] If the reference to an array is mutable, the instance method insert(atIndex:) inserts a single element at the given index. To insert multiple elements at once, use assignment into a range-subscripted array, as I described earlier (and there is also an insertContentsOf(at:) method). If the reference to an array is mutable, the instance method removeAtIndex removes the element at that index; the instance method removeLast removes the last element, and removeFirst removes the first element. These methods also return the value that was removed from the array; you can ignore the returned value if you don’t need it. These methods do not wrap the returned value in an Optional, and accessing an out-of-range index will crash your program. 
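To make those returned values concrete, here's a brief sketch of my own using removeLast and removeFirst on a mutable array:

```swift
var arr = [1,2,3,4]
let last = arr.removeLast()   // last is 4; arr is now [1,2,3]
let first = arr.removeFirst() // first is 1; arr is now [2,3]
```

The removed values come back directly, not wrapped in Optionals, which is why calling these methods on an empty array would crash.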
Another form of removeFirst lets you specify how many elements to remove, but returns no value; it, too, can crash if there aren't that many elements. On the other hand, popFirst and popLast do wrap the returned value in an Optional, and are thus safe even if the array is empty. If the reference is not mutable, you can use the dropFirst and dropLast methods to return an array (actually, a slice) with the first or last element removed, respectively. The joinWithSeparator instance method starts with an array of arrays. It extracts their individual elements, and interposes between each sequence of extracted elements the elements of its parameter array. The result is an intermediate sequence called a JoinSequence, which might have to be coerced further to an Array if that's what you were after. For example: let arr = [[1,2], [3,4], [5,6]] let arr2 = Array(arr.joinWithSeparator([10,11])) // [1, 2, 10, 11, 3, 4, 10, 11, 5, 6] Calling joinWithSeparator with an empty array as parameter is thus a way to flatten an array of arrays: let arr = [[1,2], [3,4], [5,6]] let arr2 = Array(arr.joinWithSeparator([])) // [1, 2, 3, 4, 5, 6] There's also a flatten instance method that does the same thing. Again, it returns an intermediate sequence (or collection), so you might want to coerce to an Array: let arr = [[1,2], [3,4], [5,6]] let arr2 = Array(arr.flatten()) // [1, 2, 3, 4, 5, 6] The reverse instance method yields a new array whose elements are in the opposite order from the original. The sortInPlace and sort instance methods respectively sort the original array (if the reference to it is mutable) and yield a new sorted array based on the original. Once again, you get two choices: if this is an array of Comparables, you can let the < operator dictate the new order; alternatively, you can supply a function that takes two parameters of the element type and returns a Bool stating whether the first parameter should be ordered before the second (just like minElement and maxElement).
For example: var arr = [4,3,5,2,6,1] arr.sortInPlace() // [1, 2, 3, 4, 5, 6] arr.sortInPlace {$0 > $1} // [6, 5, 4, 3, 2, 1] In that last line, I provided an anonymous function. Alternatively, of course, you can pass as argument the name of a declared function. In Swift, comparison operators are the names of functions! Therefore, I can do the same thing more briefly, like this: var arr = [4,3,5,2,6,1] arr.sortInPlace(>) // [6, 5, 4, 3, 2, 1] The split instance method breaks an array into an array of arrays at the elements that pass a specified test, which is a function that takes a value of the element type and returns a Bool; the elements passing the test are eliminated: let arr = [1,2,3,4,5,6] let arr2 = arr.split {$0 % 2 == 0} // split at evens: [[1], [3], [5]] An array is a sequence, and so you can enumerate it, inspecting or operating with each element in turn. The simplest way is by means of a for...in loop; I’ll have more to say about this construct in Chapter 5: let pepboys = ["Manny", "Moe", "Jack"] for pepboy in pepboys { print(pepboy) // prints Manny, then Moe, then Jack } Alternatively, you can use the forEach instance method. Its parameter is a function that takes an element of the array (or other sequence) and returns no value. Think of it as the functional equivalent of the imperative for...in loop: let pepboys = ["Manny", "Moe", "Jack"] pepboys.forEach {print($0)} // prints Manny, then Moe, then Jack If you need the index numbers as well as the elements, call the enumerate instance method and loop on the result; what you get on each iteration is a tuple: let pepboys = ["Manny", "Moe", "Jack"] for (ix,pepboy) in pepboys.enumerate() { print("Pep boy \(ix) is \(pepboy)") // Pep boy 0 is Manny, etc. } // or: pepboys.enumerate().forEach {print("Pep boy \($0.0) is \($0.1)")} Swift also provides three powerful array transformation instance methods. 
Like forEach, these methods all enumerate the array for you, so that the loop is buried implicitly inside the method call, making your code tighter and cleaner. Let’s start with the map instance method. It yields a new array, each element of which is the result of passing the corresponding element of the old array through a function that you supply. This function accepts a parameter of the element type and returns a result which may be of some other type; Swift can usually infer the type of the resulting array elements by looking at the type returned by the function. For example, here’s how to multiply every element of an array by 2: let arr = [1,2,3] let arr2 = arr.map {$0 * 2} // [2,4,6] Here’s another example, to illustrate the fact that map can yield an array with a different element type: let arr = [1,2,3] let arr2 = arr.map {Double($0)} // [1.0, 2.0, 3.0] Here’s a real-life example showing how neat and compact your code can be when you use map. In order to remove all the table cells in a section of a UITableView, I have to specify the cells as an array of NSIndexPath objects. If sec is the section number, I can form those NSIndexPath objects individually like this: let path0 = NSIndexPath(forRow:0, inSection:sec) let path1 = NSIndexPath(forRow:1, inSection:sec) // ... Hmmm, I think I see a pattern here! I could generate my array of NSIndexPath objects by looping through the row values using for...in. But with map, there’s a much tighter way to express the same loop ( ct is the number of rows in the section): let paths = Array(0..<ct).map {NSIndexPath(forRow:$0, inSection:sec)} Actually, map is a CollectionType instance method — and a Range is itself a CollectionType. Therefore, I don’t need the cast to an array: let paths = (0..<ct).map {NSIndexPath(forRow:$0, inSection:sec)} The filter instance method also yields a new array. 
Each element of the new array is an element of the old array, in the same order; but some of the elements of the old array may be omitted — they were filtered out. What filters them out is a function that you supply; it accepts a parameter of the element type and returns a Bool stating whether this element should go into the new array. For example: let pepboys = ["Manny", "Moe", "Jack"] let pepboys2 = pepboys.filter{$0.hasPrefix("M")} // [Manny, Moe] Finally, we come to the reduce instance method. If you’ve learned LISP or Scheme, you’re probably accustomed to reduce; otherwise, it can be a bit mystifying at first. It’s a way of combining all the elements of an array (actually, a sequence) into a single value. This value’s type — the result type — doesn’t have to be the same as the array’s element type. You supply a function that takes two parameters; the first is of the result type, the second is of the element type, and the result is the combination of those two parameters, as the result type. The result of each iteration becomes the first parameter in the next iteration, along with the next element of the array as the second parameter. Thus, the output of combining pairs accumulates, and the final accumulated value is the final output of the reduce function. However, that doesn’t explain where the first parameter for the first iteration comes from. The answer is that you have to supply it as the first argument of the reduce call. That will all be easier to understand with a simple example. Let’s assume we’ve got an array of Int. Then we can use reduce to sum all the elements of the array. Here’s some pseudocode where I’ve left out the first argument of the reduce call, so that you can think about what it needs to be: let sum = arr.reduce(/*???*/) {$0 + $1} Each pair of parameters will be added together to get the first parameter on the next iteration. The second parameter on every iteration is an element of the array. 
So the question is, what should the first element of the array be added to? We want the actual sum of all the elements, no more and no less; so clearly the first element of the array should be added to 0! So here’s actual working code: let arr = [1, 4, 9, 13, 112] let sum = arr.reduce(0) {$0 + $1} // 139 Once again, we can write that code more briefly, because the + operator is the name of a function of the required type: let sum = arr.reduce(0, combine:+) In my real iOS programming life, I depend heavily on these methods, often using two or even all three of them together, nested or chained or both. Here’s an example; it’s rather elaborate, but it’s very typical of how neatly you can do things with arrays using Swift, so bear with me. I have a table view that displays data divided into sections. Under the hood, the data is an array of arrays of String — a [[String]] — where each subarray represents the rows of a section. Now I want to filter that data to eliminate all strings that don’t contain a certain substring. I want to keep the sections intact, but if removing strings removes all of a section’s strings, I want to eliminate that section array entirely. The heart of the action is the test for whether a string contains a substring. I’m going to use Cocoa methods for that, in part because they allow me to do a case-insensitive search. If s is a string from my array, and target is the substring we’re looking for, then the code for looking to see whether s contains target case-insensitively is as follows: let options = NSStringCompareOptions.CaseInsensitiveSearch let found = s.rangeOfString(target, options: options) Recall the discussion of rangeOfString in Chapter 3. If found is not nil, the substring was found. 
Here, then, is the actual code, preceded by some sample data for exercising it: let arr = [["Manny", "Moe", "Jack"], ["Harpo", "Chico", "Groucho"]] let target = "m" let arr2 = arr.map { $0.filter { let options = NSStringCompareOptions.CaseInsensitiveSearch let found = $0.rangeOfString(target, options: options) return (found != nil) } }.filter {$0.count > 0} After the first two lines, setting up the sample data, what remains is a single command — a map call, whose function consists of a filter call, with a filter call chained to it. If that code doesn’t prove to you that Swift is cool, nothing will. When you’re programming iOS, you import the Foundation framework (or UIKit, which imports Foundation) and thus the Objective-C NSArray type. Swift’s Array type is bridged to Objective-C’s NSArray type. However, such bridging is possible only if the types of the elements in the array can be bridged. Objective-C’s rules for what can be an element of an NSArray are both looser and tighter than Swift’s. On the one hand, the elements of an NSArray do not all have to be of the same type. On the other hand, an element of an NSArray must be an object, as Objective-C understands that term. In general, a type is bridged to Objective-C if it can be cast up to AnyObject — meaning that it is a class type, or else a specially bridged struct such as Int, Double, or String. Passing a Swift array to Objective-C is thus usually easy. 
If your Swift array consists of things that can be cast up to AnyObject, you’ll just pass the array, either by assignment or as an argument in a function call: let arr = [UIBarButtonItem(), UIBarButtonItem()] self.navigationItem.leftBarButtonItems = arr self.navigationItem.setLeftBarButtonItems(arr, animated: true) To call an NSArray method on a Swift array, you may have to cast to NSArray: let arr = ["Manny", "Moe", "Jack"] let s = (arr as NSArray).componentsJoinedByString(", ") // s is "Manny, Moe, Jack" A Swift Array seen through a var reference is mutable, but an NSArray isn’t mutable no matter how you see it. For mutability in Objective-C, you need an NSMutableArray, a subclass of NSArray. You can’t cast, assign, or pass a Swift array to an NSMutableArray; you have to coerce. The best way is to call the NSMutableArray initializer init(array:), to which you can pass a Swift array directly: let arr = ["Manny", "Moe", "Jack"] let arr2 = NSMutableArray(array:arr) arr2.removeObject("Moe") To convert back from an NSMutableArray to a Swift array, you can cast; if you want an array of the original Swift type, you’ll need to cast twice in order to quiet the compiler: var arr = ["Manny", "Moe", "Jack"] let arr2 = NSMutableArray(array:arr) arr2.removeObject("Moe") arr = arr2 as NSArray as! [String] If a Swift object type can’t be cast up to AnyObject, it isn’t bridged to Objective-C, and the compiler will stop you if you try to pass an Array containing an instance of that type where an NSArray is expected. In such a situation, you’ll need to “bridge” the array elements yourself. Here, for example, I have a Swift array of CGPoint. That’s perfectly fine in Swift, but CGPoint is a struct, which Objective-C doesn’t see as an object, so you can’t put one in an NSArray. 
If I try to pass this array where an NSArray is expected, I’ll get a compiler error: “ [CGPoint] is not convertible to NSArray.” The solution is to wrap each CGPoint in an NSValue, an Objective-C object type specifically designed to act as a carrier for nonobject types; now we have a Swift array of NSValue, which can subsequently be handed to Objective-C: let arrNSValues = arrCGPoints.map { NSValue(CGPoint:$0) } Another case in point is a Swift array of Optionals. An Objective-C collection can’t contain nil (because, in Objective-C, nil isn’t an object). Therefore you can’t put an Optional in an NSArray. You’ll have to do something with those Optionals before passing the array where an NSArray is expected. If an Optional wraps a value, you can unwrap it. But if an Optional wraps no value (it is nil), you can’t unwrap it. One solution is to do what you would do in Objective-C. An Objective-C NSArray can’t contain nil, so Cocoa provides a special class, NSNull, whose singleton instance, NSNull(), can stand in for nil where an object is needed. Thus, if I have an array of Optionals wrapping Strings, I can unwrap those that aren’t nil and substitute NSNull() for those that are: let arr2 : [AnyObject] = arr.map{if $0 == nil {return NSNull()} else {return $0!}} (In Chapter 5, I’ll write that code much more compactly.) Now let’s talk about what happens when an NSArray arrives from Objective-C into Swift. There won’t be any problem crossing the bridge: the NSArray will arrive safely as a Swift Array. But a Swift Array of what? Of itself, an NSArray carries no information about what type of element it contains. The default, therefore, is that an Objective-C NSArray will arrive as a Swift array of AnyObject. Fortunately, you won’t encounter this default anywhere near as often as in the past. 
Starting in Xcode 7, the Objective-C language has been modified so that the declaration of an NSArray, NSDictionary, or NSSet — the three collection types that are bridged to Swift — can include element type information. (Objective-C calls this a lightweight generic.) In iOS 9, the Cocoa APIs have been revised so that they do include this information. Thus, for the most part, the arrays you receive from Cocoa will be correctly typed. For example, this elegant code was previously impossible: let arr = UIFont.familyNames().map { UIFont.fontNamesForFamilyName($0) } The result is an array of arrays of String, listing all available fonts grouped by family. That code is possible because both of those UIFont class methods are now seen by Swift as returning an array of String. Previously, those arrays were untyped — they were arrays of AnyObject — and casting down to an array of String was up to you. It is still perfectly possible, though far less likely, that you will receive an array of AnyObject from Objective-C. If that happens, then usually you will want to cast it down or otherwise transform it into an array of some specific Swift type. Here’s an Objective-C class containing a method whose return type of NSArray hasn’t been marked up with an element type: @implementation Pep - (NSArray*) boys { return @[@"Mannie", @"Moe", @"Jack"]; } @end To call that method and do anything useful with the result, it will be necessary to cast that result down to an array of String. If I’m sure of my ground, I can force the cast: let p = Pep() let boys = p.boys() as! [String] As with any cast, though, be sure you don’t lie! An Objective-C array can contain more than one type of object. Don’t force such an array to be cast down to a type to which not all the elements can be cast, or you’ll crash when the cast fails; you’ll need a more deliberate strategy for eliminating or otherwise transforming the problematic elements. 
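One such deliberate strategy is simply to test each element individually and keep only the ones that cast successfully. Here's a minimal pure-Swift sketch; the mixed array is made-up sample data, and I use Any rather than AnyObject so that the sketch stands on its own, but the pattern with an array received from Objective-C is identical:

```swift
// A mixed bag, standing in for an untyped array from Objective-C
let mixed: [Any] = [1, "two", 3.0, "four"]
// Keep only the elements that really are Strings
var strings = [String]()
for thing in mixed {
    if let s = thing as? String { // test-and-cast one element at a time
        strings.append(s)
    }
}
// strings is now ["two", "four"]; the Int and the Double were dropped
```

The same loop could instead wrap the failures in NSNull, or coerce them to some common type, depending on what the receiving code needs.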
A dictionary (Dictionary, a struct) is an unordered collection of object pairs. In each pair, the first object is the key; the second object is the value. The idea is that you use a key to access a value. Keys are usually strings, but they don’t have to be; the formal requirement is that they be types that adopt the Hashable protocol, meaning that they adopt Equatable and also have a hashValue property (an Int) such that two equal keys have equal hash values and two unequal keys do not. Thus, the hash values can be used behind the scenes for rapid key access. Swift numeric types, strings, and enums are Hashables. As with arrays, a given dictionary’s types must be uniform. The key type and the value type don’t have to be the same type, and they often will not be. But within any dictionary, all keys must be of the same type, and all values must be of the same type. Formally, a dictionary is a generic, and its placeholder types are ordered key type, then value type: Dictionary<Key,Value>. As with arrays, however, Swift provides syntactic sugar for expressing a dictionary’s type, which is what you’ll usually use: [Key: Value]. That’s square brackets containing a colon (and optional spaces) separating the key type from the value type. This code creates an empty dictionary whose keys (when they exist) will be Strings and whose values (when they exist) will be Strings: var d = [String:String]() The colon is used also between each key and value in the literal syntax for expressing a dictionary. The key–value pairs appear between square brackets, separated by comma, just like an array. This code creates a dictionary by describing it literally (and the dictionary’s type of [String:String] is inferred): var d = ["CA": "California", "NY": "New York"] The literal for an empty dictionary is square brackets containing just a colon: [:]. This notation can be used provided the dictionary’s type is known in some other way. 
Thus, this is another way to create an empty [String:String] dictionary: var d : [String:String] = [:] If you try to fetch a value through a nonexistent key, there is no error, but Swift needs a way to report failure; therefore, it returns nil. This, in turn, implies that the value returned when you successfully access a value through a key must be an Optional wrapping the real value! Access to a dictionary’s contents is usually by subscripting. To fetch a value by key, subscript the key to the dictionary reference: let d = ["CA": "California", "NY": "New York"] let state = d["CA"] Bear in mind, however, that after that code, state is not a String — it’s an Optional wrapping a String! Forgetting this is a common beginner mistake. If the reference to a dictionary is mutable, you can also assign into a key subscript expression. If the key already exists, its value is replaced. If the key doesn’t already exist, it is created and the value is attached to it: var d = ["CA": "California", "NY": "New York"] d["CA"] = "Casablanca" d["MD"] = "Maryland" // d is now ["MD": "Maryland", "NY": "New York", "CA": "Casablanca"] Alternatively, call updateValue(forKey:); it has the advantage that it returns the old value wrapped in an Optional, or nil if the key wasn’t already present. By a kind of shorthand, assigning nil into a key subscript expression removes that key–value pair if it exists: var d = ["CA": "California", "NY": "New York"] d["NY"] = nil // d is now ["CA": "California"] Alternatively, call removeValueForKey; it has the advantage that it returns the removed value before it removes the key–value pair. The removed value is returned wrapped in an Optional, so a nil result tells you that this key was never in the dictionary to begin with. As with arrays, a dictionary type is legal for casting down, meaning that the individual elements will be cast down. 
Typically, only the value types will differ: let dog1 : Dog = NoisyDog() let dog2 : Dog = NoisyDog() let d = ["fido": dog1, "rover": dog2] let d2 = d as! [String : NoisyDog] As with arrays, is can be used to test the actual types in the dictionary, and as? can be used to test and cast safely. Dictionary equality, like array equality, works as you would expect. A dictionary has a count property reporting the number of key–value pairs it contains, and an isEmpty property reporting whether that number is 0. A dictionary has a keys property reporting all its keys, and a values property reporting all its values. They are effectively opaque structs (a LazyForwardCollection, if you must know), but when you enumerate them with for...in, you get the expected type: var d = ["CA": "California", "NY": "New York"] for s in d.keys { print(s) // s is a String } A dictionary is unordered! You can enumerate it (or its keys, or its values), but do not expect the elements to arrive in any particular order. You can extract all a dictionary’s keys or values at once, by coercing the keys or values property to an array: var d = ["CA": "California", "NY": "New York"] var keys = Array(d.keys) You can also enumerate a dictionary itself. As you might expect from what I’ve already said, each iteration provides a key–value tuple: var d = ["CA": "California", "NY": "New York"] for (abbrev, state) in d { print("\(abbrev) stands for \(state)") } You can extract a dictionary’s entire contents at once as an array (of key–value tuples) by coercing the dictionary to an array: var d = ["CA": "California", "NY": "New York"] let arr = Array(d) // [("NY", "New York"), ("CA", "California")] Like an array, a dictionary and its keys property and its values property are collections (CollectionType) and sequences (SequenceType). Therefore, everything I said about arrays as collections and sequences in the previous section is applicable! 
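Enumeration also works in the other direction: you can build a dictionary up by enumerating some other sequence, using subscript assignment as an accumulator. Here's a sketch with made-up data that tallies how often each string occurs in an array:

```swift
let words = ["manny", "moe", "jack", "moe", "moe"]
var counts = [String: Int]()
for w in words {
    // Fetch the current tally (or 0 if the key isn't there yet),
    // then store the incremented value back through the subscript
    counts[w] = (counts[w] ?? 0) + 1
}
// counts is now ["manny": 1, "moe": 3, "jack": 1], in some order
```

The ?? operator supplies a default when the subscript returns nil, which neatly sidesteps the Optional that subscript access otherwise hands you.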
For example, if a dictionary d has Int values, you can sum them with the reduce instance method: let sum = d.values.reduce(0, combine:+) You can obtain its smallest value (wrapped in an Optional): let min = d.values.minElement() You can list the values that match some criterion: let arr = Array(d.values.filter{$0 < 2}) (The coercion to Array is needed because the sequence resulting from filter is lazy: there isn’t really anything in it until we enumerate it or collect it into an array.) The Foundation framework dictionary type is NSDictionary, and Swift’s Dictionary type is bridged to it. Considerations for passing a dictionary across the bridge are parallel to those I’ve already discussed for arrays. The untyped bridged API characterization of an NSDictionary will be [NSObject:AnyObject], using the Objective-C Foundation object base class for the keys; there are various reasons for this choice, but from Swift’s point of view the main one is that AnyObject is not a Hashable. NSObject, on the other hand, is extended by the Swift APIs to adopt Hashable; and since NSObject is the base class for Cocoa classes, any Cocoa class type will be Hashable. Thus, any NSDictionary can cross the bridge. Like NSArray, NSDictionary key and value types can now be marked in Objective-C. The most common key type in a real-life Cocoa NSDictionary is NSString, so you might well receive an NSDictionary as a [String:AnyObject]. Specific typing of an NSDictionary’s values, however, is much rarer; dictionaries that you pass to and receive from Cocoa will very often have values of different types. It is not at all surprising to have a dictionary whose keys are strings but whose values include a string, a number, a color, and an array. For this reason, you will usually not cast down the entire dictionary’s type; instead, you’ll work with the dictionary as having AnyObject values, and cast when fetching an individual value from the dictionary. 
Since the value returned from subscripting a key is itself an Optional, you will typically unwrap and cast the value as a standard single move. Here’s an example. A Cocoa NSNotification object comes with a userInfo property. It is an NSDictionary that might itself be nil, so the Swift API characterizes it like this: var userInfo: [NSObject : AnyObject]? { get } Let’s say I’m expecting this dictionary to be present and to contain a "progress" key whose value is an NSNumber containing a Double. My goal is to extract that NSNumber and assign the Double that it contains to a property, self.progress. Here’s one way to do that safely, using optional unwrapping and optional casting ( n is the NSNotification object): let prog = (n.userInfo?["progress"] as? NSNumber)?.doubleValue if prog != nil { self.progress = prog! } That’s an optional chain that ends by fetching an NSNumber’s doubleValue property, so prog is implicitly typed as an Optional wrapping a Double. The code is safe, because if there is no userInfo dictionary, or if it doesn’t contain a "progress" key, or if that key’s value isn’t an NSNumber, nothing happens, and prog will be nil. I then test prog to see whether it is nil; if it isn’t, I know that it’s safe to force-unwrap it, and that the unwrapped value is the Double I’m after. (In Chapter 5 I’ll describe another syntax for accomplishing the same goal, using conditional binding.) Conversely, here’s a typical example of creating a dictionary and handing it off to Cocoa. This dictionary is a mixed bag: its values are a UIFont, a UIColor, and an NSShadow. Its keys are all strings, which I obtain as constants from Cocoa. 
I form the dictionary as a literal and pass it, all in one move, with no need to cast anything: UINavigationBar.appearance().titleTextAttributes = [ NSFontAttributeName : UIFont(name: "ChalkboardSE-Bold", size: 20)!, NSForegroundColorAttributeName : UIColor.darkTextColor(), NSShadowAttributeName : { let shad = NSShadow() shad.shadowOffset = CGSizeMake(1.5,1.5) return shad }() ] As with NSArray and NSMutableArray, if you want Cocoa to mutate a dictionary, you must coerce to NSMutableDictionary. In this example, I want to do a join between two dictionaries, so I harness the power of NSMutableDictionary, which has an addEntriesFromDictionary: method: var d1 = ["NY":"New York", "CA":"California"] let d2 = ["MD":"Maryland"] let mutd1 = NSMutableDictionary(dictionary:d1) mutd1.addEntriesFromDictionary(d2) d1 = mutd1 as NSDictionary as! [String:String] // d1 is now ["MD": "Maryland", "NY": "New York", "CA": "California"] That sort of thing is needed quite often, because there’s no native method for adding the elements of one dictionary to another dictionary. Indeed, native utility methods involving dictionaries in Swift are disappointingly thin on the ground: there really aren’t any. Still, Cocoa and the Foundation framework are right there, so perhaps Apple feels there’s no point duplicating in the Swift standard library the functionality that already exists in Foundation. If having to drop into Cocoa bothers you, you can write your own library; for example, addEntriesFromDictionary: is easily reimplemented as a Swift Dictionary instance method through an extension: extension Dictionary { mutating func addEntriesFromDictionary(d:[Key:Value]) { // generic types for (k,v) in d { self[k] = v } } } A set (Set, a struct) is an unordered collection of unique objects. It is thus rather like the keys of a dictionary! 
Its elements must be all of one type; it has a count and an isEmpty property; it can be initialized from any sequence; you can cycle through its elements with for...in. But the order of elements is not guaranteed, and you should make no assumptions about it. The uniqueness of set elements is implemented by constraining their type to adopt the Hashable protocol, just like the keys of a Dictionary. Thus, the hash values can be used behind the scenes for rapid access. Checking whether a set contains a given element, which you can do with the contains instance method, is very efficient — far more efficient than doing the same thing with an array. Therefore, if element uniqueness is acceptable (or desirable) and you don’t need indexing or a guaranteed order, a set can be a much better choice of collection than an array. The fact that a set’s elements are Hashables means that they must also be Equatables. This makes sense, because the notion of uniqueness depends upon being able to answer the question of whether a given object is already in the set. There are no set literals in Swift, but you won’t need them because you can pass an array literal where a set is expected. There is no syntactic sugar for expressing a set type, but the Set struct is a generic, so you can express the type by explicitly specializing the generic: let set : Set<Int> = [1, 2, 3, 4, 5] In that particular example, however, there was no need to specialize the generic, as the Int type can be inferred from the array. It sometimes happens (more often than you might suppose) that you want to examine one element of a set as a kind of sample. Order is meaningless, so it’s sufficient to obtain any element, such as the first element. For this purpose, use the first instance property; it returns an Optional, just in case the set is empty and has no first element. The distinctive feature of a set is the uniqueness of its objects. 
If an object is added to a set and that object is already present, it isn’t added a second time. Conversion from an array to a set and back to an array is thus a quick and reliable way of uniquing the array — though of course order is not preserved: let arr = [1,2,1,3,2,4,3,5] let set = Set(arr) let arr2 = Array(set) // [5,2,3,1,4], perhaps A set is a collection (CollectionType) and a sequence (SequenceType), so it is analogous to an array or a dictionary, and what I have already said about those types generally applies to a set as well. For example, Set has a map instance method; it returns an array, but of course you can turn that right back into a set if you need to: let set : Set = [1,2,3,4,5] let set2 = Set(set.map {$0+1}) // {6, 5, 2, 3, 4}, perhaps If the reference to a set is mutable, a number of instance methods spring to life. You can add an object with insert; if the object is already in the set, nothing happens, but there is no penalty. You can remove an object and return it by specifying the object itself (or something equatable to it), with the remove method; it returns the object wrapped in an Optional, or nil if the object was not present. You can remove and return the first object (whatever “first” may mean) with removeFirst; it crashes if the set is empty, so take precautions (or use popFirst, which is safe). Equality comparison ( ==) is defined for sets as you would expect; two sets are equal if every element of each is also an element of the other. If the notion of a set brings to your mind visions of Venn diagrams from elementary school, that’s good, because sets have instance methods giving you all those set operations you remember so fondly. 
The parameter can be a set, or it can be any sequence, which will be converted to a set; for example, it might be an array, a range, or even a character sequence:

intersect, intersectInPlace
    Yields the elements of this set that also appear in the parameter.
union, unionInPlace
    Yields the elements of this set along with the elements of the parameter.
exclusiveOr, exclusiveOrInPlace
    Yields the elements that appear in this set or in the parameter, but not in both.
subtract, subtractInPlace
    Yields the elements of this set except for those that also appear in the parameter.
isSubsetOf, isStrictSubsetOf; isSupersetOf, isStrictSupersetOf
    Returns a Bool reporting whether this set's elements are all contained in the parameter, or vice versa; the "strict" variants additionally return false if the two sets consist of the same elements.
isDisjointWith
    Returns a Bool reporting whether this set and the parameter have no elements in common.

Here's a real-life example of elegant Set usage from one of my apps. I have a lot of numbered pictures, of which we are to choose one randomly. But I don't want to choose a picture that has recently been chosen. Therefore, I keep a list of the numbers of all recently chosen pictures. When it's time to choose a new picture, I convert the list of all possible numbers to a Set, convert the list of recently chosen picture numbers to a Set, and subtract to get a list of unused picture numbers! Now I choose a picture number at random and add it to the list of recently chosen picture numbers:

let ud = NSUserDefaults.standardUserDefaults()
var recents = ud.objectForKey(RECENTS) as? [Int]
if recents == nil {
    recents = []
}
var forbiddenNumbers = Set(recents!)
let legalNumbers = Set(1...PIXCOUNT).subtract(forbiddenNumbers)
let newNumber = Array(legalNumbers)[
    Int(arc4random_uniform(UInt32(legalNumbers.count)))
]
forbiddenNumbers.insert(newNumber)
ud.setObject(Array(forbiddenNumbers), forKey:RECENTS)

An option set (technically, an OptionSetType) is Swift's way of treating as a struct a certain type of enumeration commonly used in Cocoa. It is not, strictly speaking, a Set; but it is deliberately set-like, sharing common features with Set through the SetAlgebraType protocol. Thus, an option set has contains, insert, and remove methods, along with all the various set operation methods. The purpose of option sets is to help you grapple with Objective-C bitmasks. A bitmask is an integer whose bits are used as switches when multiple options are to be specified simultaneously.
Such bitmasks are very common in Cocoa. In Objective-C, and in Swift prior to Swift 2.0, bitmasks are manipulated through the arithmetic bitwise-or and bitwise-and operators. Such manipulation can be mysterious and error-prone. Thanks to option sets, Swift 2.0 allows bitmasks to be manipulated through set operations instead. For example, when specifying how a UIView is to be animated, you are allowed to pass an options: argument whose value comes from the UIViewAnimationOptions enumeration, whose definition (in Objective-C) begins as follows:

typedef NS_OPTIONS(NSUInteger, UIViewAnimationOptions) {
    UIViewAnimationOptionLayoutSubviews        = 1 << 0,
    UIViewAnimationOptionAllowUserInteraction  = 1 << 1,
    UIViewAnimationOptionBeginFromCurrentState = 1 << 2,
    UIViewAnimationOptionRepeat                = 1 << 3,
    UIViewAnimationOptionAutoreverse           = 1 << 4,
    // ...
};

Pretend that an NSUInteger is 8 bits (it isn't, but let's keep things simple and short). Then this enumeration means that (in Swift) the following name–value pairs are defined:

UIViewAnimationOptions.LayoutSubviews         0b00000001
UIViewAnimationOptions.AllowUserInteraction   0b00000010
UIViewAnimationOptions.BeginFromCurrentState  0b00000100
UIViewAnimationOptions.Repeat                 0b00001000
UIViewAnimationOptions.Autoreverse            0b00010000

These values can be combined into a single value — a bitmask — that you pass as the options: argument for your animation. All Cocoa has to do to understand your intentions is to look to see which bits in the value that you pass are set to 1. So, for example, 0b00011000 would mean that UIViewAnimationOptions.Repeat and UIViewAnimationOptions.Autoreverse are both true (and that the others, by implication, are all false). The question is how to form the value 0b00011000 in order to pass it. You could form it directly as a literal and set the options: argument to UIViewAnimationOptions(rawValue:0b00011000); but that's not a very good idea, because it's error-prone and makes your code incomprehensible.
In Objective-C, you’d use the arithmetic bitwise-or operator, analogous to this Swift code: let val = UIViewAnimationOptions.Autoreverse.rawValue | UIViewAnimationOptions.Repeat.rawValue let opts = UIViewAnimationOptions(rawValue: val) In Swift 2.0, however, the UIViewAnimationOptions type is an option set struct (because it is marked as NS_OPTIONS in Objective-C), and therefore can be treated much like a Set. For example, given a UIViewAnimationOptions value, you can add an option to it using insert: var opts = UIViewAnimationOptions.Autoreverse opts.insert(.Repeat) Alternatively, you can start with an array literal, just as if you were initializing a Set: let opts : UIViewAnimationOptions = [.Autoreverse, .Repeat] To indicate that no options are to be set, pass an empty option set ( []). This is a major change from Swift 1.2 and earlier, where the convention was to pass nil — illogically, since this value was never an Optional. The inverse situation is that Cocoa hands you a bitmask, and you want to know whether a certain bit is set. In this example from a UITableViewCell subclass, the cell’s state comes to us as a bitmask; we want to know about the bit indicating that the cell is showing its edit control. In the past, it was necessary to extract the raw values and use the bitwise-and operator: override func didTransitionToState(state: UITableViewCellStateMask) { let editing = UITableViewCellStateMask.ShowingEditControlMask.rawValue if state.rawValue & editing != 0 { // ... the ShowingEditControlMask bit is set ... } } That’s a tricky formula, all too easy to get wrong. In Swift 2.0, this is an option set, so the contains method tells you the answer: override func didTransitionToState(state: UITableViewCellStateMask) { if state.contains(.ShowingEditControlMask) { // ... the ShowingEditControlMask bit is set ... } } Swift’s Set type is bridged to Objective-C NSSet. The untyped medium of interchange is Set<NSObject>, because NSObject is seen as Hashable. 
Of course, the same rules apply as for arrays. An Objective-C NSSet expects its elements to be class instances, and Swift will help by bridging where it can. In real life, you'll probably start with an array and coerce it to a set or pass it where a set is expected, as in this example from my own code: let types : UIUserNotificationType = [.Alert, .Sound] // a bitmask let category = UIMutableUserNotificationCategory() category.identifier = "coffee" let settings = UIUserNotificationSettings( // second parameter is an NSSet forTypes: types, categories: [category]) Coming back from Objective-C, you'll get a Set of NSObject if Objective-C doesn't know what this is a set of, and in that case you would probably cast down as needed. As with NSArray, however, NSSet can now be marked up to indicate its element type; many Cocoa APIs have been marked up, and no casting will be necessary: override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) { let t = touches.first // an Optional wrapping a UITouch // ... }
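Looking back at the option set discussion for a moment, it can be illuminating to see the raw bit arithmetic that those set-style calls replace. The option names and values here are made up for illustration, echoing the pretend 8-bit layout described earlier:

```swift
// Hypothetical one-bit option values (not real UIKit constants)
let layoutSubviews : UInt8 = 1 << 0  // 0b00000001
let repeats        : UInt8 = 1 << 3  // 0b00001000
let autoreverse    : UInt8 = 1 << 4  // 0b00010000
// "insert" is bitwise-or: combine two options into one bitmask
let mask = repeats | autoreverse     // 0b00011000
// "contains" is bitwise-and: a nonzero result means the bit is set
let hasAutoreverse = (mask & autoreverse) != 0    // true
let hasLayout = (mask & layoutSubviews) != 0      // false
```

This is exactly the sort of formula the contains method spares you from writing, and from getting wrong.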
https://www.oreilly.com/library/view/ios-9-programming/9781491936764/ch04.html
I really didn't want to write another byte-swapping article. But I had to. You see, every night I wake up from a recurring nightmare that some poor BeOS user will be unable to interoperate their BeOS app between Intel and PowerPC machines. I toss and turn, gripped with worry and unable to fall back asleep. Then I think of the public flogging the guilty application writer will get for this terrible deed, and I drift back into a pleasant slumber. The truth is that PR2 has some new byte-swapping macros to prepare you for our Release 3 Intel release and I wanted to tell you about them. Judicious use of these macros *now* will mean that in most cases your application will recompile and interoperate when Release 3 rolls around later. The new stuff in PR2 is contained in the file byteorder.h. The macros can be summarized as follows:

B_LENDIAN_TO_HOST_XXX()   convert from little-endian to host format
B_BENDIAN_TO_HOST_XXX()   convert from big-endian to host format
B_HOST_TO_LENDIAN_XXX()   convert from host to little-endian format
B_HOST_TO_BENDIAN_XXX()   convert from host to big-endian format

Where XXX equals one of the following types: INT16, INT32, INT64, FLOAT or DOUBLE. These macros swap in the case where the host format doesn't match the desired format (big or little endian), but are no-ops in the case where the host format matches the desired format. Additionally, byteorder.h contains two identifiers that will tell you the host endianness:

B_HOST_IS_LENDIAN   1 on Intel, 0 on PowerPC
B_HOST_IS_BENDIAN   0 on Intel, 1 on PowerPC

Here is a simple example demonstrating usage of these new macros. If you read something from a device register (typically little-endian), you might write some code like this to read the data out into a useful form:

count = B_LENDIAN_TO_HOST_INT32(device_registers->count);

On Intel, this macro does nothing, but on PowerPC it will swap the data for you.
It's much easier to read code like this than code that contains a lot of #ifdefs to deal with byte-order issues. #ifdefs are generally frowned upon, and you should be ashamed of yourself if you use them.

The following sample program further demonstrates the usage of these new macros:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <byteorder.h>

    /*
     * Simple employee record
     */
    typedef struct {
        char  first_name[16];  /* First name */
        char  last_name[16];   /* Last name */
        float salary;          /* Salary */
        int16 number;          /* Employee number */
        char  lendian;         /* Is record in little-endian format? */
        char  padding;         /* Padding, to float boundary (4-bytes) */
    } employee_t;

    /*
     * Convert an employee record to host format
     */
    static void
    convert_to_host_format(employee_t *emp)
    {
        if (emp->lendian) {
            /*
             * Little-endian to host conversion
             */
            emp->number = B_LENDIAN_TO_HOST_INT16(emp->number);
            emp->salary = B_LENDIAN_TO_HOST_FLOAT(emp->salary);
        } else {
            /*
             * Big-endian to host conversion
             */
            emp->number = B_BENDIAN_TO_HOST_INT16(emp->number);
            emp->salary = B_BENDIAN_TO_HOST_FLOAT(emp->salary);
        }
    }

    main(int argc, char **argv)
    {
        employee_t emp;

        switch (argc) {
        case 1:
            /*
             * Read an employee record from standard input
             * and print it
             */
            if (fread(&emp, 1, sizeof(emp), stdin) != sizeof(emp)) {
                fprintf(stderr, "Error reading employee record\n");
                exit(1);
            }
            convert_to_host_format(&emp);
            printf("Name: %s %s, #%d, $%8.2f\n",
                   emp.first_name, emp.last_name, emp.number, emp.salary);
            break;

        case 5:
            /*
             * Use the command-line arguments to create an
             * employee record and write it to standard output.
             */
            memset(&emp, 0, sizeof(emp));
            strcpy(emp.first_name, argv[1]);
            strcpy(emp.last_name, argv[2]);
            emp.number = atoi(argv[3]);
            emp.salary = atof(argv[4]);

            /*
             * Note endian-ness for reading later
             */
            emp.lendian = B_HOST_IS_LENDIAN;

            if (fwrite(&emp, 1, sizeof(emp), stdout) != sizeof(emp)) {
                fprintf(stderr, "Error writing employee record\n");
                exit(1);
            }
            break;

        default:
            fprintf(stderr, "incorrect number of arguments\n");
            exit(1);
        }
        exit(0);
    }

This program reads and writes employee records. The records can be written on Intel and read by PowerPC or vice-versa. The program is written in a style I prefer, which is to never swap when writing data, as long as there is an extra field to indicate the endianness of the data when written. Then when reading, you swap the data only if necessary. The advantages of this style are the following:

- You don't swap when exchanging data between like-minded machines, like Intel to Intel or PowerPC to PowerPC. This is the common case, after all.
- The code is easier to understand, because there is only swapping code for one direction (reading). The write case looks pretty much like it would if you didn't have to worry about these things (but unfortunately, you do).

Sometimes you don't have the liberty to define your own data format because you are dealing with some standard format, such as JPEG or device registers. The macros will still be helpful to you in this case, and I recommend using them.

These macros won't solve all of your interoperability problems. You still need to worry about alignment issues. The employee record in this sample program was carefully laid out to be interoperable using natural alignment techniques. Natural alignment is the subject of a previous Newsletter article...

Be Engineering Insights: Will Your DATA Look Like ATAD?

...and won't be discussed further here.

So, that's all I have to say. There is now plenty of information available to you on how to prepare for Intel. If you design your PR2 application right, it is possible to just recompile it for Release 3/Intel and have it interoperate with PowerPC. If you can do this, pour yourself a drink and enjoy it. You deserve it.
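The "flag the writer's endianness, swap only on read" style described above can also be sketched in portable C. Everything here (record_t, host_is_lendian, swap16, and the read/write helpers) is invented for illustration and is not part of the BeOS headers:

```c
#include <stdint.h>

/* A record carries its writer's endianness so readers can decide
 * whether a swap is needed. All names here are hypothetical. */
typedef struct {
    int16_t number;   /* stored in the writer's host byte order */
    uint8_t lendian;  /* 1 if written by a little-endian host */
} record_t;

static int host_is_lendian(void)
{
    uint16_t probe = 1;
    return *(const unsigned char *)&probe == 1;
}

static int16_t swap16(int16_t v)
{
    uint16_t u = (uint16_t)v;
    return (int16_t)((uint16_t)((uint16_t)(u << 8) | (u >> 8)));
}

/* Writing never swaps; it just records the host's endianness. */
static void write_record(record_t *r, int16_t number)
{
    r->number  = number;
    r->lendian = (uint8_t)host_is_lendian();
}

/* Reading swaps only when the writer's endianness differs from ours. */
static int16_t read_record(const record_t *r)
{
    if (r->lendian == (uint8_t)host_is_lendian())
        return r->number;  /* like-minded machines: no work at all */
    return swap16(r->number);
}
```

A same-machine round trip never touches the bytes, which is the common case the article is optimizing for; only a foreign-endian record pays for a swap.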
In the short time I've been at Be I've been getting in touch with as many of our developers as I can. I hear a lot of developers say, "I don't want to risk doing apps for the BeOS until there are lots of apps out for it." To address this I have here a somewhat edited version of an e-mail exchange I had with a developer. (I rewrote it to make me look good.) Reading this just might make you a million dollars.

ME: Hi, I'm Dave Johnson, new Developer Evangelist at Be, Inc. I'm going through the database becoming familiar with all the developers, etc.

DEVELOPER: Dave, as much as I love the BeOS, I have to hold off developing for it until we see Premiere, Photoshop, or After Effects running on it. Until there are big-name applications I don't think you'll have the customer base to support my product.

ME: You already know the BeOS is a powerful tool that enables you to write applications with significantly superior performance—RIGHT NOW—so you CAN solve specific problems for specific customers. The fact is that there are customers out there who want that performance because it will save them time and money. So for them the argument about whether there are "enough applications" is irrelevant. Service bureaus, video editors, graphic design houses don't CARE if Quicken is on the BeOS yet! They care if you can help them save or make money.

Developers who worry about the size of the existing customer base believe that you write a generic application and just sort of put it out there and wait for customers to come to you. In fact this is already happening. We're doing everything we can to get the BeOS into the hands of lots and lots of computer users. But there's also another way to look at this situation.

DEVELOPER: What's that? If you're telling me not to worry about your existing customer base, how are you proposing I can make money now?

ME: By solving specific problems for specific customers—that's how people get rich. If you solve one problem in a compelling way for customers they'll pay you for doing it. Developers who write products that solve specific problems will do well. The BeOS exists, developers understand what it's good at, and you're going to see very good products with vastly superior performance running on it. So why don't YOU do one? Why don't YOU get there first and establish your specific area of problem solving? How often do opportunities like that come up?

DEVELOPER: I don't do one, because I really don't want to reinvent the wheel.

ME: DUDE! Reinventing the wheel on a multi-threaded symmetric multiprocessing OS means that you'll have something to show that blows the customers away! You'll either make a mint selling to those customers, or Adobe will BUY IT from you when they decide it's time to be on the BeOS. You want to be there first.

DEVELOPER: Well. . .

ME: Name one other OS that gives you superior performance on the same hardware. Be is going to do well because it is a compelling solution. Developers know that as soon as they see and use the BeOS. And YOU will win by writing applications that take advantage of the BeOS's performance to solve problems for customers.

DEVELOPER: Yes, the BeOS is a compelling solution, and yes, the geeks get it. But it does no good if the geeks aren't _allowed_ to develop for it. Adobe isn't going to port until they know that Be's "viable," financially. I think you guys are doing a great job, and the BeOS is coming right along. PR2 is great! The real truth is that you guys have absolutely no competition. You blow NT out of the water, and make the Mac OS look like a hack. BUT I still think your main goal should be getting as many apps, and the sexiest apps, on board. Get Adobe to sign on. No one wants to spend money on a set of apps that are all 1.0 versions, from tiny three-person startup companies!

ME: Leading edge developers think that entering an arena where there is limited competition is a GREAT idea!
They see it as a huge opportunity. How often do you get opportunities like this?

DEVELOPER: But why would a customer buy my product—even one tailored to solve a specific problem—to run on an OS that he isn't already using?

ME: Because applications running on the BeOS demonstrate significant performance improvements that translate into MONEY for the customer. Visit some of those customers and learn their needs. Take a multiprocessor computer with you and show a finished app that uses threaded multiprocessing and does one thing for them in a third of the time it takes them now. Just ONE thing. You can do that with the BeOS. Think about all those service bureaus, all the Kinkos, graphic designers, photo processors, video editors, musicians, you name it... Time is money, and speeding things up saves money for those businesses. Lots of businesses are performing some task on a computer that operates inefficiently. Apps running on the BeOS will outperform apps running on Mac or Windows or NT. Period.

DEVELOPER: Gosh. I'm glad I listened to you. I AM going to go out and develop a BeOS application and make a million dollars!

SUMMARY: Hey you developers, listen up, it's opportunity knocking! What I'm suggesting is that you locate key target business segments and find out specifically where in their business the benefits of the BeOS will make them money. Find out where they are bogged down by slow programs, which programs, etc... Then YOU develop a product that takes advantage of the BeOS to solve that performance problem. Customers will PAY YOU to do that! The developers that write those applications for those customers are going to make a lot of money. Go out there and DO IT.

So I'm sitting here at my desk, minding my own business and writing lovely new documentation for the Media Kit, when all of a sudden the powers that be decide to have the new guy start writing articles for the Newsletter.
After momentarily thinking that was a pretty good joke, I realize they're actually serious. As this slowly sinks in, I'm staring at my screen, where pages and pages of sample code and half-written Media Kit documentation are mocking me. Slowly it dawns on me that I can impress everyone with my speedy article writing by ripping out a piece of one of these sample programs I've been working on and writing an article about it.

After perusing my source code, I realize that what I have in my hands is nothing less than the answer to the age-old question: "How do I take an arbitrarily formatted sound and stuff it into that 16-bit stereo sound stream?" What I'll present here is a C function that can mix any sound format into a stream buffer. Ideally, you'll take this code and convert it into a member function in your sound output class.

The BeOS audio streams (both input and output) are 16-bit stereo. Once you've subscribed to the output stream (by creating a BDACStream object and subscribing to it with a BSubscriber), you can enter the stream and start receiving buffers of audio data that you can alter to your heart's content. This is great when the sound you want to play is in 16-bit stereo form. But things get ugly when it's 8-bit mono or some other format. Then you have to manually fix things like sample size and endianness, and mix the sound into the stream, preferably without destroying whatever sound is already in the buffers you receive.

Since you always know what format the output stream is going to be, you can make your life easier by defining a structure I call standard_frame, which defines the format of a single frame of audio data in 16-bit stereo format:

    struct standard_frame {
        int16 left;   // Left channel's sample
        int16 right;  // Right channel's sample
    };
    typedef struct standard_frame standard_frame;

Now you can create the MixStandardFrames() function. It accepts a lot of parameters.
When you add this function to your audio output class, you can probably eliminate a lot of these parameters by referencing member variables of your class instead.

    status_t MixStandardFrames(char *soundData, int32 index,
                               int32 count, standard_frame *out,
                               int32 fileFormat, int32 byteOrder,
                               int32 sampleSize, int32 channelCount)
    {

soundData is a pointer to a buffer containing the sound data to mix into the output buffer. To make your life really easy, it takes a pointer to the beginning of the complete sound. The index parameter is the frame number of the sound to start playing at. So if you want to play starting at the 5,000th frame of the sound, you'd pass a pointer to the 0th frame in soundData and the number 5,000 for index. The count parameter is the number of frames you want to mix, and out is a pointer to a 16-bit stereo sound buffer into which those frames will be mixed.

The remaining parameters specify the file format (B_WAVE_FILE, B_AIFF_FILE, etc), byte order (B_BIG_ENDIAN or B_LITTLE_ENDIAN), sample size (1 or 2 bytes), and channel count (1 or 2 channels, for mono and stereo). These parameters allow MixStandardFrames() to interpret the incoming data correctly.

MixStandardFrames() has lots of local variables. The shortIn, byteIn, and ubyteIn pointers will be used for type-casted pointers to the sound data. The other variables are used, as you'll see later, by the code that does the actual mixing of the sound data.

    register short *shortIn;          // Used for 16-bit samples
    register char  *byteIn;           // For 8-bit signed samples
    register uchar *ubyteIn;          // For 8-bit unsigned samples
    register int32 sample0, sample1;  // Mixed left and right samples
    register int32 temp0, temp1;      // Used in clipping computation
    register int32 stereo;            // Stereo or not?
    register int32 frame;             // The frame offset being processed

The first thing to do is verify that the input pointers aren't null.
That would be a bad thing, and as good programmers we fear bad things, so check for that straight off.

    if (!soundData || !out) {
        return B_ERROR;
    }

Then set the stereo variable to 1 if the sound you're playing is in stereo format, or 0 if the sound is mono. This will be used as a multiplier as you make your way through the input sound data.

    if (channelCount == 1) {
        stereo = 0;
    }
    else {
        stereo = 1;
    }

Now it's time to prepare for the real work. Set up byte, unsigned byte, and short pointers to the first frame of the sound data to be played. When you actually start processing the data, you'll use the pointer appropriate to the type of input data you have. The pointer is cast to the appropriate type and you add the offset to the specified first frame. The offset is either index or index*2, depending on whether the sound is stereo or not. So you multiply index by stereo+1, which will either be index*1 or index*2. The result is a pointer to the first frame to be processed. Then you initialize the frame counter to zero.

    byteIn  = ((char *) soundData) + (index * (stereo + 1));
    ubyteIn = ((uchar *) soundData) + (index * (stereo + 1));
    shortIn = ((short *) soundData) + (index * (stereo + 1));
    frame = 0;   // Start at the very beginning
                 // (a very good place to start)

Now it's time to process the actual sound. Start by dealing with 8-bit sounds, which have a sample size of one byte. Loop through each frame, from 0 to count, using the frame variable as a counter.

    if (sampleSize == 1) {
        while (frame < count) {
            if (fileFormat == B_WAVE_FILE) {
                sample0 = out->left +
                          ((int32) (int8) (*ubyteIn - 128) << 8);
                ubyteIn += stereo;
                sample1 = out->right +
                          ((int32) (int8) (*ubyteIn - 128) << 8);
                ubyteIn++;
            }
            else {
                sample0 = out->left + ((int32) (int8) (*byteIn) << 8);
                byteIn += stereo;
                sample1 = out->right + ((int32) (int8) (*byteIn) << 8);
                byteIn++;
            }

The above code computes one frame of mixed sound data. Note that 8-bit WAVE files get special treatment.
That's because 8-bit WAVE format samples are unsigned, rather than signed. So you have to convert the data into signed format by subtracting 128 from the samples.

sample0 is the left channel's sample. Grab the next sample from the buffer containing the sound you're playing (pointed to by either ubyteIn or byteIn), shift it left eight bits to convert it into a 16-bit number, and add it to out->left, which is the existing sample in the output buffer. Now add the value of the stereo flag to the input pointer. This will only increment the input pointer if the sound is in stereo format (remember that stereo is 0 for mono and 1 for stereo). That way, when you read the right channel, if the sound is stereo, you'll get the right channel's sample, but if the sound is mono, the left channel will be duplicated into the right. Process the right channel's sample, sample1, the same way.

Now you have to deal with possible clipping problems. Clipping occurs when the value of the mixed samples is outside the range that can be represented by a 16-bit number. This is why you mix the sounds in 32-bit variables: you can detect overflow and compensate for it.

First, add $8000 to the left and right samples we've mixed, then mask off the low word. This value, stored in temp0 for the left channel and temp1 for the right, is non-zero if the sample overflowed. Now store the sample into the output buffer. If temp0 (for the left channel) or temp1 (for the right channel) is non-zero, replace the output sample with $7FFF if the sample is positive (this is the maximum possible sample value) or $8000 if the sample is negative (this is the minimum possible sample value). This clips the sample to the range of valid 16-bit integers without accidentally wrapping around and sounding *really* strange.
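The clipping test just described can be sketched as a standalone helper. This is my reconstruction from the prose, not the article's verbatim code, and clip16 is a name I made up:

```c
#include <stdint.h>

/* Sketch of the clipping step described above: adding 0x8000 maps the
 * valid 16-bit range onto 0..0xFFFF, so after masking off the low word
 * anything left over means the mixed sample overflowed. */
static int16_t clip16(int32_t sample)
{
    uint32_t temp = ((uint32_t)(sample + 0x8000)) & 0xFFFF0000u;
    if (temp) {
        /* Overflowed: pin to the nearest representable extreme
         * instead of letting the value wrap around. */
        return (int16_t)(sample > 0 ? 0x7FFF : -0x8000);
    }
    return (int16_t)sample;
}
```

Saturating like this sounds like momentary distortion; wrapping around sounds far worse, because a loud positive peak would flip to a loud negative one.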
Finally, increment the out pointer to point to the next frame of audio in the output buffer, and increment frame, which is the number of frames you've processed so far, and continue looping until the desired number of frames have been mixed into the output buffer.

        }
    }

16-bit input data is handled almost exactly the same way, with two significant differences: WAVE files don't need special consideration, and you have to deal with both big-endian and little-endian input. Since BeOS is natively big-endian, you have to use the read_16_swap() function to read little-endian data. This function returns, byte-swapped, the 16-bit value located at a given address. The input data is pointed to here by shortIn, which represents the 16-bit input data. Otherwise, this code is exactly the same as the 8-bit code above.

    else {
        while (frame < count) {
            if (byteOrder == B_LITTLE_ENDIAN) {
                sample0 = out->left + (int32) read_16_swap(shortIn);
                shortIn += stereo;
                sample1 = out->right + (int32) read_16_swap(shortIn);
                shortIn++;
            }
            else {
                sample0 = out->left + (int32) *shortIn;
                shortIn += stereo;
                sample1 = out->right + (int32) *shortIn;
                shortIn++;
            }
        }
    }

Finally, return B_OK to indicate that, as far as this code can tell, nothing went too terribly wrong while mixing the sound data:

        return B_OK;
    }

This code can be called easily from your stream function to mix any kind of sound data into the output stream. The following sample hook function assumes the existence of certain variables. Normally you'll have these within the class containing the hook function, but space doesn't permit me to show the entire class. These variables are:

- playFrameNumber: Specifies the frame number of the first frame of audio data to play.
- soundLength: Specifies the length, in frames, of the sound being played.
- soundData: Pointer to the sound data to play.
- fileFormat, byteOrder, sampleSize, and channelCount specify the file format, byte ordering, sample size, and number of channels in the sound.
These are passed straight through to MixStandardFrames().

    static bool OutStreamHook(void *userData, char *buffer,
                              size_t count, void *header)
    {
        int32 frameCount;

        // If the sound is done playing, exit the stream
        if (playFrameNumber >= soundLength) {
            return false;
        }

        // Compute the number of frames to copy
        frameCount = count / 4;  // Compute buffer size in frames
        if ((soundLength - playFrameNumber) < frameCount) {
            frameCount = soundLength - playFrameNumber;
            if (!frameCount) {
                return false;  // No more data!
            }
        }

        // Mix ourselves into the buffer.
        MixStandardFrames(soundData, playFrameNumber, frameCount,
                          (standard_frame *) buffer, fileFormat,
                          byteOrder, sampleSize, channelCount);
        playFrameNumber += frameCount;
        return true;
    }

This hook function starts by looking to see if the sound has finished playing (the frame to be played is greater than or equal to the number of frames in the sound); if so, it returns false, which tells the BSubscriber to remove us from the stream.

Now compute the number of frames to copy into the buffer, by dividing the size of the buffer in bytes by four and storing that result in frameCount. Each frame of 16-bit stereo sound is four bytes long. If the number of frames left to play in the sound (soundLength - playFrameNumber) is less than the number of frames in the buffer, change frameCount to that value. This can happen when you are approaching the end of the sound, and there are fewer frames of sound left to play than there are in the buffer you've received.

Once that's all done, call MixStandardFrames() to mix the sound into the buffer, then add frameCount to playFrameNumber to keep track of the next frame to be played. Finally, return true, which tells the BSubscriber that you're not done playing yet.

Perhaps this isn't the best of weeks to discuss e-commerce, right in the wake of an ambitious announcement from one of our noble and worthy elders, but we just made the BeOS available for Web download (visit.
From one angle, this isn't news: we said we'd do it, we're doing it (a little late), we're just doing our job and, in any case, doesn't everyone promote and deliver their software on the Web? True. But, from another angle, our latest move raises a few questions. The hardest one stems from the "free trial" status of our product. If we start on that slope, some critics say, how are we going to make money, how are we going to wean users from the "free BeOS habit"?

Here is our perspective: The world isn't waiting for us, we've got to earn the trust and the commitment of developers and customers. The BeOS is an unproven platform within a context where one company, Microsoft, is enormously successful, and other operating systems have slowed down, in the best of cases, or failed.

Of course, in order to make our case, we can write columns, go to industry conferences and give speeches, or go to trade shows and run demonstrations. This is good, necessary perhaps, but not sufficient. Clearly positioning the product is good; comparing the benefits of a specialized Media OS versus the comforts and limits of a general-purpose platform is commendable marketing hygiene. Opposing IBM's strategy with OS/2 (better DOS than DOS, better Windows than Windows) to our goal of coexisting with Windows is a useful disclaimer. But, in the end, the most potent marketing weapon in our business still is word-of-mouth. If users and developers say good things about the BeOS and our company, its business practices and its people, this is much more credible than any commercial. If, on the contrary, our product is perceived as poorly designed, our staff as unresponsive, no amount of advertising will correct the problem.

This is well and good, but it doesn't quite address the "freebie habit" problem. Why don't we charge for PR2 downloads? After all, it's nice, stable and usable, right? Right now, our view is we need to develop the installed base of BeOS users more than we need the revenue. We'd rather make this investment as soon as possible, learn what users like and dislike and maximize the installed base BeOS developers can count on. And, as we plan to enter the Intel space, we want to gather as much momentum and experience as possible, while keeping in mind that this new field isn't the PowerPC market, only bigger. Mac developers discovered that fact when they moved to Windows. The context, the competition, the belief systems, the purchasing habits are all different. Which puts us in the slightly paradoxical situation of wanting to gain experience quickly while realizing that the knowledge gained must be applied selectively.

Back to the main issue: our current plan is to let the free trial version and the paid-for product coexist. Many companies do that today, and rather successfully. You can use Eudora Lite or pay for Eudora Pro, your choice. You can use QuickView, or pay for the QuickView Plus version, both very nice products. And there are many, many more such examples. These companies obviously want to make it easy for the prospective paying customer to make up his or her mind, and they have faith in the statistical outcome of the trial. Qualcomm, for instance, knows some people will keep using the "free" Eudora Lite instead of forking over the $50 or so required for the Pro version. But, as long as the "Lite" users are happy, they contribute to the positive buzz about Eudora.

This, I hasten to say, isn't meant to represent our detailed calculus of possibilities for the free vs. paid-for versions of the BeOS; these examples merely intend to show it can work and that users are willing to move from free to "commercial." We'll keep in mind that history repeats itself in mysterious ways as we happily add bandwidth for the demand created by the Web availability of our product—and as we are chasing freshly discovered bugs.
Although initially offered as an off-topic thread, others joined in to offer window management suggestions: Hot-key window delivery: Bucky bit-click on a window to send it to some other workspace. Workspace scrolling. Multiple graphics card support, with separate workspaces running on each. Spatial arrangement of workspaces to support multi-headed setups. Smarter desktop layout when switching from a big to a small workspace. The spatial arrangement/multi-headed question generated the most debate: Should individual workspaces be views into a single universal workspace area? Should each monitor have its own list of workspaces that it can switch to, exclusive of all other monitors? And so on. Jerome Chan is curious: “How hard is it to write a web browser? I mean, if the Linux people can write an entire OS why can't we write a web browser ourselves?” Craig Longman suggests looking at the HTML spec to get an idea of the task's complexity. And then move on to Java, and JavaScript, and CSS... Pierre-Emmanuel Chaut pointed out that many of the elements of a good browser are already available (through Kftp, rRaster, Felix, and so on). It's (ahem) simply a matter of putting the pieces together. Various members of the BeDevTalk community offered their services for a international group effort. The thread then veered into a discussion of XML ("Extensible Markup Language") vs HTML. Is XML the Web language of the future? Swap file questions: How about providing a "remote" swap file feature, in which the swap file is on a different drive? How about a swap file on a separate partition? the kernel handle multiple swap files? The swap-file-across-a-wire idea was generally accepted as an obviously beneficial improvement. But does it make sense to dedicate a partition as a swap file? Jake Hamby points out that the BeOS is quite efficient when handling VM I/O, so assigning a partition might not gain anything. 
Osma Ahvenlampi offered this: If you can physically place the partition on the outer cylinders, you can take advantage of the cylinders' naturally heightened speed. THE BE LINE: (From Dominic Giampaolo) Although the kernel can handle "remote" swap files, there's currently no way to tell it to do so. We'll consider it, but it's not likely to happen in the next release. John Tegen sent a well-reasoned description of the necessity of and requirements for an intelligent open system. But where to go from here, wherever "here" is? Somehow, the thread immediately skidded into a discussion of the "Project Magic" Opera browser. A number of listeners have pledged their money to this port; others question the wisdom of buying a product before it exists. More interestingly (and generally), the notion of "nativeness" was scrutinized. Is moving code from one platform to another and stitching up the loose ends enough to deem the result "native"? Many folks don't think so. A native app should take advantage of an OS's features, which implies a significant amount of rewriting. In the case of the BeOS, according to Jon Watte, a primary feature that should be taken advantage of is the multi-threaded environment: “There is no shame in having written code that does not port cleanly to the BeOS, because you couldn't know that we would come along and suddenly do everything so much better. However, what we do is make sure that there is glory in writing code that DOES work in a multi-threaded environment.” Which headers should an app developer use when compiling? Are older headers (where PR1 is oldest) better because of the larger customer base? Or, given the subscription model, will all/most users be automatically "on the bus"? Relatedly, it was suggested that the compiler should note the version of the OS that an app is compiled under. If the app is launched on a newer version, a "renew your OS subscription" Alert could pop up. 
THE BE LINE: You should compile against the headers of the oldest OS version that you want to run on. This means, of course, that you can't take advantage of newer features.
https://www.haiku-os.org/legacy-docs/benewsletter/Issue2-45.html
Timeline Jan 7, 2007: - 8:21 PM Ticket #1427 (NITF driver reading from incorrect file offsets) created by - I'm reporting a bug in the NITF driver. The 2007-01-07 snapshot of … - 8:14 AM Changeset [10578] by - Properly duplicate the GCP list instead of copying the array pointer. - 7:45 AM Changeset [10577] by - Updated to the latest JasPer (1.900.1). - 5:00 AM Changeset [10576] by - Quick addition for the previous update. - 4:56 AM Changeset [10575] by - JasPer UUID library updated. Jan 6, 2007: - 12:28 PM Changeset [10574] by - A few updates based on feedback. - 11:42 AM Changeset [10573] by - Added IMPORTANT notice about Windows CE guide is under construction … - 10:03 AM Changeset [10572] by - Added *.dylib to .cvsignore patterns. - 9:39 AM Changeset [10571] by - Added updates for download page, and gdalautotest. - 9:38 AM Changeset [10570] by - Updated to refer to GDAL 1.4.0 release specifically. - 9:03 AM Changeset [10569] by - Fixed last fix. It was returning NULL in other conditions such as a … Jan 5, 2007: - 8:12 PM Changeset [10568] by - Added .cvsignore files for swig/csharp subtree. - 3:21 PM Changeset [10567] by - New - 3:21 PM Changeset [10566] by - added RFC9 - 1:34 PM Changeset [10565] by - Added note about ESRI:: support in SetFromUserInput?(). - 8:40 AM Changeset [10564] by - Added gdal-announce. - 8:02 AM Changeset [10563] by - Improved Lambert Conformal Conic projection handling from GDAL to ILWIS - 1:55 AM Ticket #1424 (subset in an hdf5 file let gdal_info and gdal_translate bail) created by - […] Jan 4, 2007: - 7:54 PM Changeset [10562] by - handle tiff files with improper color table scaling (bug 1384) - 2:15 PM Changeset [10561] by - Fixed a typo causing that the manifest has not been embedded to … - 1:31 PM Changeset [10560] by - Ensure that CPLReadLineL() returns NULL at end of file. - 12:57 PM Changeset [10559] by - Added motion about mailing list. 
- 11:07 AM Changeset [10558] by - Updated VC++ 2005 project for GDAL on Windows CE - 9:18 AM Changeset [10557] by - gdal.pm etc. have been replaced with lib/Geo/GDAL.pm etc. - 9:01 AM Changeset [10556] by - removed references to pod-files Jan 3, 2007: - 1:22 PM Changeset [10555] by - Reverting the recent changes. - 12:27 PM Changeset [10554] by - added LD_LIBRARY_PATH when invoking the tests - 7:23 AM Changeset [10553] by - Adding prefix for the current directory in the dllmap specification. Jan 2, 2007: - 1:07 PM Changeset [10552] by - Don't use paragraph breaks in Projects list. - 7:27 AM Changeset [10551] by - Update a few items. - 7:16 AM Changeset [10550] - This commit was manufactured by cvs2svn to create tag 'gdal_1_4_0'. - 7:16 AM Changeset [10549] by - updated to 1.4.0 - 7:14 AM Changeset [10548] by - Updated release month. - 7:13 AM Changeset [10547] by - Update download locations. - 7:11 AM Changeset [10546] by - Remove gdal.osgeo.org. Use download.osgeo.org/gdal for downloads. - 6:44 AM Changeset [10545] by - Added support for service description in the url. - 5:18 AM Changeset [10544] by - New Visual C++ 2005 project files for Windows CE port. - 4:44 AM Changeset [10543] by - Fixed URL to mirror on the osgeo.org. - 3:24 AM Changeset [10542] by - Added temporary IMPORTANT note to wince/README file. - 3:22 AM Changeset [10541] by - Updated cpl_config.h.wince and tif_config.h.wince. - 1:33 AM Changeset [10540] by - [WCE] Removed .cvsignore files from wince/*. Run cvs update -P to … - 1:29 AM Changeset [10539] by - [WCE] Removed project files. Cleaning and preparing for new projects. - 1:25 AM Changeset [10538] by - [WCE] Removed wince/wcelibcex. Users will be asked to download the … Jan 1, 2007: - 9:44 PM Changeset [10537] by - Added "optimized" RasterIO() method implementations. - 7:35 PM Changeset [10536] by - Added config based support for simple jpeg files. 
- 6:56 PM Changeset [10535] by - update

Dec 30, 2006:
- 7:51 AM Changeset [10534] by - Fixed spelling mistake.

Dec 29, 2006:
- 11:29 AM Changeset [10533] by - added swapnil
- 11:17 AM Changeset [10532] by - Tom Russo's latest patch for geotiff finding (bug 1344)

Dec 25, 2006:
- 11:11 AM Changeset [10531] by - Support for the MONO/GCC builds
- 11:01 AM Ticket #1419 (ogrinfo crashes on an E00 file) created by - […]

Dec 23, 2006:
- 3:35 PM Changeset [10530] by - Softening csharp's depreciation

Dec 22, 2006:
- 12:32 PM Ticket #1418 (Conversion of MrSID datasets from NASA Zulu to JP2ECW hangs) created by - […]
- 10:43 AM Changeset [10529] by - apply libgeotiff finding patch from Tom in bug 1344
- 7:59 AM Ticket #1417 (Incomplete import of data from TAB file) created by - […]

Dec 21, 2006:
- 10:10 AM Ticket #1416 (Problem with MapInfo file and NTF system) created by - I have downloaded a vectorial map of France in MapInfo? format (freely …
- 6:25 AM Changeset [10528] by - solved memory leaks bug by removing CPLStrdup calls everywhere used …

Dec 20, 2006:
- 8:02 PM Changeset [10527] by - apply patch for 1240
- 7:38 PM Changeset [10526] by - Update the release version for the NEWS.
- 6:49 PM Changeset [10525] by - use mkdir -p like the patch in bug 1192. $(DESTDIR) support was added …
- 1:11 PM Changeset [10524] by - Updated to beta2.
- 7:44 AM Changeset [10523] by - an attempt to fix bug 1344
- 6:46 AM Ticket #1415 (ogr2ogr/shapefile fails to generate valid polygons with hole touching ...) created by - Using ogr2ogr to generate a shapefile will produce invalid polygons …
- 6:33 AM Changeset [10522] by - Solved one warning for some compilers: an unsigned value was compared …

Dec 19, 2006:
- 9:58 PM Changeset [10521] by - Added 1.4.0 information.
- 6:49 PM Changeset [10520] by - white space improvements

Dec 18, 2006:
- 7:40 PM Changeset [10519] by - DGN is fact creatable.
- 2:53 PM Changeset [10518] by - $(DESTDIR) support
- 2:36 PM Changeset [10517] by - add support for postgres' --includedir-server option (8.1 and older?)
- 1:35 PM Changeset [10516] by - Apply Schuyler's patch for bug 1392
- 1:27 PM Changeset [10515] by - add back the IF_ERROR_RETURN_NONE typemap because we're using it for …
- 11:22 AM Ticket #1414 (world files and locale) created by - When operating in the C locale, GDAL cannot read world files using …
- 10:35 AM Ticket #1413 (Whitespace no longer accepted by style string parser) created by - […]
- 10:00 AM Changeset [10514] by - Make sure error stack is cleared before HEshutdown() or else later …
- 9:49 AM Changeset [10513] by - Modified CPLReadLineL() to fix …
- 8:22 AM Changeset [10512] by - added support for x-ogc as well as ogc namespace

Dec 17, 2006:
- 11:58 PM Ticket #1412 (Schema names are not quoted by OGR tools (ogrinfo/ogr2ogr) when using ...) created by - Hey people, When I create a schema with a - in the name and create a …
- 7:49 PM Changeset [10511] by - Avoid really big memory leak in readblock.
- 7:42 PM Changeset [10510] by - avoid leak of external band file handles
- 7:42 PM Changeset [10509] by - avoid leak of pachTileInfo
- 6:20 PM Changeset [10508] by - avoid leak of fpQube FILE *.
- 6:17 PM Changeset [10507] by - avoid gcp related memory leak
- 6:17 PM Changeset [10506] by - call HEshutdown() to clear error stack.
- 4:36 PM Changeset [10505] by - Added SpatialAce?.
- 3:37 PM Changeset [10504] by - support JPEG files with multiple header chunks
- 3:36 PM Changeset [10503] by - added support for reading PE strings in ProjectionX blocks

Dec 14, 2006:
- 9:56 PM Changeset [10502] by - added Oblique Stereographic

Dec 13, 2006:
- 5:44 PM Ticket #1396 (missing tiff tags to read/write metadata) created by - […]
- 10:53 AM Changeset [10501] by - Added OGRStyleTable::Clone() method.
- 10:35 AM Changeset [10500] by - Added OGR_F_SetStyleStringDirectly() declaration.
- 10:34 AM Changeset [10499] by - Added SetStyleStringDirectly?(), GetStyleTable?() and …
- 10:24 AM Changeset [10498] by - Added OGRLayer::SetStyleTableDirectly?(), …
- 10:17 AM Changeset [10497] by - Added documentation for style table methods.
- 10:11 AM Ticket #1395 (ogr2ogr -s_srs "+init=epsg:26592" -t_srs "+init=epsg:32633") created by - Hello Frank, I am transforming SHP from Italian GaussBoagaEst? to …
- 10:05 AM Changeset [10496] by - Clear style table in destructor.

Dec 12, 2006:
- 3:32 PM Ticket #1393 (gdalwarp severely fails to project into sinusoidal) created by - […]
- 2:47 PM Ticket #1392 (patch to add more informative error messages to gdal_crs.c) created by - […]
- 1:22 PM Changeset [10495] by - replace the pod docs with doxygen dox (requires modified doxygenfilter)
- 9:14 AM Changeset [10494] by - Fixed method names in OGRStyleVector interface.

Dec 11, 2006:
- 7:58 PM Changeset [10493] by - Ensure saved character gets properly initialized.
- 1:47 PM Ticket #1389 (problem w/ date fields using ogr2ogr to load shapefile into postgis) created by - the SQL generated by ogr2ogr seems to be truncating the date field. …
- 12:50 PM Changeset [10492] by - new wrappers because new interface
- 12:46 PM Changeset [10491] by - apply some typemaps for GetProjectionMethod?* methods (only for …
- 12:42 PM Changeset [10490] by - typemap(out) char free for GetParameterList?
- 12:32 PM Changeset [10489] by - typemaps for GIntBig and char CSL
- 8:31 AM Ticket #1387 (Problems with MSLINK attributes) created by - […]

Dec 10, 2006:
- 11:11 AM Changeset [10488] by - new wrappers after changes in interface
- 11:06 AM Changeset [10487] by - IF_ERROR_RETURN_NONE, which skips returning the error code
- 11:05 AM Changeset [10486] by - Make stats in GetStatistics? output variables, also a new pattern …
- 8:26 AM Changeset [10485] by - add NITFReadBLOCKA prototype

Dec 9, 2006:
- 8:59 PM Changeset [10484] by - added blocka support from Reiner Beck
- 8:59 PM Changeset [10483] by - Removed extra OGRFME register function.
- 8:58 PM Changeset [10482] by - Various updates, including re: blocka support.
- 10:01 AM Changeset [10481] by - utilize the %perlcode in gdal_perl.i (uses, PackCharacter?)
- 9:57 AM Changeset [10480] by - added sub PackCharacter?, the uses, and the VERSION
- 9:54 AM Changeset [10479] by - * empty log message *
- 9:53 AM Ticket #1386 (tiff_write_6 test fails with segmentation fault) created by - […]

Dec 8, 2006:
- 10:25 PM Changeset [10478] by - Fixed indentation
- 8:59 PM Changeset [10477] by - Adjust our computed histogram min/max to match what erdas expects.
- 5:23 PM Changeset [10476] by - flesh out aux statistics generation
- 2:00 PM Changeset [10475] by - Fixed to use VSI*L API.
- 12:53 PM Changeset [10474] by - Added support for all uppercase name of coverage …
- 7:01 AM Changeset [10473] by - oops. Unflip the x and y axis for bug 1380
- 6:03 AM Ticket #1385 (reading gcs.csv & pcs.csv is not locale invariant) created by - I have to set the locale to "setlocale(LC_ALL, "C");" before doing an …

Note: See TracTimeline for information about the timeline view.
http://trac.osgeo.org/gdal/timeline?from=2007-01-07T20%3A30%3A19-0800&precision=second
walterb972

Struggling to get While loop to work on this one

I'm struggling to get the while loop to work on this code challenge. When I run the code without the while loop, it works fine and tells me if the random number is even or odd. When I add my while loop structure, the program does nothing. Can anyone point me in the right direction here?

    import random

    start = 5

    def even_odd(num):
        # If % 2 is 0, the number is even.
        # Since 0 is falsey, we have to invert it with not.
        return not num % 2

    while start is True:
        num = random.randint(1, 99)
        if even_odd(num) is True:
            print("{} is even".format(num))
        else:
            print("{} is odd".format(num))
        start -= 1

1 Answer

Steven Parker, 203,440 Points

In Python, "is" is an identity comparison. It only matches if both sides refer to the same thing (not just have the same value). An equality comparison is the double equal sign operator ("=="). But you don't need that here, since "start" is a number it will never be equal to "True" (a boolean). You can test a value for "truthiness" just by naming it:

    while start:

Similarly, you can test the result of calling "even_odd" directly also.

walterb972

That helps, thanks!
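Putting the answer's advice together, a corrected version of the challenge code might look like this (a sketch; the report() helper is not part of the original code, it is added here only so the output is easy to check):

```python
import random

def even_odd(num):
    # num % 2 is 0 for even numbers; "not" turns that 0 into True.
    return not num % 2

def report(num):
    # Build the message instead of printing it directly.
    return "{} is {}".format(num, "even" if even_odd(num) else "odd")

start = 5
while start:                      # truthiness test: runs while start != 0
    num = random.randint(1, 99)
    print(report(num))
    start -= 1                    # reaches 0 after five passes, loop ends
```

The original `while start is True:` never runs because the integer 5 is not the same object as the boolean True; `while start:` tests truthiness instead.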
https://teamtreehouse.com/community/struggling-to-get-while-loop-to-work-on-this-one
> one should be able to define two instances having the same signature, as > long as they are in different namespaces [snip] > But now, ghc complains about two instances of Foo Integer, although > there should be none in the namespace main. It's a Haskell problem, not a ghc one. Class instances are not constrained by module boundaries. Other people have found this to be a problem, e.g. in combination with tools like Strafunski - you just cannot encapsulate a class instance in a module. It's a design flaw in Haskell. > I have not found any documentation on why ghc behaves like this and > whether this conforms to the haskell language specification. > Is there any haskell compiler out there that is able to compile the > above example? I think Clean (a very similar language) permits to limit the scope of class instances. Stefan
http://www.haskell.org/pipermail/haskell/2004-July/014322.html
Password input on a treewidgetitem

Hello all (and happy christmas/holidays!)

To the point: How do I get a QTreeWidgetItem to respect a QLineEdit's setEchoMode(QLineEdit.Password)?

I've been banging my head against the wall for this for the last day: I have a subclass of QTreeWidgetItem (which simply adds one extra field to the class). I create an instance of it and add it to my TreeWidget:

    twi = DIMTreeWidgetItem.DIMTreeWidgetItem(uuid.uuid4(), [field_name, '<Empty>'])
    ...
    self.ui_instance.main_window.treeWidget.addTopLevelItem(twi)

I edit an instance based on a double click of that item with:

    self.ui_instance.main_window.treeWidget.editItem(item, column)

This works fine. I have a delegate attached to that column which is simply:

    def __init__(self, parent=None, *args):
        QStyledItemDelegate.__init__(self, parent, *args)

    def createEditor(self, parent, option, index):
        le = QLineEdit('', parent)
        le.setEchoMode(QLineEdit.PasswordEchoOnEdit)
        return le

But it seems this only affects the item during editing. What is the correct way for me to obscure the content of the treewidgetitem after editing?

I haven't done this myself, but I'm guessing you need to override QStyledItemDelegate's paint() method as well. So something like they've done here.

- VRonin, Qt Champions 2018, last edited by

This question is identical to

You just need to translate the C++ code of the solution to python, should be easy
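For reference, one lighter-weight option than overriding paint(): QStyledItemDelegate asks its displayText() method for the string it paints, so returning a masked string hides the committed value while the model keeps the real text. This is not from the thread itself, and the mask() helper below is hypothetical:

```python
def mask(value):
    # Render one bullet per character; the model still holds the real text.
    return "\u2022" * len(str(value))

# In the PyQt delegate, alongside createEditor():
#
#     def displayText(self, value, locale):
#         return mask(value)
```

Note the plain-text value is still reachable through the model (and through tooltips, drag-and-drop, etc.), so this only obscures the on-screen rendering.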
https://forum.qt.io/topic/98057/password-input-on-a-treewidgetitem/2
Your official information source from the .NET Web Development and Tools group at Microsoft.

Ever since ASP.NET's inception over a decade ago, the product has consumed cryptography in some form. We have a variety of use cases: ViewState, ScriptResource.axd and WebResource.axd URLs, FormsAuthentication tickets, membership passwords, and more. And for a while, we just assumed that the types in the System.Security.Cryptography namespace solved all our problems automatically, ignorant of the fact that the callers have the ultimate responsibility to call the APIs correctly. This led to MS10-070, whereby attackers exploited the fact that ASP.NET misused these cryptographic primitives and were able to read sensitive files from the web application directory. We quickly released a patch for that issue, but at the same time we realized that we needed to perform a more thorough investigation of cryptographic uses inside ASP.NET. Through a joint effort between members of the ASP.NET security team, the .NET Framework security team, and Microsoft's crypto board, we identified several areas for improvement and set to work drafting changes.

Whenever any discussion of cryptography in ASP.NET comes up, the topic of conversation eventually comes around to the <machineKey> element. And the confusion is understandable since the term is overloaded. There are four attributes in particular which are most immediately interesting: decryption, decryptionKey, validation, and validationKey. The format of decryptionKey and validationKey is as follows:

    key-format = (hex-string | ("AutoGenerate" [",IsolateApps"] [",IsolateByAppId"]))

Normally these keys are expected to be represented by hex strings, but developers can also specify that ASP.NET use auto-generated keys instead of explicitly-specified keys. If an auto-generated key is used, the runtime will automatically populate the registry key HKCU\Software\Microsoft\ASP.NET\4.0.30319.0\AutoGenKeyV4 with a random number generated by a cryptographically-secure RNG.
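The key-format grammar above is small enough to check mechanically. Here is a hypothetical validator for illustration — parse_key_setting is not an ASP.NET API, just a Python sketch of the grammar:

```python
import re

def parse_key_setting(value):
    # Either "AutoGenerate" plus optional modifiers, or a raw hex string,
    # per the key-format grammar quoted above.
    parts = value.split(",")
    if parts[0] == "AutoGenerate":
        allowed = {"IsolateApps", "IsolateByAppId"}
        extra = set(parts[1:]) - allowed
        if extra:
            raise ValueError("unknown modifier(s): " + ", ".join(sorted(extra)))
        return ("auto", parts[1:])
    if re.fullmatch(r"[0-9A-Fa-f]+", value):
        return ("hex", value.lower())
    raise ValueError("neither AutoGenerate nor a hex string")

print(parse_key_setting("AutoGenerate,IsolateApps"))  # ('auto', ['IsolateApps'])
```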
The registry key holds enough random bits for both an encryption key and a validation key to exist side-by-side without overlapping, and the value is itself protected using DPAPI.

There is an important consequence of the above: the auto-generated machine key is unique per user (where the user is usually the Windows identity of the worker process) on a given machine. If two web applications are deployed on a machine, and if those applications' process identities are equivalent, then the auto-generated keys will be the same for both applications.

ASP.NET provides two optional modifiers that can further alter the auto-generated machine key before it is consumed by the application. (The particular transformation mechanism is described later in this post.) The currently supported modifiers are IsolateApps, which mixes a hash of the application's virtual path into the key, and IsolateByAppId, which additionally mixes in a hash of the application's ID.

If no explicit hex key is specified in an application's configuration file, then the runtime assumes a default value of AutoGenerate,IsolateApps. Thus by default the user's auto-generated machine key is transformed with the application's virtual path, and this transformed value is used as the cryptographic key material.

One notable change to ASP.NET 4's cryptographic pipeline is that we added support for the SHA-2 family of algorithms. This was possible partly due to the fact that Windows XP / Server 2003 were the minimum system requirements for .NET 4, and the latest service pack for both OSes at the time brought native support for SHA-2. We also added a configuration option for specifying the particular algorithms used, so any developer is able to swap in their own SymmetricAlgorithm or KeyedHashAlgorithm-derived types. When the runtime is asked to use auto-generated machine keys in ASP.NET 4, it selects AES with a 192-bit key (this is a holdover from when ASP.NET used Triple DES, which takes a 192-bit key) and HMACSHA256 with a 256-bit key.

Consider only the encryption key for now.
The auto-generated machine key as retrieved from the registry will provide the full 192 bits of entropy. Assume that those bytes are:

    ee 1c df 76 16 ed 18 37 70 05 30 a8 17 d0 e6 69 97 65 21 de 00 3b 92 70

Remember: the auto-generated key as stored in the registry contains both the encryption and the validation keys. The encryption and validation keys are extracted individually from this registry entry, and the transformation is applied to each independently.

Furthermore, recall that if IsolateApps is specified, this key is further transformed before being consumed by the application. The particular manner in which this occurs is that the runtime hashes the application's virtual path into a 32-bit integer, and these 32 bits replace the first 32 bits of the value that we got from the registry. Thus if the application's virtual path is "/myapp", and if that string hashes to 0x179AB900, then IsolateApps will transform the key read from the registry into:

    17 9a b9 00 16 ed 18 37 70 05 30 a8 17 d0 e6 69 97 65 21 de 00 3b 92 70

The immediate consequence of this is that the 192-bit key used for encryption contains only 160 bits of entropy. (If IsolateByAppId is also specified, then the next 32 bits will be likewise replaced by the hash of the application's AppDomainAppID, and the total entropy is reduced to 128 bits.) It is important to note that neither the application's virtual path nor its application ID is secret knowledge, and in fact the former is trivial to guess since it is often in the URL itself.
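In code, the IsolateApps step described above just overwrites the first four key bytes with the 32-bit path hash. The Python sketch below mirrors the "/myapp" example; the 0x179AB900 value and the byte order are taken from that example, and ASP.NET's actual string-hash routine is not reproduced here:

```python
def isolate(key, modifier_hash):
    # Replace the first 32 bits (4 bytes) of the key with the 32-bit
    # hash of the modifier (e.g. the application's virtual path).
    return modifier_hash.to_bytes(4, "big") + key[4:]

registry_key = bytes.fromhex(
    "ee1cdf7616ed18377005" "30a817d0e669976521de" "003b9270")
derived = isolate(registry_key, 0x179AB900)   # "/myapp" hashes to 0x179AB900
print(derived.hex())  # 179ab90016ed1837700530a817d0e669976521de003b9270
```

The output matches the transformed key shown above, which is exactly why the first 32 bits carry no entropy once the modifier is known.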
So if an application is deployed to the virtual path "/" and is using the default behavior of AutoGenerate,IsolateApps, it can be assumed that the first 32 bits of encryption key material are 4e ba 98 b2, which is the hash of the string "/". Transformation of the validation key works in a similar fashion. The default auto-generated key is a 256-bit key for use with HMACSHA256. IsolateApps reduces the entropy to 224 bits, and IsolateApps and IsolateByAppId together will reduce the entropy to 192 bits.

The particular design of <machineKey> requires that there be only a single set of encryption and validation keys at any given time. There are two implications to this design. The first is that this presents a hardship to organizations which require that cryptographic key material be refreshed on a regular basis. The standard way of doing this is to update the keys in Web.config, then redeploy the affected applications. However, all existing encrypted or MACed data will then be rendered invalid since the framework will no longer be able to interpret existing ciphertext payloads.

The second implication has a greater impact on application security but is more subtle to most observers. Things should come more into focus with a bit of exposition.

The API FormsAuthentication.Encrypt takes as a parameter a FormsAuthenticationTicket instance, and it returns a string corresponding to the protected version of that ticket. More specifically, the API serializes the FormsAuthenticationTicket instance into a binary form (the plaintext), and this binary form is run through encryption and MACing processes to produce the ciphertext.
Typical usage is as follows:

    var ticket = new FormsAuthenticationTicket("username", false, 30);
    string encrypted = FormsAuthentication.Encrypt(ticket);

(A similar code path is invoked by FormsAuthentication.SetAuthCookie and other related methods.)

In earlier versions of ASP.NET, the ticket serialization routine automatically prepended 64 bits of randomness before outputting fields like the username, creation and expiration dates, etc. Assuming a good RNG, there is a 1 in 256 chance of the first byte being any particular value, such as 0x54. This would normally seem harmless, but...

The ScriptResourceHandler (ScriptResource.axd) type provides several services for AJAX-enabled ASP.NET applications. The API is called via ScriptResource.axd?d=xyz, where xyz is ciphertext. ScriptResourceHandler will extract the plaintext and perform some action depending on the value of the first plaintext byte. If this first byte is 0x54, the plaintext payload is dumped to the response. (This behavior is intended to support AJAX navigation on browsers which do not include native support for the feature.)

Putting these two details together, one reasons that there is a 1 in 256 chance that an encrypted FormsAuthentication ticket given to a client can be echoed back to ScriptResource.axd for decryption. This highlights the second implication mentioned above: since there is a single set of cryptographic keys for the application, all components necessarily share the same set of keys. All cryptographic consumers within an application need to be aware of each other and differentiate their payloads; otherwise, attackers can start playing the individual consumers off one another. A weakness in a single component can quickly turn into a weakness in an entirely different part of the system.
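That 1-in-256 estimate is easy to sanity-check with a toy model. The Python sketch below models only the "random first byte" behavior described above; nothing in it is ASP.NET-specific, and the helper name is made up:

```python
import random

ECHO_MARKER = 0x54  # the first-byte value that makes the handler echo plaintext

def serialize_ticket_pre_fix(fields):
    # Pre-fix behavior as described: 64 random bits precede the fields.
    return bytes(random.randrange(256) for _ in range(8)) + fields

random.seed(1)
trials = 100_000
hits = sum(serialize_ticket_pre_fix(b"user")[0] == ECHO_MARKER
           for _ in range(trials))
print(hits / trials)   # close to 1/256 (about 0.0039)
```

So roughly one logout/login cycle in 256 yields a ticket that the echo path will happily decrypt for the attacker.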
The project FormsAuthScriptResource in the sample solution demonstrates the above problem with forms authentication and ScriptResource.axd. The application mimics logging out and back in several times in succession until ScriptResource.axd accepts the provided forms authentication ticket as valid. Keep in mind that since we fixed this particular bug as part of MS11-100, I have modified this particular project's Web.config such that the application exhibits the old (pre-fix) behavior. This was done strictly for educational purposes, and production applications should never disable any of our security fixes.

(It should be noted that the root cause of CVE-2011-3416 was a forms authentication ticket serialization flaw that was privately disclosed to us by a third party. The "payload collision" flaw mentioned above was an internal find by our security team. Since fixing CVE-2011-3416 required us to change the forms authentication ticket payload format anyway, we just piggybacked the payload collision fix on top of it, and the whole package went out as part of the MS11-100 release.)

Finally, I want to discuss the MachineKey.Encode and Decode APIs. These APIs were added in ASP.NET 4 due to high customer demand for some form of programmatic access to the crypto pipeline. A common use case is that the application needs to round-trip a piece of data via an untrusted client and doesn't want the client to decipher or tamper with the data. The easiest way to write such code in 4.0 is:

    string ciphertext = ...; // provided by client
    byte[] decrypted = MachineKey.Decode(ciphertext, MachineKeyProtection.All);

As described above, since the same cryptographic keys are used throughout the ASP.NET pipeline, it turns out that the Decode method can also be used to decrypt payloads like forms authentication tickets.
Consumers of the Decode method can try to defend against clients passing these payloads through, but it requires developers to be cognizant of the fact that the Decode method can even be abused in this manner.

Even more dangerous is the way in which the Encode method is often called:

    byte[] plaintext = ...; // provided by client
    string ciphertext = MachineKey.Encode(plaintext, MachineKeyProtection.All);

Now consider what happens if a client provides this payload:

    01 02 83 c7 3d c6 96 53 cf 08 fe 00 80 30 05 6d f3 d1 08 00 05 61 00 64 00 6d 00 69 00 6e 00 00 01 2f 00 ff

This payload happens to correspond to the current serialized forms authentication ticket format (post-MS11-100). The ticket has a username of "admin" and an expiration date of January 1, 2015. If this payload is passed to MachineKey.Encode, and if the resulting ciphertext is then returned to the client, then the client has successfully managed to forge a forms authentication ticket.

It should be apparent that these APIs are a double-edged sword. By providing access to the cryptographic pipeline, we are providing developers with a great deal of power, but we are also trusting developers to use the APIs correctly. And therein lies the problem: correct usage requires intimate knowledge of ASP.NET internal payload formats (not just existing formats, but also any format we might add in the future!), and this is simply an onerous and unrealistic expectation. It's a pit of failure, delicately lined with crocodiles and pointy spikes.

In tomorrow's post, I'll discuss pipeline changes in ASP.NET 4.5, including new configuration switches and APIs that lend themselves to the pit of success.
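The Encode trap generalizes beyond ASP.NET. The sketch below (Python, with made-up names; MAC only, no encryption, for brevity) shows why a "protect anything with the app-wide key" helper doubles as a ticket-forging oracle whenever the verifier shares that key:

```python
import hashlib
import hmac

KEY = b"application-wide validation key"  # one key shared by every feature

def protect(data):
    # Generic MachineKey.Encode-style helper: MAC whatever the caller supplies.
    return data + hmac.new(KEY, data, hashlib.sha256).digest()

def verify_ticket(blob):
    # The ticket verifier uses the same key, so it cannot tell a MAC made
    # by the ticket issuer from one made by the general-purpose helper.
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(KEY, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered")
    return data

forged_fields = b"admin|2015-01-01"   # attacker-chosen "ticket" bytes
blob = protect(forged_fields)         # the app obligingly MACs them
print(verify_ticket(blob))            # prints b'admin|2015-01-01' - forged
```

The usual fix is key separation: derive a distinct subkey per consumer (or bind a purpose label into each MAC) so one component's output is never valid input to another.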
But it appears there is no web.config reference for .NET 4.5 on MSDN... so, for example, "IsolateByAppId" is not documented and cannot be found by MSDN's search.
http://blogs.msdn.com/b/webdev/archive/2012/10/22/cryptographic-improvements-in-asp-net-4-5-pt-1.aspx
#include <BCP_problem_core.hpp>

Core cuts and variables never leave the formulation. Definition at line 31 of file BCP_problem_core.hpp.

The default constructor creates an empty core description: no variables/cuts and an empty matrix.

The copy constructor is declared but not defined to disable it.

This constructor "takes over" the arguments. The created core description will have the content of the arguments in its data members while the arguments lose their content. Definition at line 65 of file BCP_problem_core.hpp. References cuts, BCP_vec< T >::swap(), and vars.

The destructor deletes all data members. This method purges the pointer vector members, i.e., deletes the objects the pointers in the vectors point to.

The assignment operator is declared but not defined to disable it.

varnum(): Return the number of variables in the core. Definition at line 78 of file BCP_problem_core.hpp. References BCP_vec< T >::size(), and vars.

cutnum(): Return the number of cuts in the core. Definition at line 80 of file BCP_problem_core.hpp. References cuts, and BCP_vec< T >::size().

pack(): Pack the contents of the core description into the buffer.

unpack(): Unpack the contents of the core description from the buffer.

vars: A vector of pointers to the variables in the core of the problem. These are the variables that always stay in the problem formulation. Definition at line 48 of file BCP_problem_core.hpp. Referenced by BCP_problem_core(), and varnum().

cuts: A vector of pointers to the cuts in the core of the problem. These are the cuts that always stay in the problem formulation. Definition at line 51 of file BCP_problem_core.hpp. Referenced by BCP_problem_core(), and cutnum().

matrix: A pointer to the constraint matrix corresponding to the core variables and cuts. Definition at line 54 of file BCP_problem_core.hpp.
http://www.coin-or.org/Doxygen/CoinAll/class_b_c_p__problem__core.html
Provided by: erlang-manpages_22.0.7+dfsg-1build1_all

NAME
       global_group - Grouping nodes to global name registration groups.

DESCRIPTION
       This module makes it possible to partition the nodes of a system into global groups. Each global group has its own global namespace, see global(3erl).

       The main advantage of dividing systems into global groups is that the background load decreases while the number of nodes to be updated is reduced when manipulating globally registered names.

       The Kernel configuration parameter global_groups defines the global groups (see also kernel(7) and config).

DATA TYPES

EXPORTS
       global_groups() -> {GroupName, GroupNames} | undefined

              Types:
                 GroupName = group_name()
                 GroupNames = [GroupName]

              Returns a tuple containing the name of the global group that the local node belongs to, and the list of all other known group names. Returns undefined if no global groups are defined.

       info() -> [info_item()]

              Returns a list of information items about the global group state: if the node is synchronized, State is equal to synced; if no global groups are defined, State is equal to no_conf. The list also names the processes that have subscribed to nodeup and nodedown messages.

       monitor_nodes(Flag) -> ok

              Types:
                 Flag = boolean()

              Depending on Flag, the calling process starts subscribing (Flag equal to true) or stops subscribing (Flag equal to false) to node status change messages. A process that has subscribed receives the messages {nodeup, Node} and {nodedown, Node} when a group node connects or disconnects, respectively.

       own_nodes() -> Nodes

              Types:
                 Nodes = [Node :: node()]

              Returns the names of all group nodes, regardless of their current status.

       registered_names(Where) -> Names

              Types:
                 Where = where()
                 Names = [Name :: name()]

              Returns a list of all names that are globally registered on the specified node or in the specified global group.
       send(Name, Msg) -> pid() | {badarg, {Name, Msg}}
       send(Where, Name, Msg) -> pid() | {badarg, {Name, Msg}}

              Types:
                 Where = where()
                 Name = name()
                 Msg = term()

              Searches for Name, globally registered on the specified node or in the specified global group, or (if argument Where is not provided) in any global group. The global groups are searched in the order that they appear in the value of configuration parameter global_groups.

              If Name is found, message Msg is sent to the corresponding pid. The pid is also the return value of the function. If the name is not found, the function returns {badarg, {Name, Msg}}.

       sync() -> ok

              Synchronizes the group nodes, that is, the global name servers on the group nodes. If synchronization is not possible, an error report is sent to the error logger (see also error_logger(3erl)). Returns {error, {'invalid global_groups definition', Bad}} if configuration parameter global_groups has an invalid value Bad.

       whereis_name(Name) -> pid() | undefined
       whereis_name(Where, Name) -> pid() | undefined

              Types:
                 Where = where()
                 Name = name()

              Searches for Name, globally registered on the specified node or in the specified global group, or (if argument Where is not provided) in any global group. The global groups are searched in the order that they appear in the value of configuration parameter global_groups. If Name is found, the corresponding pid is returned; otherwise the function returns undefined.

SEE ALSO
       global(3erl), erl(1)
http://manpages.ubuntu.com/manpages/eoan/man3/global_group.3erl.html
Asked by:

ListBox.SelectedItem stuck at whatever is first selected and other strange ListBox behavior

Hi,

I've built a List<Participant> (Participant being my custom type) and assigned this to ListBox.ItemsSource. (I'm aware this is more the Windows Forms way than the WPF way of databinding, but I'd still like to understand what's going on here - even though suggestions on how to perform my task in a more WPF-y way are also welcome.)

I've set ListBox.SelectionMode to Single, and attached an event handler to the SelectionChanged event. In this handler, I dump the SelectedIndex and SelectedItem to debug output. Contrary to my expectation,

1) SelectedIndex is always -1. This isn't really causing me any trouble, but it's certainly not what I'd expect.

2) SelectedItem refers to the correct object the first time (in the window's lifetime) I select an item, but then keeps referring to this first-selected item regardless of whether I deselect (by clicking the item again while holding down the CTRL key) or select some other item.

3) If I keep clicking around and scrolling a bit (the list has ~320 items) within the list, now and then the list suddenly displays a bunch of items - sometimes many in a row, sometimes non-contiguous items, but it seems always "nearby" items! - as if they were selected. The list keeps firing the SelectionChanged event, but SelectedIndex is forever -1 and SelectedItem (and SelectedValue) always refers to whatever I had selected the first time.
The item type Participant overrides GetHashCode and Equals as follows:

    public override bool Equals(object obj)
    {
        var other = obj as Participant;
        if (other == null)
            return false;
        return (EmailAddress == other.EmailAddress && Name == other.Name);
    }

    public override int GetHashCode()
    {
        return EmailAddress.GetHashCode();
    }

All the instances in the list have unique email addresses, but even if this wasn't the case I've never told the list anything about duplicates or anything like that, so it seems to me the list should, if it needs to perform equality tests at all, do so based on Object.Equals (a reference comparison) rather than the item type's implementation.

How it is even possible to get a UI state that apparently indicates lots of selected items is a mystery to me given that my XAML states SelectionMode="Single", but then again according to SelectedIndex nothing's selected at all, and according to SelectedItem/Value it's always whatever I selected first... For someone new to WPF this isn't an encouraging experience!

In case you'd rather suggest a good WPF way to do things, an explanation of what I'm trying to do follows: I want to display a list of participants, from where the user is required to select one and only one item. The participants should be presented with name and email address. The participant data comes from a file (a CSV file if it matters). A textbox should allow searching (filtering) the list items; any participant whose name or email address contains the search string should be displayed in the list. The list should always reflect the textbox (filter its items accordingly), but the textbox should also reflect the list if an item is selected. The presentation of participants (which includes name and email) doesn't correspond to any single property on the Participant type, though I can always make a presentation wrapper for the type that does expose such a property if this helps.
Please note: I am certain that this weird listbox behavior does not result from anything I'm doing to its data source. I've commented out all code that touches it, assign the ItemsSource only once, and get precisely the same problems anyway.

Hi The Dag,

I performed a test based on your description: I created a custom type and created a list, bound it to ListBox.ItemsSource and showed the items in a specific DataTemplate. It works and outputs the correct SelectedIndex and SelectedItem. I share my test code below; hope it can help you. By the way, I recommend you use the ObservableCollection<T> collection type, since it implements the INotifyCollectionChanged interface, which notifies the control whenever the collection changes. Please refer to the "Binding to Collections" topic in the WPF data binding documentation to know more.

XAML:

    <Grid>
        <ListBox x:
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <Grid>
                        <Border Margin="5" BorderBrush="Blue" BorderThickness="1">
                            <TextBlock Text="{Binding text}" />
                        </Border>
                    </Grid>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>
    </Grid>

C#:

    public partial class Window1 : Window
    {
        public Window1()
        {
            InitializeComponent();
            ObservableCollection<Item> items = new ObservableCollection<Item>();
            for (int i = 0; i < 320; i++)
            {
                items.Add(new Item() { text = "text " + i.ToString() });
            }
            list.DataContext = items;
        }

        private void list_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            Console.WriteLine((sender as ListBox).SelectedIndex);
            Console.WriteLine((sender as ListBox).SelectedItem);
        }
    }

    public class Item
    {
        public string text { get; set; }
    }

I think it is unnecessary to implement the Equals method for the bound item class; the binding engine expands the bound items into ListBoxItem containers and knows the corresponding item for each ListBoxItem. If possible, could you please share some XAML and code-behind from your application?

Hi Bob,

Thanks for the effort to help me out!
I've tried with an ObservableCollection<T> as well, but then discovered that this collection doesn't support sorting (at least, there's no Sort(IComparer<T>) method), which I needed. While it seems odd that one should have to choose between the collection being observable and it being sortable, I didn't want to spend much time on *that* right now, so I just reverted to the List<T>.

My code has now changed, and suddenly the list box began behaving as I'd expect it to, i.e. the selected index, item, and value now reflect what appears selected in the UI, and it's now only possible to select one. So unfortunately I'm not able to reproduce it anymore. It makes me feel a bit stupid - having changed *something* in my code that cured it and having no idea *what* - but that's the way it is. I really don't think I was doing anything too exotic, though, so it would be interesting to know how it could be reproduced, as surely the listbox shouldn't ever behave like it did.

In fact, I wonder if it may be the case that it behaved very strangely right up until the point when I rebooted the machine. But that too is a long shot; I certainly did try closing down VS 2010 and restarting it, as I am not 100% convinced of its internal correctness. (I do occasionally get "unknown compiler error (NullReferenceException)" and similar messages, though usually a simple rebuild (of my tiny project) seems to be enough to get past it.)
Hi The Dag,

Regarding sorting an ObservableCollection<T> or List<T> that is bound to an ItemsControl: according to the articles below, we can set the sort on the CollectionView of the ItemsControl:

How to: Sort and Group Data Using a View in XAML
How to: Sort Data in a View

And the following thread discusses a solution that implements sorting on a derived ObservableCollection<T> class:

Regarding the SelectedIndex and SelectedItem, I know of an issue about them; please refer to this thread:

If the items in the ListBox/ListView (ItemsControl) are the same object reference, the selection behavior may occur incorrectly - for example, if we add the same string to the ListBox several times and then select them, the selection is wrong. So please ensure the objects in the ItemsSource are different references.

As for the compiler error, there is some error in your environment - the Operating System, the .NET Framework or Visual Studio; I am not sure, it needs to be debugged deeply.

If you have any problem, please feel free to let me know.

Sincerely,
Bob Bao

Please remember to mark the replies as answers if they help and unmark them if they provide no help. Are you looking for a typical code sample? Please download the All-In-One Code Framework.

I used to run into a very similar issue inside a DataGridView. One column of that grid was an editable ComboBox. I would load the contents of the grid first, and based on the value of another TextBox cell, a different drop-down list could be loaded in that ComboBox. Initially I used a common binding source, but that would result in the symptoms described in this post. After entirely skipping usage of the binding source, things started working as expected, which is right in line with the comments from Bob Bao below.
Slightly modified code that works properly:

    DataGridViewComboBoxCell cBoxCell = dgvr.Cells[3] as DataGridViewComboBoxCell;
    if (cBoxCell != null)
    {
        cBoxCell.DataSource = dv.ToTable();
        cBoxCell.ValueMember = "Account_Number";
        cBoxCell.DisplayMember = cmAccountingCSAccountDisplayColumn;
    }

Code that used to get me stuck on the first row of the DataGridView:

    bindingSourceAccountMappingAccounts.DataSource = dv.ToTable();
    DataGridViewComboBoxCell cBoxCell = dgvr.Cells[3] as DataGridViewComboBoxCell;
    if (cBoxCell != null)
    {
        cBoxCell.DataSource = bindingSourceAccountMappingAccounts;
        cBoxCell.ValueMember = "Account_Number";
        cBoxCell.DisplayMember = cmAccountingCSAccountDisplayColumn;
    }

Dominik Ras
mintol1@poczta.onet.pl
https://social.msdn.microsoft.com/Forums/vstudio/en-US/8ddc921b-fecc-4053-b770-405bb7b070d6/listboxselecteditem-stuck-at-whatever-is-first-selected-and-other-strange-listbox-behavior?forum=wpf
Payload Logging¶

Logging of request and response payloads from your Seldon Deployment can be accomplished by adding a logging section to any part of the Seldon deployment graph. An example is shown below:

    apiVersion: machinelearning.seldon.io/v1
    kind: SeldonDeployment
    metadata:
      name: seldon-model
    spec:
      name: test-deployment
      predictors:
      - componentSpecs:
        - spec:
            containers:
            - image: seldonio/mock_classifier:1.3
              name: classifier
        graph:
          children: []
          endpoint:
            type: REST
          name: classifier
          type: MODEL
          logger:
            url:
            mode: all
        name: example
        replicas: 1

The logging for the top-level request/response is provided by:

    logger:
      url:
      mode: all

In this example both request and response payloads, as specified by the mode attribute, are sent as CloudEvents to the url. The specification is:

url: Any url. Optional. If not provided then it will default to the default knative broker in the namespace of the Seldon Deployment.
mode: Either request, response or all

Setting Global Default¶

If you don't want to set up the custom logger every time, you are able to set it with the defaultRequestLoggerEndpointPrefix Helm Chart Variable, as outlined in the helm chart advanced settings section. You just have to provide the prefix, which would then always be suffixed with the namespace, where namespace is the namespace your model is running in - hence there will be a requirement to run a request logger in every namespace. An example would be setting it to a request logger whose service is accessible through custom-request-logger; assuming we deploy our request logger in the namespace deep-learning, we should set the helm variable as:

    # ...other variables
    executor:
      defaultRequestLoggerEndpointPrefix: '.'
    # ...other variables

So when the model runs in the deep-learning namespace, it will send all the input and output requests to the service. You will still need to make sure the model is deployed with a specification of what requests will be logged, i.e. all, request or response (as outlined above).
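What arrives at the url is an ordinary HTTP request carrying CloudEvents metadata in ce-* headers (per the CloudEvents HTTP binding), with the payload in the body. As a rough, framework-free sketch of the parsing a custom logging endpoint has to do — the helper function and the sample header/body values below are illustrative, not part of Seldon's API:

```python
import json

def parse_cloudevent(headers, body):
    # Binary-mode CloudEvent: metadata lives in ce-* headers, payload in body.
    # Illustrative helper only -- not a Seldon or CloudEvents SDK function.
    meta = {k[3:].lower(): v for k, v in headers.items()
            if k.lower().startswith("ce-")}
    return meta, json.loads(body)

# Hypothetical headers/body such a logger request could carry:
headers = {
    "Ce-Specversion": "1.0",
    "Ce-Type": "io.seldon.serving.inference.request",
    "Ce-Source": "seldon",
    "Content-Type": "application/json",
}
body = '{"data": {"ndarray": [[1.0, 2.0]]}}'

meta, payload = parse_cloudevent(headers, body)
print(meta["type"])                # io.seldon.serving.inference.request
print(payload["data"]["ndarray"])  # [[1.0, 2.0]]
```

Wiring the same parsing into any HTTP framework's request handler is enough to inspect what the executor sends for mode: request versus mode: response.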
Example Notebook¶ You can try out an example notebook with logging
https://docs.seldon.io/projects/seldon-core/en/latest/analytics/logging.html
# Accessibility (aka a11y)

We built Redwood to make building websites more accessible (we write all the config so you don't have to), but Redwood's also built to help you make more accessible websites. Accessibility shouldn't be a nice-to-have. It should be a given from the start, a core feature that's built-in and well-supported.

There's a lot of great tooling out there that'll not only help you build accessible websites, but also help you learn exactly what that means.

With all this tooling, do I still have to manually test my application? Unequivocally, yes. Even with all the tooling in the world, manual testing's still important, especially for accessibility. The GDS Accessibility team found that automated testing only catches ~30% of all the issues. But just because the tools don't catch 'em all doesn't mean they're not valuable. It'd be much harder to learn what to look for without them.

# Accessible Routing with Redwood Router

For single-page applications (SPAs), accessibility starts with the Router. Without a full-page refresh, you just can't be sure that things like announcements and focus are being taken care of the way they're supposed to be. Here's a great example of how disorienting SPAs can be to screen-reader users. On navigation, nothing's announced. It's important not to understate the severity of this; the lack of an announcement isn't just buggy behavior, it's broken.

Normally the onus would be on you as a developer to announce to screen-reader users that they've navigated somewhere new. That's a lot to ask, and hard to get right, especially when you're just trying to build your app. Luckily, if you're writing good content and marking it up semantically, there's nothing you have to do! Redwood automatically and always announces pages on navigation.
Redwood looks for announcements in this order:

1. RouteAnnouncement
2. h1
3. document.title
4. location.pathname

The reason for this is that announcements should be as specific as possible; more specific usually means more descriptive, and more descriptive usually means that users can not only orient themselves and navigate through the content, but also find it again. If you're not sure if your content is descriptive enough, see the W3 guidelines.

Even though Redwood looks for a RouteAnnouncement first, you don't have to have one on every page—it's more than ok for the h1 to be what's announced. RouteAnnouncement is there for when the situation calls for a custom announcement.

The API is simple: RouteAnnouncement's children will be announced; note that this can be something on the page, or can be visually hidden using the visuallyHidden prop:

```jsx
// web/src/pages/HomePage.js
import { RouteAnnouncement } from '@redwoodjs/router'

const HomePage = () => {
  return (
    // this will still be visible
    <RouteAnnouncement>
      <h1>Welcome to my site!</h1>
    </RouteAnnouncement>
  )
}

export default HomePage
```

```jsx
// web/src/pages/AboutPage.js
import { RouteAnnouncement } from '@redwoodjs/router'

const AboutPage = () => {
  return (
    <>
      <h1>Welcome to my site!</h1>
      {/* this won't be visible */}
      <RouteAnnouncement visuallyHidden>
        All about me
      </RouteAnnouncement>
    </>
  )
}

export default AboutPage
```

Whenever possible, it's good to maintain parity between the visual and audible experience of your app. That's just to say that visuallyHidden shouldn't be the first thing you reach for. But it's there if you need it! Note that if you have more than one RouteAnnouncement, Redwood uses the most specific one; that way, if you have multiple layouts, you can override as needed.
https://redwoodjs.com/docs/accessibility
Red Hat Bugzilla – Bug 64399
ValueError: bad marshal data
Last modified: 2007-11-30 17:07:11 EST

Description of problem: With up2date-2.7.61-7.x.2 on one particular client, all invocations of `up2date` yield a traceback with 'bad marshal data' during one of the imports. What does this indicate as the likely problem and solution? Google seems to indicate corrupted python modules; reinstall the up2date client?

Version-Release number of selected component (if applicable):

How reproducible: Always

Steps to Reproduce:
1. up2date

Actual Results:

    Traceback (innermost last):
      File "/usr/sbin/up2date", line 24, in ?
        from up2date_client import lilocfg
      File "/usr/share/rhn/up2date_client/lilocfg.py", line 14, in ?
        import lilo, iutil
      File "/usr/share/rhn/up2date_client/lilo.py", line 13, in ?
        import iutil
    ValueError: bad marshal data

Additional info:

hmm, could be. What's `rpm -V up2date` show? Have never seen that error before, and the iutil module hasn't changed in months. So corrupt files seem as likely as anything...

rpm -ivh --force fixed it. Didn't get rpm -V output before that, sorry. Will
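The error comes from Python's marshal module choking on a corrupted compiled module (.pyc), which is why force-reinstalling the package fixes it. The failure is easy to reproduce — a sketch on a modern CPython; the bug itself hit Python 1.5-era files, but the failure mode is the same:

```python
import marshal

# Serialize a code object the way a .pyc stores it, then corrupt one byte.
code = compile("x = 1", "<demo>", "exec")
data = marshal.dumps(code)
corrupted = b"\x00" + data[1:]  # invalid type code, as in a damaged .pyc

try:
    marshal.loads(corrupted)
except ValueError as e:
    print(e)  # e.g. "bad marshal data (unknown type code)"
```

Any on-disk damage to the .pyc that reaches an invalid marshal type code surfaces as exactly this ValueError at import time.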
https://bugzilla.redhat.com/show_bug.cgi?id=64399
Search: "wk26"

Here's my piece of advice for new devs out there:

1 - Pick one language to learn first and stick with it, until you grasp some solid fundamentals. (Variables, functions, classes, namespaces, scope, at least)

2 - Pick an IDE, and stick with it for now. Don't worry about tools yet. Comment everything you're coding. The important thing is to comment why you wrote it, and not what it does. Research git and start using version control, even when coding by yourself alone.

3 - Practice, practice and practice. If you got stuck, try reading the language docs first and see if you can figure it out yourself. If all else fails, then go to google and stackoverflow. Avoid copying the solution; type it all and try to understand it.

4 - After you feel you need to go to the next level, research best practices first, and start to apply them to your code. Try to make it modular as it grows. Then learn about tools, preprocessors and frameworks.

5 - Always keep studying. Never give up. We all feel that we have no idea of what we are doing sometimes. That's normal. You will understand eventually. ALWAYS KEEP STUDYING.

Advice to New Devs: Peer review code with co-workers and constantly learn and improve. Ask a lot of questions and during your down time learn something new. :)
https://devrant.com/search?term=wk26
Romain Guy's Magic InfiniteProgressPanel I've mentioned how simple, elegant and powerful JavaFX Script is. Here's a perfect example of these three qualities. A few weeks ago I wanted to show a progress indicator in a multi-tier application that has a rich client developed in JavaFX Script. Whenever the client is waiting for a response from the server, I wanted to show this progress indicator. The screenshot above shows the InfiniteProgressPanel widget that I used from the JavaFX Script UI library. It was invented by Romain Guy, a co-author of the book Filthy Rich Clients. It is a great alternative to a progress bar due to the fact that you don't have to continually calculate what percent complete the operation is. This is because the progress bar is circular (and therefore infinite). In addition, it is very easy to use: You just place the InfiniteProgressPanel in the UI containment hierarchy, and bind its progress attribute to a Boolean variable that is true whenever you want the progress indicator to appear. Here's the code for this compiled JavaFX Script example: /* * CompiledInfiniteProgress.fx * * Developed 2007 by James L. Weaver (jim.weaver at lat-inc.com) * to serve as a compiled JavaFX Script example. */ import javafx.ui.*; import java.lang.System; Frame { var busy = false title: "Infinite Progress Panel Demo" width: 400 height: 400 background: Color.WHITE visible: true onClose: function() { System.exit(0); } content: InfiniteProgressPanel { progress: bind busy content: FlowPanel { content: [ Button { text: "Get Busy" action: function() { busy = true; ConfirmDialog { title: "Patience is a Virtue" message: "Simulating a busy condition" visible:true onYes: function() { busy = false } }; } } ] } } } Please compile and run this example and try it out for yourself! When you click the Get Busy button, the confirmation dialog will be displayed and the infinite progress indicator will appear. 
When you dismiss the dialog by clicking the OK button, the progress indicator will disappear. By the way, the next several posts are going to highlight various UI widgets similar to the way this post does. I'd like to give you an appreciation of the rich set of widgets available in JavaFX Script, and how easy they are to use!

More details to follow,
Jim Weaver

JavaFX Script: Dynamic Java Scripting for Rich Internet/Client-side Applications
Immediate eBook (PDF) download available at the book's Apress site

Thanks for this info, I tried your post and it ran more or less ok. But thanks anyway, it is always a pleasure to learn.
Posted by: Paneles Solares | March 20, 2009 at 10:44 PM

Alex, Thanks for your kind words. As I stated in this post, I plan to show code examples of UI widgets in the next few posts. Do you feel like that is what you and many other readers of this weblog would like to see? Are there other JavaFX-related topics that you'd like me to blog about? Thanks, Jim Weaver
Posted by: Jim Weaver | January 09, 2008 at 09:31 AM

Hi Jim, I tried again with another build of the compiler and it worked nicely. Thank you for your good work. Please keep up the nice work. Regards,
Posted by: AlexChang | January 09, 2008 at 08:23 AM

Alex, Please try the next build. It's working in the current compiler code base, and by now it should be in the continuous build. Thanks, Jim Weaver
Posted by: Jim Weaver | January 09, 2008 at 07:25 AM

Hi Jim, I tried your example for InfiniteProgressPanel, but got a "cannot find symbol" message (symbol: variable progress, location: class targetfile.fx); the error indicates a location error. I use build no. 868 of the compiler. Regards,
Posted by: AlexChang | January 09, 2008 at 05:13 AM
http://learnjavafx.typepad.com/weblog/2008/01/romain-guys-mag.html
greater_than¶

paddle.fluid.layers.greater_than(x, y, cond=None)

- cond – If cond is None, the op will create a variable as the output tensor; the shape and data type of this tensor are the same as those of input x. If cond is not None, the op will set that variable as the output tensor; its shape and data type should be the same as those of input x. Default value is None.

- Returns: The tensor variable storing the output; the output shape is the same as that of input x.

- Return type: Variable; the output data type is bool.

Examples

    import paddle.fluid as fluid
    import numpy as np

    label = fluid.layers.assign(np.array([2, 3], dtype='int32'))
    limit = fluid.layers.assign(np.array([3, 2], dtype='int32'))
    out = fluid.layers.greater_than(x=label, y=limit)  # out=[False, True]
    out1 = label > limit  # out1=[False, True]
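The comparison is elementwise and produces a bool tensor of the same shape; NumPy's > operator has the same semantics, which makes it a convenient way to sanity-check expected outputs without a Paddle install (plain NumPy below, not Paddle API):

```python
import numpy as np

# Same elementwise semantics as fluid.layers.greater_than: compare two
# equally-shaped arrays and get a boolean array of that shape back.
label = np.array([2, 3], dtype="int32")
limit = np.array([3, 2], dtype="int32")

out = label > limit
print(out.tolist())  # [False, True]
print(out.dtype)     # bool
```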
https://www.paddlepaddle.org.cn/documentation/docs/en/api/layers/greater_than.html
I think this says it all = Duck Typing, Tuple, Open Class, method_missing .. class C { public dynamic myField; public dynamic MyProp { get; set; } public dynamic MyMethod(dynamic d) { return d.Foo(); } public delegate dynamic MyDelegate(dynamic d); } and this IDynamicObject interface public class MyDynamicObject : IDynamicObject { public MetaObject GetMetaObject(Expression parameter) { return new MyMetaObject(parameter); } } and meta object overrides public class MyMetaObject : MetaObject { public MyMetaObject(Expression parameter) : base(parameter, Restrictions.Empty) { } public override MetaObject Call(CallAction action, MetaObject[] args) { Console.WriteLine("Call of method {0}", action.Name); return this; } public override MetaObject SetMember(SetMemberAction action, MetaObject[] args) { Console.WriteLine("SetMember of property {0}", action.Name); return this; } public override MetaObject GetMember(GetMemberAction action, MetaObject[] args) { Console.WriteLine("GetMember of property {0}", action.Name); return this; } } On Tue, Oct 28, 2008 at 10:14 PM, Dody Gunawinata <empirebuilder at gmail.com>wrote: > Hmm..is there any hosted VPC made available? I'm based in Cairo and a 23GB > download will probably tie up the whole bandwidth of Africa. > Dody G. > > > On Tue, Oct 28, 2008 at 6:58 AM, Curt Hagenlocher <curt at hagenlocher.org>wrote: > >> Watch the the videos from the PDC, once they're posted. I know that some >> of your questions about use of DynamicObject from within C# are answered by >> Jim Hugunin's talk. Also, if you can download the 23 GB(!) VPC image with >> the Visual Studio 2010 CTP, you'll be able to try the walkthroughs -- and I >> think that one specifically addresses the XML scenario. >> >> >> On Mon, Oct 27, 2008 at 9:32 PM, Dody Gunawinata <empirebuilder at gmail.com >> > wrote: >> >>> I'd call it missed opportunity if what C# 4.0 dynamic does is only to >>> provide the same facilities that VB6 or VB.Net have provided long time ago. 
>>> For example, the COM interop thing is nice, but then for the past 8 years >>> you know that if you want to script some Office or COM objects you don't use >>> C# or IronPython or (insert dynamic language). >>> But it looks like there are more here. DynamicObject type and >>> IDynamicObject looks intriguing and I wonder it finally allows more natural >>> XML or ActiveRecord object access via property call >>> like myDocument.Customer.Name.FirstName without resorting to some static >>> code generation. I also wonder if this also allows C# to simulate open >>> classes just by putting a hashtable of object and stuff delegate, data, >>> property into them . >>> >>> >>> Dody G. >>> >>> On Tue, Oct 28, 2008 at 1:52 AM, Tim Roberts <timr at probo.com> wrote: >>> >>>> On Tue, 28 Oct 2008 00:36:05 +0200, "Dody Gunawinata" >>>> <empirebuilder at gmail.com> wrote: >>>> > Yup. It looks like more information needed on all of these dynamic >>>> features. >>>> > In the first glance it looks like C# 4.0 is turning into VB 6 :) >>>> > >>>> >>>> Are you saying that would be a bad thing? Let us recall that VB6 was >>>> arguably the most successful release in Visual Basic history. There are >>>> still people who won't allow themselves to be dragged away from VB6, >>>> more than a decade after its release. >>>> >>>> -- >>>> Tim Roberts, timr at probo.com >>>> Providenza & Boekelheide, Inc. >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at lists.ironpython.com >>>> >>>> >>> >>> >>> >>> -- >>> nomadlife.org >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at lists.ironpython.com >>> >>> >>> >> > > > -- > nomadlife.org > > -- nomadlife.org -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
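For comparison, the myDocument.Customer.Name.FirstName style of access that Dody asks about is routine on the Python side of the fence via __getattr__/__setattr__ — a toy sketch (plain Python, not DLR or IronPython API; the class is invented purely for illustration):

```python
class DynamicNode:
    """Toy open-class/dynamic-attribute object, in the spirit of the
    myDocument.Customer.Name.FirstName access discussed in the thread."""

    def __init__(self):
        self._fields = {}

    def __getattr__(self, name):
        # Called only when normal lookup fails -- the Python analogue of
        # C# 4.0's dynamic dispatch or Ruby's method_missing.
        if name.startswith("_"):
            raise AttributeError(name)
        return self._fields.setdefault(name, DynamicNode())

    def __setattr__(self, name, value):
        if name == "_fields":
            super().__setattr__(name, value)
        else:
            self._fields[name] = value


doc = DynamicNode()
doc.Customer.Name.FirstName = "Dody"  # intermediate nodes spring into existence
print(doc.Customer.Name.FirstName)    # Dody
```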
https://mail.python.org/pipermail/ironpython-users/2008-October/008785.html
Hacker School admitted me to its Winter 2013 batch last January. I don't think I've written here about my experience on the interviews yet, and I'll do so now. There was so much to say about the second of them that I failed to compose either a diary entry or any letters to friends containing much of value. So what follows is reconstructed from later notes and my memory.

When I was admitted, the practice was to conduct two interviews. The first was a general conversation begetting I suppose a sense of my personal (not professional) goals in applying to Hacker School and how my personality would mesh with the rest of the ecosystem of the place. The second one, which had to be done with a different person, involved pairing on some piece of code that I had already written — adding a small feature or refactoring.

At core, both of these are evaluations of personality. So I can't offer any advice beyond: be your own true self, and in the pairing interview share your actual thinking about the details of your code. Hacker School is a very effective environment for someone who reflects critically about how their code works and reacts without prickliness to other people's suggestions.

For the pairing interview I submitted a Python progress-bar class that I remembered worked when I stopped work on it half a year before. Thirty minutes before the interview I tried it out, found it was broken, and went into high hysterics trying to figure out what was wrong. That proved to be much better preparation for the interview than all the placid reading of the code that I had done earlier. I fixed the last of the bugs a minute or so before the interview started, and was in a good state of mind to talk confidently about details. The interviewer asked me to walk her through the code, and as I did I pointed out places where I thought I could improve functionality. I started out on a fairly high level, but the interviewer wanted to know about details of implementation.
My little class relied on the multiprocessing module to run the progress bar independently of the task whose progress was being tracked and continually received information from the task as ctypes values, and we talked a bit about that. At the very end she asked me how the use of multiprocessing affected running time, something I had not yet learned how to measure and consider. (I started doing so immediately after the interview; the answer is that running time is affected to a disastrous degree, a cost of something like a thirteen-fold slowdown. The project now lives in a directory named abandoned_but_do_not_discard.)

She also commented on my style of naming imported modules, for instance:

    import struct
    import sys
    import os
    import time as T
    import random as R
    import multiprocessing as M
    import ctypes as C

Any module with a long name or that I refer to often I name with one or at most two capital letters. That saves typing and makes it easy to see quickly and unambiguously when a module is in use. The interviewer remarked that she hadn't seen that done before and wondered where I learned it. I no longer recall where I learned it or whether it was my own shortcut. When I joined the batch I was told it wasn't part of best practice and I stopped using it, but I've started again recently; I think it's effective and clear to anyone reading the code with sufficient care.

My feeling was that the interview went well. But when I heard back from Hacker School at the end of the day I observed that I felt surprised that I had been admitted.

I can't recommend Hacker School enough for someone in a situation like mine. At the institution's request for a testimonial recently I described that situation and Hacker School's effect as follows: I came to programming after a decades-long career in a totally unrelated and non-quantitative field. After exposure to a couple of programming languages, and after taking some math and theory courses, I was still a very green peach.
Hacker School ripened me into the ways of the guild of coders: I learned how to read other people's code, to pair-program, to ask for help in a way that was useful to myself and others; I got used to jargon, along with how and when to use it; I learned various best practices and the names of many, many tools. Above all else, I learned to think of myself as a coder, really and truly, and I find myself now part of a growing family of pleasant, helpful brother- and sister-coders. There are things I would like to say about what I see as the long-term effects of the presence of this group of "pleasant, helpful brother- and sister-coders" in the field. But I will save that for another day. [end]
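The slowdown mentioned earlier is easy to measure for yourself; a rough sketch — an illustrative micro-benchmark, not the progress-bar class from the post, and absolute numbers vary by machine — compares plain-attribute updates with writes through a lock-protected multiprocessing.Value:

```python
import time
from multiprocessing import Value

def time_updates(n=100_000):
    # Compare a plain attribute write with a shared ctypes double write.
    class Plain:
        progress = 0.0

    plain = Plain()
    t0 = time.perf_counter()
    for i in range(n):
        plain.progress = i
    plain_cost = time.perf_counter() - t0

    shared = Value("d", 0.0)  # lock-protected double, shareable across processes
    t0 = time.perf_counter()
    for i in range(n):
        with shared.get_lock():
            shared.value = i
    shared_cost = time.perf_counter() - t0
    return plain_cost, shared_cost

p, s = time_updates()
print("plain: %.4fs  shared: %.4fs  ratio: %.1fx" % (p, s, s / p))
```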
https://dpb.bitbucket.io/my-hacker-school-pairing-interview.html
I propose today to learn how to detect the devices connected to the i2c bus in MicroPython. For this tutorial, we will retrieve the measurements returned by a BME280 environment sensor that measures temperature, atmospheric pressure and humidity. The data will then be displayed on a 128×64 pixel monochrome SSD1306 OLED display, a very classic part in Arduino and DIY projects.

I advise you to use the uPiCraft code editor to develop in MicroPython if you are on Windows. On Linux / Raspberry Pi or macOS, follow this tutorial to learn how to use rshell.

Hardware and circuit for an ESP8266 or ESP32 project

For this tutorial, I used a Wemos D1 mini based on the ESP8266. The SCL pin is on pin D1 (GPIO5) and the SDA pin on D2 (GPIO4). The OLED screen and the BME280 must be powered with 3V3.

ESP32 project

The i2c bus is on pins GPIO22 (SCL) and GPIO21 (SDA). However, pin 21 is missing on the Wemos LoLin32 Lite I am currently using. Fortunately, the I2C library allows choosing other pins, as we will see later.

Scan devices connected to the i2c bus in MicroPython

The i2c bus is natively supported by MicroPython. All methods are detailed in the online documentation. The following methods are available for connecting to a sensor and reading data from its registers. The methods are accessible from the machine class.

- I2C.init(scl, sda, freq=400000), initializes the i2c bus. The sda and scl pins must be indicated. It is also possible to change the frequency of the bus.
- I2C.deinit(), stops the i2c bus
- I2C.scan(), scans the i2c bus and returns the addresses of the devices found
- I2C.start(), generates a Start condition on the bus
- I2C.stop(), or a Stop
- I2C.readinto(), reads bytes on the bus and stores them in a buffer
- I2C.write(), writes the buffer on the bus
- I2C.readfrom(addr, nbytes, stop=True), reads nbytes from the slave. Returns an object
- I2C.readfrom_into(addr, buf, stop=True), same, but stores the returned bytes in a buffer
- I2C.writeto(addr, buf, stop=True), allows writing the contents of a buffer to the slave at the indicated address
- I2C.readfrom_mem(addr, memaddr, nbytes, *, addrsize=8), reads nbytes at the memory address of the slave
- I2C.readfrom_mem_into(addr, memaddr, buf, *, addrsize=8), same, but stores the nbytes returned by the slave in a buffer
- I2C.writeto_mem(addr, memaddr, buf, *, addrsize=8), writes the buffer to the specified slave's memory

By default, MicroPython automatically converts hex and decimal values. In general, manufacturers indicate the i2c address in hex. This little bit of code does the opposite conversion. You will be able to more easily identify connected devices. Then, in the code, you can use either the address in hex or in decimal; MicroPython does the automatic conversion.

Create a new script and paste the code below. Save the script as scanneri2c.py, for example, and start the execution by pressing the F5 key. uPiCraft will upload it and launch the script. If the wiring is correct, the scanner must detect the OLED screen (0x3c typically) and the BME280 sensor (0x76 in general). You can also use a BMP180, which will not return the humidity level.

Now let's take a closer look at how we use the i2c bus in MicroPython. First, we need to create an I2C object that will connect to the i2c bus and communicate with the connected devices. On an ESP8266, the SCL pin is on pin D1 (GPIO5) and the SDA pin on D2 (GPIO4). However, the library does not require the use of the official pins. As with Arduino code, we can assign other pins. Here, I connected the i2c bus on pins 22 and 18 of the ESP32 Wemos LoLin32 Lite.

    import machine
    i2c = machine.I2C(scl=machine.Pin(22), sda=machine.Pin(18))  # ESP8266: 5/4

Then the i2c.scan() method retrieves the addresses of the devices found on the bus.
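The scanner script can be as short as the sketch below (a hypothetical reconstruction: describe_devices is an invented helper name; only scan_bus needs the board, since the machine module exists only under MicroPython):

```python
def describe_devices(addresses):
    # i2c.scan() returns addresses as decimal integers; manufacturers'
    # datasheets list them in hex, so show both forms side by side.
    return ["decimal: %d  hex: %s" % (a, hex(a)) for a in addresses]

def scan_bus():
    # Runs on the board only: machine is MicroPython-specific.
    import machine
    i2c = machine.I2C(scl=machine.Pin(22), sda=machine.Pin(18))  # ESP8266: 5/4
    for line in describe_devices(i2c.scan()):
        print(line)

# The conversion alone works on any Python:
print(describe_devices([60, 118]))  # the SSD1306 (0x3c) and the BME280 (0x76)
```

On the board, calling scan_bus() prints one line per responding device.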
Return an object - I2C.readfrom_into (addr, buf, stop = True), same but stores in a buffer the bytes returned - I2C.writeto (addr, buf, stop = True), allows to write the contents of a buffer to the slave at the indicated address - I2C.readfrom_mem (addr, memaddr, nbytes, *, addrsize = 8), read at the memory address of the slave nbytes - I2C.readfrom_mem_into (addr, memaddr, buf, *, addrsize = 8), same but stores in a buffer the nbytes returned by the slave - I2C.writeto_mem (addr, memaddr, buf, *, addrsize = 8), writes the buffer to the specified slave’s memory By default, MicroPython automatically converts hex and decimal values. In general, manufacturers indicate the i2c address in hex. This little bit of code does the opposite conversion. You will be able to more easily identify connected devices. Then in the code, you can use either the address in hex or decimal, MicroPython does the automatic conversion. Create a new script and paste the code below. Save the script as scanneri2c.py, for example, and start the execution by pressing the F5 key. uPicraft will upload it and launch the script. If the wiring is correct, the scanner must detect the OLED screen (0x3c typically) and the BME280 sensor (0x76 in general). You can also use a BME180 that will not return the humidity level. Now let’s take a closer look at how we use the i2c bus in MicroPython. First, we need to create an I2C object that will connect to the i2c bus and communicate with the connected devices. On an ESP8266, the SCL pin is on pin D1 (GPIO5) and the pin SDA on D2 (GPIO4). However, the library does not require the use of official pins. As with Arduino code, we can assign other pins. Here, I connected the i2c bus on board pins 22 and 18 the ESP32 Wemos Lolin32 Lite. import machine i2c = machine.I2C(scl=machine.Pin(22), sda=machine.Pin(18)) # ESP8266 5/4 Then the i2c method. scan () retrieves device addresses as a hex array. 
Read the measurements of a BMP180 / BME280 in MicroPython There are some functional drivers on GitHub or internet. No need to re-invent the wheel. For this tutorial, I used the driver adapted by Catdog2. This driver is based on Adafruit Adafruit_I2C.py library. Go to GitHub to get the BME280 drivers code and paste it into a new script. Save the script as bme280.py. Do F5 to upload it (nothing will run on the board). I have a weakness for the BME280 because it saves a DHT11 / DHT21 or DHT22 if you want to measure the temperature and humidity. We also gain in compactness and energy saving, essential conditions for projects of objects connected on battery. Create a new script and paste the following code. The beginning is identical. We start by initializing an I2C object by indicating the bus pins and then create a bme280 object. It is passed in parameter the object i2c. It is also possible to pass the BME280 address as a parameter if it is different from the default address 0x76 (with the previous scanner, it is easy to find it). The values () method retrieves the measures directly formatted with the unit. import machine, time, bme280 i2c = machine.I2C(scl=machine.Pin(22), sda=machine.Pin(18)) bme = bme280.BME280(i2c=i2c,address=0x76) while True: print("BME280 values:") temp,pa,hum = bme.values print(temp) print(pa) print(hum) time.sleep_ms(2000) Embed an OLED SSD1306 display into MicroPython Now that we have concrete measures, we will display them on a small monochrome OLED screen very classic and already seen in several tutorials. uPiCraft already embeds a driver for SSD1306 displays. It is in the Upi_lib section. Copy the ssd1306.py library to the board. If you do not use uPiCraft, you can recover the original drivers on GitHub. You will need 6 lines of code to display text on an OLED screen in MicroPython. After creating an ssd1306 object that requires horizontal resolution in pixels, the vertical resolution and the i2c object. 
It can also be passed the screen address if it is different from 0x3c. Compared to the Arduino library, we can pass the screen resolution at initialization without having to modify the library.

import machine, ssd1306

i2c = machine.I2C(scl=machine.Pin(22), sda=machine.Pin(18))
oled = ssd1306.SSD1306_I2C(128, 64, i2c, 0x3c)
oled.fill(0)
oled.text("Hello World", 0, 0)
oled.show()

We then have several methods to manage the display:

- poweroff(), turns off the screen; convenient for battery operation
- contrast(), adjusts the contrast
- invert(), inverts the colors of the screen (finally white and black!)
- show(), refreshes the display
- fill(), fills the screen in black (0) or white (1)
- pixel(), turns on a particular pixel
- scroll(), scrolls the screen
- text(), displays text at the indicated x, y position
- hline(), vline() and line() draw horizontal, vertical or arbitrary lines
- rect() and fill_rect() draw an outlined or filled rectangle

It is a very powerful library that has nothing to envy to the Arduino libraries (Adafruit GFX among others). It fully supports MicroPython's FrameBuffer class. Whatever the resolution of the screen used, just indicate the horizontal and vertical resolution to get a correct display.
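The text() method only takes a pixel position, so centering has to be computed by hand. Since the FrameBuffer built-in font is 8x8 pixels per character, a one-line helper (my own, not part of the driver) gives the x offset:

```python
FONT_W = 8  # the FrameBuffer built-in font is 8x8 pixels per character

def center_x(text, display_width=128):
    """x offset that horizontally centers `text` on the display."""
    return max(0, (display_width - len(text) * FONT_W) // 2)

# Usage sketch on the oled object created above:
#   oled.text("Hello", center_x("Hello"), 0)
print(center_x("Hello"))   # 44 on a 128px-wide screen
print(center_x("Hi", 64))  # 24 on a 64px-wide screen
```

The same idea works for any of the supported resolutions; only display_width changes.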
Full project code

Here we come to the end of this tutorial. We now know how to do several extra things in MicroPython:

- Scan the i2c bus
- Retrieve data from an i2c sensor
- Display text or simple geometric shapes on an SSD1306 monochrome OLED display

Create a new script and paste the following code. Modify the following parameters at the beginning of the code according to your configuration:

pinScl = 22       # ESP8266: GPIO5 (D1)
pinSda = 18       # ESP8266: GPIO4 (D2)
addrOled = 60     # 0x3c
addrBME280 = 118  # 0x76
hSize = 64        # Hauteur ecran en pixels | display height in pixels
wSize = 128       # Largeur ecran en pixels | display width in pixels

Save and send the code with the F5 key. I have tested with several screens (128x64, 128x32, 64x48); everything worked perfectly the first time! I was really surprised how easy it is to manage an OLED display with MicroPython code. As the code is not compiled, we save a lot of development time. In the next tutorial, we'll see how to handle multiple Dallas DS18B20 probes.
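The full listing is not reproduced here, but the display side of such a project reduces to fitting the three readings onto text rows. A sketch of that piece, with hypothetical helper names, assuming the 8-pixel-per-character font:

```python
CHAR_W = 8  # FrameBuffer font width in pixels

def fit(line, width_px=128):
    """Trim a string so it fits on one text row of the display."""
    return line[: width_px // CHAR_W]

def screen_lines(temp, pa, hum):
    """Pack the BME280 readings into rows for successive oled.text() calls."""
    return [fit("T " + str(temp)), fit("P " + str(pa)), fit("H " + str(hum))]

for row, line in enumerate(screen_lines("24.05C", "1013.25hPa", "41.2%")):
    print(row * 8, line)  # y offset and text, as in oled.text(line, 0, row * 8)
```

On the board, the loop body would call oled.text(line, 0, row * 8) followed by a single oled.show().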
https://diyprojects.io/oled-display-ssd1306-micropython-example-digital-barometer-bme280-i2c/
Chatlog 2011-02-14 From RDFa Working Group Wiki See CommonScribe Control Panel, original RRSAgent log and preview nicely formatted version. 15:00:00 <manu1> Chair: Manu 15:00:00 <manu1> Present: Ivan, ShaneM, Toby, Nathan, Manu 15:00:00 <manu1> scribenick: manu1 15:00:00 <Zakim> +[IPcaller] 15:00:00 <webr3> Zakim, I am IPcaller 15:00:00 <Zakim> ok, webr3, I now associate you with [IPcaller] 15:00:00 <ivan> zakim, dial ivan-voip 15:00:00 <Zakim> ok, ivan; the call is being made 15:00:00 <Zakim> +Ivan 15:00:00 <manu1> Agenda: 15:00:00 <manu1> Topic: ISSUE-83: CURIEs must require colon 15:00:00 <manu1> 15:00:00 <manu1> Manu: Nathan says that CURIEs should contain a ":" 15:00:00 <manu1> Manu: Shane says that we never wanted @vocab to affect CURIE processing in the way that it does right now - it's a spec bug. 15:00:00 <manu1> Shane: These issues are orthogonal. 15:00:00 <manu1> Nathan: I agree, they're orthogonal - and if we accept @vocab affecting only Term processing, the issue goes away. 15:00:00 <manu1> Ivan: The old CURIE spec allows a colon-less CURIE, right? 15:00:00 <manu1> Ivan: If I haveShane</div> 15:00:00 <tinkster> The <div> above is not the sort of CURIE without a colon that ISSUE-83 discusses. 15:00:00 <ShaneM> This is what Mark proposed in his 'tokens' approach 15:00:00 <tinkster> The CURIE without a colon is a CURIE with no prefix, no colon, but with a suffix. 15:00:00 <ShaneM> tinkster: yes 15:00:00 <tinkster> The definition of CURIE requires a colon whenever a prefix is given. 15:00:00 <ShaneM> tinkster: yes 15:00:00 <webr3> but not a prefix when a colon is given 15:00:00 <ShaneM> q+ to discuss the difference between a 'term' and a 'token' 15:00:00 <manu1> Ivan: What's the difference between a prefix-less CURIE and Term? 15:00:00 <manu1> Manu: The difference is artificial. 
15:00:00 <webr3> iva, all terms are current valid CURIEs too, you just can't use them
15:00:00 <manu1> ack shaneM
15:00:00 <Zakim> ShaneM, you wanted to discuss the difference between a 'term' and a 'token'
15:00:00 <manu1> Shane: We debated this ad-nauseum - at the end of the day, we agreed to introduce terms as a way of having this abstraction.
15:00:00 <manu1> Shane: If you want to re-open the issue - we could do that.
15:00:00 <manu1> Ivan: Maybe the only thing we're missing here is that we don't have an attribute to define terms.
15:00:00 <manu1> Ivan: Maybe we should have @prefix and @term.
15:00:00 <manu1> Shane: You can use @vocab, but that only helps if all of your terms are in the same URL.
15:00:00 <webr3> q+ to say it wouldn't work
15:00:00 <manu1> ack webr3
15:00:00 <manu1> ack [IPcaller]
15:00:00 <Zakim> [IPcaller], you wanted to say it wouldn't work
15:00:00 <manu1> Nathan: It wouldn't necessarily work - you don't know if it's a relative IRI or a CURIE.
15:00:00 <ShaneM> is fine
15:00:00 <tinkster> <entry xmlns="" content="This content attribute is in no namespace, even though the entry element is." />
15:00:00 <manu1> Shane: Looking at our spec, there is language in some of the XHTML family specs that talks about this - I don't think RDFa Syntax has that language - it defines a module in Chapter 9, but it doesn't do it in the way that we normally define modules. The text isn't there that would lock it down.
15:00:00 <manu1> Shane: For example, in the @role attribute spec...
15:00:00 <ShaneM>
15:00:00 <manu1> Ivan: The only instance we know about, is how RDFa is used in ODF - it's in the XHTML namespace - they do it correctly.
15:00:00 <manu1> Shane: Should processors work the way that Toby's does? Yes - is that what most people are going to implement - probably not.
15:00:00 <manu1> Shane: Should we test for it?
XHTML says that is an error
15:00:00 <webr3> tfft
15:00:00 <ShaneM> XHTML M12N says you cannot combine the no namesapce and namespaced versions on the same element
15:00:00 <manu1> Manu: I don't think this should be a processor conformance requirement - it complicates processors too much, for very little benefit.
15:00:00 <manu1> Shane: We agreed to introduce XML+RDFa - are we specifying that via an XHTML-style module? Or are we saying that there are RDFa attributes that go in the null namespace?
15:00:00 <manu1> Ivan: The latter
15:00:00 <manu1> Manu: The latter
15:00:00 <tinkster> I think this should be the host language's business. It a host language wants to put it into namespace, that's their funeral.
15:00:00 <tinkster> (which doesn't mean that we don't need to discuss it - after all, we're trying to take the XHTML+RDFa host language through last call as well!)
15:00:00 <manu1> Manu: In the RDFa Core spec, in the XML+RDFa section - we suggest that RDFa attributes are placed into the null namespace. If a host language wants to place them into a difference namespace, they can do so.
15:00:00 <ShaneM> For more on XHTML M12N attribute collections see
15:00:00 <manu1> PROPOSAL: The RDFa Core spec, in the XML+RDFa section should suggest that RDFa attributes are placed into the null namespace. Host Languages are allowed to place the RDFa attributes into a different namespace.
15:00:00 <ivan> +1
15:00:00 <manu1> +1
15:00:00 <ShaneM> +1
15:00:00 <webr3> +1
15:00:00 <tinkster> +1
15:00:00 <manu1> RESOLVED: The RDFa Core spec, in the XML+RDFa section should suggest that RDFa attributes are placed into the null namespace. Host Languages are allowed to place the RDFa attributes into a different namespace.
15:00:00 <manu1> Topic: ISSUE-84: Cool URIs and HTTPRange-14
15:00:00 <manu1>
15:00:00 <manu1> Manu: WWW TAG wants us to warn people that there is no follow your nose story (yet) for fragment identifiers.
15:00:00 <manu1> Manu: There is no document that states how to interpret fragment identifiers via RFC 3986 - no other document states this - they just want us to state that there is an issue
15:00:00 <manu1> Ivan: This whole issue is completely transparent to RDFa - RDFa is a serialization.
15:00:00 <manu1> Ivan: It's an important issue, but it doesn't have to do w/ the RDFa spec.
15:00:00 <manu1> Shane: We could add language in there to explain this.
15:00:00 <manu1> "Using #foo with RDFa is a great thing. However you should note that the media type registrations (RFC 3986) don't yet talk about this practice."
15:00:00 <manu1> "If you care about the possibility that the element and the linked data node might have different properties, you should use different fragment ids."
15:00:00 <manu1> <div id="currency" about="#currency">
15:00:00 <manu1>
15:00:00 <manu1> <div id="ivan">This has nothing to do with Ivan</div>
15:00:00 <manu1> <div about="#ivan">This has something to do with Ivan</div>
15:00:00 <manu1> Manu: So, this is a best practice issue.
15:00:00 <manu1> Nathan: The @id is a locally scoped name - it's part of the representation.
15:00:00 <manu1> Nathan: The two @about and @id point to two different thing.
15:00:00 <manu1> Nathan: What the TAG is saying is that when you follow your nose, in some cases you get the element - in other cases you get a semantic object.
15:00:00 <manu1> What is this URL: ?
15:00:00 <webr3> with @about two strings are concatenated to create a single name / logical constant -> "" an opaque id
15:00:00 <manu1> Ivan: It's a token - for a reasoning agent a URI is a token.
15:00:00 <tinkster> <div id="ivan-lala">afasdfa</div> <div about="#ivan-lala"></div>
15:00:00 <manu1> Manu: Yes, we'd want to do that
15:00:00 <webr3> tinkster, only as part of the dereferencing process..
you have to split on #, dereference left part, then right part is a locally scoped identifier within the representation - the two are different
15:00:00 <manu1> Ivan: I think this is for the cookbook and not for RDFa Core.
15:00:00 <manu1> Shane: If we can put this in the document in a short paragraph, let's do that.
15:00:00 <tinkster> Dereferencing has little to do with it.
15:00:00 <tinkster> I may not know what the URI <> identifies until I've dereferenced it, but the representations of <> should not define it inconsistently.
15:00:00 <ShaneM> ACTION: ShaneM Introduce a short paragraph about how frgids are not well defined by the corresponding RFCs and therefore why using them incorrect is potentially risky.
15:00:00 <trackbot> Created ACTION-64 - Introduce a short paragraph about how frgids are not well defined by the corresponding RFCs and therefore why using them incorrect is potentially risky. [on Shane McCarron - due 2011-02-21].
15:00:00 <manu1> Topic: Ensuring xmlns backwards-compatibility, but not mentioning xmlns
15:00:00 <manu1>
15:00:00 <manu1> Manu: There seemed to be general agreement on the mailing list that we shouldn't specify xmlns: as one of the RDFa attributes, but should instead deprecate it. This is partly due to the HTML WG decision to not support decentralized extensibility, and also because we now have @prefix and that seems to sit with people better than xmlns:
15:00:00 <manu1> Manu: Shane, do you have any issues or concerns with this direction?
15:00:00 <manu1> Shane: Nope, sounds fine.
15:00:00 <manu1> Side-discussion about the political aspects of MUST vs. SHOULD vs. MAY - general agreement that the nuance will be lost on most people. Important thing is to support backwards compatibility, but show that RDFa is moving away from xmlns:
15:00:00 <ShaneM> ACTION: ShaneM to remove the definition of xmlns:prefix in section 5 and examples.
Add prose in processing rules to ensure that processors are required to process xmlns:prefix for backward compatibility. Removed xmlns:prefix from all examples.
15:00:00 <trackbot> Created ACTION-65 - Remove the definition of xmlns:prefix in section 5 and examples. Add prose in processing rules to ensure that processors are required to process xmlns:prefix for backward compatibility. Removed xmlns:prefix from all examples. [on Shane McCarron - due 2011-02-21].
15:00:00 <ShaneM> For backward compatibility, some Host Languages MAY also permit the
15:00:00 <ShaneM> definition of mappings via <aref>xmlns</aref>. In this case, the value to be mapped is
15:00:00 <ShaneM> set by the XML namespace
15:00:00 <ShaneM> prefix, and the value to map is the value of the attribute — a URI.
15:00:00 <manu1> Manu: We should make it clear that we intend to remove xmlns: from RDFa and that authors shouldn't use it anymore.
15:00:00 <ShaneM> For backward compatibility, RDFa Processors MUST also also permit the definition of mappings via <aref>xmlns</aref>. In this case, the value to be mapped is set by the XML namespace prefix, and the value to map is the value of the attribute — a URI. (Note that prefix mapping via <aref>xmlns</aref> is deprecated, and may be removed in a future version of this specification.)
15:00:00 <manu1> Manu: +1 for something to that effect.
15:00:00 <manu1> Nathan: I think it should be "SHOULD"
15:00:00 <webr3> it wasn't made an ISSUE, i just proposed on list
15:00:00 <webr3> no issue number
15:00:00 <webr3> all text is in:
15:00:00 <webr3> (exactly as discussed)
15:00:00 <manu1> PROPOSAL: Adopt the text that Shane specified above changing the MUST to a SHOULD: For backward compatibility, RDFa Processors SHOULD...
15:00:00 <manu1> +1
15:00:00 <ivan> +1
15:00:00 <webr3> +1 (and remove xmlns:prefix def text)
15:00:00 <ShaneM> +1
15:00:00 <manu1> RESOLVED: Adopt the text that Shane specified above changing the MUST to a SHOULD: For backward compatibility, RDFa Processors SHOULD...
15:00:00 <manu1> Topic: Supporting Terms in @prefix
15:00:00 <manu1> Manu: We already discussed this earlier in the call
15:00:00 <manu1> Manu: If we wanted to support prefix="foafname:". We can't do it because relative IRIs became ambiguous - are they a CURIE or are they a relative IRI? Having colon-less CURIEs is problematic because they're ambiguous vs. relative IRIs that may conflict w/ prefixes imported via @profile.
15:00:00 <manu1> Ivan: What happens when you do this about="foo" and then you pull in a prefix via a @profile that defines "foo" as a prefix? Your markup all of a sudden starts having a different meaning which is hard to debug.
15:00:00 <manu1> General agreement that supporting tokens in @prefix is not possible to do in a way that is safe w/ the current design.
15:00:00 <manu1> Topic: RDFa Error Vocabulary
15:00:00 <manu1> Ivan: Do we want to integrate this into the document?
15:00:00 <manu1> Manu: I thought that we had agreed to do that?
15:00:00 <manu1> Shane: I thought so too.
15:00:00 <manu1>
15:00:00 <manu1> Shane: That will go in an appendix?
15:00:00 <manu1> Ivan: Should it be in the rdfapg namespace? or just in the rdfa namespace?
15:00:00 <manu1> Manu: Same namespace.
15:00:00 <webr3> ok
15:00:00 <manu1> Nathan: From a design perspective, it might be nicer to have them in different namespaces - but I'm easy.
15:00:00 <manu1> General agreement that we should put the RDFa processor graph vocabulary terms in the RDFa namespace.
15:00:00 <manu1> Topic: ISSUE-77: Adding in extra blank node default subjects
15:00:00 <manu1>
15:00:00 <manu1> Ivan: Don't really understand what he's asking for here?
15:00:00 <manu1> Ivan:.
15:00:00 <manu1> Manu: Are we saying that the profile will have processing rules in addition to prefixes/terms?
15:00:00 <manu1> Ivan: Anything in the header would have a subject as a blank node.
15:00:00 <manu1> Ivan: I think that this is not possible because there are other statements in the header that we generate triples for - this would be a backwards compatibility issue.
15:00:00 <ivan> apage satanas
15:00:00 <manu1> Manu: We had discussed this before - you'd need a generic plug-in architecture for the RDFa processing rules - we don't want to go down that road (it would take forever to get it right, if that was even possible)
15:00:00 <manu1> Manu: The result of that would be very difficult to understand for implementers - very meta.
15:00:00 <webr3> " we can't at the minute "
15:00:00 <manu1> Ivan: That's fine if we don't support this w/ me - it's asking for a /lot/ to happen.
15:00:00 <manu1> Ivan: We're shifting towards a default profile - I don't expect Facebook would create this document even if we had this functionality.
15:00:00 <ivan> ogp ->
15:00:00 <ivan> ogp ->
15:00:00 <ivan>
15:00:00 <manu1> Ivan: If they add new properties, that's beyond our control.
15:00:00 <manu1> Shane: We were going to tell people that if they wanted to be in the default profile, they absolutely should not change the semantics of the vocabulary document over time.
15:00:00 <manu1> General agreement that attempting to do this correctly would become a specification and implementation nightmare.
15:00:00 <ivan> action: ivan to answer Harry on issue 77
15:00:00 <trackbot> Created ACTION-66 - Answer Harry on issue 77 [on Ivan Herman - due 2011-02-21].
15:00:00 <Zakim> -[IPcaller]
15:00:00 <ivan> zakim, drop me
15:00:00 <Zakim> Ivan is being disconnected
15:00:00 <Zakim> -ShaneM
15:00:00 <Zakim> -Ivan
15:00:00 <Zakim> -manu1
15:00:00 <Zakim> Team_(rdfa)16:00Z has ended
15:00:00 <Zakim> Attendees were +1.612.217.aaaa, ShaneM, manu1, [IPcaller], Ivan
# SPECIAL MARKER FOR CHATSYNC.
DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000207
http://www.w3.org/2010/02/rdfa/wiki/Chatlog_2011-02-14
Introduction:

- Now that the basic concepts of the Struts Framework are clear, let us design our first Struts 2 application, which prints a welcome message of "Hello World!" to the user with a single button click.
- A Struts 2 application can be coded in two different ways:
  - Without using the Struts 2 plugin
  - By using the Struts 2 plugin provided by the NetBeans IDE
- An application created without the Struts 2 plugin becomes a simple MVC application (more like a Struts 1 application), and transforming it into Struts 2 afterwards is tedious.

Requirements:

Chapter 06 - Struts Installation (if Struts is not already installed)
Chapter 07 - Steps for creating a Struts 2 Application for basic requirements

Program to print user name on button click:

- Here, the required things are (everything else from the previous step, based on Chapter 7, can be deleted):
  - "index.jsp", created by selecting a new Java Server Page document
  - tag elements and attributes
- As per our requirement, we need a label, a button and a form which encapsulates these elements. So to create the web page, the following needs to be added inside the <body> tag:
  - <s:label>: labels can be placed even without the <s:label/> tag.
  - <s:textfield>: the name attribute is compulsory because it will be used to identify the text field in other forms.
  - <s:submit>: the value attribute represents the text displayed on the button. A submit button is normally used to submit the values of the current form to the next form. A normal button does not submit the values, but can be used for simple operations performed on the same page (that is, not on the next page), and a cancel button is used for form cancelling, etc. (as per requirement).
- So index.jsp can be coded as follows:

// index.jsp
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="s" uri="/struts-tags"%>
<html>
    <body>
        <s:form action="hello_action">
            <s:label value="Enter your name"/>
            <s:textfield name="name"/>
            <s:submit value="Submit"/>
        </s:form>
    </body>
</html>

- After running this jsp, the following output can be seen:
- On button click, the page is transferred to the "hello_action" page, which is still unavailable.
So the following error can be seen:

- The page written in the action attribute of the <s:form> tag can be seen in the URL in the above snapshot.
- We also need a result page, which displays the name the user entered in the text field after the button click. So create another jsp page named "result.jsp" inside the jsp folder.
- Add the following statement to the result page for displaying the user name:

Hello <s:property value="name"/>

- The <s:property> tag is used to access field values from form elements using the value attribute. Make sure that the value of the "value" attribute is the same as the name of the form element you want to fetch.
- So result.jsp looks like:

// result.jsp
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="s" uri="/struts-tags"%>
<html>
    <body>
        <h1>Hello <s:property value="name"/></h1>
    </body>
</html>

- Now, to create action pages, create a folder named action_jsp inside Source Packages. The folder here is known as a package.
- Create the action class "hello_action" inside the "action_jsp" package as follows.
- The class needs a private String field called name (check the name attribute of the text field in our form). So the getter and setter will be:

public void setName(String name) {
    this.name = name;
}

public String getName() {
    return name;
}

- Now, on button click this class will be executed. We need one more method that returns the string "success" if the action class executes successfully. The method can have any name, such as execute() or calculate(). The "success" string is pre-declared in the properties of Struts actions. This resembles returning true or false based on the successful run of the action class.

public String execute() {
    return "success";
}

- So the hello_action java file will be as follows:

// hello_action.java
package action_jsp;

import com.opensymphony.xwork2.ActionSupport;

// everything in here must be kept public to have package level access.
public class hello_action extends ActionSupport {

    private String name;

    // getter and setter methods for the form element value
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    // method that returns success or failure on action
    public String execute() {
        return "success";
    }
}

- Next, the action name used in the form must be mapped to the FQN of the action class in struts.xml. [Note: FQN = Fully Qualified Name of the class.]
- In our example, struts.xml can be written as:

<!DOCTYPE struts PUBLIC
    "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
    "http://struts.apache.org/dtds/struts-2.0.dtd">
<struts>
    <package name="default" extends="struts-default">
        <action name="hello_action" class="action_jsp.hello_action" method="execute">
            <result name="success">result.jsp</result>
            <result name="failure">index.jsp</result>
        </action>
    </package>
</struts>

- Just like our struts.xml file, we also have the web.xml file (inside the WEB-INF folder).
- Running the application will show the following page:
- Enter a user name in the text field and click the button. The action class is executed and returns success, so the page mapped to "success" in the <result> tag (the result page) is displayed.
- The user name entered in the first form (index.jsp) is stored in a variable called "name". The value of this variable is retrieved with the <s:property> tag using the value attribute.
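The contents of web.xml are not shown in this tutorial. For reference, a typical Struts 2 web.xml registers the Struts filter roughly as sketched below; the filter class shown is the one used by Struts 2.1.3 and later (older releases used org.apache.struts2.dispatcher.FilterDispatcher), so check it against the version you installed rather than copying this verbatim:

```xml
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
    <filter>
        <filter-name>struts2</filter-name>
        <filter-class>
            org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter
        </filter-class>
    </filter>
    <filter-mapping>
        <filter-name>struts2</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
</web-app>
```

The filter mapping on /* is what routes the hello_action request from the form to the action class configured in struts.xml.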
https://wideskills.com/struts/program-to-print-user-name-button-click-in-struts
The CallContactItem class provides a wrapper for call entries. More...

#include <CallContactItem>

Inherits QObject.

Detailed Description

The CallContactItem class provides a wrapper for call entries. It simplifies the interaction between CallContactModel and CallContactListView and connects the actual call data, such as the phone number, with a contact. This class is part of the Qt Extended server and cannot be used by other Qt Extended applications.

See also CallContactListView, CallContactDelegate, and CallContactModel.

Member Function Documentation

- Creates a new CallContactItem with the given parent. The item is based on the call information provided by cli. See also QCallListItem.
- Returns the call data associated with this item.
- Returns the contact associated with the call entry. See also setContact().
- Returns the id of the associated contact.
- Returns the pixmap for this call entry. The most useful pixmap is the contact's image. If the contact doesn't have a portrait, the pixmap for the phone number's type (dialed/received/missed) is returned.
- Returns an icon representing the contact model field of the call entry. Returns a null icon if no icon is available. See also QContactModel::fieldIcon().
- If this call item is associated with a contact, this function returns the called number. Otherwise the extra information contains the call details, such as "Dialed 2. March 14:30".
- Returns the type of the associated contact field. See also QContactModel::Field.
- Returns the phone number associated with this call item.
- The given number becomes associated with contact. See also contact().
- This is an overloaded member function, provided for convenience. The contact item becomes associated with id, and model is used to look up the details of id.
- Translates the QCallListItem::CallType enum value st into the appropriate QCallList::ListType field.
- Returns a suitable display string for the contact, or the phone number of this entry if no contact information is available.
- Returns the pixmap for type.
https://doc.qt.io/archives/qtextended4.4/callcontactitem.html
What is the latest version of Hibernate using in current industry?

Hibernate DataSource Properties
What are the properties in DataSource? Explain. Also explain hibernate properties.

Manisha - Mar 22nd, 2016
DataSource objects are the preferred means of getting a connection to a data source. DataSource objects can provide connection pooling and distributed transactions. Objects instantiated by cla...

POJOs
What are POJOs? How are POJOs created & used in Hibernate?

Thomas John - Sep 21st, 2015
A Java Bean is a Java class that has getters and setters in order to conform to Java-bean standards. Essentially you declare properties private and use getters and setters to read or modify the data. ...

teddythms - Mar 24th, 2011
You can use the Hibernate Mapping Files and POJOs from a Database wizard to generate files for you. The wizard can generate a POJO and a corresponding mapping file for each table that you select i...

What is component mapping in hibernate?

Thomas John - Sep 21st, 2015
A Hibernate component represents a group of values or properties, not an entity (table). I have a customer table that has address fields. I will create 2 models - customer, address. In this case, Addres...

mani.sivapuram - Feb 21st, 2011
I will explain one realtime scenario here. Consider one Address class as given below:

public class Address {
    private Integer doorNo;
    private String street;
    // provide setters and getters for above properties
}

...

What is the difference between merge and update

Editorial / Best Answer
promisingram - Member Since Oct-2008 | Oct 19th, 2008
Use update() if you are.
But in the case of Merge after closed the session it will work. Store images using Hibernate How do you store a image in oracle using hibernate ? Thomas John - Sep 21st, 2015 1. Create a table with column having data type as blob 2. In the HBM file, specify the type as binary Suresh - Oct 26th, 2013 We need follow procedure to insert image file in to db s/w. convert image file----->byte[]------>java.sql.blob obj--------->Hb session obj-------->jdbc driver---------->insert in db table value When would you use Hibernate and Spring JDBC Template ? Mukesh - Aug 25th, 2015 As per my view Hibernate is complete framework for ORM tool which including broad future for handing database. It has also keeping feature of Mapping (1 to 1, 1 to M,.....) but in Spring JDBC not covered to these all things sampra - Mar 19th, 2012 Spring jdbc and hibernate are working in same way ..it depends what technologies we are using How to create primary key using hibernate? sherry - Jul 14th, 2015 Using @Id annotation with the field or its getter. To attach generator user, @GeneratedValue vimala - Jul 13th, 2012 In hibernate mapping fileCode - .hbm.xml add <id> under <class> - <class name="class name" table="table name"> - <id name="pojo variable name" column="table column name"/>//primarykey mapping - <property name="pojo variable name" column="table column name/>//non-primarykey mapping - </class> Why we are using Hibernate? why we are using Hibernate instead of JDBC where JDBC is less time taking than Hibernate? sherry - Jul 14th, 2015 Industry research results: Hibernate adds < 10% performance overhead compared to pure JDBC calls. Very tough to achieve once the business to DAO mappings, custom caching etc. is self-written for the p... Kaushal - Apr 8th, 2015 Hibernate is a syntactic sugar coating over JDBC that helps you focus only on your logic to tackle the problem while taking care of rest of the things like registering drivers, exception handling and all the other stuff. 
Also it shields your code from being dependent on the type of Database. Please explain the relationships in hibernate Akshya Kumar Jena - Jul 8th, 2015 Below mentioned relationships are supported by Hibernate. 1. One to One 2. One to Many 3. Many to One 4. Many to Many 5. One to Many / Many to One bi-directional Senthilmurugan - May 2nd, 2015 One to One Relationships One to Many and Many to One Relationships Many to Many Relationships Self Referencing Relationships What is the use of cascade in hbm file? Read Best Answer Editorial / Best Answeranupsadhwani60 - Member Since Sep-2007 | Sep 16th, 2007). venkat - Jan 22nd, 2013 Using cascade property we can ensure relationship with parent to child harikishanrao - Dec 1st, 2010 Whenever we have a parent child relation ship, if parent record is changed then child record should also be changed. Primary key must be reflected in the child key What is the main advantage of using the hibernate than using the sql ABHAY RAI - Aug 25th, 2012 Hibernate provide object oriented functionality and hibernate easily migrate different database without any query changes. Ayaz Roomy - Apr 27th, 2012 - Hibernate is something which is totally based on ObjectOrientedProgramming concept.where as SQL is based on Querys .But Hibernate also uses sql queries but it is using Objects to handle the res... Hibernate JDBC Driver What type of JDBC driver is used in hibernate? nabi_alam - Mar 30th, 2012 It depends on the Database what we are using.According to the database we use corresponding Driver. Q1: What is the difference between Hibernate and EJB 2.1? Q2: Which one, among EJB 2.1 and Hibernate, is better?Explain with reason.Q3: When to use EJB 2.1 and when to use Hibernate? Explain with scenarios. srinivasaraobora - Sep 20th, 2011 Advantages of Hibernate Over EJB Entity bean is also used for object oriented view. It needs lot of things to configure for make it possible and Lot of coding need for it. After all these performance ... 
sony v george - Apr 4th, 2007 Basically Ejb and Hibenate is enterly different one But having realtion with Entitybean in Ejb and Hibernate. Entity bean is also used for object orientd view . It need lot things to configures fo mak... How do we create new table apart from mapping with existing table ? CrackHead - Aug 8th, 2011 Hibernate automatically creates new tables if there are corresponding POJOs. If you know how to write/read to/from an existing table using Hibernate, then you can easily create a table. Step#1: Creat... What is database persistence?How well it is implemented in Hibernate Read Best Answer Editorial / Best Answermani.sivapuram - Member Since May-2010 | Feb 14th, 2011. mani.sivapuram - Feb 14th, 2011 There are mainly three states in hibernate a) Transient state b) Persistant state c) Detatched state Take the employee object place the data in that object like usename, password, empid. Now th... prasadkuppa - Apr 23rd, 2008 Preserving the data inside a database is database persistence. Persistence is once of the fundamental concepts of application developement. There are many alternatives to implement persistence. Object... What is the advantage of Hibernate over jdbc? Ballu Ashok - Jan 21st, 2010 Hibernate Advantages:Productivity: Improves the productivity by eliminating the JDBC tedious code. ... imam.tunduru - Sep 9th, 2009 Advantages OF Hibernate: - First of all will provide business entity form of representation of DB tables. - Hibernate provides an interface to develop our application code which is not specific to an... How JPA and hibernate are related as per EJB 3 specifications? krishna_kanth83 - May 29th, 2008 first things first ..i have just seen ppl venting thier personal grudge over java / jpa or whatever over this forum.suns official sacceptence that jpa is a failure. please lets keep it out.as for the ... Sri - Oct 12th, 2007 JPA is official acceptance from SUN about its failure on EJB. 
It has to abandon its EJB model to go to ORM model. Finally, Sun should provide developers some tools to migrate Java programs to .net easily with out pain Why do you need ORM tools like hibernate? mani.sivapuram - Feb 14th, 2011 since we want to reduce burden on the developer so for big projects if any maintaince is requered then problem finding and bug fixing is easy in case of hibernate.I can obey with one thing it we need ... meetaskjain - Mar 8th, 2010 Hibernate allows the application developer to concentrate on the business logic instead of writing complex SQLs. Aprt from this hibernate would take care of the database connections and managing conne... Hibernate What does Hibernate mean? How it's work? teddythms - Mar 24th, 2011 Hibernate is a framework, which enables your applications to interact with DBs using Object Relational Mapping.Hibernate provides a solution to map database tables to a class. It copies the database d... Ans
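To make the cascade answers above concrete, here is a minimal classic hbm.xml fragment. The Parent/Child class names and column names are hypothetical, chosen only for illustration. Setting cascade on the collection tells Hibernate to propagate save, update and delete operations from the parent to its children:

```xml
<class name="Parent" table="PARENT">
  <id name="id" column="PARENT_ID">
    <generator class="native"/>
  </id>
  <!-- cascade="all" propagates save/update/delete from Parent to each Child -->
  <set name="children" cascade="all" inverse="true">
    <key column="PARENT_ID"/>
    <one-to-many class="Child"/>
  </set>
</class>
```

With cascade="all-delete-orphan" instead of "all", a child removed from the collection is additionally deleted from the database rather than left orphaned.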
http://www.geekinterview.com/Interview-Questions/J2EE/Hibernate
As you can see from the table, however, more than half of the functions in the toolkit are totally new for Visual Studio .NET developers. The purposes behind most of them can be gleaned from their titles (see the CHM file for full descriptions and parameters) and, considering this is a free set of classes, as you review the function list I'm sure you will agree that many could be very useful. As with native Visual Studio .NET functions, some of the ones in the VFP Toolkit are overloaded; fortunately, IntelliSense will kick in when you need some coaching, and there is an XML dynamic help file included with the toolkit.

Using the VFP Toolkit

As mentioned earlier, the VFP Toolkit comes with an installation batch file. Once registered, you can begin using the VFP Toolkit like any other .NET assembly. To incorporate the toolkit into your application, right-click on the References tab in the Solution Explorer and click Add Reference. If the DLL was registered properly, you should see an entry for it under the .NET tab of the Add Reference dialog box (look for the Visual FoxPro Toolkit for .NET entry in the References list). If you don't see the toolkit, click Browse, locate and select the DLL, and then click Open. This adds the toolkit to the Selected Components section of the Add Reference dialog box. Click OK and you are ready for the next step, which is to add the namespace to your code.

Now that you have added a reference to the toolkit, you can begin using it in your applications. In order to use the toolkit you need to recognize some fundamental differences between VB.NET and C#. For VB.NET, you have two options: the first is to add toolkit classes through the Solution Explorer. To do this, right-click on your project and click Properties to display the Property Pages dialog box. Then, click the Imports option under Common Properties. Type the name of each class you want to include in your project into the Namespace box, and then click Add Import.
The other option is to do it directly in code (remember, there's code for everything in .NET!). Add it to the top of your class or form, like this:

Imports VFPToolkit
Imports VFPToolkit.vfpData
Imports VFPToolkit.strings

In C#, use a using directive instead:

using VFPToolkit;

Or include only selected classes:

using VfpData = VFPToolkit.vfpData;
using VfpStrings = VFPToolkit.strings;

To illustrate how to use VFP Toolkit functions in your project, I've constructed a simple Windows form in VB.NET that uses three of the many functions available: GetWordCount(), which counts the number of words in a string; Lower(), which is a redirection function to change a string to lower case; and Browse(), which displays a form with data from a table. Figure 1 shows the form itself. The C# code is somewhat similar, as shown in Listing 2. Again, I removed the Windows form code.

Have another look at the table of toolkit functions and I think you will appreciate what this set of classes can bring to your application. Imagine trying to code all of those from scratch on your own.

Under the Hood of the VFP Toolkit

The VFP Toolkit classes are built into what is called an assembly. In Visual Studio .NET, assemblies provide flexibility when dealing with issues like security, version control (including side-by-side execution of differing versions) and debugging. Assemblies are the basic building blocks of a remotely deployable application, as only the ones required for startup need to be present for the application to run. Any other required assemblies are retrieved on demand, which keeps the initial deployment costs low. The VFP Toolkit is an example of a static assembly. A static assembly is one that is already defined and contains all of the resources it may require (bitmaps, JPEGs, resource files, etc.). .NET can also construct dynamic assemblies, which are built at run-time and can be saved to disk after execution. Assemblies can also have various compositions.
VB.NET and C# projects can be compiled into single-file assemblies (where the code module has a single entry point) or as library assemblies. Library assemblies, like class libraries, have components (types) that can be called by other assemblies, but have no entry point. Assemblies can also be constructed as multi-file assemblies. This is useful if you have multiple code modules under development by different developers, written in different languages, or you need to construct an application that can be downloaded via the <object> tag in IE. Multi-file assemblies must be compiled using the command-line compilers or Visual Studio .NET with the Managed Extensions for C++. At a minimum, an assembly must contain type information and implementation (class definitions and the code that goes with them) as well as an assembly manifest. This is the metadata that describes how the elements of an assembly relate to each other. An assembly manifest contains the assembly name, its version number, culture (language-specific information, and only required when building satellite assemblies), strong name information (public key information from the publisher, if given a strong name), a list of files in the assembly, type reference information and information on referenced assemblies. 
An example of this information is in the AssemblyInfo.cs portion of the VFPToolKitNet_CSharpNET_Source project, a portion of which is reproduced here:

[assembly: AssemblyTitle("Visual FoxPro Toolkit for .NET")]
[assembly: AssemblyDescription("Visual FoxPro Toolkit for .NET")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("Team: Kamal Patel, Cathi Gero, " + "Rick Hodder, Nancy Folsom, Ken Levy")]
[assembly: AssemblyProduct("Visual FoxPro Toolkit for .NET")]
[assembly: AssemblyCopyright("Public Domain (none)")]
[assembly: AssemblyTrademark("-")]
[assembly: AssemblyCulture("")]

To build a single-file assembly from the command line, the general form is:

<compiler command> /out:<file name> <module name>

Or the following for a library assembly:

<compiler command> /t:library <module name>

So to create a library assembly from a VB.NET module, you execute this:

vbc /out:myCodeLibrary.dll /t:library myCode.vb

For more information on assemblies, refer to the Visual Studio .NET help.

A Great Set of Tools for a Great Price

For VFP developers and anyone else wanting to learn .NET and add functionality to their developmental toolbox, the VFP Toolkit is a great resource. Again, this is public domain software, which you can download from GotDotNet. Check it out, and I think you'll agree that this is a must-have set of classes for your application. The VFP section of the GotDotNet site has much more information for Visual FoxPro developers looking to expand their knowledge of Visual Studio .NET. In addition to the VFP Toolkit, there are several white papers regarding COM Interop, XML Web services and .NET Interop within VFP applications (to name a few).
http://www.devx.com/codemag/Article/8497/0/page/3
User:Alux/using references From OpenStreetMap Wiki Eventually I'd like this page to move into the main namespace and for it to be linked to from the Template:Reference_copyright_warning. Please discuss this page: User_talk:Alux/using_references

Supplied links should be used for reference only, e.g. before surveying a location, to find data such as listed status, or to verify that a location is under construction (to stop armchair mappers reverting changes based on what they can see in aerial imagery). If there are links to maps, the maps should obviously not be used unless they are under the correct licence. The links could also be used to contact the sites, tell them about OSM, and get them to swap map providers.
https://wiki.openstreetmap.org/wiki/User:Alux/using_references
Q. Mention what is Team Foundation Server?

Team Foundation Server is used for inter-communication between the testers, the developer team, the project manager and the CEO while working on software development.

Q. List out the functionalities provided by Team Foundation Server.

1. Project management
2. Tracking work items
3. Version control
4. Test case management
5. Build automation
6. Reporting
7. Virtual lab management

Q. Explain TFS in respect to Git.

TFS:
1. Team Foundation Server is a Microsoft product. It supports about 5 million lines of code.
2. TFS integrates with Visual Studio, SharePoint and Active Directory.
3. TFS is more secure, as you can assign read and write permissions on an individual file.
4. TFS requires SQL Server to store all kinds of data.
5. TFS is centralized, where the vast majority of the information is stored on the server.
6. TFS does not support safe merges between unrelated branches.
7. In TFS, you can do manual test tracking.
8. Installation will take about half a day.
9. Analytics reports and a chart option are given.

Git:
1. Git is open source, and was designed to support the source code of the Linux kernel; it supports about 15 million lines of code. The development process is distributed all around the world.
2. Git does not support any of these integrations.
3. Git is less secure, as the whole Git repository is regulated by the file system.
4. Git is based on a Distributed Version Control System (DVCS), which means every developer's copy can access every version of every file from anywhere.
5. Git keeps every local copy fully independent.
6. Git allows safe merges between unrelated branches.
7. In Git, you cannot do manual test tracking.
8. Installation will only take 10 minutes.
9. Analytics reports and charts are not represented.

Q. Explain how you can create a Git-TFS project in Visual Studio 2013 Express.

To create a Git-TFS project in Visual Studio 2013 Express:
1. Create an account with the Microsoft TFS service if you don't have an in-house TFS server.
2. After that, you will be directed to the TFS page, where you will see two options for creating a project: one with a new team project and another with a new team project + Git.
3. The account URL will be found right below "Getting Started."
4. Click on "create git project" and it will take you to a new window, where you specify details about the project like the project name, description, the process template, version control, etc., and once completed, click on "create project."
5. Now you can create a local project in Team Foundation Server by creating a new project in Visual Studio, and do not forget to mark the check box that says "Add to source control."
6. In the next window, select Git as your version control and click OK, and you will be able to see the alterations made in the source code.
7. After that, commit your code; right-click a file in Team Explorer and you can compare version differences.

Q.

1. You can use TFS Lab
2. Customize work items/process templates

Q. Explain what kind of report server you can add in TFS?

TFS uses SQL Server for its data storage, so you have to add SQL Server Reporting Services to provide a report server for TFS.

Q. How would one know whether the report is updated in TFS?

For each report, there will be an option "Date Last Updated" in the lower right corner; when you click or select that option, it will give details about when the report was last updated.

Q. Explain how you can restore hidden debugger commands in Visual Studio 2013?

To restore a debugger feature that is hidden, you have to add the command back to the menu:
1. Open your project, click on the Tools menu and then click Customize.
2. Tap the Commands tab in the Customize dialog box.
3. In the menu bar drop-down, choose the Debug menu that you want to contain the restored command.
4. Tap on the Add Command button.
5. In the Add Command box, choose the command you want to add and click OK.
6. Repeat the steps to add another command.

Q. Explain how you can track your code by customizing the scroll bar in Visual Studio 2013?

To show annotations on the scroll bar:
1. You can customize the scroll bar to display code changes, breakpoints, bookmarks and errors.
2. Open the scroll bar options page.
3. Choose the option "Show annotations over vertical scroll bar", and then choose the annotations you want to see.
4. You can replace anything in the code that frequently appears in the file but is not meant to be there.

Q. Setting up Team Foundation Server

I have to set up Team Foundation Server for a company, something that I don't have any experience in. The company will have about 5 or so developers that will be using it. Is this a big task or something that is fairly easy to do (with instructions)? Any helpful tutorials that you can recommend? Any recommendations on server specs for a team of 5-10?

A: Disregard the "Cliff's Notes" link - it's for VSTS 2005. There's no reason to install an old version - the installer (and everything else about the product) is MUCH improved with VSTS 2008. Also make sure you install SP1 - it's not just bug fixes but some MAJOR enhancements.

Instructions for install are here: TEAM FOUNDATION VSTS2008 INSTALL GUIDE. Make sure you closely follow the recommendations for the accounts necessary for install.

Blog post with recommendations for SERVER SPECS. The link that Espo posted has excellent walkthroughs for configuring TFS after you get it installed.

TFS 2008 SP1 DOWNLOAD

Also you will want the following TFS 2008 POWER TOOLS - in particular, there is a "Team Foundation Server Best Practices Analyzer" which you can run against the server before the install to make sure everything is patched correctly etc. (and afterwards to make sure the install went properly). It will require Windows PowerShell installed on the server as a prerequisite.

Also you will want TEAM SYSTEM WEB ACCESS 2008 SP1 - (formerly Team Plain) which will allow you to access the features of TFS as a web application.

Q. Is there a way to link work items across projects in TFS
Is there a way to link work items across projects in TFS In Team Foundation Server is there a way to have work items in one project linked to other projects so they show up in the reports in both. We are thinking about keeping release engineering items in their own project and want them linked to the project they are actaully for as well. Is this possible? So for instance I would create the item under release engineering assign it to an engineer and then link it to Product X so it showed up as a work item for Project X as well. A: This is possible in TFS 2010 at least: LINK TFS WORK ITEM TO DIFFERENT PROJECT Not sure on the effects on reporting though. Q. How can I see all items checked out by other users in TFS? I want a list of all the checked out files, by all users, in a project in TFS 2005. All I can see now aremy checked out files – in the pending changes window. I remember that in Source Safe there was such an option – is there one in TFS 2005? A: The OCTOBER 2008 EDITION OF THE TFS POWER TOOLS includes “Team Members” functionality that allows you to do this, and more. There is more information on this feature on BRIAN HARRY’S BLOG. tf status itemspec /user:* /recursive in the VS Command Prompt. itemspec is the TFS path to the item you want to search for checkouts. No extra installations needed Q. TFS: Moving from one server to another How do I move my code and change history from one TFS server to another? A: Use this: TFS TO TFS MIGRATION TOOL from CodePlex Q. What to do with a branch after the merge with TFS After merging a branch back to the “trunk” what do most people do with the branch. Just delete it? Move it to another area? Change it’s permissions? The concern we have is that developers who are away, and don’t read their mail could come to work and continue working on the branch, after the merge has been done. A: Once the branch is definitely dead then I like to delete them. 
You can always undelete something in TFS if necessary (Options, Source Control, Show Deleted Items). Devs working in that area without realizing it may see some strange behaviour (i.e. files disappearing when they do a Get Latest); however, it gets them to figure out that something has happened pretty quickly.

That said, sometimes it can take a while to ensure that the branch is definitely deceased, in which case changing the permissions on the branch so that only a limited number of people can edit the files on that branch is a handy technique. You can have one person lock all the files in the branch with a check-out lock, but I've not found that to work too well when freezing a branch - permissions seem to work better, so that you do not have to have a bunch of pending changes (the locks) to manage for all the files in the branch, and also you can have more than one person working on it while it is being frozen.

Q. How do I find and view a TFS changeset by comment text with TFS

I need to find a changeset by comment and/or by developer. Maybe I'm just blind today, but I don't see a simple way in the Source Control Explorer to do this task?

A: With the Power Tools installed (in PowerShell):

tf history $/ -r | ? { $_.Comment -like "*findme*" }

An easy way with no third-party apps/add-ons needed:
1. Open Source Control Explorer.
2. "View History" from the root of the TFS server.
3. Scroll to the bottom (it's fast if you hit the "End" button continuously).
4. Select all records, copy.
5. Open Excel and paste.
Now Excel will allow you to search through the comments (Excel's a native app, don't argue..).

Q. TFS: Create a new project from an existing one in TFS

What is the best way to create a completely new project in TFS by copying an existing one? I have an ASP.NET project that will have 50+ "releases" per year. Each release is a distinct entity that needs to remain independent of all others. Once created, I want to make sure that any change to one (the source project or the copy) does not affect the other.
This is for source control only. I do not need to copy any work items. In the pre-TFS world I would do this by simply copying the folder that contained all of the project files. This got me 90% of the way to the new app, which I could then tailor for the new release. It is very rare that I need to actually add functionality to the base application, and even when I do, it never affects existing apps. Is this still possible using TFS, by copying my local folders and then adding the copy into TFS as a new project? Any suggestions? One branch per release looks like the "standard" way of doing this, but I will quickly end up with dozens of branches that really aren't related, and I'd rather keep each new project as its own distinct project, with no chance of changes in one affecting the other. Thanks!

(Update: Thanks for the responses. I think you've all given me enough insight to get started. Richard, thanks for the detail. I was a bit concerned that it might be too easy to accidentally merge the branches.)

A: I would recommend using branching. Create a branch for each release from the main branch. As long as you do not merge the branches, they will remain independent. Changes to the main branch will only affect releases created after those changes were made.

You could copy the files and create a new project, but you may run into a couple of problems:
1. The projects "remember" that they were in TFS; there is a bit of manual work to clean up special files etc.
2. TFS may slow down when you have many projects, compared with a single project with branches.

Q. How can I exclude specific files from TFS source control

We have multiple config files (app.DEV.config, app.TEST.config, etc.) and a pre-build event that copies the correct config file to app.config. Obviously the configuration-specific files are in source control — but at the moment so is App.config, and that shouldn't be. How can I mark that one file as excluded from source control, but obviously not from the project?
I’m using VS 2005, and 2005 Team Explorer. A: There is a checkin policy in the MS Power Tools which lets you screen filenames against a regular expression. See: MICROSOFT FOUNDATION SERVER POWER TOOLS While checkin policies are not completely foolproof, they are the closest thing TFS has to enforcing user-defined rules like what you’re looking for. If you just want to exclude a single file from Source Control, then select it in the Solution Explorer and choose “Exclude from Source Control” from the File>Source Control menu. (And as the others have said, you can also cloak a file or folder, which means it stays in Source Control and is visible to everyone else on the team, but it’s not copied to your PC until you decide to uncloak it; or you can delete the file, which means it gets deleted from everybody’s PCs when they get latest – but neither of these options will prevent such files being added to source control in the first place) It’s easy in TFS2012, create a .tfignore file Skip code block ###################################### # Ignore .cpp files in the ProjA sub-folder and all its subfolders ProjA*.cpp # # Ignore .txt files in this folder *.txt # # Ignore .xml files in this folder and all its sub-folders *.xml # # Ignore all files in the Temp sub-folder Temp # # Do not ignore .dll files in this folder nor in any of its sub-folders !*.dll Q. TFS Work Item Query against TFS Groups Does anybody know how to create a work item query in TFS that will query users against a TFS group? (ie, AssignedTo = [project]Contributors) A: In visual studio 2008, there is an ‘In Group’ operator in the query editor. You can use it and specify any TFS group. If that doesn’t work, try this. This is a fairly convoluted way to get the query working, but will work, involves using the group security identifier (SID) to bound the query. 
SELECT [System.Id], [System.Title]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.AssignedTo] IN GROUP 'S-1-9-1551374245-1204400969-2402986413-2179408616-1-3207752628-1878591311-2685660887-2074056714'
ORDER BY [System.Id]

To find the SID of the specific group you're interested in, run the tfssecurity.exe utility (as "Run as Administrator") with the /i Contributors option and the server parameter //server:MyTFSServer. This will return something like the following:

Resolving identity "Contributors"…
SID: S-1-9-1551374245-1204400969-2402986413-2179408616-1-3207752628-1878591311-2685660887-2074056714
DN:
Identity type: Team Foundation Server application group
Group type: Contributors
Project scope: Server scope
Display name: Contributors
Description: Members of this application group can perform all privileged operations on the server.

It's long-winded, but once you know the SID, build the WIQ query and save it, and that will be it.

Q. Edit other users' alerts for a project in TFS

I am unexpectedly taking over for the previous administrator of our TFS system, who left the company rather abruptly. I was made an admin of TFS and on the TFS application server before this happened, but there is still at least one thing I can't figure out. It seems there are some alerts set up under the previous administrator's account that send email to all of us whenever a file is checked in. I can't say for sure exactly what this subscription looks like, but I'm guessing it's either in the "Project Alerts" dialog for the project, or it's an alert set up with TFS Power Tools' Alert Editor. I can't see alerts set up by other users in these areas. Is there any way, short of directly editing the tbl_subscription table, to remove or change these alerts? (I think I see the alert in that SQL table, but I don't want to directly hack the database.)
A: I’m really glad to say that in the next version of TFS and in the current version of the Azure-hosted version of TFS, administrators can manage other user’s alerts using Team Web Access. Thankfully, this should be a scenario of frustration in the past. I have some more information available on a blog post written for this topic here: Q. TFS Get Specific Version into separate folder I’m currently working on a project with TFS source control. We’ve just gotten in a bug report for an older version of the code, and I need to pull down that version of code to test it out. My first thought would be to “Get Specific Version” to pull down the code, but I’d rather not get that version into my current workspace directory. Is there an easy way to “Get Specific Version” into a separate (e.g. temporary/throw-away folder), so I can quickly look into this bug in the older version of code, and not disturb my current work? A: I just found one easy way to do this: Create a new Workspace in TFS pointing to a separate folder, then switchover to this new workspace and do a Get Specific Version here. Q. How to browse and view files stored in a Team Foundation Server without using Visual Studio I’m looking for a tool to browse and view files stored within a Team Foundation Server without using Visual Studio. As I’m doing most development on a virtual machine, it’s very annoying to wake it up only to have a look on a certain file. So is there a way to browse a TFS without Visual Studio? A: The TFS Power Tools now have Windows Shell Extensions, so you manipulate source control files using only Windows explorer. Your solution could be to just keep a working copy of the solution and then you do whatever manipulations you need to using Windows Explorer. Q. How to find all changes below a certain point in the TFS source control tree I need to know what changes (if any) have happened at a particular level in our source control tree. Is there some way to make such a query of TFS? 
A: Using Team Explorer:
1. Open Source Control Explorer.
2. Navigate to the desired source control folder.
3. Right-click and choose View History.
This shows you all of the changesets that have been checked in at that level in the tree or below.

Using the tf utility:

tf history c:\localFolder -r -format:detailed

Here's a link to the tf history documentation for more details on usage: LINK

Using the TFS SDK to do it programmatically: here's a sample method based on some of our code. It takes a path, a start time and an end time, and gives you all of the changeset details below that path in between the two specified times:

private StringBuilder GetTfsModifications(string tfsPath, DateTime startTime, DateTime endTime)
{
    StringBuilder bodyContent = new StringBuilder();
    TeamFoundationServer tfs = new TeamFoundationServer("YourServerNameHere");
    VersionControlServer vcs = (VersionControlServer)tfs.GetService(typeof(VersionControlServer));

    // Get collection of changesets below the given path
    System.Collections.IEnumerable changesets = vcs.QueryHistory(
        tfsPath,
        VersionSpec.Latest,
        0,
        RecursionType.Full,
        null,
        new DateVersionSpec(startTime),
        new DateVersionSpec(endTime),
        int.MaxValue,
        true,
        false);

    // Iterate through changesets and extract any data you want from them
    foreach (Changeset changeset in changesets)
    {
        StringBuilder changes = new StringBuilder();
        StringBuilder assocWorkItems = new StringBuilder();

        // Create a list of the associated work items for the changeset
        foreach (WorkItem assocWorkItem in changeset.WorkItems)
        {
            assocWorkItems.Append(assocWorkItem.Id.ToString());
        }

        // Get details from each of the changes in the changeset
        foreach (Change change in changeset.Changes)
        {
            changes.AppendLine(string.Format(CultureInfo.InvariantCulture,
                "\t{0}\t{1}",
                PendingChange.GetLocalizedStringForChangeType(change.ChangeType),
                change.Item.ServerItem));
        }

        // Get some details from the changeset and append the individual change details below it
        if (changes.Length > 0)
        {
            bodyContent.AppendLine(string.Format(CultureInfo.InvariantCulture,
                "{0}\t{1}\t{2}\t{3}\t{4}",
                changeset.ChangesetId,
                changeset.Committer.Substring(changeset.Committer.IndexOf('\\') + 1),
                changeset.CreationDate,
                changeset.Comment,
                assocWorkItems.ToString()));
            bodyContent.Append(changes.ToString());
        }
    }
    return bodyContent;
}

Q. Unified Diff in TFS

Anyone know if this (generating a unified diff) is possible and, if so, how?

A: This is what I usually do:
1. Generate a unified diff from the tf command line to get a patch file:

tf diff /recursive /format:unified . >> diff.patch

2. Download the patch utility from
3. Apply the patch with:

patch.exe -p0 < diff.patch

Obviously this assumes that the source files are already checked out. If they are not, especially when you are applying patches across branches, write a simple shell script to go through the diff file, get the file paths, and tf edit them.

Q. TFS: Overwrite a branch with another

Is it possible to overwrite a branch with another? Or is the only solution to delete branch B and make a new branch from branch A?

A: Unless you're running TFS 2010, I'd recommend using Merge + Resolve to bring the two branches back in sync:

tf merge A B -r -force -version:1~T
tf resolve B -r -auto:acceptTheirs

That should equalize everything, except for files that were only created in B and never merged back. Use Folder Diff to find and reconcile them.

Delete + rebranch in 2005/2008 runs the risk of nightmarish-to-debug namespace conflicts in the future. The other option, if you have 2008, is to Destroy + rebranch. Obviously that assumes you are OK with losing all the history from the original copy of B.

Q. How to browse TFS changesets?

I want to browse TFS changesets. I do NOT want to search changesets by specifying a file contained within the changeset. I do not want to specify which user I think created the changeset. I simply want to key in a changeset number and look at that changeset. Or maybe view a range, and then browse those.
No specified file, no specified user. TFS 2008 seems to not want to allow me to do this. I must be missing something. How do you do this?

A: In Source Control Explorer, hit Ctrl+G. This will bring up the Find Changesets dialog. Unfortunately it's kind of one-size-fits-all in VS 2008: you have to work inside a big bulky search dialog, even if you already know the number(s). In your case, flip the radio button to search by range and then key in the desired changeset number as both the start and end of the range. The VS 2010 version of this dialog simplifies the "look up single changeset by #" use case, FWIW.

My personal preference: if you have a console window open, there's a quicker route. Simply type:

tf changeset 12345

If using the Power Tools, you can substitute "Get-TfsChangeset" or "tfchangeset" for improved performance and programmability.

Q. Team Foundation Server: how to edit a file without checking it out

I'm working with TFS and I need to edit a file locally without checking it out. Another case: if someone has checked in the file and I need to change my local copy, what should I do? In Visual SourceSafe we could do that by removing the read-only check on the file.

A: OK, this is relatively easy in VS 2010, and quite normal. I mean, the locking model of source control is obsolete anyway. In VS 2010, click Tools -> Options -> Source Control -> Environment and select "Allow checked-in items to be edited". This should stop the TF client from marking files as read-only. Also, you may have to change the Editing drop-down in Source Control -> Environment to "Do nothing".

Q. How do I remove files from the Pending Changes list in TFS when those files have been deleted locally

As the question says… anyone know how I can remove these files? I didn't commit the changes in TFS before I deleted them locally, and now they always appear in the Pending Changes window… I'd like to get rid of them. Thanks.
Brian: Solved. I recreated the deleted folder directory and files in TFS and checked them in. Then I deleted them and they were gone from the Pending Changes window. Q. How to add an image to a TFS work item; as an image, not as an attachment Our team is in the process of beginning a project which is being managed using TFS. Several requirements which existed only in Word documents are being migrated to TFS work items. The Word documents contain various diagrams and images which we need included in the work item, specifically under the 'Details' and 'Analysis' tabs. The problem is that images cannot be pasted into these tabs as images. The only option to add images to the work item appears to be as an attachment. Could someone confirm this? Any assistance is appreciated. A: You can change the text boxes to accept HTML, but that may still require the image to be hosted elsewhere. HowTo: It may also be best to just link to the existing document. We have to do this for now, because we have a large repository of existing documentation that no-one wants to bother converting. Q. How to unlock a file from someone else in Team Foundation Server We have a project that is stored within our TFS server, and some files were checked out by me from another computer and another user (both of which are not used anymore). Is there a way to force the unlocking of the file (no changes were made to it, so it's safe to do so)? A: You can use the Status Sidekick of the TFS Sidekicks tool and unlock the files which are checked out by other users. To do this you should be a part of the Administrator group of that particular Team Project (or) your group should have the permissions to undo and unlock the other user's changes, which by default the Administrator group has. You can get the tool here: Q. How can I get all my checkins in Visual Studio 2010 TFS? On many occasions I need to review my check-ins. Is there a way I can get all my check-ins in TFS?
I don't mean view history on a particular file, but all my check-ins! If I could filter based on start and end date, that would be great. A: Open "Team Explorer" (found in the "View" menu) Find the team project and expand "Team Members" Right-click the team member and select "Show Checkin History". But it is very strange that one cannot do this filtering directly when viewing the entire history of a team project. Yet another alternative is to use the "Link to" search within a TFS work item: Open a TFS work item Choose the "All Links" tab Press the "Link To" button In the new dialog set "Link Type" to "Changeset" and press the "Browse…" button. Now you have a TFS search dialog, where one can specify a username and other filtering See also VS2010 – FIND INFORMATION ABOUT A CHANGESET Q. What permissions are needed to add/edit work items in TFS I would like to grant my Q/A team permissions to create and edit bugs in TFS. I could just throw them into the Contributors group, but I would rather create a Q/A group and assign it permissions specific to creating work items. What permissions do I need to grant them? (TFS 2008) A: It's actually going to be found under Area security. Within VS 2008: In Team Explorer, click on the appropriate team project that you want to check/change Choose Team -> Team Project Settings -> Areas and Iterations and choose the appropriate area. If you don't have any defined, or you want it to apply to all of them, choose the "Area" node. Click on the … button. The permission to edit work items is set here: Edit work items in this node Q. How to use the blame feature in TFS? How do you use the blame feature in TFS? A: The TFS equivalent is the Annotate command, I believe. (Simply right-click on the versioned file in the Source Control Explorer and select "Annotate".) There's more information on this over on MSDN. Q.
Tips and tricks to increase productivity / efficiency with Team Foundation Server I have to use Team Foundation Server 2010 at my company and I'm not very happy with it. There are so many features or just default behaviors I'd expect from a VCS that TFS seems to lack (compared to svn, git or perforce, which I have experience with), so my question is: which tricks do you know, which hidden features are out there to make TFS easier to use / more convenient? Perhaps I should elaborate a bit and list what I think could be better: 1. The default check-in action when associated with a task is "resolve", though in 99% of all check-ins, I only want to "associate" my commit with the task. There's only 1 commit (the last) that "resolves" the task, so why is that the default? Can I change that? 2. In the check-in dialog, when double-clicking a file, Notepad is launched and shows the contents of the file. Notepad. Seriously? What about the Visual Studio editor? Anyway, I'd like to see the differences to review the changes I've made, not the contents of the file. The diff tool is hidden in a submenu. This might seem trivial, but when I have to check 10+ files it's just annoying to always right-click, open the submenu, click to diff. 3. The diff tool. Merging with it isn't really straightforward; also the conflict detection mechanism is somewhat lacking. The (Tortoise-)SVN / Git merge tools or that of Perforce are way better here. 4. Creating a new file, opening a file for the first time, comparing a file with a previous version etc. takes forever (that is, 3-10+ seconds). Our TFS server is in-house and has absolutely no load – also, why does Visual Studio have to contact the TFS server when I just create a new file (which I might not even check in)? Is there perhaps an option to turn that off? 5. Read-only files. All files are read-only when checked in and become writeable when edited for the first time. This is really annoying when the application crashes because of that.
Windows Azure for example modifies a web.config file and fails whenever I check out, because the file is read-only then. These are just the most prominent things that I think are really annoying and unnecessary. I didn't have the pleasure of branching and merging yet, but from what I've heard so far it won't be very enjoyable either… So again: if you know some tricks, settings, or features that make working with TFS less inconvenient, please share them. A: 1) is customizable if you reconfigure the work items. (You can also change any combination of fields/states/available values/etc.) 2) is a pain, but if you use the dockable "Pending Changes" window instead then it'll open the file in the editor. I suspect this is a drawback of the Checkin dialogue being modal. 3) you can customize – the option's a little tucked away, but it's on the Tools/Options dialogue under Source Control/Visual Studio Team Foundation Server/Configure User Tools. Some third-party tools (like BeyondCompare) have pages on their website with details of how to configure them with VS. 4) I've not seen the speed problems, although I do agree about the overhead on creating a file. Not sure if that's configurable. Q. Team Foundation Server: How to view changeset history I'd like to know how to view the entire changeset history in Team Foundation Server for a given project. This is what I want to see, starting from changeset 1 all the way to the current changeset: show me change #, username, date of submission, description, files that were changed, etc. Note: I don't want to just see the history for a given file or dir, I want to see the history for the whole darn thing. I.e., what happened in changeset 1, what happened in changeset 2, what happened in changeset 3, etc. A: You can go to the SOURCE CONTROL EXPLORER in Visual Studio and right-click on your project and select VIEW HISTORY.
This will show you the list of all changesets made to that project, who made them, the date they were made and any comment added to those changesets. If you double-click on any particular changeset, you can see the files that were changed in that one changeset. Q. Why does TFS prompt me to overwrite every file? A: To work under TFS source control, you will need the following: 1. Your solution must be in source control! 2. Your source control provider must be set to TFS. Tools > Options > Source Control and make sure it's the default SC provider. 3. You must have a workspace mapping on your local drive that tells TFS where the solution should be located on your hard drive. Open Source Control and there is a drop-down list at the top of the window that shows the currently selected workspace. Drop this combo-box down and it gives an option to edit the workspace, where you can tell TFS where specific folders in its hierarchy are to be located on your hard drive. 4. You need to bind the solution to source control to tell TFS that you want to work in a source-controlled way on this solution. (From memory) go to File > Source Control > Change Source Control and usually just clicking the "Bind" button is enough. 5. You may need to synchronise your PC with the server. Open the Source Control window, right-click on the parent folder of the solution, and do a "Get Specific Version". Set this to "Latest" and tick the checkbox to get files that the source control system thinks you already have. This will synchronise your PC with the network so TFS knows what's going on. (Note: The other checkbox will cause TFS to overwrite writable files, which could mean you lose any local changes you have made, so take a backup of your code first, and be careful about which options you enable) I've been a bit brief, but if you find you need to do any of the above, I should have given you enough info that you can search the web for more specific help. Q.
How to Find TFS Changesets Not Linked to Work Items Is there a way, either via a query or programmatically, to identify all TFS changesets that are NOT linked to a work item? A: Using the TFS Power Tools' PowerShell module, from whatever folder in your workspace you're interested in: Get-TfsItemHistory . -Recurse | Where-Object { $_.WorkItems.Length -eq 0 } This will get the history for the current folder and all subfolders, and then filter for empty work item lists. Q. How to get the list of all "Change Sets" of a user in TFS? I just want to get the list of all changesets of a user in TFS. I want only the id of the changeset, and a link to all items which were checked in as part of that changeset. How can I do that? A: Besides the option presented by Richard, you can also do that from within VS: It's also possible to do it via the API. I can provide a short snippet, if you're interested. Q. Get current changeset id on workspace for TFS How do I figure out what changeset I currently have in my local workspace? Sure, I can pick one file and view its history. However, if that file was not recently updated, its changeset is probably older than the more recently updated files in the same solution. One possible mistake that we may make is that we view the history on the solution file; however, the solution file rarely changes unless you're adding a new project / making solution-level changes. In the end, to figure out the changeset I need to remember which files were changed most recently and view their history. Is there a better way to do this? A: Your answer is on an MSDN blog by Buck Hodges: HOW TO DETERMINE THE LATEST CHANGESET IN YOUR WORKSPACE From the root (top) of your workspace, in cmd perform: tf history . /r /noprompt /stopafter:1 /version:W Q. How to merge new files into another branch in TFS? OK, in TFS we have a main trunk branch and another branch, let's call it secondary.
We have created a new file in the trunk, but when trying to merge that specific file, it does not give us the option to merge to the secondary branch. We're assuming that it's because an analogous file does not exist in secondary. Is this the cause of the problem, and if so, how can we get that new file from the trunk into secondary? Here, we are merging a file that does exist in secondary. As you can see, the dropdown lists all three of our branches (secondary is actually the middle one): Now, when I try to merge a file which was created in trunk after secondary was branched, secondary is no longer listed as a target branch. A: trying to merge that specific file To understand TFS it helps to remember that the unit of change is the changeset, and it is changesets (not files) that are checked in and merged. We're assuming that it's because an analogous file does not exist in secondary This is correct – at the version (changeset number) that the target branch is at, this file simply does not exist, so there is nothing to merge to. In general, you don't gain anything by selecting a particular file in the merge source dialog – as it says, it asks you to select the source and target branches. Specify the branches at their root, choose Selected changesets only, and TFS will show you a list of changesets that exist in source but have not been merged to target. If you only want the one that added this new file, you can select it in that list. Q. TFS shortcut to compare a modified file with the latest version Right now I have to pull up the Pending Changes window, right-click on the file and select Compare -> With Latest Version… Is there a faster way to look at my modifications? A: Keyboard shortcuts for doing a TFS compare: 1. Shift+Enter on a file a) will compare the files 2. Shift+double-click on a file a) will compare the file in the background 3.
Create Visual Studio key mappings for the commands: To set the shortcut for comparing a folder under Source Control Explorer, you should set the shortcut keys for the File.TfsFolderDiff command. To set the shortcut for comparing a specific file under Source Control Explorer, you should set the shortcut keys for the File.TfsCompare command Note: To set the keyboard shortcuts, open "Tools > Options". In the dialog that opens, go to "Keyboard". Note: The folder compare shortcut is only valid from the "Source Control Explorer". It is the same as right-clicking in the "Source Control Explorer" and selecting "Compare…". References: 1. KEYBOARD SHORTCUT FOR FILE COMPARE? 2. COMPARISON KEYBOARD SHORTCUTS FOR PENDING CHANGES IN TFS by Alex Meyer-Gleaves. 3. HOW TO DOUBLE CLICK TO DIFF PENDING CHANGES IN TFS by Richard Banks 4. TFS Shortcut to do a diff on all modified files with latest version Q. How to "order" tasks in TFS I'm relatively new to TFS and I've been trying to figure out how to order tasks as follows. Task 1 Task 2 (requires that Task 1 be completed first) Task 3 (requires that Task 1 and Task 2 be completed first) etc. Is there a way to do this? We are using TFS 2010. A: Well… I found it. You can specify that a task is a "Predecessor" or a "Successor" to another task, or multiple tasks, when you define the link between the two. There is some basic info about it at this location: Q. TFS Shortcut to do a diff on all modified files with latest version I'm looking for a way to kick off a diff on multiple files very easily. I find it very tedious to have to: 1. Right-click on every file 2. Click compare 3. Click with workspace version. 4. Rinse and repeat for every file in my change set Ideally, I'd like to be able to highlight all of the files in my change set, and perform one quick action that launches multiple windows of a diff tool, or launches them one after the other.
It's probably good to know that my question is very similar to this question, but I'm looking for a way to do this in bulk. A: Use the TF COMMAND LINE UTILITY. It comes with Visual Studio. You'll have a special command prompt with the tools loaded called "Visual Studio Command Prompt (2010)" in the Start Menu. You should cd to the root directory for the solution. This way you don't have to provide the commands with a server name, credentials, or workspace information. It will pick it up automatically. I ran the DIFFERENCE COMMAND. Without any parameters, it automatically shows the diff for every pending change. D:\my-project> tf diff As you close the diff window, the next change will pop up. Q. TFS Merge And Keep The Associated Changesets/Comments So let's say I'm working in a development branch and I checked in a change, supplied a comment, and associated a work item. Now I want to merge that back to Main; is there a way I can have TFS merge know to associate that same work item and comment by default when I attempt to check it in? Seems trivial, but scale this out to multiple changesets a day and recording the work item numbers to reselect gets very tedious… A: TFS has, in my opinion, a weakness on this one. All TFS guides out there suggest that a multiple-branch scheme should be applied – which is absolutely reasonable (see HERE for a great reference). Developers shall be working in 'playground' branches, and once tests have succeeded, changesets are propagated into more stable, more release-near branches. A somewhat duplicate question on that is this one. According to the answers, an EXTENSION by J. Ehn could do what you're after on the link-to-WI aspect. No evidence shows that the add-comments aspect is somehow included – yet this might not make tremendous sense (what should happen if the merge contains multiple commits from the DEV branch?). Still, it should be possible to fork this implementation and add the comments as well.
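Until an extension covers the comments side, one stopgap (purely a sketch, not a TFS feature) is to assemble the merge check-in comment from the comments of the changesets being merged. The history file below is fabricated sample data; in practice the id/comment pairs would be parsed out of tf history output on the source branch, and the final tf checkin call (hypothetical paths) is left as a comment.

```shell
# Fabricated sample of "changeset-id comment" pairs from the dev branch.
cat > merged_changesets.txt <<'EOF'
2681 Fix null ref in importer
2685 Add logging to importer
EOF

# Fold the individual comments into a single merge comment.
comment="Merge dev -> main:"
while read -r id text; do
    comment="$comment [C$id] $text;"
done < merged_changesets.txt
echo "$comment"
# Real usage would be something like: tf checkin /comment:"$comment" $/Proj/main
```

The result is one comment that at least records which source changesets the merge carried across.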
In the same question E. Blankenship provides a rough road to ANOTHER alternative. Q. How can I always block checkin of a specific file in TFS There is one file that I always make changes to, but that I never want to be checked in. I would like TFS to block me from ever checking it in or committing the changes. How can I configure TFS to do this? Details: There is another developer on the project who has checked in some bad code that I prefer to comment out on my local machine. Since they are a client, I prefer to keep this a technical exercise rather than make it a political one. A: Visual Studio 2013 (and 2012) This feature is available by selecting the file(s) and going to: File > Source Control > Advanced > Exclude … from Source Control Q. Team Foundation Server – TF Get with changeset number I'm trying to write a very lightweight "build" script which will basically just get a few files from TF (based on a changeset number). Then I'll run those files in SQLCMD. I'm using this: tf.exe get c:\tfs /version:c2681 /force /recursive However, this appears to get EVERYTHING, not just the files in changeset #2681. I'd like to be able to point it to the root of my TFS workspace, give it a changeset number, and have it just update those few specific files. Also, it appears to be getting older versions (perhaps what was current when changeset #2681 was checked in)? Is there a way to get just those specific files, WITHOUT needing to call them out specifically in the tf get itemspec? EDIT: I actually had to add the /force option in order for it to do anything at all. Without force, it doesn't appear to even retrieve from the server a file I deleted locally that's definitely in the changeset. thanks, Sylvia A: Everything mentioned in Jason's and Richard's posts above is correct, but I would like to add one thing that may help you. The TFS team ships a set of useful tools separate from VS known as the "Team Foundation Power Tools".
One of the Power Tools is an additional command-line utility known as tfpt.exe. tfpt.exe contains a "getcs" command which is equivalent to "get changeset", which seems to be exactly what you are looking for. If you have VS 2010, then you can download the tools HERE. If you have an older version, a bing :) search should help you find the correct version of the tools. If you want to read more about the getcs command, check out Buck Hodges's post HERE. Q. How can I find all of the labels for a particular TFS project sub-folder? Assume there is a TFS project Project with the subfolders trunk and 1.0. trunk contains the latest version of the application code for this project and 1.0 contains the code for the same application for the released version of the same name. There are labels for both sub-folders and all of the labels include files in only one of the sub-folders. [You could also assume that the labels are recursive on a specific (maximum) changeset for all of the files in the entire sub-folder too if that simplifies your answer.] How can I create a list of labels for one of these sub-folders, using Visual Studio, the TFS tf.exe command-line tool, or any other tool or code that is publicly (and freely) available? Note: I've written T-SQL code that queries the TFS version control database directly to generate this info, but I'm curious whether there are 'better' ways to do so. A: In Visual Studio, in the Source Control Explorer window, right-click the sub-folder for which you want to list the relevant labels and pick View History from the context menu. In the History window that should appear, there should be a sub-tab Labels that lists labels applied to that sub-folder (but not specific items in that sub-folder). Q. TFS file must remain locked I have some 3rd-party DLLs checked into TFS. Our machines were renamed and now TFS believes they are checked out for edit by me on another machine.
I tried tf lock /lock:none contrib64/* /workspace:oldmachine;myusername but I get the error TF10152: The item $/XXX/YYYY/contrib64/third_party.dll must remain locked because its file type prevents multiple check-outs. 1. Is there any way around this? 2. Is TFS really this bad, or is it just me? 3. Is the purpose of TFS to make us nostalgic for VSS? PS: It's a hosted version so I can't just get the admin to fix it. A: Undoing the lock won't work on those files because they are binary; as binaries cannot be merged, they must be locked if they are checked out. As the machine the workspace resides on no longer exists (the machine has been renamed), the best thing to do is delete the workspace. From a Visual Studio command prompt: tf workspace /delete oldmachine;myusername /collection:http://*tfsserver*:8080/tfs/*collection* This will remove the workspace and undo all pending changes. Q. Difference between Product Backlog Item and Feature in Team Foundation work item types I have a question about Microsoft Team Foundation. In Visual Studio, Team Explorer, I can create a new work item. Work item types here are dictated by your team's chosen process template; I'm not sure which process template we're using. In any case, in Team Explorer, when I want to create a new work item, I'm given a list of work item types to select from, among which are "Product Backlog Item" and "Feature". I noticed a difference between the two types related to the target resolution date. For a Product Backlog Item, this would seem to be dictated by the iteration end date. For a Feature, it's not as clear. A Feature is also associated with an iteration (and iteration end date); however, a Feature also has a separate field called "Target Date". The mouse hover text for target date is "The target date for completing the feature". Should I choose "Product Backlog Item" or "Feature" as the work item type for my new work items? What's the difference between the two?
A: It looks like you are using the Scrum process template. The TFS site has published some very brief information about Product Backlog Items and Features and the idea behind creating a new work item type. The difference between the two comes down to what granularity you want to work with your work items at: Product Backlog Items are composed of Tasks and have estimated effort. Features are composed of Product Backlog Items and have target dates. I have not been able to find any official guidance on when to use Features vs Product Backlog Items, but I have created my own guidance which I am basing this answer on… Should you create a Feature or a Product Backlog Item? If you think/hope that the new work item that you are going to create will fit into a single sprint, you should create a Product Backlog Item and then break it down into tasks for your sprint. If you think/know that the new work item won't fit into a single sprint, you should create a Feature and identify all the value-providing, sprint-sized items (Product Backlog Items) that the Feature can be broken down into, and use these when planning future sprints. [Update 2014-05-19] Microsoft have published more information on how to use Features and the agile portfolio concept that has been implemented in TFS Q. Warning displayed when adding solution to Team Foundation Server 2010 I'm just getting to grips with TFS 2010 (never had any luck with TFS 2008) and I'm trying to add my first solution into TFS. However, I am getting a warning message. Can someone explain to me what this means and how to resolve it? This warning is displayed when right-clicking on the solution in Solution Explorer and selecting "Add to Source Control". A: Your solution folder structure should resemble:

Solution Root folder
    .sln solution file
    Project1 folder
        Project1.csproj (or .vbproj)
    Project2 folder
        Project2.csproj (or .vbproj)
    ...

Q. How to undo another user's checkout in TFS via the GUI?
As the resident TFS admin, on occasion I am asked to undo a checkout (usually a lock) that a user has on a certain file. This can be done via the command line using the TF.exe utility's Undo command (see), but that's kind of a pain. Is there a way to undo another user's checkout via the GUI? A: Out of the box, no, but there are at least a couple of options via add-ons: TFS POWER TOOLS Latest version can be downloaded HERE. Also includes links to older versions. Once installed: 1. Open Source Control Explorer 2. Right-click the item on which the checkout is to be undone (or a parent folder of multiple files to be undone) 3. Select Find in Source Control and then Status 4. In the Find in Source Control dialog, leave the Status checkbox marked 5. Optionally, enter a value for the Wildcard textbox 6. Optionally, enter a username in the "Display files checked out to:" textbox and select that radio button 7. Click Find 8. This will result in a list of files 9. Select the items to undo 10. Right-click and select Undo 11. Click Yes when prompted with "Undo all selected changes?" TEAM FOUNDATION SIDEKICKS Another option is to use the Team Foundation Sidekicks application, which can be obtained here: It has a Status sidekick that allows you to query for checked-out work items. Once a work item is selected, there are "Undo pending change" and "Undo lock" buttons that can be used. Rights: Keep in mind that you need sufficient permissions to undo or unlock another user's changes. Q. How to undo another user's checkout in TFS? As the resident TFS admin, on occasion I am asked to undo a checkout (usually a lock) that a user has on a certain file checked into source control. How do you undo another user's checkout? A: There are at least 2 different ways to do this: Command Line There is a command-line utility called Tf.exe that comes with Team Explorer. Find the documentation HERE. It can be accessed by launching a Visual Studio Command Prompt window.
The syntax of the command is: tf undo [/workspace:workspacename[;workspaceowner]] [/server:servername] [/recursive] itemspec [/noprompt] For one file: tf undo /workspace:workspacename;workspaceowner $/projectname/filename.cs Deleting the workspace: tf workspace /delete WorkspaceName;User /server: Q. What's a Backup and Recovery Process for Team Foundation Server 2010? We have a new installation of TFS 2010 (on SQL Server 2008), and I'm planning the backup and recovery process. It seems the configuration information and data are stored in the Tfs_Configuration and Tfs_DefaultCollection databases (and additional Tfs_[CollectionName] dbs if you have more than one collection). In a test setup, I tried backing up the two dbs, uninstalling TFS, then reinstalling (thinking I could then hook the databases up at some point in the install process). This is where I'm confused. I don't see an option or clear guidance on how this is supposed to work. A: Download the TFS POWER TOOLS and USE THE BACKUP TOOL (it's even got a nice GUI)! If you have backed up your TFS 2010 databases and lose your TFS server, you can restore it by restoring all of your TFS databases, reinstalling TFS, and selecting the Application Tier option of the installation wizard. Once you point the wizard at your database, it will recognize the Tfs_Configuration database from the previous installation and restore your previous configuration as well as your collections. Q. Conchango vs MS agile template on TFS 2010 Hi, what is the difference between the Conchango and the built-in agile template in TFS 2010? Any recommendations? Thanks A: I have just written a blog that gives you a side-by-side comparison between: 1. Scrum for Team System v3 2. MSF Agile v5 3. TFS Scrum v1 (beta) Crispin. Q. Can we migrate to a new TFS process template and keep history? We are currently using TFS 2008 with the Scrum for Team System template from Conchango, with a few minor tweaks.
We are looking at upgrading to TFS 2010 and we are considering moving to the MSF for Agile template. What is the best way to move to a new process template and keep history? I'd like to be able to create a new team project on the TFS 2010 server, get everything checked in, and move our source to the new project. It would be nice if we could somehow keep the check-in comment history and possibly even be able to navigate back to the work item history associated with a changeset in the old project. I'd even be willing to migrate the old project as-is over to 2010 and then move the source to a new project, retaining the old project with work items only in 2010. Has anyone gone through the process who can offer some advice? A: We are in a similar situation to yours (right down to the templates we are on versus the one we want to be on), although we only have source code in our existing Team Foundation Server instance. We are planning to do a migration from Team Foundation Server 2008 to Team Foundation Server 2010, as opposed to an upgrade. Although we have not done so yet, you do have the two options you have outlined. Like you mention, you can migrate the source code and Work Item Tracking to a new Team Project using this tool. It will "compress" the history dates, as TFS will want to add its own timestamp. There will be some potential history issues, from what I understand. Specifically, in TFS 2010, you might have issues comparing versions from the pre-migrated source control. At least, that has been my experience so far in our test lab. My understanding of this issue is that it relates to item-mode vs. slotted-mode as the defaults between the two versions. I can look at individual versions and can see history – so that meets our requirements. The other option is source control in one project and work items in another.
I have not tried this, because I would imagine that the changeset relationships would be broken on existing work items and would not be generated going forward. This may or may not be a big deal to you. Also, it might be a good idea to describe your situation in the discussion area of the project on CodePlex. The authors are on the TFS Migration Team at Microsoft and depend on the feedback of people in the same boat we are. I have been exchanging a couple of emails with them so far, and they have been quite helpful. Based upon our discussions with the very helpful folks at Microsoft, we are likely going to back up the databases and follow the directions on BRYAN KRIEGER'S BLOG POST (Path 2: Migration Upgrade). I am hoping to make a test run at the upgrade using an older backup as early as next week. Best of luck! I know it is intimidating. Luckily, my installation and configuration experiences with a fresh TFS 2010 install in the lab have been much smoother than my initial exposure to the TFS 2008 process. Hopefully, you find the same is true. Q. TFS 2010 enforcing Code Analysis in Gated Checkins Hi, is it possible to enforce VS Code Analysis (FxCop) as part of the gated check-in policies, so the developers will not be able to check in unless the CA is passed? Thanks A: Yes, this is possible. Please see the following: HOW TO: ADD CHECK-IN POLICIES Q. How to revert (Roll Back) a checkin in TFS 2010 Can anyone tell me how to revert (roll back) a checkin in TFS 2010? A: Without using power tools or the command line: 1. Ensure Tools -> Options -> Source Control -> Visual Studio Team Foundation Server has Get latest version of item on check out UNCHECKED 2. View the history of the project folder in Source Control Explorer, right-click on the changeset to roll back to, and choose Get This Version 3. Check out for edit on the project folder in the Source Control Explorer (this should keep the local version you just got from the history) 4.
Check in pending changes on the project folder in the Source Control Explorer 5. if visual studio asks you to resolve conflicts, choose keep local and attempt check in of pending changes on the project folder in Source Control Explorer again Q. Why should my small .NET development company upgrade from Team Foundation Server 2008 to 2010? The company for which I work has an MSDN site license and upgrading to TFS 2010 from 2008 is not an expensive option. However, neither my colleagues nor I have been able to find any features that make this something we feel we need. Is anyone experienced with TFS 2010 enough to convince me that my company needs this? To clarify: we have no intention of moving to a different source control product. The question is what features of TFS 2010 are worth an upgrade from TFS 2008? A: disclaimer: I haven’t moved to 2010 yet either, but probably will Biggest feature for me: (we use branching to manage large feature sets and to separate SPs, QFE’s, and major releases) TFS 2010 will track changes across branches. Example: suppose I change something in the Dev branch, and then you merge dev to main. Now suppose that someone uses the Annotate feature (on the main branch) to figure out who changed that code. In TFS 2008, it would report that you made the change (because you checked in the merge). Reportedly, TFS 2010 will be aware that I actually orginated the change in the dev branch, and it will be able to tell you that. That’s gold if you are using branching. Also, correct me if I’m wrong, but didn’t they switch 2010 to use a single SQL database? (or maybe 1 for source control, and one warehouse?). If so, then the backup strategy gets a bit better. MS says that all of the TFS databases should be backed up from the same moment in time, but that’s tough to do when there are 5-odd databases (it’s very difficult to ensure that all of them reflect exactly the same point in time transactionally). 
If they have consolidated databases, then it should be easier. One other: depending upon the level of MSDN that you have, TFS 2010 might be free for you now.

Q. How can I get TFS 2010 to build each project to a separate directory?

In our project, we'd like to have our TFS build put each project into its own folder under the drop folder, instead of dropping all of the files into one flat structure. To illustrate, we'd like to see something like this:

DropFolder/
  Foo/
    foo.exe
  Bar/
    bar.dll
  Baz/
    baz.dll

This is basically the same question as was asked here, but now that we're using workflow-based builds, those solutions don't seem to work. The solution using the CustomizableOutDir property looked like it would work best for us, but I can't get that property to be recognized. I customized our workflow to pass it in to MSBuild as a command line argument (/p:CustomizableOutDir=true), but it seems MSBuild just ignores it and puts the output into the OutDir given by the workflow. I looked at the build logs, and I can see that the CustomizableOutDir and OutDir properties are both getting set in the command line args to MSBuild. I still need OutDir to be passed in so that I can copy my files to TeamBuildOutDir at the end. Any idea why my CustomizableOutDir parameter isn't getting recognized, or if there's a better way to achieve this?

A: I figured out a nice way to do it. It turns out that since you can set the OutDir to whatever you want within the workflow, if you set it to the empty string, MSBuild will instead use the project-specific OutputPath. That lets us be a lot more flexible. Here's my entire solution (based on the default build workflow):

In the Run MSBuild task, set OutDir to the empty string. In that same task, set your CommandLineArguments to something like the following.
This will allow you to have a reference to the TFS default OutDir from your project:

String.Format("/p:CommonOutputPath=""{0}""", outputDirectory)

In each project you want copied to the drop folder, set the OutputPath accordingly: change

bin\Release

to

$(CommonOutputPath)\YourProjectName\bin\Release

Check everything in, and you should have a working build that deploys each of your projects to its own folder under the drop folder.

Q. TF203015 The Item $/path/file has an incompatible pending change. While trying to unshelve

I'm using Visual Studio 2010 Pro against Team Server 2010, and I had my project opened (apparently) as a solution from the repo, but I should've opened it as "web site". I found this out during compile, so I went to shelve my new changes and deleted the project from my local disk, then opened the project again from source (this time as web site) and now I can't unshelve my files. Is there any way to work around this? Did I blow something up? Do I need to do maintenance at the server? Of course I can't find an error code for TF203015 anywhere, so no resolution either (hence my inclusion of the number in the title, yeah?)

EDIT: I should probably mention that these files were never checked in in the first place. Does that matter? Can you shelve an unchecked item? Is that what I did wrong?

EDIT: WHAP – FOUND IT!!! Use "Undo" on the items that don't exist, because they show up in pending changes as checkins.

A: Need to close, found the answer. I had deleted the files in trying to reload the workspace, even though I had shelved the changes. Then VS2010 thought those files were still pending to save. I didn't need that, so I had to figure out to "undo" the changes in Pending Changes. Then I could unshelve. It thought I had two ops (unshelve, commit-for-add) going simultaneously, and I thought I had only one op (unshelve).

Q. Share code between projects in TFS 2010

Hi, what is the best way to handle code sharing in TFS 2010?
We have a couple of Visual Studio projects that other Visual Studio projects use. For example:

Shared Project
Project 1 Solution
 - Shared Project
 - Project 1 Project
Project 2 Solution
 - Shared Project
 - Project 2 Project

Also we have third-party code, for example:

Third Party
 - Telerik
   -- 2009.1.402.35
   -- 2009.02.0701.35

When I open my "Project 1" solution I want my shared code project to be included in that solution (that's the way we work today). We basically have one TFS Project that contains all the code. Now we want to use it the "right" (?) way: we would like to have Project 1 and 2 in separate TFS projects. If I, for example, make sure we have all our projects in the same structure on disk and just add the shared project to my Project 1 solution (even if the projects reside in two different TFS Projects), would that work with builds? How have you solved the problem? I guess we are not the only ones having shared code between projects?

A: I am not sure if there is a "right way" to share code using Team Foundation Server 2010. I would recommend that you check the following blog post, as customers are sharing their learnings.

Q. Standalone GUI client for TFS 2010 Source Control

I'm looking for a TFS 2010 GUI client that I can use outside of an IDE. I'm only looking to use the source control features in this case. I'm not talking about work items or build management. Ideally it would be a complete client that can be used on a machine where Visual Studio is not installed.

Options I know about and why I'm not satisfied with them:

TFS POWER TOOLS – WINDOWS SHELL EXTENSION
Must have a working copy to use… see CHICKEN OR THE EGG
Missing features: view history, branch / merge, revert

SVNBRIDGE
TFS 2010 not yet supported

Are there others that I don't know about?
A: You can install Team Explorer (on the TFS install DVD, or you can download it FROM MSDN) without needing to have VS2010 installed – Team Explorer will install a 'shell' VS2010 with only the TFS features available – none of the IDE components. Update: the VS11 BETA version is now available.

If you use Eclipse, I've heard good things about the Teamprise stuff, but haven't tried it myself. They got bought by Microsoft, and now you can download the Eclipse plug-in HERE. It also looks like it has a fully functional command-line client that you could use instead of the TFS Power Tools. From the description: "Eclipse plug-in and cross-platform command-line client for Visual Studio 2010 Team Foundation Server"

Q. How can I copy a TFS 2010 Build Definition?

Is there any way to copy a build definition? I work in a mainline source control methodology which utilizes many different branches that live for very short periods (i.e. a few days to a week). I'd really like to copy a build template and just change the solution to build. Is there any way to do this?

A: You can write an add-in to do it.
Here's the code to copy an existing build definition:

static IBuildDefinition CloneBuildDefinition(IBuildDefinition buildDefinition)
{
    var buildDefinitionClone = buildDefinition.BuildServer.CreateBuildDefinition(
        buildDefinition.TeamProject);

    buildDefinitionClone.BuildController = buildDefinition.BuildController;
    buildDefinitionClone.ContinuousIntegrationType = buildDefinition.ContinuousIntegrationType;
    buildDefinitionClone.ContinuousIntegrationQuietPeriod = buildDefinition.ContinuousIntegrationQuietPeriod;
    buildDefinitionClone.DefaultDropLocation = buildDefinition.DefaultDropLocation;
    buildDefinitionClone.Description = buildDefinition.Description;
    buildDefinitionClone.Enabled = buildDefinition.Enabled;
    buildDefinitionClone.Name = String.Format("Copy of {0}", buildDefinition.Name);
    buildDefinitionClone.Process = buildDefinition.Process;
    buildDefinitionClone.ProcessParameters = buildDefinition.ProcessParameters;

    foreach (var schedule in buildDefinition.Schedules)
    {
        var newSchedule = buildDefinitionClone.AddSchedule();
        newSchedule.DaysToBuild = schedule.DaysToBuild;
        newSchedule.StartTime = schedule.StartTime;
        newSchedule.TimeZone = schedule.TimeZone;
    }

    foreach (var mapping in buildDefinition.Workspace.Mappings)
    {
        buildDefinitionClone.Workspace.AddMapping(
            mapping.ServerItem, mapping.LocalItem, mapping.MappingType, mapping.Depth);
    }

    buildDefinitionClone.RetentionPolicyList.Clear();
    foreach (var policy in buildDefinition.RetentionPolicyList)
    {
        buildDefinitionClone.AddRetentionPolicy(
            policy.BuildReason, policy.BuildStatus, policy.NumberToKeep, policy.DeleteOptions);
    }

    return buildDefinitionClone;
}

Q. Is MSBuild going to be dead because of Windows Workflow?

MSBuild in TFS 2010 has been replaced by Windows Workflow 4.0. This means that when you are creating a Build Definition, you won't have a TFSBuild.proj to edit; instead you must edit a workflow to customize your build.
BTW, am I correct if I say Microsoft is not supporting MSBuild in TFS 2010, and that learning MSBuild as a TFS 2010 Team Build administrator isn't worth it? And one more question: is Microsoft going to replace the Visual Studio projects' language from MSBuild to something like Windows Workflow?

A: I'm the Program Manager for the build automation features of TFS, so I'd like to comment on this question. We haven't replaced MSBuild with Windows Workflow (WF). We still very much rely on MSBuild as the core build engine, which is its core competency. You'll find that there are lots of tasks that are still most easily and effectively automated with MSBuild:
1. If the task requires knowledge of specific build inputs or outputs, use MSBuild.
2. If the task is something you need to happen when you build in Visual Studio, use MSBuild.
3. If the task is something you only need to happen when you build on the build server, use WF unless it requires knowledge of specific build inputs or outputs.

Q. Using the edit – merge – commit workflow in TFS Source Control

I've been using SourceGear Vault and Subversion/VisualSVN for quite a while now and am a big fan of the CVS disconnected-style "Edit -> Merge -> Commit" way of using source control. Since we moved to TFS 2010 I have been reintroduced to the horrid "checkout -> edit -> checkin" SourceSafe-style way of working, meaning only one user can work on a file at any time. I can't find anything that suggests this can be changed. Is it possible?

A: Checkouts in TFS aren't generally exclusive. By default, multiple users can check out a single file. Exceptions are binary file types like JPGs, PNGs, etc., which are checked out exclusively by default. Once you're ready to commit your changes, you can use the Pending Changes tool window to check for conflicts and merge if necessary.

Q. How to Add/Edit the Iteration Field in Team Foundation Server Scrum v1.0 beta Workflow

I downloaded and installed the new Team Foundation Server Scrum v1.0 beta work template from Microsoft.
I would like to edit the drop-down that displays in the Iteration field used when entering a new Sprint work item. If I enter a release / sprint number that does not exist, I get the following message: "New Sprint 1: TFS20017: The area or iteration provided for field 'Iteration Path' could not be found". Does anyone know where I need to go to edit this listing?

A: Connect to TFS.
1. In the Team Explorer, select the team project you want to define the iterations for.
2. Click on the Team menu item in the Visual Studio menu bar.
3. Choose the Team Project Settings sub-menu.
4. Choose Areas and Iterations…
5. Add sub-nodes as necessary for areas or iterations.

Areas and Iterations are defined on a per-project basis, so if you are in one team project when you define them, you won't be able to access them from another team project.

Q. Move projects between collections in TFS 2010

I'd like to move some projects between collections, but the only resources I've found are these two, and they don't address how to do this in TFS 2010:
1. FEATURE REQUEST
2. MOVE A COLLECTION

Does anyone know of any other resource, or has information on how to move a project from one collection to another?

A: You could look at the TFS INTEGRATION PLATFORM (formerly called the TFS to TFS Migration Tool). That has utilities for moving source code from one instance of TFS to another, which should work if you want to move from one collection to another as well.

Q. Error TF218027 when creating a Team Project in TFS 2010

Consider the scenario of a user creating a new Team Project. The user is a developer who wants to create and manage their Team Project. Why can't this user create a new Team Project, including the Reporting Services components? What can be done to resolve this error? The exception is TF218027: the following reporting folder could not be created on the server running SQL Reporting Services.
SQL Reporting Services is running under an Active Directory service account created expressly for this purpose. The developer attempting this action is a member of a TFS group with the following permissions.

A: I actually BLOGGED about this not too long ago. You usually see this error if Reporting Services gets set up with something other than the NETWORK SERVICE account. FTA:

I was playing around with my test instance of Team Foundation Server today, trying to create a new project, when I got error TF218027 when it tried to create the Reporting Services folder for the project. The strange thing was, this was not my first project created on this server. I searched the Internet for anything similar, and found a post that said Reporting Services should be run with the NETWORK SERVICE account. Since this was a hastily put-together server, I was running it with the Administrator account, so I tried switching it over. No dice. I got the same TF218027 error, but this time it was due to it not being able to decrypt the symmetric keys. Apparently, it's a bad thing to change the account on the Reporting Services service. I hastily changed the account back to Administrator and restarted the service. Interestingly enough, this seems to have fixed the problem.

Q. Can't manually add files to TFS

We use CodeSmith to generate some code, and when we open up the projects, the files are there, in the solution, but there is no way to check them in. The DLL compiles just fine. The only difference to the .csproj is the addition of any new files we generated. But unlike VSS, TFS does not detect these files. I validated this behavior by editing the .csproj manually. For some reason, the only way to add a file to TFS is through Visual Studio. However, when I remove them from the project, and then include them, I get the usual yellow plus sign.

A: You can manually add files to Visual Studio; however, changing your project file isn't the best way to do this.
If your project is already under source control and the files you want to add are visible in the Solution Explorer window, you can simply right-click a file and select Include in project. The next time you check your code in, the items will be added. Since you're using TFS 2010, check out the TEAM FOUNDATION SERVER POWER TOOLS extensions. This includes the Windows Shell Extensions, which give you integration into Windows Explorer and let you right-click on files or folders and add them to TFS outside of Visual Studio. Very nifty!

Q. Fetching the comment history for a work item in TFS

In most defect trackers there is a comment history associated with a ticket/incident/issue/work item. I wish to get this same information from TFS via the SDK for a work item – ideally:
1. Who created the comment.
2. The text of the comment.
3. Who last updated/edited the comment (if that's even possible in TFS?)

I have determined that a WorkItem has a collection of revisions available via the "Revisions" property, and that you can loop through each revision – but a revision does not have a "History" property where I assume I could find the comment created by the user. Also, I don't believe it's compulsory to record a comment with each change – so I suspect I will need to ignore revisions that don't have any history property information? REVISIONS PROPERTY ON MSDN

Any thoughts on the best way to fetch this "comment history" information for a work item in TFS – is the revisions list the correct way, or should I be using some other part of the API?

A: In order to fetch the comment history you need to access the "History" property on the Work Item revision. WORKITEM.HISTORY PROPERTY Obviously the current (latest) version of the work item will have this field as empty, but historical revision comments will be there.
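Concretely, the revision loop described above might look like the following sketch against the TFS 2010 work item tracking object model. The type and field names ("History", "Changed By", "Changed Date") are from the SDK; the surrounding wiring (connecting to a server and loading `workItem`) is omitted and illustrative.

```csharp
// Sketch only: walk a work item's revisions and print non-empty history comments.
// Assumes a reference to Microsoft.TeamFoundation.WorkItemTracking.Client and
// an already-loaded WorkItem instance named workItem.
foreach (Revision revision in workItem.Revisions)
{
    var comment = revision.Fields["History"].Value as string;
    if (String.IsNullOrEmpty(comment))
        continue; // a comment is optional, so many revisions carry none

    Console.WriteLine("{0} ({1}): {2}",
        revision.Fields["Changed By"].Value,
        revision.Fields["Changed Date"].Value,
        comment);
}
```

As far as I know, history comments cannot be edited after the fact, so the "who last edited the comment" part of the question has no direct answer: each comment is tied to the revision (and user) that created it.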
TFS: GETTING WORKITEM HISTORY FROM THE API

The "History" displayed on a work item in Team Explorer is built by looping through the Revisions and displaying both the fields that were changed and the text in the "History" property.

Q. Getting "Failed to create mapping" when adding a solution to TFS source control

I've created a new Team Project in TFS, but when I try to add my solution to it I get: 'Failed to create mapping. Cannot map server path, $/Finance/MyApp, because it is not rooted beneath a team project.' I can't find anything on Google or here that looks remotely like this problem.

A: I had this issue when using Microsoft's Team Foundation Service from Visual Studio 2012. I had just created the new team project via the TFS website. Although I could see my new project in the 'add solution' window, I got the error the OP reported. I had to go into the "Team Explorer" window, then into "Connect to Team Projects", and tick the new project. Then I was able to add my solution to the team project.

Q. Why does TFS not allow multiple collections to be connected to a single Build Controller?

According to HERE and HERE, TFS 2010 does not allow multiple project collections to use the same Build Controller. Why? I'm going to set up some more build controllers as virtual machines, but it seems somehow impractical, because our company is going to have several project collections. Is it a good workaround? Is there any better workaround other than using a single project collection for all projects?

A: There is a hack to use a single server as a build controller for multiple collections: You should use a new collection when you want the isolation. Examples of why you want it are:
1. Security
2. Handover
3. Isolation for multiple customers

Q. Configuring TFS 2010 so users can create/update bugs but modify nothing else

Environment: I am administrator of a project in TFS 2010, but don't have any administrative rights for the project collection.
Is there an easy way that I can set up access rights for a group of users so that they can:
1. Create/update "Bug" work items only
2. View all other work items
3. Execute work item "Team Queries" and create their own queries
4. Have no access to source control

The idea is I want them to enter bugs, but I don't want them creating/modifying User Stories or Tasks, nor do I want them to have access to source control. From what I can see, the standard groups don't have fine enough control: Contributors can create all work item types; Readers can view files in Source Control as well as work items.

UPDATE: Limiting access to Source Control is covered by Ewald's answer. However, Ewald also indicates that there isn't a realistic way to set up security at the "Work Item Type" level so that users can only enter/update bugs. He suggests it could be achieved by customising every work item definition and setting field rules for every field on every work item type, but this is a lot of work, and in any case I want to avoid customising the process template. I've therefore created an issue on Connect for this:

A: Having recently migrated from VSS to TFS 2010, I haven't looked back. I love the way everything is integrated. Without restating what was said before, some of the great features are:
1. Proper branching & merging
2. AD integration, no more setting up users in VSS
3. Easy to see who has what checked out
4. Easy to see check-in history (great for code reviews)
5. TFS Power Tools add custom check-in policies and a Windows Explorer context menu
6. Work items, tracking, and their association with changesets
7. Inbuilt reporting
8. Team Project Portals – so non-developers can access TFS reports/work item info etc.
9. Speed, it's so much faster than VSS
10. Source is stored in SQL Server and check-in operations are transactional and not file based; no more running VSS clean-up

I found that rather than migrating source code using the migration tool, a fresh check-in was the quickest way, keeping SourceSafe read-only for the odd time I have to refer to the history.

Q. Set up user permissions for Team Foundation Server 2010

We have installed TFS 2010 with success, but wonder how to set the user permissions. We are a small team with five developers, a manager and a secretary. Each developer works by himself on one or more projects; we have no cooperation between any projects. We want everyone to be able to see all the code for each project, but only those who are responsible for the code should be able to change it. However, we want everyone to be able to create Work Items for all projects. How should we set this up?

A: For detailed information about TFS 2010 permissions you can check this. If you want a user to be able to read the source code, you have to give him/her only Read permission; to prevent him from changing code, you have to deny the check-out and check-in permissions. You can set these permissions by right-clicking the folder or file in Source Control Explorer, clicking Properties, and clicking the Security tab. For Work Items you have to give the WORK_ITEM_WRITE and WORK_ITEM_READ permissions. You can do this by right-clicking the project in Team Explorer, clicking Areas and Iterations, and on the Area tab, clicking Security.

Q. Adding a parameter to the "Queue New Build" Dialog

I built a custom build process template based on the DefaultTemplate.xaml and added a few parameters. They show up fine in the Build Definition window, but I cannot find a way to have them displayed on the parameters tab of the Queue New Build dialog. I am hoping that this is possible; I would rather not need to define a separate build for each variation of parameters.
A: You can define that in the Metadata parameter: You can play with the "View the parameter when" option at the bottom.

Q. Branching in TFS 2010 and being forced to re-download the code

When I create a branch from the mainline in TFS 2010, I have to download all of the code I have just branched. I already have the latest mainline version on my laptop, so why is TFS requiring me to effectively download what's already on my hard disk? Even if I copy the mainline files into a folder and map the new branch to this folder, it still performs a fully recursive get and chokes our bandwidth for 30 minutes or so. This seems like such a waste of time and bandwidth – is there a workaround/procedure that I am not aware of?

A: For faster switching between branches, give the tf get /remap option a try. From Brian Harry's blog. Note that this requires TFS 2008 SP1 or later.

Q. How to update current sprint team queries in TFS 2010?

We are using VS 2010 and TFS 2010 with the Microsoft Scrum Template. We use the Team Queries for the Current Sprint, like the Sprint Backlog query. The problem is when we move to sprint 2, the "Current Sprint" queries still point to sprint 1. Is there a way to tell TFS that we are now currently in sprint 2, and have the queries run against a variable instead of hard-coding the sprint? For example: if you look at the screenshot below, you will notice that the definition of the query uses a variable called "@Project" for the team project. Is there a way to have a variable for the sprint?

A: Tom, what you are asking for is not available in TFS 2010. There are not even dates on the iterations, so TFS does not know what the current iteration is. In TFS11 (vNext) we have added the dates on the iterations. It now knows which iteration you are in, and this is also reflected on the backlog page in Web Access.
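Since TFS 2010 itself has no notion of a "current" iteration, the practical workaround is a shared team query whose iteration clause is simply edited by hand at each sprint boundary. In WIQL, the clause in question looks something like the sketch below; the project and iteration path names are illustrative, and only the quoted iteration path needs to change when a new sprint starts.

```sql
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.TeamProject] = @Project
  AND [System.IterationPath] UNDER 'MyProject\Release 1\Sprint 2'
ORDER BY [System.Id]
```

Keeping this in one team query (rather than per-person queries) means the path is edited in exactly one place per sprint.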
In the preview version that is out now, it is however not possible to add a filter clause to your queries to filter on the current iteration (something like @CurrentIteration). We have heard strong feedback to add this in the product before it ships. It is also very high on our wish list, but we need to fix other things first before we can ship. You can add this request on USER VOICE. If the idea gets lots of votes, it makes it easier to build a case that we need to put this in. But we cannot promise anything.

Ewald – TFS Program Manager

Q. Removing branch mapping in Team Foundation Server 2010

I've got a solution in source control with multiple projects. When I first migrated old code to TFS, I created main, dev, and release areas for branches. Being new to TFS at the time, I branched a single project to the dev area, which created the little silver branch icon to show the relationship between them. Having done that, of course, I can no longer branch above or below that spot. So, I can't branch a whole solution. I tried removing the mapping/association of branches so that I could branch from higher in the tree, but can't find a way to do that. I backed up source control and deleted all other associated branches except for the original one in the Main branch, but the association is still there despite having deleted the others (I assume TFS still contains history of those associations to the other branches I created). My question is: How do I safely remove branch associations (the silver branch icon) while keeping history if possible (I don't care about being able to merge anymore), so that I can branch from another area higher or lower in the tree?

A: Evidently, the right-click menu does not have this option due to user feedback. It can be accessed via File > Source Control > Branching and Merging > Convert to Folder. Source: MICROSOFT CONNECT

Q. Email Notifications/Alerts from Builds in TFS 2010

I am having problems getting the team alerts to work in TFS 2010.
Under "Team > Project Alerts", I have checked the box to send both myself and a colleague an email upon a completed build. I know I have entered the correct email addresses, with the correct syntax as far as separating the emails, yet neither I nor my colleague receive any emails when the build is complete. So far, I haven't found anything online regarding troubleshooting this issue. I was wondering if anyone has encountered the same problem or otherwise knows of a solution to this problem.

A: Usually this boils down to a configuration problem within the TFS setup. If you have access to the TFS server, run the TFS Administration Console. If you click on TFS / Application Tier on the left, you'll see the Application Tier settings come up. Scroll down to the Email Alert Settings. Make sure it's enabled and has the correct configuration for sending messages.

Q. Can you do a TFS get without needing a workspace?

I'm trying to change our build scripts from using SourceSafe to TFS without using MSBuild (yet). One hiccup is that the workspace directory is renamed and archived by the scripts, which makes TFS think it doesn't need to get any files. Even with the /force flag it just gets the directories without getting the source files. I am currently using:

TF.exe get "Product/Main/Project1" /force /recursive /noprompt

To save me managing workspaces in the scripts or using intermediate directories, does anyone know of a command that can get directories and code without needing a workspace?

A: It's not possible to run a tf get without a workspace. The reason is that the server needs to know the mapping between the server paths and the local paths. If you are working with a large number of files, it is not a good idea to:
- Create and delete a new workspace every time, or
- Create a new workspace (and then never delete it)

The reason for this is that every time you do a Get, the server keeps track of which files, at which versions, were downloaded to which workspace.
If you never clean up these workspaces, then the table that stores this information will grow over time. Additionally, if you are creating and deleting a workspace all the time, the server has to write all these rows, then delete them when you are done. This is unnecessary. You really should try and reuse the same workspace each time. If you do, the server is very efficient about only sending you files that have changed since you last downloaded them. Even if your build is moving from one branch to another, you can use tf get /remap, which is sometimes more efficient if the branches share common files.

Although it doesn't solve your problem, it is possible to list files and download files without a workspace.

To list files:

tf dir $/Product/Main/Project1 /R

To download a file:

tf view $/Product/Main/Project1/file.cs

With a creative batch file, you can string these two together with a FOR command. However, I would recommend trying to solve your workspace problem first, since that is the way that TFS was intended to be used.

Q. Why and when to have multiple build agents?

Consider TFS 2010's ability for a Build Controller to have 1+ build agents. Since builds are a subjective topic to the team/environment, consider an environment where builds are performed on commit/check-in. Each Project Collection will have 10+ Team Projects, but perhaps only 1 or 2 are being committed to in a day.
1. When should a TFS administrator consider creating a new build agent?
2. Do multiple agents run in parallel?
3. When a single agent is defined to a Build Controller, does it run serially?
4. MSDN STATES: "IF YOU SET UP YOUR AGENTS TO HAVE SPECIALIZED CAPABILITIES…". What does this mean? A technology/platform differentiator? How can you set up your agents to have specialized capabilities? How can 'tagging' build agents be used effectively in an environment where builds are (typically) performed on each check-in?
A: You use multiple build agents to support multiple build machines (I currently work with a build farm with 3 build machines – and thus 3 build agents – to distribute the load). You also might want to have multiple build agents to be able to run builds in parallel. This is a nice feature to share resources, but a requirement when you start working with the Test/Lab Management features.

On the capabilities: for example, you can set up one build agent with version 1 of a third-party component, and a second build agent with version 2. With tagging you can specify in the build definition which build agent it will choose from out of the pool of build agents.

Q. Team Foundation Server: Assign work item to a group instead of an individual user

In TFS 2010, is there a way that I can assign a work item to a group (i.e. Developers or Designers) instead of an individual user? I'd also want to be able to create a query so that we can filter on that group as well.

A: Yes, you can. If your group is a member of the larger group that can be assigned to, then it will appear in the list of assignable users. For example, a user hierarchy might be like this:

[Assignable Users]
  [Developers]
  [Project Managers]
  Mark Avenius
  Joe Schmoe

EDIT: As for the query, you can have the clause Assigned To contains @Me, which I believe will do what you want.

Q. How can this be achieved with Team Build 2010?

A: This blog post should help you out: Essentially, you create a new 'Platform' for each project. Team Build will put each platform in a different directory by default, so you get a different directory for each of your projects.

Build configuration dialog: (screenshot)
Drop folder output: (screenshot)

Q. Multiple copies of a solution on one user/machine with TFS 2010

Is there a way to pull two copies of a single solution from TFS 2010 for the same user/machine?
https://mindmajix.com/tfs-interview-questions